History tables on 8.8

Guys,
We've got various add-on solutions, and reports, that examine the history tables for various B1 objects
(documents, Business Partners, etc).
Up until 8.8, the history tables always stored the current revision of a record as well as the historical
revisions. In 8.8, it seems that the current revision is no longer stored in the history tables - the highest
revision stored is the one previous to the current state.
This has quite an impact on our add-ons and reports - does anyone know if this behaviour is by design, or a bug in 8.8?

For example, if you create a Sales Order under 2005 or 2007, the ordr table is populated with the sales order and its values. A record is also created in the adoc table with a loginstanc of 1, holding those same (current) values. Now edit the SO (by altering the comments/remarks field, for instance). What happens now is that the record in the ordr table reflects your change, as you would expect. In the adoc table, a new record with a loginstanc of 2 is created containing these current values, so you now have two records in adoc: the first being the original data, the second being the current data.
Now, do the same in 8.8 (I have PL06 installed). When you create the SO, the ordr table is populated as you would expect. However, no record is created in adoc at all. When you edit the SO, a record is created in adoc with a loginstanc of 1 which contains the original data.
Now, we have add-ons and reports that work on the historical data (not just for documents, but for other objects too) that rely on the behaviour shown in 2005/2007, as it means we don't have to link back to the other tables to get the current data - which is a big bonus when working with adoc, because otherwise you have to join depending on the value of objtype.
If this has changed by design in 8.8 it's going to cause us quite a bit of pain.
I can't find any info on this in the 8.8 documents, nor in the list of changes in the database/SDK for 8.8.
I just want to know if this is by design, or it's a bug in the ramp-up versions of 8.8.
I'm also not sure what's going to happen when we upgrade a 2007 DB to 8.8 - will it strip out the current revisions from the history tables? I might try that today.
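The behavioural difference described above can be sketched with a toy model. This is only an illustration: SQLite stands in for the B1 database, the ORDR/ADOC schemas are trimmed to two or three columns, and all data is invented.

```python
import sqlite3

# Minimal stand-ins for ORDR (current document) and ADOC (history) from the post.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ordr (docentry INTEGER, comments TEXT)")
con.execute("CREATE TABLE adoc (docentry INTEGER, loginstanc INTEGER, comments TEXT)")

# 2005/2007 behaviour: every revision, including the current one, lands in adoc.
con.execute("INSERT INTO ordr VALUES (1, 'edited')")
con.execute("INSERT INTO adoc VALUES (1, 1, 'original')")
con.execute("INSERT INTO adoc VALUES (1, 2, 'edited')")   # current copy also stored

# A report can therefore read the latest state from adoc alone, no join to ordr:
row_2007 = con.execute(
    "SELECT comments FROM adoc WHERE docentry = 1 "
    "ORDER BY loginstanc DESC LIMIT 1").fetchone()
print(row_2007[0])  # 'edited' - matches ordr

# 8.8 behaviour as described: only the superseded revision is written to adoc.
con.execute("DELETE FROM adoc WHERE loginstanc = 2")
row_88 = con.execute(
    "SELECT comments FROM adoc WHERE docentry = 1 "
    "ORDER BY loginstanc DESC LIMIT 1").fetchone()
print(row_88[0])  # 'original' - stale; the report now has to join back to ordr
```

This is exactly why the add-ons break: any "latest row in adoc" query silently returns the previous revision instead of the current one.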

Similar Messages

  • How history tables like MBEWH applied to Inventory cube 0IC_C03

    Hi BW Gurus,
Need your advice on this. How can we link history tables like MBEWH, MARCH, MCHBH, MKOLH, and MSKUH with the cube? Should they be created as master data, or should we create a customer exit for the 2LIS_03_BF DataSource?
My main concern is how the month/year data in these history tables can be mapped to the transaction cube.
    Appreciate your help. Thanks.
    Regards,
    Jeff ([email protected])

Hi Ramanjaneyulu,
Follow these steps (PDF file at the end).
    1. Activation:
    Activate the extract structure MC03BF0 for the corresponding DataSources
    2LIS_03_BX/2LIS_03_BF in the LO cockpit (transaction code in R/3: LBWE).
    2. Initialization:
    Initialization of the current stock (opening balance) in the R/3 source system with
    DataSource 2LIS_03_BX (transaction code in R/3: MCNB)
    Data is added to the setup table MC03BX0SETUP.
    3. Setup of statistical data:
You can find the setup of statistical data under transaction SBIW in the R/3 system, or use transaction code OLI1BW for material movements and OLIZBW for revaluations.
    4. Loading the opening stock balance (2LIS_03_BX):
    In the InfoPackage, choose the upload mode “Create opening balance”.
    5. Compressing the request:
Compress the request containing the opening stock that was just uploaded. Make sure the "No marker update" indicator is not set. Please consider SAP Note 643687 very carefully before you carry out the compression of requests in stock InfoCubes.
    6. Loading the historical movements:
    Choose the upload mode "Initializing the delta process".
    7. Compress the request:
After successfully uploading the historical material movements, the associated request has to be compressed. You must make sure the "No marker update" indicator is set.
    8. Start the unserialized V3 update or queued delta control run:
Transaction code: SBIW. Data is written from the extraction queue or central update table into the delta queue (transaction code in R/3: RSA7). Data is now available for extraction by the BI system.
    9. Start delta uploads:
    Successive (for example, daily) delta uploads can be started with the DataSource
    2LIS_03_BF from this point in time on.
PDF on Inventory:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
    Thanks,
    kiran

  • PO History tables.

    Hi Guys,
    Can anyone please tell me the table names for PO history information?
I want to join tables to create a report; transaction ME80FN has the report I need, but split across three screens, and I want to combine them.
    Thanks in advance.

    Hello SRao,
    PO history table is EKBE.
There are many standard reports available in Purchasing. I am just wondering why you need to create a new report.
    Regards
    Arif Mansuri

Purchase Requisition History Table

Can anyone tell me the name of the table in which the revision history of a Purchase Requisition is maintained?
Like for a PO you have EKBE.
But for a PR, which is the revision history table?

Hi, try this way...
SELECT COUNT(*) FROM cdhdr INTO t_output-w_poc
  WHERE objectclas IN ('EINKBELEG', 'BANF', 'COND_A', 'VERKBELEG',
                       'INFOSATZ', 'MM_SERVICE', 'VBEX')
    AND udate IN r_date
    AND tcode IN ('ME22', 'ME22N', 'ME23N').
IF sy-subrc EQ 0.
  " Join with table CDPOS where the table name is EBAN
ENDIF.
    Prabhudas
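The header/item change-document pattern the snippet relies on (CDHDR holds one header per change, CDPOS the field-level detail) can be sketched generically. The schemas below are simplified stand-ins, not the real SAP table definitions, and the sample data is invented.

```python
import sqlite3

# Simplified change-document tables: cdhdr = change headers, cdpos = field detail.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE cdhdr
               (objectclas TEXT, objectid TEXT, changenr TEXT, udate TEXT, tcode TEXT)""")
con.execute("""CREATE TABLE cdpos
               (changenr TEXT, tabname TEXT, fname TEXT, value_old TEXT, value_new TEXT)""")

# One invented change to a purchase requisition (object class BANF, item table EBAN).
con.execute("INSERT INTO cdhdr VALUES ('BANF', '0010000001', '001', '2024-01-15', 'ME52N')")
con.execute("INSERT INTO cdpos VALUES ('001', 'EBAN', 'MENGE', '10', '25')")

# Header/item join: which PR fields changed, when, and via which transaction.
rows = con.execute("""
    SELECT h.objectid, h.udate, h.tcode, p.fname, p.value_old, p.value_new
      FROM cdhdr h
      JOIN cdpos p ON p.changenr = h.changenr
     WHERE h.objectclas = 'BANF' AND p.tabname = 'EBAN'
""").fetchall()
print(rows)
```

The real tables join on OBJECTCLAS, OBJECTID, and CHANGENR together; the single-column join here is a simplification for the toy data.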

  • Moving Data from Normal table to History tables

    Hi All,
    I'm in the process of moving data from normal tables to
    History tables.
It should be some sort of procedure run as a cron job at night.
My aim is to move data that is 1.5 to 2 years old to the history tables.
What aspects do I need to check when moving the data, and how can I write a procedure for this requirement?
The schema is the same in both the normal table and the history table.
It has to be a procedure driven by a particular field, RCRE_DT.
If RCRE_DT is more than 2 years old, the data needs to be moved to HIS_<table>.
I have to insert the record into the HIS_ table and simultaneously delete it from the normal table.
This is a production system and the tables are quite big.
Please find attached the sample schema for the normal table and HIS_<table>.
If I want to automate this script as a cron job for the other history tables as well, how do I do it in a single procedure, assuming the logic for moving the data is the same?
    Thanks for ur help in advance.
    SQL> DESC PXM_FLT;
    Name Null? Type
    RCRE_USER_ID NOT NULL VARCHAR2(15)
    RCRE_DT NOT NULL DATE
    LCHG_USER_ID VARCHAR2(15)
    LCHG_DT DATE
    AIRLINE_CD NOT NULL VARCHAR2(5)
    REF_ID NOT NULL VARCHAR2(12)
    BATCH_DT NOT NULL DATE
    CPY_NO NOT NULL NUMBER(2)
    ACCRUAL_STATUS NOT NULL VARCHAR2(1)
    FLT_DT NOT NULL DATE
    OPERATING_CARRIER_CD NOT NULL VARCHAR2(3)
    OPERATING_FLT_NO NOT NULL NUMBER(4)
    MKTING_CARRIER_CD VARCHAR2(3)
    MKTING_FLT_NO NUMBER(4)
    BOARD_PT NOT NULL VARCHAR2(5)
    OFF_PT NOT NULL VARCHAR2(5)
    AIR_CD_SHARE_IND VARCHAR2(1)
    UPLOAD_ERR_CD VARCHAR2(5)
    MID_PT1 VARCHAR2(5)
    MID_PT2 VARCHAR2(5)
    MID_PT3 VARCHAR2(5)
    MID_PT4 VARCHAR2(5)
    MID_PT5 VARCHAR2(5)
    PAX_TYPE VARCHAR2(3)
    PAY_PRINCIPLE VARCHAR2(1)
    SQL> DESC HIS_PXM_FLT;
    Name Null? Type
    RCRE_USER_ID NOT NULL VARCHAR2(15)
    RCRE_DT NOT NULL DATE
    LCHG_USER_ID VARCHAR2(15)
    LCHG_DT DATE
    AIRLINE_CD NOT NULL VARCHAR2(5)
    REF_ID NOT NULL VARCHAR2(12)
    BATCH_DT NOT NULL DATE
    CPY_NO NOT NULL NUMBER(2)
    ACCRUAL_STATUS NOT NULL VARCHAR2(1)
    FLT_DT NOT NULL DATE
    OPERATING_CARRIER_CD NOT NULL VARCHAR2(3)
    OPERATING_FLT_NO NOT NULL NUMBER(4)
    MKTING_CARRIER_CD VARCHAR2(3)
    MKTING_FLT_NO NUMBER(4)
    BOARD_PT NOT NULL VARCHAR2(5)
    OFF_PT NOT NULL VARCHAR2(5)
    AIR_CD_SHARE_IND VARCHAR2(1)
    UPLOAD_ERR_CD VARCHAR2(5)
    MID_PT1 VARCHAR2(5)
    MID_PT2 VARCHAR2(5)
    MID_PT3 VARCHAR2(5)
    MID_PT4 VARCHAR2(5)
    MID_PT5 VARCHAR2(5)
    PAX_TYPE VARCHAR2(3)
    PAY_PRINCIPLE VARCHAR2(1)
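The insert-then-delete requirement above is really one atomic operation: the copy into HIS_<table> and the delete from the live table must commit or fail together. A minimal sketch, using SQLite as a stand-in for the real database and a schema trimmed to three of the PXM_FLT columns:

```python
import sqlite3
from datetime import datetime, timedelta

def archive_old_rows(con, table, cutoff):
    """Move rows older than `cutoff` from `table` to HIS_<table>.

    `table` must come from a trusted, hard-coded list (it is interpolated
    into SQL), which also makes the same function reusable per history table.
    """
    hist = f"HIS_{table}"
    with con:  # one transaction: the insert and delete commit together
        con.execute(f"INSERT INTO {hist} SELECT * FROM {table} WHERE RCRE_DT < ?",
                    (cutoff,))
        cur = con.execute(f"DELETE FROM {table} WHERE RCRE_DT < ?", (cutoff,))
    return cur.rowcount  # number of rows archived

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE PXM_FLT (RCRE_USER_ID TEXT, RCRE_DT TEXT, REF_ID TEXT)")
con.execute("CREATE TABLE HIS_PXM_FLT (RCRE_USER_ID TEXT, RCRE_DT TEXT, REF_ID TEXT)")
con.execute("INSERT INTO PXM_FLT VALUES ('u1', '2020-01-01', 'A')")  # old row
con.execute("INSERT INTO PXM_FLT VALUES ('u2', '2024-06-01', 'B')")  # recent row

# Rows more than ~2 years (730 days) before a fixed reference date get archived.
cutoff = (datetime(2024, 1, 1) - timedelta(days=730)).strftime("%Y-%m-%d")
moved = archive_old_rows(con, "PXM_FLT", cutoff)
print(moved)  # 1
```

On a production-sized Oracle table you would additionally batch the work (commit every N rows) and run it in a quiet window; the sketch only shows the correctness core: same predicate for both statements, inside one transaction.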

    Hi All,
Thanks for your valuable suggestions, but can you explain this a bit more? I'm still confused about switching between partitioned tables and a temporary table. Suppose I have a table called PXM_FLT and a corresponding similar table named HIS_PXM_FLT. How should I do the partitioning: on the normal table or on HIS_PXM_FLT? I do have a date field to partition on by range. Can you please explain why I should create a temporary table again, and what its purpose is? The application is designed so that old records have to be moved to HIS_PXM_FLT; can you please elaborate on this? Your suggestions are greatly appreciated. As I'm relatively new to this partitioning technique I'm a bit confused about how it works, but I've come to understand that partitioning is a better operation than normal inserts or deletes for a data-intensive job where millions of records need to be moved. Thanks for the feedback and your precious time.

  • Big history table based on SCD2

    Ola,
Has anyone of you ever experienced difficulties running millions of records through an SCD2 mapping? I created a couple of history tables and created mappings to fill them, based on the SCD2 method. When I tested the mappings with a subset of data (10,000-40,000 rows), it all went OK.
Now I test the mappings with the full data set and three of the largest tables fail. The largest table is 23.7 million records; the smallest of the failing ones is 17.8 million. I get an error like this:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "OWNER_OWR.WB_RT_MAPAUDIT", line 1762
ORA-06512: at "OWNER_OWR.WB_RT_MAPAUDIT", line 2651
ORA-06512: at "OWNER_LCN.E_ASL_SERVICE_HIST_LASH", line 3159
ORA-01555: snapshot too old: rollback segment number 2 with name "_SYSSMU2$" too small
ORA-06512: at "OWNER_LCN.E_ASL_SERVICE_HIST_LASH", line 3680
ORA-06512: at "OWNER_LCN.E_ASL_SERVICE_HIST_LASH", line 4229
ORA-06512: at line 1
    Only thing I changed is the dataset. Nothing else...
    Does the 'snapshot too old' message mean that it takes too much time to finish the mapping process?
    Anyone any clue?
    Kind regards,
    Moscowic

Though I haven't faced this error while using OWB, I've seen it on other occasions when the transaction was very large, or when the UNDO segment was too small and couldn't hold the rollback information for the transaction.
Check a couple of things:
1. If you are running the mapping in set-based mode with large data volumes, you are likely to hit this error.
2. Ask your DBA to look into the UNDO segment/tablespace, in case there are any sizing or fragmentation issues.
    Hope this helps.
    Nat

  • Change History Table in CRM

    Hello All,
    Please let me know the transaction change history table in CRM.
    Kindly help.
    Regards
    DJ

    Hi,
      Tables for change document history are
    CDHDR
    CDPOS_STR
    CDPOS_UID
    Regards
    Srinu

  • Number of Visible rows on the "Leave History" Table

    Dear SDN,
Please, how can I set the height of the "Leave History" table in the PZ54 ESS transaction?
The user is requesting more visible rows, but the table only displays 2 rows, and I can't find on SDN how to set this table property.
    --> This transaction can be accessed through the Portal, under ESS -> Leave -> Leave Application / Leave History, which is an ITS for the PZ54 transaction.
    Thanks in advance,
    Fabio

This issue was addressed in the thread "ITS iView height is compressed at first run".

  • Best practices for creating and querying a history table?

    Suppose I have a table of name-value pairs, and I want to keep track of changes to them so that I can query the value of any pair at any point in time.
    A direct approach would be to use a schema like this:
CREATE TABLE NAME_VALUE_HISTORY (
  NAME     VARCHAR2(...),
  VALUE    VARCHAR2(...),
  MODIFIED DATE
);
When a name-value pair is updated, a new row is added to this table with the date of the change.
    To determine the value associated with a name at a particular point in time, one uses a query like:
  SELECT * FROM NAME_VALUE_HISTORY
  WHERE NAME = :name
    AND MODIFIED IN (SELECT MAX(MODIFIED)
                     FROM NAME_VALUE_HISTORY
                     WHERE NAME = :name AND MODIFIED <= :time)
My question is: is there a better way to accomplish this? What indexes/hints would you recommend?
    What about a two-table approach like this one? http://pratchev.blogspot.com/2007/05/keeping-history-data-in-sql-server.html
    Edited by: user10936714 on Aug 9, 2012 8:35 AM
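The single-table design from the question can be exercised end to end. This sketch uses SQLite and invented data; the composite index on (name, modified) is the usual companion so the point-in-time lookup stays an index probe rather than a scan, and ORDER BY ... LIMIT 1 expresses the same "latest revision at or before :time" logic as the MAX(MODIFIED) subquery.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE name_value_history (name TEXT, value TEXT, modified TEXT)")
con.execute("CREATE INDEX nvh_ix ON name_value_history (name, modified)")

# Two revisions of the same setting, inserted as the question describes.
con.execute("INSERT INTO name_value_history VALUES ('timeout', '30', '2024-01-01')")
con.execute("INSERT INTO name_value_history VALUES ('timeout', '60', '2024-03-01')")

def value_at(con, name, at):
    # Latest revision at or before the requested point in time.
    row = con.execute("""
        SELECT value FROM name_value_history
         WHERE name = ? AND modified <= ?
         ORDER BY modified DESC LIMIT 1""", (name, at)).fetchone()
    return row[0] if row else None

print(value_at(con, 'timeout', '2024-02-01'))  # 30  (before the update)
print(value_at(con, 'timeout', '2024-04-01'))  # 60  (after it)
```

Note the degenerate case the question's query shares: a name with no revision before :time simply returns nothing, which the caller has to handle.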

user10936714 wrote:
There is one advantage... recording the change of a value is just one insert, and it is also atomic without the use of transactions.
At the risk of being dumb, why is that an advantage? Oracle always and everywhere uses transactions, so it's not like you're avoiding some overhead by not using transactions.
If, for instance, the performance of reading the value of a name at a point in time is not important, then you can get by with just using one table - the history table.
If you're not overly concerned with the performance implications of having the current data and the history data in the same table, then rather than rolling your own solution, I'd be strongly tempted to use Workspace Manager to let Oracle keep track of the changes.
    You can create a table, enable versioning, and do whatever DML operations you'd like
    SQL> create table address(
      2    address_id number primary key,
      3    address    varchar2(100)
      4  );
    Table created.
    SQL> exec dbms_wm.enableVersioning( 'ADDRESS', 'VIEW_WO_OVERWRITE' );
    PL/SQL procedure successfully completed.
    SQL> insert into address values( 1, 'First Address' );
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> update address
      2     set address = 'Second Address'
      3   where address_id = 1;
    1 row updated.
    SQL> commit;
Commit complete.
Then you can either query the history view
    SQL> ed
    Wrote file afiedt.buf
      1  select address_id, address, wm_createtime
      2*   from address_hist
    SQL> /
    ADDRESS_ID ADDRESS                        WM_CREATETIME
             1 First Address                  09-AUG-12 01.48.58.566000 PM -04:00
         1 Second Address                 09-AUG-12 01.49.17.259000 PM -04:00
Or, even cooler, you can go back to an arbitrary point in time, run a query, and see the historical information. I can go back to a point between the time that I committed the first change and the second change, query the ADDRESS view, and see the old data. This is invaluable if you want to take existing queries and/or reports and run them as of certain dates in the past when you're trying to debug a problem.
    SQL> select *
      2    from address;
    ADDRESS_ID ADDRESS
         1 First Address
You can also do things like set savepoints, which are basically named points in time that you can go back to. That lets you do things like create a savepoint for the data as soon as month-end processing is completed, so you can easily go back to "July Month End" without needing to figure out exactly what time that occurred. And you can have multiple workspaces, so different users can be working on completely different sets of changes simultaneously without interfering with each other. This was actually why Workspace Manager was originally created: to allow users manipulating spatial data to have extremely long-running transactions that could span days or months, and to be able to switch back and forth between the current live data and the data in each of these long-running scenarios.
    Justin

  • Cost to change hash partition key column in a history table

    Hi All,
    I have the following scenario.
We have a history table in production which has 16 hash partitions on the basis of key_column.
However, the data in the history table has only 878 distinct values of key_column, about 1,000 million rows, and all partitions are in the same tablespace.
    Now we have a Pro*C module which purges data from this history table in the following way..
    > DELETE FROM hsitory_tab
    > WHERE p_date < (TO_DATE(sysdate+1, 'YYYYMMDD') - 210)
    > AND t_date < (TO_DATE(sysdate+1, 'YYYYMMDD') - 210)
    > AND ROWNUM <= 210;
Now (p_date and t_date are two of the date columns in the history table) data is deleted using these two date-column conditions, but the partition key_column is different.
So as per the above statement this history table retains about 7 months (210 days) of data.
The DBA is asking to change this query to delete partition-wise, by date. Would it be appropriate to change the partition key_column (the existing hash partition key_column has about 810 distinct values), and what do we need to consider to estimate the cost of changing it? I hope I explained my problem clearly; waiting for your suggestions.
    Thanks in advance.

Hi Sir,
Many thanks for the reply.
On the first point: we plan to move the database to 10g, after a lot of hassle with the client.
On the second point: if we partition by date or by week we will have 30 or 7 partitions. As you suggested, since we have 16 partitions in the table, the best approach would be to partition by week; then we will have 7 partitions and each query will hit 7 partitions.
On the third point: our main aim is to reduce the run time of a job (a Pro*C program) which contains the following delete query against the history table. According to the query it deletes data every day, keeping 7 months, and while deleting it queries this huge table by date. So in this case, which will be more suitable: hash partitioning, range partitioning, or composite hash/range partitioning?
DELETE FROM hsitory_tab
WHERE p_date < (TO_DATE(sysdate+1, 'YYYYMMDD') - 210)
AND t_date < (TO_DATE(sysdate+1, 'YYYYMMDD') - 210)
AND ROWNUM <= 210;
I have read that hash partitioning is used so that data is evenly distributed across all partitions (though it depends on the nature of the data). In my case I'd like some suggestions from you on the best approach.

  • Create History Table from Main

    I'm seeking help from the experts because I'm far from an expert and I have been unsuccessful at figuring this out on my own.
    So far I've created a history table which is to keep all our data history from our main table. It is almost the same table, but with a few added columns to better keep records. The purpose of creating the table is to keep from having so much data in our main table so it will cut down on data query times.
    This, to me, is a very complicated SQL statement. I'm trying to use an INSERT statement to do this. But I'm updating the history table from multiple tables. An example of what I'm trying is:
INSERT INTO history_table a (column1, column2, column3, etc.)
SELECT b.column1, b.column2, c.title || ' ' || c.l_name || ', ' || c.f_name, b.column4, etc.
FROM main_table b, code_id_table c
WHERE b.column3 = c.column1;
The problem is when I encounter NULL values in table c. Since I'm concatenating a few columns into one, if any of those columns is NULL I get an error, since it doesn't know what data to pull. I want it to just put NULL values where it's pulling NULL values. I'm pretty sure it would work with an IF-THEN construct, but I'm not sure exactly how to handle it, or whether I'm even going at this with the right approach. The goal is to create a history table with a little more information than the main table, so we have to pull information from multiple tables into the history table and get around the NULL values.
    I hope I've made sense with any of this. If someone has some ideas, advice, or examples for me I'd greatly appreciate it. Thanks for your time.
    -FC1
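The NULL issue in the INSERT ... SELECT above can be sketched with COALESCE, which substitutes a chosen default per column. This is a generic illustration (SQLite stand-in, invented table contents, and the thread doesn't show the exact error FC1 hit), not the thread's accepted answer.

```python
import sqlite3

# Toy version of code_id_table with one complete row and one sparse row.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE code_id_table (id INTEGER, title TEXT, l_name TEXT, f_name TEXT)")
con.execute("INSERT INTO code_id_table VALUES (1, 'Dr.', 'Smith', 'Jane')")
con.execute("INSERT INTO code_id_table VALUES (2, NULL, 'Jones', NULL)")  # NULLs present

# COALESCE turns each possibly-NULL column into '' before concatenating,
# so the sparse row still produces a usable (if plain) string.
rows = con.execute("""
    SELECT COALESCE(title, '') || ' ' || COALESCE(l_name, '')
           || ', ' || COALESCE(f_name, '') AS full_name
      FROM code_id_table ORDER BY id""").fetchall()
print(rows)  # [('Dr. Smith, Jane',), (' Jones, ',)]
```

In Oracle the equivalent per-column wrapper is NVL(col, ' ') or COALESCE; the same SELECT list then drops straight into the INSERT ... SELECT.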

    Oh I apologize. I'm running version 8i at the moment but will be upgrading to 10g within the next few weeks.
    -FC1

  • DB Adapter inserting data in to regular tables and LOG / History tables

I have a requirement to read an XML file using the File Adapter and insert its data into the database.
    I'm using File Adapter --> Mediator --> DB Adapter
When I insert the data, I have to insert the same data into both the regular tables and the LOG/History tables.
There is a foreign key relationship between the regular table and the history table. How do I insert into both tables (regular and LOG/History)?
    Thanks in advance

While configuring the adapter, you need to create the relationships between the two tables; when importing the tables, import both of them...
    http://docs.oracle.com/cd/E23943_01/integration.1111/e10231/adptr_db.htm#BDCGCCJB
    go to 9.2.7 Creating Relationships.
    HTH
    N

  • History Table/History Tables?

    Hi All
    I have 11 tables of which 1 is a main table, 9 tables are referencing the main table and the other table is referencing one of the 9 tables.
    E.G. Table names:
    A,B,C,D,E,F,G,H,I,J,K
    A is the main table
    B,C,D,E,F,G,H,I and J are referencing A
K is referencing J.
I have to keep history of any updates made on all tables. What will be the best practice for keeping history: should I create a history table for each table, or one huge history table? If it's one history table, how should I go about it?
    Thanks in advance
    Kind regards
    Mel

    Multiple history tables.
    One history table will result in massive buffer busy waits and corresponding concurrency/scalability issues.
Unless this is a 1-user application, of course.
    Been there, done that. It was not funny.
    Sybrand Bakker
    Senior Oracle DBA

  • History tables and audit columns?

I need suggestions: is it best practice to create a history table for every table, and audit columns in every table as well?
    Kind Regards
    Abbas

I agree with Christian. I'd also add that you do NOT want to implement this as a Forms solution; this should be done within the database and be transparent to Forms. There is also standard Oracle audit functionality that can be configured and used.

  • How to insert past record after updating the master table in history table through store PROC

    Master Table
    Party Status
    A Active
    B Inactive
    C Active
    D Inactive
    Duplicate Table
    Party Status
    A Active
    B Active
    C Active
    D Inactive
    Updated Master Table
    Party Status
    A Active
    B Active
    C Active
    D Inactive
    Party History Table
    B Inactive
I have two tables, one master and one duplicate. I need to update the master table based on the duplicate table and insert the records that were updated into the Party History table, as shown above. I need help writing the stored procedure.

Check the MERGE syntax in BOL (example D). There should be a sample with OUTPUT, e.g.:
INSERT INTO PartyHistory (Party, [Status])
SELECT Party, [Status]
FROM (
    MERGE Master M
    USING Duplicate D
       ON M.[Party] = D.[Party] AND M.[Status] <> D.[Status]
    WHEN MATCHED THEN UPDATE SET [Status] = D.[Status]
    OUTPUT Deleted.[Party], Deleted.[Status], $action
) AS Changes (Party, [Status], Action)
WHERE Action = 'UPDATE';
    For every expert, there is an equal and opposite expert. - Becker's Law
    My blog
    My TechNet articles
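The reply's MERGE ... OUTPUT captures the pre-update values in a single statement. Engines without that feature can emulate the same effect with two statements inside one transaction: first copy the about-to-change rows into the history table, then apply the update. A sketch with SQLite stand-ins and the thread's sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Master       (Party TEXT PRIMARY KEY, Status TEXT);
    CREATE TABLE Duplicate    (Party TEXT PRIMARY KEY, Status TEXT);
    CREATE TABLE PartyHistory (Party TEXT, Status TEXT);
    INSERT INTO Master    VALUES ('A','Active'),('B','Inactive'),('C','Active'),('D','Inactive');
    INSERT INTO Duplicate VALUES ('A','Active'),('B','Active'),('C','Active'),('D','Inactive');
""")

with con:  # both steps commit together, like the single MERGE statement
    # Step 1: old values of the rows about to change go to history first.
    con.execute("""
        INSERT INTO PartyHistory
        SELECT m.Party, m.Status FROM Master m
          JOIN Duplicate d ON d.Party = m.Party AND d.Status <> m.Status""")
    # Step 2: then sync Master from Duplicate for exactly those rows.
    con.execute("""
        UPDATE Master SET Status = (SELECT Status FROM Duplicate d
                                     WHERE d.Party = Master.Party)
         WHERE EXISTS (SELECT 1 FROM Duplicate d
                        WHERE d.Party = Master.Party AND d.Status <> Master.Status)""")

print(con.execute("SELECT * FROM PartyHistory").fetchall())  # only B changed
print(con.execute("SELECT Status FROM Master WHERE Party='B'").fetchone())
```

The ordering matters: history is written from Master *before* the update, which is exactly what the Deleted.* columns in the OUTPUT clause provide.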
