Equipment History Table

Hi Experts,
I want to know the table where the equipment history details (installation / dismantling, etc.) are stored.
I would also appreciate it if you could tell me the field where the equipment installation date is stored.
Regards,
Thushantha.

Hi,
Let's take the EQUI table as an example.
Suppose we have the two key fields MANDT (100) and EQUNR (000000000000011010).
For the query against the CDHDR table we have to concatenate the key field values:
OBJECTCLAS --> 'EQUI' (the change-document object class for equipment)
OBJECTID --> concatenation of MANDT and EQUNR (100000000000000011010)
You can restrict the selection further with the additional fields - user name, date and time - as you know them.
Hope this helps.
Regards,
Kunjal
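To make this concrete: in CDHDR, OBJECTCLAS holds the change-document object class ('EQUI' for equipment) and OBJECTID holds the concatenated key. A minimal ABAP sketch (equipment number hardcoded purely for illustration):

```abap
* Sketch: read change document headers for one equipment.
DATA: lv_objectid TYPE cdhdr-objectid,
      lt_cdhdr    TYPE STANDARD TABLE OF cdhdr.

CONCATENATE sy-mandt '000000000000011010' INTO lv_objectid.

SELECT * FROM cdhdr
  INTO TABLE lt_cdhdr
  WHERE objectclas = 'EQUI'
    AND objectid   = lv_objectid.

* The changed fields themselves (old/new values) are in CDPOS,
* keyed by the same OBJECTCLAS/OBJECTID plus CHANGENR.
```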

Similar Messages

  • Equipment History report (improve performance)

    Hi all,
    My requirement is to pull the equipment history report based on the valid-from date field and user status.
    I also have these selection screen parameters:
    Valid-from date field v_equi-datab
    Functional location v_equi-tplnr
    Technician ID v_equi-tidnr
    Partner ihpa-parnr
    My problem is that fetching data from v_equi is taking a long time, as I'm not using any key field or hitting an index on the tables.
    Is there a better way to pull the history data based on the valid-from date?
    Please let me know if you need more info.

    I have coded it that way, just using DATAB in the WHERE condition.
    I wanted to know if there is any better way to pull the data.
    Would it help to create an index on the DATAB field?
    Thanks
    Vijay
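    In case it helps: since a bare DATAB range gives no selective access path, one common option is to let a selective criterion such as the functional location drive the access and apply the date filter on the reduced set. A minimal ABAP sketch (selection variables assumed from the post; whether this helps depends entirely on how selective TPLNR is in your data):

    ```abap
    * Sketch: drive the access by functional location, then filter by valid-from date.
    DATA lt_equi TYPE STANDARD TABLE OF v_equi.

    SELECT equnr datab datbi tplnr
      FROM v_equi
      INTO CORRESPONDING FIELDS OF TABLE lt_equi
      WHERE tplnr IN s_tplnr    " selective criterion first
        AND datab IN s_datab.   " date filter on the reduced set
    ```

    An index on DATAB alone rarely helps here, because a bare date range typically matches too many rows for index access to pay off.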

  • How history tables like MBEWH are applied to Inventory cube 0IC_C03

    Hi BW Gurus,
    Need your advice on this. How can we actually link history tables like MBEWH, MARCH, MCHBH, MKOLH, MSKUH with the cube? Should they be created as master data, or should we create a customer exit on the 2LIS_03_BF DataSource?
    My concern is how the month/year data in these history tables can be mapped to the transaction cube.
    Appreciate your help. Thanks.
    Regards,
    Jeff ([email protected])

    Hi,
    Follow these steps (PDF file at the end):
    1. Activation:
    Activate the extract structure MC03BF0 for the corresponding DataSources
    2LIS_03_BX/2LIS_03_BF in the LO cockpit (transaction code in R/3: LBWE).
    2. Initialization:
    Initialization of the current stock (opening balance) in the R/3 source system with
    DataSource 2LIS_03_BX (transaction code in R/3: MCNB)
    Data is added to the setup table MC03BX0SETUP.
    3. Setup of statistical data:
    You can find the setup of statistical data under transaction SBIW in the R/3 system, or use transaction code OLI1BW for material movements and transaction code OLIZBW for revaluations.
    4. Loading the opening stock balance (2LIS_03_BX):
    In the InfoPackage, choose the upload mode “Create opening balance”.
    5. Compressing the request:
    Compress the request containing the opening stock that was just uploaded. Make sure the "No marker update" indicator is not set. Please consider SAP Note 643687 very carefully before you carry out the compression of requests in stock InfoCubes.
    6. Loading the historical movements:
    Choose the upload mode "Initializing the delta process".
    7. Compress the request:
    After successfully uploading the historical material movements, the associated request has to be compressed. You must make sure the "No marker update"
    indicator is set.
    8. Start the unserialized V3 update or queued delta control run:
    Transaction code: SBIW. Data is written from the extraction queue or central update table into the delta queue (transaction code in R/3: RSA7). Data is now available for extraction for BI system.
    9. Start delta uploads:
    Successive (for example, daily) delta uploads can be started with the DataSource
    2LIS_03_BF from this point in time on.
    PDF on inventory:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
    Thanks,
    kiran

  • PO History tables.

    Hi Guys,
    Can anyone please tell me the table names for PO history information?
    I want to join tables to create a report: the ME80FN transaction has the report I need spread across three screens, and I want to combine them all.
    Thanks in advance.

    Hello SRao,
    The PO history table is EKBE.
    There are many standard reports available in Purchasing; I am just wondering why you need to create a new report.
    Regards
    Arif Mansuri
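    For reference, a hedged sketch of the join behind such a report (the field list is illustrative, not complete; EKKO = PO header, EKPO = PO item, EKBE = PO history; s_ebeln is an assumed select-option):

    ```abap
    * Sketch: PO header, items and history records in one selection.
    TYPES: BEGIN OF ty_po_hist,
             ebeln TYPE ekbe-ebeln,
             ebelp TYPE ekbe-ebelp,
             vgabe TYPE ekbe-vgabe,  " transaction/event type (GR, IR, ...)
             belnr TYPE ekbe-belnr,
             menge TYPE ekbe-menge,
           END OF ty_po_hist.
    DATA lt_po_hist TYPE STANDARD TABLE OF ty_po_hist.

    SELECT b~ebeln b~ebelp b~vgabe b~belnr b~menge
      FROM ekko AS k
      INNER JOIN ekpo AS p ON p~ebeln = k~ebeln
      INNER JOIN ekbe AS b ON b~ebeln = p~ebeln
                          AND b~ebelp = p~ebelp
      INTO TABLE lt_po_hist
      WHERE k~ebeln IN s_ebeln.
    ```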

  • Z-Report for Equipment History

    Hello Experts,
    I have a requirement from my client: based on their legacy system, they would like to have an equipment history report comprising both notifications and orders, selected by equipment.
    The output should be the list of notifications and orders with their category, duration, list of operations and components issued (in orders), and reported-by fields.
    Please help
    and thanks in advance
    Regards,
    Yawar Khan

    Hi,
    The report details can be seen in standard SAP. In the menu options after executing IW38 or IW39, etc., you have the option of selecting multi-level reports, goods movement reports, and so on.
    Since you are asking about equipment, you have to filter the output report and then just go to the multi-level reports.
    The details you are asking for will not be available on a single screen except in the multi-level reports. Here you also have the option to create a variant.
    Regards,
    NNR

  • Purchase Requisition History Table

    Can anyone tell me the name of the table in which the revision history of a purchase requisition is maintained?
    For POs you have EKBE.
    But for PRs, which is the revision history table?

    Hi, try this way:
    Purchase requisition changes are recorded as change documents, so look in CDHDR/CDPOS. For PRs the object class is 'BANF' and the CDPOS table name is 'EBAN'. For example:
    * Count PR change documents in the date range.
    SELECT COUNT(*) FROM cdhdr INTO t_output-w_poc
      WHERE objectclas = 'BANF'
        AND udate      IN r_date
        AND tcode      IN ('ME52', 'ME52N').
    IF sy-subrc EQ 0.
      " Read the changed fields from CDPOS, where tabname = 'EBAN'.
    ENDIF.
    Prabhudas

  • Moving Data from Normal table to History tables

    Hi All,
    I'm in the process of moving data from normal tables to history tables.
    It should be some sort of procedure, running as a cron job at night.
    My aim is to move data that is 1.5 or 2 years old to the history tables.
    What aspects do I need to check when moving the data, and how can I write a procedure for this requirement?
    The schema is the same in both the normal table and the history table.
    It has to be a procedure based on the field RCRE_DT:
    if RCRE_DT is more than 2 years old, the data needs to be moved to HIS_<table>.
    I have to insert the record into the HIS_ table and simultaneously delete it from the normal table.
    This is in a production system and the tables are quite big.
    Please find enclosed the attached sample schema for the normal table and HIS_<table>.
    If I want to automate this script as a cron job for the other history tables as well, how do I do it in a single procedure, assuming the procedure for moving the data is the same in each case?
    Thanks for your help in advance.
    SQL> DESC PXM_FLT;
    Name Null? Type
    RCRE_USER_ID NOT NULL VARCHAR2(15)
    RCRE_DT NOT NULL DATE
    LCHG_USER_ID VARCHAR2(15)
    LCHG_DT DATE
    AIRLINE_CD NOT NULL VARCHAR2(5)
    REF_ID NOT NULL VARCHAR2(12)
    BATCH_DT NOT NULL DATE
    CPY_NO NOT NULL NUMBER(2)
    ACCRUAL_STATUS NOT NULL VARCHAR2(1)
    FLT_DT NOT NULL DATE
    OPERATING_CARRIER_CD NOT NULL VARCHAR2(3)
    OPERATING_FLT_NO NOT NULL NUMBER(4)
    MKTING_CARRIER_CD VARCHAR2(3)
    MKTING_FLT_NO NUMBER(4)
    BOARD_PT NOT NULL VARCHAR2(5)
    OFF_PT NOT NULL VARCHAR2(5)
    AIR_CD_SHARE_IND VARCHAR2(1)
    UPLOAD_ERR_CD VARCHAR2(5)
    MID_PT1 VARCHAR2(5)
    MID_PT2 VARCHAR2(5)
    MID_PT3 VARCHAR2(5)
    MID_PT4 VARCHAR2(5)
    MID_PT5 VARCHAR2(5)
    PAX_TYPE VARCHAR2(3)
    PAY_PRINCIPLE VARCHAR2(1)
    SQL> DESC HIS_PXM_FLT;
    Name Null? Type
    RCRE_USER_ID NOT NULL VARCHAR2(15)
    RCRE_DT NOT NULL DATE
    LCHG_USER_ID VARCHAR2(15)
    LCHG_DT DATE
    AIRLINE_CD NOT NULL VARCHAR2(5)
    REF_ID NOT NULL VARCHAR2(12)
    BATCH_DT NOT NULL DATE
    CPY_NO NOT NULL NUMBER(2)
    ACCRUAL_STATUS NOT NULL VARCHAR2(1)
    FLT_DT NOT NULL DATE
    OPERATING_CARRIER_CD NOT NULL VARCHAR2(3)
    OPERATING_FLT_NO NOT NULL NUMBER(4)
    MKTING_CARRIER_CD VARCHAR2(3)
    MKTING_FLT_NO NUMBER(4)
    BOARD_PT NOT NULL VARCHAR2(5)
    OFF_PT NOT NULL VARCHAR2(5)
    AIR_CD_SHARE_IND VARCHAR2(1)
    UPLOAD_ERR_CD VARCHAR2(5)
    MID_PT1 VARCHAR2(5)
    MID_PT2 VARCHAR2(5)
    MID_PT3 VARCHAR2(5)
    MID_PT4 VARCHAR2(5)
    MID_PT5 VARCHAR2(5)
    PAX_TYPE VARCHAR2(3)
    PAY_PRINCIPLE VARCHAR2(1)

    Hi All,
    Thanks for your valuable suggestions, but can you explain this a bit more, as I'm still confused about switching between partitioned tables and a temporary table.
    Suppose I have a table called PXM_FLT and a corresponding similar table named HIS_PXM_FLT. How do I do the partitioning - should I partition the normal table or HIS_PXM_FLT? I do have a date field on which I can partition by range.
    Can you please explain why I should again create a temporary table - what is its purpose? The application is currently designed so that old records have to be moved to HIS_PXM_FLT; can you please elaborate on this?
    Your suggestions are greatly appreciated. As I'm relatively new to this partitioning technique, I'm a bit confused about how it works, but I have come to understand that partition operations are better than plain inserts or deletes for a data-intensive job where millions of records need to be moved.
    Thanks for the feedback and your precious time.
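    For what it's worth, here is a minimal sketch of the batched insert-then-delete approach (the procedure name and batch size are assumptions, and it assumes HIS_PXM_FLT has the identical column list, as described above). Each batch is identified by ROWID so the INSERT and DELETE are guaranteed to hit the same rows:

    ```sql
    -- Move rows older than 2 years from PXM_FLT to HIS_PXM_FLT in batches,
    -- committing per batch to keep undo/redo usage bounded.
    CREATE OR REPLACE PROCEDURE move_pxm_flt_to_history IS
      CURSOR c_old IS
        SELECT rowid AS rid
        FROM   pxm_flt
        WHERE  rcre_dt < ADD_MONTHS(TRUNC(SYSDATE), -24);
      TYPE t_rid_tab IS TABLE OF ROWID;
      l_rids t_rid_tab;
    BEGIN
      OPEN c_old;
      LOOP
        FETCH c_old BULK COLLECT INTO l_rids LIMIT 10000;  -- batch size: an assumption
        EXIT WHEN l_rids.COUNT = 0;

        FORALL i IN 1 .. l_rids.COUNT
          INSERT INTO his_pxm_flt
          SELECT * FROM pxm_flt WHERE rowid = l_rids(i);

        FORALL i IN 1 .. l_rids.COUNT
          DELETE FROM pxm_flt WHERE rowid = l_rids(i);

        COMMIT;  -- one batch moved and deleted atomically per commit
      END LOOP;
      CLOSE c_old;
    END;
    /
    ```

    The cron job then simply calls the procedure via sqlplus; one such procedure per table pair (or a variant that takes the table name and builds the statements dynamically) would cover the other history tables.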

  • Big history table based on SCD2

    Ola,
    Has anyone of you ever experienced difficulties running millions of records through a SCD2 mapping? I created a couple of history tables and created mappings to fill them, based on SCD2-method. When I tested the mappings with a subset of data (10.000 - 40-000 rows), it all went OK.
    Now I am testing the mapping with the full data set, and three of the largest tables fail. The largest table is 23.7 million records; the smallest of the failing ones is 17.8 million. I get an error like this:
    ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ORA-06512: at "OWNER_OWR.WB_RT_MAPAUDIT", line 1762
    ORA-06512: at "OWNER_OWR.WB_RT_MAPAUDIT", line 2651
    ORA-06512: at "OWNER_LCN.E_ASL_SERVICE_HIST_LASH", line 3159
    ORA-01555: snapshot too old: rollback segment number 2 with name "_SYSSMU2$" too small
    ORA-06512: at "OWNER_LCN.E_ASL_SERVICE_HIST_LASH", line 3680
    ORA-06512: at "OWNER_LCN.E_ASL_SERVICE_HIST_LASH", line 4229
    ORA-06512: at line 1
    Only thing I changed is the dataset. Nothing else...
    Does the 'snapshot too old' message mean that it takes too much time to finish the mapping process?
    Anyone any clue?
    Kind regards,
    Moscowic

    Though I haven't faced this error while using OWB, I have seen it on other occasions when the transaction was very large or the UNDO segment was too small and couldn't hold the rollback information of the transaction.
    Check a couple of things:
    1. If you are running the mapping in set-based mode with large data volumes, you are likely to hit this error.
    2. Ask your DBA to look into the UNDO segment/tablespace, in case there are any sizing or fragmentation issues.
    Hope this helps.
    Nat
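    As a follow-up to point 2, the undo side can be checked directly. A quick sketch (the views and parameter are standard Oracle; the retention value at the end is only an example, not a recommendation):

    ```sql
    -- How much undo has been consumed recently, and how long do queries run?
    SELECT MAX(undoblks)    AS peak_undo_blocks,
           MAX(maxquerylen) AS longest_query_secs
    FROM   v$undostat;

    SHOW PARAMETER undo_retention

    -- If the longest-running query exceeds undo_retention, ORA-01555 is likely.
    -- Raising retention (and sizing the undo tablespace to match) is the usual fix:
    ALTER SYSTEM SET undo_retention = 21600;  -- 6 hours: an example value
    ```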

  • Change History Table in CRM

    Hello All,
    Please let me know the transaction change history table in CRM.
    Kindly help.
    Regards
    DJ

    Hi,
      Tables for change document history are
    CDHDR
    CDPOS_STR
    CDPOS_UID
    Regards
    Srinu

  • Number of Visible rows on the "Leave History" Table

    Dear SDN,
    please, how can I set the height of the "Leave History" table on the PZ54 ESS transaction ?
    The user is requesting more visible rows, but the table only displays 2 rows, and I can't find on the SDN how I can set this table property.
    --> This transaction can be accessed through the Portal, under ESS -> Leave -> Leave Application / Leave History, which is an ITS for the PZ54 transaction.
    Thanks in advance,
    Fabio

    This issue was resolved in the post "ITS iView height is compressed at first run".

  • Best practices for creating and querying a history table?

    Suppose I have a table of name-value pairs, and I want to keep track of changes to them so that I can query the value of any pair at any point in time.
    A direct approach would be to use a schema like this:
    CREATE TABLE NAME_VALUE_HISTORY (
      NAME     VARCHAR2(...),
      VALUE    VARCHAR2(...),
      MODIFIED DATE
    );
    When a name-value pair is updated, a new row is added to this table with the date of the change.
    To determine the value associated with a name at a particular point in time, one uses a query like:
    SELECT * FROM NAME_VALUE_HISTORY
    WHERE NAME = :name
      AND MODIFIED IN (SELECT MAX(MODIFIED)
                       FROM NAME_VALUE_HISTORY
                       WHERE NAME = :name AND MODIFIED <= :time)
    My question is: is there a better way to accomplish this? What indexes/hints would you recommend?
    What about a two-table approach like this one? http://pratchev.blogspot.com/2007/05/keeping-history-data-in-sql-server.html
    Edited by: user10936714 on Aug 9, 2012 8:35 AM

    user10936714 wrote:
    There is one advantage... recording the change of a value is just one insert, and it is also atomic without the use of transactions.
    At the risk of being dumb, why is that an advantage? Oracle always and everywhere uses transactions, so it's not like you're avoiding some overhead by not using transactions.
    If, for instance, the performance of reading the value of a name at a point in time is not important, then you can get by with just using one table - the history table.
    If you're not overly concerned with the performance implications of having the current data and the history data in the same table, then rather than rolling your own solution, I'd be strongly tempted to use Workspace Manager to let Oracle keep track of the changes.
    You can create a table, enable versioning, and do whatever DML operations you'd like
    SQL> create table address(
      2    address_id number primary key,
      3    address    varchar2(100)
      4  );
    Table created.
    SQL> exec dbms_wm.enableVersioning( 'ADDRESS', 'VIEW_WO_OVERWRITE' );
    PL/SQL procedure successfully completed.
    SQL> insert into address values( 1, 'First Address' );
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> update address
      2     set address = 'Second Address'
      3   where address_id = 1;
    1 row updated.
    SQL> commit;
    Commit complete.
    Then you can either query the history view:
    SQL> ed
    Wrote file afiedt.buf
      1  select address_id, address, wm_createtime
      2*   from address_hist
    SQL> /
    ADDRESS_ID ADDRESS                        WM_CREATETIME
             1 First Address                  09-AUG-12 01.48.58.566000 PM -04:00
             1 Second Address                 09-AUG-12 01.49.17.259000 PM -04:00
    Or, even cooler, you can go back to an arbitrary point in time, run a query, and see the historical information. I can go back to a point between the time that I committed the first change and the second change, query the ADDRESS view, and see the old data. This is invaluable if you want to take existing queries and/or reports and run them as of certain dates in the past when you're trying to debug a problem.
    SQL> select *
      2    from address;
    ADDRESS_ID ADDRESS
             1 First Address
    You can also do things like set savepoints, which are basically named points in time that you can go back to. That lets you do things like create a savepoint for the data as soon as month-end processing is completed, so you can easily go back to "July Month End" without needing to figure out exactly what time that occurred. And you can have multiple workspaces, so different users can work on completely different sets of changes simultaneously without interfering with each other. This was actually why Workspace Manager was originally created: to allow users manipulating spatial data to have extremely long-running transactions that could span days or months, and to be able to switch back and forth between the current live data and the data in each of these long-running scenarios.
    Justin

  • Cost to change hash partition key column in a history table

    Hi All,
    I have the following scenario.
    We have a history table in production which has 16 hash partitions on the key_column.
    However, the data in the history table has 878 distinct values of the key_column, about 1000 million rows, and all partitions are in the same tablespace.
    Now we have a Pro*C module which purges data from this history table in the following way:
    > DELETE FROM history_tab
    > WHERE p_date < (TO_DATE(sysdate+1, 'YYYYMMDD') - 210)
    > AND t_date < (TO_DATE(sysdate+1, 'YYYYMMDD') - 210)
    > AND ROWNUM <= 210;
    (p_date and t_date are two of the date columns in the history table.) Data is deleted using these two date conditions, but the partition key_column is a different column.
    So, as per the above statement, this history table contains about 7 months of data.
    The DBA is asking us to change this query and partition date-wise. Would it be proper to change the partition key_column (the existing hash partition key_column has 878 distinct values), and what do we need to consider to estimate the cost of this change (if it is appropriate to change the partition key_column at all)? I hope I explained my problem clearly, and I'm waiting for your suggestions.
    Thanks in advance.

    Hi Sir,
    Many thanks for the reply.
    On the first point: we plan to move the database to 10g after a lot of discussion with the client.
    On the second point: if we partition by date or by week, we will have 30 or 7 partitions respectively. As you suggested, since we have 16 partitions in the table, the best approach would be to partition by week; then we will have 7 partitions, and each query will hit those 7 partitions.
    On the third point: our main aim is to reduce the runtime of a job (a Pro*C program) which contains the following delete query to remove data from the history table. According to the query, it deletes data every day, keeps 7 months of history, and queries this huge table by date while deleting. So in this case, which will be more suitable: hash partitioning, range partitioning, or composite hash/range partitioning?
    DELETE FROM history_tab
    WHERE p_date < (TO_DATE(sysdate+1, 'YYYYMMDD') - 210)
    AND t_date < (TO_DATE(sysdate+1, 'YYYYMMDD') - 210)
    AND ROWNUM <= 210;
    I have read that hash partitioning is used so that data is evenly distributed across all partitions (though that depends on the nature of the data). In my case, I would like your suggestion on the best approach.
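    If the purge can be aligned with a date-based partitioning scheme, the delete turns into a metadata operation. A hedged sketch (the table name, column list and partition names are illustrative, not the actual production DDL):

    ```sql
    -- Range-partition the history table by week on p_date so that purging
    -- 210-day-old data becomes DROP PARTITION instead of row-by-row DELETE.
    CREATE TABLE history_tab_part (
      key_column NUMBER,
      p_date     DATE,
      t_date     DATE
      -- ... remaining columns ...
    )
    PARTITION BY RANGE (p_date) (
      PARTITION p_w01 VALUES LESS THAN (TO_DATE('2008-01-07', 'YYYY-MM-DD')),
      PARTITION p_w02 VALUES LESS THAN (TO_DATE('2008-01-14', 'YYYY-MM-DD'))
      -- ... one partition per week ...
    );

    -- The purge job then drops whole weeks once they age past 210 days:
    ALTER TABLE history_tab_part DROP PARTITION p_w01;
    ```

    Dropping a partition is far cheaper than deleting millions of rows, at the cost of having to change the partition key, which for a 1000-million-row table means rebuilding the table (e.g. via CREATE TABLE ... AS SELECT or DBMS_REDEFINITION) and its indexes.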

  • Create History Table from Main

    I'm seeking help from the experts because I'm far from an expert and I have been unsuccessful at figuring this out on my own.
    So far I've created a history table which is to keep all the data history from our main table. It is almost the same table, but with a few added columns to better keep records. The purpose of creating the table is to keep so much data out of our main table, which will cut down on query times.
    This, to me, is a very complicated SQL statement. I'm trying to use an INSERT statement to do this, but I'm populating the history table from multiple tables. An example of what I'm trying is:
    INSERT INTO history_table (column1, column2, column3, etc.)
    SELECT b.column1, b.column2, c.title||' '||c.l_name||', '||c.f_name, b.column4, etc.
    FROM main_table b, code_id_table c
    WHERE b.column3 = c.column1
    The problem is when I encounter null values in code_id_table c. Since I'm concatenating a few columns into one, if any of those columns are null, I get an error, since it doesn't know what data to pull. I want it to just put NULL values where it's pulling NULL values. I'm pretty sure it would work with an IF/THEN construct, but I'm not sure exactly how to handle it, or whether I'm even going at this with the right approach. The goal is to create a history table with a little more information than the main table, so we have to pull information from multiple tables into the history table and get around the null values.
    -FC1

    Oh I apologize. I'm running version 8i at the moment but will be upgrading to 10g within the next few weeks.
    -FC1
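    Possibly the real cause is the join rather than the concatenation: in Oracle, `||` simply treats NULL as an empty string, but an inner join silently drops main-table rows that have no match in code_id_table. A hedged sketch, using the hypothetical column names from the post and the pre-ANSI outer-join syntax available in 8i:

    ```sql
    INSERT INTO history_table (column1, column2, column3)
    SELECT b.column1,
           b.column2,
           -- NULL parts simply vanish from the concatenation; no error occurs
           c.title || ' ' || c.l_name || ', ' || c.f_name
    FROM   main_table b, code_id_table c
    WHERE  b.column3 = c.column1 (+);  -- (+) keeps b rows with no match; c.* become NULL
    ```

    With the outer join, unmatched rows arrive in the history table with NULL in the concatenated column, which sounds like the behavior you are after.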

  • DB Adapter inserting data in to regular tables and LOG / History tables

    I have a requirement to read an XML file using the File Adapter and insert it into the database.
    I'm using File Adapter --> Mediator --> DB Adapter.
    When I insert the data, I have to insert the same data into both the regular tables and the LOG/history tables.
    There is a foreign key relationship between the regular table and the history table. How do I insert into both tables (regular and LOG/history)?
    Thanks in advance

    While configuring the adapter, you need to create the relationship between the two tables: when importing the tables, import both of them.
    http://docs.oracle.com/cd/E23943_01/integration.1111/e10231/adptr_db.htm#BDCGCCJB
    Go to 9.2.7, "Creating Relationships".
    HTH
    N

  • History Table/History Tables?

    Hi All
    I have 11 tables, of which 1 is a main table, 9 tables reference the main table, and the remaining table references one of the 9 tables.
    E.g., table names:
    A, B, C, D, E, F, G, H, I, J, K
    A is the main table.
    B, C, D, E, F, G, H, I and J reference A.
    K references J.
    I have to keep a history of any updates made to all tables. What is the best practice for keeping history - should I create a history table for each table, or one huge history table? If it's one history table, how should I go about it?
    Thanks in advance
    Kind regards
    Mel

    Multiple history tables.
    One history table will result in massive buffer busy waits and corresponding concurrency/scalability issues - unless this is a 1-user application, of course.
    Been there, done that. It was not funny.
    Sybrand Bakker
    Senior Oracle DBA
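    A minimal sketch of the per-table approach (table A and its columns are hypothetical; repeat the pattern for each base table):

    ```sql
    -- One shadow table per base table, plus audit columns.
    CREATE TABLE a_hist (
      id        NUMBER,
      payload   VARCHAR2(100),
      hist_dt   DATE,
      hist_user VARCHAR2(30)
    );

    -- Record the pre-change image of each updated or deleted row.
    CREATE OR REPLACE TRIGGER a_hist_trg
    AFTER UPDATE OR DELETE ON a
    FOR EACH ROW
    BEGIN
      INSERT INTO a_hist (id, payload, hist_dt, hist_user)
      VALUES (:OLD.id, :OLD.payload, SYSDATE, USER);
    END;
    /
    ```

    Because each history table is written only by DML on its own base table, the hot-block contention of one shared history table is avoided.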
