Archiving table CDCLS

Hi gurus,
I have been requested to provide a solution to archive CDCLS table entries as a periodic process, because this table caused problems while migrating to Unicode (it was a test scenario, but we noted that the export took a very long time because of the large number of entries).
As I am not an expert in SAP archiving, this investigation is based on ADK (as the data won't be archived to external storage media), for which I have already found plenty of documentation on SDN and the SAP Community websites.
So what I need is a point of view and/or insight from 'archiving gurus' who have performed such a task, to help me reach my goal.
I understand that I have to create an archiving object (in transaction AOBJ), but I do not know whether prerequisites exist (although I suppose they do).
Finally, the entries required when creating a new archiving object are not very easy to determine...
Thanks to anyone who would be able to help me.
Rgds,
Zulain

Hi,
CDCLS (a cluster table) contains change documents, which are basically a log of changes to master records and documents.
The archiving object CHANGDOCU should only be used to archive the change documents of master data; refer to SAP Note 140255. Change documents of transaction data records should still be archived along with the appropriate archiving object.
To find the appropriate application object you will need to do a table analysis using transaction TAANA. SAP Note 689036 gives you more details on change document table analysis.
Furthermore, you can delete change documents using report RSCDOK99. Check SAP Notes 180927 and 183558.
Hope this helps
Cheers!
Samanjay

Similar Messages

  • Accrual Formula Archive Table Issue?

    Hello Experts,
    I customized PTO_PAYROLL_BALANCE_CALCULATION as per our business requirement. High level of the plan: it is based on the overtime an employee works per pay period, and depending on the overtime worked the employee accrues time and a half. For example, if the employee works 12 hours he accrues 18 hours of benefit time (the formula calculates this fine). To meet the business requirement we included employees who are hired in the middle of the pay period; the issue arises when the archive process is run.
    We are unable to populate the employee's period accrual (accrued in a particular pay period), and if we do populate the period accrual we are unable to process the period accrual for an employee terminated in the middle of the pay period. The requirement is to populate both in the archive table, i.e. to include both the period accrual and employees terminated mid pay period.
    Below is the customized formula. Thanks much, and I appreciate your time and response in advance.
    DEFAULT FOR ACP_START IS 'HD'
    DEFAULT FOR ACP_INELIGIBILITY_PERIOD_TYPE IS 'CM'
    DEFAULT FOR ACP_INELIGIBILITY_PERIOD_LENGTH IS 0
    DEFAULT FOR ACP_CONTINUOUS_SERVICE_DATE IS '4712/12/31 00:00:00' (date)
    DEFAULT FOR ACP_ENROLLMENT_END_DATE IS '4712/12/31 00:00:00' (date)
    DEFAULT FOR ACP_TERMINATION_DATE IS '4712/12/31 00:00:00' (date)
    DEFAULT FOR ACP_ENROLLMENT_START_DATE IS '4712/12/31 00:00:00' (date)
    DEFAULT FOR ACP_SERVICE_START_DATE IS '4712/12/31 00:00:00' (date)
    default for Accrual_Start_Date is '4712/12/31 00:00:00' (date)
    default for Accrual_Latest_Balance is 0
    INPUTS ARE
    Calculation_Date (date),
    Accrual_Start_Date (date),
    Accrual_Latest_Balance
    /* bug 4047666*/
    prm_Accrual_Start_Date (date) = Accrual_Start_Date
    prm_Calculation_Date (date) = Calculation_Date
    /* bug 4047666*/
    E = CALCULATE_PAYROLL_PERIODS()
    /* For the payroll year that spans the Calculation Date
       get the first days of the payroll year. If we have a latest balance,
       we use the Accrual Start Date. */
    Calculation_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
    Calculation_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
    /**XXX CUSTOM **/
    /*Calculation_Date = get_date('PAYROLL_PERIOD_END_DATE')*/
    Payroll_Year_First_Valid_Date = GET_DATE('PAYROLL_YEAR_FIRST_VALID_DATE')
    IF (Calculation_Date <> Calculation_Period_ED) AND
    (Calculation_Period_SD > Payroll_Year_First_Valid_Date) THEN
    E = GET_PAYROLL_PERIOD(ADD_DAYS(Calculation_Period_SD,-1))
    Calculation_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
    Calculation_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
    ELSE IF (Calculation_Period_SD = Payroll_Year_First_Valid_Date) AND
    (Calculation_Date <> Calculation_Period_ED) THEN
    Calculation_Period_ED = ADD_DAYS(Calculation_Period_SD,-1)
    /* Set the Calculation_Date to the Termination Date / Enrollment End Date if not defaulted */
    IF NOT (ACP_TERMINATION_DATE WAS DEFAULTED) OR
    NOT (ACP_ENROLLMENT_END_DATE WAS DEFAULTED) THEN
    Early_End_Date = least(ACP_TERMINATION_DATE, ACP_ENROLLMENT_END_DATE)
    IF (Early_End_Date < Calculation_Date) THEN
    Calculation_Date = Early_End_Date
    /* Get the last whole payroll period prior to the Calculation Date and ensure that it is within the
       Payroll Year (if the Calculation Date is the End of a Period then use that period) */
    E = GET_PAYROLL_PERIOD(Calculation_Date)
    Calculation_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
    Calculation_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
    /**XXX CUSTOM **/
    /*Calculation_Date = get_date('PAYROLL_PERIOD_END_DATE')*/
    IF (Calculation_Date <> Calculation_Period_ED) AND
    (Calculation_Period_SD > Payroll_Year_First_Valid_Date) THEN
    E = GET_PAYROLL_PERIOD(ADD_DAYS(Calculation_Period_SD,-1))
    Calculation_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
    Calculation_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
    ELSE IF (Calculation_Period_SD = Payroll_Year_First_Valid_Date) AND
    (Calculation_Date <> Calculation_Period_ED) THEN
    Calculation_Period_ED = ADD_DAYS(Calculation_Period_SD,-1)
    /* Set the Continuous Service Global Variable, whilst also
       ensuring that the continuous service date is before the Calculation Period */
    IF (ACP_CONTINUOUS_SERVICE_DATE WAS DEFAULTED) THEN
    E = set_date('CONTINUOUS_SERVICE_DATE', ACP_SERVICE_START_DATE)
    ELSE IF(ACP_CONTINUOUS_SERVICE_DATE > Calculation_Period_SD) THEN
    Total_Accrued_PTO = 0
    E = PUT_MESSAGE('HR_52796_PTO_FML_CSD')
    E = set_date('CONTINUOUS_SERVICE_DATE', ACP_CONTINUOUS_SERVICE_DATE)
    ELSE
    E = set_date('CONTINUOUS_SERVICE_DATE', ACP_CONTINUOUS_SERVICE_DATE)
    /* Determine the Accrual Start Rule and modify the start date of the accrual calculation accordingly.
       N.B. In this calculation the Accrual Start Rule determines the date from which a person may first accrue
       PTO. The Ineligibility Rule determines the period of time during which the PTO is not registered.
       Once this date has passed the accrual is registered from the date determined by the Accrual Start Rule. */
    Continuous_Service_Date = get_date('CONTINUOUS_SERVICE_DATE')
    IF (ACP_START = 'BOY') THEN
    First_Eligible_To_Accrue_Date =
    to_date('01/01/'||to_char(add_months(Continuous_Service_Date, 12), 'YYYY'),
    'DD/MM/YYYY')
    ELSE IF (ACP_START = 'PLUS_SIX_MONTHS') THEN
    First_Eligible_To_Accrue_Date = add_months(Continuous_Service_Date,6)
    ELSE IF (ACP_START = 'HD') THEN
    First_Eligible_To_Accrue_Date = Continuous_Service_Date
    /* Determine the date on which accrued PTO may first be registered, i.e. the date on which the
       Ineligibility Period expires */
    Accrual_Ineligibility_Expired_Date = First_Eligible_To_Accrue_Date
    IF (ACP_START <> 'PLUS_SIX_MONTHS' AND
    ACP_INELIGIBILITY_PERIOD_LENGTH > 0) THEN
    IF ACP_INELIGIBILITY_PERIOD_TYPE = 'BM' THEN
    Accrual_Ineligibility_Expired_Date = add_months(Continuous_Service_Date,
    ACP_INELIGIBILITY_PERIOD_LENGTH*2)
    ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'F' THEN
    Accrual_Ineligibility_Expired_Date = add_days(Continuous_Service_Date,
    ACP_INELIGIBILITY_PERIOD_LENGTH*14)
    ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'CM' THEN
    Accrual_Ineligibility_Expired_Date = add_months(Continuous_Service_Date,
    ACP_INELIGIBILITY_PERIOD_LENGTH)
    ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'LM' THEN
    Accrual_Ineligibility_Expired_Date = add_days(Continuous_Service_Date,
    ACP_INELIGIBILITY_PERIOD_LENGTH*28)
    ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'Q' THEN
    Accrual_Ineligibility_Expired_Date = add_months(Continuous_Service_Date,
    ACP_INELIGIBILITY_PERIOD_LENGTH*3)
    ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'SM' THEN
    Accrual_Ineligibility_Expired_Date = add_months(Continuous_Service_Date,
    ACP_INELIGIBILITY_PERIOD_LENGTH/2)
    ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'SY' THEN
    Accrual_Ineligibility_Expired_Date = add_months(Continuous_Service_Date,
    ACP_INELIGIBILITY_PERIOD_LENGTH*6)
    ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'W' THEN
    Accrual_Ineligibility_Expired_Date = add_days(Continuous_Service_Date,
    ACP_INELIGIBILITY_PERIOD_LENGTH*7)
    ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'Y' THEN
    Accrual_Ineligibility_Expired_Date = add_months(Continuous_Service_Date,
    ACP_INELIGIBILITY_PERIOD_LENGTH*12)
    IF Accrual_Ineligibility_Expired_Date > First_Eligible_To_Accrue_Date
    AND Calculation_Date < Accrual_Ineligibility_Expired_Date THEN
    First_Eligible_To_Accrue_Date = Accrual_Ineligibility_Expired_Date
    /* If the employee is eligible to accrue before the start of this year,
       we must get the period dates for the first period of the year.
       Otherwise, we do not need these dates, as we will never accrue that
       far back. */
    IF (not Accrual_Start_Date was defaulted) AND
    ((Calculation_Date < Accrual_Ineligibility_Expired_Date) OR
    (Accrual_Start_Date > Accrual_Ineligibility_Expired_Date)) THEN
    /* This function checks for unprocessed plan element entries, and
       returns the EE effective start date of the earliest it finds. This may
       be useful if we amend the design to process a partial year starting at
       this date.
       At the moment, however, we simply recalculate for the entire plan term
       in these circumstances, so Adjusted_Start_Date is never used */
    Adjusted_Start_Date = Get_Start_Date(Accrual_Start_Date,
    Payroll_Year_First_Valid_Date)
    /* Check whether RESET_PTO_ACCRUAL action parameter is defined and set to Y */
    /* If yes, then we need to calculate from the beginning */
    Reset_Accruals = Reset_PTO_Accruals()
    /* Check for retrospective Assignment changes */
    /* Return earliest effective date */
    Earliest_AsgUpd_Date = Get_Earliest_AsgChange_Date
    ( 'PTO Event Group',
    add_days(Calculation_Period_SD,-1),
    Calculation_Period_ED,
    Accrual_Start_Date)
    New_Adj_Start_Date = LEAST(Adjusted_Start_Date,
    Earliest_AsgUpd_Date)
    IF ((New_Adj_Start_Date < Accrual_Start_Date) OR
    (Reset_Accruals = 'TRUE')) THEN
    Process_Full_Term = 'Y'
    ELSE
    Process_Full_Term = 'N'
    ELSE
    Process_Full_Term = 'Y'
    Latest_Balance = 0
    IF (Process_Full_Term = 'Y') THEN
    /* Ensure the Payroll Year Start Date gets reset if calculating */
    /* from the beginning of the year. */
    E = SET_DATE('PAYROLL_YEAR_SD', Payroll_Year_First_Valid_Date)
    IF (Process_Full_Term = 'N') AND
    (Accrual_Start_Date >= First_Eligible_To_Accrue_Date) THEN
    E = GET_PAYROLL_PERIOD(Adjusted_Start_Date)
    Payroll_Year_1st_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
    Payroll_Year_1st_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
    Latest_Balance = Accrual_Latest_Balance
    Effective_Start_Date = Adjusted_Start_Date
    ) /* XXX Custom to include mid pay period hires*/
    ELSE IF First_Eligible_To_Accrue_Date <= Payroll_Year_First_Valid_Date THEN
    IF (not Accrual_Start_Date was defaulted) THEN
    Latest_Balance = Accrual_Latest_Balance
    ELSE
    Latest_Balance = 0
    E = GET_PAYROLL_PERIOD(Payroll_Year_First_Valid_Date)
    Payroll_Year_1st_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
    Payroll_Year_1st_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
    Effective_Start_Date = Payroll_Year_First_Valid_Date
    ELSE
    /* Get the first full payroll period following the First_Eligible_To_Accrue_Date
       (if it falls on the beginning of the period then use that period) */
    IF (not Accrual_Start_Date was defaulted) THEN
    Latest_Balance = Accrual_Latest_Balance
    ELSE
    Latest_Balance = 0
    E = GET_PAYROLL_PERIOD(First_Eligible_To_Accrue_Date )
    First_Eligible_To_Accrue_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
    First_Eligible_To_Accrue_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
    /* IF First_Eligible_To_Accrue_Date <> First_Eligible_To_Accrue_Period_SD THEN
    E = GET_PAYROLL_PERIOD(add_days(First_Eligible_To_Accrue_Period_ED,1))
    First_Eligible_To_Accrue_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
    First_Eligible_To_Accrue_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
    IF (First_Eligible_To_Accrue_Period_SD > Calculation_Period_ED) THEN
    Total_Accrued_PTO = 0
    E = PUT_MESSAGE('HR_52793_PTO_FML_ASG_INELIG')
    ) */ /* XXX Custom to include mid pay period hires*/
    Payroll_Year_1st_Period_SD = First_Eligible_To_Accrue_Period_SD
    Payroll_Year_1st_Period_ED = First_Eligible_To_Accrue_Period_ED
    Effective_Start_Date = First_Eligible_To_Accrue_Date
    Effective_Start_Date = GREATEST(Effective_Start_Date, ACP_ENROLLMENT_START_DATE)
    /* Output messages based on calculated date */
    IF (Early_End_Date < Payroll_Year_1st_Period_ED) THEN
    Total_Accrued_PTO = 0
    E = PUT_MESSAGE('HR_52794_PTO_FML_ASG_TER')
    If (Calculation_Period_ED < Payroll_Year_1st_Period_ED) THEN
    Total_Accrued_PTO = 0
    E = PUT_MESSAGE('HR_52795_PTO_FML_CALC_DATE')
    /* Determine the date on which PTO actually starts accruing based on Hire Date,
       Continuous Service Date and plan Enrollment Start Date. Remember, we have
       already determined whether to use hire date or CSD earlier in the formula.
       If this date is after the 1st period and the first eligible date then
       establish the first full payroll period after this date
       (if the Actual Start Date falls on the beginning of a payroll period then
       use this period) */
    Enrollment_Start_Date = ACP_ENROLLMENT_START_DATE
    Actual_Accrual_Start_Date = GREATEST(Enrollment_Start_Date,
    Continuous_Service_Date,
    Payroll_Year_1st_Period_SD)
    /* Determine the actual start of the accrual calculation */
    IF (Actual_Accrual_Start_Date > Payroll_Year_1st_Period_SD AND
    Actual_Accrual_Start_Date > First_Eligible_To_Accrue_Date) THEN
    E = GET_PAYROLL_PERIOD(Actual_Accrual_Start_Date)
    Accrual_Start_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
    Accrual_Start_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
    IF Actual_Accrual_Start_Date > Accrual_Start_Period_SD THEN
    ( E = GET_PAYROLL_PERIOD(Actual_Accrual_Start_Date) /* XXX CUSTOM*/
    Accrual_Start_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
    Accrual_Start_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
    E = GET_PAYROLL_PERIOD(add_days(Accrual_Start_Period_ED,1))
    Accrual_Start_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
    Accrual_Start_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
    /* If the Actual Accrual Period is after the Calculation Period then end the processing. */
    IF (Accrual_Start_Period_SD > Calculation_Period_ED) THEN
    Total_Accrued_PTO = 0
    E = PUT_MESSAGE('HR_52797_PTO_FML_ACT_ACCRUAL')
    ELSE IF (First_Eligible_To_Accrue_Date > Payroll_Year_1st_Period_SD) THEN
    Accrual_Start_Period_SD = First_Eligible_To_Accrue_Period_SD
    Accrual_Start_Period_ED = First_Eligible_To_Accrue_Period_ED
    ELSE
    Accrual_Start_Period_SD = Payroll_Year_1st_Period_SD
    Accrual_Start_Period_ED = Payroll_Year_1st_Period_ED
    /* Now set up the information that will be used when looping
       through the payroll periods */
    IF Calculation_Period_ED >= Accrual_Start_Period_ED THEN
    E = set_date('PERIOD_SD',Accrual_Start_Period_SD)
    E = set_date('PERIOD_ED',Accrual_Start_Period_ED)
    E = set_date('LAST_PERIOD_SD',Calculation_Period_SD)
    E = set_date('LAST_PERIOD_ED',Calculation_Period_ED)
    IF (Process_Full_Term = 'N') THEN
    E = set_number('TOTAL_ACCRUED_PTO', Latest_Balance)
    ELSE
    E = set_number('TOTAL_ACCRUED_PTO', 0)
    /* Initialize Band Information */
    E = set_number('ANNUAL_RATE', 0)
    E = set_number('UPPER_LIMIT', 0)
    E = set_number('CEILING', 0)
    E = LOOP_CONTROL('PTO_PAYROLL_PERIOD_ACCRUAL')
    Total_Accrued_PTO = get_number('TOTAL_ACCRUED_PTO') - Latest_Balance
    IF Accrual_Start_Period_SD <= Calculation_Period_SD THEN
    Accrual_end_date = Calculation_Period_ED
    IF Process_Full_Term = 'Y' AND
    Effective_Start_Date > Actual_Accrual_Start_Date THEN
    Effective_Start_Date = Actual_Accrual_Start_Date
    Effective_End_Date = Calculation_Date
    /* bug 4047666*/
    IF Process_Full_Term = 'N' AND NOT (Accrual_Start_Date WAS DEFAULTED)
    AND NOT (Accrual_Latest_Balance WAS DEFAULTED)
    AND prm_Accrual_Start_Date > prm_Calculation_Date THEN
    Effective_Start_Date = ADD_DAYS(Effective_End_Date,1)
    ELSE
    /* bug 4047666*/
    IF Effective_Start_Date >= Effective_End_Date THEN
    Effective_Start_Date = least(Effective_End_Date, Accrual_Start_Period_SD)
    RETURN Total_Accrued_PTO, Effective_start_date, Effective_end_date, Accrual_end_date
    Regards

    The issue was in transaction OAC0: the content server path was incorrect.

  • How to use the FOR ALL ENTRIES clause while fetching data from archived tables

    How can I use the FOR ALL ENTRIES clause while fetching data from archived tables using the FM
    /PBS/SELECT_INTO_TABLE?
    I need to fetch data from an archived table for all the entries in an internal table.
    Kindly provide some input on this.
    Thanks and regards,
    Ramesh

    Hi Ramesh,
    I have a query regarding accessing archived data through PBS.
    I have archived SAP FI data (object FI_DOCUMNT) using the SAP standard process through transaction SARA.
    Now please tell me whether I can access this archived data through the PBS add-on FM '/PBS/SELECT_INTO_TABLE'.
    Do I need to do something else to access data archived through the SAP standard process or not? If yes, then please tell me, as I am not able to get the data using the above FM.
    The call to the above FM is as follows:
    CALL FUNCTION '/PBS/SELECT_INTO_TABLE'
      EXPORTING
        archiv     = 'CFI'
        option     = ''
        tabname    = 'BKPF'
        schl1_name = 'BELNR'
        schl1_von  = belnr-low
        schl1_bis  = belnr-low
        schl2_name = 'GJAHR'
        schl2_von  = gjahr-low
        schl2_bis  = gjahr-low
        schl3_name = 'BUKRS'
        schl3_von  = bukrs-low
        schl3_bis  = bukrs-low
*       schl4_name =
*       schl4_von  =
*       schl4_bis  =
        clr_itab   = 'X'
*       max_zahl   =
      TABLES
        i_tabelle  = t_bkpf
*       schl1_in   =
*       schl2_in   =
*       schl3_in   =
*       schl4_in   =
      EXCEPTIONS
        eof        = 1
        OTHERS     = 2.
    It gives me the following error:
    Index for table not supported! BKPF BELNR.
    Please help ASAP.
    Thanks and regards,
    Gurpreet Singh

  • What are the cleanup opportunities in terms of Temporary tables, Archive tables etc.?

    What are the cleanup opportunities in terms of temporary tables, archive tables etc.?
    Can you provide any scripts which will give storage by environment and by Oracle ID (to check the size in GB)?
    Example:
    =========
    APPS : xxxGB

    For archiving and purging, take a look at the document below:
    Reducing Your Oracle E-Business Suite Data Footprint using Archiving, Purging, and Information Lifecycle Management [ID 752322.1]

  • Urgent: Archiving tables in Oracle

    Hello everybody,
    Does anybody have experience archiving records using Oracle? Please help me...
    My client needs a way to archive history records, but the Oracle documentation does not provide an exact procedure for archiving records so that they can be restored later...
    If anybody is aware of one, please go ahead and share your experience of what the exact procedure is...
    Also, if an exact procedure is not provided by Oracle, is there any third-party software available that would help?
    Regards
    Mahesh

    I don't know of any third party software, though that doesn't mean none exists. If we're talking just 1 or 2 tables, though, it's a pretty easy script to write yourself.
    1) Call SQL*Plus and have it
    - truncate table <yourArchiveTable>
    - insert into <yourArchiveTable> (<select rows you want to archive>)
    - delete rows you want to archive from table
    2) Call exp to export the data from <yourArchiveTable> to a flat file.
    3) Move this flat file wherever you'd like
    On the restore side,
    1) Call imp to insert data back into <yourArchiveTable> (you might choose to have both an incoming & outgoing archive table)
    2) insert into <yourTable> (select * from <yourArchiveTable>)
    If you partition the table and are able to limit the archive & restore to a single partition, it's even easier. For individual tables, it shouldn't take more than a day or two to roll the code.
    If you need more complicated logic, i.e. some rows that you archive aren't removed from the database, or resolving collisions if you try to insert rows back into a table where they might have changed, that increases the complexity significantly. Since dealing with these sorts of cases requires very particular business logic, though, there's no tool in the world that would be able to handle it; you'd have to roll your own.
    Justin
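    A minimal sketch of the script outlined above, for one table; the table name, column name and 12-month retention period are placeholders, not part of the original post:
    -- 1) refresh the archive staging table and move the old rows
    truncate table orders_archive;
    insert into orders_archive
      select * from orders
      where order_date < add_months(sysdate, -12);
    delete from orders
      where order_date < add_months(sysdate, -12);
    commit;
    -- 2) then export the staging table to a flat file, e.g. with
    --    exp user/pass tables=orders_archive file=orders_archive.dmp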

  • How to check the data of an archived table.

    I have archived a table created by me. I have executed the write program for the archiving object in SARA. Now how can I check the data of my archived table?

    Hello Vinod,
    One thing to check in the customizing settings is your "Place File in Storage System" option.  If you have selected the option to Store before deleting, the archive file will not be available for selection within the delete job until the store job has completed successfully.
    As for where your archive file will be stored - there are a number of things to check.  The archive write job will place the archive file in whatever filesystem you have set up within the FILE transaction (/nFILE).  There is a logical file path (for example ARCHIVE_GLOBAL_PATH) where you "assign" the physical path (for example UNIX: /sapmnt/<SYSID>/archivefiles).  The logical path is associated with a logical file name (for example ARCHIVE_DATA_FILE_WITH_ARCHIVE_LINK).  This is the file name that is used within the customizing settings of the archive object.
    Then, the file will be stored using the content repository you defined within the customizing settings as well.  Depending on what you are using to store your files (IXOS, IBM CommonStore, SAP Content Server, etc.), that is where the file will be stored.
    Hope this helps.
    Regards,
    Karin Tillotson

  • Archive Tables Approach

    Hi,
    I have a database running in the production environment. The data is increasing daily and I need to archive the old data. I have planned to archive it weekly into another table (an archive table) in the same database.
    I am new to this, so I am not yet considering bigger concepts like partitioning.
    I plan to simply move the one-week-old data to the archive table, which is created in the same database with the prefix AR. My idea is:
    1. Create one stored procedure with a transaction to insert the data into the archive table and delete the data from the original tables
    2. Schedule this weekly as a SQL Server Agent job
    Here are my questions:
    1. I have 50 transaction tables. If I write one stored procedure covering all 50 tables with a transaction enabled, how will the performance be?
        I mean, the job will take some time to run. Will it affect normal transactions on the base tables?
        Is it advisable to perform the archiving of all tables in the same stored procedure?
    2. I am going to use a transaction in the stored procedure to roll back on error. Is it advisable to use a TRY..CATCH statement?
    I read an article that takes an approach like mine, but there is no TRY..CATCH; it checks for failure after every statement and rolls back. Which one is better? The code shown in the article is below (the code just shows an example with the Orders tables):
    CREATE PROC dbo.ArchiveData
    @CutOffDate datetime = NULL
    AS
    BEGIN
    SET NOCOUNT ON
    IF @CutOffDate IS NULL
    BEGIN
    SET @CutOffDate = DATEADD(mm, -6, CURRENT_TIMESTAMP)
    END
    ELSE
    BEGIN
    IF @CutOffDate > DATEADD(mm, -3, CURRENT_TIMESTAMP)
    BEGIN
    RAISERROR ('Cannot delete orders from last three months', 16, 1)
    RETURN -1
    END
    END
    BEGIN TRAN
    INSERT INTO Archive.dbo.Orders
    SELECT *
    FROM dbo.Orders
    WHERE OrderDate < @CutOffDate
    IF @@ERROR <> 0
    BEGIN
    ROLLBACK TRAN
    RAISERROR ('Error occured while copying data to Archive.dbo.Orders', 16, 1)
    RETURN -1
    END
    INSERT INTO Archive.dbo.OrderDetails
    SELECT *
    FROM dbo.OrderDetails
    WHERE OrderID IN
    (
    SELECT OrderID
    FROM dbo.Orders
    WHERE OrderDate < @CutOffDate
    )
    IF @@ERROR <> 0
    BEGIN
    ROLLBACK TRAN
    RAISERROR ('Error occured while copying data to Archive.dbo.OrderDetails', 16, 1)
    RETURN -1
    END
    DELETE dbo.OrderDetails
    WHERE OrderID IN
    (
    SELECT OrderID
    FROM dbo.Orders
    WHERE OrderDate < @CutOffDate
    )
    IF @@ERROR <> 0
    BEGIN
    ROLLBACK TRAN
    RAISERROR ('Error occured while deleting data from dbo.OrderDetails', 16, 1)
    RETURN -1
    END
    DELETE dbo.Orders
    WHERE OrderDate < @CutOffDate
    IF @@ERROR <> 0
    BEGIN
    ROLLBACK TRAN
    RAISERROR ('Error occured while deleting data from dbo.Orders', 16, 1)
    RETURN -1
    END
    IF @@TRANCOUNT > 0
    BEGIN
    COMMIT TRAN
    RETURN 0
    END
    END

    I think that this approach is more efficient than yours because it performs the insert/delete as one query, so you would probably benefit from the performance increase...
    INSERT INTO Archive.dbo.Orders
    SELECT *
    FROM dbo.Orders
    WHERE OrderDate < @CutOffDate
    --SQL2008: composable DML, archive and delete in a single statement
    insert into Archive.dbo.Orders
    select getdate(), d.*
    from (delete from foo..Orders
          output deleted.*
          where <put your WHERE clause here>) d
    go
    SQL Server I/O might be affected by archiving all the tables in one run, so perhaps you can schedule the job, or a few jobs, to run at different periods of time.
    Best regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Unable to store PDF, XL, Word documents into SAP archived tables

    Hi Experts,
    I have created a web interface in WD ABAP which stores employees' attachment data via SAP ArchiveLink.
    For that I have carried out the following activities.
    I have completed the customizing for a document type under business object PREL; for this I referred to the following link:
    SAP ArchiveLink
    Now I use the File Upload UI element in WD ABAP, which passes the local file data as an XSTRING to the following function modules for creating
    an attachment.
    data: it_out     type toadt,
          it_storage type zdmsstorage.
    CALL FUNCTION 'ARCHIV_CREATE_TABLE'
      EXPORTING
        ar_object                = 'HRPDATA'  " object category created for file storage under business object PREL
        object_id                = '10000008'
        sap_object               = 'PREL'
        document                 = filedata
      IMPORTING
        outdoc                   = it_out
      EXCEPTIONS
        error_archiv             = 1
        error_communicationtable = 2
        error_connectiontable    = 3
        error_kernel             = 4
        error_parameter          = 5
        error_user_exit          = 6
        OTHERS                   = 7.
    For reading the attached document I am using the following FMs:
    CALL FUNCTION 'SCMS_AO_TABLE_GET'
      EXPORTING
        mandt   = sy-mandt
        arc_id  = 'Z1'
        doc_id  = lv_doc_type   "im_doc / '4D5D8445165220C8E10000000A3C082E'
        comp_id = 'data'
*     IMPORTING
*       length  =
      TABLES
        data    = bindata.
*   data: binary_tab type
    CALL FUNCTION 'SCMS_BINARY_TO_XSTRING'
      EXPORTING
        input_length = 10000
        first_line   = 0
        last_line    = 0
      IMPORTING
        buffer       = v_xstring
      TABLES
        binary_tab   = bindata.
    Now when I upload any text or image file, it works fine. When I try to upload a PDF, Excel or Word document, it uploads the file into the archive table,
    but when I read this file back by converting the file data into an XSTRING and passing it to the File Download UI element, it says the file is corrupt. Please suggest whether this
    is an issue with the object category configuration (with the storage class) or whether I am reading the document the wrong way for PDF, Excel and Word.
    Thanks in advance
    Abhay

    Hi,
    Please check whether the function module used can handle PDF as well as Excel and Word documents. If not, use some other function module.

  • Archive table - Procedure

    Hello Experts,
    I have developed a stored procedure to archive old data from a huge table. Before deleting a row from the table, it inserts the same row into another table. I need to schedule this through dbms_job for daily execution.
    CREATE OR REPLACE PROCEDURE Proc1 AS
    BEGIN
      DECLARE
        success_flag varchar2(1);
        total1       NUMBER := 0;
        CURSOR c1 IS SELECT rowid, col1, col2 FROM table1 WHERE create_date < sysdate - 120;
      BEGIN
        FOR z IN c1
        LOOP
          INSERT INTO table2 VALUES (z.val1, val2 …);
          success_flag := 'T';
          IF success_flag = 'T' THEN
            COMMIT;
            DELETE FROM table1
            WHERE rowid = z.rowid;
            COMMIT;
          ELSE
            ROLLBACK;
          END IF;
        END LOOP;
      EXCEPTION
        WHEN OTHERS THEN
          success_flag := 'F';
      END;
    END;
    Please help me to handle the space-related issues through this procedure. Please let me know of any better approach to achieve the same.
    Thanks in Advance,
    Bala

    Don't know what FileNet is. Why can't you partition it?
    Anyway, if you must use insert/delete technique then why not
    insert into table2
    select <whatever>
    from table1
    where create_date < sysdate -120;
    delete from table1
    where create_date < sysdate - 120;
    Also, do NOT commit between those two statements. If your procedure aborts during the delete
    it would roll back the delete but not the insert. When you restart the procedure it will insert again.
    Get rid of the useless success flag as well.
    Also, remove the exception section: it adds nothing useful and hides the real error.
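    A minimal sketch of the single-transaction version suggested above (table and column names follow the post's examples and are placeholders):
    begin
      insert into table2
        select col1, col2
        from   table1
        where  create_date < sysdate - 120;
      delete from table1
      where  create_date < sysdate - 120;
      commit;  -- one commit at the end; a failure before this rolls back both statements
    end;
    /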

  • Best Solution for Archiving Table data

    Hi All,
    I have a table with a huge amount of data. It is not a partitioned table.
    On average, 10,000 records are inserted into this table per day. Now I want to archive (back up)
    each year's data manually, keep it in a safe location, and then delete those archived rows
    from the table. Whenever required, it should be easy to import the data back into this table. All this happens through the
    application.
    One approach in my mind right now is transferring the data from the table to a comma-separated flat file,
    and whenever required importing it back into the table from the flat file using the external tables concept.
    Can anybody suggest the best solution for this?
    Thanks

    The best solution would be partitioning.
    Any other solution requires DML - running DELETE and INSERT transactions to remove a data set and to add a data set (if need be) again.
    With partitioning this is achieved (in sub-seconds) using DDL by exchanging a partition's contents with that of a table. This means that after the archived data has been loaded (SQL*Loader, import, etc.) into a table (and indexes created), that table (with indexes) is "swapped" into the partitioned table as a partition.
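    A rough sketch of the partition-exchange step described above; the table, partition and staging-table names are assumptions, not from the original post:
    -- load the archived data (SQL*Loader, imp, external table) into a plain
    -- staging table with matching structure and indexes, then swap it in:
    alter table sales_history
      exchange partition p_2006
      with table sales_2006_staging
      including indexes
      without validation;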

  • Approach for archiving table data from an Oracle DB and loading it into an archive server

    Hi,
    I have a requirement where I need to archive and purge old data in a data warehouse.
    The archival strategy will select data from a list of tables and load it into files.
    I also need to use SQL*Loader control files to load data from the above files into the archival server.
    I want to know which is the better approach to load the data into files:
    should I use UTL_FILE, or spool the table data into files using SELECT statements?
    I also have some CLOB columns in some tables.

    I did something like this a couple of months ago. After some performance tests, the fastest way was to create the files through UTL_FILE used in a procedure with bulk SQL. Another good idea is to create the files with Python, which operates on text files blazingly fast. (Use PL/SQL if you need to create the files on the database server, but if you have to create files on a remote machine use something else (Python?).)
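    A minimal sketch of the UTL_FILE plus bulk SQL approach mentioned above; the directory object ARCH_DIR, the table and its columns are assumptions:
    declare
      cursor c is
        select order_id || ',' || to_char(order_date, 'YYYY-MM-DD') as line
        from   orders
        where  order_date < add_months(sysdate, -12);
      type t_lines is table of varchar2(4000);
      l_lines t_lines;
      l_file  utl_file.file_type;
    begin
      l_file := utl_file.fopen('ARCH_DIR', 'orders_archive.csv', 'w', 32767);
      open c;
      loop
        fetch c bulk collect into l_lines limit 10000;  -- bulk fetch in chunks
        for i in 1 .. l_lines.count loop
          utl_file.put_line(l_file, l_lines(i));
        end loop;
        exit when c%notfound;
      end loop;
      close c;
      utl_file.fclose(l_file);
    end;
    /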

  • Options to Archive Tables in 10g

    Hi
    I just want to archive one table in an Oracle 10g database. Can anyone tell me the options for doing that?
    I have 65M records in that table. Also, do I need to rebuild the indexes after archiving the table?

    What do you mean by archiving?
    We, for example, purge data from tables every three months.
    This data is then stored in a history database, with an identical schema and identical tables.
    To access this historical data, an overall history view is created in the source database that accesses the local real-time table and, through a database link, the history table.
    After purging data from a table it is smart to shrink (Oracle 10g and above) or rebuild the table and the indexes.
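    A rough sketch of that pattern; the table, view, database link and history schema names are placeholders:
    -- overall view combining current data and the history database
    create or replace view orders_all as
      select * from orders
      union all
      select * from orders@history_db;
    -- reclaim space in the source table after the purge (10g and above)
    alter table orders enable row movement;
    alter table orders shrink space cascade;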

  • Archive table data

    I am new to DBA work.
    We have a table containing 227M records. Now we want to keep only 24 months of data, so we need to archive data for this table and keep only data from 2005 to 2007. This table has a partition for each month. What steps exactly do we need to follow? I would very much appreciate it if someone could give me some ideas. Thanks.

    One approach to what you want to do would be to create a second table to hold the archive data. For example, if your current table is called table_data then you could create a second table called table_data_archive. The second table must be created as an exact copy of the original table: same field names, key structure, etc. And of course you would want to partition that table, too.
    Then execute a SQL query to copy the data you wish to archive from the original table into the second table. The query could be something like: INSERT INTO table_data_archive SELECT * FROM table_data WHERE date_field <= DATE '2004-12-31'.
    Let me know if you have any additional questions, and good luck!
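    A minimal sketch of the copy step described above (names follow the post's example; the DELETE of the copied rows is an assumption about the follow-up purge):
    insert into table_data_archive
      select * from table_data
      where  date_field <= date '2004-12-31';
    delete from table_data
    where  date_field <= date '2004-12-31';
    commit;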

  • How to archive table SMODBLTXT

    Dear Sir,
    I found that table SMODBLTXT is quite big. Do you know how to archive this table?
    Regards,

    Are you really creating so many notes for your objects? That table stores notes for all Mobile objects...the only way to archive it would be to archive the objects owning/creating the notes...
    Regards,
    Ankan

  • How to check which column data differs between the master table and the archive table

    Hi All,
    I have two tables: table a (a1 number, a2 varchar2, a3 varchar2) and table b (b1 number, b2 varchar2, b3 varchar2).
    How can I check whether the data in both tables is the same (including all columns)?
    The data in a.a1 should be the same as b.b1, a.a2 the same as b.b2, and so on.
    If they are not the same, I need to know which field differs.
    Kindly share your ideas.

    887268 wrote:
    Thanks, Sven W.
    The reply above clearly shows what my question is.
    One column must be the primary key; based on that key I need to find out which fields have different data.
    I am struggling with this. I tried the following already, but was not able to get it:
    select the columns from a MINUS select the columns from b
    -- from this I can find whether a difference occurred or not,
    but I am not able to get which field values changed.
    Good. Then you would match the rows using the PK column and need to compare the columns.
    Instead of a MINUS + UNION ALL + MINUS we can now use a FULL OUTER JOIN
    It is a little task to write out all column names, but 40 columns can be handled.
    This statement would show you both tables with matching rows on the same line.
    select a.*, b.*
    from a
    FULL OUTER JOIN b on a.id = b.id
    Now filter/check for mismatches:
    select case when a.col1 != b.col1 then 'COL1 value changed'
                    when a.col2 != b.col2 then 'COL2 value changed'
                    when a.col3 != b.col3 then 'COL3 value changed'
             end as compare_result
            ,a.*, b.*
    from a
    FULL OUTER JOIN b on a.id = b.id
    /* return only non matching columns */
    where (a.col1,a.col2,a.col3) != (b.col1,b.col2,b.col3)
    You might need to add NVLs to take care of null values. Test this!
    Another way could be to group upon the primary key
    select *
    from (
      select id 
               ,count(distinct col1)-1 cnt_col1
               ,count(distinct col2)-1 cnt_col2
               ,count(distinct col3)-1 cnt_col3
       from (
         select 'A' source, a.*
         from a
         UNION ALL
         select 'B' source, b.*
         from b)
       group by ID)
    /* only records with differences */
    where 1 in (cnt_col1, cnt_col2, cnt_col3);
    The count columns will hold either 1 or 0. If it is 1 then this column has a difference.
