Important: DATA ARCHIVING

Hi,
My client is unable to get the payslip for employees beyond a certain period. I have used transactions SARA and PC_PAYRESULT but am still unable to find the real reason. My questions:
i. How do I find out up to what period the data has been archived?
ii. Is the payslip program RPCETD00 unable to read the archived data?
iii. Can archived data be read from the clusters?
Thanks and Regards,
Aniket C

Hi Aniket,
1. There is no direct way to do this. You should go to SARA -> Management and find out what data was archived in each completed archiving session.
2. The following transactions/programs can access archived payroll results:
PC_PAYRESULT
PU12
RPCKTOx0 ["x" = country letter]
H99CWTR0
3. What exactly do you mean?
And for archiving in HR, it is recommended to use archive groups, via transaction PU22. I hope your client is using this?
http://help.sap.com/saphelp_erp2004/helpdata/en/8d/3e6f60462a11d189000000e8323d3a/frameset.htm
Hope this helps,
Naveen

Similar Messages

  • Beginner trying to import data from GL_interface to GL

    Hello, I'm a beginner with Oracle GL and I'm not able to do an import from GL_Interface. I have put my data into GL_Interface, but I think something is wrong with it, because when I try to import the data
    with Journals->Import->Run, Oracle tells me that GL_Interface is empty!
    I think maybe my insert is not correct and that's why Oracle doesn't want to use it... can someone help me?
    ----> I inserted the data this way:
    insert into gl_interface (status,set_of_books_id,accounting_date, currency_code,date_created,
    created_by, actual_flag, user_je_category_name, user_je_source_name, segment1,segment2,segment3,
    entered_dr, entered_cr,transaction_date,reference1 )
    values ('NEW', '1609', sysdate, 'FRF', sysdate,1008009, 'A', 'xx jab payroll', 'xx jab payroll', '01','002','4004',111.11,0,
    sysdate,'xx_finance_jab_demo');
    insert into gl_interface (status,set_of_books_id,accounting_date, currency_code,date_created,
    created_by, actual_flag, user_je_category_name, user_je_source_name, segment1,segment2,segment3,
    entered_dr, entered_cr,transaction_date,reference1 )
    values ('NEW', '1609', sysdate, 'FRF', sysdate,1008009, 'A', 'xx jab payroll', 'xx jab payroll', '01','002','1005',0,111.11,
    sysdate,'xx_finance_jab_demo');
    ------------> Oracle sends me this message:
    General Ledger: Version : 11.5.0 - Development
    Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
    GLLEZL module: Journal Import
    Current system time is 14-MAR-2007 15:39:25
    Running in Debug Mode
    gllsob() 14-MAR-2007 15:39:25sob_id = 124
    sob_name = Vision France
    coa_id = 50569
    num_segments = 6
    delim = '.'
    segments =
    SEGMENT1
    SEGMENT2
    SEGMENT3
    SEGMENT4
    SEGMENT5
    SEGMENT6
    index segment is SEGMENT2
    balancing segment is SEGMENT1
    currency = EUR
    sus_flag = Y
    ic_flag = Y
    latest_opened_encumbrance_year = 2006
    pd_type = Month
    << gllsob() 14-MAR-2007 15:39:25
    gllsys() 14-MAR-2007 15:39:25fnd_user_id = 1008009
    fnd_user_name = JAB-DEVELOPPEUR
    fnd_login_id = 2675718
    con_request_id = 2918896
    sus_on = 0
    from_date =
    to_date =
    create_summary = 0
    archive = 0
    num_rec = 1000
    num_flex = 2500
    run_id = 55578
    << gllsys() 14-MAR-2007 15:39:25
    SHRD0108: Retrieved 51 records from fnd_currencies
    gllcsa() 14-MAR-2007 15:39:25<< gllcsa() 14-MAR-2007 15:39:25
    gllcnt() 14-MAR-2007 15:39:25SHRD0118: Updated 1 record(s) in table: gl_interface_control
    source name = xx jab payroll
    group id = -1
    LEZL0001: Found 1 sources to process.
    glluch() 14-MAR-2007 15:39:25<< glluch() 14-MAR-2007 15:39:25
    gl_import_hook_pkg.pre_module_hook() 14-MAR-2007 15:39:26<< gl_import_hook_pkg.pre_module_hook() 14-MAR-2007 15:39:26
    glusbe() 14-MAR-2007 15:39:26<< glusbe() 14-MAR-2007 15:39:26
    << gllcnt() 14-MAR-2007 15:39:26
    gllpst() 14-MAR-2007 15:39:26SHRD0108: Retrieved 110 records from gl_period_statuses
    << gllpst() 14-MAR-2007 15:39:27
    glldat() 14-MAR-2007 15:39:27Successfully built decode fragment for period_name and period_year
    gllbud() 14-MAR-2007 15:39:27SHRD0108: Retrieved 10 records from the budget tables
    << gllbud() 14-MAR-2007 15:39:27
    gllenc() 14-MAR-2007 15:39:27SHRD0108: Retrieved 15 records from gl_encumbrance_types
    << gllenc() 14-MAR-2007 15:39:27
    glldlc() 14-MAR-2007 15:39:27<< glldlc() 14-MAR-2007 15:39:27
    gllcvr() 14-MAR-2007 15:39:27SHRD0108: Retrieved 6 records from gl_daily_conversion_types
    << gllcvr() 14-MAR-2007 15:39:27
    gllfss() 14-MAR-2007 15:39:27LEZL0005: Successfully finished building dynamic SQL statement.
    << gllfss() 14-MAR-2007 15:39:27
    gllcje() 14-MAR-2007 15:39:27main_stmt:
    select int.rowid
    decode(int.SEGMENT1|| int.SEGMENT2|| int.SEGMENT3|| int.SEGMENT4|| int.SEGMENT5|| int.SEGMENT6
    , '', replace(ccid_cc.SEGMENT2,'.','
    ') || '.' || replace(ccid_cc.SEGMENT1,'.','
    ') || '.' || replace(ccid_cc.SEGMENT3,'.','
    ') || '.' || replace(ccid_cc.SEGMENT4,'.','
    ') || '.' || replace(ccid_cc.SEGMENT5,'.','
    ') || '.' || replace(ccid_cc.SEGMENT6,'.','
    , replace(int.SEGMENT2,'.','
    ') || '.' || replace(int.SEGMENT1,'.','
    ') || '.' || replace(int.SEGMENT3,'.','
    ') || '.' || replace(int.SEGMENT4,'.','
    ') || '.' || replace(int.SEGMENT5,'.','
    ') || '.' || replace(int.SEGMENT6,'.','
    ') ) flexfield , nvl(flex_cc.code_combination_id,
    nvl(int.code_combination_id, -4))
    , decode(int.SEGMENT1|| int.SEGMENT2|| int.SEGMENT3|| int.SEGMENT4|| int.SEGMENT5|| int.SEGMENT6
    , '', decode(ccid_cc.code_combination_id,
    null, decode(int.code_combination_id, null, -4, -5),
    decode(sign(nvl(ccid_cc.start_date_active, int.accounting_date-1)
    - int.accounting_date),
    1, -1,
    decode(sign(nvl(ccid_cc.end_date_active, int.accounting_date +1)
    - int.accounting_date),
    -1, -1, 0)) +
    decode(ccid_cc.enabled_flag,
    'N', -10, 0) +
    decode(ccid_cc.summary_flag, 'Y', -100,
    decode(int.actual_flag,
    'B', decode(ccid_cc.detail_budgeting_allowed_flag,
    'N', -100, 0),
    decode(ccid_cc.detail_posting_allowed_flag,
    'N', -100, 0)))),
    decode(flex_cc.code_combination_id,
    null, -4,
    decode(sign(nvl(flex_cc.start_date_active, int.accounting_date-1)
    - int.accounting_date),
    1, -1,
    decode(sign(nvl(flex_cc.end_date_active, int.accounting_date +1)
    - int.accounting_date),
    -1, -1, 0)) +
    decode(flex_cc.enabled_flag,
    'N', -10, 0) +
    decode(flex_cc.summary_flag, 'Y', -100,
    decode(int.actual_flag,
    'B', decode(flex_cc.detail_budgeting_allowed_flag,
    'N', -100, 0),
    decode(flex_cc.detail_posting_allowed_flag,
    'N', -100, 0)))))
    , int.user_je_category_name
    , int.user_je_category_name
    , 'UNKNOWN' period_name
    , decode(actual_flag, 'B'
         , decode(period_name, NULL, '-1' ,period_name), nvl(period_name, '0')) period_name2
    , currency_code
    , decode(actual_flag
         , 'A', actual_flag
         , 'B', decode(budget_version_id
         , 1210, actual_flag
         , 1211, actual_flag
         , 1212, actual_flag
         , 1331, actual_flag
         , 1657, actual_flag
         , 1658, actual_flag
         , NULL, '1', '6')
         , 'E', decode(encumbrance_type_id
         , 1000, actual_flag
         , 1001, actual_flag
         , 1022, actual_flag
         , 1023, actual_flag
         , 1024, actual_flag
         , 1048, actual_flag
         , 1049, actual_flag
         , 1050, actual_flag
         , 1025, actual_flag
         , 999, actual_flag
         , 1045, actual_flag
         , 1046, actual_flag
         , 1047, actual_flag
         , 1068, actual_flag
         , 1088, actual_flag
         , NULL, '3', '4'), '5') actual_flag
    , '0' exception_rate
    , decode(currency_code
         , 'EUR', 1
         , 'STAT', 1
         , decode(actual_flag, 'E', -8, 'B', 1
         , decode(user_currency_conversion_type
         , 'User', decode(currency_conversion_rate, NULL, -1, currency_conversion_rate)
         ,'Corporate',decode(currency_conversion_date,NULL,-2,-6)
         ,'Spot',decode(currency_conversion_date,NULL,-2,-6)
         ,'Reporting',decode(currency_conversion_date,NULL,-2,-6)
         ,'HRUK',decode(currency_conversion_date,NULL,-2,-6)
         ,'DALY',decode(currency_conversion_date,NULL,-2,-6)
         ,'HLI',decode(currency_conversion_date,NULL,-2,-6)
         , NULL, decode(currency_conversion_rate,NULL,
         decode(decode(nvl(to_char(entered_dr),'X'),'X',1,2),decode(nvl(to_char(accounted_dr),'X'),'X',1,2),
         decode(decode(nvl(to_char(entered_cr),'X'),'X',1,2),decode(nvl(to_char(accounted_cr),'X'),'X',1,2),-20,-3),-3),-9),-9))) currency_conversion_rate
    , to_number(to_char(nvl(int.currency_conversion_date, int.accounting_date), 'J'))
    , decode(int.actual_flag
         , 'A', decode(int.currency_code
              , 'EUR', 'User'
         , 'STAT', 'User'
              , nvl(int.user_currency_conversion_type, 'User'))
         , 'B', 'User', 'E', 'User'
         , nvl(int.user_currency_conversion_type, 'User')) user_currency_conversion_type
    , ltrim(rtrim(substrb(rtrim(substrb(int.reference1, 1, 50)) || ' ' || int.user_je_source_name || ' 2918896: ' || int.actual_flag || ' ' || int.group_id, 1, 100)))
    , rtrim(substrb(nvl(rtrim(int.reference2), 'Journal Import ' || int.user_je_source_name || ' 2918896:'), 1, 240))
    , ltrim(rtrim(substrb(rtrim(rtrim(substrb(int.reference4, 1, 25)) || ' ' || int.user_je_category_name || ' ' || int.currency_code || decode(int.actual_flag, 'E', ' ' || int.encumbrance_type_id, 'B', ' ' || int.budget_version_id, '') || ' ' || int.user_currency_conversion_type || ' ' || decode(int.user_currency_conversion_type, NULL, '', 'User', to_char(int.currency_conversion_rate), to_char(int.currency_conversion_date))) || ' ' || substrb(int.reference8, 1, 15) || int.originating_bal_seg_value, 1, 100)))
    , rtrim(nvl(rtrim(int.reference5), 'Journal Import 2918896:'))
    , rtrim(substrb(nvl(rtrim(int.reference6), 'Journal Import Created'), 1, 80))
    , rtrim(decode(upper(substrb(nvl(rtrim(int.reference7), 'N'), 1, 1)),'Y','Y', 'N'))
    , decode(upper(substrb(int.reference7, 1, 1)), 'Y', decode(rtrim(reference8), NULL, '-1', rtrim(substrb(reference8, 1, 15))), NULL)
    , rtrim(upper(substrb(int.reference9, 1, 1)))
    , rtrim(nvl(rtrim(int.reference10), nvl(to_char(int.subledger_doc_sequence_value), 'Journal Import Created')))
    , int.entered_dr
    , int.entered_cr
    , to_number(to_char(int.accounting_date,'J'))
    , to_char(int.accounting_date, 'YYYY/MM/DD')
    , int.user_je_source_name
    , nvl(int.encumbrance_type_id, -1)
    , nvl(int.budget_version_id, -1)
    , NULL
    , int.stat_amount
    , decode(int.actual_flag
    , 'E', decode(int.currency_code, 'STAT', '1', '0'), '0')
    , decode(int.actual_flag
    , 'A', decode(int.budget_version_id
    , NULL, decode(int.encumbrance_type_id, NULL, '0', '1')
    , decode(int.encumbrance_type_id, NULL, '2', '3'))
    , 'B', decode(int.encumbrance_type_id
    , NULL, '0', '4')
    , 'E', decode(int.budget_version_id
    , NULL, '0', '5'), '0')
    , int.accounted_dr
    , int.accounted_cr
    , nvl(int.group_id, -1)
    , nvl(int.average_journal_flag, 'N')
    , int.originating_bal_seg_value
    from GL_INTERFACE int,
    gl_code_combinations flex_cc,
    gl_code_combinations ccid_cc
    where int.set_of_books_id = 124
    and int.status != 'PROCESSED'
    and (int.user_je_source_name,nvl(int.group_id,-1)) in (('xx jab payroll', -1))
    and flex_cc.SEGMENT1(+) = int.SEGMENT1
    and flex_cc.SEGMENT2(+) = int.SEGMENT2
    and flex_cc.SEGMENT3(+) = int.SEGMENT3
    and flex_cc.SEGMENT4(+) = int.SEGMENT4
    and flex_cc.SEGMENT5(+) = int.SEGMENT5
    and flex_cc.SEGMENT6(+) = int.SEGMENT6
    and flex_cc.chart_of_accounts_id(+) = 50569
    and flex_cc.template_id(+) is NULL
    and ccid_cc.code_combination_id(+) = int.code_combination_id
    and ccid_cc.chart_of_accounts_id(+) = 50569
    and ccid_cc.template_id(+) is NULL
    order by decode(int.SEGMENT1|| int.SEGMENT2|| int.SEGMENT3|| int.SEGMENT4|| int.SEGMENT5|| int.SEGMENT6
    , rpad(ccid_cc.SEGMENT2,30) || '.' || rpad(ccid_cc.SEGMENT1,30) || '.' || rpad(ccid_cc.SEGMENT3,30) || '.' || rpad(ccid_cc.SEGMENT4,30) || '.' || rpad(ccid_cc.SEGMENT5,30) || '.' || rpad(ccid_cc.SEGMENT6,30)
    , rpad(int.SEGMENT2,30) || '.' || rpad(int.SEGMENT1,30) || '.' || rpad(int.SEGMENT3,30) || '.' || rpad(int.SEGMENT4,30) || '.' || rpad(int.SEGMENT5,30) || '.' || rpad(int.SEGMENT6,30)
    ) , int.entered_dr, int.accounted_dr, int.entered_cr, int.accounted_cr, int.accounting_date
    control->len_mainsql = 16402
    length of main_stmt = 7428
    upd_stmt.arr:
    update GL_INTERFACE
    set status = :status
    , status_description = :description
    , je_batch_id = :batch_id
    , je_header_id = :header_id
    , je_line_num = :line_num
    , code_combination_id = decode(:ccid, '-1', code_combination_id, :ccid)
    , accounted_dr = :acc_dr
    , accounted_cr = :acc_cr
    , descr_flex_error_message = :descr_description
    , request_id = to_number(:req_id)
    where rowid = :row_id
    upd_stmt.len: 394
    ins_stmt.arr:
    insert into gl_je_lines
    ( je_header_id, je_line_num, last_update_date, creation_date, last_updated_by, created_by , set_of_books_id, code_combination_id ,period_name, effective_date , status , entered_dr , entered_cr , accounted_dr , accounted_cr , reference_1 , reference_2
    , reference_3 , reference_4 , reference_5 , reference_6 , reference_7 , reference_8 , reference_9 , reference_10 , description
    , stat_amount , attribute1 , attribute2 , attribute3 , attribute4 , attribute5 , attribute6 ,attribute7 , attribute8
    , attribute9 , attribute10 , attribute11 , attribute12 , attribute13 , attribute14, attribute15, attribute16, attribute17
    , attribute18 , attribute19 , attribute20 , context , context2 , context3 , invoice_amount , invoice_date , invoice_identifier
    , tax_code , no1 , ussgl_transaction_code , gl_sl_link_id , gl_sl_link_table , subledger_doc_sequence_id , subledger_doc_sequence_value
    , jgzz_recon_ref , ignore_rate_flag)
    SELECT
    :je_header_id , :je_line_num , sysdate , sysdate , 1008009 , 1008009 , 124 , :ccid , :period_name
    , decode(substr(:account_date, 1, 1), '-', trunc(sysdate), to_date(:account_date, 'YYYY/MM/DD'))
    , 'U' , :entered_dr , :entered_cr , :accounted_dr , :accounted_cr
    , reference21, reference22, reference23, reference24, reference25, reference26, reference27, reference28, reference29
    , reference30, :description, :stat_amt, '' , '', '', '', '', '', '', '', '' , '', '', '', '', '', '', '', '', '', '', ''
    , '', '', '', '', '', '', '', '', '', gl_sl_link_id
    , gl_sl_link_table
    , subledger_doc_sequence_id
    , subledger_doc_sequence_value
    , jgzz_recon_ref
    , null
    FROM GL_INTERFACE
    where rowid = :row_id
    ins_stmt.len: 1818
    glluch() 14-MAR-2007 15:39:27<< glluch() 14-MAR-2007 15:39:27
    LEZL0008: Found no interface records to process.
    LEZL0009: Check SET_OF_BOOKS_ID, GROUP_ID, and USER_JE_SOURCE_NAME of interface records.
    If no GROUP_ID is specified, then only data with no GROUP_ID will be retrieved. Note that most data
    from the Oracle subledgers has a GROUP_ID, and will not be retrieved if no GROUP_ID is specified.
    SHRD0119: Deleted 1 record(s) from gl_interface_control.
    Start of log messages from FND_FILE
    End of log messages from FND_FILE
    Executing request completion options...
    Finished executing request completion options.
    No data was found in the GL_INTERFACE table.
    Concurrent request completed
    Current system time is 14-MAR-2007 15:39:27
    ---------------------------------------------------------------------------

    As the error message says, you need to specify a group id.
    As per the documentation:
    GROUP_ID: Enter a unique group number to distinguish import data within a
    source. You can run Journal Import in parallel for the same source if you specify a
    unique group number for each request.
    For example, if you load data for payables and receivables, you need to use different group ids to separate the payables and receivables data.
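    For illustration, a hedged sketch of one of your inserts with a group_id added (the group_id value 100 is arbitrary; the other values are taken from your example):
    insert into gl_interface (status, set_of_books_id, accounting_date, currency_code, date_created,
    created_by, actual_flag, user_je_category_name, user_je_source_name, group_id,
    segment1, segment2, segment3, entered_dr, entered_cr, transaction_date, reference1)
    values ('NEW', '1609', sysdate, 'FRF', sysdate, 1008009, 'A', 'xx jab payroll', 'xx jab payroll',
    100, -- arbitrary example group id; use a unique number per load
    '01', '002', '4004', 111.11, 0, sysdate, 'xx_finance_jab_demo');
    When you then run Journal Import, select that same group id for the source so the rows are picked up.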
    HTH

  • How to create the Export Data and Import Data using flat file interface

    Hi,
    Based on the requirement below, please let me know how to export and import data using the flat file interface.
    Please provide the steps involved.
    BW/BI - Recovery Process for SNP data. 
    For each SNP InfoProvider,
    create:
    1) Export Data:
    1.a)  Create an export data source, InfoPackage, comm structure, etc. necessary to create an ASCII fixed length flat file on the XI
    ctnhsappdata\iface\SCPI063\Out folder for each SNP InfoProvider. 
    1.b)  All fields in each InfoProvider should be exported and included in the flat file. 
    1.c)  A process chain should be created for each InfoProvider with a start event. 
    1.d)  If the file exists on the target drive it should be overwritten. 
    1.e)  The exported data file name should include the InfoProvider technical name.
    1.f)  Include APO Planning Version, Date of Planning Run, APO Location, Calendar Year/Month, Material and BW Plant as selection criteria.
    2) Import Data:
    2.a) Create a flat file source system InfoPackage, comm structure, etc. necessary to import ASCII fixed length flat files from the XI
    ctnhsappdata\iface\SCPI063\Out folder for each SNP InfoProvider.
    2.b)  All fields for each InfoProvider should be mapped and imported from the flat file.
    2.c)  A process chain should be created for each InfoProvider with a start event. 
    2.d)  The file should be archived in the
    ctnhsappdata\iface\SCPI063\Archive directory.  Each file name should have the date appended in YYYYMMDD format.  Each file should be deleted from the \Out directory after it is archived. 
    Thanks in advance.
    Tyson

    Here's some info on working with plists:
    http://developer.apple.com/documentation/Cocoa/Conceptual/PropertyLists/Introduction/chapter1_section1.html
    They can be edited with any text editor. Xcode provides a graphical editor for them - make sure to use the .plist extension so Xcode will recognize it.

  • Data Archiving - System Prerequisites

    Hi,
    We are planning to ARCHIVE some of the tables to reduce the TCO (Total Cost of Ownership).
    In this regard, I would like to know the following:
    On the Basis side, I want to check for any prerequisites (like minimum SP level, kernel version, notes to be imported, etc.).
    Are there any documents which clearly describe these prerequisites for preparing the system to carry out the archiving work without any issues?
    (Note:  We are not using ILM solution for Archiving)
    I am mostly concerned with the SAP Notes that are considered to be the prerequisites.
    Best Regards
    Raghunahth L
    Important System information :
    Our system version is as follows :
    System  -> ERP 2005  (Production Server)
    OS      -> Windows 2003
    DB      -> Oracle 10.2.0.2.0
    SPAM    -> 7.00 / 0023
    Kernel  -> 7.00 (133)
    Unicode -> Yes
    SP Info:
    SAP_BASIS 700(0011)
    SAP_ABA 700(0011)
    PI_BASIS 2005_1_700(0003)
    ST-PI 2005_1_700(0003)
    SAP_BW 700(0012)
    SAP_AP 700(0008)
    SAP_HR 600(0003)
    SAP_AP 700(0008)
    SAP_HR 600(0013)
    SAP_APPL 600(0008)
    EA-IPPE 400(0008)
    EA-DFPS 600(0008)
    EA-HR 600(0013)
    EA-FINSERV 600(0008)
    FINBASIS 600(0008)
    EA-PS 600(0008)
    EA-RETAIL 600(0008)
    EA-GLTRADE 600(0008)
    ECC-DIMP 600(0008)
    ERECRUIT 600(0008)
    FI-CA 600(0008)
    FI-CAX 600(0008)
    INSURANCE 600(0008)
    IS-CWM 600(0008)
    IS-H 600(0008)
    IS-M 600(0008)
    IS-OIL 600(0008)
    IS-PS-CA 600(0008)
    IS-UT 600(0008)
    SEM-BW 600(0008)
    LSOFE 600(0008)
    ST-A/PI 01J_ECC600(0000)
    Tables we are planning to archive
    AGKO,     BFIT_A,     BFIT_A0,     BFO_A_RA,     BFOD_A,     BFOD_AB,     BFOK_A,     BFOK_AB,     BKPF,     BSAD,     BSAK,     BSAS,     BSBW,     BSE_CLR,     BSE_CLR_ASGMT,     BSEG_ADD,     BSEGC,     BSIM,     BSIP,     BSIS,     BVOR,     CDCLS,     CDHDR,     CDPOS_STR,     CDPOS_UID,     ETXDCH,     ETXDCI,     ETXDCJ,     FAGL_BSBW_HISTRY,     FAGL_SPLINFO,     FAGL_SPLINFO_VAL,     FAGLFLEXA,     FIGLDOC,     RF048,     RFBLG,     STXB,     STXH,     STXL,     TOA01,     TOA02,     TOA03,     TOAHR,     TTXI,     TTXY,     WITH_ITEM,          COFIP,     COFIS,     COFIT,     ECMCA,     ECMCT,     EPIDXB,     EPIDXC,     FBICRC001A,     FBICRC001P,     FBICRC001T,     FBICRC002A,     FBICRC002P,     FBICRC002T,     FILCA,     FILCP,     FILCT,     GLFLEXA,     GLFLEXP,     GLFLEXT,     GLFUNCA,     GLFUNCP,     GLFUNCT,     GLFUNCU,     GLFUNCV,     GLIDXA,     GLP0,     GLPCA,     GLPCP,     GLPCT,     GLSP,     GLT0,     GLT3,     GLTP,     GLTPC,     GMAVCA,     GMAVCP,     GMAVCT,     JVPO1,     JVPSC01A,     JVPSC01P,     JVPSC01T,     JVSO1,     JVSO2,     JVTO1,     JVTO2,     STXB,     STXH,     STXL,     TRACTSLA,     TRACTSLP,     TRACTSLT,
    in addition we have some Z Tables to be archived.

    Hi,
    Which OSS notes or BC sets are prerequisites depends upon the programs used for the archive, delete and read runs.
    We search for OSS notes in scenarios such as these:
    if there is no proper selection criterion in the write variant,
    if the program is terminated due to long processing time,
    if the percentage of data archived for your selection is low even though the data meets the minimum criteria,
    or if the system allows users to change archived data.
    In all the above scenarios we search for OSS notes. If SAP has released BC sets, then we implement them.
    If you have any problem while archiving and, based on your archiving experience, you think that some of the OSS notes will help, then take a call on implementing them.
    With the tables you have mentioned, I can say that archiving objects such as FI_DOCUMNT, FI_SL_DATA, EC_PCA_ITM, EC_PCA_DATA, CHANGEDOCU and ARCHIVELINK will be involved.
    You have to search for the latest released BC sets or OSS notes for your system and application.
    -Thanks,
    Ajay

  • Can't import data from CSMARS 4.3.6 to CSMARS 5.3.6?

    I have a problem with my CSMARS.
    I want to migrate data from CSMARS 4.3.6 to CSMARS 5.3.6 (a different device). The result says completed, but the import failed.
    The size of the data to import is 21 GB.
    This is the captured activity for this migration:
    ++++++
    pnimp> import data 10.1.199.207:/PTMN/CMARS-PHQ_2011-03-26-22-00-57 09/28/07
    Last imported configuration archive is from 10.1.199.186:/MARS/CMARS-PHQ_2011-03-23-14-56-08/2011-03-23/CF/cf-4360-436_2011-03-23-14-57-21.pna created at 2011-03-23-14-57-21. Because events received after the config archive was created may not be imported correctly, you should import a latest copy of configuration from the Gen-1 MARS box before trying this command if possible.
    Do you wish to continue (yes/no): yes
    Total number of days with data : 1152
    Total number of event archives to import: 0
    Total number of report result archives to import: 0
    Total number of statistics archives to import: 0
    Total number of incident archives to import: 0
    Estimated time to import all events: 0 hours 0 minutes
    Do you wish to continue (yes/no):yes
    Tue Mar 29 10:29:35 2011 INFO Mounting 10.1.199.207:/PTMN/CMARS-PHQ_2011-03-26-22-00-57...
    Tue Mar 29 10:29:35 2011 INFO Mounted to 10.1.199.207:/PTMN/CMARS-PHQ_2011-03-26-22-00-57
    Tue Mar 29 10:29:35 2011 INFO Scanning mounting point ...
    Tue Mar 29 10:29:35 2011 INFO Start data importing from date : 2011-03-25
    Tue Mar 29 10:29:35 2011 INFO Number of day's data to import: 1153
    Tue Mar 29 10:29:39 2011 INFO Total size of data to import: 0 (MB)
    Tue Mar 29 10:29:39 2011 INFO Available disk space on MARS: 191146 (MB)
    Tue Mar 29 10:29:39 2011 INFO Scanning archive files ...
    Tue Mar 29 10:29:39 2011 INFO Building indexes for raw message files
    Tue Mar 29 10:29:39 2011 INFO (Index builder 0) begin building raw message indexes for data in /pnarchive/DATA_POOL/2011-03-23/ES
    Tue Mar 29 10:29:39 2011 INFO (Index builder 0) begin building raw message indexes for data in /pnarchive/DATA_POOL/2011-03-24/ES
    Tue Mar 29 10:29:39 2011 INFO (Index builder 0) begin building raw message indexes for data in /pnarchive/DATA_POOL/2011-03-25/ES
    Tue Mar 29 10:29:39 2011 INFO (Index builder 0) begin building raw message indexes for data in /pnarchive/DATA_POOL/2011-03-26/ES
    Tue Mar 29 10:29:39 2011 INFO (Index builder 1) begin building raw message indexes for data in /pnarchive/DATA_POOL/2011-03-23/ES
    Tue Mar 29 10:29:39 2011 INFO (Index builder 0) begin building raw message indexes for data in /pnarchive/DATA_POOL/2011-03-27/ES
    Tue Mar 29 10:29:39 2011 INFO (Index builder 1) begin building raw message indexes for data in /pnarchive/DATA_POOL/2011-03-24/ES
    Tue Mar 29 10:29:39 2011 INFO (Index builder 0) begin building raw message indexes for data in /pnarchive/DATA_POOL/2011-03-28/ES
    Tue Mar 29 10:29:39 2011 INFO (Index builder 1) begin building raw message indexes for data in /pnarchive/DATA_POOL/2011-03-25/ES
    Tue Mar 29 10:29:39 2011 INFO (Index builder 0) begin building raw message indexes for data in /pnarchive/DATA_POOL/2011-03-29/ES
    Tue Mar 29 10:29:39 2011 INFO Finished index building
    Tue Mar 29 10:29:39 2011 INFO (Index builder 1) begin building raw message indexes for data in /pnarchive/DATA_POOL/2011-03-26/ES
    Tue Mar 29 10:29:39 2011 INFO (Index builder 1) begin building raw message indexes for data in /pnarchive/DATA_POOL/2011-03-27/ES
    Tue Mar 29 10:29:39 2011 INFO (Index builder 1) begin building raw message indexes for data in /pnarchive/DATA_POOL/2011-03-28/ES
    Tue Mar 29 10:29:39 2011 INFO (Index builder 1) begin building raw message indexes for data in /pnarchive/DATA_POOL/2011-03-29/ES
    Tue Mar 29 10:29:39 2011 INFO Finished index building
    Tue Mar 29 10:29:39 2011 INFO Unmounting 10.1.199.207:/PTMN/CMARS-PHQ_2011-03-26-22-00-57 ...
    Tue Mar 29 10:29:40 2011 INFO Data importing successfully completed!
    ++++++
    How can the total number of days with data be 1152 while all the other counts are 0?
    And how can it say "Tue Mar 29 10:29:35 2011 INFO Start data importing from date : 2011-03-25", even though I used 09/28/07 as the start date for the migration?
    Can anyone help me?
    Thank you..

    No.
    You can only update to the latest available

  • Import data in 9i

    Dear Experts
    I want to import data into a 9i database, but I don't want archive (redo) generation while importing the data.
    How can I do this?
    regards
    saima

    Hi,
    Not possible if you are using the imp utility to import the dump.
    Regards
    Anurag Tibrewal.
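    A hedged editorial note: the classic imp utility performs conventional-path inserts, which always generate redo, so there is no imp option to suppress it. Redo can only be minimized for direct-path operations; a minimal illustration with placeholder names (my_table, my_staging_table):
    -- NOLOGGING affects only direct-path operations; conventional inserts still log.
    ALTER TABLE my_table NOLOGGING;
    -- Direct-path insert: generates minimal redo for the table data.
    INSERT /*+ APPEND */ INTO my_table
    SELECT * FROM my_staging_table;
    COMMIT;
    Note that NOLOGGING loads are not recoverable from the redo stream, so take a fresh backup afterwards.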

  • Put Together A Data Archiving Strategy And Execute It Before Embarking On SAP Upgrade

    A significant amount is invested by organizations in an SAP upgrade project. However, few realize that data archiving before embarking on an SAP upgrade yields significant benefits, not only from a cost standpoint but also through reduced complexity during the upgrade. This article describes why this is a best practice and details the benefits that accrue to organizations as a result of data archiving before an SAP upgrade. Avaali is a specialist in the area of Enterprise Information Management. Our consultants come with significant global experience implementing projects for the world's largest corporations.
    Archiving before Upgrade
    It is recommended to undertake archiving before upgrading your SAP system in order to reduce the volume of transaction data that is migrated to the new system. This results in shorter upgrade projects and therefore less upgrade effort and cost. More importantly, production downtime and the risks associated with the upgrade will be significantly reduced. Storage cost is another important consideration: database size typically increases by 5% to 10% with each new SAP software release, and by as much as 30% if a Unicode conversion is required. Archiving reduces the overall database size, so typically no additional storage costs are incurred when upgrading.
    It is also important to ensure that data in the SAP system is cleaned before you embark on an upgrade. Most organizations tend to accumulate messy and unwanted data such as old material codes, technical data and subsequent posting data. Cleaning your data beforehand smooths the upgrade process, ensures you only have what you need in the new version, and helps reduce project duration. Consider archiving, or even purging if needed, to achieve this. Make full use of the upgrade and enjoy a new, more powerful and leaner system with enhanced functionality that can take your business to the next level.
    Archiving also yields Long-term Cost Savings
    By implementing SAP Data Archiving before your upgrade project you will also put in place a long term Archiving Strategy and Policy that will help you generate on-going cost savings for your organization. In addition to moving data from the production SAP database to less costly storage devices, archived data is also compressed by a factor of five relative to the space it would take up in the production database. Compression dramatically reduces space consumption on the archive storage media and based on average customer experience, can reduce hardware requirements by as much as 80% or 90%. In addition, backup time, administration time and associated costs are cut in half. Storing data on less costly long-term storage media reduces total cost of ownership while providing users with full, transparent access to archived information.

    Maybe this article can help; it uses XML for structural change flexibility: http://www.oracle.com/technetwork/oramag/2006/06-jul/o46xml-097640.html

  • Import data from a php5/mysql to office 365

    Is there a way to import data from a deployed php5/mysql application to Office 365?

    Hi,
    According to your description, my understanding is that you want to migrate your php application to SharePoint Online.
    For your issue, you can try to convert your php application to an asp.net application, and then migrate the asp.net application to Office 365.
    For more information, you can have a look at the blogs:
    http://www.asp.net/downloads/archived-v11/migration-assistants/php-to-aspnet
    http://msdn.microsoft.com/en-us/library/aa479002.aspx
    http://www.codeproject.com/Articles/21465/Converting-an-ASP-NET-site-into-a-SharePoint-site
    http://krishnamfs.blogspot.com/2013/09/steps-to-migrate-php-intranet-site-into.html
    Best Regards,
    Eric
    Eric Tao
    TechNet Community Support

  • SAP Support for Data Archiving

    Hello,
    We just wanted to confirm whether SAP will support issues we encounter while restoring data archived via SARA. For example, if a defect is found in the process of restoring archived data, is this still supported by SAP?

    Hi
    Supported, and the SAP data archiving support tools are SARA, DART and SARI.
    SARA --- SAP Archive Administration
    All data archiving activities commence with Archive Administration (transaction SARA). SARA provides the overall administration of archiving schedules and manages the archiving sessions. The process includes customization of objects, their conversion to sequential archive files and, most importantly, their overall management. In addition, the archive administration process also retrieves the archived files and converts the data through an automated process if there is a change in software, hardware or the data structure. The data archiving process is streamlined and simplified through the central command of Archive Administration.
    DART --- Data Retention Tool
    A data retention strategy is required for retaining enterprise information for long periods of time. The Data Retention Tool (DART) provides this functionality. It is capable of extracting data that is period-specific, as well as any supporting information. DART transforms database objects into flat files that can be read by any third-party software designed for flat files. It is available as an add-on for older versions of SAP R/3, but is an integrated feature of the recent versions of SAP R/3.
    SARI --- SAP Archive Information System
    SARI provides much-needed retrieval capabilities against previously archived data. It requires the archive files to be loaded into new tables in the database. In this approach, the Archive Information System requires detailed technical knowledge of the database table structure and field layouts. The database structure of an enterprise needs to be flexible and dynamic to be able to quickly adapt to organisational changes, and it may happen that the current database structure is not compatible with the structure of the database when the archive files were initially generated. The Archive Information System is a standard tool delivered by SAP that facilitates customized access to archived data.
    Regards
    Venkat

  • Importing Outlook Archive.pst to Microsoft online exchange mailbox

    I have recently migrated to Exchange online and have one mailbox. All is working well.
    I have also recently subscribed to Office 365 and that works fine as well.
    I have on my computer an archive.pst that I copied form my old computer to my new computer. I then opened it in outlook 365 on my new computer.
    I would like the archive.pst to be accessible from any computer.
    I assume to do this I need to either to import the archive back into my main online exchange account or import the archive.pst onto  but not into my online exchange account.
    How exactly do I do either of those things?
    If what I am suggesting is not the right way to do it then what is?
    Many thanks in advance for your assistance.

    Hi,
    Agree with Philip.
    Also found some useful articles for your reference:
    Migrate email and contacts into a new Office 365 account
    http://office.microsoft.com/en-in/office365-suite-help/migrate-email-and-contacts-into-a-new-office-365-account-HA103169067.aspx
    Import Outlook items from an Outlook Data File (.pst)
    http://office.microsoft.com/en-in/outlook-help/import-outlook-items-from-an-outlook-data-file-pst-HA102919679.aspx
    Hope it is helpful
    Thanks
    Mavis
    Mavis Huang
    TechNet Community Support

  • Need Step By Step Guide For Data Archiving

    Can anybody provide me a step-by-step data archiving guide for an ECC 6.0 system?
    I searched OSS but did not find any guide.
    Thanks

    Hi
    You can go through the links below, which cover which transactions we use for archiving objects and how it is done.
    http://help.sap.com/saphelp_srm30/helpdata/en/15/c9df3b6ac34b44e10000000a114084/content.htm
    http://help.sap.com/SAPHELP_NW04S/helpdata/EN/43/f0ed9f81917063e10000000a1553f6/content.htm
    Basically, the data archiving process comprises three major phases:
    1. Creating an archive file: The archive files are created in the SAP database by the archive management system. The management system reads the data from the database and writes it to the archive files in the background. If an archive file exceeds the maximum specified size, or if the number of data objects exceeds the stipulated limit in the system, then the system automatically creates new archive files.
    At the end of the process of saving data into archive files, the ADK triggers the system event SAP_ARCHIVING_WRITE_FINISHED, which is an indicator to the system to start the next phase of the archiving process.
    2. Removing the archived data from the database:
    While the archive management system writes data to the archive files, another program deletes it from the database permanently. The program checks whether the data has been transferred to the archive. This is quite important, as it is the last check performed by the system before deleting data permanently from the database. Several deletion programs run simultaneously, because the archiving program is much faster than the deletion programs. This is important as it increases the efficiency of the archiving process.
    3. Transferring the archived files to a location outside the SAP database:
    Once the archive management system has finished archiving the data, the next step is to save the archive files at a location other than the SAP database. This can be accomplished by an automated process in the system or by a manual process. This step is optional, since many enterprises may wish to keep the archive files within the current database. However, large enterprises transfer their data periodically as a part of their data archiving processes.
    Hope this helps!!
    Thanks and Kind Regards
    Esha

  • System crash during data archiving

    Gurus:
    We have a serious concern about what happens if our R/3 server crashes during our data archiving runs:
    1) If the crash happens in the write phase, can we simply remove the generated archive and restart the write job?
         We are NOT sure whether the second run will archive the records that were archived in the failed first run.
    2) If the crash happens in the delete phase, is a simple restart of the same delete job enough? (We use "store before delete".)
    Because this archiving is very important to us, please help.
    Thanks a lot!

    Christoph:
    Thank you for your reply.
    This crash has not happened yet.
    We are making a plan for how to cope with this possibility.
    But we do not have experience with any crash during the archiving process.
    Therefore we ask for people's experience with the following scenarios:
    1) If the crash happens in the write phase, can we simply remove the generated archive and restart the write job?
    We are NOT sure whether the second run will archive the records that were archived in the failed first run.
    2) If the crash happens in the delete phase, is a simple restart of the same delete job enough? (We use "store before delete".)
    Would you please help?
    Thank you again.

  • How to update link and import data of relocated incx file into inca file?

    The incx file was originally part of the inca file and it has been relocated.
    -------------------
    Hello All,
    I am working on InDesign CS2 and InCopy CS2.
    From InDesign I am creating an assignment file as well as InCopy files (.inca and .incx files, created through exporting).
    Now InDesign hardcodes the path of the incx files in the inca file. So if I put the incx files in a different folder, then after opening the inca file in InCopy I get an alert stating that "The document doesn't consists of any incopy story", and all the linked stories are flagged with a red question mark icon.
    So I tried to recreate and update the links. Below is my code for that:
    //code start*****************************
    // create kDataLinkHelperBoss
    InterfacePtr<IDataLinkHelper> dataLinkHelper(static_cast<IDataLinkHelper*>
    (CreateObject2<IDataLinkHelper>(kDataLinkHelperBoss)));
    /**
    newFileToBeLinkedPath is the path of the relocated incx file,
    which was previously part of the inca file.
    e.g. earlier it was c:\\test.incx, now it is d:\\test.incx
    */
    IDFile newIDFileToBeLinked(newFileToBeLinkedPath);
    // create the data link
    IDataLink * dlk = dataLinkHelper->CreateDataLink(newIDFileToBeLinked);
    NameInfo name;
    PMString type;
    uint32 fileType;
    dlk->GetNameInfo(&name, &type, &fileType);
    // relink the story
    InterfacePtr<ICommand> relinkCmd(CmdUtils::CreateCommand(kRestoreLinkCmdBoss));
    InterfacePtr<IRestoreLinkCmdData> relinkCmdData(relinkCmd, IID_IRESTORELINKCMDDATA);
    relinkCmdData->Set(database, dataLinkUID, &name, &type, fileType, IDataLink::kLinkNormal);
    ErrorCode err = CmdUtils::ProcessCommand(relinkCmd);
    // update the link now
    InterfacePtr<IUpdateLink> updateLink(dataLinkHelper, UseDefaultIID());
    UID newLinkUID;
    err = updateLink->DoUpdateLink(dlk, &newLinkUID, kFullUI);
    //code end*********************
    I am able to create the proper link, but the data in the incx file is not getting imported into the linked story. However, if I modify the newly linked story from the inca file, the incx file does get updated (all its previous content is deleted).
    I tried using Utils<IInCopyWorkflow>()->ImportStory(), but it imports the incx file in XML format.
    What is the solution then? Kindly help me, as I have been terribly stuck for the last few days.
    Thanks and Regards,
    Yopangjo


  • Object reference not set to an instance of an object error with Import data

    Hi Experts,
    We are using BPC 7.5M with SQL Server 2008 in a multi-server environment. I am getting the error "Object reference not set to an instance of an object." while running the Import data package. Earlier we used to get this error occasionally (once a month), but it would go away if we rebooted the application server. This time I have rebooted the application server multiple times but am still getting the same error.
    Please Advice.
    Thanks & Regards,
    Rohit

    Hi Rohit,
    please see SAP note 1615837; maybe this helps you.
    Best regards
    Roberto Vidotti

  • HR PA Data Archiving

    Hi,
    We are undergoing an archiving project for the HR module. For PD data, we can use object BC_HROBJ; for PCL4 data, we can use PA_LDOC. What about the 2000-series infotype data, such as PA2001, PA2002, PA2003, etc.? Because all changes to these infotypes are stored in PCL4 cluster LA, we only need to purge the data from these tables. What is the best way to purge the PA2xxx data? We cannot use transaction PA30/PA61 to delete records, because user-exit edits prevent any action against old data beyond a certain time frame.
    Thanks very much,
    Li

    This is not directly related to SAP NetWeaver MDM. You may find more information about data archiving on SAP Service Marketplace at http://www.service.sap.com/data-archiving or http://www.service.sap.com/slo.
    Regards,
    Markus

  • How to import data from a text file into a table

    Hello,
    I need help with importing data from a .csv file with a comma delimiter into a table.
    I've been struggling to figure out how to use the "Import from Files" wizard in Oracle 10g web-based Enterprise Manager.
    I have not been able to find simple instructions on how to use the wizard.
    I have looked at Oracle Database Utilities - Overview of Oracle Data Pump and the help on the "Import: Files" page.
    Neither one gave me enough instruction to do the import successfully.
    Using the "Import from file" wizard, I created a directory object using the Create Directory Object button. I copied the file from which I needed to import the data into the operating system directory I had defined in the Create Directory Object page. I chose "Entire files" for the import type.
    Step 1 of 4 is the "Import: Re-Mapping" page; I have no idea what I need to do on this page. All I know is that I am not trying to import data from one schema into a different schema, I am not importing data from one tablespace into a different tablespace, and I am not re-mapping datafiles either. I am importing data from a csv file.
    For step 2 of 4, the "Import: Options" page, I selected the same directory object I had created.
    For step 3 of 4, I entered a job name and a description and selected the Start Immediately option.
    What I noticed going through the wizard is that it never asked into which table I want to import the data.
    I submitted the job and I got an ORA-31619 invalid dump file error.
    I was sure that the wizard was going to fail when it never asked me into which table I wanted to import the data.
    I tried to use the "imp" utility in a command-line window.
    After I entered "imp", I was prompted for the username, the password, and then the buffer size; as soon as I entered the minimum buffer size I got the following error and the import was terminated:
    C:\>imp
    Import: Release 10.1.0.2.0 - Production on Fri Jul 9 12:56:11 2004
    Copyright (c) 1982, 2004, Oracle. All rights reserved.
    Username: user1
    Password:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Produc
    tion
    With the Partitioning, OLAP and Data Mining options
    Import file: EXPDAT.DMP > c:\securParms\securParms.csv
    Enter insert buffer size (minimum is 8192) 30720> 8192
    IMP-00037: Character set marker unknown
    IMP-00000: Import terminated unsuccessfully
    Please show me the easiest way to import a text file into a table. How complex could it be to do a simple import into a table using a text file?
    We are testing our application against both an Oracle database and an MSSQLServer 2000 database.
    I was able to import the data into a table in the MSSQLServer database, and I can say that anybody with no experience could easily do an export/import in MSSQLServer 2000.
    I would appreciate it if someone could show me how to do the import from a file into a table!
    Thanks,
    Mitra

    > I can say that anybody with no experience could easily do an export/import in MSSQLServer 2000.
    Anybody with no experience should not mess up my Oracle databases!
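    An editorial aside, hedged: imp and the EM import wizard (which drives Data Pump) read only export dump files, which is why the .csv produced ORA-31619 and IMP-00037. For flat files, SQL*Loader or an external table is the usual route. A minimal sketch, assuming a directory object pointing at the folder and placeholder column names (adjust both to the real file):
    -- Directory object for the folder holding the csv file.
    CREATE OR REPLACE DIRECTORY secur_dir AS 'c:\securParms';
    -- External table over the csv; the column names and types are placeholders.
    CREATE TABLE secur_parms_ext (
      parm_name  VARCHAR2(50),
      parm_value VARCHAR2(200)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY secur_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('securParms.csv')
    )
    REJECT LIMIT UNLIMITED;
    -- Copy the rows into a regular (hypothetical) target table:
    INSERT INTO secur_parms SELECT * FROM secur_parms_ext;
    COMMIT;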
