Archiving SAP data to Filesystems

We're currently using FileNet as our archival content server. The repository is actually on magnetic disk rather than optical disk, and retrieval times are slow for some of our archival records.
Our archiving admin recently performed SAP archiving to the Unix filesystem level, and the response times are much better.
My questions are:
1) Is archiving to an OS-level filesystem supported by SAP?
2) What are the pitfalls that we should be looking for when archiving to Unix filesystems? As far as removal of expired files goes, we can do that much more easily in Unix than in the FileNet MSAR structure (see the sketch at the end of this post).
3) If you're using FileNet for SAP archiving, what are your plans for future archiving?
4) Does anyone know of any ways to migrate from one archiving content server to another?
Thanks in advance for your help.
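As an illustration of the OS-level file housekeeping mentioned in question 2, here is a minimal Java sketch that only lists archive files older than a retention period. The mount point and the ten-year retention value are assumptions, and any actual removal would still have to be reconciled with SAP archive administration so that retrieval does not break.

    import java.io.IOException;
    import java.nio.file.FileVisitResult;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.SimpleFileVisitor;
    import java.nio.file.attribute.BasicFileAttributes;
    import java.time.Instant;
    import java.time.temporal.ChronoUnit;

    /** Lists archive files older than a retention period; deleting them stays a manual, coordinated step. */
    public class ExpiredArchiveScan {
        public static void main(String[] args) throws IOException {
            // Hypothetical mount point for SAP archive files; adjust to your landscape.
            Path archiveRoot = Paths.get("/sapmnt/PRD/archive");
            // Assumed ten-year retention, counted from the file's last modification time.
            Instant cutoff = Instant.now().minus(3650, ChronoUnit.DAYS);

            Files.walkFileTree(archiveRoot, new SimpleFileVisitor<Path>() {
                @Override
                public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                    if (attrs.lastModifiedTime().toInstant().isBefore(cutoff)) {
                        // Report only; removal must stay in sync with what SAP knows about the file.
                        System.out.println("Expired candidate: " + file);
                    }
                    return FileVisitResult.CONTINUE;
                }
            });
        }
    }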

Yes. We successfully migrated all our SAP archives to AIX and are very happy with the results. Not only did we get rid of the FileNet software and hardware, we also gained a five-fold improvement in retrieval times for the archived files. Our archiving effort had been stalled due to slow response times with FileNet.
We used an archiving consulting firm to accomplish this. They retrieved the archived files back to the OS level and then modified the internal ADMI tables with the new locations. We then reset the configuration to point to the new locations for newly created archive files.
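A minimal sketch of the kind of post-migration consistency check this implies, assuming the new file locations can be exported from the ADMI tables into a plain text list with one path per line (the export file name here is hypothetical):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;

    /** Checks that every migrated archive file recorded in an exported path list exists and is readable. */
    public class ArchivePathCheck {
        public static void main(String[] args) throws IOException {
            // Hypothetical export: one filesystem path per line, taken from the updated ADMI file entries.
            List<String> expected = Files.readAllLines(Paths.get("admi_files_export.txt"));
            int problems = 0;
            for (String line : expected) {
                String trimmed = line.trim();
                if (trimmed.isEmpty()) {
                    continue;
                }
                Path p = Paths.get(trimmed);
                if (!Files.isReadable(p)) {
                    System.out.println("MISSING or unreadable: " + p);
                    problems++;
                }
            }
            System.out.println(expected.size() + " entries checked, " + problems + " problems found");
        }
    }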
I hope this helps.
Duc Hong

Similar Messages

  • When archiving SAP data, can "network graphic" be altered?

    The "network graphic" at SARA shows the archiving sequence of related archiving objects.
    Is this a rule that we must stick to, or is there some flexibility?
    Thanks a lot!

    Hi Linda,
    You can view the network graphic more as guidance than as a rule. It shows the business process dependencies that influence the archiving sequence; you get maximum archiving throughput when you follow this sequence.
    Also note that the network graphic is not an exhaustive depiction of all the dependencies that exist between objects in the system.
    Please refer to this documentation for more details:
    http://help.sap.com/saphelp_nw70/helpdata/en/8d/3e4c11462a11d189000000e8323d3a/frameset.htm
    Introduction to Data Archiving -> Archive Administration -> Network Graphic.
    Hope this helps.
    By the way, is there a reason you don't want to follow the sequence?
    Thanks,
    Naveen

  • Regarding sending certificate to the third party while archiving SAP data

    Hi,
    I am stuck at one point in SAP. The steps I have followed are:
    1. Go to transaction code OAC0 (Display Content Repositories).
    2. Double-click any archive.
    3. Here we can see a "Send Certificate" button.
         Pressing this button fires an HTTP request over the network to the server configured for that archive. Together with the request, a certificate is also sent to the server outside SAP.
    Generally we validate the certificate outside SAP in our application, e.g. in our Java code, using the IAIK JARs (security libraries).
    Now I need to validate this certificate inside SAP itself before it is sent over the network from SAP.
    Is this possible in any way?
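    For comparison, here is a minimal sketch of the kind of validity check done outside SAP, using only the standard JDK certificate classes instead of the IAIK libraries mentioned above; the certificate file name is hypothetical. Inside SAP itself, the STRUST approach described in the reply below is the place to start.

        import java.io.FileInputStream;
        import java.io.InputStream;
        import java.security.cert.CertificateFactory;
        import java.security.cert.X509Certificate;

        /** Loads an exported certificate and checks its validity window, subject and issuer. */
        public class CertCheck {
            public static void main(String[] args) throws Exception {
                // Hypothetical file name; the certificate would be exported from the content repository setup.
                try (InputStream in = new FileInputStream("content_server_cert.der")) {
                    CertificateFactory cf = CertificateFactory.getInstance("X.509");
                    X509Certificate cert = (X509Certificate) cf.generateCertificate(in);
                    cert.checkValidity(); // throws if the certificate is expired or not yet valid
                    System.out.println("Subject: " + cert.getSubjectX500Principal());
                    System.out.println("Issuer : " + cert.getIssuerX500Principal());
                    System.out.println("Expires: " + cert.getNotAfter());
                }
            }
        }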

    Mridul,
    Depending on your ERP version you can use STRUST to import the certificate into the ABAP system.
    Regards,
    Salvatore Castro
    Sr. Solution Architect

  • Reading Archived SAP data

    Hi ,
    I need to read archived records from table QMEL based on a non-key field of this table (a custom field).
    There is a PBS utility which converts the SELECT query into a function module, which in turn returns the entries read based on the import parameters of the FM. But the problem is that I need to read the values based on a non-key field, so the function module takes an extremely long time and still does not return any value (the transaction resets).
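    As a general illustration only (not PBS-specific), the sketch below shows why a one-time pass that builds an index over the non-key field makes later lookups cheap, which is essentially the role an archive information structure or a PBS index plays; the notification numbers and field values are invented.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        /** One full pass builds an index on a non-key field; later lookups avoid rescanning everything. */
        public class SecondaryIndexSketch {
            public static void main(String[] args) {
                // Stand-ins for archived QMEL entries: { notification number, custom field value }.
                String[][] archived = {
                    {"000200000001", "PLANT-A"},
                    {"000200000002", "PLANT-B"},
                    {"000200000003", "PLANT-A"}
                };

                // Build the index once; each later lookup by the custom field is then a map hit,
                // not another scan of the whole archive.
                Map<String, List<String>> byCustomField = new HashMap<>();
                for (String[] row : archived) {
                    byCustomField.computeIfAbsent(row[1], k -> new ArrayList<>()).add(row[0]);
                }
                System.out.println(byCustomField.get("PLANT-A")); // [000200000001, 000200000003]
            }
        }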

    Hello Sachin,
    If you are a licensed PBS customer, you can contact one of the resellers in America (Dolphin IT, Brandywine, Sigma) and ask them for help. If your contract is directly with PBS, you can write to their hotline (Hotline (at) pbs-software.com) and ask them for help. You just have to mention your company name and your contract number if known, and briefly describe your problem (sample coding and such), and they will get back to you to help you out.
    The conversion tool is just a small aid for converting customer programs via a function call, but the PBS developers normally have a few tricks to get your information faster.
    Kind regards,
    Mark

  • SAP Data Archiving in R/3 4.6C and ECC 6.0.

    Hi Guys,
        Need some suggestions,
    We are currently working on SAP R/3 4.6 and have plans to upgrade it to ECC 6.0.
    In the meantime there is a requirement for SAP data archiving to reduce the database size and increase system performance. So I wanted to know whether it is better to do data archiving before or after the upgrade, and which is technically more comfortable. I also wanted to know whether any more advanced methods are available in ECC 6.0 compared to SAP R/3 4.6.
    Please provide your valuable suggestions.
    Thanks and Regards
    Deepu

    Hi Deepu,
    With respect to the archiving process there is no major difference between a 4.6 and an ECC 6.0 system. However, you may get more advantages in an ECC 6.0 system because of archive routing and also the upgraded write and delete programs (which upgrades apply will depend on your current programs in the 4.6 system). For example, in a 4.6 system the MM_EKKO write program RM06EW30 archives purchasing documents based on the company code in the selection criteria, and there is no preprocessing functionality. In ECC 6.0 you can archive by purchasing organization in the selection criteria, and the preprocessing functionality additionally helps in archiving POs.
    If you archive documents in 4.6 and later upgrade to ECC 6.0, the SAP system still ensures that you can retrieve the archived data.
    With this I can say that archiving after the upgrade to the ECC 6.0 system will be better with respect to the archiving process.
    -Thanks,
    Ajay

  • Access SAP Data Archival file from outside SAP

    Hello Everyone,
    I have a requirement to archive the SAP data, dump it outside SAP in some other system like ILM or BI, and build a reporting tool on top of that data.
    So basically the customer wants to shut down SAP and retain the data for legal and audit purposes.
    I was doing some R&D and archived MM_EKKO using SARA. The file was generated with the extension .ARCHIVE. I downloaded the file, but it is an encoded file full of special characters.
    My question is:
    1. How can I read the archived SAP data from outside the SAP system?
    2. Can we decode the .ARCHIVE file to get it into .DAT format?
    3. Or is there any other way to access the SAP data outside SAP in a report format?
    Thanks,
    Chintan Soni

    Hi Chintan,
    1. How can I read the archived SAP data from outside the SAP system?
    For this you could refer to SAP Note 460620 - Migrating archive files.
    2. Can we decode the .ARCHIVE file to get it into .DAT format?
    As far as I know, it is not possible to decode it or convert it to .DAT format.
    3. Or is there any other way to access the SAP data outside SAP in a report format?
    Refer to my first response and the SAP note.
    Hope this will help you.
    Good luck !!
    Gaurav

  • Is there any t code in SAP to display archived shipping data

    Hi All
    We have an issue with unarchiving a shipping document. Our Basis team has unzipped the file in the path where it was archived and provided display access. When I cross-check in transaction SARI, the files are unzipped, but in SAP this document still has the status "archived" and I am not able to view it with VT03N.
    For archived billing documents, once they are unzipped the document will not open in VF03, but we can display it in VF07.
    Please let us know how to view this shipping data in SAP.
    Is there any transaction code in SAP to display archived shipping data (like VF07 for archived billing documents)?
    Your kind help would be highly appreciated.
    Thank you
    Rajendra Prasad

    Hello,
    Once a shipment document is archived, you can no longer display it with transaction VT03N. As you have pointed out, transactions SARI or SARE will help in displaying the archived shipment documents from the archive server (you have to select archiving object = SD_VTTK and choose the archive information structure from the display option).
    VF07 displays archived billing documents; we call VF07 an archive-enabled transaction.
    I have gone through OSS Note 590656 mentioned by Eduardo Hinojosa; with this enhancement of VT03N (the respective program) you should be able to display archived shipment documents. This OSS note should help you.
    Let me know if further clarification is required on this.
    -Thanks,
    Ajay

  • New to SAP Data Archiving

    Hi,
    Is it a must to have a third-party tool/software for SAP data archiving,
    OR
    does SAP have its own solution (to create archive files, write archive files, and delete from the database)?
    Any pointers to more information on the different stages of an SAP data archiving project? (Any links to how-to documents would be a great help.)
    Regards,
    Rehan

    You have posted your query in the wrong forum.
    This space is for discussion of SAP TDMS topics.
    You may check the Information Lifecycle Management forum
    for any available information.

  • How to get the path of the web-archive on the local filesystem?? [Apache]

    Hello all,
    I'm running a web application on a Tomcat server.
    I need to find out the path of the web archive on the local filesystem to access several files.
    How can I do that without having an instance of HttpServletRequest or the like?
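    One common approach, assuming a standard servlet container such as Tomcat with the javax.servlet API, is to capture the path once at application startup via a ServletContextListener (registered in web.xml or with @WebListener), so no HttpServletRequest is needed. A minimal sketch; note that getRealPath can return null if the container serves the WAR without unpacking it:

        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        /** Captures the exploded web application's filesystem path once at startup. */
        public class WebAppPathListener implements ServletContextListener {
            private static volatile String webAppRoot;

            @Override
            public void contextInitialized(ServletContextEvent sce) {
                // The local directory of the deployed (exploded) web archive, e.g. .../webapps/myapp/.
                webAppRoot = sce.getServletContext().getRealPath("/");
            }

            @Override
            public void contextDestroyed(ServletContextEvent sce) {
                // Nothing to clean up for this sketch.
            }

            public static String getWebAppRoot() {
                return webAppRoot;
            }
        }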

    Hello Bruce.
    Please take a look at the following KBase article. It has a sample that will retrieve the folder structure for a specified Crystal Reports template based on the instance name in the file store. The sample there can easily be modified so that it is based on a specific InfoObject ID.
    1549728 - How to find the folder structure in the CMC of a report template based on the instance file name in the output file store.
    The KBase article can be retrieved from the following link:
    https://bosap-support.wdf.sap.corp/sap/support/notes/1549728
    Please note that the KBase article is retrieved through the Service MarketPlace.
    I hope this helps.
    Regards.
    - Robert

  • Archive - Purge data from F1 and U1 clusters

    Hello Experts,
    I have been given the task of purging data from the F1 cluster, Remuneration Statement forms, and the U1 cluster, Tax Reporter forms, in PCL4.  I was hoping to accomplish this by using an archive function to delete the data but not store the archived files.  I have not been successful in finding much information about purging from these clusters.  I am looking for any advice anyone can provide or a direction to take to reduce this data.   Thank you in advance for your assistance.

    Martin,
    "which would help keep everything intact"
    I don't know what that means.  The whole purpose of archiving is to remove data from the 'ready' database and place it somewhere else.  Leaving it intact means not archiving at all.
    In the archiving process, data is selected and copied into some sort of storage medium that typically has a lower state of availability than un-archived data.  The business decides what level of availability is acceptable, and the archiving policy (e.g., how long should the data remain archived before it is finally physically deleted forever).
    So 'intact' is a bit vague.  All the bits of data that the business decide are important are replicated 100% in the archive medium, validated, and then the source records that were archived are physically deleted from the ready database.  Functionally, all archived data is intact, it just may be in another format.
    I have never heard of a major ERP system that did not offer archiving in some form.  There are also many third party vendors who offer archiving for the major ERP packages.
    Level of success is hard to predict.  There are tools available as standard in SAP that monitor critical factors: memory access, disk access, response times, and so on.  Here too there are third-party tools that measure critical factors.  You can run these before and after the archiving process to measure what success you have had.
    I have never seen anyone who will stand up and say "if you archive x million records from your ready database, you will see a performance increase of y percent."  There are too many variables.  As they say in the MPG ads, "Your results may vary". You can usually get some rough numbers by testing your archiving process in a test or dev system.
    Best Regards,
    DB49

  • Data archiving and data cleansing

    hi experts,
    Can anyone give me a step-by-step guide for data archiving and data cleansing of SAP IS-U objects?
    What is the difference between data archiving and data cleansing?
    Thanks & Regards

    Data archiving: there are many archiving objects; you can look at some of them:
    ISU_BBP IS-U Archiving: Budget Billing Plan
    ISU_BCONT Business Partner Contacts (Contract A/R + A/P)
    ISU_BILL IS-U Archiving: Billing Document Header
    ISU_BILLZ IS-U Archiving: Billing Line Item
    ISU_EABL IS-U Archiving: Meter Reading Results
    ISU_EORDER IS-U Archiving: Waste Disposal Order
    ISU_EUFASS Archiving of Usage Factors
    ISU_FACTS Installation Facts
    ISU_INSPEC IS-U Archiving: Campaigns for Inspection List
    ISU_PPM Prepayment Documents
    ISU_PRDOCH IS-U Archiving: Print Document Header
    ISU_PRDOCL IS-U Archiving: Print Document Line Item
    ISU_PROFV IS-U Archiving: EDM Profile Values
    ISU_ROUTE IS-U Archiving: Waste Disposal Route
    ISU_SETTLB Settlement Document
    ISU_SWTDOC Archive Switch Document
    Go to transaction SARA, enter the object CA_BUPA for business partner, and press F6; you will get step-by-step documentation. Please follow the same procedure for all the objects.
    Regards,
    Siva

  • ORA-00308: cannot open archived log '+DATA'

    Hello all,
    I created a new physical standby, but I am facing a problem with shipping archived log files between the primary and the standby.
    Primary : RAC (4 nodes)
    Standby : single node with ASM
    When I run:
    alter database recover managed standby database disconnect from session;
    the alert log file shows:
    Managed Standby Recovery not using Real Time Apply
    Parallel Media Recovery started with 24 slaves
    Waiting for all non-current ORLs to be archived...
    All non-current ORLs have been archived.
    Media Recovery Waiting for thread 1 sequence 25738
    Tue Mar 03 12:21:13 2015
    Completed: alter database recover managed standby database disconnect from session
    When I checked the archived log files with
    select max(sequence#) from v$archived_log;
    the result was null.
    I understand that no log shipping is happening between the primary and the standby, so at this point I decided to use manual recovery with:
    alter database recover automatic standby database;
    But I get this error in the alert log file:
    alter database recover automatic standby database
    Media Recovery Start
    started logmerger process
    Tue Mar 03 12:38:38 2015
    Managed Standby Recovery not using Real Time Apply
    Parallel Media Recovery started with 24 slaves
    Media Recovery Log +DATA
    Errors with log +DATA
    Errors in file /u01/app/oracle/diag/rdbms/oracledrs/oracledrs/trace/oracledrs_pr00_4989.trc:
    ORA-00308: cannot open archived log '+DATA'
    ORA-17503: ksfdopn:2 Failed to open file +DATA
    ORA-15045: ASM file name '+DATA' is not in reference form
    ORA-279 signalled during: alter database recover automatic standby database..
    When I opened the oracledrs_pr00_4989.trc file I found:
    *** 2015-03-03 12:38:39.478
    Media Recovery add redo thread 4
    ORA-00308: cannot open archived log '+DATA'
    ORA-17503: ksfdopn:2 Failed to open file +DATA
    ORA-15045: ASM file name '+DATA' is not in reference form
    When I created the standby, I set these parameters in the DUPLICATE command:
    set db_file_name_convert='+ASM_ORADATA/oracle','+DATA/oracledrs'
    set log_file_name_convert='+ASM_ARCHIVE/oracle','+DATA/oracledrs','+ASM_ORADATA/oracle','+DATA/oracledrs'
    set control_files='+DATA'
    set db_create_file_dest='+DATA'
    set db_recovery_file_dest='+DATA'
    What is the mistake here, please?
    Thanks in advance,

    Yes I have datafiles under +DATA
    ASMCMD> cd +DATA/ORACLEDRS/DATAFILE
    ASMCMD> ls
    ASD.282.873258045
    CATALOG.288.873258217
    DEVTS.283.873258091
    EXAMPLE.281.873258043
    FEED.260.873227069
    FEED.279.873257713
    INDX.272.873251345
    INDX.273.873252239
    INDX.278.873257337
    SYSAUX.262.873227071
    SYSTEM.277.873256531
    SYSTEM_2.280.873257849
    TB_WEBSITE.284.873258135
    TB_WEBSITE.285.873258135
    TB_WEBSITE.286.873258181
    TB_WEBSITE.287.873258183
    UNDOTBS1.275.873253421
    UNDOTBS2.276.873255247
    UNDOTBS3.261.873227069
    UNDOTBS4.271.873245967
    USERS.263.873227071
    USERS.264.873235507
    USERS.265.873235893
    USERS.266.873237079
    USERS.267.873238225
    USERS.268.873243661
    USERS.269.873244307
    USERS.270.873244931
    USERS.274.873252585
    asd01.dbf
    catalog01
    dev01.dbf
    example.dbf
    feed01.dbf
    feed02.dbf
    indx01.dbf
    indx02.dbf
    indx03.dbf
    sysaux01.dbf
    system01.dbf
    system02.dbf
    undotbs01.dbf
    undotbs02.dbf
    undotbs03.dbf
    undotbs04.dbf
    user1.dbf
    users01.dbf
    users02.dbf
    users03.dbf
    users04.dbf
    users05.dbf
    users06.dbf
    users07.dbf
    users08.dbf
    website01.dbf
    website02.dbf
    website03.dbf
    website04.dbf
    ASMCMD>
    Standby :
    [root@oracledrs ~]# id oracle
    uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)
    Primary :
    [root@dbn-prod-1 disks]# id oracle
    uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)
    1- Yes, I have the needed archived log files on my primary.
    2-  select inst_id,thread#,group# from gv$log;
    Primary :
    INST_ID,THREAD#,GROUP#
    1,1,1
    1,1,2
    1,2,3
    1,2,4
    1,3,5
    1,3,6
    1,4,7
    1,4,8
    1,1,9
    1,2,10
    1,3,11
    1,4,12
    3,1,1
    3,1,2
    3,2,3
    3,2,4
    3,3,5
    3,3,6
    3,4,7
    3,4,8
    3,1,9
    3,2,10
    3,3,11
    3,4,12
    2,1,1
    2,1,2
    2,2,3
    2,2,4
    2,3,5
    2,3,6
    2,4,7
    2,4,8
    2,1,9
    2,2,10
    2,3,11
    2,4,12
    4,1,1
    4,1,2
    4,2,3
    4,2,4
    4,3,5
    4,3,6
    4,4,7
    4,4,8
    4,1,9
    4,2,10
    4,3,11
    4,4,12
    Standby :
    INST_ID,THREAD#,GROUP#
    1,1,9
    1,1,2
    1,1,1
    1,2,3
    1,2,4
    1,2,10
    1,3,5
    1,3,6
    1,3,11
    1,4,12
    1,4,7
    1,4,8
    3- Here is a sample from the alert log since I started the standby (for both the standby and the primary):
    Standby :
    alter database mount standby database
    NOTE: Loaded library: /opt/oracle/extapi/64/asm/orcl/1/libasm.so
    NOTE: Loaded library: System
    SUCCESS: diskgroup DATA was mounted
    ERROR: failed to establish dependency between database oracledrs and diskgroup resource ora.DATA.dg
    ARCH: STARTING ARCH PROCESSES
    Tue Mar 03 18:38:16 2015
    ARC0 started with pid=128, OS id=4461
    ARC0: Archival started
    ARCH: STARTING ARCH PROCESSES COMPLETE
    ARC0: STARTING ARCH PROCESSES
    Tue Mar 03 18:38:17 2015
    Successful mount of redo thread 1, with mount id 1746490068
    Physical Standby Database mounted.
    Lost write protection disabled
    Tue Mar 03 18:38:17 2015
    ARC1 started with pid=129, OS id=4464
    Tue Mar 03 18:38:17 2015
    ARC2 started with pid=130, OS id=4466
    Tue Mar 03 18:38:17 2015
    ARC3 started with pid=131, OS id=4468
    Tue Mar 03 18:38:17 2015
    ARC4 started with pid=132, OS id=4470
    Tue Mar 03 18:38:17 2015
    ARC5 started with pid=133, OS id=4472
    Tue Mar 03 18:38:17 2015
    ARC6 started with pid=134, OS id=4474
    Tue Mar 03 18:38:17 2015
    ARC7 started with pid=135, OS id=4476
    Completed: alter database mount standby database
    Tue Mar 03 18:38:17 2015
    ARC8 started with pid=136, OS id=4478
    Tue Mar 03 18:38:17 2015
    ARC9 started with pid=137, OS id=4480
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC8: Becoming the 'no FAL' ARCH
    ARC2: Becoming the heartbeat ARCH
    ARC2: Becoming the active heartbeat ARCH
    Tue Mar 03 18:38:18 2015
    Starting Data Guard Broker (DMON)
    ARC9: Archival started
    ARC0: STARTING ARCH PROCESSES COMPLETE
    Tue Mar 03 18:38:23 2015
    INSV started with pid=141, OS id=4494
    Tue Mar 03 18:39:11 2015
    alter database recover managed standby database disconnect from session
    Attempt to start background Managed Standby Recovery process (oracledrs)
    Tue Mar 03 18:39:11 2015
    MRP0 started with pid=142, OS id=4498
    MRP0: Background Managed Standby Recovery process started (oracledrs)
    started logmerger process
    Tue Mar 03 18:39:16 2015
    Managed Standby Recovery not using Real Time Apply
    Parallel Media Recovery started with 24 slaves
    Waiting for all non-current ORLs to be archived...
    All non-current ORLs have been archived.
    Media Recovery Waiting for thread 1 sequence 25738
    Completed: alter database recover managed standby database disconnect from session
    Tue Mar 03 18:41:17 2015
    WARN: ARCH: Terminating pid 4476 hung on an I/O operation
    Killing 1 processes with pids 4476 (Process by index) in order to remove hung processes. Requested by OS process 4224
    ARCH: Detected ARCH process failure
    Tue Mar 03 18:45:17 2015
    ARC2: STARTING ARCH PROCESSES
    Tue Mar 03 18:45:17 2015
    ARC7 started with pid=127, OS id=4586
    Tue Mar 03 18:45:18 2015
    Fatal NI connect error 12170.
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 03-MAR-2015 18:45:18
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12535
    TNS-12535: TNS:operation timed out
        ns secondary err code: 12560
        nt main err code: 505
    TNS-00505: Operation timed out
        nt secondary err code: 0
        nt OS err code: 0
      Client address: <unknown>
    ARC7: Archival started
    ARC2: STARTING ARCH PROCESSES COMPLETE
    Tue Mar 03 18:47:14 2015
    alter database recover managed standby database cancel
    Tue Mar 03 18:48:18 2015
    Fatal NI connect error 12170.
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 03-MAR-2015 18:48:18
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12535
    TNS-12535: TNS:operation timed out
        ns secondary err code: 12560
        nt main err code: 505
    TNS-00505: Operation timed out
        nt secondary err code: 0
        nt OS err code: 0
      Client address: <unknown>
    Tue Mar 03 18:51:18 2015
    Fatal NI connect error 12170.
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 03-MAR-2015 18:51:18
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12535
    TNS-12535: TNS:operation timed out
        ns secondary err code: 12560
        nt main err code: 505
    TNS-00505: Operation timed out
        nt secondary err code: 0
        nt OS err code: 0
      Client address: <unknown>
    Error 12170 received logging on to the standby
    FAL[client, USER]: Error 12170 connecting to oracle for fetching gap sequence
    MRP0: Background Media Recovery cancelled with status 16037
    Errors in file /u01/app/oracle/diag/rdbms/oracledrs/oracledrs/trace/oracledrs_pr00_4500.trc:
    ORA-16037: user requested cancel of managed recovery operation
    Recovery interrupted!
    Tue Mar 03 18:51:18 2015
    MRP0: Background Media Recovery process shutdown (oracledrs)
    Tue Mar 03 18:51:19 2015
    Managed Standby Recovery Canceled (oracledrs)
    Completed: alter database recover managed standby database cancel
    Tue Mar 03 18:51:56 2015
    alter database recover automatic standby database
    Media Recovery Start
    started logmerger process
    Tue Mar 03 18:51:56 2015
    Managed Standby Recovery not using Real Time Apply
    Parallel Media Recovery started with 24 slaves
    Media Recovery Log +DATA
    Errors with log +DATA
    Errors in file /u01/app/oracle/diag/rdbms/oracledrs/oracledrs/trace/oracledrs_pr00_4617.trc:
    ORA-00308: cannot open archived log '+DATA'
    ORA-17503: ksfdopn:2 Failed to open file +DATA
    ORA-15045: ASM file name '+DATA' is not in reference form
    ORA-279 signalled during: alter database recover automatic standby database...
    Tue Mar 03 18:53:06 2015
    db_recovery_file_dest_size of 512000 MB is 0.13% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    Primary :
    Tue Mar 03 17:13:43 2015
    Thread 1 advanced to log sequence 26005 (LGWR switch)
      Current log# 1 seq# 26005 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
    Tue Mar 03 17:13:44 2015
    Archived Log entry 87387 added for thread 1 sequence 26004 ID 0x66aa5a0d dest 1:
    Tue Mar 03 18:00:18 2015
    Thread 1 advanced to log sequence 26006 (LGWR switch)
      Current log# 2 seq# 26006 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
    Tue Mar 03 18:00:18 2015
    Archived Log entry 87392 added for thread 1 sequence 26005 ID 0x66aa5a0d dest 1:
    Tue Mar 03 18:55:33 2015
    Thread 1 advanced to log sequence 26007 (LGWR switch)
      Current log# 9 seq# 26007 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
      Current log# 9 seq# 26007 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
    Tue Mar 03 18:55:33 2015
    Archived Log entry 87395 added for thread 1 sequence 26006 ID 0x66aa5a0d dest 1:
    Tue Mar 03 19:14:22 2015
    Dumping diagnostic data in directory=[cdmp_20150303191422], requested by (instance=4, osid=10234), summary=[incident=1692472].
    Dumping diagnostic data in directory=[cdmp_20150303191425], requested by (instance=4, osid=10234), summary=[incident=1692473].
    Tue Mar 03 20:00:06 2015
    Thread 1 advanced to log sequence 26008 (LGWR switch)
      Current log# 1 seq# 26008 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
    Tue Mar 03 20:00:07 2015
    Archived Log entry 87401 added for thread 1 sequence 26007 ID 0x66aa5a0d dest 1:
    Tue Mar 03 21:00:02 2015
    Thread 1 advanced to log sequence 26009 (LGWR switch)
      Current log# 2 seq# 26009 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
    Tue Mar 03 21:00:03 2015
    Archived Log entry 87403 added for thread 1 sequence 26008 ID 0x66aa5a0d dest 1:
    Thread 1 advanced to log sequence 26010 (LGWR switch)
      Current log# 9 seq# 26010 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
      Current log# 9 seq# 26010 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
    Tue Mar 03 21:00:06 2015
    Archived Log entry 87404 added for thread 1 sequence 26009 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:00:00 2015
    Setting Resource Manager plan SCHEDULER[0x32DA]:DEFAULT_MAINTENANCE_PLAN via scheduler window
    Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
    Tue Mar 03 22:00:00 2015
    Starting background process VKRM
    Tue Mar 03 22:00:00 2015
    VKRM started with pid=184, OS id=4838
    Tue Mar 03 22:00:07 2015
    Begin automatic SQL Tuning Advisor run for special tuning task  "SYS_AUTO_SQL_TUNING_TASK"
    Tue Mar 03 22:00:25 2015
    Thread 1 advanced to log sequence 26011 (LGWR switch)
      Current log# 1 seq# 26011 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
    Tue Mar 03 22:00:26 2015
    Archived Log entry 87408 added for thread 1 sequence 26010 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:00:58 2015
    Thread 1 advanced to log sequence 26012 (LGWR switch)
      Current log# 2 seq# 26012 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
    Tue Mar 03 22:01:00 2015
    Archived Log entry 87412 added for thread 1 sequence 26011 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:02:37 2015
    Thread 1 cannot allocate new log, sequence 26013
    Checkpoint not complete
      Current log# 2 seq# 26012 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
    Thread 1 advanced to log sequence 26013 (LGWR switch)
      Current log# 9 seq# 26013 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
      Current log# 9 seq# 26013 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
    Tue Mar 03 22:02:41 2015
    Archived Log entry 87415 added for thread 1 sequence 26012 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:03:26 2015
    Thread 1 cannot allocate new log, sequence 26014
    Checkpoint not complete
      Current log# 9 seq# 26013 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
      Current log# 9 seq# 26013 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
    Thread 1 advanced to log sequence 26014 (LGWR switch)
      Current log# 1 seq# 26014 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
    Tue Mar 03 22:03:29 2015
    Archived Log entry 87416 added for thread 1 sequence 26013 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:05:50 2015
    Thread 1 cannot allocate new log, sequence 26015
    Checkpoint not complete
      Current log# 1 seq# 26014 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
    Tue Mar 03 22:05:52 2015
    End automatic SQL Tuning Advisor run for special tuning task  "SYS_AUTO_SQL_TUNING_TASK"
    Thread 1 advanced to log sequence 26015 (LGWR switch)
      Current log# 2 seq# 26015 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
    Tue Mar 03 22:05:54 2015
    Archived Log entry 87418 added for thread 1 sequence 26014 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:07:29 2015
    Thread 1 cannot allocate new log, sequence 26016
    Checkpoint not complete
      Current log# 2 seq# 26015 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
    Thread 1 advanced to log sequence 26016 (LGWR switch)
      Current log# 9 seq# 26016 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
      Current log# 9 seq# 26016 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
    Tue Mar 03 22:07:33 2015
    Archived Log entry 87421 added for thread 1 sequence 26015 ID 0x66aa5a0d dest 1:
    Thread 1 cannot allocate new log, sequence 26017
    Checkpoint not complete
      Current log# 9 seq# 26016 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
      Current log# 9 seq# 26016 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
    Thread 1 advanced to log sequence 26017 (LGWR switch)
      Current log# 1 seq# 26017 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
    Tue Mar 03 22:07:39 2015
    Archived Log entry 87422 added for thread 1 sequence 26016 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:16:36 2015
    Thread 1 advanced to log sequence 26018 (LGWR switch)
      Current log# 2 seq# 26018 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
    Tue Mar 03 22:16:37 2015
    Archived Log entry 87424 added for thread 1 sequence 26017 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:30:06 2015
    Thread 1 advanced to log sequence 26019 (LGWR switch)
      Current log# 9 seq# 26019 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
      Current log# 9 seq# 26019 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
    Tue Mar 03 22:30:07 2015
    Archived Log entry 87427 added for thread 1 sequence 26018 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:30:18 2015
    Thread 1 advanced to log sequence 26020 (LGWR switch)
      Current log# 1 seq# 26020 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
    Tue Mar 03 22:30:19 2015
    Archived Log entry 87428 added for thread 1 sequence 26019 ID 0x66aa5a0d dest 1:
    Tue Mar 03 23:07:27 2015
    Dumping diagnostic data in directory=[cdmp_20150303230727], requested by (instance=4, osid=25140), summary=[incident=1692496].
    Dumping diagnostic data in directory=[cdmp_20150303230730], requested by (instance=4, osid=25140), summary=[incident=1692497].
    Thanks in advance, sir.

  • SAP Data Archiving

    Hi Experts,
    Can anyone please help me with SAP data archiving material? I would like to have material to look into data archiving. Your help will be appreciated.
    Venkat.

    Data which needs to be archived includes Customer Master, Material Master, Bills of Material, Master Recipes, Process Orders, Vendor Master, Purchasing Info Records, Sales Pricing Conditions, and Purchasing Pricing Conditions.
        To archive vendor master data, first we need to archive its dependent objects.
        The dependent objects are:
    1)     MM_EINA
    2)     FI_DOCUMNT           
    3)     FI_MONTHLY
    4)     MM_EKKO

  • Archiving same data more than once due to overlapping variant values

    Hi all,
    I had accidentally run two archiving jobs on the same data. For instance, job 1 was archiving for company code IN (where the company code range was from IN00 to ZZ00), which was the unwanted job. The second archiving job archived data from IN99 to INZZ (not the whole IN company code range).
    These two jobs failed due to the log being full (the data volume was too large to be archived); however, when I expand the jobs in the failed SARA session, the archive files are up to 100 MB in size.
    Below are some of the problems that can occur if we archive the same data more than once (which I found in my online search):
    - Some archiving objects require that data exists only once in the archive; therefore duplicate data can lead to erroneous results in the totals of archived data.
    - Archiving the data again will affect the checksum. A checksum is normally computed before and after the archiving process, and its purpose is to validate whether the same file contents exist in the newly created archive files as in the original data (a small checksum sketch follows the source links below).
    Could anyone advise me on how to overcome this issue of archiving the same data more than once? Apart from the impacts stated above, what are the other problems of archiving the same data multiple times?
    The failed archiving sessions are currently under "Incomplete Archiving Sessions"; in one week's time they will be processed by the archive delete jobs and moved to "Completed Archiving Sessions". I would highly appreciate it if anyone could help.
    Source of finding:
    http://help.sap.com/saphelp_nw73/helpdata/en/4d/8c7890910b154ee10000000a42189e/content.htm
    http://help.sap.com/saphelp_nwpi71/helpdata/en/8d/3e4fc6462a11d189000000e8323d3a/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/8d/3e4fc6462a11d189000000e8323d3a/content.htm
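    As a small illustration of the checksum idea mentioned above, here is a minimal sketch that computes a SHA-256 digest of an archive file so that the contents of two copies can be compared; the default file name is made up.

        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.security.MessageDigest;

        /** Prints a SHA-256 checksum of a file, for comparing archive file contents before and after a copy. */
        public class ArchiveChecksum {
            public static void main(String[] args) throws Exception {
                // Hypothetical default file name; pass the real path as the first argument.
                String fileName = args.length > 0 ? args[0] : "000001-001MM_EKKO.ARCHIVE";
                byte[] digest = MessageDigest.getInstance("SHA-256")
                        .digest(Files.readAllBytes(Paths.get(fileName)));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) {
                    hex.append(String.format("%02x", b));
                }
                System.out.println(hex + "  " + fileName);
            }
        }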

    Hello,
    There are several issues here.  In this case it seems pretty clear cut that you did not want the first variant to be executed.  Hopefully none of the deletions have taken place for this archive run.
    In cases where you have overlapping selection criteria and some of the deletions have been processed, you can be in a very difficult situation.  The best advice that I have would be to check your archive info structure CATALOG definition and make sure that both the archive file and the offset fields are set to DISPLAY fields and not KEY fields.
    If your file and offset are key fields then when you use the archive info structure you would pull up more than one copy of the archived document.
    Example:  FI document 12345 was archived and deleted in archive run 1 and archive run 2.
    The search in the archive info structure when the file and offset are key fields would return two results:
    12345 from run 1
    12345 from run 2
    If the CATALOG has the file and offset as display-only fields, you would get only one result:
    12345 from (whichever deletion file was processed first)
    The second deletion process would have a warning message in the job log that not all records were inserted.
    Please note that any direct access of the data archive file that bypasses the archive info structure and goes directly to the data archiving files would still show two documents and not a single document.
    Regards,
    Kirby Arnold
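    To make the key-versus-display distinction above concrete, here is a small illustrative sketch (the document number, file keys and offsets are invented): keying on document number plus file and offset keeps one entry per archiving run, while keying on the document number alone keeps only the first entry.

        import java.util.LinkedHashMap;
        import java.util.LinkedHashSet;
        import java.util.Map;
        import java.util.Set;

        /** Contrasts an index keyed on document/file/offset with one keyed on the document number alone. */
        public class InfostructureKeySketch {
            public static void main(String[] args) {
                // Document 0000012345 written by two overlapping archiving runs (file and offset differ).
                String[][] indexed = {
                    {"0000012345", "000042", "000001"},   // document number, archive file key, offset
                    {"0000012345", "000043", "000017"}
                };

                // File and offset as KEY fields: each run produces its own entry, so the document appears twice.
                Set<String> keyedOnFileAndOffset = new LinkedHashSet<>();
                // File and offset as DISPLAY fields: the document number alone is the key, the second insert is skipped.
                Map<String, String[]> keyedOnDocNumber = new LinkedHashMap<>();

                for (String[] row : indexed) {
                    keyedOnFileAndOffset.add(row[0] + "/" + row[1] + "/" + row[2]);
                    keyedOnDocNumber.putIfAbsent(row[0], row);
                }
                System.out.println("Hits with file/offset as key fields: " + keyedOnFileAndOffset.size());     // 2
                System.out.println("Hits with file/offset as display fields: " + keyedOnDocNumber.size());     // 1
            }
        }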

  • Archiving SAP APO

    Hi
    I have got the information from SAP Help on which archiving objects can be used to archive SAP APO data, and I learned that SAP APO is a separate box. If I need to archive the data from SAP APO using the corresponding archiving objects, is transaction SARA available in the SAP APO box? Or, if we want to run the archiving programs through R/3, how can SAP APO communicate with our external R/3 system which has SARA active?
    Please suggest, thank you.
    Srini

    Hi,
    I have identified that there are no archiving objects specific to application area "XF - Planning".
    If we have SAP APO in a separate system, can anyone tell me the best practice to follow for data archiving?
    As I have a new system in place and do not have data to analyze, please suggest a list of archiving objects to be implemented in such a scenario.
    Thank You for your help in advance.
    With Regards,
    Santanu
