Archiving on the File System

Hi Experts,
I want to configure document (PDF) archiving. As a first step I went into transaction OAC0, but I could not work out which option to use to get all the documents onto a file server or local disk.
If Document Area is chosen as ArchiveLink, then the storage settings are a bit confusing.
Can anybody let me know whether this can be done without a Content Server?
Regards
Pras

Hello,
If you would like to use a content repository defined in OAC0 to perform document archiving to a file system, there is only one option:
you have to choose Storage type = HTTP Content Server.
Afterwards, you have to set up a Content Server that lets you store the documents on a file system.
The cheapest solution for this is SAP's Content Server 6.40.
You can also use OpenText (IXOS) for this, but that is an expensive solution.
http://help.sap.com/saphelp_nw70/helpdata/EN/40/32104211625933e10000000a155106/content.htm
Wim

Similar Messages

  • Extract a BLOB to the file system

    Hello,
    I've saved some archives (PDFs, DOCs) in a BLOB Field with Initialize_Container built-in.
    How could I extract these archives to file system ? (inverse operation).
    Thanks. Regards.

    It depends on what OLE automation interfaces Acrobat Reader supports; you may need the full version of Acrobat to do any decent manipulation.
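    For the server-side route, here is a minimal PL/SQL sketch that writes a BLOB column back out to a file with DBMS_LOB and UTL_FILE (the DIRECTORY object EXPORT_DIR and the table DOCS(id, content BLOB) are illustrative assumptions, not objects from this thread):
    CREATE OR REPLACE PROCEDURE blob_to_file (p_id IN NUMBER, p_filename IN VARCHAR2) IS
      l_blob   BLOB;
      l_file   UTL_FILE.FILE_TYPE;
      l_buffer RAW(32767);
      l_amount BINARY_INTEGER := 32767;
      l_pos    INTEGER := 1;
      l_length INTEGER;
    BEGIN
      SELECT content INTO l_blob FROM docs WHERE id = p_id;
      l_length := DBMS_LOB.GETLENGTH(l_blob);
      -- 'wb' opens the file in binary write mode on the database server
      l_file := UTL_FILE.FOPEN('EXPORT_DIR', p_filename, 'wb', 32767);
      WHILE l_pos <= l_length LOOP
        DBMS_LOB.READ(l_blob, l_amount, l_pos, l_buffer);  -- reads up to 32 KB per pass
        UTL_FILE.PUT_RAW(l_file, l_buffer, TRUE);          -- TRUE = flush after each chunk
        l_pos := l_pos + l_amount;
      END LOOP;
      UTL_FILE.FCLOSE(l_file);
    END;
    /
    Note that this writes to a directory on the database server, not the client; for a client-side export from Forms you would still need an OLE or WebUtil approach.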

  • File drag and drop from a Java app to the file system - Linux problem

    Hi all,
    I am developing a file-archiving application with a graphical user interface. The files are shown in a JList, and I implemented a drag-and-drop feature for it. It works well on Windows, but not at all on Linux (Ubuntu).
    Under Linux, exporting a file with DND from the archive to the file system does not work.
    When extracting a file with DND, we can't know the target destination folder, so I create a temp file (the file extracted from the archive is written to the OS tmp dir), and the system is expected to handle the DND action.
    The temp file is created correctly (the original data is retrieved properly), but Ubuntu refuses to copy or move it via DND.
    I get a system dialog window (while extracting file "test1"):
    Error while moving
    There was an error getting information about "[tmp/60lp1t7egl/test1]".
    Show more details > Operation not supported
    (Cancel, Skip all, Skip, Retry)
    Could it be a permissions problem?
    Any ideas?
    Regards,
    Biibox


  • Archive Repository - Content Server or Root File System?

    Hi All,
    We are in the process of evaluating a storage solution for archiving, and I would like to hear your experiences and recommendations. I've ruled out 3rd-party solutions such as IXOS as overkill for our requirement. That leaves us with the i5/OS root file system or the SAP Content Server, in either a Linux partition or on a Windows server. Has anyone done archiving with a similar setup? What issues did you face? I don't plan to replicate archive objects via MIMIX.
    Is anyone running the SAP Content Server in a Linux partition? I'd like to know your experience with this even if you don't use the Content Server for archiving. We use the Content Server (currently on Windows) for attaching files to SAP documents (e.g., Sales Documents) via Generic Object Services (GOS). While I lean towards running separate instances of the Content Server for archiving and GOS, I would like to run them both in the same Linux LPAR.
    TIA,
    Stan

    Hi Stanley,
    If you choose to store your data archive files at the file system level, is that a secure enough environment? A third-party certified storage solution provides a secure system where the archive files cannot be altered, and also provides a way to manage the files over the years until they have met their retention limit.
    Another thing to consider: even if the end users do not need access to the archived data, your company might need to be able to access the data easily in an audit or lawsuit situation.
    I am an SAP customer whose job function is the technical lead for my company's SAP data archiving projects, not a 3rd-party storage solution provider, and I highly recommend a certified storage solution for compliance reasons.
    Also, here is some information from the SAP Data Archiving web pages concerning using SAP Content Server for data archive files:
    10. Is the SAP Content Server suitable for data archiving?
    Up to and including SAP Content Server 6.20, the SAP CS is not designed to handle large files, which are common in data archiving. The new SAP CS 6.30 is designed to handle large files as well and can therefore technically be used to store archive files. SAP CS does not support optical media. It is especially important to regularly run backups of the existing data!
    Recommendations for using SAP CS for data archiving:
    - Store the files on the SAP CS in a decompressed format (make the settings at the repository)
    - Install SAP CS and SAP DB on one server
    - Use SAP CS for Unix (runtime tests to see how SAP CS for Windows behaves with large files still have to be carried out)
    Best Regards,
    Karin Tillotson

  • Is there any limit on creating files in a file system in Solaris 10?

    I need to know whether there is any limit on creating files in a file system in Solaris 10.
    thanks

    http://www.unix.com/solaris/25956-max-size-file.html

  • Local file system for archive destination in RAC with ASM

    Hi gurus,
    I need some info.
    I have an ASM file system in a 2-node RAC.
    The client does not want to use the flash recovery area.
    Can we use a local file system rather than ASM as the archive destination,
    e.g. /xyzlog1 for archives coming from node 1 and /xyzlog2 for archive logs coming from node 2?
    The important point is that these two destinations are not shared between the nodes.
    The OS is Solaris SPARC 10.
    The version is 10.2.0.2.

    There is huge space in the storage.
    Please tell me how you would generally do this.
    Do we take one disk from the storage, format it with a local file system, and share it between the 2 nodes?
    If so, that mount point will have the same mount point name when seen from the other node, right?
    And in this scenario, if one instance is down, can archives still be applied from the shared mount point that lives on the down node?

    Were you previously using a shared ASM location for the archives? If so, you can add a CANDIDATE disk to your existing (shared) archive disk group. If not, you can create a new (shared) disk group from formatted LUNs, i.e. candidate disks, sized to your space requirements. Then you can point log_archive_dest_1 of both nodes to the single shared location (disk group).
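    If you do stay with local, non-shared destinations as originally described, here is a minimal sketch of per-instance settings (the instance names orcl1/orcl2 are illustrative assumptions; requires an spfile):
    SQL> alter system set log_archive_dest_1='LOCATION=/xyzlog1' scope=both sid='orcl1';
    SQL> alter system set log_archive_dest_1='LOCATION=/xyzlog2' scope=both sid='orcl2';
    Keep in mind that with non-shared destinations each node can read only its own archives, so for media recovery or RMAN backups both destinations must be reachable (for example via NFS cross-mounts), or you must back up from both nodes.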

  • Archive notifications to the file system

    We attached some documents (e.g. emails with screenshots) to notifications in transaction QM03.
    We need to archive those notifications with their attachments, including the screenshots where present.
    1) We do not know the related archiving objects;
    2) even if we knew the archiving objects, we are not sure whether the attached screenshots would be archived;
    3) we can only archive to the file system, since no other storage is available.
    Your advice on solving these problems is appreciated.
    Thanks a lot

    Hi,
    1. You can use the archiving object QM_QMEL to archive quality notifications. Please refer to the SAP documentation for prerequisites and dependencies:
    http://help.sap.com/saphelp_erp60_sp/helpdata/en/e0/bc963457885f2ee10000009b38f83b/frameset.htm
    2. As far as I know, the attachments created using GOS will not be archived along with the notification. These attachments are stored in database tables themselves, and only a link is available to the notification. Please refer to note 530792 for more info on where the attachments are stored.
    3. Notifications can be archived to the file system (using SARA).
    Hope this helps,
    Naveen

  • Using 2 file systems to place archive logs

    We currently have the LOG_ARCHIVE_DEST parameter set to place archive logs in the /u10 file system. However, it no longer has enough space, and the system admin added a second dedicated file system, /u11, for archive logs. So we would like to start using both of them: if the first gets full, the second should be used until the backup job deletes the files. Is this possible in the version we have (8.1.7), and what changes should we make to start utilizing both file systems?
    Thanks.

    I am not sure whether this was introduced in 8i, but both 9i and 10g have this feature.
    You can use the ALTERNATE attribute of log_archive_dest_n to set an alternate destination that takes over if the primary destination fails. That is, if the primary (MANDATORY) archive log destination fails, because it is full or for any other reason, the database starts archiving to the ALTERNATE destination. After you clear the primary destination, you have to manually switch the destinations back to PRIMARY and ALTERNATE.
    SQL> alter system set log_archive_dest_1='LOCATION=/u00/app/oracle/admin/archive_1 MANDATORY NOREOPEN ALTERNATE=log_archive_dest_2';
    SQL> alter system set log_archive_dest_2='LOCATION=/u01/app/oracle/admin/archive_2 OPTIONAL';
    SQL> alter system set log_archive_dest_state_1=enable;
    SQL> alter system set log_archive_dest_state_2=alternate;
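    As a follow-up, here is a quick way to check which destination is currently in use (a sketch against the 9i/10g-style v$archive_dest view; on 8.1.7 the view exists but its columns may differ):
    SQL> select dest_name, status, destination from v$archive_dest where dest_name in ('LOG_ARCHIVE_DEST_1','LOG_ARCHIVE_DEST_2');
    The alternate destination shows STATUS = 'ALTERNATE' while it is on standby and should become VALID once it takes over.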
    Thanks,
    Harris.

  • How to use the file system in place of an optical archive?

    We do not have IXOS (OpenText) yet.
    However, I have configured everything for print list archiving and early archiving for IXOS (on the R/3 side only!).
    Can I modify the procedure a bit so that the archive is stored in the file system instead of on the optical device, since we do not have IXOS yet?
    Please help. Thanks!

    hi,
    Create Queues
    In this customizing activity you create queues and specify the queue administrator. This process should become part of the installation routine of the storage system, because not activating a queue can lead to irreparable errors (and may mean having to repeat print list creation and storage).
    Example
    CARA Queue: Queue in which the spool writes the storage requests.
    Activities
    You can create queues for the following functions:
    Asynchronous storage (CARA-Queue)
    Error in asynchronous storage (CARA_E-Queue)
    Storage confirmation (CFBC-Queue)
    Error in storage confirmation (CFBC_E-Queue)
    Asynchronous retrieval (CFBA-Queue)
    Error in asynchronous retrieval (CFBA_E-Queue)
    You can also define a queue administrator.
    You can also check this in the ArchiveLink customizing.
    Edit Links
    In this customizing activity, you link a document type to an object type, a content repository, and a link table. This has the following effect on documents of this document type:
    The documents can only be linked to instances of the specified business object type.
    The link entry is entered in the specified link table.
    The documents are always stored in the specified content repository.
    Activities
    Enter the following data:
    - Object type: a business object type that exists in the Business Object Repository (transaction SWO3). One object type can be used with more than one document type.
    - Document type: name of a document type specified in the customizing activity Document Types.
    - Status: X means the storage system is active; empty means the storage system is not active.
    - Content repository ID: the two-digit identification number that you entered in the customizing activity Maintain Content Repositories.
    - Link table: name of the table in which ArchiveLink enters the link entries between stored documents and the corresponding application documents. SAP supplies the following link tables: TOA01, TOA02, TOA03 (general link tables), TOAHR (only for documents from the SAP HR application component), and TOADL (only for print lists). Alternatively, you can define your own link tables; to do this, see the customizing activity Maintain Available Link Tables, where you can also find a description of the link tables supplied by SAP.
    - Retention period: the number of months the entry for the stored document remains in the link table before it is deleted.
    Benakaraja

  • Archive print lists to the file system?

    Gurus:
    We do NOT have IXOS or the like yet.
    So we want to archive print lists to the file system.
    Would you please advise how to achieve this and, after archiving, how to view the archived print lists?
    Thanks!

    hi,
    Create Queues
    In this customizing activity you create queues and specify the queue administrator. This process should become part of the installation routine of the storage system, because not activating a queue can lead to irreparable errors (and may mean having to repeat print list creation and storage).
    Example
    CARA Queue: Queue in which the spool writes the storage requests.
    Activities
    You can create queues for the following functions:
    Asynchronous storage (CARA-Queue)
    Error in asynchronous storage (CARA_E-Queue)
    Storage confirmation (CFBC-Queue)
    Error in storage confirmation (CFBC_E-Queue)
    Asynchronous retrieval (CFBA-Queue)
    Error in asynchronous retrieval (CFBA_E-Queue)
    You can also define a queue administrator.
    You can also check this in the ArchiveLink customizing.
    Edit Links
    In this customizing activity, you link a document type to an object type, a content repository, and a link table. This has the following effect on documents of this document type:
    The documents can only be linked to instances of the specified business object type.
    The link entry is entered in the specified link table.
    The documents are always stored in the specified content repository.
    Activities
    Enter the following data:
    - Object type: a business object type that exists in the Business Object Repository (transaction SWO3). One object type can be used with more than one document type.
    - Document type: name of a document type specified in the customizing activity Document Types.
    - Status: X means the storage system is active; empty means the storage system is not active.
    - Content repository ID: the two-digit identification number that you entered in the customizing activity Maintain Content Repositories.
    - Link table: name of the table in which ArchiveLink enters the link entries between stored documents and the corresponding application documents. SAP supplies the following link tables: TOA01, TOA02, TOA03 (general link tables), TOAHR (only for documents from the SAP HR application component), and TOADL (only for print lists). Alternatively, you can define your own link tables; to do this, see the customizing activity Maintain Available Link Tables, where you can also find a description of the link tables supplied by SAP.
    - Retention period: the number of months the entry for the stored document remains in the link table before it is deleted.
    Benakaraja

  • SAP GoLive: File System Response Times and Online Redo Logs Design

    Hello,
    A SAP GoingLive Verification session has just been performed on our SAP production environment.
    SAP ECC6
    Oracle 10.2.0.2
    Solaris 10
    As usual, we received database configuration instructions, but I'm a little bit skeptical about two of them :
    1/
    We have been told that our file system read response times "do not meet the standard requirements".
    The following datafile has been considered as having too high an average read time per block:
    File name                                      Blocks read   Avg. read time (ms)   Total read time per datafile (ms)
    /oracle/PMA/sapdata5/sr3700_10/sr3700.data10   67534         23                    1553282
    I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
    2/
    We have been asked to increase the size of the online redo logs, which are already quite large (54 MB).
    Actually, we have BW loading that generates "Checkpoint not complete" messages every night.
    I've read in SAP note 79341 that:
    "The disadvantage of big redo log files is the lower checkpoint frequency and the longer time Oracle needs for an instance recovery."
    Frankly, I have trouble understanding this sentence.
    Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    But how is it that frequent checkpoints should decrease the time necessary for recovery?
    Thank you.
    Any useful help would be appreciated.

    Hello
    >> I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
    The recommended ("standard") values are published at the end of SAP note #322896.
    23 ms does seem a little high to me; for example, we have around 4 to 6 ms on our productive system (with SAN storage).
    >> Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    Correct.
    >> But how is it that frequent checkpoints should decrease the time necessary for recovery?
    A checkpoint occurs on every log switch (of the online redo log files). On a checkpoint event, the following three things happen in an Oracle database:
    - Every dirty block in the buffer cache is written down to the datafiles
    - The latest SCN is written (updated) into the datafile headers
    - The latest SCN is also written to the controlfiles
    If your redo log files are larger, checkpoints do not happen as often, and in that case the dirty buffers are not written down to the datafiles (unless free space is needed in the buffer cache). So if your instance crashes, you need to apply more redo to the datafiles to reach a consistent state (roll forward). If you have smaller redo log files, more log switches occur, so the SCNs in the datafile headers (and the corresponding data) are closer to the newest SCN; ergo, the recovery is faster.
    But this concept does not fully match reality, because Oracle implements algorithms to reduce the DBWR workload at a checkpoint.
    There are also several parameters (depending on the Oracle version) that ensure a required recovery time is kept (for example FAST_START_MTTR_TARGET).
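    For illustration, a minimal sketch of that approach on 9i/10g (the 300-second target is an assumed example value, not a recommendation for your system):
    SQL> alter system set fast_start_mttr_target=300 scope=both;
    SQL> select target_mttr, estimated_mttr from v$instance_recovery;
    With a target set, the instance paces incremental checkpointing by itself, so the size of the online redo logs no longer dominates crash recovery time.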
    Regards
    Stefan

  • How to delete file systems from a Live Upgrade environment

    How to delete non-critical file systems from a Live Upgrade boot environment?
    Here is the situation.
    I have a Sol 10 upd 3 machine with 3 disks which I intend to upgrade to Sol 10 upd 6.
    Current layout
    Disk 0: 16 GB:
    /dev/dsk/c0t0d0s0 1.9G /
    /dev/dsk/c0t0d0s1 692M /usr/openwin
    /dev/dsk/c0t0d0s3 7.7G /var
    /dev/dsk/c0t0d0s4 3.9G swap
    /dev/dsk/c0t0d0s5 2.5G /tmp
    Disk 1: 16 GB:
    /dev/dsk/c0t1d0s0 7.7G /usr
    /dev/dsk/c0t1d0s1 1.8G /opt
    /dev/dsk/c0t1d0s3 3.2G /data1
    /dev/dsk/c0t1d0s4 3.9G /data2
    Disk 2: 33 GB:
    /dev/dsk/c0t2d0s0 33G /data3
    The data file systems are not in use right now, and I was thinking of partitioning data3 into 2 or 3 file systems and then creating a new BE.
    However, the system already has a BE (named s10), and that BE lists all of the file systems, including the data ones.
    # lufslist -n 's10'
    boot environment name: s10
    This boot environment is currently active.
    This boot environment will be active on next system boot.
    Filesystem fstype device size Mounted on Mount Options
    /dev/dsk/c0t0d0s4 swap 4201703424 - -
    /dev/dsk/c0t0d0s0 ufs 2098059264 / -
    /dev/dsk/c0t1d0s0 ufs 8390375424 /usr -
    /dev/dsk/c0t0d0s3 ufs 8390375424 /var -
    /dev/dsk/c0t1d0s3 ufs 3505453056 /data1 -
    /dev/dsk/c0t1d0s1 ufs 1997531136 /opt -
    /dev/dsk/c0t1d0s4 ufs 4294785024 /data2 -
    /dev/dsk/c0t2d0s0 ufs 36507484160 /data3 -
    /dev/dsk/c0t0d0s5 ufs 2727290880 /tmp -
    /dev/dsk/c0t0d0s1 ufs 770715648 /usr/openwin -
    I browsed the Solaris 10 Installation Guide and the man pages for the lu commands, but cannot find how to remove the data file systems from the BE.
    How do I do a live upgrade on this system?
    Thanks for your help.

    Thanks for the tips.
    I commented out the entries in /etc/vfstab, and also had to remove the files /etc/lutab and /etc/lu/ICF.1; after that I could create the boot environment from scratch.
    I was also able to create another boot environment and copy into it, but now I'm facing a different problem: an error when trying to upgrade.
    # lustatus
    Boot Environment           Is       Active Active    Can    Copy     
    Name                       Complete Now    On Reboot Delete Status   
    s10                        yes      yes    yes       no     -        
    s10u6                      yes      no     no        yes    -
    Now, I have the Solaris 10 Update 6 DVD image on another machine
    which shares out the directory. I mounted it on this machine,
    did a lofiadm and mounted that at /cdrom.
    # ls -CF /cdrom /cdrom/boot /cdrom/platform
    /cdrom:
    Copyright                     boot/
    JDS-THIRDPARTYLICENSEREADME   installer*
    License/                      platform/
    Solaris_10/
    /cdrom/boot:
    hsfs.bootblock   sparc.miniroot
    /cdrom/platform:
    sun4u/   sun4us/  sun4v/
    Now I did luupgrade and I get this error:
    # luupgrade -u -n s10u6 -s /cdrom    
    ERROR: The media miniroot archive does not exist </cdrom/boot/x86.miniroot>.
    ERROR: Cannot unmount miniroot at </cdrom/Solaris_10/Tools/Boot>.
    I find it strange that this SPARC machine is complaining about x86.miniroot.
    BTW, the machine holding the DVD image happens to be x86 running Sol 10.
    I thought that wouldn't matter, as it is just NFS sharing a directory which has a DVD image.
    What am I doing wrong?
    Thanks.

  • Capture all SQL statements and archive to file in real time

    Want to capture all SQL statements and archive them to a file in real time?
    Oracle Session Manager is the tool you need.
    Get it at http://www.wangz.net
    This tool monitors how connected sessions use database instance resources in real time. You can obtain an overview of session activity sorted by a statistic of your choosing. For any given session, you can then drill down for more detail. You can further customize the information you display by specifying manual or automatic data refresh, and the rate of automatic refresh.
    In addition to these useful monitoring capabilities, OSM allows you to send LAN pop-up message to users of Oracle sessions.
    Features:
    --Capture all SQL statement text and archive it to files in real time
    --Pinpoint problematic database sessions and display detailed performance and resource consumption data
    --Dynamically list sessions holding locks and the sessions waiting on them
    --Kill several selected sessions at once
    --Send LAN pop-up messages to users of Oracle sessions
    --Report hit/miss ratios for the library cache, dictionary cache and buffer cache periodically, to help tune memory
    --Export necessary data to a file
    --Modify dynamic system parameters on the fly
    --Syntax highlighting for SQL statements
    --An overview of the currently connected instance, such as version, SGA, license, etc.
    --Find objects by file ID and block ID
    Gudu Software
    http://www.wangz.net

    AnkitV wrote:
    Hi All
    I have 3 statements, and I am writing something to a file using UTL_FILE.PUT_LINE after each statement is over. Each statement takes the time mentioned below to complete.
    I am opening file in append mode.
    statement1 (takes 2 mins)
    UTL_FILE.PUT_LINE
    statement2 (takes 5 mins)
    UTL_FILE.PUT_LINE
    statement3 (takes 10 mins)
    UTL_FILE.PUT_LINE
    I noticed that I am able to see the contents written by UTL_FILE.PUT_LINE only after statement3 is over, not immediately after statement1 and statement2 are done.
    Can anybody tell me if this is correct behavior, or am I missing something here?

    The calling procedure must terminate before data is actually written to the file.
    It is expected and correct behavior.
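    If the lines are needed while the job is still running, here is a minimal sketch using UTL_FILE.FFLUSH to push buffered output to disk after each step (the directory object LOG_DIR and the file name are illustrative assumptions):
    DECLARE
      l_file UTL_FILE.FILE_TYPE;
    BEGIN
      l_file := UTL_FILE.FOPEN('LOG_DIR', 'progress.log', 'a');  -- append mode
      -- ... statement1 ...
      UTL_FILE.PUT_LINE(l_file, 'statement1 done');
      UTL_FILE.FFLUSH(l_file);  -- make the buffered line visible now
      -- ... statement2 ...
      UTL_FILE.PUT_LINE(l_file, 'statement2 done');
      UTL_FILE.FFLUSH(l_file);
      UTL_FILE.FCLOSE(l_file);
    END;
    /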

  • File system error while restoring a backup from one Analysis Services server to another

    Hi
    I have restored an Analysis Services database backup from server 1 (SQL Server 2008) to server 2 (SQL Server 2008), and it is working fine.
    Now I am trying to restore another database from Analysis Services server 3 (SQL Server 2005) to server 2 (SQL Server 2008), but it gives an error:
    File system error occurred while opening the file G:\ProgramFiles\MSAS10.MSSQLSERVER\OLAP\Backup\Workordermodule.0.db\WomDW1.7.00 ..etc

    Hi Maverick,
    According to your description, you are experiencing the error when restoring the SSAS 2005 database on the SSAS 2008 server, right?
    In your scenario, how do you back up your SSAS 2005 database? Please ensure that your backup and restore steps are correct. Here is a blog which describes, step by step, how to migrate a cube from SQL Server Analysis Services 2005 to SQL Server Analysis Services 2008; please refer to the link below.
    http://blogs.technet.com/b/mdegre/archive/2010/03/31/migrating-a-cube-in-sql-server-analysis-services-2005-to-sql-server-analysis-services-2008.aspx
    Besides, you can import the SQL 2005 AS database into a SQL 2008 BIDS project, then fix the AMO warnings and deploy the database on the SQL 2008 server.
    http://technet.microsoft.com/en-in/library/ms365361(v=sql.100).aspx
    Regards,
    Charlie Liao
    TechNet Community Support

  • I have one problem with Data Guard. My archive log files are not applied.

    I have one problem with Data Guard: my archive log files are not applied, even though all archive log files have been received on my physical standby database.
    I have created a Physical Standby database on Oracle 10gR2 (Windows XP professional). Primary database is on another computer.
    In Enterprise Manager on the primary database everything looks OK; I get the message "Data Guard status Normal".
    But, as I wrote above, the archive log files are not applied.
    After I created the physical standby database, I also did the following:
    1. I connected to the Physical Standby database instance.
    CONNECT SYS/SYS@luda AS SYSDBA
    2. I started the Oracle instance at the Physical Standby database without mounting the database.
    STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
    3. I mounted the Physical Standby database:
    ALTER DATABASE MOUNT STANDBY DATABASE
    4. I started redo apply on Physical Standby database
    alter database recover managed standby database disconnect from session
    5. I switched the log files on Physical Standby database
    alter system switch logfile
    6. I verified the redo data was received and archived on Physical Standby database
    select sequence#, first_time, next_time from v$archived_log order by sequence#
    SEQUENCE# FIRST_TIME NEXT_TIME
    3 2006-06-27 2006-06-27
    4 2006-06-27 2006-06-27
    5 2006-06-27 2006-06-27
    6 2006-06-27 2006-06-27
    7 2006-06-27 2006-06-27
    8 2006-06-27 2006-06-27
    7. I verified the archived redo log files were applied on Physical Standby database
    select sequence#,applied from v$archived_log;
    SEQUENCE# APP
    4 NO
    3 NO
    5 NO
    6 NO
    7 NO
    8 NO
    8. on Physical Standby database
    select * from v$archive_gap;
    No rows
    9. on Physical Standby database
    SELECT MESSAGE FROM V$DATAGUARD_STATUS;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    ARC1: Becoming the heartbeat ARCH
    Attempt to start background Managed Standby Recovery process
    MRP0: Background Managed Standby Recovery process started
    Managed Standby Recovery not using Real Time Apply
    MRP0: Background Media Recovery terminated with error 1110
    MRP0: Background Media Recovery process shutdown
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[1]: Assigned to RFS process 2148
    RFS[1]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[2]: Assigned to RFS process 2384
    RFS[2]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[3]: Assigned to RFS process 3188
    RFS[3]: Identified database type as 'physical standby'
    Primary database is in MAXIMUM PERFORMANCE mode
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[4]: Assigned to RFS process 3168
    RFS[4]: Identified database type as 'physical standby'
    RFS[4]: No standby redo logfiles created
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    10. on Physical Standby database
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 1 9 13664 2
    RFS IDLE 0 0 0 0
    10) on Primary database:
    select message from v$dataguard_status;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARCm: Becoming the 'no FAL' ARCH
    ARCm: Becoming the 'no SRL' ARCH
    ARCd: Becoming the heartbeat ARCH
    Error 1034 received logging on to the standby
    Error 1034 received logging on to the standby
    LGWR: Error 1034 creating archivelog file 'luda'
    LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
    FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
    11) on primary db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
    Luda 4 NO
    Luda 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
    Luda 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
    Luda 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
    Luda 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
    Luda 8 NO
    12) on standby db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
    13) my init.ora files
    On standby db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
    *.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_unique_name='luda'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='luda'
    *.fal_server='irina'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
    On primary db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
    *.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='irina'
    *.fal_server='luda'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
    Please help me!!!!

    Hi,
    After several tries, my redo logs are applied now. I think in my case it had to do with tnsnames.ora. At the moment I have both databases in both tnsnames.ora files using the SID and not the SERVICE_NAME.
    Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration, DGMGRL gives no feedback and appears to hang. The log, though, says that it succeeded.
    In another session, 'show configuration' results in the following, confirming that the enable succeeded.
    DGMGRL> show configuration
    Configuration
    Name: avhtest
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    avhtest - Primary database
    avhtestls53 - Physical standby database
    Current status for "avhtest":
    Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
    Is there anybody who has experienced the same problem and/or knows the solution to this?
    With kind regards,
    Martin Schaap
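    One more hedged note on the original post: the repeated "No standby redo logfiles created" messages suggest the standby has no standby redo logs, without which real-time apply is not possible. A minimal sketch for adding them on the standby (the group numbers, path, and 50M size are assumptions; match your online redo log size and create one group more than the primary has):
    SQL> alter database recover managed standby database cancel;
    SQL> alter system set standby_file_management='MANUAL';
    SQL> alter database add standby logfile group 4 ('C:\oracle\product\10.2.0\oradata\luda\srl04.log') size 50M;
    SQL> alter database add standby logfile group 5 ('C:\oracle\product\10.2.0\oradata\luda\srl05.log') size 50M;
    SQL> alter system set standby_file_management='AUTO';
    SQL> alter database recover managed standby database using current logfile disconnect from session;
    The same standby redo logs should normally also be created on the primary, so that they are in place after a switchover.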
