Why multiple log files are created while using transactions in Berkeley DB

We are using the Berkeley DB Java Edition DB Base API. We have already read/written a CDR file of 9 lakh (900,000) rows, with transactions and without transactions, implementing the secondary database concept. The issues we are getting are as follows:
With transactions: the size of the database environment is 1.63 GB, which is due to the number of log files created, each of 10 MB.
Without transactions: the size of the database environment is 588 MB, and only one log file is created, which is of 10 MB.
So we want to know the concrete reason for this difference: how are log files created, what does it mean to use or not use transactions in a DB environment, and what are these DB files (__db.001, __db.002, __db.003, __db.004, __db.005) and log files like log.0000000001...? Please reply soon.

If you are seeing __db.NNN files in your environment root directory, these are the environment's shared region files. And since you see these, you are using Berkeley DB Core (with the Java/JNI Base API), not Berkeley DB Java Edition.
First of all, do you need transactions or not? Review the documentation section called "Why transactions?" in the Berkeley DB Programmer's Reference Guide.
There should be no logs created when transactions are not used; the single 10 MB log file has likely remained there from the previous transactional run.
Have you reviewed the basic documentation references for Berkeley DB Core?
- Berkeley DB Programmer's Reference Guide
in particular sections: The Berkeley DB products, Shared memory regions, Chapter 11. Berkeley DB Transactional Data Store Applications, Chapter 17. The Logging Subsystem.
- Getting Started with Berkeley DB (Java API Guide) and Getting Started with Berkeley DB Transaction Processing (Java API Guide).
If so, you would have had the answers to these questions: the __db.NNN files are the environment shared region files needed by the environment's subsystems (transactions, locking, logging, memory pool buffer, mutexes), and the log.MMMMMMMMMM files are the log files needed for recoverability, created when running with transactions.
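To make the difference concrete, here is a minimal sketch of opening an environment with and without the transactional subsystems, assuming the Berkeley DB Core Java (JNI) Base API (com.sleepycat.db); the environment home directory is a placeholder:

    import java.io.File;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;

    public class EnvOpen {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig config = new EnvironmentConfig();
            config.setAllowCreate(true);
            config.setInitializeCache(true);    // memory pool buffer -> __db.NNN region files
            config.setInitializeLocking(true);  // locking            -> __db.NNN region files

            // Transactional run: this enables the logging subsystem, which is
            // what writes the log.0000000001, log.0000000002, ... files.
            config.setTransactional(true);
            config.setInitializeLogging(true);

            // For the non-transactional run, omit the two lines above;
            // no new log.* files should then be created.

            Environment env = new Environment(new File("/path/to/env"), config);
            try {
                // ... open databases and read/write the CDR records here ...
            } finally {
                env.close();
            }
        }
    }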
--Andrei

Similar Messages

  • SHADOW_IMPORT_UPG1 is very very slow, no log files are created

    Hi all
    We are now doing our production upgrade. During the SHADOW_IMPORT_UPG1 phase the system is very slow, and
    no log files are created in the /usr/sap/put/log directory.
    Only three files are growing in the /usr/sap/tmp directory:
    orar3p> ls -lrt
    total 219176
    -rw-rw-rw-   1 r3padm     sapsys        2693 Aug 15 18:42 UCMIG_DE.ECO
    -rw-rw-rw-   1 r3padm     sapsys        2374 Aug 15 18:42 R3trans.out
    -rw-rw-rw-   1 r3padm     sapsys        2685 Aug 15 18:46 ADDON_TR.ECO
    -rw-rw-rw-   1 r3padm     sapsys         726 Aug 15 20:04 crshdusr.log
    -rw-rw-rw-   1 r3padm     sapsys        3915 Aug 15 21:53 EU_IMTSK.ECO
    -rw-rw-r--   1 r3padm     sapsys         257 Aug 15 22:09 SAPKKLFRN18.R3P
    -rw-rw-r--   1 r3padm     sapsys         257 Aug 15 22:09 SAPKKLPTN18.R3P
    -rw-rw-r--   1 r3padm     sapsys         257 Aug 15 22:09 SAPKKLESN18.R3P
    -rw-rw-r--   1 r3padm     sapsys     36433272 Aug 15 23:44 SAPKLESN18.R3P
    -rw-rw-r--   1 r3padm     sapsys     36807577 Aug 15 23:44 SAPKLFRN18.R3P
    -rw-rw-r--   1 r3padm     sapsys     35372350 Aug 15 23:44 SAPKLPTN18.R3P
    orar3p> date
    Fri Aug 15 23:44:54 PDT 2008
    Can anyone advise what to do?
    Thanks
    Senthil

    Hello,
    Did you discover what the cause was for this phase running so slowly? And how long did it take to complete in the end?
    We are currently running an upgrade of our Development system and have struck the same issue.
    I killed the upgrade after the phase had been running for 4 hours and restarted it, but it looks like it is still going to run for a long time.
    Regards....John

  • Thousands of individual .txt files being created while using "next available file name" option in Save to ASCII step

    I'm using Signal Express to record Load vs Displacement data and export it to a format our engineers can work with (in this case ASCII is okay). It would seem that by selecting Next Available File Name from the drop-down arrow it would do just that. For instance, a typical save path for me would look like C:\....Desktop\Project Number and Description\Run_1.txt, and within that Run_1.txt file would be all the data points for that run. When I hit record again, Signal Express would (SHOULD) create a Run_2, since it's the Next Available File Name.
    But instead what it does is create a separate .txt file for every single sample point being read. Needless to say, if I'm recording 6 seconds of data at 1 kHz I end up with thousands of .txt files!
    The first thing that comes to mind is, why would anyone want this?
    Second, how can I record multiple individual runs for the same project and have the file name increment?
    SCXI- 1000 Chassis w/ 1346 adapter
    PCI 6281 DAQ card
    SCXI- 1520 Bridge Board w/ 1314 Terminal Block (x2)
    SCXI- 1180 Feedthrough Panel w/ 1302 Block
    Signal Express 2014.
    Win7 Enterprise

    ...and more attachments of the ASCII save path, before and after acquiring 4 seconds of data along with one of the files from that folder.
    Again, this is 4 seconds of two-channel spring plot data at 100 Samples to Read @ 1k Rate (Start Run.... wait 4 seconds or two full test sample cycles... Stop Run).
    ~EDIT~
    The .txt file would not attach (I think it's too small). Here's what it looks like if you were to open it:
    Load vs Displ - Displ (inches)    Load vs Displ - Load (lbs)
    3.736323                              273.751906
    Also, for some reason it won't let me attach my project file. It's a .seproj extension but the forum thinks it's 1k in size and "empty"
    Message Edited by OKors on 06-05-2009 05:57 PM
    SCXI- 1000 Chassis w/ 1346 adapter
    PCI 6281 DAQ card
    SCXI- 1520 Bridge Board w/ 1314 Terminal Block (x2)
    SCXI- 1180 Feedthrough Panel w/ 1302 Block
    Signal Express 2014.
    Win7 Enterprise
    Attachments:
    FolderBeforeSave.JPG 39 KB
    FolderAfterSave.JPG 239 KB

  • Does anyone know why multiple PDF documents are created when using Adobe Acrobat Pro on OS X (MacBook Pro) in line with the section breaks in Word 2011

    Does anyone know why Adobe Acrobat Pro creates multiple PDF documents from Word 2011 on the section breaks instead of a single PDF document?

  • How to create the log file on a remote system using log4j.

    Hi,
    How do I create the log file on a remote system using log4j? Please give me sample code or related links. I used the example below to create the log file on a remote system, but it throws the exception shown after it. Is there an authentication parameter for accessing the remote path? Please help.
    import org.apache.log4j.FileAppender;
    import org.apache.log4j.Logger;
    import org.apache.log4j.PatternLayout;

    public class Logging {
        Logger log = null;
        FileAppender fileapp = null;

        public Logging(String classname) {
            try {
                log = Logger.getLogger(classname);
                // UNC path to the remote share (see the corrected path in the follow-up below)
                String path = "\\\\192.168.0.14\\c$\\LOG\\d9\\May_08_2008_log.txt";
                fileapp = new FileAppender(new PatternLayout("%r [%t] %-5p %c %x - %m%n"), path, true);
                log.addAppender(fileapp);
                log.info("Logger initialized");
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }
    java.io.FileNotFoundException: \\192.168.0.14\c$\LOG\d9\May_08_2008_log.txt (The network path was not found)
    at java.io.FileOutputStream.openAppend(Native Method)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at org.apache.log4j.FileAppender.setFile(FileAppender.java:290)
    at org.apache.log4j.FileAppender.<init>(FileAppender.java:109)
    at annwyn.logger.BioCapLogger.<init>(Logging.java:23)
    at sun.applet.AppletPanel.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
    Please help.
    Thanks in advance.
    Saravanan.K

    Sorry, the path is missing from the post above. It should be:
    path="\\192.168.0.14\c$\LOG\d9\May_08_2008_log.txt";
    please help.
    Saravanan.K
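    One detail worth noting as an aside: in a Java string literal every backslash must itself be escaped, so a UNC path needs four leading backslashes. A corrected literal for the path above would look like this (whether the share is reachable is a separate question; the stack trace shows the code running inside an applet, which normally has no permission to open network files):

        // Each backslash is doubled for the compiler; at runtime the string
        // is \\192.168.0.14\c$\LOG\d9\May_08_2008_log.txt
        String path = "\\\\192.168.0.14\\c$\\LOG\\d9\\May_08_2008_log.txt";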

  • Recently loads of .tmp files are created and left on exit in the TEMP folder. CC Cleaner cleans them, but takes time because there are so many - why?

    Recently loads of .tmp files are created and left on exit in the TEMP folder. CC Cleaner cleans them, but takes time because there are so many - why?
    == found when running CC Cleaner and it took a long time

  • What are the .fm.sp files that are created when using SharePoint?

    What are the .fm.sp files that are created when using SharePoint? When I'm working with SharePoint, I've noticed that duplicate files ending in .fm.sp are created. I've been unable to find any reference or documentation about them so far.

    When you use SharePoint as a CMS connection in FrameMaker, it creates a folder where SP is installed, and a file ending in .sp is created that lets SharePoint know it is the file associated with it.
    .fm.sp indicates that it is the FrameMaker-type SP file.
    Don't worry about it, as it does not create any mess in the system.
    Harpreet

  • Compressor is not working. When I export to Compressor from FCP to encode a DVD, FCP freezes and Compressor appears to be working, but the progress bar never fills and no files are created. I'm using a MacBook Pro (10.6.7, 2.66 GHz Core 2 Duo, 4 GB 1067 MHz)

    Compressor is not working. When I export to Compressor from FCP to encode a DVD, FCP freezes and Compressor appears to be working, but the progress bar never fills and no files are created. I'm using a MacBook Pro (10.6.7, 2.66 GHz Core 2 Duo, 4 GB 1067 MHz). I'm not new at this. I re-installed FCP successfully. I trashed the preferences. Please help.

    Do you have any filters applied to your clips?
    This happened to me before; I accidentally had applied a Broadcast Safe filter to a color matte that was black....
    I figured it out by duplicating my sequence and deleting items from my timeline one by one until I was able to export a reference movie.
    Anywho, export again, this time with "self contained" movie selected.
    Then try to import it into Compressor.

  • How to change the replication group information after db files are created

    Since group information is persisted in the database, I am wondering if there is a way to update the information.
    We want to implement a kind of Berkeley DB master relay mechanism for our two data centers, which have a slow link in between. Basically, have one master populate a database file, then launch another two nodes as masters to replay it to the other nodes of their own groups. It will be much more efficient this way, since we don't have to copy the data multiple times over the slow link.
    We periodically (once a day) update the Berkeley DB content from a customer's feed on a backend node and upload (rsync) the Berkeley DB file to the two data centers. We would like to have a master node in each data center read the pre-populated data file and replicate the changes to the (read-only) web nodes while they are still running. I simulated this locally, and if I trick the nodeName and nodeHostPort settings it should work (basically, faking the replication nodes on the backend node using a tampered hosts file so they get registered). However, it is not very convenient and is definitely a dangerous hack on the production servers.
    If there were a way, after the creation, to update the group information (for example, change all the nodes' information) without corrupting the log files/replication stream, it would be much easier for us. Basically, we would like to have the node/group information and the data file de-coupled.
    Any ideas how to do that, or is there a better way to design such a replay of data using Berkeley DB?
    Thanks in advance!

    2. You mentioned not to clean up the log files. Is there a point where I can safely call cleanup on the environment while BDB is still online? I can imagine we will run out of space very soon if we don't clean up.
    The approach outlined above (steps 1 to 5) will ensure that no log files are deleted on A while you are updating B and C; the use of DbBackup ensures this. For more information on how this works, see the DbBackup javadoc.
    Whether this causes you to run out of disk space on A is something you'll have to evaluate for yourself. It depends on the write rate on A and how long it takes to do the copy to B and C. If this is a problem, you could make a quick local copy of the environment on A, and then transfer that copy to B/C. But you must prohibit log file deletion during the copy, using DbBackup, or the copy will be invalid.
    You should perform explicit JE log cleaning (including a checkpoint) before doing the copy to B/C. This will reduce the number of files that are copied to B/C, and will reduce the likelihood that you'll fill the disk on A. See the javadoc for Environment.cleanLog for details on how to do an explicit log cleaning.
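    A minimal sketch of that sequence, assuming an open JE Environment named env and leaving the actual file copy elided:

        import com.sleepycat.je.CheckpointConfig;
        import com.sleepycat.je.DatabaseException;
        import com.sleepycat.je.Environment;
        import com.sleepycat.je.util.DbBackup;

        class LogProtectedCopy {
            static void copyEnvironment(Environment env) throws DatabaseException {
                // Clean the JE log until no more files can be cleaned, then force
                // a checkpoint so the cleaned files become eligible for deletion.
                boolean cleanedAny = false;
                while (env.cleanLog() > 0) {
                    cleanedAny = true;
                }
                if (cleanedAny) {
                    CheckpointConfig force = new CheckpointConfig();
                    force.setForce(true);
                    env.checkpoint(force);
                }

                // DbBackup prohibits log file deletion between startBackup() and
                // endBackup(), keeping the set of files consistent during the copy.
                DbBackup backup = new DbBackup(env);
                backup.startBackup();
                try {
                    String[] filesToCopy = backup.getLogFilesInBackupSet();
                    // ... copy filesToCopy from A to B and C here ...
                } finally {
                    backup.endBackup();
                }
            }
        }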
    In your earlier post, it sounded like the updates to A were in batch mode -- done all at once at a specific time of day. If so, you can do the copy to B/C after the update to A. In that case, I don't understand why you are afraid of filling the disk on A, since updates would not be occurring during the copy to B/C.
    --mark

  • I have one problem with Data Guard. My archive log files are not applied.

    I have one problem with Data Guard. My archive log files are not applied. However, I have received all archive log files at my physical standby db.
    I have created a Physical Standby database on Oracle 10gR2 (Windows XP professional). Primary database is on another computer.
    In Enterprise Manager on the primary database it looks OK. I get the following message: "Data Guard status Normal".
    But as I wrote above, "the archive log files are not applied".
    After I created the Physical Standby database, I have also done:
    1. I connected to the Physical Standby database instance.
    CONNECT SYS/SYS@luda AS SYSDBA
    2. I started the Oracle instance at the Physical Standby database without mounting the database.
    STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
    3. I mounted the Physical Standby database:
    ALTER DATABASE MOUNT STANDBY DATABASE
    4. I started redo apply on Physical Standby database
    alter database recover managed standby database disconnect from session
    5. I switched the log files on Physical Standby database
    alter system switch logfile
    6. I verified the redo data was received and archived on Physical Standby database
    select sequence#, first_time, next_time from v$archived_log order by sequence#
    SEQUENCE# FIRST_TIME NEXT_TIME
    3 2006-06-27 2006-06-27
    4 2006-06-27 2006-06-27
    5 2006-06-27 2006-06-27
    6 2006-06-27 2006-06-27
    7 2006-06-27 2006-06-27
    8 2006-06-27 2006-06-27
    7. I verified the archived redo log files were applied on Physical Standby database
    select sequence#,applied from v$archived_log;
    SEQUENCE# APP
    4 NO
    3 NO
    5 NO
    6 NO
    7 NO
    8 NO
    8. on Physical Standby database
    select * from v$archive_gap;
    No rows
    9. on Physical Standby database
    SELECT MESSAGE FROM V$DATAGUARD_STATUS;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    ARC1: Becoming the heartbeat ARCH
    Attempt to start background Managed Standby Recovery process
    MRP0: Background Managed Standby Recovery process started
    Managed Standby Recovery not using Real Time Apply
    MRP0: Background Media Recovery terminated with error 1110
    MRP0: Background Media Recovery process shutdown
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[1]: Assigned to RFS process 2148
    RFS[1]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[2]: Assigned to RFS process 2384
    RFS[2]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[3]: Assigned to RFS process 3188
    RFS[3]: Identified database type as 'physical standby'
    Primary database is in MAXIMUM PERFORMANCE mode
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[4]: Assigned to RFS process 3168
    RFS[4]: Identified database type as 'physical standby'
    RFS[4]: No standby redo logfiles created
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    10. on Physical Standby database
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 1 9 13664 2
    RFS IDLE 0 0 0 0
    10) on Primary database:
    select message from v$dataguard_status;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARCm: Becoming the 'no FAL' ARCH
    ARCm: Becoming the 'no SRL' ARCH
    ARCd: Becoming the heartbeat ARCH
    Error 1034 received logging on to the standby
    Error 1034 received logging on to the standby
    LGWR: Error 1034 creating archivelog file 'luda'
    LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
    FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
    11)on primary db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
    Luda 4 NO
    Luda 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
    Luda 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
    Luda 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
    Luda 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
    Luda 8 NO
    12) on standby db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
    13) my init.ora files
    On standby db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
    *.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_unique_name='luda'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='luda'
    *.fal_server='irina'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
    On primary db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
    *.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='irina'
    *.fal_server='luda'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
    Please help me!!!!

    Hi,
    After several tries my redo logs are applied now. I think in my case it had to do with tnsnames.ora. At this moment I have both databases in both tnsnames.ora files using the SID and not the SERVICE_NAME, as in the sketch below.
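    For illustration, an entry of that shape (using SID rather than SERVICE_NAME) might look like the following; the host name is a placeholder:

        LUDA =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
            (CONNECT_DATA =
              (SID = luda)
            )
          )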
    Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration, DGMGRL gives no feedback and looks like it is hanging. The log, though, says that it succeeded.
    In another session 'show configuration' results in the following, confirming that the enable succeeded.
    DGMGRL> show configuration
    Configuration
    Name: avhtest
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    avhtest - Primary database
    avhtestls53 - Physical standby database
    Current status for "avhtest":
    Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
    Is there anybody who has experienced the same problem and/or knows the solution to this?
    With kind regards,
    Martin Schaap

  • Log files not being created

    Hi, I tried to configure logging for my application, but the log files are not getting created; I am getting the following error.
    My log configuration:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE log-configuration SYSTEM "log-configuration.dtd">
    <log-configuration>
         <log-destinations>
              <log-destination
                   count="10"
                   effective-severity="ALL"
                   limit="10000"
                   name="LogTestLog"
                   pattern="./log/applications/TestLog/LogTestLog.%g.log"
                   type="FileLog"/>
         </log-destinations>
         <log-controllers>
              <log-controller
                   effective-severity="ALL"
                   name="System.out">
                   <associated-destinations>
                        <destination-ref
                             association-type="LOG"
                             name="LogTestLog"/>
                   </associated-destinations>
              </log-controller>
              <log-controller
                   effective-severity="ALL"
                   name="com.giri.test.LogServlet">
                   <associated-destinations>
                        <destination-ref
                             association-type="LOG"
                             name="LogTestLog"/>
                        <anonymous-destination
                             association-type="LOG"
                             type="FileLog"/>
                   </associated-destinations>
              </log-controller>
         </log-controllers>
    </log-configuration>
    #1.5#0030485D5AE617CD000000000000142000042AAA6682264C#1172811259302#/System/Logging##com.sap.tc.logging.FileLogInfoData.[getFileHeader(String fileName, int cntHeadLines)]######147f6460c87a11dba2920030485d5ae6#SAPEngine_System_Thread[impl:5]_99##0#0#Warning##Java#SAP_LOGGING_UNEXPECTED##Unexcepted error occured on !#1#FileHeader parsing#
    #1.5#0030485D5AE617CD000000010000142000042AAA6682297F#1172811259302#/System/Logging##/System/Logging######147f6460c87a11dba2920030485d5ae6#SAPEngine_System_Thread[impl:5]_99##0#0#Path##Java###Caught #1#java.lang.Exception: .\log\applications\TestLog\LogTestLog.0.log (The system cannot find the path specified)
         at com.sap.tc.logging.FileLogInfoData.getEOLLength(FileLogInfoData.java:432)
         at com.sap.tc.logging.FileLogInfoData.getFileHeaderLines(FileLogInfoData.java:348)
         at com.sap.tc.logging.FileLogInfoData.getFileHeaderLines(FileLogInfoData.java:334)
         at com.sap.tc.logging.FileLogInfoData.loadFileLogHeader(FileLogInfoData.java:320)
         at com.sap.tc.logging.FileLogInfoData.init(FileLogInfoData.java:260)
         at com.sap.tc.logging.FileLogInfoData.<init>(FileLogInfoData.java:119)
         at com.sap.tc.logging.FileLog.init(FileLog.java:373)
         at com.sap.tc.logging.FileLog.<init>(FileLog.java:282)
         at com.sap.tc.logging.FileLog.<init>(FileLog.java:246)
         at com.sap.engine.services.log_configurator.admin.LogConfigurator.adjustConfiguration(LogConfigurator.java:665)
         at com.sap.engine.services.log_configurator.admin.LogConfigurator.applyConfiguration(LogConfigurator.java:1488)
         at com.sap.engine.services.log_configurator.LogConfiguratorContainer.prepareStart(LogConfiguratorContainer.java:545)
         at com.sap.engine.services.deploy.server.application.StartTransaction.prepareCommon(StartTransaction.java:239)
         at com.sap.engine.services.deploy.server.application.StartTransaction.prepare(StartTransaction.java:187)
         at com.sap.engine.services.deploy.server.application.ApplicationTransaction.makeAllPhasesOnOneServer(ApplicationTransaction.java:301)
         at com.sap.engine.services.deploy.server.application.ParallelAdapter.makeAllPhasesImpl(ParallelAdapter.java:327)
         at com.sap.engine.services.deploy.server.application.ParallelAdapter.runMe(ParallelAdapter.java:74)
         at com.sap.engine.services.deploy.server.application.ParallelAdapter$1.run(ParallelAdapter.java:218)
         at com.sap.engine.frame.core.thread.Task.run(Task.java:64)
         at com.sap.engine.core.thread.impl5.SingleThread.execute(SingleThread.java:79)
         at com.sap.engine.core.thread.impl5.SingleThread.run(SingleThread.java:150)

    I have the same problem. I also see many similar exceptions from different apps, such as when writing to the file ".\log\applications\cms\default.0.trc".
    Do I have to create the log file before I use the logging service?
    I changed "ForceSingleTraceFile" to "NO" in "LogManager" via the Visual Administrator. Could that be a problem?
    I am running SAP Web AS 6.4 and deploying using SDM. I have a log-configuration.xml in the EAR file. This is my log configuration:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE log-configuration SYSTEM "log-configuration.dtd">
    <log-configuration>
         <log-formatters/>
         <log-destinations>
              <log-destination
                   count="10"
                   effective-severity="ALL"
                   limit="1000000"
                   name="TraceLog01"
                   pattern="./log/file.%g.trc"
                   type="FileLog"/>
         </log-destinations>
         <log-controllers>
              <log-controller
                   effective-severity="ALL"
                   name="LogController01">
                   <associated-destinations>
                        <destination-ref
                             association-type="LOG"
                             name="TraceLog01"/>
                   </associated-destinations>
              </log-controller>
         </log-controllers>
    </log-configuration>

  • FMS - change directory where the log files are located?

    I want to change the log files directory from:
    C:\Program Files (x86)\Adobe\Flash Media Server 3.5/logs
    to:
    D:\fmsLogs
    Please help me to understand...
    in adobe in:
    Home / Flash Media Server  3.5 Configuration and Administration Guide / XML configuration files reference
    it says:
    in Logger.xml in Directory
    Specifies the directory where the log files are located.
    By default, the log files are located in the logs directory in the server installation directory.
    Example:
    <Directory>${LOGGER.LOGDIR}</Directory>
    What does this mean: ${LOGGER.LOGDIR}?
    In order to change the log files directory from:
    C:\Program Files (x86)\Adobe\Flash Media Server 3.5/logs
    to:
    D:\fmsLogs
    do I need to write this:
    <Directory>D:\fmsLogs</Directory>
    or what do I need to write?
    It is totally not understandable from this example...
    Big thanks for any help
    cheinan

    You can change LOGGER.LOGDIR in fms.ini to your preferred location, i.e. D:\fmsLogs, and restart FMS.
    If you want to change the directory for individual logs, you can change it in Logger.xml; by default Logger.xml uses the value from fms.ini.
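    So, assuming a default installation, the simplest change is a single line in fms.ini (Logger.xml then picks the value up through ${LOGGER.LOGDIR}):

        LOGGER.LOGDIR = D:\fmsLogs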

  • Where can I find the log file generated by RMAN using EM 10g?

    Hi
    I am trying to find the log file that is generated when RMAN is invoked from EM.
    I can only see the file using Internet Explorer with the URL:
    em/console/database/rec/bkpMgmt?skey=257&type=oracle_database&target=isatprod.dla_dns.com&event=showJobDe
    But I need to find where the log files are located in the filesystem, because on another server I will not have EM with OC4J.
    Thanks.
    Juan.

    When I use OEM for 10g and choose the option Maintenance/Backup Reports,
    I can see information on all my backups, which includes:
    Backup Name - Start Time - Time Taken - Status - Type - Output Devices - Input Size .....
    When I click on the Status field I can see the log file of that backup.
    (When I click on Status, a URL is invoked, something like the URL below.)
    http://10.5.0.86:1158/em/console/database/rec/bkpMgmt?skey=259&type=oracle_database&target=isatprod.dla_dns.com&event=showJobDet&objType=jobDtl
    So the log file exists somewhere for every backup made; the problem is that I cannot find it.
    The log has approximately 500 lines; if you want, I can send it to you by email.
    Currently I don't have a repository catalog; I use the control file as the repository.
    I don't think 500 lines of log are included in any dynamic performance view.
    Thanks
    Juan
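    One place worth checking, assuming Oracle 10g: recent RMAN job output is kept in the V$RMAN_OUTPUT dynamic performance view (it is memory-based, so it only covers recent jobs), and a query like this may recover the text you are after:

        SELECT output
          FROM v$rman_output
         ORDER BY recid;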

  • Log files are not purged

    Hi All,
    I have a TT data store with the setting LogPurge=1. There are lots of transactions manipulating the data store. If I'm correct, log files that are older than the oldest checkpoint file are deleted automatically by TT, provided there are no operations holding them. In my case the log files are not being deleted, so an ls -ltr command prints:
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 10:49 appdbtt.res1
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 10:49 appdbtt.res0
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 10:49 appdbtt.res2
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:02 appdbtt.log0
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:03 appdbtt.log1
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:03 appdbtt.log2
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:04 appdbtt.log3
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:04 appdbtt.log4
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:04 appdbtt.log5
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:05 appdbtt.log6
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:05 appdbtt.log7
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:06 appdbtt.log8
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:06 appdbtt.log9
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:07 appdbtt.log10
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:07 appdbtt.log11
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:08 appdbtt.log12
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:08 appdbtt.log13
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:09 appdbtt.log14
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:09 appdbtt.log15
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:09 appdbtt.log16
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:10 appdbtt.log17
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:10 appdbtt.log18
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:11 appdbtt.log19
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:11 appdbtt.log20
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:12 appdbtt.log21
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:12 appdbtt.log22
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:13 appdbtt.log23
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:13 appdbtt.log24
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:14 appdbtt.log25
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:14 appdbtt.log26
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:15 appdbtt.log27
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:15 appdbtt.log28
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:16 appdbtt.log29
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:16 appdbtt.log30
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:16 appdbtt.log31
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:17 appdbtt.log32
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:17 appdbtt.log33
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:18 appdbtt.log34
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:18 appdbtt.log35
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:19 appdbtt.log36
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:19 appdbtt.log37
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:20 appdbtt.log38
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:20 appdbtt.log39
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:21 appdbtt.log40
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:21 appdbtt.log41
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:22 appdbtt.log42
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:22 appdbtt.log43
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:22 appdbtt.log44
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:23 appdbtt.log45
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:23 appdbtt.log46
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:24 appdbtt.log47
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:25 appdbtt.log48
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:25 appdbtt.log49
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:25 appdbtt.log50
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:26 appdbtt.log51
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:26 appdbtt.log52
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:27 appdbtt.log53
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:27 appdbtt.log54
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:28 appdbtt.log55
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:28 appdbtt.log56
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:29 appdbtt.log57
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:29 appdbtt.log58
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:30 appdbtt.log59
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:30 appdbtt.log60
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:31 appdbtt.log61
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:31 appdbtt.log62
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:32 appdbtt.log63
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:32 appdbtt.log64
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:33 appdbtt.log65
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:33 appdbtt.log66
    -rw-rw-rw- 1 timesten timesten 487444480 Dec 07 11:33 appdbtt.ds0
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:34 appdbtt.log67
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:34 appdbtt.log68
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:35 appdbtt.log69
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:35 appdbtt.log70
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:35 appdbtt.log71
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:36 appdbtt.log72
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:36 appdbtt.log73
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:37 appdbtt.log74
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:37 appdbtt.log75
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:38 appdbtt.log76
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:38 appdbtt.log77
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:39 appdbtt.log78
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:39 appdbtt.log79
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:40 appdbtt.log80
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:40 appdbtt.log81
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:41 appdbtt.log82
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:41 appdbtt.log83
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:42 appdbtt.log84
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:42 appdbtt.log85
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:43 appdbtt.log86
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:43 appdbtt.log87
    -rw-rw-rw- 1 timesten timesten 632098816 Dec 07 11:43 appdbtt.ds1
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:44 appdbtt.log88
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:45 appdbtt.log89
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:45 appdbtt.log90
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:46 appdbtt.log91
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:46 appdbtt.log92
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:46 appdbtt.log93
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:47 appdbtt.log94
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:47 appdbtt.log95
    -rw-rw-rw- 1 timesten timesten 4767744 Dec 07 11:47 appdbtt.log96
    As you can see, I have 67 log files that are older than the oldest checkpoint file. Now if I connect to the data store in ttIsql and call ttLogHolds, I get:
    Command> call ttLogHolds();
    < 0, 38034792, Replication , APPDBTT:_ORACLE >
    < 67, 44319520, Checkpoint , appdbtt.ds0 >
    < 88, 45855168, Checkpoint , appdbtt.ds1 >
    3 rows found.
    What could the problem be?
    Thanks in advance,
    Dave

    This bookmark
    < 0, 38034792, Replication , APPDBTT:_ORACLE >
    indicates that the AWT bookmark hasn't moved past log file 0. AWT is performed by the replication agent, which maintains a bookmark to track how far it has read through the transaction log files looking for transactions against any AWT cache groups. It looks as though a transaction against an AWT cache group has not been committed, which means it cannot be sent to Oracle and acknowledged, so the bookmark cannot move on. Once the bookmark moves into a new transaction log file, all older log files can be purged.
    You might be able to identify any uncommitted transaction by using ttXactAdmin and checking for locks held against AWT cache groups.
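
    As a rough sketch of how to check (the DSN name appdbtt is taken from the file listing above; the output format varies by TimesTen release):

    # Show outstanding transactions on the data store, including the
    # locks each one holds; look for long-lived transactions touching
    # tables that belong to AWT cache groups
    ttXactAdmin appdbtt

    # Re-check which operations are pinning the oldest log file
    ttIsql -e "call ttLogHolds; quit;" appdbtt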
