WSS3 ULS log files blank or only 1 KB

Our WSS3 ULS log files are blank; one or two files throughout the day reach about 1 KB and contain only the following entry:
11/11/2014 11:01:34.55 wsstracing.exe (0x09D0) 0x7C30 ULS Logging Unified Logging Service uls1 Monitorable Tracing Service lost trace events. Current value 45.
I have double-checked in Central Admin >> Diagnostic Logging: the trace log path points to the local 12 hive. I tried stopping and starting the Windows SharePoint Services Tracing service, then stopping and starting the Windows SharePoint Services Timer service, with no effect on the ULS
log files; they are still blank. The WSS3 version is 12.0.0.6421. Any ideas or suggestions? Regards, Xun

Hello,
You may try running the "stsadm -o listlogginglevels" command and then setting the logging levels back to the default.
https://social.technet.microsoft.com/Forums/office/en-US/d8fa7b2f-f9b4-4c49-8c2b-5b1c73a4d717/empty-trace-log-files-inside-12logs-folder-moss2007?forum=sharepointadminlegacy
Also check the tracing service:
http://techdhaan.wordpress.com/2009/04/20/moss-2007-uls-logs-folder-in-12-hive-is-empty/
Let us know your result
Hemendra

Similar Messages

  • DATE fields and LOG files in context with external tables

    I am facing two problems with the external tables feature in Oracle 9i.
    I created an external table with some fields of the DATE data type. There were no issues during creation, but when I query the table the DATE fields are not selected properly, even though the data is present in the files. Any ideas on how to deal with this?
    My next question is about the log files. The log file keeps growing as the external tables are queried. Is there a way to control this behaviour?
    Suggestions / advice on the above two issues are welcome.
    Thanks
    Lakshminarayanan

    Hi
    If you have DATE datatypes then:
    select
    greatest(TABCASER1.CASERRECIEVEDDATE, EVCASERS.FINALEVDATES, EVCASERS.PUBLICATIONDATE, EVCASERS.PUBLICATIONDATE, TABCASER.COMPAREACCEPDATE)
    from TABCASER, TABCASER1, EVCASERS
    where ... -- join and other conditions
    1. greatest is good enough
    2. to_date creates a DATE datatype from a string using the given format string ('mm/dd/yyyy')
    3. decode(a, b, c, d) is a function: if a = b then return c, else d. NULL means there is no data in that cell of the table.
    4. To format a date for display, use the to_char function with a format model, as in the to_date function (see the sketch after this reply).
    Ott Karesz
    http://www.trendo-kft.hu
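    A minimal, hedged sketch of points 2 and 4 above, runnable against DUAL (the date literals are made up for illustration):
    -- to_date builds a DATE from a string using a format mask;
    -- to_char formats a DATE for display using a format model;
    -- greatest returns the latest of the supplied dates.
    SELECT TO_CHAR(TO_DATE('11/30/2009', 'mm/dd/yyyy'), 'DD-MON-YYYY') AS formatted_date,
           GREATEST(TO_DATE('01/15/2009', 'mm/dd/yyyy'),
                    TO_DATE('03/20/2009', 'mm/dd/yyyy'))               AS later_date,
           DECODE('A', 'A', 'matched', 'not matched')                  AS decode_demo
    FROM   dual;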

  • DataGuard on Windows 9.2.0.1 - log file transfer interrupted with a big redo log

    OS: Windows
    Oracle 9.2.0.1
    Primary: service_name orcl1, db_name orcl1
    Standby: service_name orcl2, db_name orcl1
    Same directory structure, distributed on different VMware machines but connected through a real physical fibre network; the two nodes are more than 20 km apart.
    LOG FILE - 100M
    MAXIMUM PERFORMANCE MODE
    We get a successful result when we issue 'alter system switch logfile' manually; the log is usually smaller than 20 MB.
    But when we try to switch a full redo log the error occurs and the log cannot be transferred to the standby site.
    The transfer seems to be interrupted for some unexplained reason.
    We checked the network ping, the lsnrctl service status, the Data Guard configuration and the Windows TCP/IP configuration, but reached no conclusion.
    We will go crazy!! Help.
    The log trace produced with log_archive_trace=128 on the primary site shows:
    Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
    Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM PERFORMANCE mode
    - Created archivelog as 'C:\ORACLE\ORAARCH\ARC00095.001'
    *** 2010-09-02 15:30:39.000
    Fail to ping standby 'orcl2', error = 12571
    Error 12571 when pinging standby orcl2.
    *** 2010-09-02 15:30:39.000
    kcrrfail: dest:2 err:12571 force:0
    *** 2010-09-02 15:31:40.000
    Fail to ping standby 'orcl2', error = 1010
    Error 1010 when pinging standby orcl2.
    *** 2010-09-02 15:31:41.000
    kcrrfail: dest:2 err:1010 force:0
    *** 2010-09-02 15:32:32.000
    Setting trace level: 31 (1f)
    *** 2010-09-02 15:32:32.000
    ARC0: Evaluating archive log 3 thread 1 sequence 97
    VALIDATE
    PREPARE
    *** 2010-09-02 15:32:32.000
    Acquiring global enqueue on thread 1 sequence 97
    *** 2010-09-02 15:32:32.000
    Acquired global enqueue on thread 1 sequence 97
    INITIALIZE
    SPOOL
    *** 2010-09-02 15:32:32.000
    ARC0: Beginning to archive log 3 thread 1 sequence 97
    *** 2010-09-02 15:32:32.000
    Creating archive destination LOG_ARCHIVE_DEST_2: 'orcl2'
    Network re-configuration required
    Detaching RFS server from standby instance at 'orcl2'
    RFS message number 151
    Error 1010 detaching RFS from standby instance at host 'orcl2'
    Disconnecting from destination LOG_ARCHIVE_DEST_2 standby host 'orcl2'
    Ignoring kcrrvnc() detach error 1010
    Primary database is in CLUSTER CONSISTENT mode
    Primary database is in MAXIMUM PERFORMANCE mode
    Connecting to destination LOG_ARCHIVE_DEST_2 standby host 'orcl2'
    Attaching RFS server to standby instance at 'orcl2'
    RFS message number 152
    Dest LOG_ARCHIVE_DEST_2 standby mount ID: '42590f20'
    Standby database restarted; old mount ID 0x4258a5ae now 0x42590f20
    Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
    Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM PERFORMANCE mode
    Issuing standby Create archive destination at 'orcl2'
    RFS message number 153
    *** 2010-09-02 15:32:32.000
    Creating archive destination LOG_ARCHIVE_DEST_1: 'C:\ORACLE\ORAARCH\ARC00097.001'
    - Created archivelog as 'C:\ORACLE\ORAARCH\ARC00097.001'
    Dest LOG_ARCHIVE_DEST_1 primary mount ID: '0x42586021'
    Archiving block 1 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 1 count 2048 to 'orcl2'
    RFS message number 154
    Archiving block 1 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 2049 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 2049 count 2048 to 'orcl2'
    RFS message number 155
    Archiving block 2049 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 4097 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 4097 count 2048 to 'orcl2'
    RFS message number 156
    Archiving block 4097 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 6145 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 6145 count 2048 to 'orcl2'
    RFS message number 157
    Archiving block 6145 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 8193 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 8193 count 2048 to 'orcl2'
    RFS message number 158
    Archiving block 8193 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 10241 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 10241 count 2048 to 'orcl2'
    RFS message number 159
    Archiving block 10241 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 12289 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 12289 count 2048 to 'orcl2'
    RFS message number 160
    Archiving block 12289 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 14337 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 14337 count 2048 to 'orcl2'
    RFS message number 161
    Archiving block 14337 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 16385 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 16385 count 2048 to 'orcl2'
    RFS message number 162
    Archiving block 16385 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 18433 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 18433 count 2048 to 'orcl2'
    RFS message number 163
    Archiving block 18433 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 20481 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 20481 count 2048 to 'orcl2'
    RFS message number 164
    Archiving block 20481 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 22529 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 22529 count 2048 to 'orcl2'
    RFS message number 165
    Archiving block 22529 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 24577 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 24577 count 2048 to 'orcl2'
    RFS message number 166
    Archiving block 24577 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 26625 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 26625 count 2048 to 'orcl2'
    RFS message number 167
    Archiving block 26625 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 28673 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 28673 count 2048 to 'orcl2'
    RFS message number 168
    Archiving block 28673 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 30721 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 30721 count 2048 to 'orcl2'
    RFS message number 169
    Archiving block 30721 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 32769 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 32769 count 2048 to 'orcl2'
    RFS message number 170
    Archiving block 32769 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 34817 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 34817 count 2048 to 'orcl2'
    RFS message number 171
    Archiving block 34817 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 36865 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 36865 count 2048 to 'orcl2'
    RFS message number 172
    Archiving block 36865 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 38913 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 38913 count 2048 to 'orcl2'
    RFS message number 173
    Archiving block 38913 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 40961 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 40961 count 2048 to 'orcl2'
    RFS message number 174
    Archiving block 40961 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 43009 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 43009 count 2048 to 'orcl2'
    RFS message number 175
    Archiving block 43009 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 45057 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 45057 count 2048 to 'orcl2'
    RFS message number 176
    *** 2010-09-02 15:33:22.000
    RFS network connection lost at host 'orcl2'
    Error 3114 writing standby archive log file at host 'orcl2'
    *** 2010-09-02 15:33:22.000
    ARC0: I/O error 3114 archiving log 3 to 'orcl2'
    *** 2010-09-02 15:33:22.000
    kcrrfail: dest:2 err:3114 force:0
    Local destination LOG_ARCHIVE_DEST_1 is still active
    ORA-03114: not connected to ORACLE
    Archiving block 45057 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 47105 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 49153 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 51201 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 53249 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 55297 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 57345 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 59393 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 61441 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 63489 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 65537 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 67585 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 69633 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 71681 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 73729 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 75777 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 77825 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 79873 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 81921 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 83969 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 86017 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 88065 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 90113 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 92161 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 94209 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 96257 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 98305 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 100353 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 102401 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 104449 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 106497 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 108545 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 110593 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 112641 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 114689 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 116737 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 118785 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 120833 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 122881 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 124929 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 126977 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 129025 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 131073 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 133121 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 135169 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 137217 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 139265 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 141313 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 143361 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 145409 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 147457 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 149505 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 151553 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 153601 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 155649 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 157697 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 159745 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 161793 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 163841 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 165889 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 167937 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 169985 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 172033 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 174081 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 176129 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 178177 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 180225 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 182273 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 184321 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 186369 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 188417 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 190465 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 192513 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 194561 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 196609 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 198657 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 200705 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 202753 count 2024 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Closing archive destination LOG_ARCHIVE_DEST_1: C:\ORACLE\ORAARCH\ARC00097.001
    FINISH
    Archival failure destination LOG_ARCHIVE_DEST_2: 'orcl2'
    Archival success destination LOG_ARCHIVE_DEST_1: 'C:\ORACLE\ORAARCH\ARC00097.001'
    COMPLETE, min-succeed count met
    *** 2010-09-02 15:33:27.000
    ArchivedLog entry added for thread 1 sequence 97 ID 0x42585a2b: C:\ORACLE\ORAARCH\ARC00097.001
    Marking [1] log 3 thread 1 sequence 97 spooled
    Updating thread 1 sequence 97 archive SCN 0:4503061
    Scanning 'to be archived' list': kcrrdal
    log 2 thread 1 sequence 98
    Completed 'to be archived' list
    *** 2010-09-02 15:33:27.000
    Releasing global enqueue
    ARCHIVED
    *** 2010-09-02 15:33:27.000
    ARC0: Completed archiving log 3 thread 1 sequence 97
    Scanning 'to be archived' list': kcrrwk
    log 2 thread 1 sequence 98
    Completed 'to be archived' list
    Scanning 'to be archived' list': kcrrwk
    log 2 thread 1 sequence 98
    Completed 'to be archived' list
    *** 2010-09-02 15:34:29.000
    ARC0: Heartbeat ticks... (thread 1)
    Establishing link for destination LOG_ARCHIVE_DEST_2 to standby orcl2
    Primary database is in CLUSTER CONSISTENT mode
    Primary database is in MAXIMUM PERFORMANCE mode
    Connecting to destination LOG_ARCHIVE_DEST_2 standby host 'orcl2'
    Attaching RFS server to standby instance at 'orcl2'
    RFS message number 177
    Dest LOG_ARCHIVE_DEST_2 standby mount ID: '42590f20'
    Pinging destination LOG_ARCHIVE_DEST_2 at standby orcl2
    RFS message number 178
    Not in RAC mode
    *** 2010-09-02 15:35:30.000
    ARC0: Heartbeat ticks... (thread 1)
    Establishing link for destination LOG_ARCHIVE_DEST_2 to standby orcl2
    Pinging destination LOG_ARCHIVE_DEST_2 at standby orcl2
    RFS message number 179
    Not in RAC mode
    *** 2010-09-02 15:36:22.000
    ARC0: Heartbeat ticks... (thread 1)
    Establishing link for destination LOG_ARCHIVE_DEST_2 to standby orcl2
    Pinging destination LOG_ARCHIVE_DEST_2 at standby orcl2
    RFS message number 180
    Not in RAC mode
    *** 2010-09-02 15:36:39.000
    Setting trace level: 128 (80)
    Setting trace level: 128 (80)
    Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
    Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM PERFORMANCE mode
    - Created archivelog as 'C:\ORACLE\ORAARCH\ARC00099.001'
    Setting trace level: 128 (80)
    *** 2010-09-02 15:37:32.000
    Setting trace level: 128 (80)

    Something is going on in your network:
    RFS network connection lost at host 'orcl2'
    Error 3114 writing standby archive log file at host 'orcl2'
    Your network administrators may be able to help.

  • NIPING LOG File in IFS with CCSID(500) not readable in Windows Explorer

    Hi together,
    When we create log files with NIPING or with saprouter, these log files are created in the IFS with CCSID(500).
    How can I read these files with Windows Explorer?
    All CCSID(500) files look like gibberish when we open them.
    best regards,
    Carsten Schulz

    Hi Carsten,
    do you use a binary share when accessing the file from a Windows PC, or did you configure text conversion? In System i Navigator, open the Properties of the share in question and check the "Text Conversion" tab. If you enable text conversion, you can also limit it to certain file extensions. Make sure the log files that you are viewing use one of the extensions specified there, or that you have specified * so that all extensions are converted.
    If that does not help, there may be a mismatch between the CCSID tagging on the file and the contents in the file. This will be shown if you look at the file through WRKLNK option 5 (DSPF).
    Kind regards,
    Christian Bartels.

  • Log file sync waits with null sql_ids

    10.2.0.3
    I am querying V$ACTIVE_SESSION_HISTORY to drill into log file sync waits.
    select sql_id,sum(time_waited)
    from v$active_session_history
    where sample_time > sysdate - 1/24
    group by sql_id
    order by 2 desc
    All of my top sessions for this have null sql_ids. I did some Google searches and these are the explanations I found for null sql_ids. There are some other sessions where the sql_id is not null, but they are not anywhere near the top.
    1. Could be running PL/SQL. OK, but I would need to run DML and issue a commit for this event to fire.
    2. No SQL is running. Does this mean the insert finished and I am now waiting on the 'commit' part?
    I want to track these SQLs down so I can trace them back to the application. I want to get the developers to limit their commit frequency and use batch (array-based) DML. How do I track this down?
    Also, is there any way to figure out how often different users are committing? I want to track down the worst offenders. It could be that some parts of the application commit periodically and others do not, but log file syncs could slow everyone down.

    You are either bored or suffer from Compulsive Tuning Disorder.
    It can be a challenge to solve a problem that only exists between your ears.
    Post results from the SQL below:
    SELECT sql_id,
           SUM(time_waited) / 1000000
    FROM   v$active_session_history
    WHERE  sample_time > SYSDATE - 1 / 24
           AND time_waited > 0
    GROUP  BY sql_id
    ORDER  BY 2 DESC
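    As for the follow-up question about which users are committing: a hedged sketch along the same lines, joining ASH to DBA_USERS so the 'log file sync' time can be attributed to database users and sessions (column names as in 10.2; adjust as needed):
    -- Attribute 'log file sync' wait time to users/sessions over the last hour
    SELECT u.username,
           ash.session_id,
           COUNT(*)                       AS samples,
           SUM(ash.time_waited) / 1000000 AS seconds_waited
    FROM   v$active_session_history ash
           JOIN dba_users u ON u.user_id = ash.user_id
    WHERE  ash.sample_time > SYSDATE - 1 / 24
           AND ash.event = 'log file sync'
           AND ash.time_waited > 0
    GROUP  BY u.username, ash.session_id
    ORDER  BY seconds_waited DESC;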

  • Modellog.log file being moved with inappropriate name. Startup is now failing.

    I ran an ALTER DATABASE command to move the model database to a new location.  This was a DISA STIG security requirement.  It is something I have done dozens of times successfully, but yesterday I went brain dead.
    ALTER DATABASE Model MODIFY FILE ( NAME = modellog , FILENAME = 'C:\dir\modellog. log')
    I should have proofread it better, but I just plain missed it.
    Notice the embedded space between the '.' and 'log'. Even though the file has been copied to the correct location, it is still called modellog.log, not 'modellog. log' as expected by the startup process.
    Is there any way to change the name the startup process is searching for? Or is there a way to rename the file so it does contain an embedded space after the '.'? I tried using a rename command without luck.
    Thoughts?
    Just my thoughts tomh

    Start sql server in command prompt using these startup parameters
    /f /m /t3608
    net start mssqlserver /f /m /t3608
    Once it is started, run the ALTER command without the space (see the corrected statement below). Once this is done, shut down SQL Server and start it normally.
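    For reference, a corrected version of the statement from the original post (the path is the poster's own placeholder), with no space in the physical file name:
    -- Corrected: no space between '.' and 'log'
    ALTER DATABASE model
    MODIFY FILE (NAME = modellog, FILENAME = 'C:\dir\modellog.log');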
    Trace flag 3608 prevents SQL Server from automatically starting and recovering any database except the master database. If activities that require tempdb are initiated, then model is recovered and tempdb is created. Other databases will be started and recovered when accessed. Some features, such as snapshot isolation and read committed snapshot, might not work. Use it for moving system databases and moving user databases. Do not use it during normal operation.
    Note: Please don't keep the SQL Server database files on the C drive; that includes the system database files. Please move them to a drive other than the system drive C.
    Regards, Ashwin Menon My Blog - http:\\sqllearnings.com

  • 888k Error in ULS Logs for File System Cache

    Hello,
    We have a SharePoint 2010 farm in a three-tier architecture with multiple WFEs and APP servers.
    Roughly once a week we will have a number of WFEs seize up and jump to 100% CPU usage. Usually they come in pairs; two servers will jump to 100% at the same time while all the other servers are fine in the 20% - 50% range.
    Corresponding to the 100% CPU spike, the following appear in the ULS logs:
    "File system cache monitor encoutered error, flushing in memory cache: System.IO.InternalBufferOverflowException: Too many changes at once in directory:C:\ProgramData\Microsoft\SharePoint\Config\<GUID>\."
    When these appear, the ULS logs show hundreds of these entries back-to-back, flooding the logs.
    I have yet to figure out how to stop these and bring the CPU usage down while the incident is happening, and how to prevent them in the future.
    While the incident is happening, I have tried clearing the configuration cache, shutting the timer jobs down on each server, deleting all the files but config.ini in the folder listed above, changing config.ini to 1, and restarting the timer. The CPU will
    drop momentarily during this process, but as soon as all the timer jobs are restarted the CPUs jump back to 100% on the same servers.
    This week as part of my weekly maintenance I thought I'd be proactive and clear the cache even though the behavior wasn't happening, and all CPUs were normal. As soon as I finished, the CPU on two servers that were previously fine jumped to 100% and wouldn't
    come down. Needless to say, users complain of latency when servers are at 100% CPU.
    So I am frustrated. The only thing I have found that works when the CPUs jump to 100% with these errors is a reboot. Nothing else works, including an IISReset and stopping/starting the admin and timer job services. These being production systems, reboots during the
    middle of the day are bad.
    Any ideas? I have scoured the Internet resources on this error and have come up relatively empty-handed. All the articles reference clearing the configuration cache, which, in my instance, does not get rid of these issues, and can even trigger them.
    Thanks,
    Joseph Irvine

    Take a look at http://support.microsoft.com/kb/952167 for the list of recommended exclusions per Microsoft.
    Trevor Seward

  • Problems with Ericsson F3507g card - does anyone understand Access Connections log files?

    I am having big problems getting an f3507g mobile broadband card working on a T500 running XP Pro.
    Have had problems with several machines all supplied in the same batch and it turns out that the BIOS needs to be reset to defaults post installation of the card for it to be properly recognised by the OS.
    Anyway, all machines are now fixed with the exception of one....
    Everything looks okay on the surface - the F3507g card shows up under Device Manager, and just to be sure I've reinstalled Access Connections and loaded the drivers for the card.  I know that the driver install was successful because it popped up with the standard driver install success message - 'Windows found new hardware...' - about half a dozen times immediately following the install.  Having done the driver install I attempted a connection, and the CA Internet Security Suite firewall popped up asking whether I wanted to accept the device as a 'Safe' device - I clicked yes to that.
    However, attempting to connect never works.
    I've tried enabling logging and the log file produced together with a fault report is here.
    I have tried the SIM card under test in another machine and it works. I have also swapped the HDD out of another identical brand new machine and it works with the hardware including the Ericsson card - so the issue has to be software related...
    I could reimage, but the user will lose some graphic design application software which is very difficult to reinstall.
    Hope someone that understands Access Connections WWAN debug log files or has had a similar experience might be able to point me in the right direction...
    Many thanks

    @ Deadloss,
    have you checked the power settings in the card's properties? Could it be that Windows is turning it off?
    Are you using a factory image or your own custom image? I'm thinking that maybe a reimage might help.
    @ schm1tty,
    T-Mobile SIMs work fine, what problems are you having? Maybe you could start a new thread so as not to hijack this one.
    Andy

  • Problems with Log files

    hello everybody.
    I am using java.util.logging package to create log files.
    I have to generate log files and append to the file based on a condition.
    When I write a message to a single log file, it is written to all the log files.
    Code
    ==========================
    //To create error_0.log file
    import java.util.logging.FileHandler;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;
    public class NetLogger {
        private String pattern = "./log/error_%g.log";
        private int limit = 1000000; // 1 MB per file
        private int numLogFiles = 300;
        private FileHandler fh = null;
        private Logger logger = null;
        public NetLogger() {
            try {
                fh = new FileHandler(pattern, limit, numLogFiles);
                fh.setFormatter(new SimpleFormatter());
                // NOTE: this is the same logger name that WhoIsLogger uses below,
                // so both FileHandlers end up attached to one Logger instance and
                // every record is written to both files.
                logger = Logger.getLogger("com.netenforcers");
                logger.setUseParentHandlers(false);
                logger.addHandler(fh);
                //logger.setLevel(Level.ALL);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
        public Logger getLogger() {
            return this.logger;
        }
    }
    =======
    // To create whoIs_0.log
    import java.util.logging.FileHandler;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;
    public class WhoIsLogger {
        private String pattern = "./log/whoIs_%g.log";
        private int limit = 1000000; // 1 MB per file
        private int numLogFiles = 300;
        private FileHandler fh = null;
        private Logger logger = null;
        public WhoIsLogger() {
            try {
                fh = new FileHandler(pattern, limit, numLogFiles);
                fh.setFormatter(new SimpleFormatter());
                // NOTE: same logger name as in NetLogger above.
                logger = Logger.getLogger("com.netenforcers");
                logger.setUseParentHandlers(false);
                logger.addHandler(fh);
                //logger.setLevel(Level.ALL);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
        public Logger getLogger() {
            return this.logger;
        }
    }
    ========
    I am calling these two loggers using:
    if (true) {
        NetLogger logger = new NetLogger();
        logger.getLogger().info("Hi to be written to error log");
    } else {
        WhoIsLogger whoislogger = new WhoIsLogger();
        whoislogger.getLogger().info("Hi to be written to whois log");
    }
    =========
    But both log files end up receiving every message.
    Please help me.
    thnx,
    raj


  • When trying to create a site collection, a blank screen with text ACK is shown

    When I try to create a site collection, I end up with a blank site showing the text ACK. Can anyone please help me fix this issue?

    Hi,
    According to your post, my understanding is that a blank site with the text ACK is shown when you create a site collection.
    To narrow down the scope of the issue, I recommend that you test the following things.
      1.  Create some new site collections of different types in this web application to check whether the issue occurs only for this one site collection.
      2.  Create a new site collection in another web application to check whether the issue occurs only in this web application.
      3.  Create a new web application and a new site collection in it to test whether that works.
    If the issue persists, please check the ULS log file to find more information about this issue.
    For SharePoint 2013, by default, ULS log is at C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\LOGS.
    Best Regards,
    Yumi Fu

  • Office Web Apps - "Could not find trace string in ULS logs" unhealthy?

    I have reviewed everything I could find on unhealthy WAC clusters as my problem seems unrelated to certificate or missing components.  I've already digested
    http://www.wictorwilen.se/office-web-apps-server-2013---machines-are-always-reported-as-unhealthy (Thanks Wictor).
    The particular configuration is Office Web Apps 2013 ([X-OfficeVersion, 15.0.4551.1005]), running on top of Windows Server 2012, configured for HTTP access (SSL-offloaded NLB cluster), and linked to Exchange 2013, Lync 2013 and SharePoint
    2013.  Everything works as expected from the client side after setting IIS ARR to handle all the reverse proxy bits.
    FarmOU                            :
    InternalURL                       : https://officeapps.fqdn/
    ExternalURL                       : https://officeapps.fqdn/
    AllowHTTP                         : True
    SSLOffloaded                      : True
    CertificateName                   :
    EditingEnabled                    : True
    LogLocation                       : C:\ProgramData\Microsoft\OfficeWebApps\Data\Logs\ULS
    LogRetentionInDays                : 7
    LogVerbosity                      : Unexpected
    Proxy                             :
    CacheLocation                     : C:\ProgramData\Microsoft\OfficeWebApps\Working\d
    MaxMemoryCacheSizeInMB            : 75
    DocumentInfoCacheSize             : 5000
    CacheSizeInGB                     : 15
    ClipartEnabled                    : False
    TranslationEnabled                : False
    MaxTranslationCharacterCount      : 125000
    TranslationServiceAppId           :
    TranslationServiceAddress         :
    RenderingLocalCacheLocation       : C:\ProgramData\Microsoft\OfficeWebApps\Working\waccache
    RecycleActiveProcessCount         : 5
    AllowCEIP                         : False
    ExcelRequestDurationMax           : 300
    ExcelSessionTimeout               : 450
    ExcelWorkbookSizeMax              : 50
    ExcelPrivateBytesMax              : -1
    ExcelConnectionLifetime           : 1800
    ExcelExternalDataCacheLifetime    : 300
    ExcelAllowExternalData            : True
    ExcelWarnOnDataRefresh            : True
    OpenFromUrlEnabled                : False
    OpenFromUncEnabled                : True
    OpenFromUrlThrottlingEnabled      : True
    PicturePasteDisabled              : True
    RemovePersonalInformationFromLogs : False
    AllowHttpSecureStoreConnections   : False
    Machines                          : {WAC15PD-02, WAC15PD-01}
    The problem, however, is incessant logging on the WAC cluster nodes of events 1204 and 2204, followed almost immediately by 1004 and 2004. This repeats every 4 minutes or so...
    - <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
    - <System>
      <Provider
    Name="Office Web Apps Monitoring" />
      <EventID
    Qualifiers="0">1204</EventID>
      <Level>2</Level>
      <Task>1</Task>
      <Keywords>0x80000000000000</Keywords>
      <TimeCreated
    SystemTime="2014-02-04T20:49:37.000000000Z" />
      <EventRecordID>3043246</EventRecordID>
      <Channel>Microsoft Office Web Apps</Channel>
      <Computer>wac15pd-01.fqdn</Computer>
      <Security
    />
      </System>
    - <EventData>
      <Data><?xml version="1.0" encoding="utf-16"?> <HealthReport xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <HealthMessage>UlsControllerWatchdog reported status for UlsController in category 'Verify Trace Logging'. Reported status: Could not find trace string in ULS logs in C:\ProgramData\Microsoft\OfficeWebApps\Data\Logs\ULS.</HealthMessage>
    <ComponentOwner>ServicesInfrastructure</ComponentOwner>
    </HealthReport></Data>
      </EventData>
     </Event>
    - <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
    - <System>
      <Provider
    Name="Office Web Apps Monitoring" />
      <EventID
    Qualifiers="0">2204</EventID>
      <Level>2</Level>
      <Task>1</Task>
      <Keywords>0x80000000000000</Keywords>
      <TimeCreated
    SystemTime="2014-02-04T20:49:37.000000000Z" />
      <EventRecordID>3043247</EventRecordID>
      <Channel>Microsoft Office Web Apps</Channel>
      <Computer>wac15pd-01.fqdn</Computer>
      <Security
    />
      </System>
    - <EventData>
      <Data><?xml version="1.0" encoding="utf-16"?> <HealthReport xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <HealthMessage>UlsControllerWatchdog reported status for UlsController in category 'Verify Trace Logging'. Reported status: Could not find trace string in ULS logs in
    C:\ProgramData\Microsoft\OfficeWebApps\Data\Logs\ULS.</HealthMessage> <ComponentOwner>ServicesInfrastructure</ComponentOwner>
    </HealthReport></Data>
      </EventData>
      </Event>
    - <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
    - <System>
      <Provider
    Name="Office Web Apps Monitoring" />
      <EventID
    Qualifiers="0">1004</EventID>
      <Level>2</Level>
      <Task>10002</Task>
      <Keywords>0x80000000000000</Keywords>
      <TimeCreated
    SystemTime="2014-02-04T20:49:39.000000000Z" />
      <EventRecordID>3043266</EventRecordID>
      <Channel>Microsoft Office Web Apps</Channel>
      <Computer>wac15pd-01.fqdn</Computer>
      <Security
    />
      </System>
    - <EventData>
      <Data><?xml version="1.0" encoding="utf-16"?> <HealthReport xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <HealthMessage>AgentManagerWatchdog reported status for
    AgentManagerWatchdog in category 'Recent Watchdog Reports'. Reported status: Machine health is Unhealthy</HealthMessage> </HealthReport></Data>
      </EventData>
     </Event>
    - <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
    - <System>
      <Provider
    Name="Office Web Apps Monitoring" />
      <EventID
    Qualifiers="0">2004</EventID>
      <Level>2</Level>
      <Task>10002</Task>
      <Keywords>0x80000000000000</Keywords>
      <TimeCreated
    SystemTime="2014-02-04T20:49:39.000000000Z" />
      <EventRecordID>3043267</EventRecordID>
      <Channel>Microsoft Office Web Apps</Channel>
      <Computer>wac15pd-01.fqdn</Computer>
      <Security
    />
      </System>
    - <EventData>
      <Data><?xml version="1.0" encoding="utf-16"?> <HealthReport xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <HealthMessage>AgentManagerWatchdog reported status for
    AgentManagerWatchdog in category 'Recent Watchdog Reports'. Reported status: Machine health is Unhealthy</HealthMessage> </HealthReport></Data>
      </EventData>
      </Event>
    Further exploration of the ULS log files (C:\ProgramData\Microsoft\OfficeWebApps\Data\Logs\ULS) did not yield much, except the following:
    02/04/2014 20:48:04.48  UlsControllerWatchdog.exe (0x1244)       0x0F60 Services Infrastructure        Uls Controller Watchdog        ajbam Assert 
     We're about to trace a string for category MsoSpUlsControllerWatchdog at level Info and we expect to find in the log later, but it appears that the category has been throttled. We will never be able to find the string and this watchdog will always fail.
    StackTrace:   at Microsoft.Office.Web.UlsControllerWatchdog.Program.CheckServiceInstance(ServiceInstance serviceInstance)     at Microsoft.Office.Web.Common.WatchdogHelperThreadManager.GetHealthResults(WatchdogExecutionContext
    context, ServiceInstance si)     at Microsoft.Office.Web.Common.WatchdogHelperThreadManager.WatchingThreadMethod(Object o)     at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback
    callback, Object state, Boolean preserveSyncCtx)     at System.Threading.ExecutionContext.Ru... 345fbec5-e958-4f1f-bf56-d65c1c0d472a
    02/04/2014 20:48:04.48* UlsControllerWatchdog.exe (0x1244)       0x0F60 Services Infrastructure        Uls Controller Watchdog        ajbam Assert 
     ...n(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)     at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()    
    at System.Threading.ThreadPoolWorkQueue.Dispatch()   345fbec5-e958-4f1f-bf56-d65c1c0d472a
    02/04/2014 20:48:05.52  UlsControllerWatchdog.exe (0x1244)       0x0F60 Services Infrastructure        Services Infrastructure Health adhog Unexpected Health report
    by UlsControllerWatchdog: Agent: UlsController, eventId: 1204, eventType: Error, categoryId: 1, eventMessage: <?xml version="1.0" encoding="utf-16"?>  <HealthReport xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">    <HealthMessage>UlsControllerWatchdog reported status for UlsController in category 'Verify Trace Logging'. Reported
    status: Could not find trace string in ULS logs in C:\ProgramData\Microsoft\OfficeWebApps\Data\Logs\ULS.</HealthMessage>    <ComponentOwner>ServicesInfrastructure</ComponentOwner>  </HealthReport> 345fbec5-e958-4f1f-bf56-d65c1c0d472a
    02/04/2014 20:48:05.52  UlsControllerWatchdog.exe (0x1244)       0x0F60 Services Infrastructure        Services Infrastructure Health adhoh Unexpected Health report
    by UlsControllerWatchdog (persistent): Agent: UlsController, eventId: 2204, eventType: Error, categoryId: 1 345fbec5-e958-4f1f-bf56-d65c1c0d472a
    I suspect these might be related, but I can't seem to find any logical explanation for why this should cause Get-OfficeWebAppsMachine to report a HealthStatus of Unhealthy.  If they are related, is there a way to disable this check or remove the throttling in a safe way?  Alternatively, if this is some coding issue (I've not found any other blog/Q&A dealing with this in particular), it would be nice to get confirmation of that and potentially a fix/solution.
    Any help would be greatly appreciated. Thank you!

    Hi ChristiaanB,
    You get this ULS error because you changed the log verbosity of the OWA farm. I wrote an article about this on my blog: OWA unhealthy ULS issue.
    Regards,
    Wes

  • Crystal Report Server Database Log File Growth Out Of Control?

    We are hosting Crystal Report Server 11.5 on Microsoft SQL Server 2005 Enterprise.  Our Crystal Report Server SQL 2005 database file size = 6,272 KB, and the log file that goes with the database has a size = 23,839,552.
    I have been reviewing the application logs, and this log file auto-grows about three times a week.
    We backup the database each night, and run maintenance routines to Check Database Integrity, re-organize index, rebuild index, update statistics, and backup the database.
    Is it "Normal" to have such a large LOG file compared to the DATABASE file?
    Can you tell me if there is a recommended way to SHRINK the log file?
    Some technical documents suggest first truncating the log and then using the DBCC SHRINKFILE command:
    USE CRS
    GO
    --Truncate the log by changing the database recovery model to SIMPLE
    ALTER DATABASE CRS
    SET RECOVERY SIMPLE;
    --Shrink the truncated log file to 1 gigabyte
    DBCC SHRINKFILE (CRS_log, 1000);
    GO
    --Reset the database recovery model.
    ALTER DATABASE CRS
    SET RECOVERY FULL;
    GO
    Do you think this approach would help?
    Do you think this approach would cause any problems?

    My bad, you didn't put the K on the 2nd number.
    Looking at my SQL Server, that's crazy big; my logs are in the KBs, like 4-8.
    I think someone enabled some type of debugging on your SQL Server; it's more of a Microsoft issue, as our product doesn't require it, judging from my own SQL DBs.
    Regards,
    Tim
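    A hedged note on the thread above: a transaction log that keeps growing under the FULL recovery model usually means it is never being backed up; a regular log backup lets the log space be reused without switching recovery models. A minimal sketch, assuming the database is named CRS and using a hypothetical backup path:
    -- Back up the transaction log so its space can be reused (FULL recovery model)
    BACKUP LOG CRS TO DISK = 'D:\Backups\CRS_log.trn';
    -- Then, if the physical file must be shrunk once, as in the post above:
    DBCC SHRINKFILE (CRS_log, 1000);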

  • Error for Generating a log file

    Hi Cezar Santos,
    I am trying to generate a log file for ODI with details like who logged in and what they are doing.
    For this I am executing a command like:
    lagentscheduler.bat "-PORT=20910" "-NAME=localagent" "-V=2" > C:\OraHome_1\logs\agent1.log.
    But I am getting an error like:
    A JDK is required to execute Web Services with OracleDI. You are currently using a JRE.
    OracleDI: Starting Scheduler Agent ...
    Starting Oracle Data Integrator Agent...
    Version : 10.1.3.5 - 10/11/2008
    DwgJv.main: Exit. Return code:-1

    Just in case,
    the following message :
    A JDK is required to execute Web Services with OracleDI. You are currently using a JRE.
    is only a warning and not an error message....

  • Hardening & Keeping Log files in 10.9

    I'm not in IT, but I'm trying to harden our Macs to please a client.  I found several hardening tips and guides written for older versions of OS X, but none for 10.9.  Does anyone know of a hardening guide written with commands for 10.9?
    Right now I've found a guide written for 10.8 and have been mostly successful implementing it, except for a couple of sticking points.
    They suggested keeping security.log files for 30 days. I found out that security.log was removed and most of its functionality is now in authd.log, but I can't figure out how to keep authd logs for 30 days.  Does anyone know how I can set this?
    I also need to keep install.log for 30 days as well, but I don't see a way to control this in /etc/newsyslog.conf. Does anyone know how to set this too?
    Does anyone know if the following audit flags should still work: lo,ad,fd,fm,-all?
    I'm trying to keep system.log & appfirewall.log for 30 days as well. I've figured out these have moved from /etc/newsyslog.conf to /etc/asl.conf, but I'm not sure if I've set this correctly. Right now I have added "store_ttl=30" to these 2 lines in asl.conf.  Should this work? Is there a better way to do this?
              > system.log mode=0640 format=bsd rotate=seq compress file_max=5M all_max=100M store_ttl=30
              ? [= Facility com.apple.alf.logging] file appfirewall.log file_max=5M all_max=100M store_ttl=30

    Hi Alex...
    Jim,
    who came up with this solution????
    I got these solutions for creating log files and reconstructing the database from this forum a while back... probably last year sometime. Up until recently after doing this, there has been no problem - the server runs as it should.
    I dare to say pure luck.
    The reason I do this is because if I don't, the server does NOT automatically create new empty .log files, and when it fills the current log file, it "crashes" with "unkown mailbox path" displayed for all mailboxes.
    I would think you have some fundamental underlying issue there. I assume by the "unkown mailbox path" problem you mean a corrupt cyrus database?
    Yes, I believe that db corruption is the case...
    You should never ever manually modify anything inside cyrus' configuration database. This is just a disaster waiting to happen. If your database gets regularly corrupted, we need to investigate why. Many possible reasons: related processes crashing, disk failure, power failure/surges and so on.
    Aha!... about a month ago - thinking back to when this problem started - there was a power outage here, over a weekend! The hard drive was "kicked out" of the server box when I returned to work on that Monday... and that's when this problem started!
    I suggest you increase the logging level for a few days and keep an eye on things. Then post log extracts and /etc/imapd.conf and we'll take it from there.
    Alex
    Ok, thanks, will do!
    P.S. Download mailbfr from here:
    http://osx.topicdesk.com/downloads/
    This will allow you to easily rebuild if needed and, most important, to do proper backups of your mail services.
    Thanks for that, too. I will check it out and return to this forum with an update in the near future.
    Jim
    Mac OS X (10.3.9)

  • How to remove all log files at application end ?

    I need to remove all log files from the database directory.
    Only the data file should remain in the database directory after the application ends.
    I've tried:
    1 - set_flags(DB_LOG_AUTOREMOVE, 1);
    2 - txn_checkpoint(0, 0, DB_FORCE);
    But one log file always remains.
    Does anybody know how to remove all log files when the application ends?
    I really need this. How can I do that in C++?
    Thanks,
    DelNeto

    Here's how I solved it:
    // At end of app.
    // Flush the tables to disk.
    pdbParam->sync(0);
    pdbUser->sync(0);
    // Close the tables.
    pdbParam->close(0);
    pdbUser->close(0);
    // Delete the table objects.
    delete pdbParam;
    delete pdbUser;
    // Commit all changes to the database.
    penvDbEnv->txn_checkpoint(0, 0, DB_FORCE);
    penvDbEnv->close(0);
    delete penvDbEnv;
    // Removing all log files comes here: reopen a private environment.
    penvDbEnv = new DbEnv(0);
    u_int32_t ui32EnvFlags = DB_CREATE |
                             DB_PRIVATE |
                             DB_INIT_LOCK |
                             DB_INIT_LOG |
                             DB_INIT_MPOOL |
                             DB_THREAD |
                             DB_INIT_TXN;
    // Open the environment with full transactional support.
    int iResult = penvDbEnv->open("..\\database", ui32EnvFlags, 0);
    // Get the list of log files.
    char **pLogFilLis;
    char **pLogFilLisBegin;
    iResult = penvDbEnv->log_archive(&pLogFilLis, DB_ARCH_ABS | DB_ARCH_LOG);
    // This call resets the log sequence numbers in the database file,
    // so no log file is associated with the database any more.
    iResult = penvDbEnv->lsn_reset("..\\database\\DATABASE.db", 0);
    // Remove the log files.
    if (pLogFilLis != NULL)
    {
        for (pLogFilLisBegin = pLogFilLis; *pLogFilLis != NULL; ++pLogFilLis)
        {
            iResult = remove(*pLogFilLis);
        }
        free(pLogFilLisBegin);
    }
    // At this point no log files exist in the database directory.
    penvDbEnv->close(0);
    delete penvDbEnv;
    // If the environment files need to be removed as well, do this.
    penvDbEnv = new DbEnv(0);
    penvDbEnv->remove("..\\database", 0);
    delete penvDbEnv;
    Thanks to Bogdan Coman for showing me the way
    DelNeto.
