Log file is growing too big and too quickly on ECC6 EHP4 system

Hello there,
I have an ECC6 EHP4 system based on NW 7.01 with MSSQL as the back-end.
My log files are growing very fast. I have read a number of threads but have not found a clear answer on what to do. Please see the DEV_RFC0 and DEV_W0 log file excerpts below. Please help.
DEV_RFC0
Trace file opened at 20100618 065308 Eastern Daylight Time, SAP-REL 701,0,14 RFC-VER U 3 1016574 MT-SL
======> CPIC-CALL: 'ThSAPOCMINIT' : cmRc=2 thRc=679
Transaction program not registered                                      
ABAP Programm: SAPLSADC (Transaction: )
User: DDIC (Client: 000)
Destination: SAPDB_DBM_DAEMON (handle: 2, , )
SERVER> RFC Server Session (handle: 1, 59356166, {ADC77ADF-C575-F1CC-B797-0019B9E204CC})
SERVER> Caller host:
SERVER> Caller transaction code:  (Caller Program: SAPLSADC)
SERVER> Called function module: DBM_CONNECT_PUR
Error RFCIO_ERROR_SYSERROR in abrfcpic.c : 1501
CPIC-CALL: 'ThSAPOCMINIT' : cmRc=2 thRc=679
Transaction program not registered                                      
DEST =SAPDB_DBM_DAEMON
HOST =%%RFCSERVER%%
PROG =dbmrfc@sapdb
Trace file opened at 20100618 065634 Eastern Daylight Time, SAP-REL 701,0,14 RFC-VER U 3 1016574 MT-SL
======> CPIC-CALL: 'ThSAPOCMINIT' : cmRc=2 thRc=679
Transaction program not registered                                      
ABAP Programm: SAPLSADC (Transaction: )
User: DDIC (Client: 000)
Destination: SAPDB_DBM_DAEMON (handle: 2, , )
SERVER> RFC Server Session (handle: 1, 59587535, {28C87ADF-4577-F16D-B797-0019B9E204CC})
SERVER> Caller host:
SERVER> Caller transaction code:  (Caller Program: SAPLSADC)
SERVER> Called function module: DBM_CONNECT_PUR
Error RFCIO_ERROR_SYSERROR in abrfcpic.c : 1501
CPIC-CALL: 'ThSAPOCMINIT' : cmRc=2 thRc=679
Transaction program not registered                                      
DEST =SAPDB_DBM_DAEMON
HOST =%%RFCSERVER%%
PROG =dbmrfc@sapdb
DEV_W0

X Fri Jun 18 18:05:15 2010
X  *** ERROR => EmActiveData: Invalid Context Handle -1 [emxx.c       2214]
X  *** ERROR => EmActiveData: Invalid Context Handle -1 [emxx.c       2214]

N Fri Jun 18 18:05:25 2010
N  RSEC: The entry with identifier /RFC/DIICLNT800
N  was encrypted by a system
N  with different SID and cannot be decrypted here.
N  RSEC: The entry with identifier /RFC/T90CLNT090
N  was encrypted by a system
N  with different SID and cannot be decrypted here.

X Fri Jun 18 18:05:39 2010
X  *** ERROR => EmActiveData: Invalid Context Handle -1 [emxx.c       2214]
X  *** ERROR => EmActiveData: Invalid Context Handle -1 [emxx.c       2214]

X Fri Jun 18 18:05:46 2010
X  *** ERROR => EmActiveData: Invalid Context Handle -1 [emxx.c       2214]
X  *** ERROR => EmActiveData: Invalid Context Handle -1 [emxx.c       2214]

X Fri Jun 18 18:05:52 2010
X  *** ERROR => EmActiveData: Invalid Context Handle -1 [emxx.c       2214]
X  *** ERROR => EmActiveData: Invalid Context Handle -1 [emxx.c       2214]

N Sun Jun 20 10:55:32 2010
N  RSEC: The entry with identifier /RFC/DIICLNT800
N  was encrypted by a system
N  with different SID and cannot be decrypted here.
N  RSEC: The entry with identifier /RFC/T90CLNT090
N  was encrypted by a system
N  with different SID and cannot be decrypted here.

X Sun Jun 20 11:00:02 2010
X  *** ERROR => EmActiveData: Invalid Context Handle -1 [emxx.c       2214]

N Sun Jun 20 11:00:32 2010
N  RSEC: The entry with identifier /RFC/DIICLNT800
N  was encrypted by a system
N  with different SID and cannot be decrypted here.
N  RSEC: The entry with identifier /RFC/T90CLNT090
N  was encrypted by a system
N  with different SID and cannot be decrypted here.

C Sun Jun 20 11:03:34 2010
C  Thread ID:1956
C  dbmssslib.dll patch info
C    patchlevel   0
C    patchno      13
C    patchcomment Errors when running with par_stmt_prepare set to zero (1253696)
C  Local connection used on SAPFIVE to named instance: np:SAPFIVE\ECT

C Sun Jun 20 11:03:49 2010
C  OpenOledbConnection: line 23839. hr: 0x8000ffff Login timeout expired
C  sloledb.cpp [OpenOledbConnection,line 23839]: Error/Message: (err 0, sev 0), Login timeout expired
C  Procname: [OpenOledbConnection - no proc]
C  sloledb.cpp [OpenOledbConnection,line 23839]: Error/Message: (err -1, sev 0), An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.
C  Procname: [OpenOledbConnection - no proc]
C  sloledb.cpp [OpenOledbConnection,line 23839]: Error/Message: (err -1, sev 0), SQL Network Interfaces: Server doesn't support requested protocol [xFFFFFFFF].
C  Procname: [OpenOledbConnection - no proc]
C  sloledb.cpp [OpenOledbConnection,line 23839]: Error/Message: (err 0, sev 0), Invalid connection string attribute
C  Procname: [OpenOledbConnection - no proc]

C Sun Jun 20 11:04:04 2010
C  OpenOledbConnection: line 23839. hr: 0x8000ffff Login timeout expired
C  sloledb.cpp [OpenOledbConnection,line 23839]: Error/Message: (err 0, sev 0), Login timeout expired
C  Procname: [OpenOledbConnection - no proc]
C  sloledb.cpp [OpenOledbConnection,line 23839]: Error/Message: (err -1, sev 0), An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.
C  Procname: [OpenOledbConnection - no proc]
C  sloledb.cpp [OpenOledbConnection,line 23839]: Error/Message: (err -1, sev 0), SQL Network Interfaces: Server doesn't support requested protocol [xFFFFFFFF].
C  Procname: [OpenOledbConnection - no proc]
C  sloledb.cpp [OpenOledbConnection,line 23839]: Error/Message: (err 0, sev 0), Invalid connection string attribute
C  Procname: [OpenOledbConnection - no proc]

C Sun Jun 20 11:04:19 2010
C  OpenOledbConnection: line 23839. hr: 0x8000ffff Login timeout expired
C  sloledb.cpp [OpenOledbConnection,line 23839]: Error/Message: (err 0, sev 0), Login timeout expired
C  Procname: [OpenOledbConnection - no proc]
C  sloledb.cpp [OpenOledbConnection,line 23839]: Error/Message: (err -1, sev 0), An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.
C  Procname: [OpenOledbConnection - no proc]
C  sloledb.cpp [OpenOledbConnection,line 23839]: Error/Message: (err -1, sev 0), SQL Network Interfaces: Server doesn't support requested protocol [xFFFFFFFF].
C  Procname: [OpenOledbConnection - no proc]
C  sloledb.cpp [OpenOledbConnection,line 23839]: Error/Message: (err 0, sev 0), Invalid connection string attribute
C  Procname: [OpenOledbConnection - no proc]
C  failed to establish conn to np:SAPFIVE\ECT.
C  Retrying without protocol specifier: SAPFIVE\ECT
C  Connected to db server : [SAPFIVE\ECT] server_used : [SAPFIVE\ECT], dbname: ECT, dbuser: ect
C  pn_id:SAPFIVE_ECT_ECTECT_ECT
B  Connection 4 opened (DBSL handle 2)
B  Wp  Hdl ConName          ConId     ConState     TX  PRM RCT TIM MAX OPT Date     Time   DBHost         
B  000 000 R/3              000000000 ACTIVE       YES YES NO  000 255 255 20100618 061458 SAPFIVE\ECT    
B  000 001 +DBO+0050      000002156 INACTIVE     NO  NO  NO  004 255 255 20100618 101017 SAPFIVE\ECT    
B  000 002 +DBO+0050      000001981 DISCONNECTED NO  NO  NO  000 255 255 20100620 070023 SAPFIVE\ECT    
B  000 003 +DBO+0050      000001982 DISCONNECTED NO  NO  NO  000 255 255 20100620 070023 SAPFIVE\ECT    
B  000 004 R/3*INACT_PACK   000002157 ACTIVE       NO  NO  NO  004 255 255 20100620 110334 SAPFIVE\ECT  
N Sun Jun 20 17:35:35 2010
N  RSEC: The entry with identifier /RFC/DIICLNT800
N  was encrypted by a system
N  with different SID and cannot be decrypted here.
N  RSEC: The entry with identifier /RFC/T90CLNT090
N  was encrypted by a system
N  with different SID and cannot be decrypted here.

M Sun Jun 20 17:47:29 2010
M  ThAlarmHandler (1)
M  ThAlarmHandler (1)
M  ThAlarmHandler: set CONTROL_TIMEOUT/DP_CONTROL_JAVA_EXIT and break sql

C Sun Jun 20 17:47:33 2010
C  SQLBREAK: DBSL_CMD_SQLBREAK: CbOnCancel was not set. rc: 15
M  program canceled
M    reason   = max run time exceeded
M    user     = SAPSYS     
M    client   = 000
M    terminal =                    
M    report   = SAPMSSY2                               
M  ThAlarmHandler: return from signal handler

A Sun Jun 20 17:48:33 2010
A  TH VERBOSE LEVEL FULL
A  ** RABAX: level LEV_RX_PXA_RELEASE_MTX entered.
A  ** RABAX: level LEV_RX_PXA_RELEASE_MTX completed.
A  ** RABAX: level LEV_RX_COVERAGE_ANALYSER entered.
A  ** RABAX: level LEV_RX_COVERAGE_ANALYSER completed.
A  ** RABAX: level LEV_RX_ROLLBACK entered.
A  ** RABAX: level LEV_RX_ROLLBACK completed.
A  ** RABAX: level LEV_RX_DB_ALIVE entered.
A  ** RABAX: level LEV_RX_DB_ALIVE completed.
A  ** RABAX: level LEV_RX_HOOKS entered.
A  ** RABAX: level LEV_RX_HOOKS completed.
A  ** RABAX: level LEV_RX_STANDARD entered.
A  ** RABAX: level LEV_RX_STANDARD completed.
A  ** RABAX: level LEV_RX_STOR_VALUES entered.
A  ** RABAX: level LEV_RX_STOR_VALUES completed.
A  ** RABAX: level LEV_RX_C_STACK entered.

A Sun Jun 20 17:48:42 2010
A  ** RABAX: level LEV_RX_C_STACK completed.
A  ** RABAX: level LEV_RX_MEMO_CHECK entered.
A  ** RABAX: level LEV_RX_MEMO_CHECK completed.
A  ** RABAX: level LEV_RX_AFTER_MEMO_CHECK entered.
A  ** RABAX: level LEV_RX_AFTER_MEMO_CHECK completed.
A  ** RABAX: level LEV_RX_INTERFACES entered.
A  ** RABAX: level LEV_RX_INTERFACES completed.
A  ** RABAX: level LEV_RX_GET_MESS entered.
A  ** RABAX: level LEV_RX_GET_MESS completed.
A  ** RABAX: level LEV_RX_INIT_SNAP entered.
A  ** RABAX: level LEV_RX_INIT_SNAP completed.
A  ** RABAX: level LEV_RX_WRITE_SYSLOG entered.
A  ** RABAX: level LEV_RX_WRITE_SYSLOG completed.
A  ** RABAX: level LEV_RX_WRITE_SNAP_BEG entered.
A  ** RABAX: level LEV_RX_WRITE_SNAP_BEG completed.
A  ** RABAX: level LEV_RX_WRITE_SNAP entered.

A Sun Jun 20 17:48:48 2010
A  ** RABAX: level LEV_SN_END completed.
A  ** RABAX: level LEV_RX_WRITE_SNAP_END entered.
A  ** RABAX: level LEV_RX_WRITE_SNAP_END completed.
A  ** RABAX: level LEV_RX_SET_ALERT entered.
A  ** RABAX: level LEV_RX_SET_ALERT completed.
A  ** RABAX: level LEV_RX_COMMIT entered.
A  ** RABAX: level LEV_RX_COMMIT completed.
A  ** RABAX: level LEV_RX_SNAP_SYSLOG entered.
A  ** RABAX: level LEV_RX_SNAP_SYSLOG completed.
A  ** RABAX: level LEV_RX_RESET_PROGS entered.
A  ** RABAX: level LEV_RX_RESET_PROGS completed.
A  ** RABAX: level LEV_RX_STDERR entered.
A  Sun Jun 20 17:48:48 2010

A  ABAP Program SAPMSSY2                                .
A  Source RSBTCTRC                                 Line 131.
A  Error Code TIME_OUT.
A  Module  $Id: //bas/701_REL/src/krn/runt/abinit.c#1 $ SAP.
A  Function ab_chstat Line 1941.
A  ** RABAX: level LEV_RX_STDERR completed.
A  ** RABAX: level LEV_RX_RFC_ERROR entered.
A  ** RABAX: level LEV_RX_RFC_ERROR completed.
A  ** RABAX: level LEV_RX_RFC_CLOSE entered.
A  ** RABAX: level LEV_RX_RFC_CLOSE completed.
A  ** RABAX: level LEV_RX_IMC_ERROR entered.
A  ** RABAX: level LEV_RX_IMC_ERROR completed.
A  ** RABAX: level LEV_RX_DATASET_CLOSE entered.
A  ** RABAX: level LEV_RX_DATASET_CLOSE completed.
A  ** RABAX: level LEV_RX_RESET_SHMLOCKS entered.
A  ** RABAX: level LEV_RX_RESET_SHMLOCKS completed.
A  ** RABAX: level LEV_RX_ERROR_SAVE entered.
A  ** RABAX: level LEV_RX_ERROR_SAVE completed.
A  ** RABAX: level LEV_RX_ERROR_TPDA entered.
A  ** RABAX: level LEV_RX_ERROR_TPDA completed.
A  ** RABAX: level LEV_RX_PXA_RELEASE_RUDI entered.
A  ** RABAX: level LEV_RX_PXA_RELEASE_RUDI completed.
A  ** RABAX: level LEV_RX_LIVE_CACHE_CLEANUP entered.
A  ** RABAX: level LEV_RX_LIVE_CACHE_CLEANUP completed.
A  ** RABAX: level LEV_RX_END entered.
A  ** RABAX: level LEV_RX_END completed.
A  ** RABAX: end no http/smtp
A  ** RABAX: end RX_BTCHLOG|RX_VBLOG
A  Time limit exceeded..


X Sun Jun 20 17:49:03 2010
X  *** ERROR => EmActiveData: Invalid Context Handle -1 [emxx.c       2214]
N Sun Jun 20 18:35:40 2010
N  RSEC: The entry with identifier /RFC/DIICLNT800
N  was encrypted by a system
N  with different SID and cannot be decrypted here.
N  RSEC: The entry with identifier /RFC/T90CLNT090
N  was encrypted by a system
N  with different SID and cannot be decrypted here.

N Sun Jun 20 18:50:35 2010
N  RSEC: The entry with identifier /RFC/DIICLNT800
N  was encrypted by a system
N  with different SID and cannot be decrypted here.
N  RSEC: The entry with identifier /RFC/T90CLNT090
N  was encrypted by a system
N  with different SID and cannot be decrypted here.

N Sun Jun 20 18:55:31 2010
N  RSEC: The entry with identifier /RFC/DIICLNT800
N  was encrypted by a system
N  with different SID and cannot be decrypted here.
N  RSEC: The entry with identifier /RFC/T90CLNT090
N  was encrypted by a system
N  with different SID and cannot be decrypted here.

N Sun Jun 20 19:00:31 2010
N  RSEC: The entry with identifier /RFC/DIICLNT800
N  was encrypted by a system
N  with different SID and cannot be decrypted here.
N  RSEC: The entry with identifier /RFC/T90CLNT090
N  was encrypted by a system
N  with different SID and cannot be decrypted here.

X Sun Jun 20 19:01:59 2010
X  *** ERROR => EmActiveData: Invalid Context Handle -1 [emxx.c       2214]

X Sun Jun 20 19:02:05 2010
X  *** ERROR => EmActiveData: Invalid Context Handle -1 [emxx.c       2214]

M Sun Jun 20 19:04:02 2010
M  ***LOG R49=> ThReceive, CPIC-Error (020223) [thxxhead.c   7488]
M  ***LOG R5A=> ThReceive, CPIC-Error (76495728) [thxxhead.c   7493]
M  ***LOG R64=> ThReceive, CPIC-Error ( CMSEND(SAP)) [thxxhead.c   7498]

N Sun Jun 20 19:05:31 2010
N  RSEC: The entry with identifier /RFC/DIICLNT800
N  was encrypted by a system
N  with different SID and cannot be decrypted here.
N  RSEC: The entry with identifier /RFC/T90CLNT090
N  was encrypted by a system
N  with different SID and cannot be decrypted here.
Thanks
Mani

I have a few background jobs running on the back-end, but I don't think they could be the problem. These jobs are running:
ESH_IX_PROCESS_CP_20100621051308
EU_PUT
EU_REORG
RPTMC_CREATE_CHANGEPOINT_AUTH
I cancelled the last one. The first job does TREX indexing of changed objects in SAP HR master data; I don't know the purpose of the middle two.
Do you think these could be the problem?
Mani

Similar Messages

  • 45 min long session of log file sync waits between 5000 and 20000 ms

    Encountering a rather unusual performance issue. Once every 4 hours I see a 45-minute-long log file sync wait event reported by Spotlight on Oracle. For the first 30 minutes the event wait is approx 5000 ms, followed by an increase to around 20000 ms for the next 15 minutes before rapidly dropping off; normal operation then continues for the next 3 hours and 15 minutes before the cycle repeats. The issue appears to maintain its schedule independently of restarting the database. Statspack reports do not show an increase in commits or executions, or any new SQL running, while the issue is occurring. We have two production environments running identical applications with similar usage, and we do not see the issue on the other system. I am leaning towards this being a hardware issue, but the 4-hour interval regardless of load on the database has me baffled: if it were a disk or controller cache issue, one would expect the interval to change with database load.
    I cycle my redo logs and archive them just fine with log file switches every 15-20 minutes. Even during this unusally long and high session of log file sync waits I can see that the redo log files are still switching and are being archived.
    The redo logs are on a RAID 10, we have 4 redo logs at 1 GB each.
    I've run statspack reports on hourly intervals around this event:
    Top 5 Wait Events
    ~~~~~~~~~~~~~~~~~
    Event                       Waits     Time (cs)   % Total Wt Time
    log file sync             756,729     2,538,034             88.47
    db file sequential read   208,851       153,276              5.34
    log file parallel write   636,648       129,981              4.53
    enqueue                       810        21,423               .75
    log file sequential read   65,540        14,480               .50
    And here is a sample while not encountering the issue:
    Top 5 Wait Events
    ~~~~~~~~~~~~~~~~~
    Event                       Waits     Time (cs)   % Total Wt Time
    log file sync             953,037       195,513             53.43
    log file parallel write   875,783        83,119             22.72
    db file sequential read   221,815        63,944             17.48
    log file sequential read   98,310        18,848              5.15
    db file scattered read     67,584         2,427               .66
    Yes, I know I am already tight on I/O for my redo even during normal operations, yet my redo and archiving work just fine for 3 hours and 15 minutes (11 to 15 log file switches). These normal switches result in a log file sync wait of about 5000 ms for about 45 seconds while the 1 GB redo log is being written and then archived.
    I welcome any and all feedback.

    Lee,
    Lee,
    log_buffer = 1048576. We use a standard of 1 MB for our log buffer; we've not altered the setting. It is my understanding that Oracle typically recommends not exceeding 1 MB for log_buffer, stating that a larger buffer normally does not increase performance.
    I would agree that tuning the log_buffer parameter may be a place to consider; however, this issue lasts for ~45 minutes once every 4 hours regardless of database load. So for 3 hours and 15 minutes, during both peak and low usage, the buffer cache, redo log and archival processes run just fine.
    A bit more information from statspack reports:
    Here is a sample while the issue is occuring.
    Snap Id Snap Time Sessions
    Begin Snap: 661 24-Mar-06 12:45:08 87
    End Snap: 671 24-Mar-06 13:41:29 87
    Elapsed: 56.35 (mins)
    Cache Sizes
    ~~~~~~~~~~~
    db_block_buffers: 196608 log_buffer: 1048576
    db_block_size: 8192 shared_pool_size: 67108864
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 615,141.44 2,780.83
    Logical reads: 13,241.59 59.86
    Block changes: 2,255.51 10.20
    Physical reads: 144.56 0.65
    Physical writes: 61.56 0.28
    User calls: 1,318.50 5.96
    Parses: 210.25 0.95
    Hard parses: 8.31 0.04
    Sorts: 16.97 0.08
    Logons: 0.14 0.00
    Executes: 574.32 2.60
    Transactions: 221.21
    % Blocks changed per Read: 17.03 Recursive Call %: 26.09
    Rollback per transaction %: 0.03 Rows per Sort: 46.87
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.99 Redo NoWait %: 100.00
    Buffer Hit %: 98.91 In-memory Sort %: 100.00
    Library Hit %: 98.89 Soft Parse %: 96.05
    Execute to Parse %: 63.39 Latch Hit %: 99.87
    Parse CPU to Parse Elapsd %: 90.05 % Non-Parse CPU: 85.05
    Shared Pool Statistics Begin End
    Memory Usage %: 89.96 92.20
    % SQL with executions>1: 76.39 67.76
    % Memory for SQL w/exec>1: 72.53 63.71
    Top 5 Wait Events
    ~~~~~~~~~~~~~~~~~
    Event                       Waits     Time (cs)   % Total Wt Time
    log file sync             756,729     2,538,034             88.47
    db file sequential read   208,851       153,276              5.34
    log file parallel write   636,648       129,981              4.53
    enqueue                       810        21,423               .75
    log file sequential read   65,540        14,480               .50
    And this is a sample during "normal" operation.
    Snap Id Snap Time Sessions
    Begin Snap: 671 24-Mar-06 13:41:29 88
    End Snap: 681 24-Mar-06 14:42:57 88
    Elapsed: 61.47 (mins)
    Cache Sizes
    ~~~~~~~~~~~
    db_block_buffers: 196608 log_buffer: 1048576
    db_block_size: 8192 shared_pool_size: 67108864
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 716,776.44 2,787.81
    Logical reads: 13,154.06 51.16
    Block changes: 2,627.16 10.22
    Physical reads: 129.47 0.50
    Physical writes: 67.97 0.26
    User calls: 1,493.74 5.81
    Parses: 243.45 0.95
    Hard parses: 9.23 0.04
    Sorts: 18.27 0.07
    Logons: 0.16 0.00
    Executes: 664.05 2.58
    Transactions: 257.11
    % Blocks changed per Read: 19.97 Recursive Call %: 25.87
    Rollback per transaction %: 0.02 Rows per Sort: 46.85
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.99 Redo NoWait %: 100.00
    Buffer Hit %: 99.02 In-memory Sort %: 100.00
    Library Hit %: 98.95 Soft Parse %: 96.21
    Execute to Parse %: 63.34 Latch Hit %: 99.90
    Parse CPU to Parse Elapsd %: 96.60 % Non-Parse CPU: 84.06
    Shared Pool Statistics Begin End
    Memory Usage %: 92.20 88.73
    % SQL with executions>1: 67.76 75.40
    % Memory for SQL w/exec>1: 63.71 68.28
    Top 5 Wait Events
    ~~~~~~~~~~~~~~~~~
    Event                       Waits     Time (cs)   % Total Wt Time
    log file sync             953,037       195,513             53.43
    log file parallel write   875,783        83,119             22.72
    db file sequential read   221,815        63,944             17.48
    log file sequential read   98,310        18,848              5.15
    db file scattered read     67,584         2,427               .66
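To watch the same redo-related waits directly between Statspack snapshots, one hedged starting point is the dynamic performance views (figures are cumulative since instance startup, so compare two samples taken a few minutes apart):

```sql
-- Cumulative redo-related wait events since instance startup
-- (time_waited is in centiseconds, matching the Statspack output above)
SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event IN ('log file sync', 'log file parallel write')
 ORDER BY time_waited DESC;
```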

  • File Size Growing too much with every Save

    I have designed a form that uses grids with drop-down lists and also some image fields. When I am using the form and save it, the file size grows excessively with each save. If I add even one character to a sentence, the file size may grow by as much as 500 KB or more. Saving a few times causes the file to become far too large to send by email. Any ideas what I need to try to fix, or is this just normal?

    Nope, I have it unselected and it still grows by leaps and bounds. Any other ideas? Is there anyone I can send the form to who can work on it for a fee?

  • Log file audit script to search and collect

    Hi guys,
    I'm trying to figure out the best way to complete this log file audit. I would like to script it, but can't seem to get a grasp on how best to do it. I need to search for the log files (all OS and application logs) on a few dozen systems, across a few different drives per system. I'm looking to collect the log location, name, size, and last log event in each log, then export that info to a CSV file and email it to myself monthly to report on.

    Please read the following:
    Posting guidelines
    Handy tips for posting to this forum
    How to ask questions in a technical forum
    Rubber duck problem solving
    How to write a bad forum post
    Help Vampires: A Spotter's Guide
    -- Bill Stewart [Bill_Stewart]
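The collection step described in the question can be sketched in Python (the scan root is a placeholder; last-modified time stands in for "last log event", and the monthly e-mail step is omitted):

```python
# Sketch of a log-file audit: walk a set of roots, collect location,
# name, size and last-modified time for every *.log file, and write
# the results to CSV. The root path below is a hypothetical example.
import csv
import os
import time

def collect_log_info(roots):
    """Return one row per *.log file found under the given root dirs."""
    rows = []
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if not name.lower().endswith(".log"):
                    continue
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                rows.append({
                    "location": dirpath,
                    "name": name,
                    "size_bytes": st.st_size,
                    "last_modified": time.strftime(
                        "%Y-%m-%d %H:%M:%S", time.localtime(st.st_mtime)),
                })
    return rows

def write_report(rows, csv_path):
    """Write the collected rows to a CSV report."""
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["location", "name", "size_bytes", "last_modified"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    report = collect_log_info([r"C:\Windows\Logs"])  # hypothetical root
    write_report(report, "log_audit.csv")
```

Run it monthly from Task Scheduler and attach `log_audit.csv` to a mail via whatever mail tooling your site already uses.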

  • I can't open *.log files from the Firefox browser, and I need to open the file inside a frame

    If I open the log file from the Firefox browser, it does not open and throws this error:
    "The address wasn't understood
    Firefox doesn't know how to open this address, because one of the following protocols (e) isn't associated with any program or is not allowed in this context."

    What type of log file?
    What is the file name?
    What is the file path or URI?

  • File sizes grow too fast when asset added

    When I add a 125 Mb mp4 file to my menu in DVD Studio Pro, the size of the file as indicated by the green guide at the top goes from 3.4 Gb, to 5.4 Gb.
    Any idea why a 125 Mb .mp4 file would cause the DVD to get so big so fast?

    Yes, the main DVD assets are one .m2v and .ac3 file with chapter markers that make up the 3.4 GB. The file I want to add is a smaller, 1/2 hour separate and related file. This is why I tried to use .mp4 because I was less concerned about the video quality.

  • SYSAUX tablespace growing too quickly?

    We have EBS R12.1 on a Linux system. Recently I found our development EBS database's SYSAUX tablespace growing very quickly. The SYSAUX tablespace has two data files, each 6 GB (12 GB total). In one month all 12 GB of space was gone.
    My questions are:
    1. What objects or reports (or ???) take this much space?
    2. How do I delete unneeded space?
    3. What is a reasonable SYSAUX size?
    Thanks.

    I double-checked SYSAUX space usage and found it only uses less than 100 MB. Why does SYSAUX show all 12 GB of space gone?
    SQL> SELECT occupant_name, schema_name, move_procedure, space_usage_kbytes
           FROM v$sysaux_occupants
          ORDER BY 1;
    OCCUPANT_NAME SCHEMA_NAME MOVE_PROCEDURE SPACE_USAGE_KBYTES
    AO SYS DBMS_AW.MOVE_AWMETA 45888
    AUTO_TASK SYS 320
    EM SYSMAN emd_maintenance.move_em_tblspc 0
    EM_MONITORING_USER DBSNMP 0
    EXPRESSION_FILTER EXFSYS 0
    JOB_SCHEDULER SYS 1152
    LOGMNR SYSTEM SYS.DBMS_LOGMNR_D.SET_TABLESPACE 13376
    LOGSTDBY SYSTEM SYS.DBMS_LOGSTDBY.SET_TABLESPACE 1600
    ORDIM ORDSYS 0
    ORDIM/PLUGINS ORDPLUGINS 0
    ORDIM/SQLMM SI_INFORMTN_SCHEMA 0
    PL/SCOPE SYS 640
    SDO MDSYS MDSYS.MOVE_SDO 0
    SM/ADVISOR SYS 198528
    SM/AWR SYS 1006144
    SM/OPTSTAT SYS 10866560
    SM/OTHER SYS 8192
    SMON_SCN_TIME SYS 3328
    SQL_MANAGEMENT_BASE SYS 1728
    STATSPACK PERFSTAT 0
    STREAMS SYS 1216
    TEXT CTXSYS DRI_MOVE_CTXSYS 0
    TSM TSMSYS 256
    ULTRASEARCH WKSYS MOVE_WK 0
    ULTRASEARCH_DEMO_USER WK_TEST MOVE_WK 0
    WM WMSYS DBMS_WM.move_proc 0
    XDB XDB XDB.DBMS_XDB.MOVEXDB_TABLESPACE 56192
    XSAMD OLAPSYS DBMS_AMD.Move_OLAP_Catalog 0
    XSOQHIST SYS DBMS_XSOQ.OlapiMoveProc 45888
    29 rows selected.
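In the listing above, SM/OPTSTAT (optimizer statistics history) accounts for roughly 10 GB of the space, which is the usual SYSAUX culprit. A hedged sketch for checking and trimming the statistics-history retention (standard DBMS_STATS calls; the 10-day value is only an example, pick one that fits your restore-statistics needs):

```sql
-- How long is optimizer statistics history kept? (default is 31 days)
SELECT DBMS_STATS.GET_STATS_HISTORY_RETENTION FROM dual;

-- Example only: shorten retention to 10 days, then purge older history
EXEC DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(10);
EXEC DBMS_STATS.PURGE_STATS(SYSDATE - 10);
```

Note the purge frees space inside SYSAUX for reuse; it does not by itself shrink the datafiles.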

  • EBS Database R12.1 temporary tablespace growing too quickly?

    We have an EBS R12.1 database on a Red Hat Linux server. Recently this database's temporary tablespace has grown by at least 1 GB every day; the temporary tablespace (with two temp files) has grown to 45 GB.
    Does anyone know what is wrong?
    How do I fix the problem?

    I eventually figured out this temporary tablespace growth is caused by OEM.
    The SQL statement is:
    /* OracleOEM */
    SELECT end_time, status, session_key, session_recid, session_stamp, command_id,
           start_time, time_taken_display, input_type, output_device_type,
           input_bytes_display, output_bytes_display, output_bytes_per_sec_display
      FROM (SELECT end_time, status, session_key, session_recid, session_stamp,
                   command_id, start_time, time_taken_display, input_type,
                   output_device_type, input_bytes_display, output_bytes_display,
                   output_bytes_per_sec_display
              FROM v$rman_backup_job_details
             ORDER BY end_time DESC)
     WHERE rownum = 1;
    Anyone know why this statement would take 30 GB of temporary space on EBS R12.1?
    Thanks.
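To confirm which sessions are actually holding temp space while the statement runs, one hedged sketch using the standard dictionary views:

```sql
-- Who is using temporary segments right now, and how much (in MB)
SELECT se.sid, se.username, su.tablespace, su.segtype,
       su.blocks * ts.block_size / 1024 / 1024 AS mb_used
  FROM v$tempseg_usage  su
  JOIN v$session        se ON su.session_addr = se.saddr
  JOIN dba_tablespaces  ts ON su.tablespace   = ts.tablespace_name
 ORDER BY su.blocks DESC;
```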

  • Dimension operator - reports of sequence growing too quickly

    Hi,
    It looks like the sequence used to populate one of our SCD Type 2 dimensions is growing faster than expected.
    -- Create sequence
    create sequence RETAILER_SEQ
    minvalue 1
    maxvalue 9999999999999999999999999999
    start with 63059812
    increment by 1
    cache 20;
    Using 11.2.0.3 with the dimension operator.
    Has anybody else seen this? There are 5 levels in the dimension.
    Should we be setting the cache to NOCACHE?
    What looks to be happening is that the sequence ends one day at a certain value, and the next time the load starts it uses a value approximately 0.5 million or so greater.
    Thanks

    We also noticed that our sequences were growing fast. We then set our sequences to NOCACHE. This ensures that newly added records get values in exact sequence without any gaps. Now the sequences are growing normally.
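As a sketch, switching an existing sequence (here the RETAILER_SEQ from the post above) to NOCACHE is a one-line change; note that NOCACHE trades away the performance benefit of cached values, since every NEXTVAL then updates the data dictionary:

```sql
-- Stop caching values so none are lost (and the value doesn't jump)
-- when the instance restarts or the cache ages out
ALTER SEQUENCE RETAILER_SEQ NOCACHE;
```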

  • SharePoint TempDB.mdf growing too large? I have to restart SQL Server all the time. Please help

    Hi there,
    On our DEV SharePoint farm's SQL server, the tempdb.mdf size grows too quickly and too much. I am tired of increasing the space and cannot do that anymore. All the time I have to reboot the SQL server to get tempdb back to a normal size.
    The Live farm is okay (with similar data), so it must be something wrong with our DEV environment.
    Any idea how to fix this, please?
    Thanks so much.

    How do you get tempdb to 'normal size'? How large is large, and how small is normal?
    Have you put the databases in simple recovery mode? It's normal for dev environments not to have the required transaction log backups to keep the ldf files in check. That won't affect tempdb, but if you've got bigger issues then it might be a symptom.
    Have you turned off autogrowth for tempdb?
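A quick way to answer "how large, and consumed by what" is the standard tempdb DMVs (sizes below are converted from 8 KB pages to MB):

```sql
-- Configured size of each tempdb file, in MB
SELECT name, size * 8 / 1024 AS size_mb, max_size, growth
  FROM tempdb.sys.database_files;

-- Current breakdown of tempdb usage by consumer, in MB
SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
       SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
  FROM tempdb.sys.dm_db_file_space_usage;
```

If most of the space turns out to be free, the problem is runaway autogrowth rather than live usage.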

  • Log file size is huge and cannot shrink.

    I have a live database that is published with merge replication and a couple of push subscriptions. I just noticed that the log file has grown to 500 GB. The database is in full recovery; we do weekly full backups and daily log backups. I cannot shrink the log file back down to normal proportions. It should only be about 5 GB. The file properties show an initial size equal to the current size; I cannot change that number, and I don't know why it is so big now. How do I go about shrinking the log file? The normal DBCC shrink and the SSMS GUI shrink are not doing anything and say there is 0 MB free space!

    As per your first posting log_reuse_wait_desc was LOG_BACKUP, and in the second it was REPLICATION, so I am confused.
    If the log_reuse_wait_desc column shows LOG_BACKUP, take a log backup of your database. If it shows REPLICATION, and you are sure that your replications are in sync, then reset the status of replicated transactions.
    You can reset this by first turning the Log Reader Agent off (turn the whole SQL Server Agent off), and then running this on the database for which you want to fix the replication issue:
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
    vt
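The usual diagnostic loop can be sketched like this (the database and logical file names are placeholders; the shrink target is in MB):

```sql
-- Why can't the log be truncated right now?
SELECT name, log_reuse_wait_desc
  FROM sys.databases
 WHERE name = 'YourDatabase';

-- Once the wait reason is cleared (log backup taken, or replication
-- caught up), shrink the log file back down to ~5 GB
DBCC SHRINKFILE (N'YourDatabase_log', 5120);
```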

  • Shrink Log file in log shipping and change the database state from Standby to No recovery mode

    Hello all,
    I have configured SQL Server 2008 R2 log shipping for some databases, and I have two issues:
    1. Can I shrink the log file for these databases? If I change the primary database from full to simple recovery and shrink the log file, then change it back to full recovery mode, the log shipping will fail. I've seen some answers that talked about using the "No Truncate" option, but as I understand it that option will not affect the log file; it shrinks the data file only. I also can't create a maintenance job to reconfigure the log shipping every time I want to shrink the log file, because the database size is huge and it would take time to restore in the DR site, so reconfiguration is not an option. :(
    2. How can I change the secondary database state from Standby to No recovery mode? I tried to change it from the wizard and waited until the next restore of the transaction log backup, but the job failed with the error "the step failed". I need to do this to change the mdf and ldf file locations for the secondary databases.
    Can anyone help?
    Thanks in advance,
    Faris ALMasri
    Database Administrator

    1. If you change the recovery model of a log-shipped database from full to simple and back, log shipping will break: the log chain is broken and logs can no longer be restored on the secondary server. You can shrink the log file of the primary database, but why would you need to?
    What is your log backup schedule? Frequent log backups already keep the log file in check. Shrinking only creates extra load on the system, because the log file will ultimately grow again, and since instant file initialization does not apply to log files, that growth takes time
    and slows performance.
    You say you want to shrink because the database size is huge, but is it actually huge, or does it simply have a lot of free space? Don't worry about free space in the data file; SQL Server will eventually use it as more data comes in.
    2. You are using the wrong method. Changing the state to No Recovery would not even allow you to run SELECT queries, which you can do in Standby mode. Please refer to the link below on moving the secondary data and log files:
    http://www.mssqltips.com/sqlservertip/2836/steps-to-move-sql-server-log-shipping-secondary-database-files/
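    If you really do need to reclaim log space once, you can do it without breaking log shipping: let the scheduled log-shipping backup job clear the inactive portion of the log (do not take an ad-hoc log backup to a different destination, or you break the restore chain), then shrink. A minimal sketch, assuming a database named MyShippedDB with logical log file name MyShippedDB_log (both hypothetical; check yours with sys.database_files):

    ```sql
    -- Find the logical name of the log file first:
    USE MyShippedDB;
    SELECT name, type_desc FROM sys.database_files;

    -- After the regular log-shipping backup job has run and cleared
    -- inactive VLFs, shrink the log to a sensible target size (MB):
    DBCC SHRINKFILE (MyShippedDB_log, 4096);
    ```

    Pick a target size large enough for normal activity between log backups, so the file does not immediately have to grow again.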

  • Question about full backup and Transaction Log file

    I have a query: won't taking a daily full backup stop my log file from growing? After taking the full backup I still see some of the VLFs in status 2, and they only went away when I manually took a log backup. I am a bit confused: should I
    perform both transaction log backups and daily full database backups to avoid this in future? Also, until I run SHRINKFILE, the storage space on the server won't be reduced, right?

    Yes, a full backup does not clear the log file; only a log backup does. Once a log backup is taken, the inactive VLFs in the log file are set to status 0.
    You should perform log backups according to your business SLA for data loss.
    Go ahead and ask this of yourself:
    If a disaster strikes, your database server is lost, and your only option is to restore from backup,
    how much data loss can your business handle?
    The answer to that question is how frequent your log backups should be:
    if the answer is 10 minutes, you should take log backups at least every 10 minutes;
    if the answer is 30 minutes, at least every 30 minutes;
    if the answer is 90 minutes, at least every 90 minutes.
    So, when you restore, you restore the latest full backup, plus the latest differential taken after that full backup,
    and all the log backups taken since the restored full or differential backup.
    There are several resources on the web, including YouTube videos, that explain these concepts clearly; I advise you to look at them.
    To release file space to the OS, you have to shrink the file. A log file shrink works from the end of the file back until it reaches an active VLF.
    If there are no inactive VLFs at the end, the log file cannot be shrunk, no matter how many inactive VLFs it has at the beginning.
    Hope it helps!!
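    You can check this for yourself with the DMVs. A quick sketch, assuming a database called MyDB (hypothetical name):

    ```sql
    -- Why can't the log be truncated right now?
    -- 'NOTHING' means it can; 'LOG_BACKUP' means a log backup is needed.
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'MyDB';

    -- On SQL Server 2016 SP2 and later, inspect the VLFs directly
    -- (vlf_status 2 = active/in use, 0 = reusable):
    SELECT vlf_sequence_number, vlf_status, vlf_size_mb
    FROM sys.dm_db_log_info(DB_ID('MyDB'));
    ```

    Run the second query after a log backup and you should see the previously active VLFs flip to status 0, which is what makes a subsequent shrink effective.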

  • Dataguard lost both Primary redo log and standby redo log files

    Hi,
    I am new to Data Guard. I came across a scenario where we lose both the primary redo log files and the standby redo log files.
    Can someone please help me understand how to recover from this situation?
    Thanks!

    >lose both primary redo log file and standby redo log files
    We have to be very clear.
    There are (set A) online redo log files and (set B) standby redo log files, at (location 1) the primary and (location 2) the standby.
    The standby redo log files, depending on the configuration, are not strictly mandatory. The standby can be applying redo without online redo log files present as well, depending on how it was set up.
    So the question is: did you lose the online redo log files at the primary? Didn't the primary shut itself down then? If so, you have to do an incomplete recovery at the primary, OR switch over to the standby (which may or may not have received the last transaction, depending on how it was configured and operating), OR restore from the standby (again, with possible loss of transactions) to the primary.
    Hemant K Chitale
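    In outline, the incomplete-recovery path on the primary looks something like the following. This is only a sketch of the standard cancel-based recovery sequence; the exact steps depend on your backup method (RMAN or user-managed) and your Data Guard configuration:

    ```sql
    -- On the primary, after restoring the datafiles from backup.
    -- Because the online redo logs are lost, recovery is incomplete
    -- and the database must be opened with RESETLOGS:
    STARTUP MOUNT;
    RECOVER DATABASE UNTIL CANCEL;
    -- apply the available archived logs; type CANCEL when none remain
    ALTER DATABASE OPEN RESETLOGS;
    ```

    Note that opening with RESETLOGS creates a new incarnation of the database, so the standby will typically need to be re-created or flashed back to follow the new incarnation.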

  • VMX20 - Limit virtual machine log file size and number

    Hi all,
    Here are the parameters
    log.rotateSize=1000000
    log.keepOld=10
    that I have applied in the virtual machine's VMX file. After I run the compliance checker again, it still shows a fail.
    I have also tried restarting the virtual machine a few times, but the log files keep increasing.
    My understanding is that with these parameters, once a virtual machine log file exceeds 1,000 KB, or there are already 10 log files, a new log file will replace the oldest one; correct me if I am wrong.
    Here are my questions:
    1. Are these parameters restricted to particular versions of VMware?
    2. If a log file is under 1,000 KB but there are already 10 log files, will a new log file still replace the oldest one?
    Thanks.
    Best regards,
    Wong Pak Lian

    Hi a_nut_in,
    The problem is solved in VMware vSphere version 5 with the Compliance Checker for vSphere 5.0.
    But under VMware vSphere version 4 I have found that the parameters are accepted and work, yet the result still shows a fail; is this a software bug?
    Thanks.
    Best regards,
    Wong Pak Lian
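    For reference, the two settings go directly into the virtual machine's .vmx file, with the values quoted as usual for VMX options (rotateSize is in bytes; rotated logs are kept as vmware-0.log, vmware-1.log, and so on):

    ```
    # Rotate vmware.log once it reaches ~1 MB,
    # and keep at most 10 old log files:
    log.rotateSize = "1000000"
    log.keepOld = "10"
    ```

    The VM must be power-cycled (not just rebooted from inside the guest) for VMX changes to take effect.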
