CV_HOME/cv/log getting full
Hello,
We have installed RAC 11.2.0.1 on two nodes and have observed a lot of log files in CV_HOME/cv/log. According to the documentation, this is the directory that collects traces when you run Cluster Verify with SRVM_TRACE=TRUE. The logs are generated and rotated every 5 minutes.
-rw-r----- 1 oracle ci 130746 Jan 14 20:13 cvutrace.log.0_20110114201611
-rw-r----- 1 oracle ci 59616 Jan 14 20:16 cvutrace.log.0_20110114201809
-rw-r----- 1 oracle ci 130770 Jan 14 20:18 cvutrace.log.0_20110114202112
-rw-r----- 1 oracle ci 59602 Jan 14 20:21 cvutrace.log.0_20110114202309
-rw-r----- 1 oracle ci 130756 Jan 14 20:23 cvutrace.log.0_20110114202612
-rw-r----- 1 oracle ci 59602 Jan 14 20:26 cvutrace.log.0_20110114202808
Has anyone observed similar behaviour? What is generating these files? It does not look to me like they come from a database job or an operating-system cron.
Thanks for your replies!
Hi,
Check this note; it should solve the problem:
*Large Number of Trace Files Generated Every 5 min Under $ORACLE_BASE/grid/cv/log/ [ID 1192676.1]*
Regards,
Levi Pereira
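Until the fix from that note is applied, a cron-driven purge of the rotated traces keeps the directory from filling. A minimal sketch; the directory path and retention period are assumptions to adjust for your environment before scheduling this from cron:

```shell
# purge_cvu_traces DIR [DAYS] -- delete rotated CVU trace files older
# than DAYS (default 7) from DIR. Path and retention are assumptions;
# adjust them to your environment.
purge_cvu_traces() {
    dir="$1"
    days="${2:-7}"
    # Touch only the rotated cvutrace.log.* files, nothing else.
    find "$dir" -maxdepth 1 -type f -name 'cvutrace.log.*' \
        -mtime +"$days" -print -delete
}
```

For example, `purge_cvu_traces "$CV_HOME/cv/log" 7` from a daily root cron entry would keep a week of traces.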
Similar Messages
-
Oracle 10g SE DB install - application log gets full
Hi
I have installed Oracle 10g SE database release 10.1.0.2 on a Windows server (AMD Athlon 1.3 GHz, 512 MB RAM, Windows 2000 Server SP4).
After installation the application log fills up with this error message:
Source: Perflib
Description:
The Open Procedure for service "Nbf" in DLL "C:\WINNT\system32\perfctrs.dll" failed. Performance data for this service will not be available. Status code returned is data DWORD 0.
Does anyone know what the cause is and how to solve the problem?
Regards, Petr
Matt, I am glad (well, not glad, it sucks, but I'm glad I'm not alone) to hear you are experiencing the same thing. I have literally tried everything to get this to work. I even took a brand-new laptop out of the box and Adobe was the only thing I installed. I can usually follow other posts and fix issues, but I am stumped and WAY aggravated. I love the concept, as does my editing team, but I'm in charge of installing and updating 5 PCs and my team is now aggravated that they can't get what they need to do their jobs. Even if you download the trials, you don't get everything you need for the whole cloud process. Not to mention we all need updates badly and really need to start using Edge Animate. -
Client Deletion, Transaction log getting full.
Hi Gurus,
We are trying to delete a client by running:
clientremove
client = 200 (*200 being the client we want to remove)
select *
The allocated transaction log disk space is 50 GB and it keeps filling up (the database is in simple recovery mode), so the client deletion never completes. The table being accessed is 86 GB, and I think client 200 occupies around 40-45 GB of it; client 200 has 15.5 million rows in the table.
Am I using the proper command? Is there an explicit commit I can include, or any workaround to delete the client without hammering the log file?
Thanks guys
Edited by: SAP_SQLDBA on Jan 22, 2010 6:51 PM
Hi,
Back up the active transaction log and then shrink the file directly.
Please refer the following SAP Notes to get more information.
[ Note 625546 - Size of transaction log file is too big|https://websmp130.sap-ag.de/sap%28bD1lbiZjPTAwMQ==%29/bc/bsp/spn/sapnotes/index2.htm?numm=625546]
[ Note 421644 - SQL error 9002: The transaction log is full|https://websmp130.sap-ag.de/sap%28bD1lbiZjPTAwMQ==%29/bc/bsp/spn/sapnotes/index2.htm?numm=421644]
Which version of SQL Server are you using? Which Service Pack level?
Frequently perform Transaction Log backup (BACKUP TRANS) to remove inactive space within the Transaction Log Files.
Please refer [Note 307911 - Transaction Log Filling Up in SQL Server 7.0|https://websmp130.sap-ag.de/sap%28bD1lbiZjPTAwMQ==%29/bc/bsp/spn/sapnotes/index2.htm?numm=307911] to get more information about the reasons for such kind of situation.
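The backup-then-shrink advice above can be scripted. A hedged sketch that only emits the T-SQL, to be piped into osql/sqlcmd; the database name, logical log file name, backup path, and 1024 MB target size are all placeholders. Note that in SIMPLE recovery there is no log backup to take, so only the DBCC SHRINKFILE step applies there:

```shell
# emit_log_shrink_sql DB LOGFILE BACKUPFILE -- print the T-SQL for a
# log backup followed by a shrink. All three names are placeholders.
emit_log_shrink_sql() {
    db="$1"; logfile="$2"; backup="$3"
    cat <<EOF
BACKUP LOG [$db] TO DISK = N'$backup';
USE [$db];
DBCC SHRINKFILE (N'$logfile', 1024);
EOF
}
```

Usage would be something like `emit_log_shrink_sql SID SIDLOG1 'D:\backup\sid.trn' | sqlcmd -S <server>`, run between batches of the client deletion.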
Regards,
Bhavik G. Shroff -
Tempdb getting full but nothing is running on database server.
Hi Experts,
I have observed strange behaviour over the last couple of days: the tempdb log is getting full periodically, but looking at the running transactions, nothing specific is running on the server that could fill the tempdb log.
Can anyone advise on this?
Shivraj Patil
To what size is the tempdb tlog growing? Make sure you don't have auto-growth disabled on the tempdb tlog file. Also make sure there isn't a low max size specified for the tlog file. Only UNDO information is stored in the tempdb log (no REDO information), so it should NOT bloat as much as other user DBs unless you have a runaway query consuming too much tlog space.
Satish Kartan www.sqlfood.com -
Database Log File getting full by Reindex Job
Hey guys
I have an issue with one of my databases during the reindex job. Most of the time the log file is 99% free, but during the reindex job the log file fills up and runs out of space, so the reindex job fails and I also get errors from the DB due to log file space. Any suggestions?
Please note that changing to BULK_LOGGED recovery will make you lose point-in-time recovery: the index rebuild would be minimally logged, and for the period this job is running you lose point-in-time recovery, so take steps accordingly. Plus you need to take a log backup after changing back to FULL recovery.
I guess Ola's script would suffice; if not, you would have to increase space on the drive where the log file resides. Index rebuild is fully logged in FULL recovery.
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
Dealing with a third-party application that inserts into its log table; watching SQL Profiler, I see a ton of the same traffic. It tries to insert into the table but generates an error saying the transaction log for that DB is full.
Well, earlier I had reset the recovery mode to simple, from full, since this is a test system and I don't really care about recovery.
The message says to check the log_reuse_wait_desc column in sys.databases, and there the value is 'CHECKPOINT', at least at that point in time.
There is plenty of space in the transaction log and the physical disk has plenty of space as well.
What could be causing the error that seems to suggest the transaction log is full, when in fact it is not?
What is the setup for autogrowth on the log file?
Transaction log shrink:
http://www.sqlusa.com/bestpractices2005/shrinklog/
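The two checks mentioned in this thread (log_reuse_wait_desc and the log file's autogrowth settings) can be bundled into one pass. A sketch that simply prints the T-SQL, to be piped into sqlcmd against your instance:

```shell
# emit_log_diag_sql -- print two diagnostic queries: the
# log_reuse_wait_desc of every database, and the growth/max-size
# settings of each log file.
emit_log_diag_sql() {
    cat <<'EOF'
SELECT name, log_reuse_wait_desc FROM sys.databases;
SELECT name, growth, is_percent_growth, max_size
FROM sys.master_files WHERE type_desc = 'LOG';
EOF
}
```

A `growth` of 0 or a low `max_size` on the log file would explain a "log full" error even when the disk has free space.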
Kalman Toth Database & OLAP Architect
SQL Server 2014 Design & Programming
New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012 -
Root file system getting full SLES 10 SAP ERP6
Hi Gurus
I am having an unusual problem. My / filesystem is getting full and I can't pick up what is causing it. I have checked the logs in /var/messages. I don't know what is writing to /; I didn't copy anything directly onto /.
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 7.8G 7.5G 0 100% /
SLES 10 64 bit ERP6 SR2
Anyone who had a similar problem.
Any ideas are welcome
cd /
xwbr1:/ # du -hmx --max-depth=1
1 ./lost+found
48 ./etc
1 ./boot
1 ./sapdb
1 ./sapmnt
1430 ./usr
0 ./proc
0 ./sys
0 ./dev
85 ./var
8 ./bin
1 ./home
83 ./lib
12 ./lib64
1 ./media
1 ./mnt
488 ./opt
56 ./root
14 ./sbin
2 ./srv
1 ./tmp
1 ./sapcd
1 ./backupdisk
1 ./dailybackups
1 ./fulloffline
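Since du only shows where space sits now, a complementary check for the ~200 MB/day growth is to look for large, recently modified files without crossing mount points. A sketch; the day and size thresholds are examples to tune:

```shell
# recent_big_files DIR [DAYS] [SIZE] -- list files under DIR modified
# in the last DAYS days and larger than SIZE, staying on one
# filesystem (-xdev) so /proc, NFS mounts etc. are skipped.
recent_big_files() {
    dir="${1:-/}"; days="${2:-2}"; size="${3:-+50M}"
    find "$dir" -xdev -type f -mtime -"$days" -size "$size" \
        -exec ls -lh {} \; 2>/dev/null
}
```

Running `recent_big_files / 2 +50M` on the affected domain should surface whatever is being appended to daily.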
cd /root
xwbr1:~ # du -hmx --max-depth=1
1 ./.gnupg
1 ./bin
1 ./.kbd
1 ./.fvwm
2 ./.wapi
12 ./dsadiag
1 ./.gconf
1 ./.gconfd
1 ./.skel
1 ./.gnome
1 ./.gnome2
1 ./.gnome2_private
1 ./.metacity
1 ./.gstreamer-0.10
1 ./.nautilus
1 ./.qt
1 ./Desktop
1 ./.config
1 ./Documents
1 ./.thumbnails
38 ./.sdtgui
1 ./sdb
1 ./.sdb
4 ./.mozilla
56 . -
File system is getting full Frequently in Federated Portal
Hi
In my production federated portal the file system
File system /usr/sap/<SID>/JC01 -
is getting 100% full every 30 minutes.
What could be the reason for this behavior? This has been happening for 6 days.
Can I be assisted on this?
Regards
Hi Michael,
It does not release the space automatically. What does this command show: du -k | sort -n ? The following are the last lines.
I pasted the lines vertically, but on this page all the lines get mixed and clubbed together horizontally. The output of the command starts now ************************************
214260 ./j2ee/cluster/server0/temp/webdynpro
214660 ./SDM/root/origin/sap.com/caf
214704 ./SDM/root/origin/com.adobe/PDFManipulation/Adobe Systems/0/800.20070626144044.156488
217212 ./j2ee/cluster/server0/temp
223168 ./j2ee/cluster/server1/apps/sap.com/irj/servlet_jsp/irj/root/web-inf/portal/portalapps
224000 ./j2ee/cluster/server0/apps/sap.com/irj/servlet_jsp/irj/root/web-inf/portal/portalapps
225844 ./j2ee/cluster/server1/apps/sap.com/irj/servlet_jsp/irj/root/web-inf/portal
226672 ./j2ee/cluster/server0/apps/sap.com/irj/servlet_jsp/irj/root/web-inf/portal
227456 ./j2ee/cluster/dispatcher/log
233540 ./SDM/root/origin/com.adobe/FontManagerService
233540 ./SDM/root/origin/com.adobe/FontManagerService/Adobe Systems
233540 ./SDM/root/origin/com.adobe/FontManagerService/Adobe Systems/0
262300 ./j2ee/cluster/dispatcher
283464 ./SDM/root/origin/sap.com/ess
292296 ./j2ee/cluster/server0/apps/sap.com/irj/servlet_jsp/irj/root/web-inf/deployment
295108 ./j2ee/cluster/server1/apps/sap.com/irj/servlet_jsp/irj/root/portalapps
295324 ./j2ee/cluster/server0/apps/sap.com/irj/servlet_jsp/irj/root/portalapps
300344 ./j2ee/cluster/server1/temp/webdynpro/temp
300344 ./j2ee/cluster/server1/temp/webdynpro/temp/sap.com
350848 ./j2ee/cluster/server1/apps/sap.com/irj/servlet_jsp/irj/root/web-inf
351096 ./exe
366616 ./SDM/root/origin/com.adobe/XMLFormService
366616 ./SDM/root/origin/com.adobe/XMLFormService/Adobe Systems
366616 ./SDM/root/origin/com.adobe/XMLFormService/Adobe Systems/0
383428 ./SDM/program
388232 ./SDM/root/origin/sap.com/tc
487640 ./j2ee/cluster/server1/temp/webdynpro
490564 ./j2ee/cluster/server1/temp
508248 ./j2ee/os_libs/adssap
519288 ./j2ee/cluster/server0/apps/sap.com/irj/servlet_jsp/irj/root/web-inf
567240 ./SDM/root/origin/com.adobe/PDFManipulation
567240 ./SDM/root/origin/com.adobe/PDFManipulation/Adobe Systems
567240 ./SDM/root/origin/com.adobe/PDFManipulation/Adobe Systems/0
645996 ./j2ee/cluster/server1/apps/sap.com/irj/servlet_jsp/irj/root
650780 ./j2ee/cluster/server1/apps/sap.com/irj
650780 ./j2ee/cluster/server1/apps/sap.com/irj/servlet_jsp
650780 ./j2ee/cluster/server1/apps/sap.com/irj/servlet_jsp/irj
723796 ./j2ee/os_libs
814652 ./j2ee/cluster/server0/apps/sap.com/irj/servlet_jsp/irj/root
819436 ./j2ee/cluster/server0/apps/sap.com/irj
819436 ./j2ee/cluster/server0/apps/sap.com/irj/servlet_jsp
819436 ./j2ee/cluster/server0/apps/sap.com/irj/servlet_jsp/irj
1248364 ./SDM/root/origin/com.adobe
1319004 ./j2ee/cluster/server1/apps/sap.com
1336304 ./j2ee/cluster/server1/apps
1470012 ./j2ee/cluster/server0/log/archive
1665496 ./j2ee/cluster/server1/log/archive
1681300 ./SDM/root/origin/sap.com
1730984 ./j2ee/cluster/server0/log
1856844 ./j2ee/cluster/server0/apps/sap.com
1878272 ./j2ee/cluster/server0/apps
1927368 ./j2ee/cluster/server1/log
2929664 ./SDM/root/origin
2931364 ./SDM/root
3314792 ./SDM
3881168 ./j2ee/cluster/server1
3943000 ./j2ee/cluster/server0
8094976 ./j2ee/cluster
9156032 ./j2ee
12878604 .
***************** end of output
Regards
Edited by: sidharthmellam on Oct 27, 2009 4:15 AM -
How do I get full sleep mode back in Mavericks?
With the release of Mavericks, full sleep has vanished. All "sleep" does now is turn the monitor(s) off. The external hard drive remains powered up, the external USB sound card's activity light remains on, the fan continues to blow hot air, indicating that the processor has not gone to low power, and the power LED remains constant and fully lit. In ML everything used to power down and the power LED would pulsate. Does anyone know how to get full sleep back? Thanks.
Test after each of the following steps that you haven’t already tried:
Step 1
Take all the steps suggested in this support article. That's the starting point for any further effort to solve the problem. Skipping any of those steps may mean that the problem won't be solved. Note that, as stated in the article, the computer will not sleep if Internet Sharing is enabled.
Step 2
From the menu bar, select
▹ System Preferences ▹ Accessibility ▹ Speakable Items: Off
Step 3
Select
▹ System Preferences ▹ Bluetooth ▹ Advanced...
and uncheck both boxes marked
Open Bluetooth Setup Assistant at startup if...
Step 4
Reset the SMC.
Step 5
Boot in safe mode and log in to the account with the problem. Note: If FileVault is enabled on some models, or if a firmware password is set, or if the boot volume is a software RAID, you can’t do this. Post for further instructions.
The login screen appears even if you usually log in automatically. You must know your login password in order to log in. If you’ve forgotten the password, you will need to reset it before you begin.
Safe mode is much slower to boot and run than normal. Don’t launch any applications at first. If sleep still doesn’t work properly, back up all data and reinstall the OS. After that, if you still have the issue, make a “Genius” appointment at an Apple Store to have the machine tested.
If sleep now works as expected, go on to the next step.
Step 6
Still in safe mode, launch the usual set of applications that are running when you have the problem, including your login items, one at a time, testing after each one. Some applications may not work; skip them. You might be able to identify the cause of the problem this way.
Step 7
If sleep is still working after you’ve launched all the usual applications, reboot as usual (not in safe mode) and test again. If sleep still works, you’re done, at least for the moment.
If you still have the sleep issue after booting out of safe mode, post again. -
Hi,
My /usr/sap/SMN/DVEBMGS02/work directory is getting full regularly; large .dat files are being written into the work directory. I am moving these files to another location daily. Is there any permanent solution for this?
Thank you
If you have an ABAP + Java environment, then these files are created automatically
at the following location:
/usr/sap/SID/INSTANCE/j2ee/cluster/server0/log/archive/.%2Flog%...........
But this usually happens if you have an XI environment. So create an OS-level script to find the files in the above location and delete them automatically.
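Such an OS-level cleanup could be sketched as follows: a gzip-first variant rather than outright deletion, so the content is still recoverable. The directory and retention period are assumptions, and you should check first that the instance no longer holds the files open:

```shell
# archive_old_dat DIR [DAYS] -- gzip .dat files in DIR untouched for
# more than DAYS days, instead of moving them by hand.
archive_old_dat() {
    dir="$1"; days="${2:-1}"
    find "$dir" -maxdepth 1 -type f -name '*.dat' -mtime +"$days" \
        -exec gzip -f {} \;
}
```

A daily cron entry such as `archive_old_dat /usr/sap/SMN/DVEBMGS02/work 1` would replace the manual moving, with a second job deleting the .gz files after a longer retention.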
Regards,
Kamal Kishore -
SAPCCM4X Agent directory getting full.
Hello,
The sapccm4x agent running on the satellite systems produces log files, i.e. dev_sapccm4x and dev_rfc.trc, which keep growing, and the file systems get full on account of this. Only restarting the agent moves dev_sapccm4x to dev_sapccm4x.old, but is there any other way to limit the size of these files to prevent space issues?
Thanks and Regards,
Vinod Menon
Hello,
I am not sure about a parameter to limit the file size, but you can certainly monitor it and take preventive action instead of reactive action.
You can use the parameter MONITOR_FILESIZE_KB to monitor in rz20.
Here is the link containing detailed information.
http://help.sap.com/saphelp_nw04/helpdata/en/fa/e4ab3b92818b70e10000000a114084/content.htm
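As a stopgap alongside that monitoring, a copy-truncate rotation keeps the trace from growing unbounded without restarting the agent. This sketch truncates the file in place so an open file descriptor keeps working; the size threshold is an example, and whether sapccm4x tolerates this is an assumption to verify on a test system first:

```shell
# rotate_if_large FILE [MAX_KB] -- copy-truncate rotation for a trace
# file an agent keeps open: copy it aside with a timestamp suffix,
# then truncate the original in place.
rotate_if_large() {
    f="$1"; max_kb="${2:-10240}"
    [ -f "$f" ] || return 0
    kb=$(du -k "$f" | awk '{print $1}')
    if [ "$kb" -gt "$max_kb" ]; then
        cp "$f" "$f.$(date +%Y%m%d%H%M%S)" && : > "$f"
    fi
}
```

Run from cron, e.g. hourly, against the agent's work directory; old timestamped copies can then be deleted on their own schedule.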
Hope this helps.
Thanks,
Manoj Chintawar -
Archive Log vs Full Backup Concept
Hi,
I just need some clarification on how backups and archive logs work. Let's say starting at 1 PM I have archive logs 1,2,3,4,5 and then I perform a full backup at 6 PM.
Then I resume generating archive logs at 6PM to get logs 6,7,8,9,10. I then stop at 11PM.
If my understanding is correct, the archive logs should allow me to restore oracle to a point in time anywhere between 1PM and 11PM. But if I only have the full backup then I can only restore to a single point, which is 6PM. Is my understanding correct?
Do the archive logs only get applied to the datafiles when the backup occurs or only when a restore occurs? It doesn't seem like the archive logs get applied on the fly.
Thanks in advance.
thelok wrote:
Thanks for the great explanation! So I can do a point-in-time restore from any time since the datafiles were last written (or from when I have the last set of backed-up datafiles plus the archive logs). From what you are saying, I can force the datafiles to be written from the redo logs (by doing a checkpoint with "alter system archive log current" or "backup database plus archivelog"), and then I can delete all the archive logs that have an SCN less than the checkpoint SCN of the datafiles. Is this true? This would be for the purposes of preserving disk space.
Hi,
See this example. I hope this explain your doubt.
# My current date is 06-11-2011 17:15
# I do not have a backup of this database
# My retention policy is to keep 1 backup
# I start by listing the archive logs.
RMAN> list archivelog all;
using target database control file instead of recovery catalog
List of Archived Log Copies
Key Thrd Seq S Low Time Name
29 1 8 A 29-10-2011 12:01:58 +HR/dbhr/archivelog/2011_10_31/thread_1_seq_8.399.766018837
30 1 9 A 31-10-2011 23:00:30 +HR/dbhr/archivelog/2011_11_03/thread_1_seq_9.409.766278025
31 1 10 A 03-11-2011 23:00:23 +HR/dbhr/archivelog/2011_11_04/thread_1_seq_10.391.766366105
32 1 11 A 04-11-2011 23:28:23 +HR/dbhr/archivelog/2011_11_06/thread_1_seq_11.411.766516065
33 1 12 A 05-11-2011 23:28:49 +HR/dbhr/archivelog/2011_11_06/thread_1_seq_12.413.766516349
## See: I have archive logs from time "29-10-2011 12:01:58" until "05-11-2011 23:28:49", but I don't have any backup of the database.
# So I perform a backup of the database including archive logs.
RMAN> backup database plus archivelog delete input;
Starting backup at 06-11-2011 17:15:21
## Note above: RMAN forces archiving of the current log; the archive log generated here would be usable only with a previous backup.
## That is not my case... I don't have a backup of the database yet.
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=159 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=8 recid=29 stamp=766018840
input archive log thread=1 sequence=9 recid=30 stamp=766278027
input archive log thread=1 sequence=10 recid=31 stamp=766366111
input archive log thread=1 sequence=11 recid=32 stamp=766516067
input archive log thread=1 sequence=12 recid=33 stamp=766516350
input archive log thread=1 sequence=13 recid=34 stamp=766516521
channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:15:23
channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:15:38
piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525 tag=TAG20111106T171521 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:16
channel ORA_DISK_1: deleting archive log(s)
archive log filename=+HR/dbhr/archivelog/2011_10_31/thread_1_seq_8.399.766018837 recid=29 stamp=766018840
archive log filename=+HR/dbhr/archivelog/2011_11_03/thread_1_seq_9.409.766278025 recid=30 stamp=766278027
archive log filename=+HR/dbhr/archivelog/2011_11_04/thread_1_seq_10.391.766366105 recid=31 stamp=766366111
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_11.411.766516065 recid=32 stamp=766516067
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_12.413.766516349 recid=33 stamp=766516350
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_13.414.766516521 recid=34 stamp=766516521
Finished backup at 06-11-2011 17:15:38
## RMAN finishes the backup of the archive logs and starts the backup of the database.
## My backup starts at "06-11-2011 17:15:38"
Starting backup at 06-11-2011 17:15:38
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=+HR/dbhr/datafile/system.386.765556627
input datafile fno=00003 name=+HR/dbhr/datafile/sysaux.396.765556627
input datafile fno=00002 name=+HR/dbhr/datafile/undotbs1.393.765556627
input datafile fno=00004 name=+HR/dbhr/datafile/users.397.765557979
input datafile fno=00005 name=+BFILES/dbhr/datafile/bfiles.257.765542997
channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:15:39
channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:16:03
piece handle=+FRA/dbhr/backupset/2011_11_06/nnndf0_tag20111106t171539_0.269.766516539 tag=TAG20111106T171539 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:24
Finished backup at 06-11-2011 17:16:03
## And it finishes at "06-11-2011 17:16:03", so I can recover my database from this time.
## I will need the archive logs (transactions) which were generated during the backup of the database.
## Note: during the backup some blocks are copied before others; the SCNs are in an inconsistent state.
## To make it consistent I need to apply the archive log which has all the transactions recorded.
## Starting another backup of archived log generated during backup.
Starting backup at 06-11-2011 17:16:04
## So RMAN automatically forces another "checkpoint" after the backup finishes,
## archiving the current log, because this archive log has all the transactions needed to bring the database to a consistent state.
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=35 stamp=766516564
channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:16:05
channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:16:06
piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171604_0.272.766516565 tag=TAG20111106T171604 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_1: deleting archive log(s)
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_14.414.766516565 recid=35 stamp=766516564
Finished backup at 06-11-2011 17:16:06
## Note: I can recover my database from time "06-11-2011 17:16:03" (when the full backup finished)
## until "06-11-2011 17:16:04" (the last archive log generated); that is my recovery window in this scenario.
## Listing the backups I have:
## Archive logs in a backup set from before the full backup started - *BP Key: 40*
## Full database backup in a backup set - *BP Key: 41*
## Archive logs in a backup set from after the full backup finished - *BP Key: 42*
RMAN> list backup;
List of Backup Sets
===================
BS Key Size Device Type Elapsed Time Completion Time
40 196.73M DISK 00:00:15 06-11-2011 17:15:37
*BP Key: 40* Status: AVAILABLE Compressed: NO Tag: TAG20111106T171521
Piece Name: +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525
List of Archived Logs in backup set 40
Thrd Seq Low SCN Low Time Next SCN Next Time
1 8 766216 29-10-2011 12:01:58 855033 31-10-2011 23:00:30
1 9 855033 31-10-2011 23:00:30 896458 03-11-2011 23:00:23
1 10 896458 03-11-2011 23:00:23 937172 04-11-2011 23:28:23
1 11 937172 04-11-2011 23:28:23 976938 05-11-2011 23:28:49
1 12 976938 05-11-2011 23:28:49 1023057 06-11-2011 17:12:28
1 13 1023057 06-11-2011 17:12:28 1023411 06-11-2011 17:15:21
BS Key Type LV Size Device Type Elapsed Time Completion Time
41 Full 565.66M DISK 00:00:18 06-11-2011 17:15:57
*BP Key: 41* Status: AVAILABLE Compressed: NO Tag: TAG20111106T171539
Piece Name: +FRA/dbhr/backupset/2011_11_06/nnndf0_tag20111106t171539_0.269.766516539
List of Datafiles in backup set 41
File LV Type Ckp SCN Ckp Time Name
1 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/system.386.765556627
2 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/undotbs1.393.765556627
3 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/sysaux.396.765556627
4 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/users.397.765557979
5 Full 1023422 06-11-2011 17:15:39 +BFILES/dbhr/datafile/bfiles.257.765542997
BS Key Size Device Type Elapsed Time Completion Time
42 3.00K DISK 00:00:02 06-11-2011 17:16:06
*BP Key: 42* Status: AVAILABLE Compressed: NO Tag: TAG20111106T171604
Piece Name: +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171604_0.272.766516565
List of Archived Logs in backup set 42
Thrd Seq Low SCN Low Time Next SCN Next Time
1 14 1023411 06-11-2011 17:15:21 1023433 06-11-2011 17:16:04
## Here is where what I am trying to explain makes sense.
## As I don't have a backup of the database older than my last backup, all archive logs generated before my full backup are useless.
## Deleting what is obsolete in my environment, RMAN chooses backup set 40 (i.e. all archived logs generated before my full backup).
RMAN> delete obsolete;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
using channel ORA_DISK_1
Deleting the following obsolete backups and copies:
Type Key Completion Time Filename/Handle
*Backup Set 40* 06-11-2011 17:15:37
Backup Piece 40 06-11-2011 17:15:37 +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525
Do you really want to delete the above objects (enter YES or NO)? yes
deleted backup piece
backup piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525 recid=40 stamp=766516523
Deleted 1 objects
In the above example I could have run "delete archivelog all" before starting the backup, because those logs would not be needed, but to show the example I followed this unnecessary way (back up the archive logs and delete them afterwards).
Regards,
Levi Pereira
Edited by: Levi Pereira on Nov 7, 2011 1:02 AM -
UniPack / root filesystem on E10K running SC3.4 gets full
Hi !
The problem is within a domain in my E10K running sun cluster 3.4. I am running a CorDaptix application which is the electricity billing system developed by SPL World Group. This is a web based application.
My root (/) filesystem, which sits on internal UniPack 18 GB disks on one of my domains, gets full. I have moved /opt and re-created it on an external disk. This decreased my root filesystem usage, but I can see that it is filling up my volume again, about 200 MB daily. When I check all my logs and look for files modified within the last few days, I cannot find the cause.
Please, please help
Process accounting should help clear up all the used space on the / partition for now...
If you make a new /var partition, I'd wager you'll find far fewer instances of / filling up afterward.
Process accounting used to be something worthwhile if you didn't have a very busy system, but if you have a big honking database with many connections or a web server taking plenty of hits and doing subsequent system level processing (calling perl scripts and other things like that), you'll fill up the logfile system quickly.
Teamquest probably spawns a lot of new processes during its monitoring and checking of your system health, so that would also contribute to things.
Bottom line, I agree: make a new /var partition, copy (via some methodology: tar, cpio, ufsdump) from the original location to the new one, then edit /etc/vfstab to mount /var on boot, and if you have the opportunity, reboot with the new /var (plus the restored /var/sadm area), and you should be in good shape -
I am trying to complete a large initial load of sales statistics data from an R/3 system into a BI system. I get this error:
Message number 999999 reached. Log is full
I have searched OSS and found many references to a programming error that causes this, but not for program RFKK_MASS_ACT_SINGLE_JOB. Is this a known issue? Is there something I can set up differently?
Below is the full job log:
Job started
Step 001 started (program RFKK_MASS_ACT_SINGLE_JOB, variant &0000000119033, user ID
Key 08023TSW2 AA for G/L acounting does not exist
Message number 999999 reached. Log is full
Message number 999999 reached. Log is full
Rollback work executed
Job cancelled after system exception ERROR_MESSAGE
Hi,
We are running a program in the background to close orders with FM BAPI_SALESORDER_CHANGE.
The job is failing with these messages in the job log:
APO: Message number 999999 reached. Log is full
APO: Task handler for transaction IZVSlIkXAQxX0000bkikP 0 has not been registered
Can you please provide some pointers on how to stop these messages in the log, or how to disable the log? These messages appear after FM AVAILABILITY_CHECK (include LATP4FA0).
What is the reason for these messages, and what is the solution to this problem?
Thanks,
Dipak -
Hi All,
We installed an SRM system with DB2 9.1 and we are using LOGARCHMETH1. The log directory /db2/SID/log_dir is getting full every time.
We are taking full online backup including logs every week for the Development server.
Can I delete the logs in log_dir, since we are taking a full online backup including logs? We don't have tape to back up the archive logs.
Please suggest.
Regards,
Sreekanth
Hi All,
Please find the DB2 config parameters below. Please suggest whether they are OK or whether anything needs to be changed.
H:\db2\db2srd\db2_software\BIN>db2 get db cfg for SRD
Database Configuration for Database SRD
Database configuration release level = 0x0b00
Database release level = 0x0b00
Database territory = en_US
Database code page = 1208
Database code set = UTF-8
Database country/region code = 1
Database collating sequence = IDENTITY_16BIT
Alternate collating sequence (ALT_COLLATE) =
Database page size = 16384
Dynamic SQL Query management (DYN_QUERY_MGMT) = DISABLE
Discovery support for this database (DISCOVER_DB) = ENABLE
Restrict access = NO
Default query optimization class (DFT_QUERYOPT) = 5
Degree of parallelism (DFT_DEGREE) = 1
Continue upon arithmetic exceptions (DFT_SQLMATHWARN) = NO
Default refresh age (DFT_REFRESH_AGE) = 0
Default maintained table types for opt (DFT_MTTB_TYPES) = SYSTEM
Number of frequent values retained (NUM_FREQVALUES) = 10
Number of quantiles retained (NUM_QUANTILES) = 20
Backup pending = NO
Database is consistent = NO
Rollforward pending = NO
Restore pending = NO
Multi-page file allocation enabled = YES
Log retain for recovery status = NO
User exit for logging status = YES
Self tuning memory (SELF_TUNING_MEM) = ON
Size of database shared memory (4KB) (DATABASE_MEMORY) = 2795520
Database memory threshold (DB_MEM_THRESH) = 10
Max storage for lock list (4KB) (LOCKLIST) = AUTOMATIC
Percent. of lock lists per application (MAXLOCKS) = AUTOMATIC
Package cache size (4KB) (PCKCACHESZ) = AUTOMATIC
Sort heap thres for shared sorts (4KB) (SHEAPTHRES_SHR) = AUTOMATIC
Sort list heap (4KB) (SORTHEAP) = AUTOMATIC
Database heap (4KB) (DBHEAP) = 25000
Catalog cache size (4KB) (CATALOGCACHE_SZ) = 2560
Log buffer size (4KB) (LOGBUFSZ) = 1024
Utilities heap size (4KB) (UTIL_HEAP_SZ) = 10000
Buffer pool size (pages) (BUFFPAGE) = 10000
Max size of appl. group mem set (4KB) (APPGROUP_MEM_SZ) = 128000
Percent of mem for appl. group heap (GROUPHEAP_RATIO) = 25
Max appl. control heap size (4KB) (APP_CTL_HEAP_SZ) = 1600
SQL statement heap (4KB) (STMTHEAP) = 5120
Default application heap (4KB) (APPLHEAPSZ) = 4096
Statistics heap size (4KB) (STAT_HEAP_SZ) = 15000
Interval for checking deadlock (ms) (DLCHKTIME) = 10000
Lock timeout (sec) (LOCKTIMEOUT) = 3600
Changed pages threshold (CHNGPGS_THRESH) = 40
Number of asynchronous page cleaners (NUM_IOCLEANERS) = AUTOMATIC
Number of I/O servers (NUM_IOSERVERS) = AUTOMATIC
Index sort flag (INDEXSORT) = YES
Sequential detect flag (SEQDETECT) = YES
Default prefetch size (pages) (DFT_PREFETCH_SZ) = 32
Track modified pages (TRACKMOD) = ON
Default number of containers = 1
Default tablespace extentsize (pages) (DFT_EXTENT_SZ) = 2
Max number of active applications (MAXAPPLS) = AUTOMATIC
Average number of active applications (AVG_APPLS) = AUTOMATIC
Max DB files open per application (MAXFILOP) = 1950
Log file size (4KB) (LOGFILSIZ) = 16380
Number of primary log files (LOGPRIMARY) = 20
Number of secondary log files (LOGSECOND) = 40
Changed path to log files (NEWLOGPATH) =
Path to log files = H:\db2\SRD\log_dir\NODE0000\
Overflow log path (OVERFLOWLOGPATH) =
Mirror log path (MIRRORLOGPATH) =
First active log file = S0002249.LOG
Block log on disk full (BLK_LOG_DSK_FUL) = YES
Percent max primary log space by transaction (MAX_LOG) = 0
Num. of active log files for 1 active UOW(NUM_LOG_SPAN) = 0
Group commit count (MINCOMMIT) = 1
Percent log file reclaimed before soft chckpt (SOFTMAX) = 300
Log retain for recovery enabled (LOGRETAIN) = OFF
User exit for logging enabled (USEREXIT) = OFF
HADR database role = STANDARD
HADR local host name (HADR_LOCAL_HOST) =
HADR local service name (HADR_LOCAL_SVC) =
HADR remote host name (HADR_REMOTE_HOST) =
HADR remote service name (HADR_REMOTE_SVC) =
HADR instance name of remote server (HADR_REMOTE_INST) =
HADR timeout value (HADR_TIMEOUT) = 120
HADR log write synchronization mode (HADR_SYNCMODE) = NEARSYNC
First log archive method (LOGARCHMETH1) = DISK:H:\db2\SRD\log_dir\NODE0000\
Options for logarchmeth1 (LOGARCHOPT1) =
Second log archive method (LOGARCHMETH2) = OFF
Options for logarchmeth2 (LOGARCHOPT2) =
Failover log archive path (FAILARCHPATH) =
Number of log archive retries on error (NUMARCHRETRY) = 5
Log archive retry Delay (secs) (ARCHRETRYDELAY) = 20
Vendor options (VENDOROPT) =
Auto restart enabled (AUTORESTART) = ON
Index re-creation time and redo index build (INDEXREC) = SYSTEM (RESTART)
Log pages during index build (LOGINDEXBUILD) = OFF
Default number of loadrec sessions (DFT_LOADREC_SES) = 1
Number of database backups to retain (NUM_DB_BACKUPS) = 12
Recovery history retention (days) (REC_HIS_RETENTN) = 60
TSM management class (TSM_MGMTCLASS) =
TSM node name (TSM_NODENAME) =
TSM owner (TSM_OWNER) =
TSM password (TSM_PASSWORD) =
Automatic maintenance (AUTO_MAINT) = ON
Automatic database backup (AUTO_DB_BACKUP) = OFF
Automatic table maintenance (AUTO_TBL_MAINT) = ON
Automatic runstats (AUTO_RUNSTATS) = ON
Automatic statistics profiling (AUTO_STATS_PROF) = OFF
Automatic profile updates (AUTO_PROF_UPD) = OFF
Automatic reorganization (AUTO_REORG) = OFF
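As a cross-check on those values: LOGFILSIZ is counted in 4 KB pages, so the active-log ceiling implied by the configuration above is LOGFILSIZ x 4 KB x (LOGPRIMARY + LOGSECOND), roughly 3.75 GB. Archived logs accumulating in log_dir under LOGARCHMETH1=DISK come on top of that, and are what keeps filling the filesystem until they are pruned once safely backed up. A small sketch of the arithmetic:

```shell
# log_space_mb LOGFILSIZ LOGPRIMARY LOGSECOND -- active log space in
# MB implied by the DB cfg (LOGFILSIZ is counted in 4 KB pages).
log_space_mb() {
    logfilsiz="$1"; primary="$2"; secondary="$3"
    echo $(( logfilsiz * 4 * (primary + secondary) / 1024 ))
}

log_space_mb 16380 20 40   # the values from the cfg dump: prints 3839
```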
And I have cut and pasted the old logs to another location.
Regards,
Sreekanth