Need to shrink huge log file
Hi,
Have a database which is published using transactional replication. The replication was broken yesterday due to a restore. In order to try and fix this I issued the "EXEC sp_replrestart" command and left it running; unfortunately it has now filled up the disk the log sits on, creating a 250 GB log file.
Getting this error:
Msg 9002, Level 17, State 6, Procedure sp_replincrementlsn_internal, Line 1
The transaction log for database 'RKHIS_Live' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
I really need to free up space on this disk and shrink the log, but I can't back up the database.
I haven't tried shrinking the files yet because I can't take a full backup.
Any ideas?
I don't care about replication at this point and will happily ditch it if it gets me out of this situation.
Thanks
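For anyone hitting the same Msg 9002, the error text itself points at the first diagnostic step; a minimal sketch (the database name is taken from the post above):

```sql
-- Why can't log space be reused? In this scenario the answer is likely REPLICATION.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'RKHIS_Live';
```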
I disabled replication at the subscriber, then at the publisher, and disabled all the agent jobs.
Then I shrank the database and log files down to 1 GB. Phew, the database is functioning just fine.
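The steps above can be sketched in T-SQL. This is a hedged outline, not the exact commands that were run; the logical log file name is an assumption and should be looked up first:

```sql
-- Remove all replication objects from the publication database
EXEC sp_removedbreplication N'RKHIS_Live';

-- Look up the logical name of the log file (run inside RKHIS_Live)
SELECT name, type_desc FROM sys.database_files WHERE type_desc = 'LOG';

-- Break the log chain so the shrink can reclaim space, then shrink to ~1 GB
ALTER DATABASE RKHIS_Live SET RECOVERY SIMPLE;
DBCC SHRINKFILE (N'RKHIS_Live_log', 1024);   -- logical name is an assumption
ALTER DATABASE RKHIS_Live SET RECOVERY FULL; -- take a full backup afterwards to restart the log chain
```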
I still need a long-term solution, though. The problem is that the published database (managed by a 3rd party) is backed up, worked on, and then restored as part of their software upgrade procedure, so there is bound to be a discrepancy between the transactions in the log.
Learned a valuable lesson regarding sp_replrestart today, though :-(
Similar Messages
-
Reader 10.1 update fails, creates huge log files
Last night I saw the little icon in the system tray saying an update to Adobe Reader was ready to be installed.
I clicked it to allow the install.
Things seemed to go OK (on my Windows XP Pro system), although very slowly, and it finally got to copying files.
It seemed to still be doing something and was showing that it was copying file icudt40.dll. It still displayed the same thing ten minutes later.
I went to bed, and this morning it still showed that it was copying icudt40.dll.
There is no "Cancel" button, so this morning I had to stop the install through Task Manager.
Now, in my "Local Settings\TEMP" directory, I have a file called AdobeARM.log that is 2,350,686 KB in size and a file MSI38934.LOG that is 4,194,304 KB in size.
They are so big I can't even look at them to see what's in them. (Too big for Notepad. When I tried to open the smaller log file, AdobeARM.log, with Wordpad it was taking forever and showing only 1% loaded, so after five minutes, I terminated the Wordpad process so I could actually do something useful with my computer.)
You would think the installer would be smart enough to stop at some point when the log files begin to get enormous.
There doesn't seem to be much point to creating log files that are too big to be read.
The update did manage to remove the Adobe Reader X that was working on my machine, so now I can no longer read PDF files.
Maybe I should go back to Adobe Reader 9.
Reader X never worked very well.
Sometimes the menu bar showed up, sometimes it didn't.
PDF files at the physics e-print archive always loaded with page 2 displayed first. And if you forgot to disable the look-ahead capability, you could get banned from the e-print archive site altogether.
And I liked the user interface for the search function a lot better in version 9 anyway. Who wants to have to pop up a little box for your search phrase when you want to search? Searching is about the most important and routine activity one does, other than going from page to page and setting the zoom.

Hi Ankit,
Thank you for your e-mail.
Yesterday afternoon I deleted the > 2 GB AdobeARM.log file and the > 4.194 GB
MSI38934.LOG file.
So I can't upload them. I expect I would have had a hard time doing so
anyway.
It would be nice if the install program checked the size of the log files
before writing to them and gave up if the size was, say, three times larger
than some maximum expected size.
The install program must have some section that permits infinite retries or
some other way of getting into an endless loop. So another solution would be
to count the number of retries and terminate after some reasonable number of
attempts.
Something had clearly gone wrong and there was no way to stop it, except by
going into the Task Manager and terminating the process.
If the install program can't terminate when the log files get too big, or if
it can't get out of a loop some other way, there might at least be a "Cancel"
button so the poor user has an obvious way of stopping the process.
As it was, the install program kept on writing to the log files all night
long.
Immediately after deleting the two huge log files, I downloaded and installed
Adobe Reader 10.1 manually.
I was going to turn off Norton 360 during the install and expected there
would be some user input requested between the download and the install, but
there wasn't.
The window showed that the process was going automatically from download to
install.
When I noticed that it was installing, I did temporarily disable Norton 360
while the install continued.
The manual install went OK.
I don't know if temporarily disabling Norton 360 was what made the difference
or not.
I was happy to see that Reader 10.1 had kept my previous preference settings.
By the way, one of the default settings in "Web Browser Options" can be a
problem.
I think it is the "Allow speculative downloading in the background" setting.
When I upgraded from Reader 9 to Reader 10.0.x in April, I ran into a
problem.
I routinely read the physics e-prints at arXiv.org (maintained by the Cornell
University Library) and I got banned from the site because "speculative
downloading in the background" was on.
[One gets an "Access denied" HTTP response after being banned.]
I think the default value for "speculative downloading" should be unchecked
and users should be warned that one can lose the ability to access some sites
by turning it on.
I had to figure out why I was automatically banned from arXiv.org, change my
preference setting in Adobe Reader X, go to another machine and find out who
to contact at arXiv.org [I couldn't find out from my machine, since I was
banned], and then exchange e-mails with the site administrator to regain
access to the physics e-print archive.
The arXiv.org site has followed the standard for robot exclusion since 1994
(http://arxiv.org/help/robots), and I certainly didn't intend to violate the
rule against "rapid-fire requests," so it would be nice if the default
settings for Adobe Reader didn't result in an unintentional violation.
Richard Thomas -
SAP Program to Shrink the log file
Hi,
I have NW2004s running on an Oracle 10g database.
Please let me know the code/SAP program to shrink the log file created in the database.

I hope these threads may help you:
http://www.databasejournal.com/scripts/article.php/1446961
http://www.databasejournal.com/scripts/article.php/1492301 -
Shrink Transaction log file - - - SAP BPC NW
HI friends,
We want to shrink the transaction log files in SAP BPC NW 7.0. How can we achieve this?
Please can you throw some light on this.
Why did we think of shrinking the file?
We are getting an "out of memory" error whenever we do any activity, so we thought of shrinking the file (this is not a production server, FYI).
An example of an activity where the out of memory issue appears:
SAP BPC Excel >>> eTools >>> Client Options >>> Refresh Dimension Members >>> this leads to a pop-up screen stating "out of memory".
So we thought of shrinking the file.
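Since BPC NW sits on SQL Server, a first step before shrinking anything is to see which files are actually consuming the disk; a hedged sketch:

```sql
-- List all database files across the instance, largest first (size is in 8 KB pages)
SELECT DB_NAME(database_id) AS database_name,
       name AS logical_name,
       type_desc,
       size * 8 / 1024 AS size_mb
FROM sys.master_files
ORDER BY size DESC;
```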
Please any suggestions
Thank you and Kindest Regards
Srikaanth

Hi Poonam,
It is not only Excel that is throwing this kind of message (out of memory); the SAP note would be helpful if we had the error in Excel alone, but we are facing this error everywhere.
We have also found that our hard disk has run out of space.
We want to empty the log files and make some space; our hard disk now has only a few megabytes free.
We want to clear all our test data, log files, and other stuff.
Please can you recommend some way?
Thank you and Kindest regards
Srikaanth -
Need to copy archive log file "arch1_601.dbf" when restoring the database?
Hi all,
I have the following case:
1) Full hot backup today (Feb-12-2009) in directory /u03/db/backup
2) But ongoing archive logs are generated in directory /u02/oracle/uat/uatdb/9.2.0/dbs
3) I need to restore the database because some data files are missing.
4) I will use today's full backup to restore.
5) Do I need to copy all archive log files from /u02/oracle/uat/uatdb/9.2.0/dbs to /u03/db/backup, because "/u03/db/backup/RMAN" is the restore directory?
FAN

Here is the backup script:
Run
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u02/db/backup/RMAN/%F.bck';
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
allocate channel ch1 type disk format '/u02/db/backup/RMAN/backup_%d_%t_%s_%p_%U.bck';
backup incremental level 1 cumulative database plus archivelog delete all input;
backup current controlfile;
backup spfile;
release channel ch1;
allocate channel for maintenance type disk;
delete noprompt obsolete;
delete noprompt archivelog all backed up 2 times to disk;
-
Hi,
I have a report server. When I start the report server, the log file located at
$ORACLE_HOME/opmn/logs/OC4J~Webfile2~default~island~1 grows to more than 2 GB in 24 hours.
Please tell me what may be the root cause of this, and what a possible solution would be.
It's urgent.

Hi Jaap,
First of all, thanks.
How do I set debugging off on the container?
Some of the repeated messages in the log are as follows:
95178969 [AJPRequestHandler-ApplicationServerThread-90] ERROR com.allianz.weo.struts.StoredProcAction - SQLException while calling proc CUSTOMER.AZBJ_WEO_SECURITY.LOAD_MODULE_MENUS: ORA-01013: user requested cancel of current operation
ORA-06512: at "CUSTOMER.AZBJ_WEO_SECURITY", line 107
ORA-06512: at line 1
95178969 [AJPRequestHandler-ApplicationServerThread-90] ERROR com.allianz.weo.struts.StoredProcAction - SQLException while calling proc CUSTOMER.AZBJ_WEO_SECURITY.LOAD_MODULE_MENUS: ORA-01013: user requested cancel of current operation
ORA-06512: at "CUSTOMER.AZBJ_WEO_SECURITY", line 107
ORA-06512: at line 1
95178969 [AJPRequestHandler-ApplicationServerThread-90] ERROR com.allianz.weo.struts.StoredProcAction - SQLException while calling proc CUSTOMER.AZBJ_WEO_SECURITY.LOAD_MODULE_MENUS: ORA-01013: user requested cancel of current operation
ORA-06512: at "CUSTOMER.AZBJ_WEO_SECURITY", line 107
ORA-06512: at line 1
95178969 [AJPRequestHandler-ApplicationServerThread-90] ERROR com.allianz.weo.struts.StoredProcAction - SQLException while calling proc CUSTOMER.AZBJ_WEO_SECURITY.LOAD_MODULE_MENUS: ORA-01013: user requested cancel of current operation
ORA-06512: at "CUSTOMER.AZBJ_WEO_SECURITY", line 107
ORA-06512: at line 1
07/07/12 12:18:32 DriverManagerConnectionPoolConnection not closed, check your code!
07/07/12 12:18:32 (Use -Djdbc.connection.debug=true to find out where the leaked connection was created)
Regards,
Sushama. -
Need information from the .log file
Hi,
This is a sample log file:
[Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
ODBC Database termination - started.
[Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
ODBC Database termination - completed.
[Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
ODBC Database termination - started.
[Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
ODBC Database termination - completed.
[Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
ODBC Database termination - started.
[Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
ODBC Database termination - completed.
[Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
ODBC Database termination - started.
[Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
ODBC Database termination - completed.
[Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
Terminating Analytic Services API.
[Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Informational/0/Build-EIS93130B006
Terminated Analytic Services API.
[Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/1051037/Build-EIS93130B006
Terminating Analytic Services API.
[Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Informational/1051037/Build-EIS93130B006
Terminated Analytic Services API.
[Tue May 24 10:27:53 2011] /IS/Coordinator/0/Informational/0/Build-EIS93130B006
Executed client request 'Logout' in 0 seconds
[Tue May 24 10:27:53 2011] /IS/Coordinator/0/Trace/0/Build-EIS93130B006
Coordinator Service is waiting...
[Tue May 24 10:27:53 2011] /IS/Listener/0/Trace/1051001/Build-EIS93130B006
Service is waiting for client request...
[Tue May 24 10:27:53 2011] /IS/Coordinator/0/Trace/0/Build-EIS93130B006
Service is busy processing 'The service is now available.'.
[Tue May 24 10:27:53 2011] /IS/Coordinator/0/Trace/0/Build-EIS93130B006
Coordinator Service is waiting...
[Tue May 24 10:27:53 2011] /IS/Listener/0/Trace/1051001/Build-EIS93130B006
Received client request Disconnect
[Tue May 24 10:27:53 2011] /IS/Coordinator/0/Trace/0/Build-EIS93130B006
Service is busy processing 'The service is busy.'.
[Tue May 24 10:27:53 2011] /IS/Coordinator/0/Trace/0/Build-EIS93130B006
Coordinator Service is waiting...
[Tue May 24 10:27:53 2011] /IS/Coordinator/0/Trace/0/Build-EIS93130B006
Service is busy processing 'The service is now available.'.
[Tue May 24 10:27:53 2011] /IS/Coordinator/0/Trace/0/Build-EIS93130B006
Coordinator Service is waiting...
[Tue May 24 10:27:53 2011] /IS/Listener/0/Trace/1051001/Build-EIS93130B006
Waiting for Essnet client connections.
What is this number 'EIS93130B006' present in this line, and how do I get the number for a particular metamodel and metaoutline?
Regards
Shakti

Hi Shakti,
Tagging on to what Glenn is saying (FWIW I agree with his theory), what Glenn means is that the number EIS93130B006 translates to:
EIS v9.3.1.3 (the "EIS9313" part), Build 6 (the "B006" part)
These are debug log statements coming out of EIS. "Trace" is generally the most verbose and chatty level of logging; everything any developer has written to be logged will show up in the log if the log level is at TRACE. You might want to check the logging level of your EIS and raise it to something akin to "Warning" to eliminate these log entries.
Regards,
Robb Salzmann -
After a day of using Leopard, I noticed in Console that one log message kept returning every 5 seconds:
Oct 27 19:57:14 stephen-straubs-macbook com.apple.nibindd[633]: dyld: Symbol not found: xdr_nibind_cloneargs
Oct 27 19:57:14 stephen-straubs-macbook com.apple.nibindd[633]: Referenced from: /usr/sbin/nibindd
Oct 27 19:57:14 stephen-straubs-macbook com.apple.nibindd[633]: Expected in: /usr/lib/libSystem.B.dylib
Oct 27 19:57:15 stephen-straubs-macbook ReportCrash[634]: Formulating crash report for process nibindd[633]
Oct 27 19:57:15 stephen-straubs-macbook com.apple.launchd[1] (com.apple.nibindd[633]): Exited abnormally: Trace/BPT trap
Oct 27 19:57:15 stephen-straubs-macbook com.apple.launchd[1] (com.apple.nibindd): Throttling respawn: Will start in 10 seconds
Oct 27 19:57:15 stephen-straubs-macbook ReportCrash[634]: Saved crashreport to /Library/Logs/CrashReporter/nibindd2007-10-27-195714stephen-straubs-macbook.crash using uid: 0 gid: 0, euid: 0 egid: 0
and the log file is starting to take up space. All I can do is constantly run a periodic script, but that is temporary. Is there any solution to this?

Try running this command:
*sudo launchctl -w com.apple.nibindd*
If that doesn't work, please run this:
sudo find / -name com.apple.nibindd*
The output file should be a file called com.apple.nibindd.plist
Just reenter the command above like so:
*sudo launchctl -w /<path to>/com.apple.nibindd.plist*
I suspect you may have a file left over from the Tiger Netinfo Manager.
Edit: Corrected command
Message was edited by: willtisdale -
Need help shrinking iPhone video file sizes.
Thanks in advance, folks!
I just recently bought an iPhone 4 and I love it. The only problem is, despite the phone recording in HD, the file sizes are huge. Even using the SD VGA camera on the front, two and a half minutes equals 45 MB.
Is there anything I can do in iTunes, or the phone itself to change the quality and reduce the file size? If not, is there a program I can use to do this?
I have searched all over the internet and haven't found any info. I really hope you guys can help me out. Again, thanks!

Please change the file extension to the format in which the file was recorded, e.g. .mp4, .3gp, or .avi. First check which format your phone records video in, then apply the matching extension to that file. Reply if you have any questions.
-
Need info related to log files
Hi All,
If a person takes a backup of table metadata from SQL Developer, is that information stored in the logs?
To be specific, is there any logging mechanism at the Oracle server which records that table metadata has been accessed from a particular machine by a particular user?
If yes, where are such logs located?
Thanks in advance,
---Eden

The archive log records changes to the database for the purposes of recovery and is not human readable.
You may be able to achieve what you want (whatever it is) using the auditing features, but first you need to get your knowledge of Oracle up to a reasonable level. Read the documentation, starting with the Concepts guide. -
Log file size is huge and cannot shrink.
I have a live database that is published with merge replication and a couple of push subscriptions. I just noticed that the log file has grown to 500 GB. The database is in full recovery; we do weekly full backups and daily log backups. I cannot shrink the log file back down to normal proportions. It should only be about 5 GB. The file properties show an initial size equal to the current size; I cannot change that number, and I don't know why it is so big now. How do I go about shrinking the log file? The normal DBCC shrink and the SSMS GUI shrink are not doing anything and say there is 0 MB of free space!!

As per your first posting, log_reuse_wait_desc was LOG_BACKUP, and in the 2nd it was REPLICATION. I am confused.
If the log_reuse_wait_desc column shows LOG_BACKUP, then take a log backup of your database. If it is REPLICATION, and you are sure that your replications are in sync, then reset the status of the replicated transactions.
You can reset this by first turning the Log Reader Agent off (turn the whole SQL Server Agent off), and then running this query on the database for which you want to fix the replication issue:
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time= 0, @reset = 1
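If sp_repldone clears the backlog, a log backup followed by a shrink should then reclaim the space; a sketch in which the logical file name and backup path are hypothetical:

```sql
-- After the replicated-transaction status has been reset:
BACKUP LOG YourDb TO DISK = N'D:\Backup\YourDb_log.trn'; -- path is hypothetical
DBCC SHRINKFILE (N'YourDb_log', 5120);                   -- logical name is hypothetical; target ~5 GB
```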
vt
Please mark answered if I've answered your question and vote for it as helpful to help other user's find a solution quicker -
Shrink Log file in log shipping and change the database state from Standby to No recovery mode
Hello all,
I have configured SQL Server 2008 R2 log shipping for some databases and I have two issues:
1. Can I shrink the log file for these databases? If I change the primary database from full to simple recovery, shrink the log file, and then change it back to full recovery, log shipping will fail. I've seen some answers that mention the "No Truncate" option, but as far as I know that option does not affect the log file; it shrinks the data file only.
I also can't set up maintenance to reconfigure log shipping every time I want to shrink the log file, because the database size is huge and it takes time to restore at the DR site, so reconfiguration is not an option :(
2. How can I change the secondary database state from Standby to No Recovery mode? I tried to change it from the wizard and waited until the next transaction log backup restore, but the job failed with the error "the step failed". I need to do this to change the mdf and ldf file locations for the secondary databases.
can any one help?
Thanks in advance,
Faris ALMasri
Database Administrator
1. If you change the recovery model of a database in log shipping to simple and back to full, log shipping will break and logs won't be restored on the secondary server, because the log chain is broken. You can shrink the log file of the primary database, but why would you need to? What is the schedule of your log backups? Frequent log backups already keep the log file in check; why shrink it and put extra load on the system when the log file will ultimately grow again? And because instant file initialization does not apply to log files, regrowing takes time and slows performance.
You said you want to shrink because the database size is huge. Is it actually huge, or does it just have lots of free space? Don't worry about free space in the data files; it will eventually be used by SQL Server as more data comes in.
2. You are following the wrong method: changing the state to No Recovery would not even allow you to run SELECT queries, which you can run in Standby mode. Please refer to the link below to move the secondary data and log files:
http://www.mssqltips.com/sqlservertip/2836/steps-to-move-sql-server-log-shipping-secondary-database-files/
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
My TechNet Wiki Articles -
Shrink Log File on High Availability
Dear support
Good day to you,
I am using SQL Server 2012 with AlwaysOn High Availability (Server_SQL1 = primary, Server_SQL2 = secondary). When I try to shrink the log files, it tells me I must alter the database to the simple recovery model first, but I can't because of AlwaysOn!
That means:
remove the DBs from AlwaysOn (Server_SQL1)
shrink the files
remove the DBs from Server_SQL2
add the DBs back to AlwaysOn
Is there any other solution for shrinking the logs without adding/removing the DBs from AlwaysOn?
Regards,

The link that Uri posted is correct, but let me expand on it for anyone else who runs across this issue:
You don't actually need to be in the simple recovery model to shrink a file or the log. The reason some people make the switch is that the simple recovery model lets the database automatically clear the log. This *generally* puts the in-use VLF at the very beginning of the log. Since shrinking a log file works differently from shrinking data files (the log can only shrink from the end of the file back to the first in-use VLF), this allows a fast shrink-and-grow operation to fix up the log.
In the full recovery model it is still possible; the difference is that you'll need to check which VLF the database is currently using, and you may have to manually cause the log to wrap around (via log backups, etc.) to get a good shrink, so that you can then regrow the file to a proper size.
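The active-VLF position described above can be checked directly; a hedged sketch in which the database and file names are hypothetical:

```sql
-- Which VLF is active? Status = 2 marks in-use VLFs
-- (DBCC LOGINFO is undocumented but long-standing; SQL Server 2016 SP2+ also has sys.dm_db_log_info)
DBCC LOGINFO;

-- Take a log backup so the active VLF wraps toward the start of the file, then shrink
BACKUP LOG YourAgDb TO DISK = N'D:\Backup\YourAgDb_log.trn'; -- name and path are hypothetical
DBCC SHRINKFILE (N'YourAgDb_log', 4096);                     -- target size in MB
```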
Sean Gallardy | Blog |
Twitter -
Impact on DB to shrink a 130GB log file?
Here is my setup:
OS: Windows Server 2008 R2 Enterprise
6 x Cores
8GB RAM
SQL: Windows SQL 2008 setup in a 2 node cluster
I noticed that I have been having to add more disk space to one of the DBs running on this cluster. After looking into it further, I found a 130 GB log file for this database. I have never attempted to shrink a log file this size and do not know what kind of impact it may have on the running databases. Can anyone comment on what I might expect? Do I need to do this during a maintenance window?
Thank you
Rick

Rick,
this is a very comprehensive subject and you should brace yourself for a lot of complex information that is about to be posted here, but I'm going to try to put this as simply as possible.
Since you are asking whether you can shrink your log file, I'm going to assume you do not have a backup policy that involves log backups, but that you probably have your database recovery model set to full. The transaction log is needed to ensure transactional consistency and may be backed up under the full/bulk-logged recovery models, enabling you to restore it together with full/differential backups.
Yes, you can shrink your log files, and for this to be effective I recommend you set your database to the simple recovery model. There are a few main things you should be aware of:
1) The database engine allocates log file space as needed to run transactions. If it runs out of pre-allocated space, it will autogrow by a previously specified amount if that option is enabled; if the option is disabled, or there is no more free disk space available, transactions will fail. It is recommended that you set the autogrow increment to a reasonable value so your log file doesn't get too fragmented (new VLFs are created every time the file grows). There is also a performance impact, generally overrated by some analysts in my opinion but very real, from growing the file too often.
2) Under the simple recovery model, parts of the log known as VLFs that have already been filled and are no longer supporting active transactions are reused, so your log file will generally remain very small. Under the full recovery model, these parts are not reused until you take a log backup, so the log file will keep growing (this is probably your case). You will need to switch your database to the simple recovery model, or take a log backup, in order to successfully shrink the log file.
3) However, if your database is already in the simple recovery model and the log grew to 130 GB, it did so because the transactions really needed the space, so I would recommend against shrinking the log file unless you know the transactions were part of an unusual application process or appeared after an index maintenance operation.
4) Your backup/restore operations will run faster with a smaller log file.
5) You'll need less disk storage to restore your databases on secondary environments if you have smaller log files.
6) If the storage used for the log files is shared across multiple databases, instances, or other files, keeping your log files small ensures there is more free space to grow under abnormal circumstances without manual intervention, preventing service disruption.
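Following point 1 above, the autogrow increment can be set to a fixed, reasonable value rather than a percentage; a sketch with hypothetical names:

```sql
-- A fixed 512 MB increment avoids ever-larger percentage growths and keeps VLF counts sane
ALTER DATABASE YourDb
MODIFY FILE (NAME = N'YourDb_log', FILEGROWTH = 512MB);
```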
Just because there are clouds in the sky doesn't mean it isn't blue. But someone will come and argue that, in addition to clouds, there are birds, airplanes, pollution, sunsets, daltonism and nuclear bombs, all adding different colours to the sky, and that this is an undocumented behavior and should not be relied upon. -
How to know the history of shrinking log files in mssql
hello,
In my SAP system, someone shrank the log file from 100 GB to 5 GB. How can we check when this was done?
Regards,
ARNS.

Hi,
Did you check the log file in the SAP directory? There will be an entry showing who changed the size, and when.
Also:
Go to the screen where you usually change the log file size. In that particular field press F1 and go to the technical settings screen. Get the program name, table name, and field name.
Now, using SE11, try to open the table and check whether a changed-by value exists for that table.
Also open the program and debug at the change-log-file process block; you can see in which table it records the changes.
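On the SQL Server side, a hedged alternative sketch: the default trace (if it is still enabled) records DBCC and file-shrink events, so a query along these lines may show who issued the shrink and when:

```sql
-- Read the default trace and filter for shrink-related events
SELECT te.name AS event_name, t.DatabaseName, t.LoginName, t.StartTime, t.TextData
FROM sys.fn_trace_gettable(
       (SELECT path FROM sys.traces WHERE is_default = 1), DEFAULT) AS t
JOIN sys.trace_events AS te
  ON t.EventClass = te.trace_event_id
WHERE te.name IN ('Audit DBCC Event', 'Data File Auto Shrink', 'Log File Auto Shrink')
ORDER BY t.StartTime DESC;
```

Note the default trace rolls over after a few files, so very old shrink events may no longer be available.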
One point of caution in this case:
The size of the application server's system log is determined by the following SAP profile parameters. Once the current system log reaches the maximum file size, it gets moved to the old_file and a new system log file gets created. The number of past days' messages in the system log depends on the amount/activity of system log messages and the max file size. Once messages get rolled off the current and old files, they are no longer retrievable.
rslg/local/file /usr/sap/<SID>/D*/log/SLOG<SYSNO>
rslg/local/old_file /usr/sap/<SID>/D*/log/SLOGO<SYSNO>
rslg/max_diskspace/local 1000000
rslg/central/file /usr/sap/<SID>/SYS/global/SLOGJ
rslg/central/old_file /usr/sap/<SID>/SYS/global/SLOGJO
rslg/max_diskspace/central 4000000