Advice on SQL Logs and Backups
Hi All,
I've been trying to understand SQL backups and I'm getting a bit stuck in one area - the log files.
I find they're getting quite big and as such filling up my drives. I'm guessing that the best way to handle this is to truncate them every so often - as the data is in the live DB I'm assuming that the log files should be small.
Q1 - I do daily full backups on my DB's via a maintenance plan so is it safe to say that the log files can be truncated daily?
Q2 - How do I go about truncating the logs? I tried a backup of them but I'm not sure what to do next.
Thanks for any help.
Tom
Shrinking the log can cause fragmentation and performance issues. Truncating the log is what happens when you take a transaction log backup.
Prashanth,
Shrinking the log file does not cause fragmentation (shrinking a data file does), but you are correct that shrinking the log file should not be an everyday practice. After a shrink, when the log file tries to grow it has to ask the OS to allocate it space, which, if done frequently (on a slower disk), can cause performance issues.
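If a one-off shrink is genuinely needed after the log has been backed up, it can be sketched like this (the database and logical file names are placeholders; look up the real logical name first):

```sql
USE [MyDatabase];

-- Find the logical name of the log file (names here are hypothetical)
SELECT name, type_desc, size FROM sys.database_files;

-- One-off shrink of the log file to roughly 1 GB (target size is in MB);
-- avoid scheduling this, for the growth-related reasons described above
DBCC SHRINKFILE (N'MyDatabase_log', 1024);
```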
Tom,
>>I do daily full backups on my DB's via a maintenance plan so is it safe to say that the log files can be truncated daily?
A: No. Only a transaction log backup truncates the log file (or marks it reusable), so you have to take transaction log backups frequently (I hope your DB is in full recovery). If your DB is in simple recovery, truncation happens automatically and SQL Server takes care of it after a checkpoint or when the log reaches 70% of its size.
>>Q2 - How do I go about truncating the logs? I tried a backup of them but I'm not sure what to do next.
Again, the answer is simple: take transaction log backups frequently, or according to your RPO and RTO.
PS: Sometimes, when there is a long-running transaction (like a huge delete operation or an index rebuild on a huge database), the log might grow even with frequent transaction log backups. This is by design: unless the transaction finishes or commits, the log cannot be truncated.
So if you face this, look for the open transaction and wait for it to commit.
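As a rough T-SQL sketch of the advice above (the database name and backup path are placeholders), a log backup plus a check of why the log cannot be truncated:

```sql
-- Take a transaction log backup; this truncates (marks reusable) the
-- inactive part of the log. Requires FULL or BULK_LOGGED recovery.
BACKUP LOG [MyDatabase] TO DISK = N'X:\Backups\MyDatabase.trn';

-- If the log keeps growing anyway, check what is holding truncation up
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';

-- ACTIVE_TRANSACTION means an open transaction must finish first;
-- DBCC OPENTRAN shows the oldest active transaction
DBCC OPENTRAN (N'MyDatabase');
```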
Hope this helps
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers
Similar Messages
-
Blog Post: Logger, A PL/SQL Logging and Debugging Utility
[Logger, A PL/SQL Logging and Debugging Utility|http://tylermuth.wordpress.com/2009/11/03/logger-a-plsql-logging-and-debugging-utility/]
Tyler Muth
http://tylermuth.wordpress.com
[Applied Oracle Security: Developing Secure Database and Middleware Environments|http://www.amazon.com/gp/product/0071613706?ie=UTF8&tag=tylsblo-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=0071613706]
Hello,
My currently limited knowledge of APEX makes me think that this should be done using JavaScript, called from the PL/SQL area. Assuming that you know a little about JavaScript, and that you are using an HTML form for input, you can insert some JavaScript to do the encoding. Here is some information on how to call JavaScript from PL/SQL:
http://apexwonder.blogspot.com/2007/10/apex-plsql-html-javascript-part-2.html
Also, check this site for some information on javascript encoding:
http://www.yuki-onna.co.uk/html/encode.html
Cheers
Jason -
SMS_NOTIFICATION_SERVER process Active Transaction preventing SQL log file backup
Hello,
I have been working on adding a few thousand machines into our SCCM 2012 R2 environment. Recently after attaching several of these systems there was a spike in activity in the transaction log due to the communication and inventory of these new
machines. The log file would fill up quickly but the log file backup would not function and allow the reuse of the log file. Upon investigation by my DB Admin we noticed that the SMS_NOTIFICATION_SERVER process was holding open an Active Transaction
that would last 1 hour and then restart at the end of the hour. This process was essentially preventing the backup of the log file. In a test, I briefly turned off the SMS_NOTIFICATION_SERVER process and we noticed the transaction log file functioning
correctly. I have included a screen shot of the process in the SQL Activity Monitor. Has anyone experienced this issue and resolved it? Is there any way to reduce the 1 hour time frame, or change the behaviour so that the process releases the
log file for backup if the log is getting full?
Regards,
Dave
We had it in Simple only briefly yesterday when working on the issue. It is in Full recovery mode.
-
How to Verify Db logs and backups
1) How to check that Oracle Database logs are being archived or cleaned up on a regular basis in Solaris
2) How to Verify backups are being performed in solaris
844081 wrote:
EdStevens wrote:
844081 wrote:
1) How to check that Oracle Database logs are being archived or cleaned up on a regular basis in Solaris
2) How to Verify backups are being performed in solaris
If you are talking about Oracle backups and logs, it is OS independent. Look at the various options of the LIST command in RMAN. In addition, have your RMAN backup jobs write a log that you can examine.
Can you be clear about what you are saying? Just let me know about the steps to be followed... :)
Step 1 - point your browser to tahiti.oracle.com. This is the portal to the complete Oracle doc set
Step 2 - drill down to your product and version. Here you will find the complete doc set for said product/version
Step 3 - locate the backup and recovery reference
Step 4 - Find the LIST command
Step 5 - read what each option of LIST can show you. -
Content of SAP SQL Database and backup
1. The SAP database / data files are stored in the folder <SID>DATA with the extension .mdf. What do they store? Do they contain only the structure of the database, or also the daily entries of SAP data?
2. Will it be sufficient to back up the SAP SQL database with the database ONLINE for recovery purposes in case of a server crash?
Alvin Teo wrote:
Markus,
Thank you for your response.
I would like to clarify further: other than the database, which is required to be backed up, are there any other SAP-related system files that need to be included in the backup occasionally, such as during daily or weekly backups?
Yes - the instance directory and the Windows OS including the registry
Apart from this, do you happen to know where can I obtain the documentation which can assist me to do hardware sizing for SAP systems?
Usually it's done the following way:
- the customer creates a project under http://service.sap.com/quicksizer
- the hardware partner takes this project and offers a machine that fits
Markus -
We are getting multiple 8623 Errors in SQL Log while running Vendor's software.
How can you catch which Query causes the error?
I tried to catch it using SQL Profiler Trace but it doesn't show which Query/Sp is the one causing an error.
I also tried to use Extended Event session to catch it, but it doesn't create any output either.
Error:
The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that
reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.
Extended Event Session that I used;
CREATE EVENT SESSION
overly_complex_queries
ON SERVER
ADD EVENT sqlserver.error_reported
ACTION (sqlserver.sql_text, sqlserver.tsql_stack, sqlserver.database_id, sqlserver.username)
WHERE ([severity] = 16
AND [error_number] = 8623)
ADD TARGET package0.asynchronous_file_target
(SET filename = 'E:\SQLServer2012\MSSQL11.MSSQLSERVER\MSSQL\Log\XE\overly_complex_queries.xel' ,
metadatafile = 'E:\SQLServer2012\MSSQL11.MSSQLSERVER\MSSQL\Log\XE\overly_complex_queries.xem',
max_file_size = 10,
max_rollover_files = 5)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS)
GO
-- Start the session
ALTER EVENT SESSION overly_complex_queries
ON SERVER STATE = START
GO
It creates only .xel file, but not .xem
Any help/advice is greatly appreciated
Hi VK_DBA,
According to your error message, the query statement fails with error 8623. As in the other post, you can use trace flags 4102 & 4118 to overcome this error. Another way is to look for queries with very long IN lists, a large number of
UNIONs, or a large number of nested sub-queries. These are the most common causes of this particular error message.
The error 8623 occurs when attempting to select records through a query with a large number of entries in the "IN" clause (> 10,000). To avoid this error, I suggest that you apply the latest Cumulative Update for SQL Server 2012 Service
Pack 1, then simplify the query. You may try a divide-and-conquer approach to get part of the query working (as a temp table) and then add extra joins / conditions. Or you could try to run the query using the hint OPTION (FORCE ORDER), OPTION (HASH JOIN), or OPTION (MERGE JOIN) with a plan guide.
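As a rough illustration of the hint-based workaround mentioned above (the tables and columns are made up), a plan-shaping hint looks like this:

```sql
-- Hypothetical query: FORCE ORDER keeps the written join order and
-- HASH JOIN restricts the optimizer to hash joins, both of which can
-- shrink the plan search space enough for a plan to be produced
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
OPTION (FORCE ORDER, HASH JOIN);
```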
For more information about error 8623, you can review the following article.
http://blogs.technet.com/b/mdegre/archive/2012/03/13/8623-the-query-processor-ran-out-of-internal-resources-and-could-not-produce-a-query-plan.aspx
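On a related note, the events the session does capture can be read back from the .xel file with sys.fn_xe_file_target_read_file; the metadata-file arguments may be NULL on SQL Server 2012, which is consistent with no .xem file being created:

```sql
-- Read captured events back from the file target
-- (the wildcard picks up rollover files as well)
SELECT CAST(event_data AS xml) AS event_xml
FROM sys.fn_xe_file_target_read_file(
    'E:\SQLServer2012\MSSQL11.MSSQLSERVER\MSSQL\Log\XE\overly_complex_queries*.xel',
    NULL, NULL, NULL);
```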
Regards,
Sofiya Li
Sofiya Li
TechNet Community Support -
SQL LOG Backup failed in one Cluster Node
I have a 2-node SQL failover cluster, NODE01 and NODE02, and have configured a SQL log backup job via SQL log shipping.
When the SQL service is mounted on node 02, the backup job works without any issues. Once it moves to node 01, it fails with the issue below:
Executed as user: <domain>\administrator. The process could not be created for step 1 of job 0xAC90A0F3623AE44285089E9EF53B12C7 (reason: The system cannot find the file specified). The step failed.
Could anyone help with a fix for this?
Thanx
Does SQL Server Agent on both nodes run under the same domain account?
Are you sure that path location is correct?
Best Regards,
Uri Dimant SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Consulting:
Large scale of database and data cleansing
Remote DBA Services:
Improves MS SQL Database Performance
SQL Server Integration Services:
Business Intelligence -
Advice on SQL Backup POC tests
Hello,
We are going to be performing a POC for backup software. I don't know much about restoring SQL2008 and SQL2010 so I would like to know what tests SQL pros would want to see succeed before approving a solution. One of our DBAs will be participating later in
the process.
Thanks,
Robert
Robert,
Just as an FYI - there is no such thing as "SQL2010".
Most backup software uses an agent and runs through VSS, which is great if you want to back up Windows, but isn't so great when it comes to SQL Server.
Things to watch out for:
1. The need of point in time recovery: While VSS does support this for SQL Server, it will not truncate the logs which means some other process will need to be run if the database is in full or bulk logged recovery model.
2. Use of log shipping in the environment: Since part of log shipping is to take log backups and "ship" them to other servers, you're essentially making this null and void. This is because SQL Server takes the log backups (which your software will now take
over doing) and puts them somewhere (configurable). Since the software will be taking over, it'll either need the ability to do the file movement needed for log shipping or you'll pretty much have to abandon the use of it. Also, see point #1, as log
shipping requires the full or bulk_logged recovery model.
3. The use of Availability Groups in the environment: Akin to #2, backups can be taken on databases in an availability group. VSS does not take this into account, as it doesn't need to; this is a SQL Server level item, not an OS level item. You'll somehow
need to build logic into the agent to first check and see if the database is part of an availability group, then check to see where the preferred backup place is, and then, if both are good, take the backup. It also won't truncate the log as per #1, since availability
groups need the full recovery model. (See a pattern?)
4. Differential: AFAIK since this is at a SQL Server level, it's not possible with VSS or agents unless the agent issues T-SQL Backup commands... at which point why run an agent or buy backup software?
5. File, Filegroup, ReadOnly filegroups, etc: See #4 for the same reason.
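The availability-group check described in point 3 can be sketched in T-SQL (the database name and backup path are placeholders):

```sql
-- Take the log backup only if this replica is the preferred backup
-- replica for the database, per the AG's backup preference setting
IF sys.fn_hadr_backup_is_preferred_replica(N'MyDatabase') = 1
BEGIN
    BACKUP LOG [MyDatabase] TO DISK = N'X:\Backups\MyDatabase.trn';
END
```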
So that's just a few things to think about for backups, now for restores:
1. Does the software support restoring to other servers with dissimilar drive layouts?
2. Does the software support restoring databases to a point in time that is not at a restore point? For example, at 2:53:02.666 AM or at a marked transaction (if those are used)?
3. Does the software support truncating logs after a log backup is complete?
4. Does the software support native backups with copy_only to not break log chains when needing to send to a vendor or otherwise take an out of band backup?
5. Does the software support piecemeal or partial restores (if enterprise edition and backup strategy supports it) for larger or more important databases?
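For restore question 4, an out-of-band backup that breaks neither the log chain nor the differential base can be sketched as follows (names and paths are placeholders):

```sql
-- COPY_ONLY full backup: does not reset the differential base
BACKUP DATABASE [MyDatabase]
    TO DISK = N'X:\Backups\MyDatabase_copyonly.bak'
    WITH COPY_ONLY;

-- COPY_ONLY log backup: does not truncate the log or break the log chain
BACKUP LOG [MyDatabase]
    TO DISK = N'X:\Backups\MyDatabase_copyonly.trn'
    WITH COPY_ONLY;
```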
Now for some admin (non-technical) items...
1. Who will be in charge of backups?
2. Who will be in charge of restores?
3. Who and how will people get/be notified if a backup fails?
4. Will your SLAs be blown if VSS has an issue (which generally requires a reboot to fix) and a database can't be backed up?
5. Is it possible to get a native backup out of the system if another team, project, or company needs it?
It might seem simple at first (I mean, it's just a backup, right?), but there is definitely more going on depending on needs. If you're the kind of company that uses the simple recovery model for everything and is plain vanilla the rest of the way, you might not
have any issues and pretty much anything will work for you. If you're the kind of company that runs different configurations, has dev environments that need to be refreshed daily or weekly, or uses any advanced features of current T-SQL based backups, you'll probably
end up not using the agent for the backup software and just sweeping disk or something of the like.
Good luck.
Sean Gallardy | Blog |
Twitter -
Powershell / SQL Inventory and last backup date Skips some servers?
Good morning, experts... I have a text file with a list of servers. I am trying to automate a script that scans through them all and checks all databases to ensure everything is getting backed up. It works great (I copied the guts of the script from here,
I think, and modified it so I could "learn"). The only problem I have is that some of the servers report an error saying it cannot connect, and the script bypasses them. After the script is over, I can go back and query that single server and it responds correctly.
Has anyone else come across anything like that? I will include the script I have been using. If you had 100+ servers to support, with MULTIPLE instances on each, how would you do it?
<#####################################################################
# Get All instances and Date of last Full / Log Backup of each
######################################################################>
#region Variables
$TextFileLocation = "C:\serverlist.txt"
$intRow = 1
$BackgroundColor = 36
$FontColor = 25
#endregion
#Region Open Excel
$Excel = New-Object -ComObject Excel.Application
$Excel.visible = $True
$Excel = $Excel.Workbooks.Add()
$Sheet = $Excel.Worksheets.Item(1)
#endregion
#Go through text file one at a time
foreach ($instance in Get-Content $TextFileLocation) {
    $Sheet.Cells.Item($intRow,1) = "INSTANCE NAME:"
    $Sheet.Cells.Item($intRow,2) = $instance
    $Sheet.Cells.Item($intRow,1).Font.Bold = $True
    $Sheet.Cells.Item($intRow,2).Font.Bold = $True
    for ($column = 1; $column -le 2; $column++) {
        $Sheet.Cells.Item($intRow, $column).Interior.ColorIndex = 44
        $Sheet.Cells.Item($intRow, $column).Font.ColorIndex = $FontColor
    }
    #Increase Row count by 1
    $intRow++
    #Create sub-headers
    $Sheet.Cells.Item($intRow,1) = "Name"
    $Sheet.Cells.Item($intRow,2) = "LAST FULL BACKUP"
    $Sheet.Cells.Item($intRow,3) = "LAST LOG BACKUP"
    #Format the column headers
    for ($col = 1; $col -le 3; $col++) {
        $Sheet.Cells.Item($intRow,$col).Font.Bold = $True
        $Sheet.Cells.Item($intRow,$col).Interior.ColorIndex = $BackgroundColor
        $Sheet.Cells.Item($intRow,$col).Font.ColorIndex = $FontColor
    }
    #Finished with Headers, now move to the data
    $intRow++
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null
    # Create an SMO connection to the instance in servers.txt
    $s = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $instance
    $dbs = $s.Databases
    foreach ($db in $dbs) {
        if ($db.Name -ne "tempdb") {
            if ($db.LastBackupDate -eq "1/1/0001 12:00 AM") {
                $fullBackupDate = "Never Backed Up"
                $fgColor = "red"
            }
            else {
                #$fullBackupDate = "{0:g2}" -f $db.LastBackupDate
                $fullBackupDate = $db.LastBackupDate
            }
            $Sheet.Cells.Item($intRow, 1) = $db.Name
            $Sheet.Cells.Item($intRow, 2) = $fullBackupDate
            if ($db.RecoveryModel.ToString() -eq "SIMPLE") {
                $logBackupDate = "Simple"
            }
            else {
                #See date above; -eq is the equality operator
                if ($db.LastLogBackupDate -eq "1/1/0001 12:00 AM") {
                    $logBackupDate = "Never"
                }
                else {
                    #$logBackupDate = "{0:g2}" -f $db.LastLogBackupDate
                    $logBackupDate = $db.LastLogBackupDate
                }
            }
            $Sheet.Cells.Item($intRow, 3) = $logBackupDate
            $intRow++
        }
    }
    $intRow++
}
$Sheet.UsedRange.EntireColumn.AutoFit()
cls
Am I going about this the correct way, or is there a "Better" way to do this that I have not come across yet?
Thank you
Sorry JRV... I will put more of an example here.
I want to do an inventory of all of our instances and verify that they are being backed up correctly. I just wrote a script to test:
$TextFileLocation = "C:\servers.txt"
foreach ($instance in Get-Content $TextFileLocation) {
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null
    $s = New-Object ("Microsoft.SqlServer.Management.SMO.Server") $instance
    $dbs = $s.Databases
    foreach ($db in $dbs) {
        $location = $db.Name
        $logBackupDate = $db.LastLogBackupDate
        if ($db.RecoveryModel.ToString() -eq "SIMPLE") {
            Write-Host "$s - $location - SIMPLE Recovery"
        }
        else {
            Write-Host "$s - $location - $logBackupDate"
        }
    }
}
The output is as expected, and everything works fine... But on some servers I get the following error:
PS K:\MyScripts\powershell> . 'C:\Users\scj0025\AppData\Local\Temp\Untitled30.ps1'
foreach : The following exception was thrown when trying to enumerate the collection: "Failed to connect to server poly04406.".
At C:\Users\scj0025\AppData\Local\Temp\Untitled30.ps1:9 char:8
+ foreach <<<< ($db in $dbs)
+ CategoryInfo : NotSpecified: (:) [], ExtendedTypeSystemException
+ FullyQualifiedErrorId : ExceptionInGetEnumerator
If I go to another script and look at the same server, I get the results I expect:
Function Get-SQLInstance {
<#
.SYNOPSIS
Retrieves SQL server information from a local or remote servers.
.DESCRIPTION
Retrieves SQL server information from a local or remote servers. Pulls all
instances from a SQL server and detects if in a cluster or not.
.PARAMETER Computername
Local or remote systems to query for SQL information.
.NOTES
Name: Get-SQLInstance
Author: Boe Prox
DateCreated: 07 SEPT 2013
.EXAMPLE
Get-SQLInstance -Computername DC1
SQLInstance : MSSQLSERVER
Version : 10.0.1600.22
isCluster : False
Computername : DC1
FullName : DC1
isClusterNode : False
Edition : Enterprise Edition
ClusterName :
ClusterNodes : {}
Caption : SQL Server 2008
SQLInstance : MINASTIRITH
Version : 10.0.1600.22
isCluster : False
Computername : DC1
FullName : DC1\MINASTIRITH
isClusterNode : False
Edition : Enterprise Edition
ClusterName :
ClusterNodes : {}
Caption : SQL Server 2008
Description
Retrieves the SQL information from DC1
#>
[cmdletbinding()]
Param (
[parameter(ValueFromPipeline=$True,ValueFromPipelineByPropertyName=$True)]
[Alias('__Server','DNSHostName','IPAddress')]
#[string[]]$ComputerName = Get-Host "C:\serverlist.txt"
[string[]]$ComputerName = $env:COMPUTERNAME
)
Process {
ForEach ($Computer in $Computername) {
$Computer = $computer -replace '(.*?)\..+','$1'
Write-Verbose ("Checking {0}" -f $Computer)
Try {
$reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $Computer)
$baseKeys = "SOFTWARE\\Microsoft\\Microsoft SQL Server",
"SOFTWARE\\Wow6432Node\\Microsoft\\Microsoft SQL Server"
If ($reg.OpenSubKey($basekeys[0])) {
$regPath = $basekeys[0]
} ElseIf ($reg.OpenSubKey($basekeys[1])) {
$regPath = $basekeys[1]
} Else {
Continue
}
$regKey= $reg.OpenSubKey("$regPath")
If ($regKey.GetSubKeyNames() -contains "Instance Names") {
$regKey= $reg.OpenSubKey("$regpath\\Instance Names\\SQL" )
$instances = @($regkey.GetValueNames())
} ElseIf ($regKey.GetValueNames() -contains 'InstalledInstances') {
$isCluster = $False
$instances = $regKey.GetValue('InstalledInstances')
} Else {
Continue
}
If ($instances.count -gt 0) {
ForEach ($instance in $instances) {
$nodes = New-Object System.Collections.Arraylist
$clusterName = $Null
$isCluster = $False
$instanceValue = $regKey.GetValue($instance)
$instanceReg = $reg.OpenSubKey("$regpath\\$instanceValue")
If ($instanceReg.GetSubKeyNames() -contains "Cluster") {
$isCluster = $True
$instanceRegCluster = $instanceReg.OpenSubKey('Cluster')
$clusterName = $instanceRegCluster.GetValue('ClusterName')
$clusterReg = $reg.OpenSubKey("Cluster\\Nodes")
$clusterReg.GetSubKeyNames() | ForEach {
$null = $nodes.Add($clusterReg.OpenSubKey($_).GetValue('NodeName'))
}
}
$instanceRegSetup = $instanceReg.OpenSubKey("Setup")
Try {
$edition = $instanceRegSetup.GetValue('Edition')
} Catch {
$edition = $Null
}
Try {
$ErrorActionPreference = 'Stop'
#Get from filename to determine version
$servicesReg = $reg.OpenSubKey("SYSTEM\\CurrentControlSet\\Services")
$serviceKey = $servicesReg.GetSubKeyNames() | Where {
$_ -match "$instance"
} | Select -First 1
$service = $servicesReg.OpenSubKey($serviceKey).GetValue('ImagePath')
$file = $service -replace '^.*(\w:\\.*\\sqlservr.exe).*','$1'
$version = (Get-Item ("\\$Computer\$($file -replace ":","$")")).VersionInfo.ProductVersion
} Catch {
#Use potentially less accurate version from registry
$Version = $instanceRegSetup.GetValue('Version')
} Finally {
$ErrorActionPreference = 'Continue'
}
New-Object PSObject -Property @{
Computername = $Computer
SQLInstance = $instance
Edition = $edition
Version = $version
Caption = {Switch -Regex ($version) {
"^11" {'SQL Server 2012';Break}
"^10\.5" {'SQL Server 2008 R2';Break}
"^10" {'SQL Server 2008';Break}
"^9" {'SQL Server 2005';Break}
"^8" {'SQL Server 2000';Break}
Default {'Unknown'}
}}.InvokeReturnAsIs()
isCluster = $isCluster
isClusterNode = ($nodes -contains $Computer)
ClusterName = $clusterName
ClusterNodes = ($nodes -ne $Computer)
FullName = {
If ($Instance -eq 'MSSQLSERVER') {
$Computer
} Else {
"$($Computer)\$($instance)"
}
}.InvokeReturnAsIs()
}
}
}
} Catch {
Write-Warning ("{0}: {1}" -f $Computer,$_.Exception.Message)
}
}
}
}
When I run this command, I get:
PS C:\Users\scj0025\Documents> Get-SQLInstance -ComputerName poly04406
SQLInstance : SQL642K5_01
Version : 9.2.3042.00
isCluster : False
Computername : poly04406
FullName : poly04406\SQL642K5_01
isClusterNode : False
Edition : Enterprise Edition (64-bit)
ClusterName :
ClusterNodes : {}
Caption : SQL Server 2005
SQLInstance : CONSQL2K8
Version : 10.0.2531.0
isCluster : False
Computername : poly04406
FullName : poly04406\CONSQL2K8
isClusterNode : False
Edition : Enterprise Edition
ClusterName :
ClusterNodes : {}
Caption : SQL Server 2008
I notice that to get this information the Get-SQLInstance script is looking at the registry... is that the "better" way to get an accurate list of this?
(I understand they are looking for different things, but why won't the first script pick up the different instances on the server, I wonder...) -
Sharepoint full farm backup generates sql logs for search application differential backups
We are running full farm backups for sharepoint. Backup completes normally but SQL logs are generated for differential backups for search service application.
it says something like:
Database differential changes were backed up: database: search_serv.......
Thanks,
Basit
This can happen for a few reasons, such as disk contention or network issues. Are you backing up to a shared directory on the SQL Server? If not, try that.
Trevor Seward
Follow or contact me at...
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs. -
One of our SQL Servers started creating a SQLDUMP file, and on investigation I found the error logs are filled with errors 3624 & 17066. Transactional replication is configured on one of the databases, and the Log Reader Agent is failing with the error "The
process could not execute 'sp_repldone/sp_replcounters' on XXXXX".
Not sure if both the assertion and Log Reader Agent errors are related. Before I remove and re-create the replication, I wanted to check if anyone has experienced the same issues or is aware of the cause.
***********Error messages from SQL Logs******
**Dump thread - spid = 0, EC = 0x0000000111534460
Message
A system assertion check has failed. Check the SQL Server error log for details. Typically, an assertion failure is caused by a software bug or data corruption. To check for database corruption, consider running DBCC CHECKDB. If you agreed to send dumps to
Microsoft during setup, a mini dump will be sent to Microsoft. An update might be available from Microsoft in the latest Service Pack or in a QFE from Technical Support.
Error: 3624, Severity: 20, State: 1.
SQL Server Assertion: File: <logscan.cpp>, line=2123 Failed Assertion = 'UtilDbccIsInsideDbcc () || (m_ProxyLogMgr->GetPru ()->GetStartupState () < RecoveryUnit::Recovered)'. This error may be timing-related. If the error persists after rerunning
the statement, use DBCC CHECKDB to check the database for structural integrity, or restart the server to ensure in-memory data structures are not corrupted.
Error: 17066, Severity: 16, State: 1.
External dump process return code 0x20000001.
External dump process returned no errors.
Thank you in advance.
You need to determine if this error is transient or a showstopper.
It sounds like your log reader agent has crashed and can't continue.
If so your best bet is to call Microsoft CSS and open a support incident.
It also sounds like DBCC CHECKDB was running while the log reader agent crashed.
If you need to get up and running again run sp_replrestart, but then you might find that replicated commands are not picked up. You will need to run a validation to determine if you need to reinitialize the entire publication or a single article.
I have run into errors like this, but they tend to be transient, ie the log reader agent crashes, and on restart it works fine.
Looking for a book on SQL Server 2008 Administration?
http://www.amazon.com/Microsoft-Server-2008-Management-Administration/dp/067233044X
Looking for a book on SQL Server 2008 Full-Text Search?
http://www.amazon.com/Pro-Full-Text-Search-Server-2008/dp/1430215941 -
Finding SQL trace and Log trace in SAP ME
Dear Experts,
I am new to SAP ME SDK 2.0 development. After deploying ME with changes, if an error occurs as "An internal error occurred; contact technical support", where should I check for traces like the SQL trace or log trace?
Thanks in advance,
Eswaraiah M.
Hello,
Log records are written to NW log and can be viewed in NW log viewer.
Konstantin -
Patching and backup process advice
Hello all ,
I have solaris 8 and I had a few questions about patching and backup.
1. Monthly I download the Sun "Recommended and Security" patchset; I then unzip it and run patchadd.
When I review the terminal output while this is running, I get quite a few: "unable to install patch exit code 8".
Should I be getting this?
Is there another way of applying patches(other than individually)?
2. What are good backup practices? Copy /etc and /var directories to a raid? Or is there a Sun tool for backups?
Thanks for any advice!
I believe this is a list of exit codes for patches. You'll most likely see a lot of return codes 8 and 2. Not typically an issue.
# Exit Codes:
# 0 No error
# 1 Usage error
# 2 Attempt to apply a patch that's already been applied
# 3 Effective UID is not root
# 4 Attempt to save original files failed
# 5 pkgadd failed
# 6 Patch is obsoleted
# 7 Invalid package directory
# 8 Attempting to patch a package that is not installed
# 9 Cannot access /usr/sbin/pkgadd (client problem)
# 10 Package validation errors
# 11 Error adding patch to root template
# 12 Patch script terminated due to signal
# 13 Symbolic link included in patch
# 14 NOT USED
# 15 The prepatch script had a return code other than 0.
# 16 The postpatch script had a return code other than 0.
# 17 Mismatch of the -d option between a previous patch
# install and the current one.
# 18 Not enough space in the file systems that are targets
# of the patch.
# 19 $SOFTINFO/INST_RELEASE file not found
# 20 A direct instance patch was required but not found
# 21 The required patches have not been installed on the manager
# 22 A progressive instance patch was required but not found
# 23 A restricted patch is already applied to the package
# 24 An incompatible patch is applied
# 25 A required patch is not applied
# 26 The user specified backout data can't be found
# 27 The relative directory supplied can't be found
# 28 A pkginfo file is corrupt or missing
# 29 Bad patch ID format
# 30 Dryrun failure(s)
# 31 Path given for -C option is invalid
# 32 Must be running Solaris 2.6 or greater
# 33 Bad formatted patch file or patch file not found
# 34 The appropriate kernel jumbo patch needs to be installed
Back up your system before adding any patches (ufsdump). It's recommended that these patches are added in run level one.
There should be a script called something like "install_cluster" command in the 8_Recommended directory that you can use to add the patches.
By default the patches create backout info in /var, but I usually disable this (yeah, livin' on the edge) with the "-nosave" option. -
Full backup and backup archive logs
Hello,
today in the early morning i did backup of my db with :
RMAN> backup as COMPRESSED BACKUPSET DATABASE format '/backup/%d_t%t_s%s_p%p';
This command created two files:
[oracle@p1 backup]$ ls -l
total 1132680
-rw-r----- 1 oracle oinstall 1155940352 Sep 14 00:44 TEST_t697508918_s5_p1
-rw-r----- 1 oracle oinstall 2785280 Sep 14 00:44 TEST_t697509873_s6_p1
Did I back up the archive logs with this command as well?
If I did not, do I have to, in order to have everything needed for a complete restore and recovery?
Is it now too late to back them up?
Can you confirm that this backup will be placed in the FRA, that it will back up all datafiles, control files, the spfile and all archive logs, and that when the backup finishes it will delete all archive logs?
After running that command, you can confirm it yourself at the RMAN prompt.
RMAN> list backup;
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
200 Full 509.78M DISK 00:01:03 15-FEB-07
BP Key: 202 Status: AVAILABLE Compressed: NO Tag: TAG20070215T171219
Piece Name: /disk2/PROD/backupset/2007_02_15/o1_mf_nnndf_TAG20070215T171219_2xb17nbb_.bkp
List of Datafiles in backup set 200
File LV Type Ckp SCN Ckp Time Name
1 Full 421946 15-FEB-07 /disk1/oradata/prod/system01.dbf
2 Full 421946 15-FEB-07 /disk1/oradata/prod/sysaux01.dbf
3 Full 421946 15-FEB-07 /disk1/oradata/prod/undotbs01.dbf
4 Full 421946 15-FEB-07 /disk1/oradata/prod/cwmlite01.dbf
5 Full 421946 15-FEB-07 /disk1/oradata/prod/drsys01.dbf
6 Full 421946 15-FEB-07 /disk1/oradata/prod/example01.dbf
7 Full 421946 15-FEB-07 /disk1/oradata/prod/indx01.dbf
8 Full 421946 15-FEB-07 /disk1/oradata/prod/tools01.dbf
9 Full 421946 15-FEB-07 /disk1/oradata/prod/users01.dbf
BS Key Type LV Size Device Type Elapsed Time Completion Time
201 Full 7.98M DISK 00:00:03 15-FEB-07
BP Key: 203 Status: AVAILABLE Compressed: NO Tag: TAG20070215T171219
Piece Name: /disk2/PROD/backupset/2007_02_15/o1_mf_ncsnf_TAG20070215T171219_2xb19prg_.bkp
SPFILE Included: Modification time: 15-FEB-07
SPFILE db_unique_name: PROD
Control File Included: Ckp SCN: 421968 Ckp time: 15-FEB-07
BS Key Size Device Type Elapsed Time Completion Time
227 30.50M SBT_TAPE 00:00:11 15-FEB-07
BP Key: 230 Status: AVAILABLE Compressed: NO Tag: TAG20070215T171334
Handle: 0bia4rtv_1_1 Media:
List of Archived Logs in backup set 227
Thrd Seq Low SCN Low Time Next SCN Next Time
1 5 389156 15-FEB-07 411006 15-FEB-07
1 6 411006 15-FEB-07 412972 15-FEB-07
1 7 412972 15-FEB-07 417086 15-FEB-07
1 8 417086 15-FEB-07 417114 15-FEB-07
1 9 417114 15-FEB-07 417853 15-FEB-07
1 10 417853 15-FEB-07 421698 15-FEB-07
1 11 421698 15-FEB-07 421988 15-FEB-07
list backup will show you everything.
Would this then be an incremental level 0 backup?
Yes, by default I think so.
So with this I must be able to flashback my database at least 3 days in the past, if I understood correctly.
But I am a little puzzled about purging that flashback data...
Can you please point me to some link, or tell me how I can purge logs that are not needed for that flashback period?
please read this link
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/rpfbdb003.htm#sthref513
Khurram -
Exchange Logs Delete and Backup Successfull
Kindly clarify the given information: we have 2 mailbox servers in a DAG environment, but they do not have the same number of logs (say MBX01 has 3000 logs and MBX02 has 1400 logs). In this case, will NetBackup fail to take a successful backup, or will the logs not be truncated/deleted? Our MBX01 has the true copy.
Thanks in advance!!
Best Regards, Hussain
Hi Hussain,
Based on my knowledge, the number of logs will not affect the backup.
In addition, I also recommend you post this in the NetBackup forum; as this is third-party backup software, Microsoft does not provide support for it.
Best regards,
Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact [email protected]
Niko Cheng
TechNet Community Support