Create rman back log file issue in linux
Hi Experts,
I run 10.2.04 database in redhat 5.1
i create a testing shell script for cron job. but I could not create rman backup log file with blank email.
my code as
#!/bin/bash
EXPORT DTE='date +%m%d%C5y%H%M'
export $ORACLE_HOME/bin
export $ORACLE_SID=sale
rman target='backuptest/backuptest@sale4' nocatalog log /urs/tmp/orarman/jimout.log << EOF
RUN {
show all;
EXIT;
EOF
mail -s " backup"${DTE} [email protected] < urs/tmp/orarman/jimout.log
Please advice me for dubug.
Thanks for your help!
JIm
Thanks very much!
I make changes as below
#!/bin/bash
EXPORT DTE='date +%m%d%C5y%H%M'
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export ORACLE_SID=sale
export PATH=$ORACLE_HOME/bin:$PATH
rman target='backuptest/[email protected]' nocatalog log='/urs/tmp/orarman/log/jimout.log' << EOF
RUN {
show all;
EXIT;
EOF
mail -s " backup"${DTE} [email protected] < urs/tmp/orarman/log/jimout.log
I am still not able see file under log='/urs/tmp/orarman/log/ directory.
the rman option does not works for log='/urs/tmp/orarman/log/jimout.log'
What is wrong in my code?
I am looking for your help!
JIm
Edited by: user589812 on Jul 23, 2010 8:36 AM
Edited by: user589812 on Jul 23, 2010 8:38 AM
Similar Messages
-
In Adobe 8 how do you create a bates log file?
I love this feature in Adobe 9 where one can select "output options" and change the way the bates stamped PDF is saved, to create a log file of bates numbers per document, etc. Are these same output options and bates log file options available in Adobe 8? Basically, how do I create a bates log file in Adobe 8?
This is a great utility---thanks for sharing it! The Javascripts folder can be hard to find--here's what worked for me:
For Macbook Pro users using Acrobat Pro X with OS X version 10 or greater, you’ll want to copy the unzipped “UVSAR_selectiveFlatten.js” file into the JavaScripts folder located in this path:
/Applications/Adobe Acrobat X Pro/Adobe Acrobat Pro.app/Contents/Resources/JavaScripts
*** IMPORTANT: To be able to see the Contents folder, you must Right-Click (Ctrl-Click if you don't have a right button) on the filename "Adobe Acrobat Pro.app", then select Show Package Contents from the pop-down menu. The Contents folder will appear in a new window and you can navigate to Contents/Resources/JavaScripts, and put the “UVSAR_selectiveFlatten.js” into the JavaScripts folder.
Once you've done that, restart Acrobat Pro and you'll find the "Flatten" option in your Edit menu. VERY handy! Tip: might want to save your original PDF file as a different version just to be safe---once you flatten the PDF, it can't be undone, so either just print and don't save, or save as different version if need be. -
Is it possible to create materialized view log file for force refresh
Is it possible to create materialized view log file for force refresh with join condition.
Say for example:
CREATE MATERIALIZED VIEW VU1
REFRESH FORCE
ON DEMAND
AS
SELECT e.employee_id, d.department_id from emp e and departments d
where e.department_id = d.department_id;
how can we create log file using 2 tables?
Also am copying M.View result to new table. Is it possible to have the same values into the new table once the m.view get refreshed?You cannot create a record as a materialized view within the Application Designer.
But there is workaround.
Create the record as a table within the Application Designer. Don't build it.
Inside your database, create the materialized with same name and columns as the record created previously.
After that, you'll be able to work on that record as for all other within the Peoplesoft tools.
But keep in mind do never build that object, that'll drop your materialized view and create a table instead.
Same problem exists for partitioned tables, for function based-indexes and some other objects database vendor dependant. Same workaround is used.
Nicolas. -
Hi All,
Can anyone suggest me how to create a separate log file(could be any file) from a script to store some unsuccessful MA's.
I have checked the existing IAPI's but found nothing.
Thanks in advance,
srikanth emaniHi srikanth,
you have to create new class IAPi and then specify the separate log.
KR,
Anacia -
Problem in creating log file on Ubuntu(Linux)
hi
I have developed a project in java. which creates log file for exception handling. This is working fine on Windows but When I run the jar file on ubuntu then it is not creating log file and throwing an error.
Error Message="Permission Denied UnixFileSystem.java createFileExclusively()"
Code is :
strStartupPath = new java.io.File("").getAbsolutePath();
DateFormat dateformat = new SimpleDateFormat("yyyyMMdd");
Date date = new Date();
String strFileName = strStartupPath + "\\" + dateformat.format(date) + ".log";
File logFile = new File(strFileName);
Your assistance will be appriciated.
SonalIf you want to do this properly then you will have to make it OS specific. Various security systems on each platform will prevent you from freely writing your logfile unless you choose the correct location. The applications folder under MacOSX, for instance, requires Admin privileges. Under MacOSX and Windows, log files are typically stored in the Applications Data resp. /Library/Logs or ~/Library/Logs folder in MacOS X. In Unix and Linux you will typically use /var/log directory.
I think your question is rather Java than Linux specific, but there is a lot of information available for free on the web.
Perhaps the following example can help you to determine the OS:
public static final class OsUtils
private static String OS = null;
public static String getOsName()
if(OS == null) { OS = System.getProperty("os.name");
public static boolean isWindows()
return getOsName().startsWith("Windows");
public static boolean isUnix() // and so on
}Edited by: Dude on Apr 16, 2011 12:58 AM -
Standby creating archives log files issue!
Hello Everyone,
Working on oracle 10g R2/Windows, I have created a dataguard with one standby database, but there is a strange issue that happen, and I'll need someone to shed the light to me.
By default archived log created from the primary database should be the sent to the stanndby database, but I found that the standby database has plus one archived log file.
From the primary database:
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination C:\local_destination1_orcl
Oldest online log sequence 1021
Next log sequence to archive 1023
Current log sequence 1023
contents of C:\local_destination1_orcl
1_1022_623851185.ARC
from the standby database:
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination C:\local_destination1_orcl
Oldest online log sequence 1022
Next log sequence to archive 0
Current log sequence 1023
contents of C:\local_destination1_orcl
1_1022_623851185.ARC
1_1023_623851185.ARC ---> this is the extra archive file created in the standby database, should someone let me know how to avoid this?
Thanks for your helpSELECT FROM v$version;*
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 64-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
The standby database is a physical standby database (not logical standby)
Thanks against for your contribution, but I'm still not know why standby create and arhive files too? -
Log File Issue In SQL server 2005 standard Edition
We have database of size 375GB .The data file has 80 GB free space within .When trying to rebuild the index we had 450 GB free space on the disk where Log file is residing.The rebuild index activity failed due to space issue.added more space and got the
job done successfully
The Log file has grow up to 611GB to complete the rebuild index.
version :SQL server 2005 Standard Edition .Is ther a way to estimate the space required for rebuild index in this version.
I am aware we notmaly allocate 1.5 times of data file.But in this case It was totaly wrong.
Any suggestion with examples would be appreciated.
RaghuOK, there's a few things here.
Can you outline for everybody the recovery model you are using, the frequency with which you take full, differential and transaction log backups.
Are you selectively rebuilding your indexes or are you rebuilding everything?
How often are you doing this? Do you need to?
There are some great resources on automated index maintenance, check out
this post by Kendra Little.
Depending on your recovery point objectives I would expect a production database to be in the full recovery mode and as part of this you need to be taking regular log backups otherwise your log file will just continue to grow. By taking a log backup it will
clear out information from inactive VLF's and therefore allow SQL Server to write back to those VLF's rather than having to grow the log file. This is a simplified version of events, there are caveats.
A VLF will be marked as active if it still has an open transaction in it or there is a HA option that still requires that data to be available as that data has not been copied to another node yet.
Most customers that I see take transaction log backups every 15 - 30 minutes, but this really does depend upon how much data your company can afford to lose. That's another discussion for another day.
Make sure that you take a transaction log backup prior to your job that does your index rebuilds (hopefully a smart job not a sledge hammer job).
As mentioned previously swapping to bulk logged can help to reduce the size of the amount of information logged during index rebuilds. If you do this make sure to swap back into the full recovery model straight after and perform a full backup. There are
problems with the ability to do point in time restores whilst in the bulk logged recovery model, so you need to reduce the amount of time you use it.
Really you also need to look at how your indexes are created does the design of them lead to them being fragmented on a regular basis? Are they being used? Are there better indexes out there that can help performance?
Hopefully that should put you on the right track.
If you find this helpful, please mark the post as helpful,
If you think this solves the problem, please propose or mark it an an answer.
Please provide details on your SQL Server environment such as version and edition, also DDL statements for tables when posting T-SQL issues
Richard Douglas
My Blog: Http://SQL.RichardDouglas.co.uk
Twitter: @SQLRich -
Help with Script created to check log files.
Hi,
I have a program we use in our organization on multiple workstations that connect to a MS SQL 2005 database on a Virtual Microsoft 2008 r2 Server. The program is quite old and programmed around the days when serial connections were the most efficient means
of connection to a device. If for any reason the network, virtual server or the SAN which the virtual server runs off have roughly 25% utilization or higher on its resources the program on the workstations timeout from the SQL database and drop the program
from the database completely rendering it useless. The program does not have the smarts to resync itself to the SQL database and it just sits there with "connection failed" until human interaction. A simple restart of the program reconnects itself
to the SQL database without any issues. This is fine when staff are onsite but the program runs on systems out of hours when the site is unmanned.
The utilization of the server environment is more than sufficient if not it has double the recommended resources needed for the program. I am in regular contact with the support for the program and it is a known issue for them which i believe they do not
have any desire to fix in the near future.
I wish to create a simple script that checks the log files on each workstation or server the program runs on and emails me if a specific word comes up in that log file. The word will only show when a connection failure has occurred.
After the email is sent i wish for the script to close the program and reopen it to resync the connection.
I will schedule the script to run ever 15 minutes.
I have posted this up in a previous post about a month ago but i went on holidays over xmas and the post died from my lack or response.
Below is what i have so far as my script. I have only completed the monitoring of the log file and the email portion of it. I had some help from a guy on this forum to get the script to where it is now. I know basic to intermediate scripting so sorry for my
crudity if any.
The program is called "wasteman2G" and the log file is located in \\servername\WasteMan2G\Config\DCS\DCS_IN\alert.txt
I would like to get the email side of this script working first and then move on to getting the restart of the program running after.
At the moment i am not receiving an error from the script. It runs and doesn't complete what it should.
Could someone please help?
Const strMailto = "[email protected]"
Const strMailFrom = "[email protected]"
Const strSMTPServer = "mrc1tpv002.XXXX.local"
Const FileToRead = "\\Mrctpv005\WasteMan2G\Config\DCS\DCS_IN\alert.txt"
arrTextToScanFor = Array("SVR2006","SVR2008")
Set WshShell = WScript.CreateObject("WScript.Shell")
Set objFSO = WScript.CreateObject("Scripting.FileSystemObject")
Set oFile = objFSO.GetFile(FileToRead)
dLastCreateDate = CDate(WshShell.RegRead("HKLM\Software\RDScripts\CheckTXTFile\CreateDate"))
If oFile.DateCreated = dLastCreateDate Then
intStartAtLine = CInt(WshShell.RegRead("HKLM\Software\RDScripts\CheckTXTFile\LastLineChecked"))
Else
intStartAtLine = 0
End If
i = 0
Set objTextFile = oFile.OpenAsTextStream()
Do While Not objTextFile.AtEndOfStream
If i < intStartAtLine Then
objTextFile.SkipLine
Else
strNextLine = objTextFile.Readline()
For each strItem in arrTextToScanFor
If InStr(LCase(strNextLine),LCase(strItem)) Then
strResults = strNextLine & vbcrlf & strResults
End If
Next
End If
i = i + 1
Loop
objTextFile.close
WshShell.RegWrite "HKLM\Software\RDScripts\CheckTXTFile\FileChecked", FileToRead, "REG_SZ"
WshShell.RegWrite "HKLM\Software\RDScripts\CheckTXTFile\CreateDate", oFile.DateCreated, "REG_SZ"
WshShell.RegWrite "HKLM\Software\RDScripts\CheckTXTFile\LastLineChecked", i, "REG_DWORD"
WshShell.RegWrite "HKLM\Software\RDScripts\CheckTXTFile\LastScanned", Now, "REG_SZ"
If strResults <> "" Then
SendCDOMail strMailFrom,strMailto,"VPN Logfile scan alert",strResults,"","",strSMTPServer
End If
Function SendCDOMail( strFrom, strSendTo, strSubject, strMessage , strUser, strPassword, strSMTP )
With CreateObject("CDO.Message")
.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = strSMTP
.Configuration.Fields.item("http://schemas.microsoft.com/cdo/configuration/smtpauthenticate") = 1 'basic
.Configuration.Fields.item("http://schemas.microsoft.com/cdo/configuration/sendusername") = strUser
.Configuration.Fields.item("http://schemas.microsoft.com/cdo/configuration/sendpassword") = strPassword
.Configuration.Fields.Update
.From = strFrom
.To = strSendTo
.Subject = strSubject
.TextBody = strMessage
On Error Resume Next
.Send
If Err.Number <> 0 Then
WScript.Echo "SendMail Failed:" & Err.Description
End If
End With
End FunctionThankyou for that link, it did help quite a bit. What i wanted was to move it to csript so i could run the wscript.echo in commandline. It all took to long and found a way to complete it via Batch. I do have a problem with my script though and you might
be able to help.
What i am doing is searching the log file. Finding the specific words then outputting them to an email. I havent used bmail before so thats probably my problem but then im using bmail to send me the results.
Then im clearing the log file so the next day it is empty so that when i search it every 15 minutes its clean and only when an error occurs it will email me.
Could you help me send the output via email using bmail or blat?
@echo off
echo Wasteman Logfile checker
echo Created by: Reece Vellios
echo Date: 08/01/2014
findstr "SRV2006 & SRV2008" \\Mrctpv005\WasteMan2G\Config\DCS\DCS_IN\Alert.Txt > c:\log4mail.txt
if %errorlevel%==0 C:\Documents and Settings\rvellios\Desktop\DCS Checker\bmail.exe -s mrc1tpv002.xxx.local -t [email protected] -f [email protected] -h -a "Process Dump" -m c:\log4mail.txt -c
for %%G in (\\Mrctpv005\WasteMan2G\Config\DCS\DCS_IN\Alert.Txt) do (copy /Y nul "%%G")
This the working script without bmail
@echo off
echo Wasteman Logfile checker
echo Created by: Reece Vellios
echo Date: 08/01/2014
findstr "SRV2006 & SRV2008" \\Mrctpv005\WasteMan2G\Config\DCS\DCS_IN\Alert.Txt > C:\log4mail.txt
if %errorlevel%==0 (echo Connection error)
for %%G in (\\Mrctpv005\WasteMan2G\Config\DCS\DCS_IN\Alert.Txt) do (copy /Y nul "%%G")
I need to make this happen:
If error occurs at "%errorlevel%=0" then it will output the c:\log4mail.txt via smtp email using bmail. -
Since setting up RMAN control file autobckup on a dataguard set up I get the following message in the alertlog:-
Starting control autobackup
Sat May 07 01:04:27 2005
Control autobackup written to DISK device
handle 'CF_C-00'
Clearing standby activation ID 2115378951 (0x7e161f07)
The primary database controlfile was created using the
'MAXLOGFILES 9' clause.
There is space for up to 6 standby redo logfiles
Use the following SQL commands on the standby database to create
standby redo logfiles that match the primary database:
ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 104857600;
ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 104857600;
ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 104857600;
ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 104857600;
Can anyone tell me a bit more a bit what this message is saying and explain the pros/cons of adding these standby log files to the standby database??This message is not related with Rman.
In DataGurard there is a new fetaure of adding standby logfile for maximum protection.
in 8i standby database this feature is not available.
so this is normal message of adding standy logfile on dataguard.
Thanks and Regards
Kuljeet Pal Singh -
I have a set of logs that that I need to run some analysis on. I'm trying to convert them to a CSV format so I can analyze them with Excel or LogParser.
The basic process I'm trying to implement is:
1. Search entries for ERROR in log file
2. Split components of the error line
3. Create output CSV.
I was able to address item 1 with this this statement:
Select-string -path C:\my\logfile\file.log -Pattern "\W\sERROR:\s" -CaseSensitive -AllMatches | select -expand line
This is a sample of the output lines I'm getting
01/31/2014 14:07:00 USERC <25000> ERROR: 1222 - SQLSTATE: HYT00 - Execute failed: Lock request time out period exceeded. on line 9461 in Quote Entry - Payable, Accounts
01/31/2014 14:07:11 USERD <25000> ERROR: 1222 - SQLSTATE: HYT00 - Execute failed: Lock request time out period exceeded. on line 324491 in &Orders on object SELECT_ORD_BTN.
01/31/2014 15:49:34 USERG <25000> ERROR: 50000 - SQLSTATE: 42000 - Execute failed: \n\nNo Lines entered. Must finish entering lines. on line 281522 in Quote Entry - Payable, Accounts
01/31/2014 15:51:01 USERH <25000> ERROR: 50000 - SQLSTATE: 42000 - Execute failed: \n\nNo Lines entered. Must finish entering lines. on line 281522 in Quote Entry - Payable, Accounts
I'm having issues breaking these lines into its components. Ideally, I'd like to create an output CSV file that breaks the components of these lines thus:
DATE, TIME, USER, ERROR, ERRORID, MESSAGE
Following on the method I mentioned above, I was planning to pipe the output into a foreach and perhaps use regex to extract the parts of the line, but I'm having difficulties. I have the regex figured out, and I have tried combining them with the -split
operator and the select-string, but I'm not having much luck.
What is the best method to process these lines so I can split them at specific locations in the string?
Using one of the example outputs above, I need to split the line in the following spots (note the commas)
01/31/2014, 15:51:01, USERH, <25000> ERROR:, 50000 - SQLSTATE:, 42000 - Execute failed: \n\nNo Lines entered. Must finish entering lines. on line 281522 in Quote Entry - Payable, AccountsWell, I ended up using
$mydate=$line.split(" ")[0]
$myTime=$line.split(" ")[1]
... etc
to extract the first 6 fields in the line as they're all (more or less consistently) separated by spaces. For the final (message) field, I used your recommendation above.
I redistributed the fields so they'd split at the spaces and I concatenated
01/31/2014 | 15:51:01 | USERH | <25000> | ERROR: 50000 - | SQLSTATE: 42000 - | Execute failed: \n\nNo Lines entered. Must finish entering lines. on line 281522 in Quote Entry - Payable, Accounts.
Thanks for pointing me in the right direction. -
Hi,
Maybe someone can help me on this.
We have a RAC database in production that (for some) applications need a response of 0,5 seconds. In general that is working.
Outside of production hours we make a weekly full backup and daily incremental backup so that is not bothering us. However as soon as we make an archive backup or a backup of the control file during production hours we have a problem as the application have to wait for more then 0,5 seconds for a respons caused by the event "log file sync" with wait class "Commit".
I already adjusted the script for RMAN so that we use only have 1 files per set and also use one channel. However that didn't work.
Increasing the logbuffer was also not a success.
Increasing Large pool is in our case not an option.
We have 8 redolog groups with each 2 members ( each 250 Mb) and an average during the day of 12 logswitches per hour which is not very alarming. Even during the backup the I/O doesn't show very high activity. The increase of I/O at that moment is minor but (maybe) apperantly enough to cause the "log file sync".
Oracle has no documentation that gives me more possible causes.
Strange thing is that before the first of October we didn't have this problem and there were no changes made.
Has anyone an idea where to look further or did anyone experience a thing like this and was able to solve it?
Kind regardsThe only possible contention I can see is between the log writer and the archiver. 'Backup archivelog' in RMAN means implicitly 'ALTER SYSTEM ARCHIVE LOG CURRENT' (log switch and archiving the online log).
You should alternate redo logs on different disks to minimize the effect of the archiver on the log writer.
Werner -
Dear All,
There have been issues in the past where the transactional log file has grown too big that it made the drive to limit its size. I would like to know the answers to the following
please:
1. To resolve the space issue, is the correct way to first take a backup of the transactional log then shrink the transactional log file?
2. What would be the recommended auto growth size, for example if I have a DB which is 1060 GB?
3. At the moment, the transactional log backup is done every 1 hour, but I'm not sure if it should be taken more regularly?
4. How often should the update stat job should run please?
Thank you in advance!Hi
My answers might be very similar to geeks already answer, but hope it will add something more
1. To resolve the space issue, is the correct way to first take a backup of the transactional log then shrink the transactional log file?
--> If database recovery model is full \ bulk then tlog backup is helpful, and it doesnt help try to increase frequency of log backup and you can refer :
Factors That Can Delay Log Truncation
2. What would be the recommended auto growth size, for example if I have a DB which is 1060 GB?
Auto grow for very large db is very crucial if its too high can cause active vlf and too less can cause fragmentation. In your case your priority is to control space utilizatiuon.
i suggest you to keep minimum autogrowth and it must be in size not in percentage.
/*******Auto grow formula for log file**********/
Auto grow less than 64MB = 4 VLFs
Autogrow of 64MB and less than 1GB = 8 VLFs
Autogrow of 1GB and larger = 16 VLFs
3. At the moment, the transactional log backup is done every 1 hour, but I'm not sure if it should be taken more regularly?
---> If below query returns log_backup for respective database then yes you can to increase log backup frequency. But if it returns some other factor , please check above
mention link
"select name as [database] ,log_reuse_wait , log_reuse_wait_desc from sys.databases"
4. How often should the update stat job should run please?
this totaly depend on ammount of dml operation you are performing. you can select auto update stats and weekly you can do update stats with full scan.
Thanks Saurabh Sinha
http://saurabhsinhainblogs.blogspot.in/
Please click the Mark as answer button and vote as helpful
if this reply solves your problem -
Mailaccess.log file - How do I control size/create new mailaccess.log file
I am running an IMAP/SMTP server on a G5 Xserve. My mailaccess.log file is getting rather large. When I mv then touch to create a new mailaccess.log file the old file that was mv'd continues to get updated while the new mailaccess.log file remaile at 0 with not entries.
How to refresh my log file?
thanks
JeffHi,
http://help.sap.com/saphelp_nw2004s/helpdata/en/c2/ee4f58ff1bce41b4dc6d2612fae211/frameset.htm
and More here..
Problem with system.log
Regards,
N. -
Database Engine won't start - master DB log file issue
I had a server patched and reboot yesterday and now the SQL services won't start because it's saying it can't find the master log file. I checked the path and the log file is present. Any suggestions.
2014-04-21 09:48:55.22 Server Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
Jun 28 2012 08:36:30
Copyright (c) Microsoft Corporation
Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)
2014-04-21 09:48:55.22 Server (c) Microsoft Corporation.
2014-04-21 09:48:55.22 Server All rights reserved.
2014-04-21 09:48:55.22 Server Server process ID is 4008.
2014-04-21 09:48:55.22 Server System Manufacturer: 'HP', System Model: 'ProLiant DL380 G5'.
2014-04-21 09:48:55.22 Server Authentication mode is MIXED.
2014-04-21 09:48:55.22 Server Logging SQL Server messages in file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\ERRORLOG'.
2014-04-21 09:48:55.22 Server This instance of SQL Server last reported using a process ID of 4080 at 4/21/2014 9:47:41 AM (local) 4/21/2014 2:47:41 PM (UTC). This is an informational message only; no user action is required.
2014-04-21 09:48:55.22 Server Registry startup parameters:
-d C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\master.mdf
-e C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\ERRORLOG
-l C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\mastlog.ldf -g512
2014-04-21 09:48:55.22 Server SQL Server is starting at normal priority base (=7). This is an informational message only. No user action is required.
2014-04-21 09:48:55.22 Server Detected 4 CPUs. This is an informational message; no user action is required.
2014-04-21 09:48:55.33 Server Using dynamic lock allocation. Initial allocation of 2500 Lock blocks and 5000 Lock Owner blocks per node. This is an informational message only. No user action is required.
2014-04-21 09:48:55.35 Server Node configuration: node 0: CPU mask: 0x000000000000000f:0 Active CPU mask: 0x000000000000000f:0. This message provides a description of the NUMA configuration for this computer. This is an informational
message only. No user action is required.
2014-04-21 09:48:55.38 spid7s Starting up database 'master'.
2014-04-21 09:48:55.39 spid7s Error: 17204, Severity: 16, State: 1.
2014-04-21 09:48:55.39 spid7s FCB::Open failed: Could not open file C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\mastlog.ldf -g512 for file number 2. OS error: 2(The system cannot find the file
specified.).
2014-04-21 09:48:55.39 spid7s Error: 5120, Severity: 16, State: 101.
2014-04-21 09:48:55.39 spid7s Unable to open the physical file "C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\mastlog.ldf -g512". Operating system error 2: "2(The system cannot find the
file specified.)".That did it. Not sure why that switch was there.
It might have been intended, but it was added incorrectly. -g512 is a legal switch. Then again, since you have 64-bit SQL Server, it is not likely that you will needed.
(The -g switch increases the area known as memtoleave, which is the virtual address space (VAS) not used for the buffer cache. On 32-bit SQL Server you may need to increase this area from the default of 256 MB. On 64-bit SQL Server the full physical memory
is available for VAS; on 32-bit it's only 2GB.)
Erland Sommarskog, SQL Server MVP, [email protected] -
Create a new log file every hour
I have a program which receives measurement data and writes this data to TDMS file. However, I want to create a new TDMS file every hour. When I put a while loop in mu sub vi which creates the new file the program doesn't run. I have attached the sub vi. Can anyone help me out here?
Cheers
Attachments:
ConfigTDMS (SubVI)_loopTDMS.vi 16 KBOne trick for doing this is to prepend the file name with YYJJJHH See example
Then just check if file exists (if not create and add headder) when you write to file.
This also generates the reports in "alphabetical" order so windows explorer presents them chronologically.
Jeff
Maybe you are looking for
-
File Content Conversion with Multiple structures
Here is the scenario Legacy to XI -> XI to R/3 (App Server) txt file and fixed length. <b>Test file</b> 100WELCOME 0430000960603201321 2000000000040008000803 <b>Table2</b> RecordType PriorityCode Destination BankOrginNo CreationDate CretionTime Sp
-
Im doing a work shop at a apple store do i need to take my mac with me
If I am doing a workshop at one of the apple stores do i have to bring my own laptop ?
-
White space character issue in PSA
Hi All, I am loading master data full load from R3 to BW.The load is failing by giving the Error Message: "Too many error records - update terminated" and "Error 4 in the update" In details tab it is showing red entries as "Record 4857 :Value 'ATF-34
-
Do not use the Restore from Backup when upgrading
For those having issues...... Try to restore your phone as "New" and not from a backup. After you have a clean version iOS 4.0 on your phone customize your phone manually, do not use a backup. i know this can be a pain in the *** but it should fix mo
-
Nokia E72 Font Folder Not Viewing
Hi .. i want to install some nice fonts in my nokia e72. But it needs to rename by a code given in c:/resources/fonts. i havent found such a folder in e72.As i have already enabled "show system folder" too in xplore and y browser. look at screeshot I