ASE - Dump Database (Transaction Log deleted?)
Hi experts,
I have a question.
Will the transaction log be deleted or reset at the time a database dump finishes?
Hopefully it will only be truncated by dumping the transaction log itself, right?
Kind regards
Hello Christian,
Don't worry, we have set the options on the databases which are getting installed through business suites.
However, I will provide you the detailed DB details and their respective options, which you can cross-verify as well from the isql command
sp_helpdb
go
You can also enable or disable the options using
sp_dboption "<SID>","<option_name>",true/false
go
Also, if you need the detailed per-DB default options that are to be set, let me know and I will attach the details as well ;)
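To illustrate the distinction Christian is asking about, here is a minimal ASE sketch (the dump paths are illustrative; <SID> is the usual placeholder). A full database dump copies the data and the log but does not truncate the log; only a transaction log dump does.

```sql
-- Full database dump: copies data and log, but the log is NOT truncated:
dump database <SID> to '/backup/<SID>_full.dmp'
go

-- Transaction log dump: truncates the inactive portion of the log:
dump transaction <SID> to '/backup/<SID>_log.dmp'
go
```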
Regards
Kiran K Adharapuram
Similar Messages
-
Need a Walkthrough on How to Create Database & Transaction Log Backups
Is this the proper forum to ask for this type of guidance? There has been bad blood between my department (Research) and the MIS department for 30 years, and long story short I have been "given" a virtual server and cut loose by my MIS department
-- it's my responsibility for installs, updates, backups, etc. I have everything running really well, I believe, with the exception of my transaction log backups -- my storage unit is running out of space on a daily basis, so I feel like I have to be
doing something wrong.
If this is the proper forum, I'll supply the details of how I currently have things set up, and I'm hoping with some loving guidance I can work the kinks out of my backup plan. High level -- this is for a SQL Server 2012 instance running on a Windows
2012 Server...Thanks all, after posting this I'm going to read the materials provided above. As for the details:
I'm running on a virtual Windows Server 2012 Standard, Intel Xeon CPU 2.6 GHz with 16 GB of RAM; 64 bit OS. The computer name is e275rd8
Drives (NTFS, Compression off, Indexing on):
DB_HVSQL_SQL-DAT_RD8-2(E:) 199 GB (47.2 used; 152 free)
DB_HVSQL_SQL-Dat_RD8(F:) 199 GB (10.1 used; 189 free)
DB_HVSQL_SQL-LOG_RD8-2(L:) 199 GB (137 used; 62 free) **
DB_HVSQL_SQL-BAK_RDu-2(S:) 99.8 GB (64.7 used; 35 free)
DB_HVSQL_SQL-TMP_RD8-2(T:) 99.8 GB (10.6 used; 89.1 free)
SQL Server:
Product: SQL Server Enterprise (64-bit)
OS: Windows NT 6.2 (9200)
Platform: NT x64
Version: 11.0.5058.0
Memory: 16384 (MB)
Processors: 4
Root Directory: f:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL
Is Clustered: False
Is HADR Enabled: False
Database Settings:
Default index fill factor: 0
Default backup media retention (in days): 0
Compress backup is checkmarked/on
Database default locations:
Data: E:\SQL\Data
Log: L:\SQL\LOGs
Backup: S:\SQLBackups
There is currently only one database: DistrictAssessmentDW
To create my backups, I'm using two maintenance plans, and this is where I'm pretty sure I'm not doing something correctly. My entire setup is me just guessing what to do, so feel free to offer suggestions...
Maintenance Plan #1: Backup DistrictAssessmentDW
Scheduled to run daily Monday Through Friday at 3:33 AM
Step 1: Backup Database (Full)
Backup set expires after 8 days
Back up to Disk (S:\SQLBackups)
Set backup compression: using the default server setting
Step 2: Maintenance Cleanup Task
Delete files of the following type: Backup files
Search folder and delete files based on an extension:
Folder: L:\SQL\Logs
File extension: trn
Include first-level subfolders: checkmarked/on
File age: Delete files based on the age of the file at task run time older than 1 Day
Step 3: Maintenance Cleanup Task
Delete files of the following type: Backup files
Search folder and delete files based on an extension:
Folder: S:\SQLBackups
File extension: bak
Include first-level subfolders: checkmarked/on
File age: Delete files based on the age of the file at task run time older than 8 Days
Maintenance Plan #2: Backup DistrictAssessmentDW TRANS LOG ONLY
Scheduled to run daily Monday through Friday; every 20 minutes starting at 6:30 AM & ending at 7:00 PM
Step 1: Backup Database Task
Backup Type: Transaction Log
Database(s): Specific databases (DistrictAssessmentDW)
Backup Set will expire after 1 day
Backup to Disk (L:\SQL\Logs\)
Set backup compression: Use the default server setting
Around 2:30 each day my transaction log backup drive (L:) runs out of space. As you can see, transactions are getting backed up every 20 minutes, and the average size of the backup files is about 5,700,000 KB.
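For reference, the transaction-log maintenance plan above amounts to something like the following T-SQL (the timestamped file name is illustrative; the plan generates a new one per run):

```sql
BACKUP LOG [DistrictAssessmentDW]
TO DISK = N'L:\SQL\Logs\DistrictAssessmentDW_backup_2015_01_01_063000.trn'
WITH RETAINDAYS = 1, NOFORMAT, NOINIT, SKIP, STATS = 10;
```

If each 20-minute log backup really is ~5.7 GB, the amount of log generated between backups, rather than the backup settings themselves, is the first thing worth investigating.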
I hope this covers everything, if not please let me know what other information I need to provide... -
SQL Server Database - Transaction logs growing largely with Simple Recovery model
Hello,
There is SQL server database on client side in production environment with huge transaction logs.
Requirement :
1. Take database backup
2. Transaction log backup is not required. - so it is set to Simple recovery model.
I am aware that the simple recovery model also writes to the transaction log, just as the full recovery model does, as explained at the link below.
http://realsqlguy.com/origins-no-simple-mode-doesnt-disable-the-transaction-log/
Last week, this transaction log became of 1TB size and blocked everything on the database server.
How can we overcome this situation?
PS : There are huge bulk uploads to the database tables.
Current Configuration :
1. Simple Recovery model
2. Target Recovery time : 3 Sec
3. Recovery interval : 0
4. No SQL Agent job schedule to shrink database.
5. No other checkpoints created except automatic ones.
Can anyone please guide me to have correct configuration on SQL server for client's production environment?
Please let me know if any other details required from server.
Thank you,
Mittal.
@dave_gona,
Thank you for your response.
Can you please explain this to me in more detail:
What do you mean by one batch?
1. Is it the number of rows to be inserted at a time?
2. Or does the size of data in one cell matter here?
As in my case, I am clubbing together all the data in one xml (on c# side) and inserting it as one record. Data is large in size, but only 1 record is inserted.
Is it a good idea to shrink the transaction log periodically, as it does not happen by itself in the simple recovery model?
Hi Mittal,
Shrinking is a bad practice; you should not shrink log files regularly. In rare cases, if you need to recover space, you may do it.
Have manual checkpoints in the bulk insert operation.
I cannot tell upfront what the batch size should be, but you can start with 1/4th of what you are currently inserting.
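A sketch of the batching-plus-manual-checkpoints idea, assuming hypothetical staging and target tables:

```sql
-- Insert in batches rather than one huge transaction; in SIMPLE recovery,
-- a CHECKPOINT between batches lets the inactive log space be reused.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TargetTable (id, payload)
    SELECT TOP (50000) s.id, s.payload
    FROM dbo.StagingTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.id = s.id);

    SET @rows = @@ROWCOUNT;
    CHECKPOINT;  -- allow log truncation between batches
END
```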
Most important: what does the below query return for the database?
select log_reuse_wait_desc from sys.databases where name='db_name'
The value it returns is what is stopping the log from being cleared and reused.
What version and edition of SQL Server are we talking about? What is the output of
select @@version
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
Database Transaction log suspected pages
We migrated our production databases to a new SQL cluster, and when I ran a query to find any suspected-page entries in the msdb database, I found 5 entries in the msdb.dbo.suspected_pages table. These entries are for the production database transaction log file (file_id = 2),
page_ids 1, 2, 3, 6, 7, and the event_type was updated to 4 for all pages after I did the DB restore; error_count is 1 for each page_id.
As I understand it, before I did the DB restore there were corrupted transaction log pages, but the restore repaired those corrupted pages. Since the pages are repaired, there is no need for concern for now. I now have a database consistency check
job scheduled to check for database corruption on the report server each night. I restore the database on the report server using a copy of the production database backup. Can someone please help me understand what caused the log file pages to get corrupted? Are page_ids 1, 2, 3, 6, 7
called boot pages for the log file? What should I do if I find suspected pages for the log file again?
Thanks for your help in advance.
Daizy
Hi Andreas, thanks for your reply.
FYI - you have event_types 1 and 3 for your database, but the event_type was updated to 4 on my system after I did the restore, and the date/time shows exactly when the event_type was updated.
Please help me understand: usually the database data file is organized in pages, not the log file?
Thanks
Daizy
Hello Daizy
yes, the event types 1-3 were the error-state before the "repair".
After I did a Full backup + Restore I now have type 4 just as you do.
Yes, the log file is organized in so-called "Virtual Log Files"/VLFs, which have nothing in common with the 8-KB data pages of the data files. Therefore a page_id does not make sense there.
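If you want to see the VLF layout of a log file yourself, the (undocumented but long-standing) command below returns one row per virtual log file:

```sql
DBCC LOGINFO;  -- one row per VLF: FileId, FileSize, StartOffset, Status, ...
```

A Status of 2 marks a VLF as active (not yet reusable).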
You can read more on the architecture of the Transaction Log here:
SQL Server Transaction Log Architecture and Management
This article by Paul Randal might also be of interest to you for:
Transaction log corruption and backups
Hope that helps.
Andreas Wolter (Blog |
Twitter)
MCSM: Microsoft Certified Solutions Master Data Platform, MCM, MVP
www.SarpedonQualityLab.com |
www.SQL-Server-Master-Class.com -
Content database transaction log is full
Hi guys,
i am facing some very serious issues right here, SharePoint content can't be updated because the transaction log drive for the logs is full. the following message is displayed in the event viewer
'The transaction log for database wss_content_guid is full. To find out why space can't be reused see the log_reuse_wait_desc column in sys.databases'
Pls help
Hi,
The recommended way to truncate the transaction log if you are using a full recovery model is to back up the log. SQL Server 2005 automatically truncates the inactive parts of the transaction log when you back up the log. It is also recommended that you pre-grow the transaction log to avoid auto-growing the log. For more information about growing the transaction log, see Managing the Size of the Transaction Log File (http://go.microsoft.com/fwlink/?LinkId=124882). For more information about using a full recovery model, see Backup Under the Full Recovery Model (http://go.microsoft.com/fwlink/?LinkId=127985). For more information about using a simple recovery model, see Backup Under the Simple Recovery Model (http://go.microsoft.com/fwlink/?LinkId=127987).
We do not recommend that you manually shrink the transaction log size or manually truncate the log by using the Truncate method.
Transaction logs are also automatically backed up when you back up the farm, Web application, or databases by using either the SharePoint Central Administration Web site or the Stsadm command-line tool. For more information about the Stsadm command-line tool, see Backup: Stsadm operation (Windows SharePoint Services).
So I would suggest you backing up SharePoint by either the SharePoint Central Administration Web site or the Stsadm command-line tool.
For more information about Best Practice on Backups, please refer to the following articles:
Best Practice on Backups
http://blogs.msdn.com/joelo/archive/2007/07/09/best-practice-on-backups.aspx
Back up logs (Windows SharePoint Services 3.0)
http://technet.microsoft.com/en-us/library/cc811601.aspx
Hope this helps.
Rock Wang
Rock Wang– MSFT -
System Crash after transactional log filled filesystem
Dear gurus,
We have an issue in our PRD system on the FlexFrame platform. We run SAP NW 7.4 (SP03) with ASE 15.7.0.042 (SuSE SLES 11 SP1) as a BW system.
While uploading data from ERP system, the transactional log was filled. We can see in <SID>.log:
Can't allocate space for object 'syslogs' in database '<SID>' because 'logsegment' segment is full/has no free extents. If you ran out of space in syslogs, dump the transaction log. Otherwise, use ALTER DATABASE to increase the size of the segment.
After this, we increased the transaction log (disk resize) and then executed ALTER DATABASE <SID> log on <LOGDEVICE> = '<size>'
While the ALTER was running, the log filesystem filled up (100%); after this, <SID>.log began to grow tremendously.
We stopped Sybase, and now when we try to start it the whole FF node goes down. The filesystem has free space (around 10 GB).
Could you help us?
Add: We think a possible solution could be to delete the transaction log, because we understand the failure is related to this log (maybe corrupted?)
Regards
====================
00:0008:00000:00009:2014/06/26 15:49:37.09 server Checkpoint process detected hardware error writing logical page '2854988', device 5, virtual page 6586976 for dbid 4, cache 'log cache'. It will sleep until write completes successfully.
00:0010:00000:00000:2014/06/26 15:49:37.10 kernel sddone: write error on virtual disk 5 block 6586976:
00:0010:00000:00000:2014/06/26 15:49:37.10 kernel sddone: No space left on device
00:0008:00000:00009:2014/06/26 15:49:37.10 server bufwritedes: write error detected - spid=9, ppage=2854988, bvirtpg=(device 5, page 6586976), db id=4
=======================
1 - check to make sure the filesystem that device #5 (vdevno=5) sits on is not full; make sure filesystem is large enough to hold the entire defined size of device #5; make sure no other processes are writing to said filesystem
2 - have your OS/disk admin(s) make sure the disk fragment(s) underlying device #5's filesystem isn't referenced by other filesystems and/or raw device definitions -
Log Reader Agent: transaction log file scan and failure to construct a replicated command
I encountered the following error message related to Log Reader job generated as part of transactional replication setup on publisher. As a result of this error, none of the transactions propagated from publisher to any of its subscribers.
Error Message
2008-02-12 13:06:57.765 Status: 4, code: 22043, text: 'The Log Reader Agent is scanning the transaction log for commands to be replicated. Approximately 24500000 log records have been scanned in pass # 1, 68847 of which were marked for replication, elapsed time 66018 (ms).'.
2008-02-12 13:06:57.843 Status: 0, code: 20011, text: 'The process could not execute 'sp_replcmds' on ServerName.'.
2008-02-12 13:06:57.843 Status: 0, code: 18805, text: 'The Log Reader Agent failed to construct a replicated command from log sequence number (LSN) {00065e22:0002e3d0:0006}. Back up the publication database and contact Customer Support Services.'.
2008-02-12 13:06:57.843 Status: 0, code: 22037, text: 'The process could not execute 'sp_replcmds' on 'ServerName'.'.
Replication agent job kept trying after specified intervals and kept failing with that message.
Investigation
I could clearly see there were transactions waiting to be delivered to subscribers from the following:
SELECT * FROM dbo.MSrepl_transactions -- 1162
SELECT * FROM dbo.MSrepl_commands -- 821922
The following steps were taken to further investigate the problem. They further confirmed how transactions were in queue waiting to be delivered to distribution database
-- Returns the commands for transactions marked for replication
EXEC sp_replcmds
-- Returns a result set of all the transactions in the publication database transaction log that are marked for replication but have not been marked as distributed.
EXEC sp_repltrans
-- Returns the commands for transactions marked for replication in readable format
EXEC sp_replshowcmds
Resolution
Taking a backup as suggested in the message wouldn't resolve the issue. None of the commands retrieved from sp_browsereplcmds for the LSN mentioned in the message had syntactic problems either.
exec sp_browsereplcmds @xact_seqno_start = '0x00065e220002e3d00006'
In a desperate attempt to resolve the problem, I decided to drop all subscriptions. To my surprise Log Reader kept failing with same error again. I thought having no subscription for publications log reader agent would have no reason to scan publisher's transaction log. But obviously I was wrong. Even adding new log reader using sp_addLogreader_agent after deleting the old one would not be any help. Restart of server couldn't do much good either.
EXEC sp_addlogreader_agent
@job_login = 'LoginName',
@job_password = 'Password',
@publisher_security_mode = 1;
When nothing else worked for me, I decided to give it a try to the following procedures reserved for troubleshooting replication
--Updates the record that identifies the last distributed transaction of the server
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
-- Flushes the article cache
EXEC sp_replflush
Bingo !
The Log Reader Agent managed to start successfully this time. I wish I could have used both commands before I decided to drop the subscriptions; it would have saved me the considerable effort and time spent re-doing them.
Question
Even though I managed to resolve the error and have replication functioning again, I think there might have been a better solution, and I would appreciate any feedback and your approach to resolving the problem.
Hi Hilary,
Will the below truncate the log file marked for replication, is there any data loss, when we execute this command, can you please help me understand, the internal working of this command.
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1 -
Restore ASE - Dump of open transaction-log required?
Hi experts,
I am still doing some restore tests.
What about the following case.
Last transaction log was done at 1 o'clock and next will be at 4 o'clock.
At 3 o'clock we detect that we have to restore to 2 o'clock.
So for this restore, I need the transaction log which isn't dumped yet.
My question is, do I also have to dump the current transaction log to a file for the restore procedure?
Or is there another way to include the current log file in the restore?
In other words, when will the log file first be touched?
After "online database" command?
If so, I can also do the restore using the original-logfile, right?
Kind regards
Christian,
You are right.
Let me tell you the practice I recommend:
1. Take a full backup daily during your off-business hours if you have the infrastructure (tape/disk or SAN) and the data is critical, whether production or development.
2. During business hours, take transaction log backups hourly or every half hour, maybe between 9-6 as per your time zone :)
3. This mostly helps you minimise tran log loss.
4. As you have the weekend reorg and update stats running, I prefer to take a full backup just before the start of production hours on Monday and keep it safe, so that the data is super clean and secure.
If there is any confusion, let me know and I will explain it more clearly in simpler words.
PS:One full backup per day is fine if you can preserve and retain for 7-10 days and delete it later point if you don't need it and don't have infrastructure and disk cost problems :P ;)
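On Christian's actual question, ASE can dump the not-yet-dumped "tail" of the log using the no_truncate option, even when the data portion of the database is damaged. A sketch of the restore sequence (paths and <SID> are placeholders):

```sql
-- 1. Save the tail of the log before restoring:
dump transaction <SID> to '/backup/<SID>_tail.trn' with no_truncate
go
-- 2. Load the last full dump, any earlier log dumps in order, then the tail:
load database <SID> from '/backup/<SID>_full.dmp'
go
load transaction <SID> from '/backup/<SID>_tail.trn'
go
-- 3. The database only becomes writable once it is brought online:
online database <SID>
go
```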
Cheers
Kiran K Adharapuram -
How to delete Transaction Logs in SQL database
Hi,
Can anyone explain to me the process of deleting the transaction logs in a SQL database?
Thanks
Sunil
Sunil,
Yes you can take online backup in MS SQL server.
The transaction log files contain information about all changes made to the database. The log files are necessary components of the database and may never be deleted. Why do you want to delete them?
If I am taking any backup, do I need to turn off the SAP server that is running at the moment, or can I take it online?
There are three main types of SQL Server Backup: Full Database Backup, Differential Database Backup and Transaction Log Backup. All these backups can be made when the database is online and do not require you to stop the SAP system.
Check below link for details
http://help.sap.com/erp2005_ehp_04/helpdata/EN/89/68807c8c984855a08b60f14b742ced/frameset.htm
Thanks
Sushil -
Hi
I have large database and i need to perform batch deleting without affecting the transaction log. So if I set the Recovery mode to Simple before deleting the transaction log will not grow ??
Thanks.Hi
I have large database and i need to perform batch deleting without affecting the transaction log. So if I set the Recovery mode to Simple before deleting the transaction log will not grow ??
Thanks.
You CANNOT delete records in SQL Server without information being written to the transaction log. Please note that everything in SQL Server is logged, and how the log is retained depends on the recovery model used. With simple recovery, logging is
almost the same as full; it is just that after a checkpoint the log is truncated, and also when the log file reaches 70% of its size. This can fail to happen only if some ongoing transaction is holding, or still requires, a VLF (virtual
log file).
So you made a good choice to delete in batches. Also have a look at lock escalation.
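A minimal sketch of deleting in batches (the table name and cutoff predicate are hypothetical):

```sql
-- Keep each transaction small; in SIMPLE recovery a checkpoint between
-- batches lets the log space be reused instead of growing.
WHILE 1 = 1
BEGIN
    DELETE TOP (100000) FROM dbo.BigTable
    WHERE CreatedDate < '20100101';

    IF @@ROWCOUNT = 0 BREAK;
    CHECKPOINT;
END
```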
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
SQL0964C The transaction log for the database is full
Hi,
I am planning to do a QAS refresh from the PRD system using the client export/import method. I have done the export in PRD, moved it to QAS, and then started the import.
DB Size:160gb
DB:DB2 9.7
os: windows 2008.
I am facing the SQL0964C "The transaction log for the database is full" issue during the client import. I raised an incident with SAP, and they replied to increase some parameters (LOGPRIMARY, LOGSECOND, LOGFILSIZ) temporarily and revert them after the import. Based on that, I increased them per the calculation below.
the filesystem size of /db2/<SID>/log_dir should be greater than LOGFILSIZ*4*(Sum of LOGPRIMARY+LOGSECONDARY) KB
From:
Log file size (4KB) (LOGFILSIZ) = 60000
Number of primary log files (LOGPRIMARY) = 50
Number of secondary log files (LOGSECOND) = 100
Total drive space required: 33GB
To:
Log file size (4KB) (LOGFILSIZ) = 70000
Number of primary log files (LOGPRIMARY) = 60
Number of secondary log files (LOGSECOND) = 120
Total drive space required: 47GB
But I am still facing the same issue. Please help me resolve this ASAP.
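For reference, the temporary parameter increases can be applied from the DB2 command line roughly as below (values taken from the "To:" section above; LOGFILSIZ and LOGPRIMARY changes only take effect after the database is deactivated and reactivated):

```sql
-- DB2 9.7 CLP, run as the instance owner (db2<sid>):
UPDATE DB CFG FOR <SID> USING LOGFILSIZ 70000
UPDATE DB CFG FOR <SID> USING LOGPRIMARY 60
UPDATE DB CFG FOR <SID> USING LOGSECOND 120
```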
Last error TP log details:
3 ETW674Xstart import of "R3TRTABUFAGLFLEX08" ...
4 ETW000 1 entry from FAGLFLEX08 (210) deleted.
4 ETW000 1 entry for FAGLFLEX08 inserted (210*).
4 ETW675 end import of "R3TRTABUFAGLFLEX08".
3 ETW674Xstart import of "R3TRTABUFAGLFLEXA" ...
4 ETW000 [ dev trc,00000] Fri Jun 27 02:20:21 2014 -774509399 65811.628079
4 ETW000 [ dev trc,00000] *** ERROR in DB6Execute[dbdb6.c, 4980] CON = 0 (BEGIN) 85 65811.628164
4 ETW000 [ dev trc,00000] &+ DbSlModifyDB6( SQLExecute ): [IBM][CLI Driver][DB2/NT64] SQL0964C The transaction log for the database is full.
4 ETW000 83 65811.628247
4 ETW000 [ dev trc,00000] &+ SQLSTATE=57011 row=1
4 ETW000 51 65811.628298
4 ETW000 [ dev trc,00000] &+
4 ETW000 67 65811.628365
4 ETW000 [ dev trc,00000] &+ DELETE FROM "FAGLFLEXA" WHERE "RCLNT" = ?
4 ETW000 62 65811.628427
4 ETW000 [ dev trc,00000] &+ cursor type=NO_HOLD, isolation=UR, cc_release=YES, optlevel=5, degree=1, op_type=8, reopt=0
4 ETW000 58 65811.628485
4 ETW000 [ dev trc,00000] &+
4 ETW000 53 65811.628538
4 ETW000 [ dev trc,00000] &+ Input SQLDA:
4 ETW000 52 65811.628590
4 ETW000 [ dev trc,00000] &+ 1 CT=WCHAR T=VARCHAR L=6 P=9 S=0
4 ETW000 49 65811.628639
4 ETW000 [ dev trc,00000] &+
4 ETW000 50 65811.628689
4 ETW000 [ dev trc,00000] &+ Input data:
4 ETW000 49 65811.628738
4 ETW000 [ dev trc,00000] &+ row 1: 1 WCHAR I=6 "210" 34 65811.628772
4 ETW000 [ dev trc,00000] &+
4 ETW000 51 65811.628823
4 ETW000 [ dev trc,00000] &+
4 ETW000 50 65811.628873
4 ETW000 [ dev trc,00000] *** ERROR in DB6Execute[dbdb6.c, 4980] (END) 27 65811.628900
4 ETW000 [ dbtran ,00000] ***LOG BY4=>sql error -964 performing DEL on table FAGLFLEXA
4 ETW000 3428 65811.632328
4 ETW000 [ dbtran ,00000] ***LOG BY0=>SQL0964C The transaction log for the database is full. SQLSTATE=57011 row=1
4 ETW000 46 65811.632374
4 ETW000 [ dev trc,00000] dbtran ERROR LOG (hdl_dbsl_error): DbSl 'DEL' 59 65811.632433
4 ETW000 RSLT: {dbsl=99, tran=1}
4 ETW000 FHDR: {tab='FAGLFLEXA', fcode=194, mode=2, bpb=0, dbcnt=0, crsr=0,
4 ETW000 hold=0, keep=0, xfer=0, pkg=0, upto=0, init:b=0,
4 ETW000 init:p=0000000000000000, init:#=0, wa:p=0X00000000020290C0, wa:#=10000}
4 ETW000 [ dev trc,00000] dbtran ERROR LOG (hdl_dbsl_error): DbSl 'DEL' 126 65811.632559
4 ETW000 STMT: {stmt:#=0, bndfld:#=1, prop=0, distinct=0,
4 ETW000 fld:#=0, alias:p=0000000000000000, fupd:#=0, tab:#=1, where:#=2,
4 ETW000 groupby:#=0, having:#=0, order:#=0, primary=0, hint:#=0}
4 ETW000 CRSR: {tab='', id=0, hold=0, prop=0, max.in@0=1, fae:blk=0,
4 ETW000 con:id=0, con:vndr=7, val=2,
4 ETW000 key:#=3, xfer=0, xin:#=0, row:#=0, upto=0, wa:p=0X00000001421A3000}
2EETW125 SQL error "-964" during "-964" access: "SQL0964C The transaction log for the database is full. SQLSTATE=57011 row=1"
4 ETW690 COMMIT "14208" "-1"
4 ETW000 [ dev trc,00000] *** ERROR in DB6Execute[dbdb6.c, 4980] CON = 0 (BEGIN) 16208 65811.648767
4 ETW000 [ dev trc,00000] &+ DbSlModifyDB6( SQLExecute ): [IBM][CLI Driver][DB2/NT64] SQL0964C The transaction log for the database is full.
4 ETW000 75 65811.648842
4 ETW000 [ dev trc,00000] &+ SQLSTATE=57011 row=1
4 ETW000 52 65811.648894
4 ETW000 [ dev trc,00000] &+
4 ETW000 51 65811.648945
4 ETW000 [ dev trc,00000] &+ INSERT INTO DDLOG (SYSTEMID, TIMESTAMP, NBLENGTH, NOTEBOOK) VALUES ( ? , CHAR( CURRENT TIMESTAMP - CURRENT TIME
4 ETW000 50 65811.648995
4 ETW000 [ dev trc,00000] &+ ZONE ), ?, ? )
4 ETW000 49 65811.649044
4 ETW000 [ dev trc,00000] &+ cursor type=NO_HOLD, isolation=UR, cc_release=YES, optlevel=5, degree=1, op_type=15, reopt=0
4 ETW000 55 65811.649099
4 ETW000 [ dev trc,00000] &+
4 ETW000 49 65811.649148
4 ETW000 [ dev trc,00000] &+ Input SQLDA:
4 ETW000 50 65811.649198
4 ETW000 [ dev trc,00000] &+ 1 CT=WCHAR T=VARCHAR L=44 P=66 S=0
4 ETW000 47 65811.649245
4 ETW000 [ dev trc,00000] &+ 2 CT=SHORT T=SMALLINT L=2 P=2 S=0
4 ETW000 48 65811.649293
4 ETW000 [ dev trc,00000] &+ 3 CT=BINARY T=VARBINARY L=32000 P=32000 S=0
4 ETW000 47 65811.649340
4 ETW000 [ dev trc,00000] &+
4 ETW000 50 65811.649390
4 ETW000 [ dev trc,00000] &+ Input data:
4 ETW000 49 65811.649439
4 ETW000 [ dev trc,00000] &+ row 1: 1 WCHAR I=14 "R3trans" 32 65811.649471
4 ETW000 [ dev trc,00000] &+ 2 SHORT I=2 12744 32 65811.649503
4 ETW000 [ dev trc,00000] &+ 3 BINARY I=12744 00600306003200300030003900300033003300310031003300320036003400390000...
4 ETW000 64 65811.649567
4 ETW000 [ dev trc,00000] &+
4 ETW000 52 65811.649619
4 ETW000 [ dev trc,00000] &+
4 ETW000 51 65811.649670
4 ETW000 [ dev trc,00000] *** ERROR in DB6Execute[dbdb6.c, 4980] (END) 28 65811.649698
4 ETW000 [ dbsyntsp,00000] ***LOG BY4=>sql error -964 performing SEL on table DDLOG 36 65811.649734
4 ETW000 [ dbsyntsp,00000] ***LOG BY0=>SQL0964C The transaction log for the database is full. SQLSTATE=57011 row=1
4 ETW000 46 65811.649780
4 ETW000 [ dbsync ,00000] ***LOG BZY=>unexpected return code 2 calling ins_ddlog 37 65811.649817
4 ETW000 [ dev trc,00000] db_syflush (TRUE) failed 26 65811.649843
4 ETW000 [ dev trc,00000] db_con_commit received error 1024 in before-commit action, returning 8
4 ETW000 57 65811.649900
4 ETW000 [ dbeh.c ,00000] *** ERROR => missing return code handler 1974 65811.651874
4 ETW000 caller does not handle code 1024 from dblink#5[321]
4 ETW000 ==> calling sap_dext to abort transaction
2EETW000 sap_dext called with msgnr "900":
2EETW125 SQL error "-964" during "-964" access: "SQL0964C The transaction log for the database is full. SQLSTATE=57011 row=1"
1 ETP154 MAIN IMPORT
1 ETP110 end date and time : "20140627022021"
1 ETP111 exit code : "12"
1 ETP199 ######################################
Regards,
Rajesh
Hi Babu,
I believe a restart of your system is needed if LOGPRIMARY is changed. If so, then increase LOGPRIMARY to 120 and LOGSECOND to 80, provided the size and space are enough.
Note 1293475 - DB6: Transaction Log Full
Note 1308895 - DB6: File System for Transaction Log is Full
Note 495297 - DB6: Monitoring transaction log
Regards,
Divyanshu -
Oracle dump transaction log like Sybase
Hello
In Sybase we use the dump transaction log command for the needed databases; it dumps whatever is in the transaction log of that DB to the file system. After that, the dump file is copied over to another database server and loaded into the other database, like a form of replication.
Does oracle have something similar to this?
Many thanks
The closest you'll get to what you are looking for is a Data Guard setup. Data Guard will manage the shipment of the logs on its own and keep things in sync.
There is another option of Oracle Advanced Replication but you will need to review if that really meets your needs. The following is simply an overview of it:
http://www.orafaq.com/wiki/Advanced_Replication_FAQ
Or GoldenGate:
http://www.oracle.com/technetwork/middleware/goldengate/overview/index.html
-
Hi,
I have three T-log files in my database, Now I want to delete 2 Transaction log files.
Can I do the below action:
1. dbcc shrinkfile(log1, truncateonly)
2. dbcc shrinkfile(log2, truncateonly)
3. Then remove the files using a command or SSMS.
Regards
Hi Satheesh,
What about this:
Can I use the below procedure:
dbcc shrinkfile(LOG2,emptyfile)
dbcc shrinkfile (LOG3,emptyfile)
alter database PRT remove file LOG2
alter database PRT remove file LOG3
Note: I have already LOG1 as my primary logfile existing there, I want to remove only secondary logfiles
Regards -
Every time I get this error, at different points of testing inserts and deletions on my table:
The transaction log for database 'mydatabase' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
Why do I keep getting this? All I'm doing is deleting several hundred thousand records and inserting them into a couple of tables. I shouldn't have to truncate my log every time, or my application bombs out!
sys.databases only gives me this info for log_reuse_wait_desc which does nothing for me:
LOG_BACKUP
sp_helpdb BizTalkDTADb
ALTER DATABASE BiztalkDTADb
SET RECOVERY SIMPLE;
GO
DBCC SHRINKFILE (BiztalkDTADb_log, 1);
GO
sp_helpdb BizTalkDTADb
GO
ALTER DATABASE BiztalkDTADb
SET RECOVERY FULL
GO -
Unable to delete records as the transaction log file is full
My disk is running out of space and as a result I decided to free some space by deleting old data. I tried to delete 100,000 by 100,000 as there are 240 million records to be deleted. But I am unable to delete them at once and shrinking the database doesn't
free much space. This is the error im getting at times.
The transaction log for database 'TEST_ARCHIVE' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
How can I overcome this situation and delete all the old records? Please advise.
mayooran99
In order to delete, SQL Server needs to write the information to the log file, and you do not have room for those rows in the log file. You might succeed by deleting fewer rows each time -> then backing up the log file each time -> then shrinking the
log file... but this is not the way that I would choose.
The best option is probably to add another disk (a simple disk does not cost a lot) and move the log file there permanently. It will help the database's performance as well (it is highly recommended not to put the log file on the same disk as the data file in most cases).
If you can't add a new disk permanently, then add one temporarily. Then add a file to the database on that disk -> create a new table on that disk -> move all the data that you do
not want to delete to the new table -> truncate the current table -> bring back the data from the new table -> drop the new table and the new file to release the temporary disk.
Are you using full mode or simple recovery mode ?
* in full mode you have to backup the log file if you want to shrink it
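A sketch of that full-recovery sequence (the backup path, logical file name, and target size are illustrative, not taken from the poster's setup):

```sql
-- In FULL recovery the log can only be truncated after a log backup:
BACKUP LOG [TEST_ARCHIVE] TO DISK = N'S:\Backups\TEST_ARCHIVE_log.trn';
-- Then the now-inactive VLFs can be released:
DBCC SHRINKFILE (TEST_ARCHIVE_log, 1024);  -- logical file name; target size in MB
```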
Ronen Ariely
[Personal Site] [Blog] [Facebook]