Backup size of transaction log
Hi,
I want to check the backup size of transaction logs.
Please help.
Regards,
Bilal
Try the query below:
SELECT
CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server,
msdb.dbo.backupset.database_name,
CASE msdb.dbo.backupset.type
WHEN 'D' THEN 'Database'
WHEN 'L' THEN 'Log'
END AS backup_type,
msdb.dbo.backupset.backup_size
FROM msdb.dbo.backupmediafamily
INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id
WHERE (CONVERT(datetime, msdb.dbo.backupset.backup_start_date, 102) >= GETDATE() - 7)
ORDER BY
msdb.dbo.backupset.database_name,
msdb.dbo.backupset.backup_finish_date
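A note on units: backup_size in msdb.dbo.backupset is reported in bytes. A variant of the query (a sketch, restricted to log backups only and converted to MB) could be:

```sql
-- Sketch: log backup sizes in MB for the last 7 days (backup_size is in bytes)
SELECT
    bs.database_name,
    bs.backup_start_date,
    CONVERT(DECIMAL(18, 2), bs.backup_size / 1048576.0) AS backup_size_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'L'                           -- 'L' = transaction log backup
  AND bs.backup_start_date >= GETDATE() - 7
ORDER BY bs.database_name, bs.backup_start_date;
```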
Similar Messages
-
Big backup file of transaction log
We run a transaction log backup of database A every 15 minutes.
The transaction log backup file is usually less than 1 GB.
All of a sudden, the transaction log backup files are consistently more than 20 GB.
What could have happened? We need to fix this because there is no disk space for continuous 20+ GB transaction log backups.
You need to analyse why your t-log has started growing all of a sudden. You probably have a long-running transaction (index maintenance, or a big batch delete or update).
Also analyse the output of the query below:
SELECT log_reuse_wait_desc FROM sys.databases WHERE name = 'DB_NAME' -- replace with your database name
You can also refer to the article below for deeper analysis:
A transaction log grows unexpectedly or becomes full in SQL Server
Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
Praveen Dsa | MCITP - Database Administrator 2008 |
My Blog | My Page -
Database restore from backup: missing some transaction log backup files. How to restore?
One database with the full recovery model:
It ran a full backup at 12:00 AM on 1/1, one time.
Then transaction log backups every 3 hours: 3:00 AM, 6:00 AM, 9:00 AM, 12:00 PM, 3:00 PM, 6:00 PM, 9:00 PM, 12:00 AM, ...
If we can't find the 3:00 AM, 6:00 AM, and 9:00 AM transaction log backup files, could we still restore to 6:00 AM? We still have all transaction log backup files after 12:00 PM.
Thanks
No. Log backups are incremental and form a chain. If any log backup is missing, you cannot restore to a point after the backup that is lost or damaged. For example, if you are missing the 3 AM log backup, then even if you have the 6 AM, 9 AM, and 12 PM log backups, they are of no use.
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers -
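To make the chain requirement concrete, a restore sequence looks like this (a sketch with hypothetical file names). Every log backup must be applied in order; a single missing file breaks the chain at that point:

```sql
RESTORE DATABASE MyDb FROM DISK = N'C:\backup\MyDb_full.bak' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = N'C:\backup\MyDb_log_0300.trn' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = N'C:\backup\MyDb_log_0600.trn' WITH NORECOVERY;
-- ...continue with each subsequent log backup in sequence...
RESTORE DATABASE MyDb WITH RECOVERY;  -- bring the database online
```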
Backup and restore full and transaction log in nonrecovery mode failed due to LSN
In SQL 2012 SP1 Enterprise, when I take a full backup followed immediately by a transaction log backup, the transaction log backup starts with an earlier LSN than the ending LSN of the full backup. As a result, I cannot restore
the transaction log backup after the full backup, both with NORECOVERY, on another machine. I was trying to bring the two machines in sync for mirroring purposes. An example is as follows.
full backup: first 1121000022679500037, last 1121000022681200001
transaction log: first 1121000022679000001, last 1121000022682000001
--- SQL Scripts used
BACKUP DATABASE xxx TO DISK = xxx WITH FORMAT
go
backup log xxx to disk = xxx
--- When restore, I tried the
restore log BarraOneArchive from disk=xxx WITH STOPATMARK = 'lsn:1121000022682000001', NORECOVERY
Also tried STOPBEFOREMARK; that did not work either. It complained that the LSN was too early to apply to the database.
I think that what I am saying is correct. In sync mirroring (I was not talking about a witness), if the network goes down for a few minutes or longer, maybe 20 minutes (more than that is a rare scenario; the IS team has a backup for that), the log on the principal will continue to grow, because transactions cannot commit while the connection to the mirror is gone and no commit acknowledgement is coming from the mirror. After the network comes back online, the mirror will replay all the logs and soon catch up with the principal.
Books Online says this: "This is achieved by waiting to commit a transaction on the principal database, until the principal server receives a message from the mirror server stating that it has hardened the transaction's log to disk." That is, if the remote server went away in a way the principal did not notice, transactions would not commit and the principal would be stalled.
In practice it does not work that way. When a timeout expires, the principal considers the mirror to be gone, and Books Online says about this case: "If the mirror server instance goes down, the principal server instance is unaffected and runs exposed (that is, without mirroring the data)." In this section, BOL does not discuss transaction logs, but it seems reasonable that the log records are retained so that the mirror can resync once it is back.
In async mirroring, the transaction log is sent to the mirror, but the principal does not wait for an acknowledgement from the mirror before committing the transaction.
But I would expect that the principal still gets an acknowledgement that the log records have been consumed, or else your mirroring could start failing if you back up the log too frequently. That is, I would not expect any major difference between sync and async mirroring in this regard. (Where it matters is when you fail over: with async mirroring, you are prepared to accept some data loss in case of a failover.)
These are theories that could be fairly easily tested if you have a mirroring environment set up in a lab, but I don't.
Erland Sommarskog, SQL Server MVP, [email protected] -
Maxdb restore - transaction log backup
Hi,
Is it possible to restore the db backup without the transaction log backup? I know this is kinda lame to ask this but just wondering if this is possible and how it can be done.
Database is MaxDB and OS is Linux.
Thanks in advance!
Hi,
the restore does not depend on the database state in which the data backup was made.
Instead, you can recover any complete data backup without a log recovery. You can do this using the dbmcli command DB_ACTIVATE RECOVER <medium_name> or the corresponding DBMGUI actions.
After the recovery you simply need to restart the database.
Kind regards, Martin -
SQL Server Database - Transaction logs growing largely with Simple Recovery model
Hello,
There is SQL server database on client side in production environment with huge transaction logs.
Requirement :
1. Take database backup
2. Transaction log backup is not required. - so it is set to Simple recovery model.
I am aware that the simple recovery model also writes to the transaction log, just as the full recovery model does, as explained at the link below.
http://realsqlguy.com/origins-no-simple-mode-doesnt-disable-the-transaction-log/
Last week, this transaction log became of 1TB size and blocked everything on the database server.
How to over come with this situation?
PS : There are huge bulk uploads to the database tables.
Current Configuration :
1. Simple Recovery model
2. Target Recovery time : 3 Sec
3. Recovery interval : 0
4. No SQL Agent job schedule to shrink database.
5. No other checkpoints created except automatic ones.
Can anyone please guide me to have correct configuration on SQL server for client's production environment?
Please let me know if any other details required from server.
Thank you,
Mittal.
@dave_gona,
Thank you for your response.
Can you please explain this to me in more detail?
What do you mean by one batch?
1. The number of rows to be inserted at a time?
2. Or does the size of data in one cell matter here?
In my case, I am clubbing together all the data in one XML (on the C# side) and inserting it as one record. The data is large in size, but only one record is inserted.
Is it a good idea to shrink the transaction log periodically, as it does not happen by itself in the simple recovery model?
Hi Mittal,
Shrinking is a bad practice; you should not shrink log files regularly. In rare cases, if you need to recover space, you may do it.
Issue manual checkpoints during the bulk insert operation.
I cannot say upfront what the batch size should be, but you can start with 1/4 of what you are currently inserting.
Most importantly, what does the query below return for the database?
select log_reuse_wait_desc from sys.databases where name='db_name'
The value it returns is what is stopping the log from being cleared and reused.
What version and edition of SQL Server are we talking about? What is the output of
select @@version
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
How to re-read a 2 GB transaction log archived
I've archived the transaction log of my R/3 system with the backup utility because we had space problems. Now the backed-up file has a size of 2 GB. I need to read it, but I can't open it because it's too big. Is there any utility or way to read such a big log file? Is there any particular function among the database tools I can use?
Thanks
Sonya
Hello,
you cannot check the free space from outside SQL Server, as you can't open the actual log file; it is locked by SQL Server. If you want information about free log space, you can run the SQL command:
dbcc sqlperf(logspace)
If you want to get the information within a bat file you can use the osql utillity from SQL Server:
@echo off
osql -E -Q"dbcc sqlperf(logspace)"
The -E parameter connects you to the server with a trusted Windows connection. If you have a named instance, you have to specify the -S<Instancename> parameter as well. See SQL Server Books Online for the complete syntax of the osql.exe utility.
Best regards
Clas -
Hi,
I found a sql server database with a transaction log file of 65 GB.
The database is configured with the recovery model option = full.
Also, I noticed than since the database exist, they only took database backup.
No transaction log backup were executed.
Now, the "65 GB transaction log file" use more than 70% of the disk space.
Which scenario do you recommend?
1- Backup the database, backup the transaction log to a new disk, shrink the transaction log file, schedule transaction log backup each hour.
2- Backup the database, put the recovery model option= simple, shrink the transaction log file, Backup the database.
Does the " 65 GB file shrink" operation would have impact on my database users ?
The sql server version is 2008 sp2 (10.0.4000)
regards
D
I've read the other posts and I'm at the position of: it really doesn't matter.
You've not needed point-in-time restore abilities, up to and including this date and time, since inception. Since a full database backup contains all of the log needed to bring the database into a consistent state, doing a full backup and then a log backup is redundant and just takes up space.
For the fastest option I would personally do the following:
1. Take a full database backup
2. Set the database recovery model to Simple
3. Manually issue two checkpoints for good measure or check to make sure the current VLF(active) is near the beginning of the log file
4. Shrink the log using the truncate option to lop off the end of the log
5. Manually re-size the log based on usage needed
6. Set the recovery model to full
7. Take a differential database backup to bridge the log gap
The total time that will take is really just the full database backup and the expanding of the log file. The shrink should be close to instantaneous since you're just truncating the end, and the differential backup should be fairly quick as well. If you don't
need the full recovery model, leave it in simple, reset the log size (through multiple grows if needed), and take a new full backup for safekeeping.
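The steps above can be sketched in T-SQL roughly as follows (hypothetical database and file names; the target log size in step 5 depends on your actual usage):

```sql
BACKUP DATABASE MyDb TO DISK = N'C:\backup\MyDb_full.bak';      -- 1. full backup
ALTER DATABASE MyDb SET RECOVERY SIMPLE;                        -- 2. simple recovery
CHECKPOINT;                                                     -- 3. two checkpoints
CHECKPOINT;                                                     --    for good measure
DBCC SHRINKFILE (MyDb_log, TRUNCATEONLY);                       -- 4. lop off the end of the log
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, SIZE = 8GB);  -- 5. re-size based on need
ALTER DATABASE MyDb SET RECOVERY FULL;                          -- 6. back to full recovery
BACKUP DATABASE MyDb TO DISK = N'C:\backup\MyDb_diff.bak'
    WITH DIFFERENTIAL;                                          -- 7. bridge the log gap
```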
Sean Gallardy | Blog |
Twitter -
Client Deletion, Transaction log getting full.
Hi Gurus,
We are trying to delete a client by running:
clientremove
client = 200 (*200 being the client we want to remove)
select *
The transaction log disk space allocated is 50 GB; it is getting full (in simple mode) and the client deletion never completes. The size of the table it is accessing is 86 GB, and I think client 200 occupies around 40-45 GB of it. Client 200 has 15.5 million rows in the table.
Am I giving the proper command? Is there any explicit commit I can include, or any workaround to delete the client without hammering the log file?
Thanks guys
Edited by: SAP_SQLDBA on Jan 22, 2010 6:51 PM
Hi,
Backup the active transaction log file and Shrink the file directly.
Please refer the following SAP Notes to get more information.
[ Note 625546 - Size of transaction log file is too big|https://websmp130.sap-ag.de/sap%28bD1lbiZjPTAwMQ==%29/bc/bsp/spn/sapnotes/index2.htm?numm=625546]
[ Note 421644 - SQL error 9002: The transaction log is full|https://websmp130.sap-ag.de/sap%28bD1lbiZjPTAwMQ==%29/bc/bsp/spn/sapnotes/index2.htm?numm=421644]
Which version of SQL Server are you using? What SP level?
Frequently perform transaction log backups (BACKUP TRANS) to remove inactive space within the transaction log files.
Please refer [Note 307911 - Transaction Log Filling Up in SQL Server 7.0|https://websmp130.sap-ag.de/sap%28bD1lbiZjPTAwMQ==%29/bc/bsp/spn/sapnotes/index2.htm?numm=307911] to get more information about the reasons for such kind of situation.
Regards,
Bhavik G. Shroff -
Transaction logs after large mailbox archive?
Hi all,
I've recently run a large mailbox archive on our mailbox database and I'm concerned about the transaction log files that will be produced.
Some info: We run a single Exchange server on Windows Server 2008, on a single hard disk. The system is run on VMWare with a full Exchange-aware backup run every night. Database file is currently 194GB with about 90GB whitespace.
I archived 20GB worth of email from a mailbox. My problem is that the hard disk with the database and log files on it only has 12GB of free space, so when the Recoverable Items folder is cleared 2 weeks later, is there going to be 20GB of transaction logs
with nowhere to go? Will I have to organise some additional storage to give the log files some room?
Appreciate any help.
Hi,
I notice there is only 12 GB of free disk space on your mailbox server; that may be too small.
Exchange will not generate many more transaction logs when the Recoverable Items are deleted, but it also does not delete previous transaction logs. Meanwhile, Exchange generates new logs as mail flows and messages are moved to the archive database, so the transaction logs only grow. Therefore, I recommend adding an additional disk, or running scheduled full backups of your database to truncate the no-longer-required logs.
Here’s the article about Mailbox Server Storage Design, for your reference:
https://technet.microsoft.com/en-us/library/dd346703(v=exchg.141).aspx
Best Regards,
Allen Wang -
Dear All,
There have been issues in the past where the transaction log file has grown so big that it reached the size limit of the drive. I would like to know the answers to the following, please:
1. To resolve the space issue, is the correct way to first take a backup of the transactional log then shrink the transactional log file?
2. What would be the recommended auto growth size, for example if I have a DB which is 1060 GB?
3. At the moment, the transactional log backup is done every 1 hour, but I'm not sure if it should be taken more regularly?
4. How often should the update stats job run, please?
Thank you in advance!
Hi,
My answers might be very similar to what others have already given, but I hope they add something more.
1. To resolve the space issue, is the correct way to first take a backup of the transactional log then shrink the transactional log file?
--> If the database recovery model is full or bulk-logged, then t-log backups will help; if that doesn't help, try to increase the frequency of log backups. You can also refer to:
Factors That Can Delay Log Truncation
2. What would be the recommended auto growth size, for example if I have a DB which is 1060 GB?
Auto-growth for a very large DB is crucial: if it is too high it can cause large active VLFs, and if too low it can cause fragmentation. In your case your priority is to control space utilization.
I suggest you keep the auto-growth small, and it must be specified as a size, not a percentage.
/*******Auto grow formula for log file**********/
Auto grow less than 64MB = 4 VLFs
Autogrow of 64MB and less than 1GB = 8 VLFs
Autogrow of 1GB and larger = 16 VLFs
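The formula above (as quoted; it applies to SQL Server versions before 2014, which changed the VLF-creation algorithm) can be checked with a quick expression, e.g.:

```sql
-- Sketch: VLFs created per auto-growth, per the formula quoted above
SELECT g.growth_mb,
       CASE WHEN g.growth_mb < 64   THEN 4
            WHEN g.growth_mb < 1024 THEN 8
            ELSE 16
       END AS vlfs_per_growth
FROM (VALUES (32), (64), (512), (1024)) AS g(growth_mb);
```

For example, growing a log in 512 MB increments creates 8 VLFs per growth under this formula.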
3. At the moment, the transactional log backup is done every 1 hour, but I'm not sure if it should be taken more regularly?
---> If the query below returns LOG_BACKUP for the respective database, then yes, you can increase the log backup frequency. But if it returns some other factor, please check the link mentioned above.
"select name as [database], log_reuse_wait, log_reuse_wait_desc from sys.databases"
4. How often should the update stats job run, please?
This totally depends on the amount of DML you are performing. You can enable auto-update statistics, and weekly you can update statistics with a full scan.
Thanks Saurabh Sinha
http://saurabhsinhainblogs.blogspot.in/
Please click the Mark as answer button and vote as helpful
if this reply solves your problem -
Transaction Log Truncate not working on Sql Server 2012 High Availability Groups
Hi Everyone
Firstly I have tried to search the forum for similar issues but can't seem to find any that match our situation.
We have a SQL Server 2012 High Availability Group with 2 Nodes
Node 1 = Primary
Node 2 = Secondary
Backup Schedule as follows
Full Database Backup @ 00:00
Transaction Log Backup every 30 minutes from 00:30:00 till 23:59:59.
These backups are run by Maintenance Jobs, but we have also tried doing direct backups in SSMS using Backup Database and Backup Log commands.
Before we configured the High Availability group the transaction log backups worked fine.
After we configured the High Availability group we performed a Full Backup and the T-Log schedule did the T-Log backup. The 1st T-log backup truncated the log (Used space Decreased) as expected.
However subsequent T-Log backups do not truncate the T-Log.
This happens both in our acceptance and Live environments. This also happens when running the backups as a Backup operator and sysadmin, this does not seem to be a permissions issue at all.
We have tried running the Backup on the Primary and Secondary Replica.
What about High Availability Groups could stop transaction log backups from truncating the log?
Thanks
James
Hi Sean,
Thank you for your reply. Please see the output of the sys.databases query below, and some others which you may find useful.
Query: select database_id,recovery_model_desc, log_reuse_wait, log_reuse_wait_desc from sys.databases
where database_id = 5
Result:
database_id  recovery_model_desc  log_reuse_wait  log_reuse_wait_desc
5            FULL                 0               NOTHING
I also ran the following
select database_id, truncation_lsn, last_received_lsn, last_commit_lsn, last_hardened_lsn, last_redone_lsn,*
from sys.dm_hadr_database_replica_states
go
The full result set has around 40 columns; trimmed to the key ones, it showed both replicas synchronized and healthy, with matching truncation LSNs:

database_id  is_local  synchronization_state_desc  synchronization_health_desc  truncation_lsn          last_hardened_lsn       last_commit_lsn
5            0         SYNCHRONIZED                HEALTHY                      1231833000417170000000  1231833000418890000000  1231833000418880000000
5            1         SYNCHRONIZED                HEALTHY                      1231833000417170000000  1231833000418890000000  1231833000418880000000
And
dbcc loginfo
go
RecoveryUnitId
FileId
FileSize
StartOffset
FSeqNo
Status
Parity
CreateLSN
0
2
458752
8192
1231828
0
128
0
0
2
458752
466944
1231829
0
128
0
0
2
458752
925696
1231830
0
128
0
0
2
712704
1384448
1231831
0
128
0
0
2
19398656
2097152
1231832
0
128
1229654000000040000000
0
2
10199171072
21495808
1231833
2
128
1229656000000010000000
0
2
10199171072
10220666880
0
0
64
1229656000000010000000
0
2
10199171072
20419837952
1231827
0
64
1229656000000010000000
0
2
10199171072
30619009024
0
0
128
1229656000000010000000
0
2
10199171072
40818180096
0
0
128
1229656000000010000000
0
2
10199171072
51017351168
0
0
128
1229656000000010000000
0
2
10199171072
61216522240
0
0
128
1229656000000010000000
0
2
10199171072
71415693312
0
0
128
1229656000000010000000
0
2
10199171072
81614864384
0
0
128
1229656000000010000000
0
2
536870912
91814035456
0
0
64
1229989001661260000000
0
2
536870912
92350906368
0
0
64
1229989001661260000000
0
2
536870912
92887777280
0
0
64
1229989001661260000000
0
2
536870912
93424648192
0
0
64
1229989001661260000000
0
2
536870912
93961519104
0
0
64
1229989001661260000000
0
2
536870912
94498390016
0
0
64
1229989001661260000000
0
2
536870912
95035260928
0
0
64
1229989001661260000000
0
2
536870912
95572131840
0
0
64
1229989001661260000000
0
2
536870912
96109002752
0
0
64
1229989001661260000000
0
2
536870912
96645873664
0
0
64
1229989001661260000000
0
2
536870912
97182744576
0
0
64
1229989001661260000000
0
2
536870912
97719615488
0
0
64
1229989001661260000000
0
2
536870912
98256486400
0
0
64
1229989001661260000000
0
2
536870912
98793357312
0
0
64
1229989001661260000000
0
2
536870912
99330228224
0
0
64
1229989001661260000000
0
2
536870912
99867099136
0
0
64
1229989001661260000000
0
2
536870912
100403970048
0
0
64
1229995000058520000000
0
2
536870912
100940840960
0
0
64
1229995000058520000000
0
2
536870912
101477711872
0
0
64
1229995000058520000000
0
2
536870912
102014582784
0
0
64
1229995000058520000000
0
2
536870912
102551453696
0
0
64
1229995000058520000000
0
2
536870912
103088324608
0
0
64
1229995000058520000000
0
2
536870912
103625195520
0
0
64
1229995000058520000000
0
2
536870912
104162066432
0
0
64
1229995000058520000000
0
2
536870912
104698937344
0
0
64
1229995000058520000000
0
2
536870912
105235808256
0
0
64
1229995000058520000000
0
2
536870912
105772679168
0
0
64
1229995000058520000000
0
2
536870912
106309550080
0
0
64
1229995000058520000000
0
2
536870912
106846420992
0
0
64
1229995000058520000000
0
2
536870912
107383291904
0
0
64
1229995000058520000000
0
2
536870912
107920162816
0
0
64
1229995000058520000000
0
2
536870912
108457033728
0
0
64
1229995000058520000000
0
2
536870912
108993904640
0
0
64
1230004000028400000000
0
2
536870912
109530775552
0
0
64
1230004000028400000000
0
2
536870912
110067646464
0
0
64
1230004000028400000000
0
2
536870912
110604517376
0
0
64
1230004000028400000000
0
2
536870912
111141388288
0
0
64
1230004000028400000000
0
2
536870912
111678259200
0
0
64
1230004000028400000000
0
2
536870912
112215130112
0
0
64
1230004000028400000000
0
2
536870912
112752001024
0
0
64
1230004000028400000000
0
2
536870912
113288871936
0
0
64
1230004000028400000000
0
2
536870912
113825742848
0
0
64
1230004000028400000000
0
2
536870912
114362613760
0
0
64
1230004000028400000000
0
2
536870912
114899484672
0
0
64
1230004000028400000000
0
2
536870912
115436355584
0
0
64
1230004000028400000000
0
2
536870912
115973226496
0
0
64
1230004000028400000000
0
2
536870912
116510097408
0
0
64
1230004000028400000000
0
2
536870912
117046968320
0
0
64
1230004000028400000000
0
2
536870912
117583839232
0
0
64
1230012000103140000000
0
2
536870912
118120710144
0
0
64
1230012000103140000000
0
2
536870912
118657581056
0
0
64
1230012000103140000000
0
2
536870912
119194451968
0
0
64
1230012000103140000000
0
2
536870912
119731322880
0
0
64
1230012000103140000000
0
2
536870912
120268193792
0
0
64
1230012000103140000000
0
2
536870912
120805064704
0
0
64
1230012000103140000000
0
2
536870912
121341935616
0
0
64
1230012000103140000000
0
2
536870912
121878806528
0
0
64
1230012000103140000000
0
2
536870912
122415677440
0
0
64
1230012000103140000000
0
2
536870912
122952548352
0
0
64
1230012000103140000000
0
2
536870912
123489419264
0
0
64
1230012000103140000000
0
2
536870912
124026290176
0
0
64
1230012000103140000000
0
2
536870912
124563161088
0
0
64
1230012000103140000000
0
2
536870912
125100032000
0
0
64
1230012000103140000000
0
2
536870912
125636902912
0
0
64
1230012000103140000000
0
2
536870912
126173773824
0
0
128
1230338000973820000000
0
2
536870912
126710644736
0
0
128
1230338000973820000000
0
2
536870912
127247515648
0
0
128
1230338000973820000000
0
2
536870912
127784386560
0
0
128
1230338000973820000000
0
2
536870912
128321257472
0
0
128
1230338000973820000000
0
2
536870912
128858128384
0
0
128
1230338000973820000000
0
2
536870912
129394999296
0
0
128
1230338000973820000000
0
2
536870912
129931870208
0
0
128
1230338000973820000000
0
2
536870912
130468741120
0
0
128
1230338000973820000000
0
2
536870912
131005612032
0
0
128
1230338000973820000000
0
2
536870912
131542482944
0
0
128
1230338000973820000000
0
2
536870912
132079353856
0
0
128
1230338000973820000000
0
2
536870912
132616224768
0
0
128
1230338000973820000000
0
2
536870912
133153095680
0
0
128
1230338000973820000000
0
2
536870912
133689966592
0
0
128
1230338000973820000000
0
2
536870912
134226837504
0
0
128
1230338000973820000000
0
2
536870912
134763708416
0
0
128
1230338001901440000000
0
2
536870912
135300579328
0
0
128
1230338001901440000000
0
2
536870912
135837450240
0
0
128
1230338001901440000000
0
2
536870912
136374321152
0
0
128
1230338001901440000000
0
2
536870912
136911192064
0
0
128
1230338001901440000000
0
2
536870912
137448062976
0
0
128
1230338001901440000000
0
2
536870912
137984933888
0
0
128
1230338001901440000000
0
2
536870912
138521804800
0
0
128
1230338001901440000000
0
2
536870912
139058675712
0
0
128
1230338001901440000000
0
2
536870912
139595546624
0
0
128
1230338001901440000000
0
2
536870912
140132417536
0
0
128
1230338001901440000000
0
2
536870912
140669288448
0
0
128
1230338001901440000000
0
2
536870912
141206159360
0
0
128
1230338001901440000000
0
2
536870912
141743030272
0
0
128
1230338001901440000000
0
2
536870912
142279901184
0
0
128
1230338001901440000000
0
2
536870912
142816772096
0
0
128
1230338001901440000000
0
2
536870912
143353643008
0
0
128
1230346000103040000000
0
2
536870912
143890513920
0
0
128
1230346000103040000000
0
2
536870912
144427384832
0
0
128
1230346000103040000000
0
2
536870912
144964255744
0
0
128
1230346000103040000000
0
2
536870912
145501126656
0
0
128
1230346000103040000000
0
2
536870912
146037997568
0
0
128
1230346000103040000000
0
2
536870912
146574868480
0
0
128
1230346000103040000000
0
2
536870912
147111739392
0
0
128
1230346000103040000000
0
2
536870912
147648610304
0
0
128
1230346000103040000000
0
2
536870912
148185481216
0
0
128
1230346000103040000000
0
2
536870912
148722352128
0
0
128
1230346000103040000000
0
2
536870912
149259223040
0
0
128
1230346000103040000000
0
2
536870912
149796093952
0
0
128
1230346000103040000000
0
2
536870912
150332964864
0
0
128
1230346000103040000000
0
2
536870912
150869835776
0
0
128
1230346000103040000000
0
2
536870912
151406706688
0
0
128
1230346000103040000000
0
2
536870912
151943577600
0
0
128
1230355000086930000000
0
2
536870912
152480448512
0
0
128
1230355000086930000000
0
2
536870912
153017319424
0
0
128
1230355000086930000000
0
2
536870912
153554190336
0
0
128
1230355000086930000000
0
2
536870912
154091061248
0
0
128
1230355000086930000000
0
2
536870912
154627932160
0
0
128
1230355000086930000000
0
2
536870912
155164803072
0
0
128
1230355000086930000000
0
2
536870912
155701673984
0
0
128
1230355000086930000000
0
2
536870912
156238544896
0
0
128
1230355000086930000000
0
2
536870912
156775415808
0
0
128
1230355000086930000000
0
2
536870912
157312286720
0
0
128
1230355000086930000000
0
2
536870912
157849157632
0
0
128
1230355000086930000000
0
2
536870912
158386028544
0
0
128
1230355000086930000000
0
2
536870912
158922899456
0
0
128
1230355000086930000000
0
2
536870912
159459770368
0
0
128
1230355000086930000000
0
2
536870912
159996641280
0
0
128
1230355000086930000000
0
2
536870912
160533512192
0
0
128
1230364000070870000000
0
2
536870912
161070383104
0
0
128
1230364000070870000000
0
2
536870912
161607254016
0
0
128
1230364000070870000000
0
2
536870912
162144124928
0
0
128
1230364000070870000000
0
2
536870912
162680995840
0
0
128
1230364000070870000000
0
2
536870912
163217866752
0
0
128
1230364000070870000000
0
2
536870912
163754737664
0
0
128
1230364000070870000000
0
2
536870912
164291608576
0
0
128
1230364000070870000000
0
2
536870912
164828479488
0
0
128
1230364000070870000000
0
2
536870912
165365350400
0
0
128
1230364000070870000000
0
2
536870912
165902221312
0
0
128
1230364000070870000000
0
2
536870912
166439092224
0
0
128
1230364000070870000000
0
2
536870912
166975963136
0
0
128
1230364000070870000000
0
2
536870912
167512834048
0
0
128
1230364000070870000000
0
2
536870912
168049704960
0
0
128
1230364000070870000000
0
2
536870912
168586575872
0
0
128
1230364000070870000000
0
2
536870912
169123446784
0
0
128
1230373000054750000000
0
2
536870912
169660317696
0
0
128
1230373000054750000000
0
2
536870912
170197188608
0
0
128
1230373000054750000000
0
2
536870912
170734059520
0
0
128
1230373000054750000000
0
2
536870912
171270930432
0
0
128
1230373000054750000000
0
2
536870912
171807801344
0
0
128
1230373000054750000000
0
2
536870912
172344672256
0
0
128
1230373000054750000000
0
2
536870912
172881543168
0
0
128
1230373000054750000000
0
2
536870912
173418414080
0
0
128
1230373000054750000000
0
2
536870912
173955284992
0
0
128
1230373000054750000000
0
2
536870912
174492155904
0
0
128
1230373000054750000000
0
2
536870912
175029026816
0
0
128
1230373000054750000000
0
2
536870912
175565897728
0
0
128
1230373000054750000000
0
2
536870912
176102768640
0
0
128
1230373000054750000000
0
2
536870912
176639639552
0
0
128
1230373000054750000000
0
2
536870912
177176510464
0
0
128
1230373000054750000000
0
2
536870912
177713381376
0
0
128
1230382000038660000000
0
2
536870912
178250252288
0
0
128
1230382000038660000000
0
2
536870912
178787123200
0
0
128
1230382000038660000000
0
2
536870912
179323994112
0
0
128
1230382000038660000000
0
2
536870912
179860865024
0
0
128
1230382000038660000000
0
2
536870912
180397735936
0
0
128
1230382000038660000000
0
2
536870912
180934606848
0
0
128
1230382000038660000000
0
2
536870912
181471477760
0
0
128
1230382000038660000000
0
2
536870912
182008348672
0
0
128
1230382000038660000000
0
2
536870912
182545219584
0
0
128
1230382000038660000000
0
2
536870912
183082090496
0
0
128
1230382000038660000000
0
2
536870912
183618961408
0
0
128
1230382000038660000000
0
2
536870912
184155832320
0
0
128
1230382000038660000000
0
2
536870912
184692703232
0
0
128
1230382000038660000000
0
2
536870912
185229574144
0
0
128
1230382000038660000000
0
2
536870912
185766445056
0
0
128
1230382000038660000000
The CreateLSN column seems to have been truncated, so here it is again; sorry for the bulky reply.
CreateLSN (distinct values, with the number of consecutive VLFs each covers):
0 (x4)
1229654000000041600001 (x1)
1229656000000012000001 (x9)
1229989001661260800316 (x16)
1229995000058525600316 (x16)
1230004000028405600295 (x16)
1230012000103148800147 (x16)
1230338000973824800555 (x16)
1230338001901449600555 (x16)
1230346000103044000554 (x16)
1230355000086934400510 (x16)
1230364000070872800554 (x16)
1230373000054757600431 (x16)
1230382000038664800234 (x16)
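As an aside, on SQL Server 2016 SP2 and later the same per-VLF information can be pulled from a documented DMV instead of DBCC LOGINFO. A minimal sketch, run in the context of the database you want to inspect:

```sql
-- Sketch for SQL Server 2016 SP2+ only; on older versions DBCC LOGINFO is the option.
SELECT vlf_begin_offset,      -- corresponds to StartOffset in the listing above
       vlf_size_mb,           -- VLF size in MB (512 here, i.e. 536870912 bytes)
       vlf_sequence_number,   -- FSeqNo
       vlf_status,            -- 0 = inactive, 2 = active
       vlf_parity,
       vlf_create_lsn         -- CreateLSN
FROM sys.dm_db_log_info(DB_ID())
ORDER BY vlf_begin_offset;
```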
Thanks
James -
Recovering data using SQL transaction logs
Hi - if we have a backup of our HFM database (in SQL) as of 8am today, and we back up the transaction logs too, can we use them to restore the database in case of an issue? For example, if the database gets corrupted or someone deletes the wrong data, can we restore the backup and then use the transaction logs to replay the activity since the backup? Would this work if some of those activities were data loads via FDQM?
Just checking whether it makes sense to do these transaction log backups in SQL as it relates to HFM 9.2.1.
Thanks
Wags
Suppose your company performs full backups at the close of business every Friday and differential backups every Monday through Thursday evening, and you also take hourly backups of the transaction log during business hours. Assume a database failure at 11:05AM Wednesday. Under this strategy, you would use Friday's full backup and Tuesday's differential backup to restore the database to its state at the close of business Tuesday. That alone would lose the data from Wednesday morning. Using the transaction log backups, you could apply the 9AM, 10AM, and 11AM log backups after Tuesday's differential to restore the database to its state at 11AM Wednesday, recovering all but the last five minutes of database activity.
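That sequence maps directly onto RESTORE statements. A minimal sketch; the database name, paths, and file names below are placeholders, not taken from the thread, and this only works if the database is in the FULL (or BULK_LOGGED) recovery model:

```sql
-- Restore Friday's full backup, leaving the database non-operational.
RESTORE DATABASE HFM FROM DISK = N'X:\Backups\HFM_Full_Fri.bak' WITH NORECOVERY;
-- Apply the most recent differential (Tuesday evening).
RESTORE DATABASE HFM FROM DISK = N'X:\Backups\HFM_Diff_Tue.bak' WITH NORECOVERY;
-- Roll forward through the hourly log backups taken Wednesday morning.
RESTORE LOG HFM FROM DISK = N'X:\Backups\HFM_Log_Wed_0900.trn' WITH NORECOVERY;
RESTORE LOG HFM FROM DISK = N'X:\Backups\HFM_Log_Wed_1000.trn' WITH NORECOVERY;
-- Bring the database online after the last log backup has been applied.
RESTORE LOG HFM FROM DISK = N'X:\Backups\HFM_Log_Wed_1100.trn' WITH RECOVERY;
```

Each step except the last uses WITH NORECOVERY so further backups can still be applied; WITH RECOVERY on the final step rolls back incomplete transactions and opens the database.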
-
Smartfilter Transaction Logs on CE
Hi
I'm having real problems getting a CE running ACNS 5.1.5.2 to export the CE logs for CYFIN to report on.
It's creating the HTTP logs but only exporting the TFTP logs and a couple of others, so I know the FTP server is working.
I've tried it from the CLI and the GUI and seem to have more luck from the GUI.
Config is:-
transaction-logs enable
transaction-logs log-windows-domain
transaction-logs archive max-file-size 10000
transaction-logs archive interval 120
transaction-logs export enable
transaction-logs export interval 5
transaction-logs export ftp-server 192.168.10.1 anonymous **** /logs
transaction-logs format apache
(If there are any typos above, it's because I've typed them rather than copying, so please ignore them.)
Any help would be great; it's one of those backs-against-the-wall situations.
Cheers
Mark
Hi
Thanks for the response; I've had problems trying to get this to work for ages. I did have it working briefly in the lab (though that's not always the best test situation).
I've just logged a TAC case because I believe the export process failing is a bug. We can get it to create the working files and archive them after a set period of time, but they never get exported. If you manually copy the files over (using the same FTP information: address, user/password, directory), it works!
I found a bug that looks similar, but it's supposed to be fixed in the code we are running (5.1.5.2), and it relates to poor bandwidth; one of the tests used a locally connected FTP server on the same switch, and that failed too.
The engineer at Smartfilter said it does work, since he gave me the software version, but I figure the latest(ish) version should work.
I'll keep the thread updated,
regards
Mark -
Problems with the size of the transaction log
Hello,
we are having problems with the transaction log on our BW system.
Key system data:
DB:
Total size 737,280
Allocated 534,735
Free 202,545
Log:
Number of files 1
Total size 46,080
SAP version: SAP EHP 1 for SAP NetWeaver 7.0 (x64bit)
MSSQL: 9.00.4053
The transaction log is currently 45 GB. Certain BW functions (e.g. compressing an InfoCube, deleting indexes on InfoCubes, building aggregates ...) fail with the following error:
The transaction log for database 'BW1' is full. To find out
why space in the log cannot be reused, see the
log_reuse_wait_desc column in sys.databases
Database error 9002 occurred
Transaction log backups are currently set up as follows:
Automatic hourly backup
As soon as the transaction log has 7 GB allocated, an alert also triggers a backup.
SAP Note 421644 describes that long-running or large transactions can prevent the log from being truncated - and that is exactly our case!
The InfoCube affected by the "compress InfoCube" function looks like this (excerpt from DB02):
Schema TabName Used (KB) Reserved Data Rows RowModCtr
bw1 /BIC/FUCSA_C83 56,659,872 57,195,384 10,638,776 119,828,524 13,024,553
What would be a sensible size for the transaction log? Are there other possible solutions (besides enlarging the transaction log)?
Grateful for any help or suggestions!
Regards, Bernd
Hi Bernd,
we run a BW of around 11.5 TB with a corresponding number of cubes, DSOs, etc. To avoid the situation you describe, our transaction log is around 280 GB (and could still grow further!). We sized it based on the average daily volume of transaction log backups, which is roughly 250 GB. That way the system can survive a night in which quite a few things go wrong, without coming to a complete standstill or all processes failing.
Have you already investigated the situation in more detail?
Regards,
Sven
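As a starting point for that investigation, the usual first checks look something like this (a sketch; replace BW1 with the database in question):

```sql
-- Why can the log not be truncated? (e.g. LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION)
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = N'BW1';
-- How full is each database's log right now?
DBCC SQLPERF(LOGSPACE);
-- Is a long-running transaction pinning the log?
DBCC OPENTRAN('BW1');
```

If log_reuse_wait_desc shows ACTIVE_TRANSACTION, DBCC OPENTRAN identifies the oldest open transaction; if it shows LOG_BACKUP, the hourly backup cadence simply isn't keeping up with the load.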