Clearing the transaction log after EHPI

Hello everybody,
I want to clear the transaction log on my MS SQL Server 2005 system, because after the EHP4 upgrade it had grown to nearly 96 GB.
I tried this:
a. Detach the database
b. Rename the log file
c. Attach the database without the log file
d. Delete the log file
But the database would not attach; it says the default user is not able to log in, although before this I could log in easily with the same user.
Now, even after a restart, it will not connect to the database and gives the same user error.
Because of this I am not able to start the SAP system either.
Help needed.

First, you need to change the default database of your login; most likely the database you detached was the default database for that login.
Please follow this article for instructions: [http://support.microsoft.com/kb/307864]
SAP Note: [4064|https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=806925&nlang=E&smpsrv=https://websmp204.sap-ag.de]
You don't delete the transaction log file if you want to clear the transaction log. Instead, use this SQL command in Management Studio:
DBCC SHRINKFILE('logfilename', 1024) -- where 1024 is the desired size, in MB, to shrink the file to.
Now, if you have already deleted the transaction log file at OS level, you will probably have to restore the database.
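For example, a minimal sketch of the safe sequence, assuming the database is named SID and the log file's logical name is SID_log (check the real logical name with sp_helpfile; the backup path is also a placeholder):
BACKUP LOG [SID] TO DISK = N'E:\backup\SID_log.trn'  -- frees the inactive part of the log
GO
USE [SID]
GO
DBCC SHRINKFILE ('SID_log', 1024)  -- then shrink the physical file to 1024 MB
GO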
Thanks
Mush

Similar Messages

  • DPM doesn't clear transaction logs

    We use DPM 2012 to back up Exchange 2013. It works, as shown in the screenshot.
    However, the Exchange Admin Center shows no full backup.
    Also, we have a lot of old transaction logs. How do we get DPM to clear the transaction logs?

    Hi,
    Check the application event log for events from Exchange after a DPM synchronization is complete. Make sure DPM is configured to perform FULL backups for one copy of the DBs in the DAG, not just copy-only backups.
    DPM is not responsible for truncating Exchange logs. The Exchange Writer tells the Information Store that the backup has completed; the Information Store then uses its own logic to decide which logs can be truncated. Basically, the IS retrieves from the passive copies the identity of the oldest log not yet replayed to the database and looks up the Checkpoint at Log Generation value in that log's header. Logs older than the Checkpoint at Log Generation are allowed to be truncated. Approximately 200 logs should remain.
    See Tim's excellent blog posts on this subject:
    http://blogs.technet.com/b/timmcmic/archive/2012/03/12/exchange-2010-log-truncation-and-checkpoint-at-log-creation-in-a-database-availability-group.aspx
    http://blogs.technet.com/b/timmcmic/archive/2011/09/26/exchange-server-2010-and-system-center-data-protection-manager-2010.aspx#3455825
    From: http://technet.microsoft.com/en-us/library/dd876874.aspx (Exchange 2013)
    http://technet.microsoft.com/en-us/library/dd876874(v=exchg.141).aspx (Exchange 2010)
    Specifically, the Microsoft Exchange Replication Service manages CRCL so that log continuity is maintained and logs are not deleted if they are still needed for replication. The Microsoft Exchange
    Replication Service and the Microsoft Exchange Information Store service communicate by using remote procedure calls (RPCs) regarding which log files can be deleted.
    For truncation to occur on highly available (non-lagged) mailbox database copies, the answer must be "Yes" to the following questions:
    * Has the log file been backed up, or is CRCL enabled?
    * Is the log file below the checkpoint?
    * Do the other non-lagged copies of the database agree with deletion?
    * Has the log file been inspected by all lagged copies of the database?
    For truncation to occur on lagged database copies, the answer must be "Yes" to the following questions:
    * Is the log file below the checkpoint?
    * Is the log file older than ReplayLagTime + TruncationLagTime?
    * Is the log file deleted on the active copy of the database?

  • Transaction logs after large mailbox archive?

    Hi all,
    I've recently run a large mailbox archive on our mailbox database and I'm concerned about the transaction log files that will be produced.
    Some info: We run a single Exchange server on Windows Server 2008, on a single hard disk. The system is run on VMWare with a full Exchange-aware backup run every night. Database file is currently 194GB with about 90GB whitespace.
    I archived 20GB worth of email from a mailbox. My problem is that the hard disk with the database and log files on it only has 12GB of free space, so when the Recoverable Items folder is cleared 2 weeks later, is there going to be 20GB of transaction logs
    with nowhere to go? Will I have to organise some additional storage to give the log files some room?
    Appreciate any help.

    Hi,
    I notice that there is only 12 GB of free disk space on your mailbox server; that may be too small.
    Exchange will not generate many more transaction logs when the Recoverable Items folder is cleared, but it does not delete the previous transaction logs either. Meanwhile, Exchange generates new logs as mail flows and as messages are moved to the archive database, so the transaction logs only ever grow. Therefore, I recommend adding an additional disk, or scheduling regular full backups of your database to truncate the logs that are no longer required.
    Here’s the article about Mailbox Server Storage Design, for your reference:
    https://technet.microsoft.com/en-us/library/dd346703(v=exchg.141).aspx
    Best Regards,
    Allen Wang

  • System Crash after transactional log filled filesystem

    Dear gurus,
    We have an issue in our PRD system on the FlexFrame platform: SAP NW 7.4 (SP03) with ASE 15.7.0.042 (SuSE SLES 11 SP1), running as a BW system.
    While uploading data from the ERP system, the transaction log filled up. We can see in <SID>.log:
    Can't allocate space for object 'syslogs' in database '<SID>' because 'logsegment' segment is full/has no free extents. If you ran out of space in syslogs, dump the transaction log. Otherwise, use ALTER DATABASE to increase the size of the segment.
    After this we enlarged the transaction log (disk resize) and executed ALTER DATABASE <SID> log on <LOGDEVICE> = '<size>'.
    While the ALTER was running, the log filesystem filled up (100%); after this, <SID>.log began to grow tremendously.
    We stopped Sybase, and now when we try to start it the whole FF node goes down. The filesystem has free space (around 10 GB).
    Could you help us?
    Add: We think a possible solution could be to delete the transaction log, since we understand the failure is related to this log (maybe corrupted?).
    Regards

    ====================
    00:0008:00000:00009:2014/06/26 15:49:37.09 server  Checkpoint process detected hardware error writing logical page '2854988', device 5, virtual page 6586976 for dbid 4, cache 'log cache'. It will sleep until write completes successfully.
    00:0010:00000:00000:2014/06/26 15:49:37.10 kernel  sddone: write error on virtual disk 5 block 6586976:
    00:0010:00000:00000:2014/06/26 15:49:37.10 kernel  sddone: No space left on device
    00:0008:00000:00009:2014/06/26 15:49:37.10 server  bufwritedes: write error detected - spid=9, ppage=2854988, bvirtpg=(device 5, page 6586976), db id=4
    =======================
    1 - check to make sure the filesystem that device #5 (vdevno=5) sits on is not full; make sure filesystem is large enough to hold the entire defined size of device #5; make sure no other processes are writing to said filesystem
    2 - have your OS/disk admin(s) make sure the disk fragment(s) underlying device #5's filesystem isn't referenced by other filesystems and/or raw device definitions
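    Once the device has room again, dump the log rather than deleting it. As a rough sketch from isql, assuming the database is called SID (the name and dump path are placeholders):
    sp_helpdb 'SID'   -- shows log segment usage for the database
    go
    -- 'dump transaction' frees the inactive part of the log without touching the file:
    dump transaction SID to '/sybase/SID/backups/SID_tran.dmp'
    go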

  • To clear db2 archive log after db2 database backup

    After a DB2 backup, the archived logs remain as they are. Is there a setting to remove archived logs after a database backup? DAS is not installed.
    DB2 9.1 FP6
    OS: HP-UX

    Hello Anand,
    If the archived logs are not required, i.e. they are not needed for a restore, you can remove them.
    You can check this by running
    db2 list history all for db <sid>
    The above command gives you a detailed overview of the backup history: which logs were archived, and which logs are included in or needed for the restore of each backup.
    If they are no longer needed for a restore, then it is safe to remove them.
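    As a sketch, the cleanup itself can then be done from the db2 command line; the cutoff timestamp below is a placeholder and must be chosen from your own history output so that no backup you still need depends on the pruned logs:
    db2 connect to <sid>
    db2 "prune history 20091231235959 and delete"
    db2 connect reset
    The AND DELETE option physically removes the archived log files belonging to the pruned history entries.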
    Kind regards,
    Paul

  • Log Reader Agent: transaction log file scan and failure to construct a replicated command

    I encountered the following error message related to Log Reader job generated as part of transactional replication setup on publisher. As a result of this error, none of the transactions propagated from publisher to any of its subscribers.
    Error Message
    2008-02-12 13:06:57.765 Status: 4, code: 22043, text: 'The Log Reader Agent is scanning the transaction log for commands to be replicated. Approximately 24500000 log records have been scanned in pass # 1, 68847 of which were marked for replication, elapsed time 66018 (ms).'.
    2008-02-12 13:06:57.843 Status: 0, code: 20011, text: 'The process could not execute 'sp_replcmds' on ServerName.'.
    2008-02-12 13:06:57.843 Status: 0, code: 18805, text: 'The Log Reader Agent failed to construct a replicated command from log sequence number (LSN) {00065e22:0002e3d0:0006}. Back up the publication database and contact Customer Support Services.'.
    2008-02-12 13:06:57.843 Status: 0, code: 22037, text: 'The process could not execute 'sp_replcmds' on 'ServerName'.'.
    The replication agent job kept retrying at the specified intervals and kept failing with that message.
    Investigation
    I could clearly see there were transactions waiting to be delivered to subscribers from the following:
    SELECT * FROM dbo.MSrepl_transactions -- 1162
    SELECT * FROM dbo.MSrepl_commands -- 821922
    The following steps were taken to investigate the problem further. They confirmed that transactions were queued, waiting to be delivered to the distribution database:
    -- Returns the commands for transactions marked for replication
    EXEC sp_replcmds
    -- Returns a result set of all the transactions in the publication database transaction log that are marked for replication but have not been marked as distributed.
    EXEC sp_repltrans
    -- Returns the commands for transactions marked for replication in readable format
    EXEC sp_replshowcmds
    Resolution
    Taking a backup as suggested in the message wouldn't resolve the issue, and the commands retrieved via sp_browsereplcmds for the LSN mentioned in the message had no syntactic problems either.
    exec sp_browsereplcmds @xact_seqno_start = '0x00065e220002e3d00006'
    In a desperate attempt to resolve the problem, I decided to drop all subscriptions. To my surprise, the Log Reader kept failing with the same error. I had thought that with no subscriptions for its publications the log reader agent would have no reason to scan the publisher's transaction log, but obviously I was wrong. Even adding a new log reader with sp_addlogreader_agent after deleting the old one didn't help, and a restart of the server couldn't do much good either.
    EXEC sp_addlogreader_agent
    @job_login = 'LoginName',
    @job_password = 'Password',
    @publisher_security_mode = 1;
    When nothing else worked for me, I decided to give the following procedures, reserved for troubleshooting replication, a try:
    --Updates the record that identifies the last distributed transaction of the server
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
    -- Flushes the article cache
    EXEC sp_replflush
    Bingo!
    The log reader agent managed to start successfully this time. I wish I had used both commands before deciding to drop the subscriptions; it would have saved me the considerable effort and time spent re-creating them.
    Question
    Even though I managed to resolve the error and have replication functioning again, I think there might have been a better solution, and I would appreciate any feedback on how you would have approached the problem.

    Hi Hilary,
    Will the command below truncate the log records marked for replication? Is there any data loss when we execute it? Can you please help me understand the internal workings of this command?
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1

  • What is stored in a transaction log file?

    What does the transaction log file store? Is it the blocks of transactions to be executed, a snapshot of the records taken before a transaction begins executing, or just the statements found in a transaction block? Please advise.
    mayooran99

    Yes, it stores the before and after values of any data that is modified. First you have to understand the need for the transaction log; then it becomes apparent what is stored in it.
    Before a transaction can be committed, SQL Server makes sure that all of its log records are hardened in the transaction log, so that if a crash happens the data can still be recovered/restored.
    When you update some data, the data is fetched into memory and updated there, and the transaction log makes a note of it (before and after values, etc.). At that point the changes are not yet physically present in the data page on disk; they exist only in memory.
    So if a crash happens (before a checkpoint or the lazy writer has flushed the page), you would lose that data. This is where the transaction log comes in handy, because all of this information is stored in the physical transaction log file. When your server comes back up, if the transaction was committed, recovery rolls this information forward.
    When a checkpoint or lazy-writer flush happens, under the simple recovery model the log for that transaction is cleared out, provided there are no older active transactions.
    Under the full recovery model, you take log backups to clear those transactions from the transaction log.
    Writing to the transaction log is generally fast because it is written sequentially; it tracks the page numbers, LSN, and other details of what was modified.
    Similar to the data cache, there is also a log cache that speeds this process up; before a transaction is reported as committed, SQL Server waits until everything related to it has been written to the transaction log on disk.
    I advise you to pick up Kalen Delaney's SQL Server internals book and read the logging and recovery chapter for a deeper understanding.
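    To see this behaviour for yourself, here is a minimal sketch on a throwaway database (the name and backup paths are hypothetical):
    CREATE DATABASE LogDemo;
    ALTER DATABASE LogDemo SET RECOVERY FULL;
    BACKUP DATABASE LogDemo TO DISK = N'C:\Temp\LogDemo.bak';  -- a full backup starts the log chain
    GO
    USE LogDemo;
    CREATE TABLE t (id INT, val VARCHAR(20));
    INSERT INTO t VALUES (1, 'before');
    UPDATE t SET val = 'after' WHERE id = 1;  -- before/after images are written to the log
    GO
    CHECKPOINT;  -- flushes dirty pages, but in FULL recovery does not clear the log
    SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'LogDemo';  -- shows LOG_BACKUP
    BACKUP LOG LogDemo TO DISK = N'C:\Temp\LogDemo.trn';  -- only now can inactive VLFs be reused
    GO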
    Hope it Helps!!

  • JTA Transaction log circular collision

    Greetings:
    Just thought I'd share some knowledge concerning a recent JTA-related issue within WebLogic Server 6.1.2.0.
    On our Production cluster, we recently ran into the following critical-level problem:
    <Jan 10, 2003 6:00:14 PM EST> <Critical> <JTA> <Transaction log circular collision, file number 176>
    After numerous discussions with BEA Support, it appears to be a (rare) race condition within the tlog file. It was also noted by BEA during their testing of WebLogic 7.0.
    Some additional research led to an MBean attribute under WebLogic Server 7.0 entitled "CheckpointIntervalSeconds". The documentation states:
    ~~~~
    Interval at which the transaction manager creates a new transaction log file and checks all old transaction log files to see if they are ready to be deleted. Default is 300 seconds (5 minutes); minimum is 10 seconds; maximum is 1800 seconds (30 minutes).
    Default value = 300
    Minimum = 10
    Maximum = 1800
    Configurable = Yes
    Dynamic = Yes
    MBean class = weblogic.management.configuration.JTAMBean
    MBean attribute = CheckpointIntervalSeconds
    ~~~~
    After searching for an equivalent setting under WebLogic Server 6.1.2.0, nothing was found, so a custom (unsupported) patch was created to change the hardcoded setting under 6.1
    from
    ... CHECKPOINT_THRESHOLD_MILLIS = 5 * 60 * 1000;
    to
    ... CHECKPOINT_THRESHOLD_MILLIS = 10 * 60 * 1000;
    within com.bea.weblogic.transaction.internal.ServerTransactionManagerImpl.
    If you'd like additional details, feel free to contact me via e-mail <[email protected]> or by phone +1.404.327.7238. Hope this helps!
    Brian J. Mitchell
    BEA Systems Administrator
    TRX
    6 West Druid Hills Drive
    Atlanta, GA 30329 USA

    Hi 783703,
    As Sridhar suggested, for your problem you have to set the transaction timeout in j2ee/home/config/transaction-manager.xml.
    If you set Idempotent to false for your partner links, BPEL PM will store the status up to that invoke (proof that the invoke was executed).
    So it is better to increase the timeout rather than change idempotency, as that has some side effects.
    And coming to dehydration: ideally, performance is better when there are not many dehydration points in a process, but for some scenarios it is better to have dehydration (e.g., so you can know the status of the process).
    The dehydration store is not cleared after completion of the process. Dehydration means the engine stores these details in tables (like cube_instance, cube_scope, etc.).
    Regards
    PavanKumar.M

  • Sql 2008 Issue restoring transaction logs....

    ** Update: I performed the same steps on the corresponding Dev server and things worked as expected. Only our prod environment uses SnapManager for SQL (NetApp), and I'm beginning to suspect it may be behind this issue.
    Restored a full backup of the prod MyDB from 1/23/2014 in non-operational mode (so trans logs can be added). Planned to apply trans log dumps from 1/24/2014, 7am (our first of the day) to noon. But applying the 7am trans dump gave this error:
    >>>>>
    Restore Failed for this Server... the Log in this backup set begins at....which is too recent to apply to the database. An earlier log backup that includes LSN....can be restored.
    >>>>>
    That message is clear, but I don't understand it in this case, as the full DB dump was taken Thursday night and the tran logs I am trying to restore are all from Friday.
    TIA,
    edm2

    ** Update 2 **
    I kept checking and now definitely think that the NetApp SnapManager for SQL product (which is a storage-based, not SQL-based, approach to DR) is the culprit. My view of the world was that a full SQL database backup is performed at 7pm and the SQL translogs are dumped every hour beginning at 7:15am the next day. This extract from the SnapManager log tells quite a different story: it takes a full database backup at 11pm (!) that night, followed by a translog backup.
    No wonder restoring with the SQL utilities doesn't work. BTW: I have no idea where SnapManager's dumps are stored.
    >>>>>>>>>>>>>>>>>>>>>>>>
    [23:00:32.295]  *** SnapManager for SQL Server Report
    [23:00:32.296]  Backup Time Stamp: 01-24-2014_23.00.32
    [23:00:32.298]  Getting SQL Server Database Information, please wait...
    [23:00:32.299]  Getting virtual disks information...
    [23:00:37.692]  Querying SQL Server instances installed...
    [23:01:01.420]  Full database backup
    [23:01:01.422]  Run transaction log backup after full database backup: Yes
    [23:01:01.423]  Transaction logs will be truncated after backup: Yes
    [23:02:39.088]  Database [MyDatabase] recovery model is Full.
    [23:02:39.088]  Transaction log backup for database [MyDatabase] will truncate logs...
    [23:02:39.089]  Starting to backup transaction log for database [MyDatabase]...
    [23:02:39.192]  Transaction log backup of database [MyDatabase] completed.
    >>>>>>>>>>>>>>>>>>>>>>>>
    Unless anyone has further thoughts I think I will close this case and take it up with NetApp.
    edm2
    Sorry I wasn't clearer. The full database backup was taken on 1/23/2014 at 7pm. The trans logs I was trying to restore were from the next day (starting 1/24/2014 at 7:15am, 8:15am, etc.). I could not find any SQL translog dumps taken after the full backup (at 7pm) until the next morning's trans dumps (which start at 7:15am). Here is what I did:
    RESTORE DATABASE [MyDatabase] FROM  DISK =
     N'D:\MyDatabase\FULL_(local)_MyDatabase_20140123_190400.bak' WITH  FILE = 1,
     MOVE N'MyDatabase_data' TO N'C:\MSSQL\Data\MyDatabase.mdf', 
     MOVE N'MyDatabase_log' TO N'C:\MSSQL\Data\MyDatabase_1.LDF', 
     NORECOVERY,  NOUNLOAD,  STATS = 10
    GO
    RESTORE LOG [MyDatabase] FROM  DISK =
    N'D:\MyDatabase\MyDatabase_backup_2014_01_24_071501_9715589.trn'
    WITH  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 10
    GO
    Msg 4305, Level 16, State 1, Line 1
    The log in this backup set begins at LSN 250149000000101500001, which is too recent to apply to the database. An earlier log backup that includes LSN 249926000000024700001 can be restored.
    Msg 3013, Level 16, State 1, Line 1
    RESTORE LOG is terminating abnormally.
    From Sql Error Log:
    2014-01-25 00:00:15.40 spid13s     This instance of SQL Server has been using a process ID of 1428 since 1/23/2014 9:31:01 PM (local) 1/24/2014 5:31:01 AM (UTC). This is an informational message only; no user action is required.
    2014-01-25 07:31:08.79 spid55      Starting up database 'MyDatabase'.
    2014-01-25 07:31:08.81 spid55      The database 'MyDatabase' is marked RESTORING and is in a state that does not allow recovery to be run.
    2014-01-25 07:31:14.11 Backup      Database was restored: Database: MyDatabase, creation date(time): 2014/01/15(16:41:13), first LSN: 249926:231:37, last LSN: 249926:247:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK:
    {'D:\MyDatabase\FULL_(local)_MyDatabase_20140123_190400.bak'}). Informational message. No user action required.
    Regarding my update note, the SnapManager for SQL product (which I was told simply uses VSS) runs every hour throughout the night. That's why I wonder if it could be interfering with the transaction log sequence.
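    One way to confirm that another tool broke the chain is to read the LSNs straight from the backup files and from msdb, which records every backup taken against the instance regardless of which tool took it. A sketch using the paths from the post above:
    RESTORE HEADERONLY FROM DISK = N'D:\MyDatabase\FULL_(local)_MyDatabase_20140123_190400.bak';
    RESTORE HEADERONLY FROM DISK = N'D:\MyDatabase\MyDatabase_backup_2014_01_24_071501_9715589.trn';
    -- FirstLSN/LastLSN of consecutive log backups must chain; a gap means a backup was taken in between.
    SELECT database_name, backup_start_date, type, first_lsn, last_lsn
    FROM msdb.dbo.backupset
    WHERE database_name = N'MyDatabase'
    ORDER BY backup_start_date;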

  • The transaction log for database 'BizTalkMsgBoxDb' is full.

    Hi All,
    We are getting the following error continuously in the event viewer of our UAT servers. I checked the jobs, and all the backup jobs were failing on the step that backs up the transaction log file, giving the same error. Our DBAs cleaned the MessageBox manually and backed up the DB, but after some time the jobs start failing again and this error is logged in the event viewer.
    The transaction log for database 'BizTalkMsgBoxDb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases".
    Thanks,
    Abdul Rafay
    http://abdulrafaysbiztalk.wordpress.com/

    Putting the database into simple recovery mode and shrinking the log file isn't going to help: it'll just grow again, it will probably fragment across the disk thereby impacting performance and, eventually, it will fill up again for the same reason
    as before.  Plus you put yourself in a very vulnerable position for disaster recovery if you change the recovery mode of the database: and that's before we've addressed the distributed transaction aspect of the BizTalkDatabases.
    First, make sure you're backing up the log file using the BizTalk job Backup BizTalk Server (BizTalkMgmtDb).  It might be that the log hasn't been backed up and is full of transactions: and, eventually, it will run out of space.  Configuration
    instructions at this link:
    http://msdn.microsoft.com/en-us/library/aa546765(v=bts.70).aspx  Your DBA needs to get the backup job running properly rather than panicking!
    If this is running properly, and backing up (which was the case for me) and the log file is still full, run the following query:
    SELECT Name, log_reuse_wait_desc
    FROM sys.databases
    This will tell you why the log file isn't properly clearing down and why it cannot use the space inside.  When I had this issue, it was due to an active transaction.
    I checked for open transactions on the server using this query:
    SELECT
        s_tst.[session_id],
        s_es.[login_name] AS [Login Name],
        DB_NAME(s_tdt.database_id) AS [Database],
        s_tdt.[database_transaction_begin_time] AS [Begin Time],
        s_tdt.[database_transaction_log_record_count] AS [Log Records],
        s_tdt.[database_transaction_log_bytes_used] AS [Log Bytes],
        s_tdt.[database_transaction_log_bytes_reserved] AS [Log Rsvd],
        s_est.[text] AS [Last T-SQL Text],
        s_eqp.[query_plan] AS [Last Plan]
    FROM sys.dm_tran_database_transactions s_tdt
    JOIN sys.dm_tran_session_transactions s_tst
        ON s_tst.[transaction_id] = s_tdt.[transaction_id]
    JOIN sys.[dm_exec_sessions] s_es
        ON s_es.[session_id] = s_tst.[session_id]
    JOIN sys.dm_exec_connections s_ec
        ON s_ec.[session_id] = s_tst.[session_id]
    LEFT OUTER JOIN sys.dm_exec_requests s_er
        ON s_er.[session_id] = s_tst.[session_id]
    CROSS APPLY sys.dm_exec_sql_text (s_ec.[most_recent_sql_handle]) AS s_est
    OUTER APPLY sys.dm_exec_query_plan (s_er.[plan_handle]) AS s_eqp
    ORDER BY [Begin Time] ASC;
    GO
    And this told me the spid of the process with an open transaction on BizTalkMsgBoxDB (in my case, this was something that had been open for several days).  I killed the transaction using KILL spid, where spid is an integer.  Then I ran the BizTalk
    Database Backup job again, and the log file backed up and cleared properly.
    Incidentally, just putting the database into simple recovery mode would have emptied the log file, giving it lots of space to fill up again. But that doesn't deal with the root cause: why the backups were failing in the first place.
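    For a quicker first check than the query above, DBCC OPENTRAN reports the oldest active transaction holding up the log. A sketch using the database from this thread (the spid in the KILL line is a hypothetical value):
    DBCC OPENTRAN ('BizTalkMsgBoxDb');
    -- if an orphaned session is pinning the log, terminate it:
    -- KILL 53;  -- replace 53 with the session_id reported above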

  • Restore ASE - Dump of open transaction-log required?

    Hi experts,
    I am still doing some restore tests.
    What about the following case.
    The last transaction log dump was taken at 1 o'clock and the next will be at 4 o'clock.
    At 3 o'clock we detect that we have to restore to 2 o'clock.
    So for this restore, I need the transaction log which hasn't been dumped yet.
    My question is: do I have to dump the current transaction log to a file as well for the restore procedure?
    Or is there another way to include the current log in the restore?
    In other words, when will the log file first be touched?
    After the "online database" command?
    If so, I can also do the restore using the original log file, right?
    Kind regards

    Christian,
    You are right.
    Let me tell you the practice I recommend following:
    1. Take a full backup daily during your off-business hours if you have the infrastructure (tape/disk or SAN) and the data is critical, e.g. production or development.
    2. During business hours, take a transaction log backup every hour or every half hour, say between 9 and 6 in your time zone :)
    3. This mostly helps you minimise transaction log loss.
    4. As you have the weekend reorg and update stats running, I prefer taking a full backup just before the start of production hours on Monday and keeping it safe, so that the data is super clean and secure.
    If there is any confusion, let me know and I will explain it still more clearly, in simpler words.
    PS: One full backup per day is fine if you can retain it for 7-10 days and delete it later when you no longer need it, provided you don't have infrastructure or disk cost problems :P ;)
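    To sketch the point-in-time restore being discussed (the database name, paths and timestamp are placeholders): yes, you first dump the not-yet-dumped tail of the log, then include it in the load sequence:
    dump transaction SID to '/dumps/SID_tail.trn'   -- capture the current log first
    go
    load database SID from '/dumps/SID_full.dmp'
    go
    load transaction SID from '/dumps/SID_tran_0100.trn'
    go
    load transaction SID from '/dumps/SID_tail.trn' with until_time = 'Jun 26, 2014 2:00AM'
    go
    online database SID   -- the database (and its log) is only written to again from here
    go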
    Cheers
    Kiran K Adharapuram

  • Transaction log maintainance

    Hi All,
    Is there any way that I can clear the transaction logs within a specific time period?
    My primary objective is to put this into a daily automation that backs up the transaction logs for transaction replay and clears the previous logs.

    Stop the application and have a script that deletes the files in the transaction log directory.
    Or the official Oracle take is :-
    *"Periodically, you might want to remove the transaction log store and the files in the Replay directory to increase available disk space on Essbase Server.*
    *Transaction log store: Oracle recommends removing the transaction log store for one database at a time. The log store is in a subdirectory under the log location specified by the TRANSACTIONLOGLOCATION configuration setting. For example, if the log location for the Sample.Basic database is /Hyperion/trlog, delete the contents of the following directory:*
    */Hyperion/trlog/Sample/Basic*
    *Replay directory: After you have replayed transactions, the data and rules files associated with the replayed transactions can be removed from the ARBORPATH/app/appname/dbname/Replay directory (see Configuring Transaction Replay). You can delete all of the files in the Replay directory, or follow these guidelines for selectively removing files:*
    ** Remove the data and rules files in chronological order, from earliest to latest.*
    ** Do not remove data and rules files with a timestamp that is later than the timestamp of the most recent archive file.*
    *Note: Oracle recommends waiting until several subsequent database backups have been taken before deleting files associated with transaction logging and replay."*
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Question about full backup and Transaction Log file

    I had a query: won't taking a full backup daily keep my log file from growing? After taking the full backup I still see some of the VLFs in status 2; they went away only when I manually took a backup of the log file. I am a bit confused: should I perform both a transaction log backup and a full database backup daily to avoid this in future? Also, until I run SHRINKFILE, the storage space used on the server won't be reduced, right?

    Yes, a full backup does not clear the log file; only a log backup does. Once a log backup is taken, it sets the inactive VLFs in the log file to status 0.
    You should perform log backups according to your business SLA for data loss.
    Go ahead and ask yourself:
    if a disaster strikes and your database server is lost and your only option is to restore from backup, how much data loss can your business handle?
    The answer to that question is how frequent your log backups should be:
    if the answer is 10 minutes, you should have log backups at least every 10 minutes;
    if the answer is 30 minutes, at least every 30 minutes;
    if the answer is 90 minutes, at least every 90 minutes.
    When you restore, you restore the latest full backup, plus the latest differential taken after that full backup, and all the log backups taken since that full (or differential) backup.
    There are several resources on the web, including YouTube videos, that explain these concepts clearly; I advise you to look at them.
    To release file space to the OS, you have to shrink the file. A log file shrink proceeds from the end of the file up to the point where it reaches an active VLF:
    if there are no inactive VLFs at the end, the log file is not shrinkable, no matter how many inactive VLFs it has at the beginning.
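    A minimal sketch of watching the VLFs flip (the database, logical file name and path are hypothetical):
    DBCC LOGINFO ('MyDb');  -- VLFs with Status = 2 are active or not yet backed up
    BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn';
    DBCC LOGINFO ('MyDb');  -- backed-up VLFs now show Status = 0
    DBCC SHRINKFILE (MyDb_log, 1024);  -- releases space only from trailing inactive VLFs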
    Hope it Helps!!

  • Cannot write to transaction log "C:\Program Files (x86)\SAP BusinessObjects\sqlanywhere\database\BI4_Audit.log"

    Hi friends,
    My Server Intelligence Agent (SIA) cannot start because the database service "SQLAnywhereForBI" can't start either. I get the following error inside the database log file:
    "I. 08/09 20:35:06. A read failed with error code (1392): Le fichier ou le répertoire est endommagé et illisible. [The file or directory is corrupted and unreadable.]
    E. 08/09 20:35:06. Fatal error: cannot write to transaction log "C:\Program Files (x86)\SAP BusinessObjects\sqlanywhere\database\BI4_Audit.log"
    E. 08/09 20:35:06. unable to start database "C:\Program Files (x86)\SAP BusinessObjects\sqlanywhere\database\BI4_CMS.db"
    E. 08/09 20:35:06. Error writing to transaction log file
    I. 08/09 20:35:06. Database server shutdown due to startup error"
    Please, can you help me?

    I found the solution by following the advice given on the following forum:
    http://evtechnologies.com/transaction-logs-on-sybase-sql-anywhere-and-sap-businessobjects-bi-4-1
    In fact, I overwrote the BI4_Audit.db and BI4_Audit.log files, replacing them with copies from another machine where I had installed BO again and where the files were not corrupted. Then I connected to the CMS database by executing this on the command line:
    dbisql -c "UID=DBA;PWD=mypassword;Server=BI4;DBF=C:\Program Files (x86)\SAP BusinessObjects\sqlanywhere\database\BI4_CMS.db"
    Once connected, I ran the command:
    alter database 'C:\Program Files (x86)\SAP BusinessObjects\sqlanywhere\database\BI4_Audit.db' alter log off;
    The query ran successfully.
    And that's good: I can connect to BO smoothly.
    Thank you again, Eric

  • The transaction log for database 'Test_db' is full due to 'LOG_BACKUP'

    My dear All,
    I've come up against another issue:
    The app team is pushing data from one Prod1 server ('test_1db') to another Prod2 server ('User_db') through a job. While pushing the data, after some duration the job fails and throws the following error:
    'Error: 9002, Severity: 17, State: 2. The transaction log for database 'User_db' is full due to 'LOG_BACKUP'.'
    On the Prod2 server the drive holding the 'User_db' log has plenty of space (400 GB) and the growth increment is 250 MB. I am really confused as to why the job is failing when there is so much space available. Kindly guide me in troubleshooting this issue, which has been occurring for more than a week. Kindly refer to the screenshot for the same.
    Environment: SQL Server 2012 SP1 Enterprise Edition; log backups run every 15 minutes; there is no high availability between the servers.
    Note: Changing to the simple recovery model might resolve this, but the app team requires the full recovery model because they need log backups.
    Thanks in advance,
    Nagesh

    Dear V,
    Thanks for the suggestions.
    I have followed some steps to resolve the issue, and as of now my jobs are working without issue.
    Steps:
    Generate a log backup every 5 minutes.
    Increase the file growth from 500 MB to unrestricted.
    Once the whole job has completed, shrink the log file.
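    For reference, a quick sketch of the two checks that show whether the log is really the problem (using the database name from this thread; run while the job executes):
    DBCC SQLPERF (LOGSPACE);  -- shows how much of each log file is actually in use
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'User_db';  -- LOG_BACKUP means the log is waiting for a log backup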
    Nagesh
