Knowledge on the transaction log?

Hi All,
I have a couple of questions:
Question-1:
I need to know whether running the Import/Export Wizard will increase transaction log (T-log) growth, and whether running a simple SELECT statement will grow the T-log. To my limited knowledge, only data modification statements (INSERT, UPDATE, DELETE) and data definition language (DDL) statements grow the T-log. How about the Import/Export Wizard or a simple SELECT statement?
Question-2:
Also, what happens under the simple recovery model compared to the full recovery model? I assume the data is first written to the T-log and, once committed, is moved to the .mdf file. In this scenario, what happens under simple and under full recovery, and how do they differ from each other? Please help me understand the internal architecture/inner operations of the recovery models.
Best Regards,
Moug

Hi,
Q1) No. SELECT statements don't get logged, because they don't modify data. The Import/Export Wizard writes to the database, hence the T-log will be used. Any statement other than DRL (Data Retrieval Language) will be either fully or minimally logged.
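If you want to verify this yourself, here is a small experiment, a sketch only (dbo.SomeTable is a hypothetical table); compare the two snapshots:
DBCC SQLPERF (LOGSPACE);  -- snapshot: log percent-in-use per database
SELECT COUNT(*) FROM dbo.SomeTable;  -- read-only; adds essentially no log records
INSERT INTO dbo.SomeTable (col1) VALUES ('x');  -- data modification; this is logged
DBCC SQLPERF (LOGSPACE);  -- compare: log use grows after the INSERT, not the SELECT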
Q2) In the case of the simple recovery model, the data is in the transaction log until it commits. Once it is committed and the changes are written to the .mdf (at a checkpoint), the space is cleared, which means it can be reused by other transactions. In the case of the full recovery model, the space can only be reused once a log backup is taken.
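A minimal T-SQL sketch of that difference (MyDb is a hypothetical database name):
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDb';
-- Under SIMPLE, inactive log space is freed automatically at each checkpoint:
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
-- Under FULL, inactive log space is kept until a log backup captures it:
ALTER DATABASE MyDb SET RECOVERY FULL;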
Check this link about the transaction log, which should clear up all your doubts:
http://msdn.microsoft.com/en-gb/library/ms190925.aspx
You can check the log_reuse_wait_desc column in sys.databases to find out why transaction log space is not being reused:
http://msdn.microsoft.com/en-gb/library/ms178534.aspx
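For example, run this on the server in question:
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = DB_NAME();  -- or filter on the database you care about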
Watch this video for an in-depth look at transaction log internals:
http://technet.microsoft.com/en-US/sqlserver/gg313762.aspx
Regards, Ashwin Menon My Blog - http://sqllearnings.com

Similar Messages

  • Content Engine transaction logs -- monitoring and analysis

    At our remote sites there's a local Cisco CE511 to ease our WAN bandwidth usage. I have been tasked with finding a method to gather CE usage data for trending and troubleshooting.
    From my search on the internet I decided to go with the Webalizer application. I set up the CEs to export their transaction logs every hour to my FTP server. After a test of Webalizer on a log file, it produced a nice HTML report for that hour.
    I would like to discuss with anyone how to bring this up to a new level. I would like Webalizer to run as a cron job, but the log file name changes every hour, so that's a hurdle I need to figure out. Keeping track of user web hits is also important; I would like to make sure my reports accurately identify which IP address is the top talker.
    I hope this will start a productive exchange of ideas. Thanks.

    Simple Network Management Protocol (SNMP) is an interoperable standards-based protocol that allows for external monitoring of the Content Engine through an SNMP agent.
    An SNMP-managed network consists of three primary components: managed devices, agents, and management systems. A managed device is a network node that contains an SNMP agent and resides on a managed network. Managed devices collect and store management information and use SNMP to make this information available to management systems that use SNMP. Managed devices include routers, access servers, switches, bridges, hubs, computer hosts, and printers.
    An SNMP agent is a software module that resides in a managed device. An agent has local knowledge of management information and translates that information into a form compatible with SNMP. The SNMP agent gathers data from the Management Information Base (MIB), which is the repository for information about device parameters and network data. The agent can also send traps, or notification of certain events, to the manager.
    http://www.cisco.com/en/US/products/sw/conntsw/ps491/products_configuration_guide_chapter09186a0080236630.html#wp1101506

  • JTA Transaction log circular collision

    Greetings:
    Just thought I'd share some knowledge concerning a recent JTA-related issue within WebLogic Server 6.1.2.0.
    On our Production cluster, we recently ran into the following critical-level problem:
    <Jan 10, 2003 6:00:14 PM EST> <Critical> <JTA> <Transaction log circular collision, file number 176>
    After numerous discussions with BEA Support, it appears to be a (rare) race condition within the tlog file. It was also noted by BEA during their testing of WebLogic 7.0.
    Some additional research led to an MBean attribute under *WebLogic Server 7.0* entitled "CheckpointIntervalSeconds". The documentation states:
    ~~~~
    Interval at which the transaction manager creates a new transaction log file and checks all old transaction log files to see if they are ready to be deleted. Default is 300 seconds (5 minutes); minimum is 10 seconds; maximum is 1800 seconds (30 minutes).
    Default value = 300
    Minimum = 10
    Maximum = 1800
    Configurable = Yes
    Dynamic = Yes
    MBean class = weblogic.management.configuration.JTAMBean
    MBean attribute = CheckpointIntervalSeconds
    ~~~~
    After searching for an equivalent setting under WebLogic Server 6.1.2.0, nothing was found, so a custom (unsupported) patch was created to change this hardcoded setting under 6.1
    from
    ... CHECKPOINT_THRESHOLD_MILLIS = 5 * 60 * 1000;
    to
    ... CHECKPOINT_THRESHOLD_MILLIS = 10 * 60 * 1000;
    within com.bea.weblogic.transaction.internal.ServerTransactionManagerImpl.
    If you'd like additional details, feel free to contact me via e-mail <[email protected]> or by phone +1.404.327.7238. Hope this helps!
    Brian J. Mitchell
    BEA Systems Administrator
    TRX
    6 West Druid Hills Drive
    Atlanta, GA 30329 USA
              

    Hi 783703,
    As Sridhar suggested, for your problem you have to set the transaction timeout in j2ee/home/config/transaction-manager.xml.
    If you set Idempotent to false for your partner links, BPEL PM will store the status up to that invoke (proof that the invoke was executed).
    So it is better to increase the timeout rather than change idempotency, as that has some side effects.
    And coming to dehydration: ideally, performance is better when there are not many dehydration points in the process. But for some scenarios it is better to have dehydration (e.g., so we can know the status of the process, etc.).
    The dehydration store does not get cleared after completion of the process. Here dehydration means it stores these details in tables (like Cube_instance, Cube_scope, etc.).
    Regards
    PavanKumar.M

  • The system failed to flush data to the transaction log. Corruption may occur.

    We have a Windows Server 2008 R2 virtual machine and we are getting the following warning event.
    Event 51 Volmgr
    The system failed to flush data to the transaction log.  Corruption may occur.
    Any idea what is wrong with this server? Why is this event occurring?

    Hi Jitender KT,
    Before going further, would you please let me know the complete error message (a screenshot would help if you can provide one)? Please check in Event Viewer whether there are other related events you can find, such as Event 57 and so on. Meanwhile, can you remember what operations you performed before the warning occurred?
    Based on the message you provided, please run the chkdsk command to check whether it finds any errors. The issue seems to be related to the storage device. Please refer to the following similar question:
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/044b10af-c253-46de-b40d-ce9d128b83d7/event-id-57-source-volmgr?forum=winservergen
    In addition, please also refer to the following link. It should be helpful.
    http://www.eventid.net/display-eventid-57-source-volmgr-eventno-8865-phase-1.htm
    Hope this helps.
    Best regards,
    Justin Gu

  • Cannot write to transaction log "C:\Program Files (x86)\SAP BusinessObjects\sqlanywhere\database\BI4_Audit.log"

    Hi friends,
    My Server Intelligence Agent (SIA) cannot start because the database service "SQLAnywhereForBI" cannot start either. I got the following error:
    "I. 08/09 20:35:06. A read failed with error code: (1392), The file or directory is corrupted and unreadable.
    E. 08/09 20:35:06. Fatal error:  cannot write to transaction log "C:\Program Files (x86)\SAP BusinessObjects\sqlanywhere\database\BI4_Audit.log"
    E. 08/09 20:35:06. unable to start database "C:\Program Files (x86)\SAP BusinessObjects\sqlanywhere\database\BI4_CMS.db"
    E. 08/09 20:35:06. Error writing to transaction log file
    I. 08/09 20:35:06. Database server shutdown due to startup error "
    inside the database log file.
    Please, can you help me?

    I found the solution by following the advice given on the following forum:
    http://evtechnologies.com/transaction-logs-on-sybase-sql-anywhere-and-sap-businessobjects-bi-4-1
    In fact, I deleted the BI4_Audit.db and BI4_Audit.log files and replaced them with copies from another machine where I had installed BO again and where the files were not corrupted. Then I logged in to the CMS database by executing the command on the command line:
    dbisql -c "UID=DBA;PWD=mypassword;Server=BI4;DBF=C:\Program Files (x86)\SAP BusinessObjects\sqlanywhere\database\BI4_CMS.db"
    Once connected, I ran the command:
    alter database 'C:\Program Files (x86)\SAP BusinessObjects\sqlanywhere\database\BI4_Audit.db' alter log off;
    The query ran successfully.
    And now I can connect to BO smoothly.
    Thank you again, Eric

  • Error in db6conv failed due to transaction log full

    Hi,
    I have a huge problem with my production system.
    I was executing db6conv v4.08 to convert a table to a new tablespace, and it stopped because the transaction log was full.
    Now I have this situation:
    table soffcont1
    db6conv status: preliminary
    I checked the job db6conv_job_soffcont1; its status is "scheduled".
    The problem is that when I try to execute this job it gives me an error:
    Definition of job db6conv_job_soffcont1 is incomplete. Operation is not possible.
    regards,
    filipe vasconcelos

    Hi Filipe,
    I will follow up on this problem in your OSS message.
    Regards, Frank

  • How does one read the Unity transaction logs on Exchange?

    I'm trying to find out who placed a call to whom on a particular day and time. A subscriber periodically receives a blank email with no "from" statement, just a timestamp. They don't have unified messaging. I want to see if the transaction logs (located on Exchange) contain that information, but they are all cryptic symbols. Is there an application or utility to read them? These are the files located in the \Exchsrvr\MDBDATA folder. This is Unity 5.x and Exchange 2003. Thanks.

    Hi,
    I think you may want to look at Message Tracking:
    http://www.msexchange.org/tutorials/Exchange-2003-Message-Tracking-Logging.html
    But here is also an explanation of transaction logging and how to read the logs, from the same site:
    http://www.msexchange.org/articles/Transaction-Logs-Lifeblood-Exchange.html
    Hope that helps,
    Brad

  • Transaction log and access log

    The transaction log (TransactionLogFilePrefix) and the access log are stored relative to the directory where the server is started, rather than relative to where the server resides, as is the case with the rest of the log files. Why is this?
    Eg.
    I start the server with a batch file contained in
    projects\bat
    My server is in
    projects\server\config\myDomain
    When I start the server the access and transaction logs end up in
    projects\bat
    while all the rest of the log files (such as the domain and server log) end
    up in
    projects\server
    My batch file that starts the server looks like this
    "%JAVA_HOME%\bin\java" -hotspot -ms64m -mx64m -classpath %CLASSPATH%
    "-Dbea.home=e:\bea"
    "-Djava.security.policy==i:\projects\server\config\myDomain\weblogic.policy"
    "-Dweblogic.Domain=myDomain" "-Dweblogic.Name=adminServer"
    "-Dweblogic.RootDirectory=i:/projects/server"
    "-Dweblogic.management.password=weblogic" weblogic.Server
    Thanks for help on this,
    Myles

    The same case with me. I sent an email to Apple support but got no reply.
    The Apple status page indicated that everything is fine now, what a joke.
    Many devs are in this situation too. I guess we can do nothing but wait for their system to come back up.

  • Transaction log shipping restore with standby failed: log file corrupted

    The transaction log restore failed and I get the error below. It happens for only one of the four log-shipped databases on this SQL Server; the remaining ones are working fine.
    Date: 9/10/2014 6:09:27 AM
    Log: Job History (LSRestore_DATA_TPSSYS)
    Step ID: 1
    Server: DATADR
    Job Name: LSRestore_DATA_TPSSYS
    Step Name: Log shipping restore log job step.
    Duration: 00:00:03
    Sql Severity: 0
    Sql Message ID: 0
    Operator Emailed:
    Operator Net sent:
    Operator Paged:
    Retries Attempted: 0
    Message
    2014-09-10 06:09:30.37  *** Error: Could not apply log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' to secondary database 'TPSSYS'.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred while processing the log for database 'TPSSYS'. 
    If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
    An error occurred during recovery, preventing the database 'TPSSYS' (13:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
    RESTORE LOG is terminating abnormally.
    Processed 0 pages for database 'TPSSYS', file 'TPSSYS' on file 1.
    Processed 1 pages for database 'TPSSYS', file 'TPSSYS_log' on file 1.(.Net SqlClient Data Provider) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  Skipping log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' for secondary database 'TPSSYS' because the file could not be verified.
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  Deleting old log backup files. Primary Database: 'TPSSYS'
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  The restore operation completed with errors. Secondary ID: 'dd25135a-24dd-4642-83d2-424f29e9e04c'
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not cleanup history.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.38  ----- END OF TRANSACTION LOG RESTORE    
    Exit Status: 1 (Error)

    I have restored the database to a new server and set up new log shipping, but it gives the same error again. If it were a network issue, I believe the issue would occur on every log-shipped database on that server.
    error :
    Message
    2014-09-12 10:50:03.18    *** Error: Could not apply log backup file 'E:\LsSecondery\EAPDAT\EAPDAT_20140912051511.trn' to secondary database 'EAPDAT'.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-12 10:50:03.18    *** Error: An error occurred while processing the log for database 'EAPDAT'.  If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
    An error occurred during recovery, preventing the database 'EAPDAT' (8:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
    RESTORE LOG is terminating abnormally.
    Can this happen due to database or log file corruption? If so, how can I check to verify the issue?
    It would not necessarily happen every day if the issue were with the network; IMO it basically happens when the load on the network is high and the log file being transferred is big.
    As per the message, the database engine was not able to restore the log backup and said that you must rebuild the log because it did not find the log to be consistent. From here it looks like log corruption.
    Is it the same log file you restored? If that is the case, since the log file was corrupt, it would of course give an error on whatever server you restore it to.
    Can you try creating log shipping on a new server from a fresh full backup and log backups and see whether you get the issue there as well? I would also suggest raising a case with Microsoft and letting them determine the root cause of this problem.
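    As a first check for a damaged backup file, you can ask SQL Server to verify it without restoring it; a minimal sketch, using the .trn path from the error above:
    -- Verify the backup set is readable and complete (no restore is performed):
    RESTORE VERIFYONLY
    FROM DISK = N'\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn';
    -- Inspect the header (database name, LSN range) to confirm the restore sequence:
    RESTORE HEADERONLY
    FROM DISK = N'\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn';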
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Articles

  • Transaction log full

    The transaction log is full in the production system; when I tried to log in to the SAP system it showed the error message 'SNAP_NO_NEW_ENTRIES'.
    Our system is DB2 on AIX. Can anybody help us with a step-by-step procedure to resolve the issue?
    The best answer will be rewarded.
    Thanks
    Imran khan

    You have to increase the sum of the logs in order to enlarge the database log. Please do not forget that the log must fit in the underlying file system, e.g. /db2/<SID>/dir_log, so you might have to increase this as well using SMITTY.
    (DB6) [IBM][CLI Driver][DB2/AIX64] SQL0964C  The transaction log for the database is full.  SQLSTATE=57011
    [root] > su - db2<sid>
    1> db2 get db cfg for <SID> | grep -i logfilsiz
    Log file size (4KB)                         (LOGFILSIZ) = 16380
    2> db2 get db cfg for <SID> | grep -i logprimary
    Number of primary log files                (LOGPRIMARY) = 20
    3> db2 get db cfg for <SID> | grep -i logsecond
    Number of secondary log files               (LOGSECOND) = 40
    So we have log space of at most 16,380 * 4,096 * 60 = 4,025,548,800 bytes (about 4 GB). This needs to be increased by raising LOGPRIMARY and/or LOGSECOND (assuming LOGFILSIZ = 16380 pages of 4 KB; query DB2 for your actual size!).
    4> db2 update db cfg for <SID> using logsecond 80 immediate
    DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
    SQL1363W One or more of the parameters submitted for immediate modification were not changed dynamically. For these configuration parameters, all applications must disconnect from this database before the changes become effective.
    5> db2 get db cfg for <SID> | grep -i logprimary
    Number of primary log files                (LOGPRIMARY) = 20
    6> db2 get db cfg for <SID> | grep -i logsecond
    Number of secondary log files               (LOGSECOND) = 80
    7> db2stop
    02/20/2007 09:17:12     0   0   SQL1064N  DB2STOP processing was successful.
    SQL1064N  DB2STOP processing was successful.
    8> db2start
    02/20/2007 09:17:19     0   0   SQL1063N  DB2START processing was successful.
    SQL1063N  DB2START processing was successful.
    -> Please keep in mind that the SAP system needs to be down when restarting DB2.
    Check via snapshot:
    9> db2 get snapshot for database on <SID>
    Log space available to the database (Bytes)= 2353114756 (≈ 2,353 MB)
    Log space used by the database (Bytes)     = 4329925244 (≈ 4,330 MB)
    Maximum secondary log space used (Bytes)   = 2993640963
    Maximum total log space used (Bytes)       = 4330248963
    Secondary logs allocated currently         = 46
    Appl id holding the oldest transaction     = 9
    So now our log space is about 6.5 GB. See SAP Note 25.351 for details.
    GreetZ, AH

  • What is the current schedule for 6.1.2 and will it fix the Exchange transaction log issue in 6.1?

    Just spent the entire night with virtually no sleep with our firm's group of IT engineers trying to keep our Exchange system online due to massive transaction log growth. We confirmed the issue is related to the 6.1 calendar bug with ActiveSync. The workarounds are not practical for large groups of users who depend on their mobile devices for work. Our users have no way of knowing that they are causing an issue, so the Apple guidance isn't terribly useful to communicate to 1,000 users. When can we expect a resolution? The problem is only going to get worse as more and more users hit the bug. Does anyone know whether the issue will resolve as soon as someone installs the 6.1.2 update, assuming that has the fix? I'm not trying to bash anyone, but this is a very serious problem in enterprise deployments.

    The update was released some time today. 6.1.2 appears to specifically fix the Exchange issue causing the excess communications and logging. However, although the update is available, I do not see the notification badge on the Settings icon. Is this controlled by Apple, or is there a user setting I am missing somewhere? I would prefer that all users see the badge to expedite user action.

  • Log Reader Agent: transaction log file scan and failure to construct a replicated command

    I encountered the following error message related to the Log Reader Agent job generated as part of a transactional replication setup on the publisher. As a result of this error, none of the transactions propagated from the publisher to any of its subscribers.
    Error Message
    2008-02-12 13:06:57.765 Status: 4, code: 22043, text: 'The Log Reader Agent is scanning the transaction log for commands to be replicated. Approximately 24500000 log records have been scanned in pass # 1, 68847 of which were marked for replication, elapsed time 66018 (ms).'.
    2008-02-12 13:06:57.843 Status: 0, code: 20011, text: 'The process could not execute 'sp_replcmds' on ServerName.'.
    2008-02-12 13:06:57.843 Status: 0, code: 18805, text: 'The Log Reader Agent failed to construct a replicated command from log sequence number (LSN) {00065e22:0002e3d0:0006}. Back up the publication database and contact Customer Support Services.'.
    2008-02-12 13:06:57.843 Status: 0, code: 22037, text: 'The process could not execute 'sp_replcmds' on 'ServerName'.'.
    Replication agent job kept trying after specified intervals and kept failing with that message.
    Investigation
    I could clearly see there were transactions waiting to be delivered to subscribers from the following:
    SELECT * FROM dbo.MSrepl_transactions -- 1162
    SELECT * FROM dbo.MSrepl_commands -- 821922
    The following steps were taken to investigate the problem further. They confirmed how transactions were queued up waiting to be delivered to the distribution database.
    -- Returns the commands for transactions marked for replication
    EXEC sp_replcmds
    -- Returns a result set of all the transactions in the publication database transaction log that are marked for replication but have not been marked as distributed.
    EXEC sp_repltrans
    -- Returns the commands for transactions marked for replication in readable format
    EXEC sp_replshowcmds
    Resolution
    Taking a backup as suggested in the message wouldn't resolve the issue. None of the commands retrieved from sp_browsereplcmds for the LSN mentioned in the message had any syntactic problems either.
    exec sp_browsereplcmds @xact_seqno_start = '0x00065e220002e3d00006'
    In a desperate attempt to resolve the problem, I decided to drop all subscriptions. To my surprise, the Log Reader kept failing with the same error. I thought that with no subscriptions to the publications, the Log Reader Agent would have no reason to scan the publisher's transaction log, but obviously I was wrong. Even adding a new log reader using sp_addlogreader_agent after deleting the old one did not help. Restarting the server couldn't do much good either.
    EXEC sp_addlogreader_agent
    @job_login = 'LoginName',
    @job_password = 'Password',
    @publisher_security_mode = 1;
    When nothing else worked, I decided to try the following procedures, which are reserved for troubleshooting replication:
    --Updates the record that identifies the last distributed transaction of the server
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
    -- Flushes the article cache
    EXEC sp_replflush
    Bingo!
    The Log Reader Agent managed to start successfully this time. I wish I had used both commands before deciding to drop the subscriptions; it would have saved me the considerable effort and time spent re-creating them.
    Question
    Even though I managed to resolve the error and have replication functioning again, I think there might have been a better solution, and I would appreciate your feedback and your approach to resolving the problem.

    Hi Hilary,
    Will the command below truncate the log records marked for replication? Is there any data loss when we execute it? Can you please help me understand the internal workings of this command?
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
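    For context, a hedged summary: with @reset = 1, sp_repldone marks every transaction still pending in the log as already distributed, so the Log Reader skips them. Commands that had not yet reached the distribution database are lost to subscribers (which may then need to be reinitialized), but nothing is lost in the publisher database itself. The command does not truncate the log; it only releases replication's hold on it, so the space becomes reusable at the next log backup or checkpoint. A quick way to see whether replication is pinning the log (the database name is illustrative):
    -- Shows the oldest active and oldest non-distributed replicated transaction:
    DBCC OPENTRAN ('MyPublishedDb');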

  • The transaction log for database 'Test_db' is full due to 'LOG_BACKUP'

    My dear All,
    I came up with another issue:
    The app team is pushing data from the Prod1 server database 'test_1db' to the Prod2 server database 'User_db' through a job. While pushing the data, after some duration the job fails and throws the following error:
    'Error: 9002, Severity: 17, State: 2. The transaction log for database 'User_db' is full due to 'LOG_BACKUP'.'
    On the Prod2 server, the drive hosting the 'User_db' log has 400 GB free and the file growth increment is 250 MB. I am really confused about why the job is failing when there is plenty of space available. Kindly guide me in troubleshooting the issue, as it has been occurring for more than a week. Kindly refer to the screenshot for the same.
    Environment: SQL Server 2012 with SP1, Enterprise Edition; log backups run every 15 minutes and there is no high availability between the servers.
    Note: Changing to the simple recovery model may resolve this, but the app team requires the full recovery model because they need log backups.
    Thanks in advance,
    Nagesh

    Dear V,
    Thanks for the suggestions.
    I have followed these steps to resolve the issue; as of now my jobs are working without issues.
    Steps:
    Generate a log backup every 5 minutes.
    Increased the file growth from 500 MB to unrestricted.
    Once the whole job completes, we shrink the log file.
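    For anyone hitting the same 9002/'LOG_BACKUP' error, the fix boils down to backing up the log often enough that its space can be reused; a minimal sketch (the backup path is hypothetical):
    -- Free reusable space in the log by backing it up:
    BACKUP LOG [User_db]
    TO DISK = N'E:\LogBackups\User_db_log.trn';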
    Nagesh

  • How do I view the transaction log in SQL Server 2008?

    Hello,
    I want to know how to view all the transactions taken during a particular period of time. I know there is a log file, ending with .ldf, created for each database. But how do I view this file?
    Is there any tool in the SQL Server studio that can enable me to view the transactions for a given time period?
    The reason I want to view the log file is that, last week during a power outage, a certain amount of data was not written. And one of my friends had also messed up some of the data (unfortunately, she doesn't remember what she did).
    Thanks in advance.

    Hi,
    fn_dblog enables you to read from your transaction log, which contains very valuable information about what is happening in your database.
    SELECT * FROM fn_dblog(NULL, NULL)
    EXAMPLE:
    SELECT *
    FROM fn_dblog(NULL, NULL)
    WHERE Operation = 'LOP_DELETE_SPLIT'
    Thanks,
    Leks
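    Since the question asks about a particular period of time: the [Begin Time] and [Transaction Name] columns are populated on the LOP_BEGIN_XACT rows, so you can filter transaction starts by time. A sketch (the dates are illustrative):
    SELECT [Transaction ID], [Transaction Name], [Begin Time]
    FROM fn_dblog(NULL, NULL)
    WHERE Operation = 'LOP_BEGIN_XACT'
      AND [Begin Time] >= '2008/10/01 00:00:00'
      AND [Begin Time] <  '2008/10/02 00:00:00';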

  • How to reduce transaction log in SQL Server 2000

    Dear all gurus/experts,
    I need your help. The problem is the time it takes to shrink the transaction log in SQL Server 2000 when I use the shrink method. Is there another way to do it besides shrinking the database (right click: All Tasks --> Shrink Database)?
    I appreciate your answers. TIA
    Rgds,

    Hi Steve,
    Is this for a test system or a production system?
    For a test system, as per Ad's post, setting the recovery model to simple should do the trick.
    For a production system, I'd recommend you leave the recovery model at full and set up transaction log backups. This will keep the log file at a reasonable size, and you will gain point-in-time recovery (e.g. if you back up the logs on an hourly basis, you can recover the database to the last log backup, meaning you would never lose more than an hour's work).
    Kind Regards,
    Owen
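    To make this concrete, a minimal sketch in SQL Server 2000 syntax (the database, logical log file name, and path are hypothetical):
    -- Back up the log so its inactive portion becomes reusable:
    BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn'
    -- Then shrink the log file itself (logical file name, target size in MB):
    DBCC SHRINKFILE (MyDb_log, 100)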
