Log suspended for database sybsystemdb

This morning I got this error message:
1 task(s) are sleeping waiting for space to become available in the log segment for database sybsystemdb.
I then extended the space for sybsystemdb (data and log), and the log-suspend condition cleared automatically. Then I got more error messages:
Error: 21, Severity: 21, State: 1
00:00000:00037:2014/07/08 09:43:24.80 server  WARNING - Fatal Error 806 occurred at Jul  8 2014  9:43AM.  Please note the error and time, and contact a user with System Administrator (SA) authorization.
00:00000:00037:2014/07/08 09:43:24.81 server  Error: 806, Severity: 21, State: 1
00:00000:00037:2014/07/08 09:43:24.81 server  Could not find virtual page for logical page 1919221760  in database 'mydb'.
00:00000:00037:2014/07/08 09:43:24.81 server  Error while undoing log row in database 'mydb'.  Rid pageid = 0x0; row num = 0x0.
00:00000:00037:2014/07/08 09:43:24.81 server  Error: 6103, Severity: 17, State: 1
00:00000:00037:2014/07/08 09:43:24.81 server  Unable to do cleanup for the killed process; received Msg 3300.
00:00000:00037:2014/07/08 09:43:24.83 server  WARNING: spid 37 with suid 37 and curdbid 5 has an active transaction in dbid 5 with xactid (0, 0).
I then checked the documentation for errors 806 and 6103. It looks like something may be wrong, although the system seems fine at the moment. I want to know what further checks to run for this case to make sure ASE (12.5) is okay.

For the issue of the sybsystemdb log filling up ... enable auto-truncation of the log (sp_dboption sybsystemdb, 'trunc log on chkpt', true).
NOTE: sybsystemdb is typically used to manage distributed transactions and can also keep track of some proxy calls; for example, the sybsystemdb log can fill up due to a high volume of proxy calls. Unless you're working with distributed transactions, recovery of the sybsystemdb database usually isn't a concern, i.e., just enable 'trunc log on chkpt' to keep the log trimmed.
For the 806/6103 errors ... without knowing more details about what processing was hung, which processes may have been killed, etc ... I'd bounce the dataserver and run 'dbcc checkalloc()' against mydb to see if this is a permanent or transient error.
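
For reference, a minimal sketch of both suggestions in ASE 12.5 syntax (database names taken from the post above; sp_dboption typically needs to be run from master by a user with sa_role):

use master
go
-- keep the sybsystemdb log trimmed automatically
sp_dboption sybsystemdb, 'trunc log on chkpt', true
go
use sybsystemdb
go
checkpoint   -- the option takes effect after a checkpoint in the database
go

-- after the bounce, verify the allocation structures of the affected database
dbcc checkalloc(mydb)
go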

Similar Messages

  • The log file for database 'tempdb' is full

    Hi Experts,
    in my sync RFC to JDBC scenario I get the following error message which says:
    11.02.2009 11:23:20 Error Unable to execute statement for table or stored procedure.
    'Trns' (Structure 'XYZ')
    due to java.sql.SQLException:
    [Microsoft][SQLServer 2000 Driver for JDBC][SQLServer]
    The log file for database 'tempdb' is full. Back up the transaction
    log for the database to free up some log space.
    11.02.2009 11:23:20 Error JDBC message processing failed;
    reason Error processing request in sax parser: Error when executing
    statement for table/stored proc. 'Trns' (structure 'XYZ'):
    java.sql.SQLException: [Microsoft][SQLServer 2000 Driver for JDBC][SQLServer]
    The log file for database 'tempdb' is full. Back up the transaction log
    for the database to free up some log space.
    see here:
    http://img23.imageshack.us/img23/6541/unbenannthd9.jpg
    does anyone know how to solve this problem?
    thanks
    chris

    Christian,
    See this page for more information on the error you're getting: http://sqlserver2000.databases.aspfaq.com/why-is-tempdb-full-and-how-can-i-prevent-this-from-happening.html.
    Kind regards,
    Koen
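
    If you can reach the SQL Server directly, a quick way to confirm how full tempdb's log actually is before changing anything is the standard log-space report (a small sketch, nothing scenario-specific):
    -- shows log size and percent used for every database, including tempdb
    DBCC SQLPERF(LOGSPACE);
    GO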

  • ASE - Started filling free space info for database

    Hi All
    I have an ASE db that is in a RECOVERY state.
    This is the last communication in the log: Started filling free space info for database 'BWP'
    Does anyone know what this means?
    There is a SAP BW running on ASE 15.7.
    I am an SAP consultant working onsite at a client and the environment is down due to the DB being in this state.
    Any ideas?
    00:0002:00000:00014:2014/07/03 10:27:18.04 server  Recovering database 'BWP'.
    00:0002:00000:00014:2014/07/03 10:27:18.05 server  Started estimating recovery log boundaries for database 'BWP'.
    00:0002:00000:00014:2014/07/03 10:27:18.07 server  Database 'BWP', checkpoint=(249429512, 203), first=(249429512, 203), last=(249429513, 46).
    00:0002:00000:00014:2014/07/03 10:27:18.07 server  Completed estimating recovery log boundaries for database 'BWP'.
    00:0002:00000:00014:2014/07/03 10:27:18.07 server  Started ANALYSIS pass for database 'BWP'.
    00:0002:00000:00014:2014/07/03 10:27:18.07 server  Completed ANALYSIS pass for database 'BWP'.
    00:0002:00000:00014:2014/07/03 10:27:18.07 server  Log contains all committed transactions until 2014/07/03 10:19:12.65 for database BWP.
    00:0002:00000:00014:2014/07/03 10:27:18.07 server  Started REDO pass for database 'BWP'. The total number of log records to process is 81.
    00:0002:00000:00014:2014/07/03 10:27:18.14 server  Completed REDO pass for database 'BWP'.
    00:0002:00000:00014:2014/07/03 10:27:18.14 server  Timestamp for database 'BWP' is (0x0004, 0xd609797b).
    00:0002:00000:00014:2014/07/03 10:27:18.14 server  Recovery of database 'BWP' will undo incomplete nested top actions.
    00:0002:00000:00014:2014/07/03 10:27:18.14 server  Started recovery checkpoint for database 'BWP'.
    00:0002:00000:00014:2014/07/03 10:27:18.14 server  Completed recovery checkpoint for database 'BWP'.
    00:0002:00000:00014:2014/07/03 10:27:18.14 server  Started filling free space info for database 'BWP'.
    ASE VERSION:
    Adaptive Server Enterprise/15.7/EBF 22779 SMP SP122 /P/x86_64/Enterprise Linux/ase157sp12x/3662/64-bit/FBO/Sat Apr 19 05:48:19 2014
    Any suggestions on what to do?
    J

    ASE tracks the free space available on each segment in memory.
    If the server is shut down politely, ASE can store the current values on disk and retrieve them at startup.  However, if the server is shut down abruptly (shutdown with nowait, crash, power failure, kill -9, etc.) the free space figures don't get written out.  In that case ASE has to recalculate the free space values by reading all the allocation pages or OAM pages in the database.  On a big database, that can take time.
    Your main choices are to
    1) wait it out
    2) set the "no freespace accounting" database option and reboot
    Disabling free-space accounting for data segments
    While recovery will be much faster with freespace accounting turned off, there are side effects such as unexpected 1105 errors (no free space...) and thresholds not firing as expected.  In general I'd advise waiting it out and trying to avoid the use of "shutdown with nowait" going forward (which may or may not be what brought the server down, but it is the main cause you can control).
    -bret
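
    If you do go with option 2, the option is set with sp_dboption; a rough sketch of the usual syntax follows ('BWP' is taken from the log above; whether it can be applied while the database is still in recovery is a separate question, and the checkpoint step only works once the database is accessible):
    use master
    go
    sp_dboption BWP, 'no free space acctg', true
    go
    use BWP
    go
    checkpoint
    go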

  • (Cisco Historical Reporting / HRC ) All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054

    Hi All,
    I am getting the error message "All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054" when trying to log into HRC (this user has reporting capabilities). I checked the log files and this is what I found out:
    The log file stated that there were ongoing connections of HRC with the CCX  (I am sure there isn't any active login to HRC)
    || When you tried to login the following error was being displayed because the maximum number of connections were reached for the server .  We can see that a total number of 5 connections have been configured . ||
    1: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Current number of connections (5) from historical Clients/Scheduler to 'CRA_DATABASE' database exceeded the maximum number of possible connections (5).Check with your administrator about changing this limit on server (wfengine.properties), however this might impact server performance.
    || Below we can see all 5 connections being used up . ||
    2: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:[DB Connections From Clients (count=5)]|[(#1) 'username'='uccxhrc','hostname'='3SK5FS1.ucsfmedicalcenter.org']|[(#2) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#3) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#4) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#5) 'username'='uccxhrc','hostname'='47BMMM1.ucsfmedicalcenter.org']
    || Once the maximum number of connection was reached it threw an error . ||
    3: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Number of max connection to 'CRA_DATABASE' database was reached! Connection could not be established.
    4: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Database connection to 'CRA_DATABASE' failed due to (All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054.)
    Current exact UCCX Version 9.0.2.11001-24
    Current CUCM Version 8.6.2.23900-10
    Business impact  Not Critical
    Exact error message  All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054
    What is the OS version of the PC you are running, and is it a physical or virtual machine that is running the HRC client?
    OS Version: Windows 7 Home Premium 64-bit, and it's a physical machine.
    The Max DB Connections for Report Client Sessions is set to 5 for each server (there are two servers). The number of HR Sessions is set to 10.
    I wanted to know if there is a way to find the HRC sessions that are active now and terminate one, several, or all of those sessions from the server end?

    We have had this "PRX5" problem with Exchange 2013 since the RTM version.  We recently applied CU3, and it did not correct the problem.  We have seen this problem on every Exchange 2013 we manage.  They are all installations where all roles
    are installed on the same Windows server, and in our case, they are all Windows virtual machines using Windows 2012 Hyper-V.
    We have tried all the "this fixed it for me" solutions regarding DNS, network cards, host file entries and so forth.  None of those "solutions" made any difference whatsoever.  The occurrence of the temporary error PRX5 seems totally random. 
    About 2 out of 20 incoming mail test by Microsoft Connectivity Analyzer fail with this PRX5 error.
    Most people don't ever notice the issue because remote mail servers retry the connection later.  However, telephone voice mail systems that forward voice message files to email, or other such applications such as your scanner, often don't retry and
    simply fail.  Our phone system actually disables all further attempts to send voice mail to a particular user if the PRX5 error is returned when the email is sent by the phone system.
    Is Microsoft totally oblivious to this problem?
    PRX5 is a serious issue that needs an Exchange team resolution, or at least an acknowledgement that the problem actually does exist and has negative consequences for proper mail flow.
    JSB

  • The transaction log for database 'Test_db' is full due to 'LOG_BACKUP'

    My dear All,
    Came up with another issue:
    The app team is pushing data from 'test_1db' on the Prod1 server to 'User_db' on the Prod2 server through a job. While pushing the data, after some duration the job fails and throws the following error:
    'Error: 9002, Severity: 17, State: 2.'The transaction log for database 'User_db' is full due to 'LOG_BACKUP'''.
    On the Prod2 server the 'User_db' log has plenty of room (400 GB free on the drive) and growth is set to 250 MB. I am really confused about why the job is failing when there is so much space available. Kindly guide me in troubleshooting this issue, as it has been occurring for more than a week. Kindly refer to the screenshot for the same.
    Environment: SQL Server 2012 with SP1, Enterprise edition. Log backups run every 15 minutes and there is no high availability between the servers.
    Note: Changing to the simple recovery model might resolve this, but the app team is required to run in the full recovery model as they need log backups.
    Thanks in advance,
    Nagesh
    Nagesh

    Dear V,
    Thanks for the suggestions.
    I have followed some steps to resolve the issue, and as of now my jobs are working without issues.
    Steps:
    Generate a log backup every 5 minutes.
    Increased the file growth from 500 MB to unrestricted.
    Once the whole job completes, we shrink the log file.
    Nagesh
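
    For anyone finding this later, a rough sketch of the log-backup-plus-shrink steps described above (the backup path and the logical log file name 'User_db_log' are placeholders, not values from the original post):
    -- back up the transaction log so its space can be reused
    BACKUP LOG [User_db]
    TO DISK = N'X:\Backups\User_db_log.trn';
    GO
    -- then shrink the physical log file back down (target size in MB)
    DBCC SHRINKFILE (N'User_db_log', 500);
    GO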

  • The transaction log for database 'SharePoint_Config' is full

    Hi all ,
    I am very new to SharePoint. When I tried to remove a WSP file from Central Administration I got a message like:
    The transaction log for database 'SharePoint_Config' is full. To find out why space in the log cannot
    be reused, see the log_reuse_wait_desc column in sys.databases.
    Can anybody help me solve this, please? I saw one solution on the net with the steps below, but I don't know how to carry them out. Can anybody help me with how to do these steps, please?
    1. Take the configuration database offline and detach it
    2. Copy the current MDF to a new location (to be used as a way of recovering the database if needed)
    3. Put the database back online, reattach it, and then put it in simple mode (from full), with the aim of stopping the database from increasing in size
    4. Shrink the database and recover log space
    5. Should the shrinking fail, we'd look at detaching the database and making a sideways copy of the log file to another database
    6. We would then reattach the database, which should generate a new log file
    Thank you 
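
    Before attempting any of those steps, it may be worth running the check the error message itself points to; a small sketch, assuming you can open a query window against the SharePoint SQL instance:
    -- shows why SQL Server cannot currently reuse log space for this database
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'SharePoint_Config';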

    Hi Soumya,
    I don't have any DBA resource, and it is not a lab environment; it's a corporate environment.
    Thanks a lot for your quick reply.
    I normally don't just come onto the threads and tell people that they need a consultant, but you might want to look at getting a consultant to help make sure that your SharePoint environment is healthy and can be restored in the event of a server failure.
    Based on the problem that you have described it might not be recoverable, or at least it might not be as recoverable as you want it to be.
    At the very least you'll want to watch this video of my session at TechEd 2014 which talks about backups and how to set them up.
    http://channel9.msdn.com/events/TechEd/NorthAmerica/2014/DBI-B214#fbid=
    Thank You,
    Denny Cherry

  • How to use the mirrored and log shipped secondary database for update or insert operations

    Hi,
    I am doing a DR Test where I need to test the mirrored and log shipped secondary database but without stopping the mirroring or log shipping procedures. Is there a way to get the data out of mirrored and log shipped database to another database for update
    or insert operations?
    A database snapshot can be used only on the mirrored database, but updates cannot be done. Also, the secondary database of log shipping cannot be used for a database snapshot. Any ideas on how this can be implemented?
    Thanks,
    Preetha
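
    For the read-only side of the test, a database snapshot can at least be created on the mirror and queried; a sketch of the syntax with a placeholder snapshot name, logical file name, and path (updates still aren't possible, as you note):
    -- run on the mirror server; the logical name must match the source data file's logical name
    CREATE DATABASE MyDB_DRTest_Snapshot
    ON ( NAME = MyDB_Data, FILENAME = 'D:\Snapshots\MyDB_DRTest.ss' )
    AS SNAPSHOT OF MyDB;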

    Hmm, in this case I think you need Merge Replication; otherwise it defeats the purpose of DR... again, in that case.
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Configurin​g Database for logging to citadel Database using shared variable engine

    Hello All,
    I have two systems with me here, one with LabVIEW 8.5 and one with LabVIEW 8.6. I'm using shared variables in my code and I am logging to the Citadel database. On the PC with LabVIEW 8.5 I am able to log data to the Citadel database, but with the same code I am not able to log data from the PC with LabVIEW 8.6. Both PCs have the database installed, and the connection to the database exists when I test the connection through the Control Panel. I would like to know what configuration (in LabVIEW, in the code, in the database, or on the PC) has to be done for logging to the database, because the PC with LabVIEW 8.6 was added recently and the code was upgraded to 8.6.

    It was due to a DLL in LabVIEW, a DLL named nitaglv.dll.
    Now I have an issue with data logging from the EXE: once the shared variables are deployed programmatically from the EXE, the data logging stops.
    Can I get any input on this issue?

  • Does anyone succeed logging a SR for database?

    In Metalink,
    I am having trouble finding "database" in the product list; I have expanded the product list but nothing changed!
    Which product must I choose from that dropdown list for the database? Please, someone tell me.
    Please, somebody help me. I called Oracle support, but the automated message kept saying "wait"; I had waited enough but nobody answered!
    Soon I'll be going mad!

    Hi
    I have successfully logged an SR, thanks for your help.
    Previously I have created a lot of SRs about Application Server, OID, iFS, etc.,
    but when I tried to open an SR for the database today, I scanned the product list dropdown a dozen times after clicking "expand product list" and saw nothing containing the word "database".
    Luckily, I asked this forum and got the answer that I must select "Oracle Server".
    This is weird, because this product is the database, as mentioned in all documentation and release notes.
    I didn't understand that it includes the database, so that's confusing for me, and I am sure it's confusing for anyone who is hastily trying to log his first SR about the database product!
    Thanks for all the help!
    cheers,
    ceren

  • The transaction log for database 'mydatabase' is full. To find out why space in the log cannot be reused, see the log_reuse_wait

    Every time I get this error, at different points of testing inserts and deletions on my table:
    The transaction log for database 'mydatabase' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
    Why do I keep getting this?  All I'm doing is deleting several hundred thousand records and inserting them into a couple of tables.  I shouldn't have to truncate my log every time or my application bombs out!
    sys.databases only gives me this info for log_reuse_wait_desc which does nothing for me:
    LOG_BACKUP

    -- Check the current size and usage of the database and its log
    sp_helpdb BizTalkDTADb
    GO
    -- Switch to the simple recovery model so the log no longer waits on a log backup
    ALTER DATABASE BizTalkDTADb
    SET RECOVERY SIMPLE;
    GO
    -- Shrink the log file back down (target size of 1 MB)
    DBCC SHRINKFILE (BizTalkDTADb_log, 1);
    GO
    -- Verify the new sizes
    sp_helpdb BizTalkDTADb
    GO
    -- Switch back to the full recovery model (take a full backup afterwards to restart the log chain)
    ALTER DATABASE BizTalkDTADb
    SET RECOVERY FULL
    GO

  • The transaction log for database 'BizTalkMsgBoxDb' is full.

    Hi All,
    We are getting the following error continuously in the Event Viewer on our UAT servers. I checked the jobs, and all the backup jobs were failing on the step that backs up the transaction log file, giving the same error. Our DBAs cleaned the MessageBox manually and backed up the DB, but after some time the jobs start failing again and this error is logged in the Event Viewer.
    The transaction log for database 'BizTalkMsgBoxDb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases".
    Thanks,
    Abdul Rafay
    http://abdulrafaysbiztalk.wordpress.com/
    Please mark this answer if it helps

    Putting the database into simple recovery mode and shrinking the log file isn't going to help: it'll just grow again, it will probably fragment across the disk thereby impacting performance and, eventually, it will fill up again for the same reason
    as before.  Plus you put yourself in a very vulnerable position for disaster recovery if you change the recovery mode of the database: and that's before we've addressed the distributed transaction aspect of the BizTalkDatabases.
    First, make sure you're backing up the log file using the BizTalk job Backup BizTalk Server (BizTalkMgmtDb).  It might be that the log hasn't been backed up and is full of transactions: and, eventually, it will run out of space.  Configuration
    instructions at this link:
    http://msdn.microsoft.com/en-us/library/aa546765(v=bts.70).aspx  Your DBA needs to get the backup job running properly rather than panicking!
    If this is running properly, and backing up (which was the case for me) and the log file is still full, run the following query:
    SELECT Name, log_reuse_wait_desc
    FROM sys.databases
    This will tell you why the log file isn't properly clearing down and why it cannot use the space inside.  When I had this issue, it was due to an active transaction.
    I checked for open transactions on the server using this query:
    SELECT
        s_tst.[session_id],
        s_es.[login_name] AS [Login Name],
        DB_NAME(s_tdt.database_id) AS [Database],
        s_tdt.[database_transaction_begin_time] AS [Begin Time],
        s_tdt.[database_transaction_log_record_count] AS [Log Records],
        s_tdt.[database_transaction_log_bytes_used] AS [Log Bytes],
        s_tdt.[database_transaction_log_bytes_reserved] AS [Log Rsvd],
        s_est.[text] AS [Last T-SQL Text],
        s_eqp.[query_plan] AS [Last Plan]
    FROM sys.dm_tran_database_transactions s_tdt
    JOIN sys.dm_tran_session_transactions s_tst
        ON s_tst.[transaction_id] = s_tdt.[transaction_id]
    JOIN sys.[dm_exec_sessions] s_es
        ON s_es.[session_id] = s_tst.[session_id]
    JOIN sys.dm_exec_connections s_ec
        ON s_ec.[session_id] = s_tst.[session_id]
    LEFT OUTER JOIN sys.dm_exec_requests s_er
        ON s_er.[session_id] = s_tst.[session_id]
    CROSS APPLY sys.dm_exec_sql_text(s_ec.[most_recent_sql_handle]) AS s_est
    OUTER APPLY sys.dm_exec_query_plan(s_er.[plan_handle]) AS s_eqp
    ORDER BY [Begin Time] ASC;
    GO
    And this told me the spid of the process with an open transaction on BizTalkMsgBoxDb (in my case, this was something that had been open for several days).  I killed the transaction using KILL spid, where spid is an integer.  Then I ran the BizTalk Database Backup job again, and the log file backed up and cleared properly.
    Incidentally, just putting the database into simple recovery mode would have emptied the log file, giving it lots of space to fill up again.  But it doesn't deal with the root cause: why the backups were failing in the first place.

  • "The transaction log for database 'speakasiaonline' is full." I'm getting this message when I m trying to login to speakasia website and am unable to open it. Pl help.

    The transaction log for database 'speakasiaonline' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

    What does it return?
    SELECT log_reuse_wait_desc FROM sys.databases WHERE name = 'speakasiaonline'
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • The transaction log for database 'ECC' is full + ECC6.0 Installation Failur

    Guys,
    My ECC6 installation failed after an 8-hour run with the following error log snippet...
    exec sp_bindefault 'numc3_default','SOMG.MSGNO'
    DbSlExecute: rc = 99
      (SQL error 9002)
      error message returned by DbSl:
    The transaction log for database 'ECC' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
    (DB) ERROR: DDL statement failed
    (ALTER TABLE [SOMG] ADD CONSTRAINT [SOMG~0] PRIMARY KEY CLUSTERED ( [MANDT], [OBJTP], [OBJYR], [OBJNO] ) )
    DbSlExecute: rc = 99
      (SQL error 4902)
      error message returned by DbSl:
    Cannot find the object "SOMG" because it does not exist or you do not have permissions.
    The ECCLOG1 log file has a 25 GB initial size and its growth was restricted to 10% (proposed by SAPinst)...
    I'm assuming this error was due to a lack of growth space for the ECCLOG1 file. Am I right? If so, how much space should I allocate for this log, or is there a workaround?
    Thanks in advance

    Kasu,
    If SQL is complaining that the log file is full then the phase of the install that creates the SQL data/log files has already occurred (happens early in the install) and the install is importing programs, config and data into the db.
    Look at the windows application event log for "Transaction log full" events to confirm.
    To continue, in SQL Query analyzer try:
    "Backup log [dbname] with truncate_only"
    This will remove only inactive parts of the log and is safe when you don't require point-in-time recovery (which you don't during an install).
    Then, go to the SQL Enterprise manager, choose the db in question and choose the shrink database function, choose to shrink only the transaction log file and the space made empty by the truncate will be removed from the file.
    Change the recovery mode in SQL Server to "simple" so that the log file does not grow for the remainder of the install.
    Make sure you change the recovery mode back to "full" after the install is complete.
    Your transaction log appears to have filled the disk partition you have assigned to it.
    25GB is huge for a transaction log and you would normally not see them grow this large if you are doing regular scheduled tlog backups (say every 30-60 minutes) because the log will truncate every time, but it's not unusual to see one get big during an install, upgrade or when applying hotpacks.
    Tim
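
    As a rough sketch, Tim's steps expressed in T-SQL for SQL Server 2000/2005 (WITH TRUNCATE_ONLY was removed in SQL Server 2008, so this only applies to those older releases; 'ECC' and 'ECCLOG1' are taken from the messages above, and the shrink target is an arbitrary example):
    -- discard the inactive part of the log (no point-in-time recovery afterwards)
    BACKUP LOG ECC WITH TRUNCATE_ONLY;
    GO
    -- shrink only the transaction log file (target size in MB)
    DBCC SHRINKFILE (ECCLOG1, 1024);
    GO
    -- keep the log from growing for the rest of the install
    ALTER DATABASE ECC SET RECOVERY SIMPLE;
    GO
    -- remember to switch back afterwards: ALTER DATABASE ECC SET RECOVERY FULL;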

  • Any body able to help me : The transaction log for database 'KDS' is full

    Hi Experts,
    I am facing the following problem when I enter the portal.
    [NWMss][SQLServer JDBC Driver][SQLServer]The transaction log for database 'KDS' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases.
    Exception id: 10:51_28/09/07_0006_3439451
    See the details for the exception ID in the log file
    Please give me a solution urgently.
    Message was edited by:
            Ramanan Panchabakesan

    Ameya,
    The log file shows the following [I have bolded the warning number and bolded + underlined the error number]; kindly see this:
    #1.5 #001320E973F2004900000281000010D400043B2BF18A5CFD#1190959829445#com.sap.engine.services.deploy##com.sap.engine.services.deploy######01eb74206d8811dc8114001320e973f2#SAPEngine_System_Thread[impl:5]_5##0#0#Warning#1#/System/Server#Plain###
    Warning occurred on server <b>3439450</b> during startApp sap.com/cafruntimemonitoringear : Application sap.com/cafruntimemonitoringear has weak reference to application sap.com/com.sap.jdo and is starting it!#
    #1.5 #001320E973F2004900000283000010D400043B2BF18A5DCF#1190959829445#com.sap.engine.services.deploy##com.sap.engine.services.deploy######01eb74206d8811dc8114001320e973f2#SAPEngine_System_Thread[impl:5]_5##0#0#Warning#1#/System/Server#Plain###
    Warning occurred on server <b>3439450</b> during startApp sap.com/cafruntimemonitoringear : Application sap.com/cafruntimemonitoringear has weak reference to application sap.com/caf~km.proxies and is starting it!#
    #1.5 #001320E973F2004900000285000010D400043B2BF18A5E85#1190959829445#com.sap.engine.services.deploy##com.sap.engine.services.deploy######01eb74206d8811dc8114001320e973f2#SAPEngine_System_Thread[impl:5]_5##0#0#Warning#1#/System/Server#Plain###
    Warning occurred on server <b>3439450</b> during startApp sap.com/cafruntimemonitoringear : Application sap.com/cafruntimemonitoringear has weak reference to application sap.com/cafruntimeear and is starting it!#
    #1.5 #001320E973F2004900000287000010D400043B2BF18A5F72#1190959829461#com.sap.engine.services.deploy##com.sap.engine.services.deploy######01eb74206d8811dc8114001320e973f2#SAPEngine_System_Thread[impl:5]_5##0#0#Warning#1#/System/Server#Plain###
    Warning occurred on server <b>3439450</b> during startApp sap.com/cafruntimemonitoringear : Application sap.com/cafruntimemonitoringear has a weak reference to resource jmsfactory/TopicConnectionFactory with type javax.jms.TopicConnectionFactory but the resource is not available and the application may not work correctly!#
    #1.5 #001320E973F2001100000002000010D400043B2BF18FC93A#1190959829805#com.sap.jms##com.sap.jms.LOCK_EXCEPTION######837a4dd06d8911dcbe35001320e973f2#SAPEngine_System_Thread[impl:5]_34##0#0#Warning#1#/System/Server#Java###Couldn't acquire Lock $service.jms_provider. Current JMS lock owner is server node with id = .#1#<i><b><u>3439451</u></b></i>#
    #1.5 #001320E973F200490000028D000010D400043B2BF1A40473#1190959831134#com.sap.engine.services.jndi##com.sap.engine.services.jndi######01eb74206d8811dc8114001320e973f2#SAPEngine_System_Thread[impl:5]_5##0#0#Path##Java###Caught #1#com.sap.engine.services.jndi.persistent.exceptions.JNDIException: Error during s object serialization.
    at com.sap.engine.services.jndi.implclient.ClientContext.serializeObject(ClientContext.java:3335)
    at com.sap.engine.services.jndi.implclient.ClientContext.serializeDirObject(ClientContext.java:3224)
    at com.sap.engine.services.jndi.implclient.ClientContext.rebind(ClientContext.java:1032)
    at com.sap.engine.services.jndi.implclient.ClientContext.rebind(ClientContext.java:957)
    at com.sap.engine.services.servlets_jsp.server.runtime.context.WebApplicationConfig.bind(WebApplicationConfig.java:455)
    at com.sap.engine.services.servlets_jsp.server.runtime.context.WebApplicationConfig.parse(WebApplicationConfig.java:116)
    at com.sap.engine.services.servlets_jsp.server.runtime.context.ApplicationContext.init(ApplicationContext.java:617)
    at com.sap.engine.services.servlets_jsp.server.container.WebContainerHelper.createContext(WebContainerHelper.java:540)
    at com.sap.engine.services.servlets_jsp.server.container.StartAction.prepareStart(StartAction.java:51)
    at com.sap.engine.services.servlets_jsp.server.container.WebContainer.prepareStart(WebContainer.java:475)
    at com.sap.engine.services.deploy.server.application.StartTransaction.prepareCommon(StartTransaction.java:223)
    at com.sap.engine.services.deploy.server.application.StartTransaction.prepareLocal(StartTransaction.java:176)
    at com.sap.engine.services.deploy.server.application.ApplicationTransaction.makeAllPhasesLocal(ApplicationTransaction.java:365)
    at com.sap.engine.services.deploy.server.application.ParallelAdapter.runInTheSameThread(ParallelAdapter.java:132)
    at com.sap.engine.services.deploy.server.application.ParallelAdapter.makeAllPhasesLocalAndWait(ParallelAdapter.java:250)
    at com.sap.engine.services.deploy.server.DeployServiceImpl.startApplicationLocalAndWait(DeployServiceImpl.java:4450)
    at com.sap.engine.services.deploy.server.DeployServiceImpl.startApplicationsInitially(DeployServiceImpl.java:2610)
    at com.sap.engine.services.deploy.server.DeployServiceImpl.clusterElementReady(DeployServiceImpl.java:2464)
    at com.sap.engine.services.deploy.server.ClusterServicesAdapter.containerStarted(ClusterServicesAdapter.java:42)
    at com.sap.engine.core.service630.container.ContainerEventListenerWrapper.processEvent(ContainerEventListenerWrapper.java:144)
    at com.sap.engine.core.service630.container.AdminContainerEventListenerWrapper.processEvent(AdminContainerEventListenerWrapper.java:19)
    at com.sap.engine.core.service630.container.ContainerEventListenerWrapper.run(ContainerEventListenerWrapper.java:102)
    at com.sap.engine.frame.core.thread.Task.run(Task.java:64)
    at com.sap.engine.core.thread.impl5.SingleThread.execute(SingleThread.java:79)
    at com.sap.engine.core.thread.impl5.SingleThread.run(SingleThread.java:150)
    Caused by: com.sap.engine.services.jndi.persistent.exceptions.JNDIException: Error during s object serialization.
    at com.sap.engine.services.jndi.persistent.RemoteSerializator.toByteArray(RemoteSerializator.java:55)
    at com.sap.engine.services.jndi.implclient.ClientContext.serializeObject(ClientContext.java:3332)
    ... 24 more
    Caused by: java.io.NotSerializableException: com.sap.engine.system.ORBProxy
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1054)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:278)
    at com.sap.engine.services.jndi.persistent.RemoteSerializator.toByteArray(RemoteSerializator.java:48)
    ... 25 more

  • The transaction log for database 'EMP_SP_SearchApp_CrawlStoreDB_32fdb1522c5249088db8b09c1917dbec' is full due to 'ACTIVE_TRANSACTION'

    Hi
    We have a SharePoint farm into which a huge number of documents are uploaded daily, and we configured RBS for the SQL Server content DB.
    We are facing an issue with the SQL Server instance:
    Suddenly the SQL Server instance restarts; sometimes it fails over to node1 and sometimes it just restarts and keeps running on the current server.
    We did some investigation and found that:
    - Analysis of the data shows that slow I/O for tempdb.ldf is the cause of the issue
    - We tried to check tempdb using the database properties, but I couldn't see the properties; it gives me an error
    - There might be a big transaction locking tempdb.
    We also noted that there are a lot of errors:
     The transaction log for database 'MOJSP_SearchApp_CrawlStoreDB_32fdb1522c5249088db8b09c1917dbec' is full due to 'ACTIVE_TRANSACTION'
    This might also be part of the problem causing locking.
    We found errors like:
    The transaction log for database 'EMP_SP_SearchApp_CrawlStoreDB_32fdb1522c5249088db8b09c1917dbec' is full due to 'ACTIVE_TRANSACTION'
    I see that log_reuse_wait_desc (what reuse of transaction log space is currently waiting on, as of the last checkpoint) is 'ACTIVE_TRANSACTION'.
    adil

    Hi Adil,
    Method1:
    On your SharePoint SQL Server database, open the SQL Server Management Studio.
    Connect to the local SQL Server.
    Right click Your Database Name.
    Go to -> Properties -> Options.
    Change Recover Model from Full to Simple -> Click OK.
    Right click Your Database Name, go to Tasks -> Shrink -> Shrink Files.
    Select a File type of Log, then Shrink.
    Method2:
    Run SQL Management Studio and login to your SharePoint instance.
    Click on "New Query".
    Type in the following commands:
    USE [Your Database Name];
    BACKUP LOG [Your Database Name] WITH TRUNCATE_ONLY;
    DBCC SHRINKFILE ([Your Log File Name], 1);
    Replace [Your Database Name] and [Your Log File Name] with your database and log file names.
    This will shrink your log file to 1 MB.
    Method3:
    Issue CHECKPOINT on the tempdb
    USE tempdb
    GO
    CHECKPOINT
    GO
    Take a transaction log backup on the user database:
    BACKUP LOG CustomerXS
    TO DISK = N'M:\MSSQL\Backup\MSSQLSERVER\XS and RT\XS_Movement.trn'
    WITH COMPRESSION
    GO
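
    Since log_reuse_wait_desc reports ACTIVE_TRANSACTION, the step worth adding to the methods above is finding the open transaction itself; a small sketch (note also that BACKUP LOG ... WITH TRUNCATE_ONLY in Method 2 no longer exists on SQL Server 2008 and later, where a regular log backup is needed instead):
    -- report the oldest active transaction in the affected database
    USE [EMP_SP_SearchApp_CrawlStoreDB_32fdb1522c5249088db8b09c1917dbec];
    GO
    DBCC OPENTRAN;
    GO
    -- confirm the wait reason afterwards
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'EMP_SP_SearchApp_CrawlStoreDB_32fdb1522c5249088db8b09c1917dbec';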
