SMS_NOTIFICATION_SERVER process Active Transaction preventing SQL log file backup

Hello,
I have been working on adding a few thousand machines to our SCCM 2012 R2 environment. Recently, after attaching several of these systems, there was a spike in transaction log activity due to the communication and inventory of the new machines. The log file would fill up quickly, but the log file backup would not run and allow the log space to be reused. On investigation, my DB admin and I noticed that the SMS_NOTIFICATION_SERVER process was holding open an Active Transaction that lasted one hour and then restarted at the end of the hour. This transaction was effectively preventing the backup of the log file. As a test, I briefly stopped the SMS_NOTIFICATION_SERVER process and the transaction log began functioning correctly. I have included a screenshot of the process in the SQL Activity Monitor. Has anyone experienced and resolved this issue? Is there any way to reduce the one-hour time frame, or to change the behaviour so that the process releases the log file for backup when the log is getting full?
Regards,
Dave

We had it in Simple only briefly yesterday when working on the issue.  It is in Full recovery mode.
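For anyone hitting the same thing, a minimal diagnostic sketch (the site database name CM_XXX is hypothetical; substitute your own) to confirm what is pinning the log:

-- Show why log reuse is being delayed (e.g. ACTIVE_TRANSACTION, LOG_BACKUP)
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'CM_XXX';
-- Show the oldest active transaction in the database
DBCC OPENTRAN ('CM_XXX');

If log_reuse_wait_desc reports ACTIVE_TRANSACTION, the log cannot be truncated past the oldest open transaction, even by a log backup, until that transaction commits or rolls back.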

Similar Messages

  • SQL LOG FILE SIZE INCREASING

    Hi DBA's
    The SQL log file occupies a large amount of disk space on the server; the overall database size is 8 GB.
    How do I decrease the size of the SQL LDF file on the server? Please explain the suitable steps to perform.
    Thanks
    DBA

    use master
    go
    dump transaction <YourDBName> with no_log
    go
    use <YourDBName>
    go
    -- 100 is the target size in MB; change it to your needs
    DBCC SHRINKFILE (<YourDBNameLogFileName>, 100)
    go
    -- then you can call this to check that all went fine
    dbcc checkdb (<YourDBName>)
    Andy,
    What is the point in asking the user to use NO_LOG when you did not even mention what this evil command does? It is seriously not required here, given that the initial size of the log file is set to 8 GB.
    Also, what is the point of running CHECKDB?
    I don't agree with any part of what you pointed out.
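    For reference, DUMP TRANSACTION and the NO_LOG option were removed in SQL Server 2008, so a sketch of the supported approach on current versions (the backup path is hypothetical) is a log backup followed by a shrink:
    BACKUP LOG <YourDBName> TO DISK = N'D:\Backups\YourDBName_log.trn'
    go
    USE <YourDBName>
    go
    DBCC SHRINKFILE (<YourDBNameLogFileName>, 100) -- target size in MB
    go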

  • CPO SQL Log Files

    Hey guys
    We have started to see an exponential growth in our SQL log files that is in direct correlation with running more processes more consistently.
    Has anyone had any issues with running a log truncating script to both the TEO_processlog and TEO_reportinglog files?
    something along the lines of 
    ALTER DATABASE ExampleDB SET RECOVERY SIMPLE
    DBCC SHRINKFILE('ExampleDB_log', 0, TRUNCATEONLY)
    Thanks 
    Matt

    Matt,
    Yes, many people (if they do not need the t-logs) will reduce them. In 3.0 (during install) you can actually set it to SIMPLE mode instead of the full recovery mode; I do that on almost all of my boxes. Prior to CPO 3.0 you would have to do it the way you show above.
    On most of my pre 3.0 boxes I would do something like
    USE <TEOProcess_DB_Name>;
    GO
    -- Truncate the log by changing the database recovery model to SIMPLE.
    ALTER DATABASE <TEOProcess_DB_Name> SET RECOVERY SIMPLE;
    GO
    -- Shrink the truncated log file to 1000 MB.
    DBCC SHRINKFILE ("<TEOProcess_log_file_name>", 1000);
    GO
    Of course this is on SQL only. You can find more information on DBCC Shrinkfile at Microsoft's help site.
    If you need to reset the database to full mode, it's:
    -- Reset the database recovery model.
    ALTER DATABASE <TEOProcess_DB_Name> SET RECOVERY FULL;
    GO
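    One note worth adding here: after switching back to FULL, the database keeps behaving as if it were in SIMPLE until a new base backup establishes a log chain, so log backups will fail until then. A sketch (the backup path is hypothetical):
    -- Re-establish the log chain after switching back to FULL
    BACKUP DATABASE <TEOProcess_DB_Name> TO DISK = N'D:\Backups\TEOProcess_full.bak';
    GO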
    --Shaun

  • Which background process writes data into the alert log file in Oracle

    Which background process writes data into the alert log file in Oracle?

    Hi,
    AFAIK, all of the background processes are eligible to write information to the alert log file. As the file name indicates, it exists to show alerts, so the background processes have the access rights (in terms of DBMS packages) to write to it.
    I might be wrong, though.
    - Pavan Kumar N

  • SQL log file size is extending rapidly

    Hello All,
    We are using ECC 6.0; our database is SQL 2005 and the operating system is Windows NT 4x AMD64 L.
    Our database log file size is increasing rapidly; its size is now more than all 4 data files combined (about 300 GB).
    Last week I tried to shrink the log file but it didn't work.
    Now little space remains on the disk, please help me.
    The system has started giving a dump at login time, and the dump is "START_CALL_SICK".
    I am attaching the dump error text file.
    Please help me understand why this is happening.
    Thanks in advance
    Mahendra

    Hi,
    >> I have backed up the log file & shrunk the file but it didn't work for me.
    What is the result? It should shrink the log and release all the space (for all committed transactions).
    >> How can I add another log file?
    >> Can I delete the old log file after adding a new log file?
    You can add another log file by following the steps below, but in your case this is not the right solution, because you already have a large log file configured for your database (its size is more than all 4 data files, about 300 GB).
    Open SQL Server Management Studio > Expand databases > Right-click on the database > Select Files > Click on Add > Enter the input parameters (logical file name, path, initial size, etc.) and click OK. A T-SQL sketch of the same operation follows below.
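    The equivalent statement, as a sketch (the logical name and path are hypothetical):
    ALTER DATABASE <YourDBName>
    ADD LOG FILE (
        NAME = N'YourDBName_log2',                 -- hypothetical logical name
        FILENAME = N'E:\Logs\YourDBName_log2.ldf', -- hypothetical path
        SIZE = 1GB,
        FILEGROWTH = 256MB
    );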
    If the system is not allowing you to shrink the log file, it means you have active transactions that are continuously using the log file.
    Regards,
    Nick Loy

  • Unable to turn on Windows Process Activation Service and thus IIS (CBS files uploaded to SkyDrive)

    Hi,
    I am trying to turn on/install IIS for localhost, and that requires WAS to be installed and running. When I try to turn on the WAS feature I get the error "Error occurred. Not all of the features were successfully changed." I have applied all the patches that I could find on the net and tried cleaning and reinstalling .NET. None of them worked. Please help me out, as I am in a critical situation and need to develop something very quickly.
    I have uploaded the files from "C:\Windows\logs\CBS" to my SkyDrive.
    Please provide a solution, as I cannot format my system at this point in time.
    Thanks in advance.
    Warm Regards,
    Kuldeep

    Hi,
    There are two tools that can be used to fix this issue quickly:
    System Update Readiness Tool
    MSConfig.exe
    For your situation, I suggest MSConfig.exe.
    When you install IIS, the installer adds WPAS for you automatically as one of the dependencies.
    BUT when you uninstall IIS, WPAS does not get uninstalled automatically, leaving the core binaries intact. (This is done for a reason and is not a bug: in short, it is not uninstalled so that we don't end up breaking other services on the box that consume this process model explicitly, like WCF services.)
    You have to make sure WPAS is explicitly uninstalled by going to Features under Server Manager and choosing "Windows Process Activation Service" to uninstall.
    The details can be found at:
    http://www.iis.net/learn/troubleshoot/installation-issues/troubleshooting-iis-7x-installation-issues
    Meanwhile, you had better post the OneDrive link where you uploaded the CBS files.
    Regards
    Wade Liu

  • SQL log file

    Hi guys. I know this is not suited to this forum, but could anybody show me a way to read SQL Server log (.ldf) files? I have already tried SQL Log Rescue and Apex SQL. I am trying to read a log file that got much bigger in a couple of days. These two tools do not seem to work.

    No. Go ask your question on an SQL forum.

  • Advice on SQL Logs and Backups

    Hi All,
    I've been trying to understand SQL backups and I'm getting a bit stuck in one area - the log files.
    I find they're getting quite big and as such filling up my drives. I'm guessing that the best way to handle this is to truncate them every so often - as the data is in the live DB I'm assuming that the log files should be small.
    Q1 - I do daily full backups on my DB's via a maintenance plan so is it safe to say that the log files can be truncated daily?
    Q2 - How do I go about truncating the logs? I tried a backup of them but I'm not sure what to do next.
    Thanks for any help.
    Tom

    >> This can cause fragmentation and performance issues. Truncating the log is what happens when you take a backup.
    Prashanth,
    Shrinking the log file does not cause fragmentation (shrinking a data file does), but you are correct that shrinking the log file should not be made an everyday practice. After a shrink, when the log file tries to grow it has to ask the OS to allocate it space, which, if done frequently (on a slower disk), can cause performance issues.
    Tom,
    >> I do daily full backups on my DBs via a maintenance plan so is it safe to say that the log files can be truncated daily?
    A: No. Only a transaction log backup truncates the log file (or marks it reusable), so you have to take transaction log backups frequently (I hope your DB is in full recovery). If your DB is in simple recovery, automatic truncation happens and SQL Server takes care of it after a checkpoint or when the log reaches 70% of its size.
    >> Q2 - How do I go about truncating the logs? I tried a backup of them but I'm not sure what to do next.
    Again, the answer is simple: take transaction log backups frequently, or according to your RPO and RTO, as in the sketch below.
    PS: Sometimes when there is a long-running transaction, such as a huge delete operation or an index rebuild on a huge database, the log might grow even with frequent transaction log backups. This is by design, because the log cannot be truncated until the transaction finishes or commits; so if you face this, look for the open transaction and wait for it to commit.
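    As a sketch of the log backup itself (the database name and backup path are hypothetical):
    -- Marks the inactive portion of the log as reusable once it completes
    BACKUP LOG <YourDBName>
    TO DISK = N'D:\Backups\YourDBName_log.trn'
    WITH STATS = 10;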
    Hope this helps

  • How to back up archived log files in a RAC (OPS) environment

    Product: RMAN
    Date written: 2004-11-26
    How to back up the archived log files of both nodes simultaneously with RMAN in a RAC (OPS) environment
    ======================================================================================
    Versions before Oracle 9i
    Up to Oracle 8i, the backup could be taken with a script like the following:
    1) Script Name: arch_backup.rcv
    run {
    allocate channel node_1 type disk connect 'system/manager@v92hp1';
    allocate channel node_2 type disk connect 'system/manager@v92hp2';
    backup filesperset 1
    (archivelog until time 'SYSDATE' thread 1 channel node_1)
    (archivelog until time 'SYSDATE' thread 2 channel node_2);
    release channel node_1;
    release channel node_2;
    }
    2) How to run it
    $ rman target=system/manager catalog=rman_user/rmanpw cmdfile='arch_backup.rcv' log='arch_backup.log'
    Oracle 9i and later
    From Oracle9i onward, however, the following configuration must be performed before backing up
    the archived files.
    1) Configuration
    $ rman target=system/manager catalog=rman_user/rmanpw
    RMAN> Show all;
    RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
    RMAN> configure default device type to disk;
    RMAN> configure channel 1 device type disk connect 'system/manager@v92hp1';
    RMAN> configure channel 2 device type disk connect 'system/manager@v92hp2';
    The settings above assume the backup is written to disk, so every device type is set to disk.
    If you use a backup solution that writes to tape, change the device type to 'sbt_tape'.
    The PARALLELISM value must match the number of channels you configure. If it does not match,
    errors of the following form occur and the archived files on the other node are not recognized
    (even though the archived files actually exist):
    RMAN-06059: expected archived log not found, lost of archived log compromises recoverability
    ORA-19625: error identifying file /u01/64bit/app/oracle/product/9.2.0/admin/V92HP/arch/arch1_146.dbf
    ORA-27037: unable to obtain file status
    HP-UX Error: 2: No such file or directory
    Additional information: 3
    The configuration above only needs to be performed once.
    If you configured a channel incorrectly, clear it with the following command:
    RMAN> configure channel 1 device type disk clear;
    2) Back up the archived files:
    RMAN> run { backup
    format='/u01/64bit/app/oracle/product/9.2.0/admin/V92HP/arch/%U'
    archivelog all delete input; }
    ADDITIONAL INFORMATION(1)
    If some archived files have been deleted at the OS level in a RAC environment, run a validation
    check with the following commands before performing the backup:
    RMAN> allocate channel for maintenance type disk connect 'system/manager@v92hp1';
    RMAN> allocate channel for maintenance type disk connect 'system/manager@v92hp2';
    RMAN> crosscheck archivelog all;
    If the channels have already been set up in the configuration, you can run the crosscheck
    command directly, without allocating channels.
    ADDITIONAL INFORMATION(2)
    To set the backup FORMAT as part of the channel configuration, use the following form:
    RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
    RMAN> configure default device type to disk;
    RMAN> configure channel 1 device type disk connect 'system/manager@v92hp1' FORMAT '/arch/bkup%t_s%s_s%p';
    RMAN> configure channel 2 device type disk connect 'system/manager@v92hp2' FORMAT '/arch/bkup%t_s%s_s%p';
    ADDITIONAL INFORMATION(3)
    When using a tape device, use 'sbt_tape' as the device type:
    RMAN> CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 2;
    RMAN> configure default device type to 'sbt_tape';
    RMAN> configure channel 1 device type 'sbt_tape' connect 'system/manager@v92hp1' FORMAT 'bkup%t_s%s_s%p';
    RMAN> configure channel 2 device type 'sbt_tape' connect 'system/manager@v92hp2' FORMAT 'bkup%t_s%s_s%p';

  • Confusion about archived log file backup

    From a book, I see
    "we cannot combine archived redo log files and datafiles into a single backup",
    but I do have a command
    "backup ... plus archivelog".
    They seem to contradict each other;
    why is that?

    They do not conflict with each other:
    "we cannot combine archived redo log files and datafiles into a single backup" refers to backup pieces. Oracle cannot combine archivelogs and, for example, a tablespace backup in a single backup piece.
    The following command just tells RMAN to perform a backup of a tablespace and the archivelogs, but as a result it will create at least two backup pieces: one for the tablespace and the second for the archived redo logs.
    RMAN> backup tablespace users plus archivelog delete input skip inaccessible format "C:\%U.bkf";
    Starting backup at 29-JUN-09
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=128 RECID=142 STAMP=690573687
    channel ORA_DISK_1: finished piece 1 at 29-JUN-09
    piece handle=C:\0SKIOKQ3_1_1.BKF tag=TAG20090629T004553 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:02:45
    channel ORA_DISK_1: deleting archived log(s)
    archived log file name=C:\APP\MOB\ORADATA\ORCL\ARCH\ARC00128_0686744258.001 RECID=142 STAMP=690573687
    archived log file name=C:\APP\MOB\ORADATA\ORCL\ARCH\ARC00129_0686744258.001 RECID=143 STAMP=690588250
    Finished backup at 29-JUN-09
    Starting backup at 29-JUN-09
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00004 name=C:\APP\MOB\ORADATA\ORCL\USERS01.DBF
    channel ORA_DISK_1: starting piece 1 at 29-JUN-09
    channel ORA_DISK_1: finished piece 1 at 29-JUN-09
    piece handle=C:\APP\MOB\FLASH_RECOVERY_AREA\ORCL\BACKUPSET\2009_06_29\O1_MF_NNNDF_TAG20090629T004911_54HWVKFO_.BKP tag=TAG20090629T004911 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
    Finished backup at 29-JUN-09
    Starting backup at 29-JUN-09
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=148 RECID=162 STAMP=690770984
    channel ORA_DISK_1: starting piece 1 at 29-JUN-09
    channel ORA_DISK_1: finished piece 1 at 29-JUN-09
    piece handle=C:\0UKIOL1B_1_1.BKF tag=TAG20090629T004946 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    channel ORA_DISK_1: deleting archived log(s)
    archived log file name=C:\APP\MOB\ORADATA\ORCL\ARCH\ARC00148_0686744258.001 RECID=162 STAMP=690770984
    Finished backup at 29-JUN-09
    Starting Control File and SPFILE Autobackup at 29-JUN-09
    piece handle=C:\APP\MOB\PRODUCT\11.1.0\DB_1\DATABASE\C-1213135877-20090629-00 comment=NONE
    Finished Control File and SPFILE Autobackup at 29-JUN-09
    With kind regards
    Krystian Zieja

  • Big transaction log file

    Hi,
    I found a sql server database with a transaction log file of 65 GB.
    The database is configured with the recovery model option = full.
    Also, I noticed than since the database exist, they only took database backup.
    No transaction log backup were executed.
    Now, the "65 GB transaction log file" use more than 70% of the disk space.
    Which scenario do you recommend?
    1- Backup the database, backup the transaction log to a new disk, shrink the transaction log file, schedule transaction log backup each hour.
    2- Backup the database, put the recovery model option= simple, shrink the transaction log file, Backup the database.
    Does the " 65 GB file shrink" operation would have impact on my database users ?
    The sql server version is 2008 sp2 (10.0.4000)
    regards
    D

    I've read the other posts and I'm at the position of: it really doesn't matter.
    You've never needed point-in-time restore capability from inception up to this date and time. Since a full database backup contains all of the log needed to bring the database into a consistent state, doing a full backup and then a log backup is redundant
    and just takes up space.
    For the fastest option I would personally do the following:
    1. Take a full database backup
    2. Set the database recovery model to Simple
    3. Manually issue two checkpoints for good measure or check to make sure the current VLF(active) is near the beginning of the log file
    4. Shrink the log using the truncate option to lop off the end of the log
    5. Manually re-size the log based on usage needed
    6. Set the recovery model to full
    7. Take a differential database backup to bridge the log gap
    The total time this will take is really just the full database backup and the expansion of the log file. The shrink should be close to instantaneous, since you're just truncating the end, and the differential backup should be fairly quick as well. If you don't
    need the full recovery model, leave it in simple, reset the log size (through multiple grows if needed), and take a new full backup for safekeeping. A sketch of the steps follows below.
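    A sketch of steps 1-7 in T-SQL (database name, log logical name, target size, and paths are all hypothetical):
    BACKUP DATABASE YourDB TO DISK = N'D:\Backups\YourDB_full.bak';    -- 1. full backup
    ALTER DATABASE YourDB SET RECOVERY SIMPLE;                         -- 2. switch to SIMPLE
    CHECKPOINT;                                                        -- 3. two checkpoints
    CHECKPOINT;                                                        --    for good measure
    DBCC SHRINKFILE (YourDB_log, TRUNCATEONLY);                        -- 4. lop off the end of the log
    ALTER DATABASE YourDB MODIFY FILE (NAME = YourDB_log, SIZE = 8GB); -- 5. re-size to expected usage
    ALTER DATABASE YourDB SET RECOVERY FULL;                           -- 6. back to FULL
    BACKUP DATABASE YourDB TO DISK = N'D:\Backups\YourDB_diff.bak'
        WITH DIFFERENTIAL;                                             -- 7. bridge the log gap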
    Sean Gallardy

  • Why multiple  log files are created while using transaction in berkeley db

    We are using the Berkeley DB Java Edition / DB Base API. We have already read/written a CDR file of 9 lakh rows, both with transactions and without transactions, implementing the secondary database concept. The issues we are getting are as follows:
    With transactions: the size of the database environment is 1.63 GB, which is due to the number of log files created, each of 10 MB.
    Without transactions: the size of the database environment is 588 MB, and here only one log file is created, which is of 10 MB. We want to know the concrete reason for this.
    How are log files created, what does using or not using transactions mean in a DB environment, and what are these db files (__db.001, __db.002, __db.003, __db.004, __db.005) and log files like log.0000000001? Please reply soon.

    >> we are using berkeleydb java edition db base api
    If you are seeing __db.NNN files in your environment root directory, these are the environment's shared region files. And since you see these, you are using Berkeley DB Core (with the Java/JNI Base API), not Berkeley DB Java Edition.
    >> with transaction ... without transaction ...
    First of all, do you need transactions or not? Review the documentation section called "Why transactions?" in the Berkeley DB Programmer's Reference Guide.
    >> without transaction: size of database environment 588mb and here only one log file is created which is of 10mb
    There should be no logs created when transactions are not used. That single log file has likely remained there from a previous transactional run.
    >> how log files are created ... what are these db files ... and log files like log.0000000001
    Have you reviewed the basic documentation references for Berkeley DB Core?
    - Berkeley DB Programmer's Reference Guide,
    in particular the sections: The Berkeley DB products, Shared memory regions, Chapter 11. Berkeley DB Transactional Data Store Applications, Chapter 17. The Logging Subsystem.
    - Getting Started with Berkeley DB (Java API Guide) and Getting Started with Berkeley DB Transaction Processing (Java API Guide).
    If you had, you would have the answers to these questions: the __db.NNN files are the environment shared region files needed by the environment's subsystems (transaction, locking, logging, memory pool buffer, mutexes), and the log.MMMMMMMMMM files are the log files needed for recoverability, created when running with transactions.
    --Andrei

  • Transaction Log File of SQL Server

    Dear Guys,
    We have implemented ECC 6.0 on a Windows 2003 Server / MS SQL 2005 platform. For backup purposes we have bought Tivoli. For some reason we can't back up the database from DB13. Our transaction log file keeps growing; after taking a backup of the transaction log I shrink the log file, but the size of the log file never decreases. So the size of the log file increases and the free space on disk decreases. The second issue is: can I shrink the log file while the SAP instance is running?
    Please help me out with these 2 issues.
    Thanks & Regards,
    Charanjit Singh.

    Hi,
    The log space is usually huge after a fresh install. You can use the following script to shrink the log file. Make sure you take a full backup of the database before you run it. If you are running a production system, make sure you run it during scheduled downtime after a full backup.
    1) BACKUP LOG <DB_NAME> TO  [<DEVICE_NAME>] WITH NOFORMAT, NOINIT, 
    NAME = N'<DESCRIPTION>', SKIP, NOREWIND, NOUNLOAD,  STATS = 10 --backup transaction log to a backup device
    GO
    2) USE <DB_NAME> --switch to the database
    GO
    3) DBCC SHRINKFILE (N'<LOGICAL_LOG_FILE_NAME>' , 2024) --shrink the file to 2gb (2024). use sp_helpdb to find out the logical file name
    GO
    4) sp_helpdb <SID> --view the new size
    go
    5) dbcc loginfo (<SID>) -- a status of 2 means transaction is dirty.
    Repeat steps 1 through 5 until the log size shrinks to your desired size.
    The step 3 value of "2024" tries to shrink the log file to roughly 2 GB (2024 MB), but success is not guaranteed; that is why you may have to run it multiple times.
    Step 5 displays the transactions in the log file. A status of "2" means that portion is dirty. If you see this value at the end of the result set, there is little chance of shrinking the log file; if you see the value of "2" in the middle of the result set, there is a good chance of shrinking the file. "DBCC LOGINFO" is an undocumented command in SQL Server, but it is a favourite command of all DBAs.
    I hope this helps.
    RT

  • Re: identifying server log file when running distributed

    Hi John, I can give you some TOOL code which will get the process id, but I do have to stray
    outside of Framework :-). The following code uses classes from the SystemMonitor project, which
    was introduced in release 2 of Forte (this code won't work on R1):
    partAgent : SystemAgent;
    pidInst : ConfigValueInst;
    pid : TextData;
    partAgent = SystemAgent(task.Part.Agent);
    pidInst = ConfigValueInst(partAgent.FindInstrument('ProcessId'));
    pid = TextData(pidInst.GetData);
    The result is that the variable pid contains the process id in string form.
    This could be converted to numeric form if needed.
    If what you're really after is the partition's log file name, then the following
    code will do the trick (it takes into account the differences in how the log
    files are named for interpreted vs. compiled partitions):
    partAgent : SystemAgent;
    logFileInst : ConfigValueInst;
    logFileName : TextData;
    -- Get our agent and try to get the log file instrument
    partAgent = SystemAgent(task.Part.Agent);
    logFileInst = ConfigValueInst(partAgent.FindInstrument('LogFile'));
    -- Interpreted partitions don't have their own log file, so check
    if (logFileInst = NIL) then
    pidInst : ConfigValueInst;
    pid : TextData;
    -- We must be an interpreted partition; get our pid
    pidInst = ConfigValueInst(partAgent.FindInstrument('ProcessId'));
    pid = TextData(pidInst.GetData);
    -- Build the log file name
    logFileName = 'forte_ex_';
    logFileName.Concat(pid);
    else
    -- Get the name of the log file from the instrument
    logFileName = TextData(logFileInst.GetData);
    end;
    The available agents and their instruments and commands are documented
    in the manual "SystemMonitor Project". I'm at home now, so I don't have
    the page numbers. Some additional agents (which were added after this
    manual went to press) can be found in Tech Note #10475. Also, econsole
    and escript can be handy since any instrument you can see in these tools
    can be accessed from TOOL code. Hope this is of some use.
    Sean
    At 05:24 PM 7/30/96 -0700, John L. Jamison wrote:
    >
    I'd like to solicit some ideas from you folks. As many of you are probably
    aware, when running in distributed mode, log output for server partitions is
    written out to log files on the server partition. However it is sometimes a
    trick trying to identify the process which is running your individual
    partitions, and
    thus knowing which log file to read.
    At one client, we added a 3gl call-out to obtain the process id and return it
    to the client. However this is not a good option at a new client which uses
    Sequent (3gl wrappering difficult in statically linked environments such as
    sequent). I am also aware that Econsole allows you to browse active
    partitions and display log files, but you still have to know which active
    partitions to watch.
    I have not yet seen a way to programmatically obtain the process ID for a
    partition within TOOL and using FrameWork classes.
    What kinds of strategies are folks employing out there?
    Thanks in advance,
    -John
    John Jamison
    Sage Solutions, Inc.
    353 Sacramento Street, Suite 1360
    San Francisco, CA 94111
    415 392 7243 x 508
    [email protected]

    Hi John,
    I think that Sean Fits answered your question about TOOL code to get the PID number. I just want to add a complement on the logging strategy.
    There is one log file for every active partition of an application. I think it is useful in some cases for a distributed application to have a centralized log file, to trace the exact sequential flow of processing among all the partitions. This is useful during initial debugging and tuning; in fact, it is something similar to the UNIX syslog file.
    To do so, it is easy to implement a custom central log manager in one partition and to have all partitions use it when needed (it doesn't prevent you from continuing to use the standard LogMgr in addition). This central LogMgr automatically adds the date & time plus the node name, partition name, etc. to the log messages it receives.
    The flags which apply are those of the partition where the central log manager runs.
    Because of potentially concurrent requests from the several partitions accessing the central log manager, it is not possible to support the "Put" and "PutHex" methods. Only complete lines can be logged (PutLine and PutHexLine).
    Attached is the TOOL code of my TraceService plan that implements it.
    Remark: the "Phr" in the names relates to the name of the application we have here under development.
    To use the central log manager, a partition must create an object of class PartitionLog, and then log messages must be sent to it the way you send them to the standard LogMgr; it will manage to send them to the central log manager.

  • Missing current redo log files

    Hello to all,
    I have a question; please guide me if you can.
    If I lose all the current redo log files (the current redo log group files), how can I repair it and open
    the database? (I don't have a problem with a missing INACTIVE redo group.)
    Thanks

    Hi,
    >> if i lose all current redo log files (current redo log group files) how i can repair it and
    open database ? (i don't have problem by missing INACTIVE redo group)
    Well, it depends. Was the database active when the current log file was lost? Are you using RMAN? Is the database operating in ARCHIVELOG mode or NOARCHIVELOG mode? Basically, an incomplete recovery is necessary when there is a loss of the current redo log files. It means that you don't have all the redo log files up to the point of failure, so the only alternative is to recover prior to the point of failure. To perform incomplete recovery after the redo log files have been lost, you do the following (if you are not using RMAN):
    1) Execute a SHUTDOWN command and restore all data files (*.dbf) from the most recent backup.
    2) Execute a STARTUP MOUNT command to read the contents of the control file.
    SQL> startup mount;
    3) Execute a RECOVER DATABASE UNTIL CANCEL command to start the recovery process.
    SQL> recover database until cancel;
    4) Apply the necessary archived logs up to, but not including, the lost or corrupted log.
    5) Open the database and reset the log files.
    SQL> alter database open resetlogs;
    6) Shut down the database.
    SQL> shutdown normal;
    7) Take a full cold backup.
    In summary, for more information take a look at Recovering After the Loss of Online Redo Log Files: Scenarios (http://download-west.oracle.com/docs/cd/B19306_01/backup.102/b14191/recoscen008.htm#sthref1872).
    Cheers
    Legatti
