Crashdumps log files keep generating

Hi all,
We currently have a weird problem with our SCCM.
There is a folder D:\SCCM BakUP\HKGBackup\SiteServer\SMSServer\Logs\CrashDumps
It generates 60+ GB of logs each day for no apparent reason. We have not made any changes to the SCCM configuration recently.
I have tried removing those logs, but they keep coming back… and some of the logs are dated back to June 2011.
How can I stop these logs from being generated?
Please help.
Terry

Glad to know. Could you please provide the resolution steps so that they can be helpful to others?
Anoop C Nair
MY BLOG:
 http://anoopmannur.wordpress.com
SCCM Professionals
This posting is provided AS-IS with no warranties/guarantees and confers no rights.

Similar Messages

  • Log file not generated

    I followed these steps:
    1. In Application, set profile FND: Debug Log Level to "Statement"
    2. Restart Apache
    3. Run debug from Help --> Diagnostics --> Debug
    4. Retrieve the debug log file, which should be in the directory returned by:
    select value from v$parameter where name like 'utl_file%'
    but no log file is created and I don't know why (these steps were provided in an SR)
    Thanks

    What about "FND: Debug Log Filename for Middle-Tier" and "FND: Diagnostics" profile options?
    Note: 372209.1 - How to Collect an FND Diagnostics Trace (aka FND:Debug)
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=372209.1
    If the above does not help, set the debug log at the user level and then check again.
    Note: 390881.1 - How To Set The Debug Log At User Level?
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=390881.1
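    If the log file still does not appear, the debug messages may be going to the database rather than to a file. As a quick check (a sketch, assuming the standard FND logging tables; '&USER_NAME' is a placeholder for the application login):
    -- List the most recent FND debug messages captured for one user.
    SELECT log_sequence, timestamp, module, message_text
    FROM   fnd_log_messages
    WHERE  user_id = (SELECT user_id FROM fnd_user WHERE user_name = '&USER_NAME')
    ORDER  BY timestamp DESC;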

  • Exchange 2010 Log files keep filling up following migration from Exchange 2003

    I am migrating from Exchange server 2003 to 2010.
    Having only moved one mailbox and set up Public Folder replication, I noticed that the 20 GB drive allocated for the logs is filling up entirely, even before I have time to run my scheduled backup.
    As a temporary measure, I have enabled circular logging as a workaround.
    Q/ Whilst this is not ideal, shall I leave it like this until the Public Folders are fully replicated and all mailboxes moved over?
    Q/ What risk am I exposing myself to as a result of using Circular logging (I am running full backups every night).
    Q/ Could there be another cause as to why the log files would grow so quick in such a short amount of time?

    Hello,
    Remember that logs are truncated after successful backups so if you have a lot of data replicated between backups, a lot of logs will be stored on disk.
    "Q/ Whilst this is not ideal, shall I leave it like this until the Public Folders are fully replicated and all mailboxes moved over?"
    The best option is to run backups to truncate the logs. Circular logging is only appropriate in test and highly available deployments, since it can cause data loss (loss of the data created between backups).
    "Q/ What risk am I exposing myself to as a result of using Circular logging (I am running full backups every night)."
    The same as mentioned above.
    "Q/ Could there be another cause as to why the log files would grow so quick in such a short amount of time?"
    Enormous log generation can be caused by bugs in Exchange or in connecting devices; for example, iPhones can cause rapid log creation in some configurations. If you have the latest Exchange 2010 build, it shouldn't be a problem.
    Hope it helps,
    Adam
    CodeTwo: Software solutions for Exchange and Office 365
    If this post helps resolve your issue, please click the "Mark as Answer" or "Helpful" button at the top of this message. By marking a post as Answered, or Helpful you help others find the answer faster.

  • Generating archive log files

    Hi Experts,
    Can anyone tell me why a lot of archive log files are being generated automatically? I have seen 15-16 GB of log files in 2 days.
    Thanks in advance,
    Soumya
    Edited by: Soumya06 on Jan 7, 2011 6:03 AM

    Hi,
    Archive logs are created only when there is some action on the database. It means that in the last 2 days many activities were performed on your database:
    1. Check the background jobs run in the last 2 days.
    2. Check whether any patching or client copy has been performed.
    3. Also check the times during which the archive logs are created in large numbers; a query for this is sketched below.
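    For point 3, a minimal sketch (assuming an Oracle database and access to v$log_history) that counts log switches per hour, so you can spot the busy window:
    -- Count redo log switches per day and hour to find when the volume is generated.
    SELECT TRUNC(first_time)           AS day,
           TO_CHAR(first_time, 'HH24') AS hour,
           COUNT(*)                    AS log_switches
    FROM   v$log_history
    GROUP  BY TRUNC(first_time), TO_CHAR(first_time, 'HH24')
    ORDER  BY day, hour;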
    regards,
    Nirmal.K

  • WebDAV Query generates a high number of transaction log files

    Hi all,
    I have a program that launch WebDAV queries to search for contacts on an Exchange 2007 server. The number of contacts returned for each user's mailbox is quite high (about 4500).
    I've noticed that each time the query is launched, about 15 transaction log files are generated on the Exchange server (each of them 1Mb). If I ask only for 2 properties on the contacts, this number is reduced to about 8.
    This is a problem since our program is supposed to run often (about every 3/5 min), as it will synchronize Exchange mailboxes with a SQL Server DB. The result is that the logs grow very quickly on the server side, even when there are not many updates.
    Any idea why so many transaction logs are generated when doing a WebDAV search returning many items? I would understand that logs are created when an update is done on the server, but here it's only a search with many contacts items returned.
    Is there maybe a setting on the Exchange server to control what kind of logs to generate?
    Thank for your help,
    Alexandre

    Hi Alex,
    Actually circular logging/backup was not a solution; I was just explaining that there is an option like that on the server, but it is not recommended and hence not useful in our case :)
    - I am not a developer, but AFAIK a WebDAV search query shouldn't generate transaction logs, because it just searches the mailboxes and returns the results in HTTP format; it doesn't produce any Exchange transaction.
    - I wouldn't open the transaction logs, since they are in use by Exchange; doing so may generate errors and may even corrupt the Exchange database. In any case, as you observed, they are not readable by anything other than the Exchange Information Store service (store.exe).
    - You can post this query in the development forum to get a better idea, in case any other programmer has observed similar symptoms while using a WebDAV contact search query against Exchange 2007, or can validate your query.
    Microsoft TechNet > Forums Home > Exchange Server > Development
    Well, I just saw that you are using Exchange 2007. In that case, why don't you use Exchange Web Services, which is the better, improved method to access/query mailboxes? WebDAV is de-emphasized in Exchange 2007 and might disappear in the next version of Exchange. Check out the article below for further detail.
    Development: Overview
    http://technet.microsoft.com/en-us/library/aa997614.aspx
    Amit Tank | MVP - Exchange | MCITP:EMA MCSA:M | http://ExchangeShare.WordPress.com

  • MR11 log file

    Hi,
    While running MR11 for the GR/IR clearing account, a log file is generated and the document number is 5400000010. Where is this log file stored by default, and how can it be displayed?
    Besides that, F.13 automatic clearing clears those GR/IR records whose balance is 0, and the difference amount is cleared through F-03 by choosing document numbers from GR and IR under the same purchase order. F.13 does not clear entries with the same GR value and IR value in spite of the same PO number, even though these values are easily traceable in the normal balance view through FBL3N. Why are these values not cleared through F.13?
    Regards,
    Samrat

    Immediate action items:
    0. Check the log file auto-growth settings, confirm they are practical, and check whether the disk still has free space.
    1. If the disk holding the log file is full, add a log file on another disk (in the database property page) where you have planned space for log files, in case you can't afford to take the DB down. Once that is done, you can truncate data out of the log file and remove the extra file if this was a one-time issue. If it happens again, review your capacity planning.
    2. You can consider shrinking the log files when no backups are running and no maintenance jobs (rebuild/reorganize indexes, update statistics) are executing, as these will block the shrink. If the DB is small, copying files from prod to DR is not latency-prone, and the shrink is not happening, you can try changing the recovery model, shrinking, and then reconfiguring log shipping after reverting the recovery model; a sketch of the backup-then-shrink step follows this list.
    3. Check whether anyone mistakenly placed some old files on the disk and forgot to remove them, causing the disk-full issues.
    4. For a permanent solution, monitor the environment for capacity and allocate adequate space for the log file disks. Also consider tweaking the log backup frequency from the default to suit your environment.
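    As a minimal illustration of point 2 (a hedged sketch for SQL Server; 'MyDb', 'MyDb_log', and the backup path are placeholder names):
    -- Back up the transaction log first so its inactive portion can be reused,
    -- then shrink the physical log file to a 1 GB target.
    BACKUP LOG MyDb TO DISK = N'E:\Backups\MyDb_log.trn';
    USE MyDb;
    DBCC SHRINKFILE (MyDb_log, 1024);  -- target size in MB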
    Santosh Singh

  • Microsoft SQL Server extended event log file

    Dear all,
    Apologies for the questions below if they are very beginner-level.
    In my implementation I have clustered SQL Server 2012 on Windows Server 2012; I am using mount points since I have many clustered disks.
    My mount point size is only 3 GB; my extended event logs are growing fast and are stored directly on the mount point drive (path: F:\MSSQL11.MSSQLSERVER\MSSQL\Log).
    What is the best practice for working with them? (Keep all extended event logs? Recirculate them? Shrink them? Store them in a DB?)
    Is there any relation between SQL truncation and limiting the size of the extended event logs?
    How can I recirculate these extended event logs?
    How can I change the default path?
    How can I stop them?
    And if I stop them, does that mean SQL events will no longer be stored in the Windows Event Viewer?
    Thank you

    After a lot of checking, I have found the following.
    My case:
    I have SQL Failover Cluster Instances ("FCI") and I am using mount points to store my instances.
    I have 2 passive copies for each FCI.
    In my configuration I chose to store the root instance, which includes the logs, on a mount point.
    My mount point is only 2 GB, which became full after a few days of deployment.
    Technical background:
    The extended event log files are generated because I have an FCI; in a single (non-clustered) SQL installation you will not find these files.
    The maximum file size is 100 MB.
    The files start recirculating once there are 10 full files.
    If you have the FCI installed as 1 active and 2 passive nodes, and you are failing over between the nodes, you can expect to see around 14 to 30 copies of this file.
    Based on the above, you will need around 100 MB * 10 files per instance copy * 3 (since in my case I have 1 active and 2 passive instances), which = 3000 MB.
    So in my case my mount point was 2 GB, which became full because of these SQLDIAG logs.
    Solution:
    I extended my mount point by 3 GB, because I am storing these logs on it.
    If you need to change the SQLDIAG extended log size (to 50 MB, for example) and move it to F:\Logs, you will need the commands below:
    ALTER SERVER CONFIGURATION SET DIAGNOSTICS LOG OFF;
    ALTER SERVER CONFIGURATION SET DIAGNOSTICS LOG MAX_SIZE = 50 MB;
    ALTER SERVER CONFIGURATION SET DIAGNOSTICS LOG PATH = 'F:\logs';
    ALTER SERVER CONFIGURATION SET DIAGNOSTICS LOG ON;
    After that you will need to restart the FCI from SQL Server Configuration Manager or Failover Cluster Manager.
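    To confirm the new settings took effect, a quick check (a sketch; sys.dm_os_server_diagnostics_log_configurations is the DMV for this in SQL Server 2012 and later):
    -- Inspect the current failover cluster diagnostics log configuration.
    SELECT is_enabled, [path], max_size, max_files
    FROM   sys.dm_os_server_diagnostics_log_configurations;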
    I wish you will find this information helpful if it is your case.
    Regards

  • Why is the size of the archive log files increasing during a MERGE?

    My database is running in archive log mode.
    Someone is running an Oracle MERGE statement; it is still running.
    He will issue a commit after the operation.
    During this period the redo log files keep growing.
    My question is: why is the volume of archive log files increasing along with the redo log files?
    I thought archive log files should only be generated after the commit (maybe this is wrong).
    Please suggest.
    Edited by: 855516 on Mar 13, 2012 11:18 AM

    855516 wrote:
    i know that after commit archive log file should generate. (may be it is wrong)
    No, this is not correct; archive logs are not generated only after a commit. A MERGE statement causes inserts (if the data is not already present) or updates (if it is). These operations will generate a lot of redo if the amount of data being processed is high, regardless of when the commit is issued.
    If you feel that this operation is causing excessive redo, then a root cause analysis should be done.
    For that, use LogMiner (an excellent tool that provides a segment-level breakdown of redo size). V$LOGMNR_CONTENTS has columns for the redo block and redo byte address associated with each redo change; a sketch is below.
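    A minimal LogMiner sketch (assuming archive log mode; the archived log file name below is a placeholder for one generated during the MERGE):
    -- Register one archived log and start LogMiner using the online catalog.
    BEGIN
      DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/arch/arch_1_16234.arc',
                              options     => DBMS_LOGMNR.NEW);
      DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    END;
    /
    -- Break down the redo volume by the segments the MERGE touched.
    SELECT seg_owner, seg_name, operation, COUNT(*) AS redo_entries
    FROM   v$logmnr_contents
    GROUP  BY seg_owner, seg_name, operation
    ORDER  BY redo_entries DESC;
    -- End the LogMiner session when done.
    EXECUTE DBMS_LOGMNR.END_LOGMNR;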
    There are some guidelines for reducing redo (which may vary by environment):
    1) Check whether there are unwanted indexes on the tables referenced in the MERGE. If yes, removing them could bring down the redo.
    2) Use global temporary tables to reduce redo (if there is a need to keep data only temporarily in a session).
    3) Use NOLOGGING if possible (but be aware of its implications).
    Hope this helps

  • Rule created to monitor single-line entries in a text .log file does not work

    Hi All,
    I have a strange issue. I created a script which generates a .log file, and I configured a rule to monitor it. When the .log file is altered, no alert comes through at all in SCOM 2012 R2.
    I want this alert to be raised when one specific line in the middle of the file changes from LISTENING to NOT LISTENING.
    I configured it; it triggered an alert the first time, and then it never triggered again.
    I created this rule, disabled it, and overrode the value to true only for the MS acting as the watcher for this log file.
    The log file is generated on the local drive of the MS itself.
    I changed the log watcher to a different server and also pointed the application data source to a network location when the watcher was changed, so it could pull the log accordingly.
    I tried using both the local path where the log is located and the same path converted to a network location; neither helped.
    C:\Port_checker is the directory where the .log file is located; there is no other log file present, only one.
    I also changed parameters such as "Contains", "Wildcard matches", etc., but nothing worked.
    The SCOM Action account has Full permissions on all servers over the entire forest itself.
    Target used to create this rule is "Windows server operating system"
    Can any one help me please.
    Gautam.75801

    Since you have a script that updates a file line from "LISTENING" to "NOT LISTENING",
    you might want to try configuring a two-state script unit monitor rather than a rule. Your script would just need to check the content of the log file, say every 5 minutes, generate an alert when it matches "NOT LISTENING", and clear it when
    it changes back to "LISTENING".
    http://www.systemcentercentral.com/wp-content/uploads/2009/04/HOW-TO_2-state_ScriptMonitor.pdf
    Cheers,
    Martin
    Blog:
    http://sustaslog.wordpress.com 
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

  • LOG FILE for batch scripting in MAXL

    Hello,
    I just wanted to know how to create a LOG FILE for batch scripting.
    essmsh E:\Batch\Apps\TOG_DET\Scripts\unload_App.msh
    copy e:\batch\apps\tog_det\loadfile\gldetail.otl e:\hyperion\analyticservices\app\tog_det\gldetail /Y
    essmsh E:\Batch\Apps\TOG_DET\Scripts\Build_Hier_Data.msh
    REM
    ECHO OFF
    ECHO Loading GL actuals into WFS \ Combined......
    E:\HYPERION\common\Perl\5.8.3\bin\MSWin32-x86-multi-thread\PERL.EXE E:\Batch\Apps\WFS.COMBINED\AMLOAD\WFSUATAMLOAD.PLX
    E:\HYPERION\common\Perl\5.8.3\bin\MSWin32-x86-multi-thread\PERL.EXE E:\Batch\Apps\WFS.COMBINED\AMLOAD\WFSUATAMLOAD.PLX
    Drop object d:\NDM\Data\StampFiles\STAMPLOADBKUP.csv of type outline force;
    Alter object d:\NDM\Data\StampFiles\STAMPLOAD_cwoo.csv of type outline rename to d:\NDM\Data\StampFiles\STAMPLOADBKUP.CSV;
    SET LogFile=E:\Batch\Apps\TOG_DET\Logs.log
    This script does not generate a log file. Can anyone tell me what the problem might be? Even if some of the steps above are not correct, it should at least generate a log file. I need the syntax, or whatever is required, to generate a log file.
    Regards
    Soma

    I want a log file of the following batch script regardless of whether the script runs successfully or not.
    essmsh E:\Batch\Apps\TOG_DET\Scripts\unload_App.msh
    copy e:\batch\apps\tog_det\loadfile\gldetail.otl e:\hyperion\analyticservices\app\tog_det\gldetail /Y
    essmsh E:\Batch\Apps\TOG_DET\Scripts\Build_Hier_Data.msh
    REM
    ECHO OFF
    ECHO Loading GL actuals into WFS \ Combined......
    E:\HYPERION\common\Perl\5.8.3\bin\MSWin32-x86-multi-thread\PERL.EXE E:\Batch\Apps\WFS.COMBINED\AMLOAD\WFSUATAMLOAD.PLX
    Drop object d:\NDM\Data\StampFiles\STAMPLOADBKUP.csv of type outline force;
    Alter object d:\NDM\Data\StampFiles\STAMPLOAD_cwoo.csv of type outline rename to d:\NDM\Data\StampFiles\STAMPLOADBKUP.CSV;
    What I really want is a log file of the above batch script showing how the scripts are running. I do not care whether they give positive results, but I need to see what is happening in the log file. How can the log file be generated?
    Regards
    SOma

  • Log file variables

    Hi
    I am taking the following steps to try to generate my log file from an ODI variable:
    1. Create an ODI variable say called ESSBASELOG, give it a path and file e.g. D:\ODI\Errors\Essbase\essload.err
    2. In your interface in the KM options for the log filename enter #ESSBASELOG.
    3. Create a package, drag the variable and set it to declare, drag the interface, join them up, execute.
    But for some reason the path is not recognised; instead, the log file is generated in the ODI bin directory with the variable name as the file name, i.e. #ESSBASELOG.
    Can anyone suggest what I should try to resolve this?
    Cheers

    Hi,
    Qualify your variable name with your project code,
    i.e. #<PROJECT_CODE>.<VARIABLE_NAME>
    Thanks,
    Sutirtha

  • Fatal NI connect error 12203 resulting in a huge increase in the sqlnet.log file

    Hi,
    I am getting the following error message in the SQLNET.LOG file on the client machine. My upload program takes a few hours to complete, and while it runs the SQLNET.LOG file keeps growing to hundreds of MB, containing only this error repeated.
    My program does connect to the database and does the upload, but the SQLNET.LOG file still grows enormously. Please let me know what's going wrong.
    ERROR in SQLNET.LOG File -
    Fatal NI connect error 12203, connecting to:
    (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=oracle80)(ARGV0=oracle80ORCL)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))')))(CONNECT_DATA=(SID=ORCL)(CID=(PROGRAM=C:\ORAWIN95\BIN\IFRUN60.EXE)(HOST=IT_DBA)(USER=IT))))
    VERSION INFORMATION:
         TNS for 32-bit Windows: Version 8.0.5.0.0 - Production
         Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 8.0.4.0.0 - Production
         Windows NT TCP/IP NT Protocol Adapter for 32-bit Windows: Version 8.0.5.0.0 - Production
    Time: 06-MAR-03 12:37:08
    Tracing not turned on.
    Tns error struct:
    nr err code: 12203
    TNS-12203: TNS:unable to connect to destination
    ns main err code: 12560
    TNS-12560: TNS:protocol adapter error
    ns secondary err code: 0
    nt main err code: 102
    TNS-00102: Keyword-Value binding operation error
    nt secondary err code: 0
    nt OS err code: 0
    Regards,
    Mitesh V.

    Hi,
    Actually I thought this error was appearing on only one of the machines, but on investigation I found the same error on almost all the client machines.
    Though the programs run fine and get database connectivity, I am unable to find out why the error shows a connection attempt through the BEQ protocol.
    Can someone please tell me why this is happening and how I can find out which protocol is being used?
    Regards,
    Mitesh Vijayvargiy

  • Changed jvmEntry to use JDK14Logger in WAS 6.1.0.3 but log is not generated

    Hi,
    1) I have installed WAS 6.1.0.3
    2) Created App Server Profile
    3) Added the following system properties in the <jvmEntry> <systemProperty> section of server.xml:
    <systemProperties xmi:id="Property_1187707290069" name="java.util.logging.config.file" value="C:/PT850-103I/webserv/peoplesoft01/installedApps/peoplesoft01NodeCell/peoplesoft01.ear/logging.properties" description="java.util.logging.config.file" />
    <systemProperties xmi:id="Property_1187707290070" name="org.apache.commons.logging.Log" value="org.apache.commons.logging.impl.Jdk14Logger"/>
    I have also edited the logging.properties file to use FileHandler and set the location for the generated log, but no log file is generated.
    Can anyone help me with how to configure or use Jdk14Logger to generate a log file?
    Regards
    Sunil Kumar Gupta


  • Exchange 2010 SP3, RU5 - Massive Transaction Log File Generation

    Hey All,
    I am trying to figure out why one of our databases is generating 30K log files a day! The other one is generating 20K log files a day. The database does not grow in size as the log files are generated; the problem is log file generation itself.
    I've tried running through some of the various solutions out there and reviewed message tracking logs, RPC Client Access logs, and IIS logs, all of which show important info, but none of which actually provide the answers.
    I stopped the following services to see if that would affect the log file generation in any way, and it has not:
    MS Exchange Transport
    Mail Submission
    IIS (Site Stopped in IIS)
    Mailbox Assistants
    Content Indexing Service
    With the above services stopped, I still see dozens (or more) of log files generated in under 10 minutes. I also checked mailbox size reports (top 10) and found that several users' mailboxes were growing: item count increases
    for one user of about 300, and size increases for one user of about 150 MB (over the whole day).
    I am not sure what else to check here? Any ideas?
    Thanks,
    Robert
    Robert

    Hmm - this sounds like a device is chewing up the logs.
    If you use Log Parser Studio, are there any stand-out devices in terms of the number of hits?
    And for the ExMon capture, was that logged over a period of time? The default 60-second window normally misses a lot of stuff. Just curious!
    Cheers,
    Rhoderick
    Microsoft Senior Exchange PFE
    Blog:
    http://blogs.technet.com/rmilne 
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
    Rhoderick,
    Thanks for the response. When checking the logs, the highest number of hits were from the (source) load balancers, port 25 VIP. The problems I was experiencing were the following:
    1) I kept expecting the log file generation to drop to an acceptable rate of 10-20 MB per minute (max). We have a large environment and use the Exchange servers as the mail relays for the hated Nagios monitoring environment.
    2) We didn't have our enterprise monitoring system watching SMTP traffic; this is being resolved.
    3) I needed to look closer at the SMTP transport database counters, logs, and log files and focus less on the database log generation. I did do some of that, but not enough.
    4) My troubleshooting kept getting thrown off because the monitoring notifications seemed to be sent out in batches (or something similar); stopping the transport service for 10-15 minutes several times seemed to finally "stop the transaction logs
    from growing at a psychotic rate".
    5) I am re-running my data captures now that I have told the "Nagios team" to quit killing the Exchange servers with their notifications, sometimes as many as 100+ of the same notifications for the same servers and issues. So far, at a quick glance,
    the log file generation seems to have dropped by about 30%.
    Question: What would be the best counters to review in order to "put it all together"? Also note: our server roles are split, MBX and CAS/HT.
    Robert 
    Robert

  • Archived log files not registered in the Database

    I have Windows Server 2008 R2
    I have Oracle 11g R2
    I configured primary and standby databases on 2 physical servers; please find the details below:
    I am using DG Broker
    Recently I did a failover from primary to standby
    Then I did REINSTATE DATABASE to return the old primary to standby mode
    Then I did a switchover again
    My problem is that the archive logs are not being registered or applied.
    SQL> select max(sequence#) from v$archived_log; 
    MAX(SEQUENCE#)
             16234
    I did alter system switch logfile, then I issued the following statement to check, and I found the same number on primary and standby; it has not changed.
    SQL> select max(sequence#) from v$archived_log;
    MAX(SEQUENCE#)
             16234
    Any body can help please?
    Regards

    Thanks for the reply.
    What I mean is that after I do alter system switch logfile, I can see the archived log file generated on the physical disk, but when I run
    select MAX(SEQUENCE#) FROM V$ARCHIVED_LOG;
    the sequence number does not change. It should increase by 1 whenever I do a switch logfile.
    however I did as you asked please find the result below:
    SQL> alter system switch logfile;
    System altered.
    SQL> /
    System altered.
    SQL> /
    System altered.
    SQL> /
    System altered.
    SQL> SELECT DB_NAME,HOSTNAME,LOG_ARCHIVED,LOG_APPLIED_02,LOG_APPLIED_03,APPLIED_TIME,LOG_ARCHIVED - LOG_APPLIED_02 LOG_GAP_02,
      2  LOG_ARCHIVED - LOG_APPLIED_03 LOG_GAP_03
      3  FROM (SELECT NAME DB_NAME FROM V$DATABASE),
      4  (SELECT UPPER(SUBSTR(HOST_NAME, 1, (DECODE(INSTR(HOST_NAME, '.'),0, LENGTH(HOST_NAME),(INSTR(HOST_NAME, '.') - 1))))) HOSTNAME FROM V$INSTANCE),
      5  (SELECT MAX(SEQUENCE#) LOG_ARCHIVED FROM V$ARCHIVED_LOG WHERE DEST_ID = 1 AND ARCHIVED = 'YES'),
      6  (SELECT MAX(SEQUENCE#) LOG_APPLIED_02 FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES'),
      7  (SELECT MAX(SEQUENCE#) LOG_APPLIED_03 FROM V$ARCHIVED_LOG WHERE DEST_ID = 3 AND APPLIED = 'YES'),
      8  (SELECT TO_CHAR(MAX(COMPLETION_TIME), 'DD-MON/HH24:MI') APPLIED_TIME FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES');
    DB_NAME  HOSTNAME      LOG_ARCHIVED  LOG_APPLIED_02  LOG_APPLIED_03  APPLIED_TIME  LOG_GAP_02  LOG_GAP_03
    EPPROD   CORSKMBBOR01  16252         16253           (null)          15-JAN/12:04  -1          (null)
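    Since LOG_APPLIED_03 is null, it may also be worth checking the state of each archive destination. A minimal sketch (assuming access to v$archive_dest on the primary) to surface destination errors:
    -- Show configured archive destinations with their status and last error.
    SELECT dest_id, dest_name, status, error
    FROM   v$archive_dest
    WHERE  status <> 'INACTIVE';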
