Best Way to monitor standby, primary databases, including alert logs, etc.

Hi guys, I finally cut over to the new Red Hat Linux environment and everything is working great so far (the primary/standby).
Now I would like to set up monitoring scripts so it can run by itself.
What is the best way?
I talked to a DBA friend outside the company, and he told me his shop doesn't use any cron jobs to monitor; they use Grid Control.
We have no Grid Control. I would like to see what the best option is here. Should we set up Grid Control?
In the meantime, I would also appreciate any good ideas for cron job scripts.
Thanks

Hello;
I came up with this script, which I run on the primary daily. Since it's SQL, you can add any extras you need.
SPOOL OFF
CLEAR SCREEN
SPOOL /tmp/quickaudit.lst
PROMPT
PROMPT -----------------------------------------------------------------------|
PROMPT
SET TERMOUT ON
SET VERIFY OFF
SET FEEDBACK ON
PROMPT
PROMPT Checking database name and archive mode
PROMPT
column NAME format A9
column LOG_MODE format A12
SELECT NAME,CREATED, LOG_MODE FROM V$DATABASE;
PROMPT
PROMPT -----------------------------------------------------------------------|
PROMPT
PROMPT
PROMPT Checking Tablespace name and status
PROMPT
column TABLESPACE_NAME format a30
column STATUS format a10
set pagesize 400
SELECT TABLESPACE_NAME, STATUS FROM DBA_TABLESPACES;
PROMPT
PROMPT ------------------------------------------------------------------------|
PROMPT
PROMPT
PROMPT Checking free space in tablespaces
PROMPT
column tablespace_name format a30
SELECT tablespace_name ,sum(bytes)/1024/1024 "MB Free" FROM dba_free_space WHERE
tablespace_name <>'TEMP' GROUP BY tablespace_name;
PROMPT
PROMPT ------------------------------------------------------------------------|
PROMPT
PROMPT
PROMPT Checking freespace by tablespace
PROMPT
column dummy noprint
column  pct_used format 999.9       heading "%|Used"
column  name    format a16      heading "Tablespace Name"
column  bytes   format 9,999,999,999,999    heading "Total Bytes"
column  used    format 99,999,999,999   heading "Used"
column  free    format 999,999,999,999  heading "Free"
break   on report
compute sum of bytes on report
compute sum of free on report
compute sum of used on report
set linesize 132
set termout off
select a.tablespace_name                                              name,
       b.tablespace_name                                              dummy,
       sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )      bytes,
       sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id ) -
       sum(a.bytes)/count( distinct b.file_id )              used,
       sum(a.bytes)/count( distinct b.file_id )                       free,
       100 * ( (sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )) -
               (sum(a.bytes)/count( distinct b.file_id ) )) /
       (sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )) pct_used
from sys.dba_free_space a, sys.dba_data_files b
where a.tablespace_name = b.tablespace_name
group by a.tablespace_name, b.tablespace_name;
PROMPT
PROMPT ------------------------------------------------------------------------|
PROMPT
PROMPT
PROMPT Checking Size and usage in GB of Flash Recovery Area
PROMPT
SELECT
  ROUND((A.SPACE_LIMIT / 1024 / 1024 / 1024), 2) AS FLASH_IN_GB,
  ROUND((A.SPACE_USED / 1024 / 1024 / 1024), 2) AS FLASH_USED_IN_GB,
  ROUND((A.SPACE_RECLAIMABLE / 1024 / 1024 / 1024), 2) AS FLASH_RECLAIMABLE_GB,
  SUM(B.PERCENT_SPACE_USED)  AS PERCENT_OF_SPACE_USED
FROM
  V$RECOVERY_FILE_DEST A,
  V$FLASH_RECOVERY_AREA_USAGE B
GROUP BY
  SPACE_LIMIT,
  SPACE_USED ,
  SPACE_RECLAIMABLE ;
PROMPT
PROMPT ------------------------------------------------------------------------|
PROMPT
PROMPT
PROMPT Checking free space In Flash Recovery Area
PROMPT
column FILE_TYPE format a20
select * from v$flash_recovery_area_usage;
PROMPT
PROMPT ------------------------------------------------------------------------|
PROMPT
PROMPT
PROMPT ------------------------------------------------------------------------|
PROMPT
PROMPT
PROMPT Checking last sequence in v$archived_log
PROMPT
clear screen
set linesize 100
column STANDBY format a20
column applied format a10
--select max(sequence#), applied from v$archived_log where applied = 'YES' group by applied;
SELECT  name as STANDBY, SEQUENCE#, applied, completion_time from v$archived_log WHERE  DEST_ID = 2 AND NEXT_TIME > SYSDATE -1;
prompt
prompt ---------------Last log on Primary---------------------------------------|
prompt
select max(sequence#) from v$archived_log where NEXT_TIME > sysdate -1;
PROMPT
PROMPT ------------------------------------------------------------------------|
PROMPT
PROMPT
PROMPT Checking switchover status
PROMPT
select switchover_status from v$database;
SPOOL OFF

I run it from a shell script and email myself quickaudit.lst.
Alert logs are a great source of information when you have an issue or just want to check something.
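For the cron side, a minimal wrapper in this spirit could refresh the spool, check the apply gap, and mail the report. This is only a sketch, not a tested job: the paths, SID, threshold, and mail address are all assumptions, and the sqlplus/mailx invocations are left as comments so the gap-check logic stands on its own:

```shell
#!/bin/sh
# Hypothetical cron wrapper for the audit script above. In a real job
# you would first refresh the spool and then mail it, for example:
#   sqlplus -S "/ as sysdba" @/home/oracle/scripts/quickaudit.sql
#   mailx -s "quickaudit $(date +%F)" dba@example.com < /tmp/quickaudit.lst
# (left as comments here so the file runs without an Oracle client)

# check_gap PRIMARY_SEQ APPLIED_SEQ THRESHOLD
# Succeeds (exit 0) when the standby is within THRESHOLD archived-log
# sequences of the primary; prints an ALERT line otherwise.
check_gap() {
  gap=$(( $1 - $2 ))
  if [ "$gap" -gt "$3" ]; then
    echo "ALERT: standby is $gap sequences behind"
    return 1
  fi
  echo "OK: standby is $gap sequences behind"
  return 0
}

# The two sequence numbers would come from the v$archived_log queries
# in the audit (primary max sequence# vs. last applied on the standby).
check_gap 105 103 5
```

The numbers fed to check_gap here are placeholders; in practice you would parse them out of the spool file or select them directly with two small sqlplus -S calls.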
Best Regards
mseberg

Similar Messages

  • Best way to monitor the ON time of something in a minute ?!

    Greetings everybody,
I first have to thank everybody who offers help to others here.
    I have a question regarding the Best way to monitor the ON time of something in a minute.
    Say I have an On/Off switch that I want to know how many seconds that it was ON in the last minute (say) .. and reports that to a file or database each minute. So every minute I send a report to the DataBase with the number of seconds the switch was ON in the last minute.
I have already made a solution, but I don't think it's that good and there is a problem with it. Please check my VI, as it describes the solution better than my words here.
    Any comment is appreciated.
    Thanks in advance.
    Ayman Mohammad Metwally
    Automation Engineer
    Egypt - Cairo
    Attachments:
On timet.vi (127 KB)

    Hello Ayman,
    I attached a changed version of your vi. It uses two parallel loops.
The communication is made via a local variable and controlled by a flag.
    Just have a look to get the idea.
You can also do the communication in different ways, like queues...
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome
    Attachments:
OnTime 2.vi (37 KB)

  • What is the best way of monitoring HD content in FCP?

    I hope to get the Mac Pro listed below, What is the best way of monitoring my HD edit from FCP?
    Can I output to a HDTV, or is a computer monitor better?
Do I need a capture card to output the signal, or is my system below adequate? I don't need one for capturing, as I will be using a Panasonic AG-HVX200 with a P2 card. Can I use this camera to output to a TV/monitor?
    Message was edited by: calihal

    Well, given your situation, the best option is the Matrox MXO and Apple Cinema Display. I have a full review here as to why it is a good solution.
    http://library.creativecow.net/articles/ross_shane/MXO.php
    This will cost you $995 for the MXO, and $900 for the Apple display. So under $2k. The other options are more expensive. Broadcast quality HD LCDs start at $3500. And other capture cards range from $295 (decklink Intensity) to $3500 (AJA and Decklink). And HDTV will still require a capture card, like the Intensity with HDMI out, but won't be suitable for broadcast quality. It will, however, be perfectly fine for seeing what your HD footage looks like on an HD set that most people will have in their homes. So I take back my initial statement and say that if you have a tower, the Intensity and HDTV is your best option...if you aren't needing full broadcast quality. If you just need to see what it looks like.
    Shane

  • What is the best way to import a full database?

    Hello,
    Can anyone tell me, what is the best way to import a full database called test, into an existing database called DEV1?
When importing into an existing database, do you have to drop the existing users? Say the pinfo and tinfo schemas are there. Do I have to drop these and recreate them, or how will it work when you import a full database?
    Could you please give step by step instructions....
    Thanks a lot...

    Nayab,
http://youngcow.net/doc/oracle10g/backup.102/b14191/rcmdupdb005.htm
A suggestion: please don't use external sites that host Oracle docs, since there can be no assurance that they update their content with the latest corrections. You can see the updated part number on the actual doc site from Oracle:
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#i1009381
    Aman....

  • My Mum( a pensioner) is wanting to purchase both an iPhone and iPad.  What is the best way for her to manage her data /calls/txt etc. obviously the cost needs to be as low as possible.  Currently does not have WiFi but uses dongle

My Mum (a pensioner) wants to purchase both an iPhone and an iPad. What is the best way for her to manage her data/calls/texts etc.? Obviously the cost needs to be as low as possible. She currently does not have WiFi but uses a dongle.

  • Best way to load initial TimesTen database

I have a customer that wants to use TimesTen as a pure in-memory database. This IMDB has about 65 tables, some having upwards of 6 million rows. What is the best way to load this data? There is no cache-connect option being used. I am thinking insert is the only option here. Are there any other options?
Thanks

    You can also use the TimesTen ttbulkcp command line utility, this tool is similar to SQL*Loader except it handles both import and export of data.
    For example, the following command loads the rows listed in file foo.dump into a table called foo in database mydb, placing any error messages into the file foo.err.
    ttbulkcp -i -e foo.err dsn=mydb foo foo.dump
    For more information on the ttbulkcp utility you can refer to the Oracle TimesTen API Reference Guide.

  • Best way to deploy a new database

What is the best way to deploy a database for a user base that mostly doesn't understand how to use SQL-based db products, or understands only a little?
    I'm current working on a setup utility for my desktop application, which uses MySQL, right now I'm at a design issue where I'm not sure how to get the database deployed.
I have a creation script for deploying the database, but I'm not sure whether to create a default user, assign the rights to the user, or make the user customizable [which starts to branch off way too much]. The desktop application does have an option of using an already deployed database elsewhere or creating one locally.
Does anyone have suggestions for deploying databases for desktop applications? I know that Derby is a great solution for this; however, it is not nearly powerful enough to handle what I need a database for [a large number of transactions and comparisons, really quickly]. I have also been unable to find information on this.

Are you talking about creating another copy of your existing DB on the same server?
(Just want to confirm, as your last line seems to contradict this.)
    2 ways:
    Go for RESTORE database with the Source option pointing to one of the existing databases.
    Go for COPY DATABASE (useful when the copy of the db is to be put in another server maybe..)
    Note that you would need backup of the existing DB to proceed..
Thanks, Jay <If the post was helpful, mark as 'Helpful'; if the post answered your query, mark as 'Answered'>

  • What is the best way to monitor taffic across a Campus?

    I am trying to find the best way/ways to monitor traffic across a campus network. The two solutions I have thought of are using Netflow or ERSPAN. However, neither are supported by the devices in this network. Here is a quick overview of the network...
    Core Switches (3750 Stacks) using Layer 3
    |
    Distribution Switches (3750 & 3650s) using L3 towards Core and L2 towards Access
    |
    Access Switches (Mostly 3500s) using L2
    What are the best options for monitoring traffic on this type of network? All links between switches are Gig, so we have plenty of bandwidth. I would really like to be able to setup snort/ntop or something similar.
    Are there any solutions available that I could use RSPAN and a monitoring computer at the Access Switches and have them report back to a central monitoring machine? I would prefer a centralized solution.
    Thanks,
    Garrett

    Hello Garrett
    Each monitoring software has its own limitations/specifications..
If you want to monitor traffic/protocols running on your network on a constant basis, you will have to use NetFlow. You can use a simple NetFlow collector, collect reports, and analyze the application traffic on your LAN/WAN. Not sure if this will help too much in troubleshooting, since it is more useful for trending your applications. You can probably discover new applications which aren't used much on your network this way.
But for real troubleshooting, you will need something like a syslog server. You can configure logging levels and push important errors/updates from the Cisco gear to this box. If a device goes down or has issues, its system log messages will have been dumped to this server, making it very useful for troubleshooting; e.g., Kiwi CatTools, SolarWinds, 3CDaemon, and lots of other freeware.
I would ideally have both of these components on my network, for trending and troubleshooting.
Apart from this, if you have other advanced technology products, like wireless, application acceleration, etc., there are other network management solutions available.
Hope this helps. All the best. Rate replies if found useful.
    Raj

  • Best way to check whether the database is demo or sys?

    Hi Gurus,
What's the best way to check whether the installed PeopleSoft database is a demo or a sys?
Thanks for the help!
    Regards,
    Anoop

    There is nothing set by default.
    However, if it has been configured properly by the administrator after db creation, through the menu Peopletools>Utilities>Administration>Peopletools Options, the following query should return the type of the database :
select systemtype from psoptions;
Otherwise the following could help you understand what database you are on.
    From HRMS9.1 DMO database :
SQL> select count(*) from ps_employees;
  COUNT(*)
      2792
From HRMS9.1 SYS database :
SQL> select count(*) from ps_employees;
  COUNT(*)
         0
Nicolas.

  • Best Way to Drop a 10g Database

    hi experts,
    This is 10g on Windows.
    I have 3 10g databases on this server and I need to drop and recreate 1 of the databases.
    What is the best way to get the cleanest, most thorough deletion?
    I'm thinking of doing:
    shutdown immediate;
    startup mount exclusive restrict;
    drop database;
    is there a better option?
    Thanks, John

    No.
    Though the "EXCLUSIVE" keyword is no longer required ... at least in 11gR1 and perhaps not in your version either.

  • Oracle 10.1, Whats the best way to load XML in database?

    Hi All,
I am a typical Oracle developer. I know some Java and some XML technologies, but I am not an expert.
    I will be receiving XML files from some system, which will be,
    - of reasonable size like 2 to 15 MBs
- of reasonable complexity: the root element has children, grandchildren and great-grandchildren, with attributes and all
    - Every day it needs to be loaded to Oracle database, in relational format
    - we need not update the XML data, only put the XML data in relational table
    My questions are,
    - With Oracle 10.1, XML DB, what is the best way to load this XML file to relational Oracle tables ?
    - What can be the steps in this process ?
    - In the documentation, I was lost and was not able to decide anything concrete
    - If I write a pure Java program with SAX API and load the data to Oracle database in same program, is it a good idea?
    - Is there any pure Oracle based way to do this?
    - If I want to avoid using DOM parser, as it will need more JAVA_POOL_SIZE, what can be the way ?
    Please help.
    Thanks

Many customers solve this problem by registering an XML Schema that corresponds to their XML and then creating relational views over the XML that allow them to access the content in a relational manner. They then use insert-as-select operations on the relational views to transfer data from the XML into relational tables where necessary. There are a large number of threads in this forum with detailed examples of how this can be done. Most of the customers who have adopted this approach have discovered that it is the least complex approach in terms of code to be developed and maintained, and offers acceptable performance.

  • Best Way to Replicate Azure SQL Databases to Lower Environments

I have XML files delivered to my server where they are parsed into reference data and written to a database (Premium tier). I want to have that database sync to other databases (Basic tier) so that my non-production environments can use the same reference data.
I tried Data Sync and it seems incredibly slow. Is Azure Data Sync the best way? What are my other options? I don't really want to change my parser to write to 3 different databases each time it receives an updated XML file, but I suppose that is an option.

Greg,
Data Sync is one of the options, but I wouldn't recommend it, as the Data Sync service is going to be deprecated in the near future. I would urge you to go through the options around geo-replication. There are 3 versions of geo-replication, and I believe active geo-replication would suit your requirement; however, the copy of the database which is in sync will also have to be in the same service tier (Basic is not possible). With the current Azure offering, it is not possible to have a sync copy of a database with different SLOs. I would also recommend you open a support incident with Microsoft to understand the different options of geo-replication. I composed my answer with DR (disaster recovery) in mind; if I am mistaken, please let me know.
-Karthik Krishnamurthy (SQK Azure KKB)

  • The best way to populate a secondary database

    I'm trying to create a secondary database over an existing primary database. I've looked over the GSG and Collections docs and haven't found an example that explicitly populates a secondary database.
    The one thing I did find was setAutoPopulate(true) on the SecondaryConfig.
    Is this the only way to get a secondary database populated from a primary? Or is there another way to achieve this?
    Thanks

"However, after primary and secondary are in sync, going forward, I'm unsure of the mechanics of how to automagically ensure that updates to primary db are reflected in secondary db."
I'm sorry, I misunderstood your question earlier.
"Does JE take care of updating secondary db in such cases (provided both DBs are open)? In other words, if I have a Map open on the primary and do a put(), I can turn around and query the secondary (with apt key) and I should be able to retrieve the record I just put into the primary?"
Yes, JE maintains the secondaries automatically. The only requirement is that you always keep the secondary open while writing to the primary. JE uses your SecondaryKeyCreator implementation (you pass this object to SecondaryConfig.setKeyCreator when opening the secondary) to extract the secondary keys from the primary record, and automatically insert, update and delete records in the secondary databases as necessary.
    For the base API and collections API, JE does not persistently store the association between primaries and secondaries, so you must always open your secondary databases explicitly after opening your primary databases. For the DPL API (persist package), JE maintains the relationship persistently, so you don't have to always open the secondary indexes explicitly.
"I couldn't find an example illustrating this (nice) feature - hence the questions."
For the collections API (I see you're using the collections API):
    http://www.oracle.com/technology/documentation/berkeley-db/je/collections/tutorial/UsingSecondaries.html
    In the examples directory:
    examples/collections/* -- all but the basic example use secondaries
    Mark

  • Best way to transfer a 10g database from HP9000 to Linux Redhat?

    What is the best way to transfer a 10g databasse from HP9000 to Linux Redhat?

    Hi Bill,
"What is the best way to transfer a 10g database from HP9000 to Linux Redhat?"
Define "best"? There are many choices, each with their own benefits . . .
    Fastest?
If you are on an SMP server, parallel CTAS over a database link can move large amounts of data, fast:
    http://www.dba-oracle.com/t_create_table_select_ctas.htm
    I've done 100 gig per hours . . .
    Easiest?
If you are a beginner, Data Pump is good, and I have some tips on doing it quickly:
    http://www.dba-oracle.com/oracle_tips_load_speed.htm
Also, make sure to check the Linux kernel settings. I query http://www.tpc.org and search for the server type . . .
    The full disclosure reports show optimal kernel settings.
    Finally, don't forget to set direct I/O in Linux:
    http://www.dba-oracle.com/t_linux_disk_i_o.htm
    Hope this helps . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference" http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

  • What is the best way to archive old footage (DVD, VHS, Super 8 etc.)

    Hello,
I am starting a huge archiving project of possibly 100-200 hours of footage from all kinds of sources (VHS tapes, Mini DV, Super 8 film).
    What is the best way to archive this footage at the highest possible resolution?
    What sort of file should be saved?
I'm a complete starter at this.
    Thank you!

    One strategy is to purchase a Canopus ADVC300 analog/dv converter. It has a TBC incorporated.
    You connect your original source material (for example - VHS deck) to the Canopus via analog cables then from the Canopus via firewire to your computer. This will turn everything into DV format.
    Play the tape through the canopus and record to your hard drive.
    Take good notes for each tape. Create a logging sheet and make notes as the system captures.
    Once you have the tapes in the computer, break them into 1 hr or less segments by some reasonable system - chronologically, by personalities, locations or whatever makes sense to you.
    Then write out each 1 hour segment back to a DV recorder. Give each tape a unique reel number - and make the reel number part of the file name on the computer.
    When you are done in a year or two ( lol ) you should have two matching sets of material - one on the hard drives and one on tape - with names that make it easy to cross reference and a binder full of logging information. If you want to be fully digital, look into a nifty application like CatDV. It will help organize a mass of video info like this.
DV runs at ~13-14 GB/hr. Plan your storage accordingly.
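At that data rate, a quick back-of-envelope calculation (using the 200-hour upper bound and the ~13 GB/hr low end quoted in this thread, both estimates rather than measurements) sizes the disk requirement:

```shell
# Rough storage estimate for the project above; hours and GB/hr are
# the figures quoted in this thread, not measured values.
hours=200
gb_per_hour=13
echo "$((hours * gb_per_hour)) GB total"   # 2600 GB, i.e. roughly 2.6 TB
```

And that is before the second, matching tape set doubles the footprint.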
    Purchase professional quality DV tapes - not drugstore junk.
    Good luck.
    x
