Best way to drop standby database

Hi
Oracle RDBMS 11.2.0.2 on RHEL 5.6.
I need to drop the standby database completely and rebuild the physical standby for the same database. We identified a lot of inconsistencies between the primary and the standby. How do I remove the standby, and what is the best procedure?
Thanks

Follow this: *Step By Step How to Recreate Dataguard Broker Configuration [ID 808783.1]*
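For the common case (broker configuration aside), a hedged sketch of the sequence: drop the standby from SQL*Plus, then rebuild it from the primary with RMAN duplication. Connection strings and service names (`prim`, `stby`) are placeholders, not values from the thread:

```sql
-- On the STANDBY only. Verify first that you are NOT on the primary:
-- SELECT database_role FROM v$database;   -- must return PHYSICAL STANDBY

-- 1) Stop redo apply and drop the standby (removes its datafiles and logs):
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT RESTRICT;        -- EXCLUSIVE is implied in 11g
DROP DATABASE;

-- 2) Recreate it from the primary. 11.2 supports active duplication, so after
--    restoring the spfile and password file on the standby host, from RMAN:
-- rman TARGET sys@prim AUXILIARY sys@stby
-- DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE NOFILENAMECHECK;
```

Active duplication copies the datafiles over the network, so no staged backup is needed; the MOS note above covers rebuilding the broker configuration afterwards.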

Similar Messages

  • What is the best way to install standby database?

    Hello.
I'm trying to develop a good scenario for implementing a standby database. I've already installed and configured the standby, and it's working.
Is there any best practice for installing the database instance for the standby (i.e. using SWPM)?

    Hi
If you are talking about an Oracle standby database (a logical assumption, as you are posting in the Oracle space), SWPM won't help you; it is meant for installing SAP instances.
For an Oracle standby DB you need to install an Oracle instance and restore the master DB onto it.
    Regards
    http://docs.oracle.com/cd/E11882_01/server.112/e17022/rcmbackp.htm
    http://www.pythian.com/blog/creating-a-physical-standby/

  • What is the best way to create a database schema from XML

    What is the best way to create a database schema from XML?
I have a complex XML file that I want to create a database from, and I need to consistently import new XML files of the same schema type. Currently I have started by mapping the XSD into Excel and using MySQL for Excel to push into MySQL.
There must be a more .NET/Microsoft solution for this, but I cannot locate the topic and tools by searching. What are the best tools and the best way to manage this?
    Taking my C# further

    Hi Saythj,
When you mention "a database schema from XML", do you mean XML Schema Collections? If so, to import XML files of the same schema type you may take the approach below.
Create an XML Schema Collection based on your complex XML; you can find many generating tools online to do that.
Create a table with a column typed by that schema collection, as below.
CREATE TABLE yourTable( Col1 int, Col2 xml (yourXMLSchemaCollection))
Load your XML files and insert the XML content into the table above from C# or by some other approach. Any XML that fails validation against the schema collection will fail to insert into that table.
    If you have any question, feel free to let me know.
    Eric Zhang
    TechNet Community Support
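A minimal end-to-end sketch of the approach Eric describes: register a schema collection, create the typed XML column, and let inserts validate themselves. The schema body and all names here are placeholder assumptions, trimmed to a trivial element:

```sql
-- Register a schema collection (body abbreviated to a trivial schema):
CREATE XML SCHEMA COLLECTION yourXMLSchemaCollection AS
N'<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="item" type="xs:string"/>
  </xs:schema>';

-- Typed XML column: every insert is validated against the collection.
CREATE TABLE yourTable (
    Col1 int,
    Col2 xml (yourXMLSchemaCollection)
);

INSERT INTO yourTable VALUES (1, N'<item>ok</item>');  -- passes validation
-- INSERT INTO yourTable VALUES (2, N'<other/>');      -- raises a validation error
```

Generating the real collection from your XSD is the part the online tools handle; the table and insert pattern stay the same.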

  • Best way to update DR database from PROD server automatic ?

    Hello Gurus,
I want to decide on the best solution.
Goal: the best way to update the DR database from the PROD server database automatically.
System: 32-bit Linux with Oracle Database 10g Release 10.2.0.4.0 - Production (Standard Edition).
Please let me know how I can update my DR server with PROD data; this task should be automatic.
Please ask me for more information if required to solve this issue.
    Thanks- Priyank
    Edited by: Oracle DBA P on Nov 19, 2010 3:06 AM

Do you mean that Data Guard needs to be implemented? I think that's one option, but is what you said something different?
Tell me the procedure for implementing your idea, i.e. the steps I have to perform.
    Thanks
    Edited by: Oracle DBA P on Nov 19, 2010 3:38 AM

  • Best Way to monitor standby, primary databases, including alert logs, etc.

Hi guys, I finally cut over to the new environment on the new Linux Red Hat, and everything is working great so far (the primary/standby).
Now I would like to set up monitoring scripts to monitor it automatically, so I can let it run by itself.
What is the best way?
I talked to another DBA friend outside the company, and he told me his shop does not use any cron jobs to monitor; they use Grid Control.
We have no Grid Control. I would like to see what the best option is here: should we set up Grid Control?
In the meantime, I would also appreciate any good ideas for cron job scripts.
    Thanks

    Hello;
I came up with this, which I run on the primary daily. Since it's SQL you can add any extras you need.
    SPOOL OFF
    CLEAR SCREEN
    SPOOL /tmp/quickaudit.lst
    PROMPT
    PROMPT -----------------------------------------------------------------------|
    PROMPT
    SET TERMOUT ON
    SET VERIFY OFF
    SET FEEDBACK ON
    PROMPT
    PROMPT Checking database name and archive mode
    PROMPT
    column NAME format A9
    column LOG_MODE format A12
    SELECT NAME,CREATED, LOG_MODE FROM V$DATABASE;
    PROMPT
    PROMPT -----------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking Tablespace name and status
    PROMPT
    column TABLESPACE_NAME format a30
    column STATUS format a10
    set pagesize 400
    SELECT TABLESPACE_NAME, STATUS FROM DBA_TABLESPACES;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking free space in tablespaces
    PROMPT
    column tablespace_name format a30
    SELECT tablespace_name ,sum(bytes)/1024/1024 "MB Free" FROM dba_free_space WHERE
    tablespace_name <>'TEMP' GROUP BY tablespace_name;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking freespace by tablespace
    PROMPT
    column dummy noprint
    column  pct_used format 999.9       heading "%|Used"
    column  name    format a16      heading "Tablespace Name"
    column  bytes   format 9,999,999,999,999    heading "Total Bytes"
    column  used    format 99,999,999,999   heading "Used"
    column  free    format 999,999,999,999  heading "Free"
    break   on report
    compute sum of bytes on report
    compute sum of free on report
    compute sum of used on report
    set linesize 132
    set termout off
    select a.tablespace_name                                              name,
           b.tablespace_name                                              dummy,
           sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )      bytes,
           sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id ) -
           sum(a.bytes)/count( distinct b.file_id )              used,
           sum(a.bytes)/count( distinct b.file_id )                       free,
           100 * ( (sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )) -
                   (sum(a.bytes)/count( distinct b.file_id ) )) /
           (sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )) pct_used
    from sys.dba_free_space a, sys.dba_data_files b
    where a.tablespace_name = b.tablespace_name
    group by a.tablespace_name, b.tablespace_name;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking Size and usage in GB of Flash Recovery Area
    PROMPT
    SELECT
      ROUND((A.SPACE_LIMIT / 1024 / 1024 / 1024), 2) AS FLASH_IN_GB,
      ROUND((A.SPACE_USED / 1024 / 1024 / 1024), 2) AS FLASH_USED_IN_GB,
      ROUND((A.SPACE_RECLAIMABLE / 1024 / 1024 / 1024), 2) AS FLASH_RECLAIMABLE_GB,
      SUM(B.PERCENT_SPACE_USED)  AS PERCENT_OF_SPACE_USED
    FROM
      V$RECOVERY_FILE_DEST A,
      V$FLASH_RECOVERY_AREA_USAGE B
    GROUP BY
      SPACE_LIMIT,
      SPACE_USED ,
      SPACE_RECLAIMABLE ;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking free space In Flash Recovery Area
    PROMPT
    column FILE_TYPE format a20
    select * from v$flash_recovery_area_usage;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking last sequence in v$archived_log
    PROMPT
    clear screen
    set linesize 100
    column STANDBY format a20
    column applied format a10
    --select max(sequence#), applied from v$archived_log where applied = 'YES' group by applied;
    SELECT  name as STANDBY, SEQUENCE#, applied, completion_time from v$archived_log WHERE  DEST_ID = 2 AND NEXT_TIME > SYSDATE -1;
    prompt
    prompt----------------Last log on Primary--------------------------------------|
    prompt
    select max(sequence#) from v$archived_log where NEXT_TIME > sysdate -1;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking switchover status
    PROMPT
select switchover_status from v$database;
I run it from a shell script and email myself quickaudit.lst
    Alert logs are great source of information when you have an issue or just want to check something.
    Best Regards
    mseberg

  • Best Way to Drop a 10g Database

    hi experts,
    This is 10g on Windows.
    I have 3 10g databases on this server and I need to drop and recreate 1 of the databases.
    What is the best way to get the cleanest, most thorough deletion?
    I'm thinking of doing:
    shutdown immediate;
    startup mount exclusive restrict;
    drop database;
    is there a better option?
    Thanks, John

    No.
    Though the "EXCLUSIVE" keyword is no longer required ... at least in 11gR1 and perhaps not in your version either.
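As a hedged sketch of the two variants (the RMAN route is an addition, not something from the thread; it is worth considering because it can also remove registered backups):

```sql
-- SQL*Plus: the sequence from the question, with EXCLUSIVE omitted:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT RESTRICT;
DROP DATABASE;

-- Or from RMAN, which can additionally delete backups and archived logs
-- known to the repository (DBA starts the instance in restricted mode):
-- rman TARGET /
-- STARTUP FORCE MOUNT DBA;
-- DROP DATABASE INCLUDING BACKUPS NOPROMPT;
```

Either way the datafiles, online logs, and controlfiles are deleted; on Windows you would still remove the service afterwards with `oradim -delete -sid <SID>`.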

  • What is the best way to drop and recreate a Primary Key in the Replication Table?

I have a requirement to drop and recreate a primary key in a table which is part of transactional replication. What is the best way to do it, other than removing it from replication and adding it again?
    Thanks
    Swapna

    Hi Swapna,
    Unfortunately you cannot drop columns used in a primary key from articles in transactional replication.  This is covered in
    Make Schema Changes on Publication Databases:
    You cannot drop columns used in a primary key from articles in transactional publications, because they are used by replication.
    You will need to drop the article from the publication, drop and recreate the primary key, and add the article back into the publication.
    To avoid having to send a snapshot down to the subscriber(s), you could specify the option 'replication support only' for the subscription.  This would require the primary key be modified at the subscriber as well prior to adding the article back in
    and should be done during a maintenance window when no activity is occurring on the published tables.
    I suggest testing this out in your test environment first, prior to deploying to production.
    Brandon Williams (blog |
    linkedin)
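A hedged sketch of the drop/re-add cycle on the publisher, following the steps above. Publication, article, subscriber, and key names are all placeholders; run this during a maintenance window, and modify the key at the subscriber between steps 2 and 3:

```sql
-- 1) Remove the article's subscription(s), then the article itself:
EXEC sp_dropsubscription @publication = N'YourPub', @article = N'YourTable',
                         @subscriber = N'all';
EXEC sp_droparticle      @publication = N'YourPub', @article = N'YourTable';

-- 2) Drop and recreate the primary key (here and at the subscriber):
ALTER TABLE dbo.YourTable DROP CONSTRAINT PK_YourTable;
ALTER TABLE dbo.YourTable ADD CONSTRAINT PK_YourTable PRIMARY KEY (NewKeyCol);

-- 3) Add the article back and re-subscribe without sending a snapshot:
EXEC sp_addarticle       @publication = N'YourPub', @article = N'YourTable',
                         @source_object = N'YourTable';
EXEC sp_addsubscription  @publication = N'YourPub', @subscriber = N'YourSub',
                         @destination_db = N'YourDb',
                         @sync_type = N'replication support only';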

  • Best Way to Drop Large Clob Column?

    I have a very large partitioned table that contains XML documents stored in a clob column. Aside from the clob column there are several varchar and numeric columns in the table that are related to each document. We have decided to move the XML out of Oracle and into text files on the OS but want to keep the other data in Oracle. Each partition has a tablespace for the clob column and a tablespace for the other columns.
    What is the best (quickest/most efficient) way to drop the clob column and free up the space that it is currently using?
    OS: HP-UX
    Oracle: 11.2.0.3
    Table Partitions: 27
    Table Rows: 550,000,000
    Table Size: around 15 TB with 95% of that found in the column to drop
    One other wrinkle, there are several tables that have a foreign key relationship back to the primary key of the table in question. Three of those tables are multi-billion rows in size.

    Hi,
You can mark the column unused first, then drop it with the CHECKPOINT clause of the drop statement;
please visit the link below, it may help you.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:623063677753
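A minimal sketch of that mark-unused-then-drop approach; the table, column, and tablespace names are placeholders, and the checkpoint interval is an arbitrary assumption. At 15 TB you would want to rehearse this on a copy to time it first:

```sql
-- 1) Instantly mark the CLOB column unused (fast dictionary operation;
--    the space is not yet reclaimed):
ALTER TABLE your_big_table SET UNUSED (xml_doc);

-- 2) Physically remove it, checkpointing every N rows so undo stays bounded:
ALTER TABLE your_big_table DROP UNUSED COLUMNS CHECKPOINT 10000;

-- 3) If the CLOB tablespaces per partition held nothing else, reclaim them:
-- DROP TABLESPACE clob_part_ts INCLUDING CONTENTS AND DATAFILES;
```

Since each partition keeps the CLOB in its own tablespace, step 3 may reclaim most of the space directly, which is usually faster than shrinking segments in place.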

  • Best way to manage a database creation

    Hi,
I need some advice: I have an Informix database that I have to create and delete.
So I created two kinds of scripts: one to create the database and another one to delete it.
If a user calls the create script twice, Informix fails because it tries to create a database which already exists.
What is the right methodology to use:
1) drop the database each time before I create it, or
2) check whether the database exists and only create it if it does not?
Any suggestions?
    Cheers.
    Stefano

    If only developers use the scripts then drop it each time.
    If there is any chance that QA or production will use it then the scripts should check for an existing database, print an error message if there is one, and exit without doing anything.
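For the existence check, a sketch of how the create script could test first in Informix; the database name is a placeholder, and `sysmaster:sysdatabases` lists every database on the instance:

```sql
-- Returns one row if the database already exists, zero rows otherwise;
-- the create script can branch on the result instead of failing:
SELECT name
  FROM sysmaster:sysdatabases
 WHERE name = 'yourdb';
```

In a shell wrapper you would run this through dbaccess, inspect the output, and print an error and exit when the row is present.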

  • Best way to map CLOB database fields?

    I ran into a problem mapping CLOB fields a couple of months ago and couldn't find an easy answer to the problem. I got sidetracked and now find myself back at the same point...
What is the best practice for mapping CLOB database fields in the Mapping Workbench? Do you map them as serialized (as a note I've got here), or can someone point me in the proper direction, please? I've got everything else functioning properly, but I left my CLOB fields out of my mapping because they were giving me such trouble. Now I need to get back to that specific area.
    ~ Tiffani

    Hi Tiffani,
SerializedObjectMapping is generally used to map a multimedia object (for example, an image) to a BLOB field. For CLOB, I use a simple direct-to-field mapping, and the TopLink conversion manager will handle the read/write between the object String/char[] and the CLOB.
    King

  • What is the best way to drop out a background to white?

I have several quite old architects' drawings to reproduce. The problem is that with age, the original paper has yellowed and foxed. Can anyone suggest a good way of dropping out the background to white so that the image will look nice and clean when I print it, please? Normally I would bring the white point slider up in Levels and sort of burn it out, but I was wondering if there is a more subtle method of doing this. The trouble is I don't want to lose detail in the fine pencil lines at the same time.
    Thanks everyone - Brian.

    Thanks for that, I'll give your method a try.
No no, I don't mean an insult - you read it wrong! That's the last thing I would do when asking for help... No, I mean that just 'turning up the white level' is crude, in that it looks like it's been blasted by an atom bomb and looks awful, and I was wondering if a more scientific approach might be applicable. For example, I've tried putting colour sampler points in the image and then independently adjusting the R, G, and B levels etc. until they match at 255 each, but it still looks blown out. That's what I meant by being more 'scientific', i.e. perhaps using some of Photoshop's ability to measure colours to enable me to drop them out.
    I found a previous thread along the same lines as this and one suggestion was to use some special filters, but they turned out to be mac only.
    The 'forensic' reference was to the Color Deconvolution filter, it was designed for police forensic departments as an aid to help spot where two different colours of ink have been used on a document, for example.
    By the way, the drawings I'm working on are not all B/W only, some have colour washes on them too...
    Perhaps the only way is to try and select the background areas first and then to drop them out.

  • The Best Way to Check a Database.

    Hello All,
I'm working on an app where I'll retrieve upwards of 1 million items.
For each retrieval, I have to make sure that the item was not previously retrieved and stored. That means on the millionth item, I have to test against the previous 999,999 items. If a match is found, I ignore it; otherwise I add it to the store.
I thought about storing all items in a simple hashtable, but 1 million items in a hashtable probably wouldn't be supported on a standard 128 MB PC.
So my only other option seems to be a database.
But the time required to retrieve from and compare against the database seems like it would take forever.
My best guess would be to retrieve, say, 3000 items, then do all my tests at that time and store. Clear the memory and repeat. Is that the strategy you guys would try?
Does that seem doable? I'll be using a standard JDBC-ODBC bridge, by the way.
    stev
    Then I'm going to dump it into a database (MSAccess 2000) using the batch method -if the addBatch method is supported by

    If you create your database table to have an index which requires unique values, then an insert of a duplicate will throw an exception. I don't know whether addBatch will be practical here -- it might throw the exception and not add the rest of the batch, but then again it might not. Of course, adding 1 million items is going to take some time, but the other advantage of the database is that the data is permanently stored in a form that's easy to access in a variety of ways.
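The suggestion above can be sketched in plain SQL (table and column names are assumptions): the unique index makes the database reject duplicates itself, so the application never has to scan the prior rows at all.

```sql
CREATE TABLE seen_items (
    item_key VARCHAR(255) NOT NULL
);

-- Duplicate keys now violate the index instead of silently inserting twice:
CREATE UNIQUE INDEX ux_seen_items ON seen_items (item_key);

INSERT INTO seen_items VALUES ('abc');   -- succeeds
INSERT INTO seen_items VALUES ('abc');   -- fails with a uniqueness violation
```

From JDBC, each failed insert surfaces as a SQLException the loop can catch and ignore, which replaces the "compare against 999,999 items" step entirely.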

  • Best Way to move MB database from single XCH 2010 to 2013 DAG

    Hi,
    We are migrating from a Single Exchange 2010 sp3 server to a 2-node exchange 2013 DAG cluster, located in another AD site.
    I can move a mailbox from 2010 to 2013 without issue, but the WAN link between 2010 and 2013 is very small.
    It will take us too long to move all mailboxes with move-mailbox over the line.
    Is there another way to move the mailboxes or Databases more quickly?
    I was thinking about copying the DB's to a physical disk from 2010 and then move and mount it on 2013. Is that even possible? I explored the possibility to seed the DB to 2013, but don't have the option to do so. 2010 is installed on 2008R2 standard.
We would prefer a short downtime and to move everything over in one go, as opposed to having coexistence for a longer period of time to allow the mailbox moves over the WAN to complete.
    Thanks for any suggestions.

    You can't move the databases from Exchange 2010 to Exchange 2013.
    I can think of three options.
    1.  Move the Exchange 2013 server to the same LAN as the Exchange 2010 server, move the mailboxes, then move the server back.
    2.  Move the Exchange 2010 server to the same LAN as the Exchange 2013 server and move the mailboxes.
    3.  Install a new Exchange 2010 server on the same LAN as the current Exchange 2010 server, create a DAG, add mailbox database copies on the new Exchange 2010 server, move that server to the same LAN as the Exchange 2013 server, activate the databases
    on that LAN, and then move the mailboxes.
    3a.  A variation of #3 is to build the Exchange 2010 server on the same LAN as the Exchange 2013 server and perform a manual seed of the database copies.  This is more trouble than option 3.
    Moving Exchange servers between sites is possible but there are other issues you would need to solve.  You might be best off just getting a temporary increase in your WAN link capacity for the period of the mailbox move.
    Ed Crowley MVP "There are seldom good technological solutions to behavioral problems."

  • Best way to do a database ?

    Hi,
I have an application which sorts through and reads a database. Currently I have it split into small functions which are called for the appropriate database with if statements; I would have it in one large function, but Java ME gives memory errors. Anyway, I was wondering if anyone had a better way of doing it, as I don't like using multiple functions...
This is the basic structure I've done; is there a more efficient way?
public int[][] database = { {0} };

public void getMeADatabase() {
    if (databaseUserWants == 1) {
        getDatabase1();
    } else if (databaseUserWants == 2) {
        getDatabase2();
    }
}

public void getDatabase1() {
    database = new int[][] { {1, 2, 3}, {4, 5, 6} };
}

public void getDatabase2() {
    database = new int[][] { {7, 8, 9}, {10, 11, 12} };
}
Edited by: kob2040 on May 2, 2008 9:41 PM

I am not sure if I set up the linked server correctly. Here is the schema, an attempt to run a select, and the resulting error:
    SELECT  *
    FROM  [CN5E6E9LM2.DATABASE.WINDOWS.NET,1433].[Mkerr_db].dbo.addr
    OLE DB provider "SQLNCLI11" for linked server "CN5E6E9LM2.DATABASE.WINDOWS.NET,1433" returned message "Unspecified error".
    Msg 40515, Level 16, State 2, Line 1
    Reference to database and/or server name in 'Mkerr_db.sys.sp_tables_info_90_rowset_64' is not supported in this version of SQL Server.
    McC

  • Best way to move Oracle database to another machine

    Hello,
I'm trying to find the best solution for this case:
On the first Unix machine I'm running a 10g database. The whole Oracle footprint (binary files + datafiles + indexes + redo) takes about 350 GB on my first storage.
On the second Unix machine (with a second storage) I'd like to run exactly the same instance (the first one is going to become a backup server).
I work in a real OLTP environment, and I can only afford 4-5 hours of downtime (at night) to move from server 1 to server 2.
Could you recommend the best solution for moving the database to no. 2? My problem is that between storage 1 and storage 2 I have only 30 MB/s of bandwidth, and the sequence:
1) instance_on_1: shutdown immediate
2) scp /u01/oracle server2:/u01/oracle
3) instance_on_2: startup
will take about 3-4 hours, leaving me no time for additional tasks.
    Best regards
    R.

Hi, and welcome to the forum. Why not use RMAN's DUPLICATE DATABASE command to duplicate the database without any downtime?
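A hedged sketch of that RMAN route; the database name, connection, and layout are placeholders. Note that active (network) duplication is an 11g feature, so on 10g the duplicate is driven from backups, which must be readable from the second host:

```sql
-- On the second host: start the auxiliary instance NOMOUNT with a minimal
-- pfile, make the latest backups of the source reachable there, then:
-- rman TARGET sys@prod AUXILIARY /

RUN {
  DUPLICATE TARGET DATABASE TO newdb
    NOFILENAMECHECK;    -- same directory layout assumed on both hosts
}
```

Because the duplicate restores from backups while the source stays open, the only remaining downtime is the final switch of the application to the new server, rather than the 3-4 hour scp window.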
