Best Way to Drop a 10g Database

hi experts,
This is 10g on Windows.
I have 3 10g databases on this server and I need to drop and recreate 1 of the databases.
What is the best way to get the cleanest, most thorough deletion?
I'm thinking of doing:
shutdown immediate;
startup mount exclusive restrict;
drop database;
is there a better option?
Thanks, John

No.
Though the "EXCLUSIVE" keyword is no longer required, at least in 11gR1, and perhaps not in your version either.
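Written out in full, the sequence looks like this (a sketch; run in SQL*Plus as SYSDBA against the instance to be removed):

```sql
-- Connect as SYSDBA to the database you want to remove, then:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT EXCLUSIVE RESTRICT;
DROP DATABASE;
```

Note that DROP DATABASE deletes the datafiles, online redo logs, control files, and spfile, but leaves archived logs and RMAN backups behind; clean those up separately (e.g. with RMAN) if you want a truly thorough removal.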

Similar Messages

  • Best way to transfer a 10g database from HP9000 to Linux Redhat?

What is the best way to transfer a 10g database from HP9000 to Linux Redhat?

    Hi Bill,
What is the best way to transfer a 10g database from HP9000 to Linux Redhat?
Define "best"? There are many choices, each with their own benefits . . .
    Fastest?
If you are on an SMP server, parallel CTAS over a database link can move large amounts of tables, fast:
    http://www.dba-oracle.com/t_create_table_select_ctas.htm
I've done 100 gig per hour . . .
    Easiest?
If you are a beginner, data pump is good, and I have some tips on doing it quickly:
    http://www.dba-oracle.com/oracle_tips_load_speed.htm
Also, make sure to check the Linux kernel settings. I query http://www.tpc.org and search for the server type . . .
    The full disclosure reports show optimal kernel settings.
    Finally, don't forget to set direct I/O in Linux:
    http://www.dba-oracle.com/t_linux_disk_i_o.htm
    Hope this helps . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference" http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm
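A parallel CTAS over a database link looks roughly like this (a sketch; the link name, table name, and degree of parallelism are placeholders, not from the original post):

```sql
-- Hypothetical names: big_table, src_link. Tune the degree to your CPU count.
ALTER SESSION ENABLE PARALLEL DML;

CREATE TABLE big_table
  PARALLEL (DEGREE 8)
  NOLOGGING
AS
  SELECT /*+ PARALLEL(s, 8) */ *
  FROM   big_table@src_link s;
```

NOLOGGING skips redo generation for the bulk load, which is a large part of where the speed comes from; take a backup afterwards since the operation is unrecoverable.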

  • What is the best way to import a full database?

    Hello,
    Can anyone tell me, what is the best way to import a full database called test, into an existing database called DEV1?
When importing into an existing database, do you have to drop the users? Say the pinfo and tinfo schemas are already there. Do I have to drop and recreate them, or how does it work when you import a full database?
    Could you please give step by step instructions....
    Thanks a lot...

    Nayab,
http://youngcow.net/doc/oracle10g/backup.102/b14191/rcmdupdb005.htm
A suggestion: please don't use external sites that host Oracle docs, since there is no assurance that they update their content with the latest corrections. You can see the updated part number on the actual doc site from Oracle:
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#i1009381
    Aman....

  • What is the best way to drop and recreate a Primary Key in the Replication Table?

I have a requirement to drop and recreate a primary key in a table which is part of transactional replication. What is the best way to do it, other than removing it from replication and adding it again?
    Thanks
    Swapna

    Hi Swapna,
    Unfortunately you cannot drop columns used in a primary key from articles in transactional replication.  This is covered in
    Make Schema Changes on Publication Databases:
    You cannot drop columns used in a primary key from articles in transactional publications, because they are used by replication.
    You will need to drop the article from the publication, drop and recreate the primary key, and add the article back into the publication.
    To avoid having to send a snapshot down to the subscriber(s), you could specify the option 'replication support only' for the subscription.  This would require the primary key be modified at the subscriber as well prior to adding the article back in
    and should be done during a maintenance window when no activity is occurring on the published tables.
    I suggest testing this out in your test environment first, prior to deploying to production.
    Brandon Williams (blog |
    linkedin)
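The drop/re-add sequence described above looks roughly like this (a sketch with placeholder publication, article, and constraint names; the real sp_addarticle call takes many more parameters than shown):

```sql
-- Placeholder names: MyPub, MyTable, PK_MyTable. Run at the publisher.
EXEC sp_dropsubscription @publication = N'MyPub', @article = N'MyTable',
                         @subscriber  = N'all';
EXEC sp_droparticle      @publication = N'MyPub', @article = N'MyTable';

-- Drop and recreate the primary key on the underlying table.
ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_MyTable;
ALTER TABLE dbo.MyTable ADD  CONSTRAINT PK_MyTable PRIMARY KEY (Id);

-- Add the article back into the publication.
EXEC sp_addarticle @publication = N'MyPub', @article = N'MyTable',
                   @source_object = N'MyTable';
```

As the answer notes, the subscriptions then need to be re-created, with 'replication support only' if you want to avoid pushing a new snapshot.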

  • Best way to load initial TimesTen database

    I have a customer that wants to use TimesTen as a pure in-memory database. This IMDB has about 65 tables some having data upwards of 6 million rows. What is the best way to load this data? There is no cache-connect option being used. I am thinking insert is the only option here. Are there any other options?
thanks

    You can also use the TimesTen ttbulkcp command line utility, this tool is similar to SQL*Loader except it handles both import and export of data.
    For example, the following command loads the rows listed in file foo.dump into a table called foo in database mydb, placing any error messages into the file foo.err.
    ttbulkcp -i -e foo.err dsn=mydb foo foo.dump
    For more information on the ttbulkcp utility you can refer to the Oracle TimesTen API Reference Guide.

  • Best Way to monitor standby, primary databases, including alert logs, etc.

Hi, guys. I finally cut over the new environment to the new Linux Red Hat and everything is working great so far (the primary/standby).
    Now I would like to setup monitoring scripts to monitor it automatically so I can let it run by itself.
    What is the best way?
I talked to another DBA friend outside of the company and he told me his shop does not use any cron jobs to monitor; they use Grid Control.
We have no Grid Control. I would like to see what the best option is here. Should we set up Grid Control?
Also, for the meantime, I would appreciate any good cron job scripts.
    Thanks

    Hello;
I came up with this, which I run on the primary daily. Since it's SQL you can add any extras you need.
    SPOOL OFF
    CLEAR SCREEN
    SPOOL /tmp/quickaudit.lst
    PROMPT
    PROMPT -----------------------------------------------------------------------|
    PROMPT
    SET TERMOUT ON
    SET VERIFY OFF
    SET FEEDBACK ON
    PROMPT
    PROMPT Checking database name and archive mode
    PROMPT
    column NAME format A9
    column LOG_MODE format A12
    SELECT NAME,CREATED, LOG_MODE FROM V$DATABASE;
    PROMPT
    PROMPT -----------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking Tablespace name and status
    PROMPT
    column TABLESPACE_NAME format a30
    column STATUS format a10
    set pagesize 400
    SELECT TABLESPACE_NAME, STATUS FROM DBA_TABLESPACES;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking free space in tablespaces
    PROMPT
    column tablespace_name format a30
    SELECT tablespace_name ,sum(bytes)/1024/1024 "MB Free" FROM dba_free_space WHERE
    tablespace_name <>'TEMP' GROUP BY tablespace_name;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking freespace by tablespace
    PROMPT
    column dummy noprint
    column  pct_used format 999.9       heading "%|Used"
    column  name    format a16      heading "Tablespace Name"
    column  bytes   format 9,999,999,999,999    heading "Total Bytes"
    column  used    format 99,999,999,999   heading "Used"
    column  free    format 999,999,999,999  heading "Free"
    break   on report
    compute sum of bytes on report
    compute sum of free on report
    compute sum of used on report
    set linesize 132
    set termout off
    select a.tablespace_name                                              name,
           b.tablespace_name                                              dummy,
           sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )      bytes,
           sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id ) -
           sum(a.bytes)/count( distinct b.file_id )              used,
           sum(a.bytes)/count( distinct b.file_id )                       free,
           100 * ( (sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )) -
                   (sum(a.bytes)/count( distinct b.file_id ) )) /
           (sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )) pct_used
    from sys.dba_free_space a, sys.dba_data_files b
    where a.tablespace_name = b.tablespace_name
    group by a.tablespace_name, b.tablespace_name;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking Size and usage in GB of Flash Recovery Area
    PROMPT
    SELECT
      ROUND((A.SPACE_LIMIT / 1024 / 1024 / 1024), 2) AS FLASH_IN_GB,
      ROUND((A.SPACE_USED / 1024 / 1024 / 1024), 2) AS FLASH_USED_IN_GB,
      ROUND((A.SPACE_RECLAIMABLE / 1024 / 1024 / 1024), 2) AS FLASH_RECLAIMABLE_GB,
      SUM(B.PERCENT_SPACE_USED)  AS PERCENT_OF_SPACE_USED
    FROM
      V$RECOVERY_FILE_DEST A,
      V$FLASH_RECOVERY_AREA_USAGE B
    GROUP BY
      SPACE_LIMIT,
      SPACE_USED ,
      SPACE_RECLAIMABLE ;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking free space In Flash Recovery Area
    PROMPT
    column FILE_TYPE format a20
    select * from v$flash_recovery_area_usage;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking last sequence in v$archived_log
    PROMPT
    clear screen
    set linesize 100
    column STANDBY format a20
    column applied format a10
    --select max(sequence#), applied from v$archived_log where applied = 'YES' group by applied;
    SELECT  name as STANDBY, SEQUENCE#, applied, completion_time from v$archived_log WHERE  DEST_ID = 2 AND NEXT_TIME > SYSDATE -1;
    prompt
    prompt----------------Last log on Primary--------------------------------------|
    prompt
    select max(sequence#) from v$archived_log where NEXT_TIME > sysdate -1;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking switchover status
    PROMPT
select switchover_status from v$database;
I run it from a shell script and email myself quickaudit.lst.
    Alert logs are great source of information when you have an issue or just want to check something.
    Best Regards
    mseberg

  • Best way to deploy a new database

What is the best way to deploy a database for a user base that mostly doesn't understand how to use SQL-based DB products, or only has some understanding?
    I'm current working on a setup utility for my desktop application, which uses MySQL, right now I'm at a design issue where I'm not sure how to get the database deployed.
I have a creation script for deploying the database, but I'm not sure whether to create a default user and assign the rights to that user, or make the user customizable [which starts to branch off way too much]. The desktop application does have an option of using an already deployed database elsewhere or creating one locally.
Does anyone have suggestions for deploying databases for desktop applications? I know that Derby is a great solution for this; however, it is not nearly powerful enough to handle what I need a database for [a large amount of transactions and comparisons, really quickly]. I have also been unable to find information on this.

    are you talking about creating another copy of your existing DB on the same server ??
    (Just wanna confirm as your last line seems to contradict with this?!?)
    2 ways:
    Go for RESTORE database with the Source option pointing to one of the existing databases.
    Go for COPY DATABASE (useful when the copy of the db is to be put in another server maybe..)
    Note that you would need backup of the existing DB to proceed..
    Thanks, Jay <If the post was helpful mark as 'Helpful and if the post answered your query, mark as 'Answered'>

  • Best Way to Drop Large Clob Column?

    I have a very large partitioned table that contains XML documents stored in a clob column. Aside from the clob column there are several varchar and numeric columns in the table that are related to each document. We have decided to move the XML out of Oracle and into text files on the OS but want to keep the other data in Oracle. Each partition has a tablespace for the clob column and a tablespace for the other columns.
    What is the best (quickest/most efficient) way to drop the clob column and free up the space that it is currently using?
    OS: HP-UX
    Oracle: 11.2.0.3
    Table Partitions: 27
    Table Rows: 550,000,000
    Table Size: around 15 TB with 95% of that found in the column to drop
    One other wrinkle, there are several tables that have a foreign key relationship back to the primary key of the table in question. Three of those tables are multi-billion rows in size.

    Hi,
You can mark the column unused, then use the CHECKPOINT clause when dropping the unused columns.
Please visit the link; it may help you:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:623063677753
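The unused-column approach looks like this (a sketch; the table and column names are placeholders for the poster's actual objects, and the checkpoint interval is illustrative):

```sql
-- Mark the CLOB column unused: a fast, metadata-only operation.
ALTER TABLE big_table SET UNUSED (xml_doc);

-- Physically remove the unused column, checkpointing every 10,000 rows
-- so undo usage stays bounded on a 550M-row table.
ALTER TABLE big_table DROP UNUSED COLUMNS CHECKPOINT 10000;
```

Note that dropping the column frees space within the segment but does not shrink the tablespace by itself; with a dedicated CLOB tablespace per partition, it may be simpler to drop the tablespaces once the column is gone.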

  • Best way to check whether the database is demo or sys?

    Hi Gurus,
    Whats the best way to check whether the installed peoplesoft database is a demo or a sys?
    Thanks for the help?
    Regards,
    Anoop

    There is nothing set by default.
    However, if it has been configured properly by the administrator after db creation, through the menu Peopletools>Utilities>Administration>Peopletools Options, the following query should return the type of the database :
select systemtype from psoptions;
Otherwise, the following could help you understand what database you are on.
    From HRMS9.1 DMO database :
    SQL> select count(*) from ps_employees;

      COUNT(*)
    ----------
          2792

    From HRMS9.1 SYS database :

    SQL> select count(*) from ps_employees;

      COUNT(*)
    ----------
             0

    Nicolas.

  • Oracle 10.1, What's the best way to load XML in the database?

    Hi All,
    I am a typical Oracle developer. I know some Java and some XML technologies, but not an expert.
    I will be receiving XML files from some system, which will be,
    - of reasonable size like 2 to 15 MBs
    - of reasonable complexity, like the root element have children, grand-children and great-grand-children, with attributes and all
    - Every day it needs to be loaded to Oracle database, in relational format
    - we need not update the XML data, only put the XML data in relational table
    My questions are,
    - With Oracle 10.1, XML DB, what is the best way to load this XML file to relational Oracle tables ?
    - What can be the steps in this process ?
    - In the documentation, I was lost and was not able to decide anything concrete
    - If I write a pure Java program with SAX API and load the data to Oracle database in same program, is it a good idea?
    - Is there any pure Oracle based way to do this?
    - If I want to avoid using DOM parser, as it will need more JAVA_POOL_SIZE, what can be the way ?
    Please help.
    Thanks

Many customers solve this problem by registering an XML Schema that corresponds to their XML and then creating relational views over the XML that allow them to access the content in a relational manner. They then use insert-as-select operations on the relational views to transfer data from the XML into relational tables where necessary. There are a large number of threads in this forum with detailed examples of how this can be done. Most of the customers who have adopted this approach have found that it is the least complex approach in terms of code to be developed and maintained, and it offers acceptable performance.
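A minimal sketch of the view-based approach (the table, view, element, and column names here are invented for illustration, not from the thread):

```sql
-- Hypothetical staging table holding the incoming documents.
CREATE TABLE xml_stage (doc XMLTYPE);

-- Relational view shredding the XML with XMLTABLE.
CREATE OR REPLACE VIEW orders_v AS
SELECT x.order_id, x.customer, x.amount
FROM   xml_stage s,
       XMLTABLE('/Orders/Order' PASSING s.doc
                COLUMNS order_id NUMBER        PATH '@id',
                        customer VARCHAR2(100) PATH 'Customer',
                        amount   NUMBER        PATH 'Amount') x;

-- Insert-as-select from the view into the real relational table.
INSERT INTO orders (order_id, customer, amount)
  SELECT order_id, customer, amount FROM orders_v;
```

This keeps all the shredding logic in SQL, so no Java SAX/DOM code needs to be written or maintained.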

  • Best Way to Replicate Azure SQL Databases to Lower Environments

    I have XML files delivered to my server where they are parsed into reference data and written to a database (Premium tier).  I want to have that database sync to other databases (Basic tier) so that my non-production environments can use the same reference
    data.
    I tried Data Sync and it seems incredibly slow.  Is Azure Data Sync the best way?  What are my other options?  I don't really want to change my parser to write to 3 different databases each time they receive an updated XML file, but I suppose
    that is an option.

    Greg,
Data Sync is one of the options, but I wouldn't recommend it, as the Data Sync Service is going to be deprecated in the near future. I would urge you to look at the options around geo-replication. There are 3 versions of geo-replication, and I believe active geo-replication would suit your requirement; however, the copy of the database which is in sync will also have to be in the same service tier (Basic is not possible). With the current Azure offering, it is not possible to have a synced copy of a database with different SLOs. I would also recommend you open a support incident with Microsoft to understand the different options for geo-replication. Throughout the time I was composing my answer I was keeping DR (disaster recovery) in mind; if I have misunderstood, please let me know.
    -Karthik Krishnamurthy (SQK Azure KKB)

  • What is the best way to drop out a background to white?

I have several quite old architects' drawings to reproduce. The problem is that with age, the original paper has yellowed and foxed. Can anyone suggest a good way of dropping out the background to white so that the image will look nice and clean when I print it, please? Normally I would bring the white point slider up in Levels and sort of burn it out, but I was wondering if there is a more subtle method of doing this. The trouble is I don't want to lose detail in the fine pencil lines at the same time.
    Thanks everyone - Brian.

    Thanks for that, I'll give your method a try.
No no, I don't mean an insult - you read it wrong! That's the last thing I would do when asking for help... No, I mean that just 'turning up the white level' is crude, in that it looks like it's been blasted by an atom bomb and looks awful, and I was wondering if a more scientific approach might not be applicable. For example, I've tried putting colour sampler points in the image and then independently adjusting the R, G, and B levels etc. until they match at 255 each, but it still looks blown out. That's what I meant by being more 'scientific', i.e. perhaps using some of Photoshop's ability to measure colours to enable me to drop them out, for example.
    I found a previous thread along the same lines as this and one suggestion was to use some special filters, but they turned out to be mac only.
    The 'forensic' reference was to the Color Deconvolution filter, it was designed for police forensic departments as an aid to help spot where two different colours of ink have been used on a document, for example.
    By the way, the drawings I'm working on are not all B/W only, some have colour washes on them too...
    Perhaps the only way is to try and select the background areas first and then to drop them out.

  • The best way to populate a secondary database

    I'm trying to create a secondary database over an existing primary database. I've looked over the GSG and Collections docs and haven't found an example that explicitly populates a secondary database.
    The one thing I did find was setAutoPopulate(true) on the SecondaryConfig.
    Is this the only way to get a secondary database populated from a primary? Or is there another way to achieve this?
    Thanks

However, after primary and secondary are in sync, going forward, I'm unsure of the mechanics of how to automagically ensure that updates to the primary db are reflected in the secondary db.
I'm sorry, I misunderstood your question earlier.
Does JE take care of updating the secondary db in such cases (provided both DBs are open)? In other words, if I have a Map open on the primary and do a put(), I can turn around and query the secondary (with the apt key) and I should be able to retrieve the record I just put into the primary?
Yes, JE maintains the secondaries automatically. The only requirement is that you always keep the secondary open while writing to the primary. JE uses your SecondaryKeyCreator implementation (you pass this object to SecondaryConfig.setKeyCreator when opening the secondary) to extract the secondary keys from the primary record, and automatically inserts, updates and deletes records in the secondary databases as necessary.
    For the base API and collections API, JE does not persistently store the association between primaries and secondaries, so you must always open your secondary databases explicitly after opening your primary databases. For the DPL API (persist package), JE maintains the relationship persistently, so you don't have to always open the secondary indexes explicitly.
I couldn't find an example illustrating this (nice) feature - hence the questions.
For the collections API (I see you're using the collections API):
    http://www.oracle.com/technology/documentation/berkeley-db/je/collections/tutorial/UsingSecondaries.html
    In the examples directory:
    examples/collections/* -- all but the basic example use secondaries
    Mark

  • Best way to drop standby database

    Hi
    Oracle RDBMS 11.2.0.2 on RHEL 5.6.
I need to drop the standby database completely and rebuild the physical standby for the same database. We identified that there are a lot of inconsistencies between primary and standby. How do I remove the standby? And what is the best procedure?
    Thanks

    951368 wrote:
    Hi
    Oracle RDBMS 11.2.0.2 on RHEL 5.6.
I need to drop the standby database completely and rebuild the physical standby for the same database. We identified that there are a lot of inconsistencies between primary and standby. How do I remove the standby? And what is the best procedure?
Thanks
Follow this: *Step By Step How to Recreate Dataguard Broker Configuration [ID 808783.1]*

  • What is the best way to connect to the Database?

    I just have a question regarding to the connection to the Oracle DB.
    Every time I create a new JSP I am writing java code such as:
    Class.forName("oracle.jdbc.driver.OracleDriver").newInstance();
    conn = DriverManager.getConnection("jdbc:oracle:thin:@host:1521:DB",
    "user",
    "pwd");
    Is there a way I can reuse the connection that we create on JDeveloper on the tab Connections, under the Database node ?
    Because I would like to centralized more the way how I connect to DB.
    Thanks!
    Giovani

That is a nice solution, but it only works if you use the embedded OC4J; if not, you must define datasources yourself, maybe in standalone OC4J, OAS, JBoss, etc.
This is because the embedded OC4J automatically creates datasources if it sees database connections created through the Connections tab.
