Moving the Datafile in Dataguard Setup.

Hi DBAs,
I have set up Data Guard, single instance to single instance, with Oracle 10.2.0.4 on RHEL 5, using local storage for datafiles. I created one datafile belonging to the index tablespace (which already has 7 other datafiles) in the wrong location, /u01, instead of the /datafiles/docmrep file system. Is there any way to move/rename the datafile to the right file system without an outage? The same problem exists on the Data Guard side; I need to move the datafile on the standby as well.
Please suggest/help.
Regards
-Samar-

Datafile information is stored in the controlfile, so when you move a datafile you just need to update that information in the controlfile.
Do the following on both the primary and the standby.
You can do this while the database is in mount stage or open mode; mount mode is the better choice.
Perform the following operations on the primary:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
Move the file at the OS level from location1 to location2, then rename the file at the database level:
SQL> ALTER DATABASE RENAME FILE 'LOCATION1' TO 'LOCATION2';
This completes the activity on the primary.
Repeat the above operations on the standby; no extra recovery is needed.
If you don't want to shut down the primary, you can do it by taking the tablespace offline instead.
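A sketch of the no-shutdown variant (tablespace and file names below are placeholders, not taken from the post; test on a non-production system first):

```sql
-- Take only the affected tablespace offline (database stays open)
ALTER TABLESPACE index_ts OFFLINE NORMAL;

-- At the OS level, copy the file to the new location, e.g.:
--   cp /u01/index08.dbf /datafiles/docmrep/index08.dbf

-- Point the controlfile at the new location
ALTER DATABASE RENAME FILE '/u01/index08.dbf'
  TO '/datafiles/docmrep/index08.dbf';

-- Bring the tablespace back online and verify before removing the old copy
ALTER TABLESPACE index_ts ONLINE;
SELECT file_name, status FROM dba_data_files
 WHERE tablespace_name = 'INDEX_TS';
```

The same rename has to be repeated on the standby controlfile, as described above.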

Similar Messages

  • Change the maxsize of the datafiles in DataGuard 10gR2

    We have the DataGuard 10gR2, 1 is primary and other is physical standby in production.
    My questions:
    1/ When the maxsize of a datafile is modified (increased or decreased) on the primary, does the change propagate automatically to the standby? Note: standby_file_management = AUTO is set in the standby's init.ora.
    If it does not change on the standby, what can we do to change it there?
    2/ We want to resize a datafile on the primary using the command below; does that change propagate automatically to the standby?
    If it does not change on the standby, what can we do to change it there?
    ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' RESIZE 100M;
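    For reference, a sketch of the two kinds of change being asked about (the path is the example one from the post; whether each replicates automatically is exactly the question, so verify on the standby by querying v$datafile afterwards):

    ```sql
    -- Change the autoextend ceiling (MAXSIZE) of a datafile
    ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf'
      AUTOEXTEND ON NEXT 10M MAXSIZE 2G;

    -- Resize the datafile itself
    ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' RESIZE 100M;

    -- On the standby, check what actually arrived
    SELECT name, bytes FROM v$datafile WHERE name LIKE '%stuff01%';
    ```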

    Pl do not post duplicates - Change the maxsize of the datafiles in DataGuard 10gR2

  • How to reclaim disk space in HP-UX after moving the dbf file to some other

    Hi All,
    I have moved datafiles from one mount point to another by:
    1) taking the tablespace offline
    2) moving the files with an OS command
    3) bringing the tablespace back online
    But the space is still not being released from the mount point where the files were before.
    The OS I am using is HP-UX.
    Can anyone please help in this regard?
    Thanks
    Abhinav

    Laura Gaigala wrote:
    This is not actually an HP-UX forum, but if you used cp instead of mv (first copy the file to the new directory, then bring the tablespace online, and only after that delete the old file), then the space would probably be reclaimed.
    But in your case you would have to restart the database before all the space is freed, because right now the database is keeping the inode open.
    I could be wrong in this explanation; I am not any kind of HP-UX expert.

    Agree with this.
    It is smarter to keep the original and see if everything works OK after the copy, then delete the old file.
    It is a kind of "normal" behaviour (I have seen it on HP-UX before) that the file's space stays used, or is not released, until the database is bounced.
    It even saved me once, when a colleague accidentally deleted a datafile; I was still able to export the data out of it.
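    The inode behaviour described above can be demonstrated with a small shell sketch (generic Unix commands, not HP-UX specific; paths are temporary ones created for the demo):

    ```shell
    #!/bin/sh
    # Demonstrate that a deleted file's space is only freed once the
    # last process holding it open closes its file descriptor.
    tmpdir=$(mktemp -d)
    f="$tmpdir/datafile.dbf"

    dd if=/dev/zero of="$f" bs=1024 count=10 2>/dev/null

    # Hold the file open (as the database would), then delete it
    exec 3<"$f"
    rm "$f"

    # The directory entry is gone, but the blocks are not yet released;
    # on Linux, lsof would show the file as "(deleted)" at this point.
    ls "$f" 2>/dev/null || echo "directory entry gone"

    # Closing the descriptor (analogous to bouncing the database)
    # finally releases the space.
    exec 3<&-
    echo "descriptor closed, space reclaimed"
    rm -rf "$tmpdir"
    ```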

  • [svn:osmf:] 13121: Moving the setup for the default control bar into a DefaultControlBar class .

    Revision: 13121
    Author:   [email protected]
    Date:     2009-12-21 11:53:07 -0800 (Mon, 21 Dec 2009)
    Log Message:
    Moving the setup for the default control bar into a DefaultControlBar class. Adding constants for the widget names, and updating WebPlayer accordingly.
    Modified Paths:
        osmf/trunk/apps/samples/framework/WebPlayer/src/WebPlayer.as
        osmf/trunk/libs/ChromeLibrary/.flexLibProperties
        osmf/trunk/libs/ChromeLibrary/src/org/osmf/chrome/controlbar/ControlBar.as
    Added Paths:
        osmf/trunk/libs/ChromeLibrary/src/org/osmf/chrome/controlbar/DefaultControlBar.as

  • Dataguard setup using hot backup files

    Hello,
    I am planning to set up Data Guard using hot backup files (not RMAN) from the primary instance. I have a few doubts: how shall I recover the standby database using the archive logs generated while taking the hot backup of the primary? Shall I directly execute the "alter database recover managed standby database" command? Will this command also take care of the archives generated during the hot backup?
    Appreciate any help on the above.
    DB Version:10.2.0.4
    Regards.

    Guys, by using the procedure below you can rebuild your Data Guard setup from a manual hot backup.
    1) On the primary database, defer the archival of redo data:
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;
    2) Put the primary database in backup mode.
    Copy all the datafiles from the primary database to the standby (you can copy them to another location on the primary server, tar and gzip them, send the archive to the standby, and extract it there).
    Once you have copied the datafiles to another location on the primary server, go to that location and issue the command below to tar and gzip them:
    tar cvf - . | gzip -c > /dump/backup/drdb_backup.tar.gz
    Take the primary database out of backup mode (end backup).
    3) Create a standby controlfile with the command below and send it to the standby server:
    alter database create standby controlfile as '/dump/drbackup/2standby.ctl';
    4) Copy all the archive logs generated since the begin backup from the primary server to the standby's log_archive_dest location.
    5) On the standby, once all datafiles and controlfiles are in place, use the commands below:
    export ORACLE_SID=standbydb
    SQL> startup nomount;
    SQL> alter database mount standby database;
    SQL> recover standby database;
    --AUTO
    SQL> alter database open read only;
    SQL> shut immediate
    SQL> startup nomount;
    SQL> alter database mount standby database;
    SQL> alter database open read only;
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    6) On the primary database, issue the following statement to re-enable archiving to the physical standby database:
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
    Thanks & Regards,
    Satish Kumar Sadhu.
    Edited by: Satish Kumar Sadhu on Apr 10, 2013 11:05 PM
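    The tar-and-gzip transfer in step 2 can be sketched generically; the directories below are temporary stand-ins for the staging directory and the /dump/backup archive from the post:

    ```shell
    #!/bin/sh
    # Archive a datafile staging directory and verify the archive is
    # readable before shipping it to the standby host.
    src_dir=$(mktemp -d)          # stands in for the copied-datafiles dir
    archive=$(mktemp -u).tar.gz   # stands in for /dump/backup/drdb_backup.tar.gz

    echo "dummy datafile" > "$src_dir/system01.dbf"

    # Same pattern as the post: tar the current directory, pipe to gzip
    ( cd "$src_dir" && tar cf - . | gzip -c > "$archive" )

    # Verify the archive before transferring it, e.g. with:
    #   scp "$archive" standbyhost:/dump/backup/
    if gzip -dc "$archive" | tar tf - >/dev/null; then
        echo "archive OK"
    fi

    rm -rf "$src_dir" "$archive"
    ```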

  • Can't delete primary zone in DNS after moving the server

    Woe is me!
    Our Mac mini was hosted at a colo site and working fine. There was no firewall in front of the machine, so we turned on the server firewall and only allowed mail, web, ftp, and a couple of other services. This worked great using our external public DNS wired to our domain names and public fixed IP address. Later, we got VPN up and running (the trick was to create a second, local IP address for the ethernet port), but this also required us to turn on the server's DNS to create a split-brained DNS server.
    Everything was working swimmingly... and then we had a hard drive crash. Since we were thinking about moving the server onsite anyway (our POS system was accessed through the VPN, but it could be slow and made our tasting room dependent on Internet access in order to run the POS), we ordered Comcast business class internet with a fixed IP address.
    We updated the external public DNS to the new public fixed IP. Rather than plugging the mini directly into the Comcast router (which is in pass-through mode), we elected to put an AirPort Extreme in front of it, mainly so we could get all of the POS computers on the same local network without using the mini as a DHCP/NAT router. We created a DHCP reservation on the Extreme so that the mini had a fixed local IP address. We port forwarded everything we wanted to expose to the Internet. Email started to work again. However, web services and VPN are nada.
    This being Snow Leopard Server and having spent literally hours debugging DNS issues when we first got the server, I knew it wouldn't be straightforward. And it hasn't been. Even changing the IP address of the server has been a chore.
    We ran "sudo changeip <old IP address> <new IP address>".
    Then we ran "sudo changeip -checkhostname" and received:
    "$ sudo changeip -checkhostname
    Primary address     = 10.0.8.2 <new static internal IP address>
    Current HostName    = <servername>.<domainname>.com
    The DNS hostname is not available, please repair DNS and re-run this tool.
    dirserv:success = "success""
    Oh no, the black pit of death.
    Even though I tried to modify the machine record in the local DNS to reflect the new internal static IP address, Nada.
    So, looking back on my previous research from Mr Hoffman and others, I stopped the DNS service, and I deleted the primary zone and reverse lookups in order to rebuild them from scratch. Except that no matter what I do, I can't delete the primary zone - it comes back like Dracula (even though the reverse zone and all of the zone records are gone). I tried rebuilding everything using the undeletable zone, but after a few services (saved each one separately), they would suddenly disappear.
    I am leery of messing with the DNS files on the server as I don't want to hose up Server Admin (my command line skills are rudimentary and slow). I have so much installed on the machine now that I am concerned about someone saying "reinstall".
    Help!
    Related to this is that it is not clear to me in web services which IP address you should use for the sites. The internal IP? The public IP? I thought Apache cared about the external IP address. And I think Apache is hosed at the moment due to my DNS troubles anyway.
    Thanks in advance!

    Morris Zwick wrote:
    And does anyone know which IP you enter for your sites in the web service? The public static IP or the internal private static IP?
    For the external DNS server, I am sure you have already deduced that it should be the static IP issued to you by Comcast, and this will be forwarded by your router to your server.
    For your internal DNS server you could use either the internal LAN IP or the external IP, although the latter might be affected by your firewall, so you will need to test this.
    For the Web Server service in Server Admin, if you're only running a single website you could avoid the issue by just using the wildcard entry, which will respond to any IP address: an empty host name and an IP address of *.
    In fact you don't have to specify an IP address; you could just use the hostname, so it will listen to traffic arriving at your server addressed to any IP address, and as long as the URL that was requested includes the hostname you define for the site it will get responded to. So if, as an example, you have two websites you want to serve:
    www.example.com
    site2.example.com
    then as long as both have the site's IP address set to * (asterisk), both should work as separate sites for traffic addressed to either the LAN or WAN IP address of the server.
    You will still need two IP addresses on the server to enable VPN; you could use a USB Ethernet adapter for the second one. Port forwarding for VPN is not as simple as other traffic, as VPN requires traffic different from the standard IP and UDP packets. Routers that support 'VPN Passthrough' are specifically designed to accommodate this, but I don't know if the AirPort Extreme does. I have also found PPTP copes better with this sort of setup than L2TP, although PPTP is generally regarded as less secure.

  • How to defragment the datafile hwm in EBS R12 database

    Hi All,
    We are on a 12.0.4 E-Business Suite instance on an 11gR2 database.
    We have deleted (purged) some EGO data and reclaimed a large amount of space in dba_segments (around 2.5 GB). After the purging activity we got:
    SQL> select sum(bytes/1024/1024/1024) from dba_segments;
    SUM(BYTES/1024/1024/1024)
    734.867561
    SQL>
    SQL> select sum(bytes/1024/1024/1024) from dba_data_files;
    SUM(BYTES/1024/1024/1024)
    2456.70493
    SQL>
    But at the datafile level the HWM has not come down. I checked by moving the big tables, but even so I am not getting the space back at the datafile level. I need to resize my datafiles down to 1 TB so I can take a backup and clone to a target which has only 1 TB of space.
    For example, in APPS_TS_TX_DATA we have only 243 GB of segments but the datafiles total about 1000 GB; we have to reduce the datafile size to around 300 GB. How do we do it?
    =======
    SQL> select sum(bytes/1024/1024/1024) from dba_segments where tablespace_name='APPS_TS_TX_DATA';
    SUM(BYTES/1024/1024/1024)
    243.981201
    SQL> select sum(bytes/1024/1024/1024) from dba_data_files where tablespace_name='APPS_TS_TX_DATA';
    SUM(BYTES/1024/1024/1024)
    1070.2343
    SQL>
    ==========
    I thought of creating one 300 GB tablespace, moving all objects into the new tablespace, dropping the old tablespace, and renaming the new one to 'APPS_TS_TX_DATA', but we have the object mix shown below. Please guide me on the best method of doing this and reducing my database size to 1 TB, so that I can accomplish my task.
    ====
    SQL> select DISTINCT SEGMENT_TYPE,count(*) FROM DBA_SEGMENTS where tablespace_name='APPS_TS_TX_DATA' group by SEGMENT_TYPE;
    SEGMENT_TYPE COUNT(*)
    INDEX 275
    INDEX PARTITION 509
    INDEX SUBPARTITION 96
    LOB PARTITION 8
    LOB SUBPARTITION 96
    LOBINDEX 460
    LOBSEGMENT 460
    TABLE 14615
    TABLE PARTITION 2079
    TABLE SUBPARTITION 96
    10 rows selected.
    ====
    Thanks in advance..

    Please see these docs.
    How to Reorganize INV Schema / Reclaim the High Watermark [ID 555058.1]     
    Optimizing Database disk space using Alter table shrink space/move compress [ID 1173241.1]
    Why is no space released after an ALTER TABLE ... SHRINK? [ID 820043.1]
    Various Aspects of Fragmentation [ID 186826.1]
    Thanks,
    Hussein
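    As a rough illustration of the move-and-resize approach discussed in those notes (the table, index, and file names below are placeholders; LOB and partitioned segments need the additional syntax covered in the notes, and moved tables invalidate their indexes):

    ```sql
    -- Move a heap table, rebuilding it below the HWM in a new tablespace
    ALTER TABLE apps.some_table MOVE TABLESPACE apps_ts_tx_data_new;

    -- Indexes on a moved table become UNUSABLE and must be rebuilt
    ALTER INDEX apps.some_table_idx REBUILD TABLESPACE apps_ts_tx_data_new;

    -- Once a datafile is empty below its HWM, it can be shrunk
    ALTER DATABASE DATAFILE '/u01/oradata/PROD/apps_ts_tx_data01.dbf'
      RESIZE 10G;
    ```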

  • 'alter database open resetlogs' didn't reset one of the datafiles

    I've spent the last three and a half weeks recovering an oracle database (11g 64-bit linux) because of a corrupt block in an online redo log (which I thought was being written to multiple locations). I restored the files, moving some of them around in the process; recovered to the latest possible point; moved files back to their proper location; ran 'alter database open resetlogs'; and one of the datafiles (from a bigfile tablespace) didn't get reset. I checked afterward, and it was marked offline. I do not remember placing the file offline, and cannot find such a statement in my last 300 sqlplus commands, which includes commands well before I renamed this file and the commands surrounding the rename.
    Restoring/recovering the database again will take too long, and is a remarkably poor option. Even if the database had opened correctly, the affected tablespace would not have been touched in the two or three minutes the database was open. Is there any way to force oracle to reset the logs again or otherwise fix this one file to mark it with the same date? Only allowing the resetlogs option after an incomplete recovery seems a poor restriction, especially, if files can slip through like this. I'm suspecting there is someway to just fix the checkpoint values for the tablespace, but I don't know where to begin. This particular file is <5% of the database, so if I have to do some sort of backup/restore with just it, that is probably doable.

    0: 11.1.0.6.0 on SUSE Linux Enterprise Server 10 SP2
    1: rman
    backup format '/opt/oracle/backup/mydatabase_%Y-%M-%D_%s_datafiles_%p' (database);
    backup format '/opt/oracle/backup/mydatabase_%Y-%M-%D_%s_archivelogs_%p' archivelog all delete input;
    backup format '/opt/oracle/backup/mydatabase_%Y-%M-%D_%s_control_%p' current controlfile spfile;
    2:
    restore database; --not sure what datafiles were restored with this
    restore datafile X; --several files were restored individually
    recover database until scn 1137554504; -- I verified that all datafiles were on the same checkpoint after this finished. Not having placed any files offline, I didn't bother checking that.
    3:
    SQL> alter database open resetlogs;
    Database altered.
    Elapsed: 00:04:20.34
    SQL> quit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    4: Nothing in the tablespace has been touched since I ran 'alter database open resetlogs;'. It also appears that oracle placed the file offline (without me telling it to do so) and left it that way through the resetlogs, leaving the tablespace unusable during the time it was opened. The only things that would be out of date are the 'RESETLOGS_CHANGE#', the 'CHECKPOINT_CHANGE#', and associated values. It's still at the last scn before the resetlogs, and the system has been in archivelog mode the entire time. This is all information that Oracle could be tracking, and from a program logic standpoint there is no reason why Oracle cannot tie together the changes before the resetlogs, the resetlogs command and the changes after the resetlogs into a new, continuous string of changes. I assume there is some such feature in a high-caliber program because I'm actually a programmer (who would have included such advanced tracking features), and I've become a DBA out of necessity. I admit to not knowing all of the oracle DBA commands, hence me posting here before doing the work of submitting a request to metalink.
    5: I consider it a poor restriction because it doesn't always reset the logs on all files, and as far as my knowledge goes it has rendered my 3.5 week recovery process WORTHLESS. I suppose it could cause numerous errors, especially if the database wasn't cleanly shut down, but having the ability to do something equivalent to datafiles that oracle skipped the process on seems quite useful in my situation. I guess the more fundamental problem to complain about is that it would apply such changes to only some of the files, while leaving others unusable, instead of just giving me an error that some files weren't going to be reset, but I think I'm done venting my Oracle frustrations for now.
    Am I stuck with a tablespace that I cannot bring online with the database open, or is there some sort of 'alter database datafile' command (or anything else) that I know nothing of that will fix the straggling file?
    Edited by: jbo5112 on Oct 5, 2009 3:33 PM -- obfuscated some file names to secure identity.
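    For what it's worth, a sketch of the commands usually tried first for a straggling offline datafile; whether they succeed depends entirely on the recovery state, so check v$datafile and v$recover_file and take a backup before experimenting (the file number 42 below is a placeholder):

    ```sql
    -- See which file is offline and what recovery it thinks it needs
    SELECT file#, status, checkpoint_change#
      FROM v$datafile WHERE status = 'OFFLINE';
    SELECT * FROM v$recover_file;

    -- Apply any needed redo to just that file, then bring it online
    RECOVER DATAFILE 42;
    ALTER DATABASE DATAFILE 42 ONLINE;
    ```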

  • Moving Exchange 2010 Production DAG Setup and Servers to a Different Data Center with New IP Addresses

    My Setup
    Exchange 2010 SP3 2 Mailbox, 2 Hub/CAS, 2 TMG Servers in Production
    Exchange 2010 SP3 2 Mailbox, 2 Hub/CAS, 2 TMG Servers in DR Site
    Exchange 2010 DAG implemented with site level redundancy
    Requirement
    We need to move the production Exchange setup, along with the DAG configuration, to a new location.
    IP addresses for the servers and the DAG will change.
    There is no change in server names or DNS names.
    During the production server move, mail services will be activated from the DR site.
    We need help in planning and executing the IP address change/DAG change and server move.
    Kumar K S

    Hi,
    I would plan it as follows:
    1. Activate the DR site as the main site during the migration.
    2. Move the servers to the new site.
    3. Switch activation back to the production servers.
    https://social.technet.microsoft.com/Forums/exchange/en-US/fb9a27c3-81f8-4079-aeb8-42119b1e23bf/changing-ip-address-of-exchange-server
    Thanks,
    Simon Wu
    TechNet Community Support

  • Oracle 10g Dataguard Setup

    Hi Guys,
    I am planning to have Dataguard setup for my single instance database in HP-UNIX.
    Please let me know the pre-requistes for this setup. I currently have Oracle enterprise edition installed. Do i need any specific application, tool or software installed for this, etc ?
    Also, help me out with the configuration steps.
    Thanks.

    user9098221 wrote:
    Hi,
    Thanks. Apart from RMAN, do I need to have anything else installed for the setup?
    Also, if I choose the manual method (using SQL), do I need to install anything?
    I understand these are basics. Please bear with me.
    Thanks.

    Hi,
    On the standby side you need the following steps:
    1. Install the Oracle Database Enterprise Edition software only (same version, same patchset).
    2. Add a listener, and add the standby service name.
    3. Create the folders for datafiles, flash recovery area and diagnostics.
    No additional tools are needed.
    Regards
    Mahir M. Quluzade
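    A minimal sketch of step 2 on the standby side; the host name, SID, and Oracle home path below are placeholders, not values from the thread:

    ```
    # listener.ora on the standby host
    LISTENER =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = stbyhost)(PORT = 1521)))

    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (GLOBAL_DBNAME = STBY)
          (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
          (SID_NAME = STBY)))

    # tnsnames.ora entry used by the primary to ship redo
    STBY =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = stbyhost)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = STBY)))
    ```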

  • Drop datafile in dataguard

    Hello
    Oracle 10.2.0.1
    I have dataguard configuration with standby file management auto.
    When I add a datafile, it is automatically added to standby database.
    However, when I drop a datafile, it is dropped from the primary database but not from the standby database.
    What is the reason for this?

    Also refer to the Oracle Documentation,
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3002.htm
    As in the documentation,
    "DROP Clause
    Specify DROP to drop from the tablespace an empty datafile or tempfile specified by filename or file_number. This clause causes the datafile or tempfile to be removed from the data dictionary and deleted from the operating system. The database must be open at the time this clause is specified.
    The ALTER TABLESPACE ... DROP TEMPFILE statement is equivalent to specifying the ALTER DATABASE TEMPFILE ... DROP INCLUDING DATAFILES.
    "
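    For reference, the primary-side command looks like the sketch below; on the physical standby, as the behaviour above suggests, the OS file may need to be cleaned up manually afterwards (the tablespace and file names are placeholders):

    ```sql
    -- On the primary: drop an empty datafile from its tablespace
    ALTER TABLESPACE users DROP DATAFILE '/u01/oradata/PROD/users02.dbf';

    -- On the standby: check whether the controlfile still lists the file
    SELECT name FROM v$datafile WHERE name LIKE '%users02%';
    ```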

  • Patching on PRD system and thereafter moving the logs to DR ?

    If we patch the PRD system and thereafter move the logs to DR, will the patches take effect on DR?
    We have patched our PRD system to the latest APPL patch levels. We have a DR system in place, and the link between the two was disabled before we started the patching process. The DR system was successfully patched before we took up the activity on PRD.
    The link between PRD and DR is still not restored.
    Now that our PRD system is on the latest patch levels, my idea is that restoring a full offline database backup of PRD onto the DR system should make DR exactly identical to PRD, with the link between the two then restored to allow dynamic syncing thereafter.
    Is my view above correct?
    Moreover, from my understanding, patches are a bunch of SAP Notes, and SAP Notes in turn are basically code corrections for some objects, which ultimately means some entries are either written to or deleted from database tables. My colleagues are of the view that instead of patching the DR server separately we could have simply restored the link and transferred the logs to DR, and the patches would have taken effect on the DR server. Is this right?
    From my past experience I am aware that once you restore an offline database backup onto any system, the patch level of the target system becomes identical to that of the source system (the system whose offline backup is being restored).
    I request the experts on the forum to give their views on the above.
    Many Thanks in advance

    when we take up any patching activity, can we just do it on PRD and then move the logs on to DR, and will DR then be on the same patch levels as PRD?
    Yes, that is the normal patching process for primary/secondary setups.
    You have to apply the application patches only on production; the sync process then takes care of DR (no need to apply the patches there; in fact you don't need to touch your existing DR system at all).
    As a best practice, just perform a quality refresh before the patching activity; then you can perform your test patching/upgrade in your quality system itself (not on DR).
    Regards,
    Nick Loy

  • Using DBConsole In Dataguard Setup

    Hi,
    We have 3 node RAC Primary and a 3 node RAC Standby part of our Production Dataguard setup. We are using DBConsole to monitor the databases in our environment.
    The agent and console was up on the Primary site and the Production database was being continuously monitored using the GUI.
    Recently as part of maintenance activity the database roles have been switched.
    My question is :
    Can we just proceed to start the emagent and console on the new primary site (which was not running initially as it was a standby previously) and continue to monitor the environment?What is the risk involved in doing so?
    Regards,
    Santosh

    Hello;
    I know it's not the question you asked but Data Guard broker makes this simple and is a great option.
    DGMGRL> show configuration;
    If I were going to use EM I would make sure that:
    1. I deploy Enterprise Manager agents to both the primary and standby servers.
    2. The standby database(s) are added to Enterprise Manager.
    Best Regards
    mseberg

  • Single node Dataguard setup

    Please can somebody provide me a source for a step-by-step single-instance physical standby Data Guard setup for my single-instance database.

    http://static7.userland.com/oracle/gems/alejandroVargas/DataGuardPhysicalStandbystep.pdf
    I got this PDF from the above link. Do you have one for a 'single instance to single instance dataguard setup'? In the one you sent, from page 17 of 27 it starts talking about RAC, which is confusing me. Please send it if you have any other PDF or source.

  • Moving a datafile in 12C

    Hi,
    in 12C on Win 2008 I'm following this tutorial :
    Oracle Database 12c Learning Library.../Performing Backup and Recovery
    In which it is said :
    Use the Linux mv command to move the datafile belonging to the APPTS tablespace to $HOME/appts.bkup: mv /u01/app/oracle/product/12.1.0/dbhome_1/dbs/appts.dbf $HOME/appts.bkup
    Being on Windows I use the move command.
    But of course Windows refuses to move the file because it is in use.
    Then what should be done?
    Thank you.

    user10274093 wrote:
    Being on Windows I use the move command, but Windows refuses because the file is in use. Then what should be done?
    It's worth reading this link about moving datafiles (12c) on Windows:
    http://docs.oracle.com/cd/E16655_01/server.121/e17636/dfiles.htm#ADMIN13837
    Below is the relevant information from the URL:
    When you relocate a data file on the Windows platform, the original data file might be retained in the old location, even when the KEEP option is omitted. In this case, the database only uses the data file in the new location when the statement completes successfully. You can delete the old data file manually after the operation completes if necessary.
    HTH
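    Since this is 12c, the online ALTER DATABASE MOVE DATAFILE statement (covered in that chapter of the documentation) avoids the OS-level move entirely; a sketch with placeholder Windows paths, not the tutorial's actual ones:

    ```sql
    -- 12c: relocate a datafile while the database stays open
    ALTER DATABASE MOVE DATAFILE
      'C:\APP\ORACLE\ORADATA\ORCL\APPTS.DBF'
      TO 'C:\BACKUP\APPTS.DBF';

    -- Verify the new location
    SELECT file_name FROM dba_data_files
     WHERE file_name LIKE '%APPTS%';
    ```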
