Clone A Database Instance on a different mount point.

Hello Gurus,
I need your help: I need to clone an 11.1.0.7 database instance to a NEW mount point on the same host. The host is an HP-UX box. My question is: do I need to install the Oracle database software on this new mount point and then clone, or will cloning to the NEW mount point itself create all the necessary software? Please point me to any documents that would be helpful for the process.
Thanks in advance.

882065 wrote:
Hello Gurus,
my question is do I need to install oracle database software in this new mount point and then clone?
No.
or cloning to the NEW MOUNT point itself will create all the necessary software?
No: cloning a database on the same host means cloning the database files; it does not mean cloning the Oracle executables. You don't need to clone the ORACLE_HOME on the same host.
Please provide me any documents that will be helpful for the process.
Try: http://www.oracle-base.com/articles/11g/DuplicateDatabaseUsingRMAN_11gR2.php
Edited by: P. Forstmann on 29 nov. 2011 19:53
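As the linked article describes, a same-host clone is normally done with RMAN DUPLICATE, remapping file names so the copy lands on the new mount point. A minimal sketch, assuming a clone instance named CLONE and /u01 to /u02 paths (both illustrative, not from the post):

```
# In the auxiliary (clone) instance's init.ora, remap datafile/redo paths:
#   db_file_name_convert=('/u01/oradata/PROD','/u02/oradata/CLONE')
#   log_file_name_convert=('/u01/oradata/PROD','/u02/oradata/CLONE')

RMAN> CONNECT TARGET sys@PROD AUXILIARY sys@CLONE
RMAN> DUPLICATE TARGET DATABASE TO CLONE FROM ACTIVE DATABASE NOFILENAMECHECK;
```

The existing ORACLE_HOME is reused; only datafiles, redo logs and control files are created under the new mount point.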

Similar Messages

  • Destination database instance is 'started' not 'mounted' in standby alert.log

    11.2.0.2 Physical standby db is taking a long time to start up, approximately 20 to 30 minutes. In the alert.log I am seeing the continuous message "destination database instance is 'started' not 'mounted'" at startup time. It's a Solaris 64-bit platform. Any idea what's wrong?

    user530956 wrote:
    11.2.0.2 Physical standby db is taking a long time to start up, approximately 20 to 30 minutes. In the alert.log I am seeing the continuous message "destination database instance is 'started' not 'mounted'" at startup time. It's a Solaris 64-bit platform. Any idea what's wrong?
    What parameters have you set for LOG_ARCHIVE_DEST_2 in the standby instance? Disable those parameters.
    Next time paste your PFILE of the standby.
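    As a sketch of the suggestion above (the destination number 2 is taken from the reply; adjust to whatever the PFILE actually shows):

```
-- On the standby instance, check the remote archive destination
SQL> show parameter log_archive_dest_2

-- Deferring the destination state disables it without removing the setting
SQL> alter system set log_archive_dest_state_2='DEFER' scope=both;
```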
    WOW... not even one question was resolved. Sad to see that no one is helpful on OTN; in that case, work with Support.
    Close your threads as answered to keep the forum clean.

  • Destination database instance is 'started' not 'mounted'

    When I tried to start up the standby database, it took a long time. (It has still not started.)
    The error in the alert log is
    destination database instance is 'started' not 'mounted'
    Could someone help me?
    Thanks in advance..
    regards,
    Jibu

    No version information.
    No hardware or operating system.
    No indication of whether physical, logical, or snapshot.
    No indication of what commands you issued.
    No error message (your impression of the message is not the message)
    No indication as to whether it used to work or is a new install
    No help is possible at this time.

  • Is it possible in 9i to take export backup in two different mount point

    Hello Team,
    Is it possible in 9i to take an export across two different mount points with a file size of 22 GB?
    exp owner=PERFSTAT FILE=/global/nvishome5/oradata/jlrvista/PERFSTAT_exp01.dmp,/global/nvishome4/oradata/jlrvista/export/PERFSTAT_exp02.dmp FILESIZE=22528
    I tried the above but had no luck, so I later killed the session:
    prs72919-oracle:/global/nvishome5/oradata/jlrvista$ exp owner=SLENTON FILE =/global/nvishome5/oradata/jlrvista/PERFSTAT_exp01.dmp,/global/nvishome4/oradata/jlrvista/export/PERFSTAT_exp02.dmp FILESIZE=2048
    Export: Release 9.2.0.8.0 - Production on Thu Nov 14 13:25:54 2013
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Username: / as sysdba
    Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.8.0 - Production
    Export done in US7ASCII character set and UTF8 NCHAR character set
    server uses UTF8 character set (possible charset conversion)
    About to export specified users ...
    . exporting pre-schema procedural objects and actions
    . exporting foreign function library names for user SLENTON
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions for user SLENTON
    continuing export into file /global/nvishome4/oradata/jlrvista/export/PERFSTAT_exp02.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    About to export SLENTON's objects ...
    . exporting database links
    . exporting sequence numbers
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    . exporting cluster definitions
    . about to export SLENTON's tables via Conventional Path ...
    . . exporting table                      G_AUTHORS
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp > ps -ef | grep exp
    continuing export into file ps -ef | grep exp.dmp
    Export file: expdat.dmp > ^C
    continuing export into file expdat.dmp
    EXP-00056: ORACLE error 1013 encountered
    ORA-01013: user requested cancel of current operation
    . . exporting table                        G_BOOKS
    Export file: expdat.dmp > ^C
    continuing export into file expdat.dmp
    EXP-00056: ORACLE error 1013 encountered
    ORA-01013: user requested cancel of current operation
    . . exporting table                 G_BOOK_AUTHORS
    Export file: expdat.dmp > ^C
    continuing export into file expdat.dmp
    Export file: expdat.dmp > Killed

    See the repeated prompts above ("Export file: expdat.dmp >"): if you do not specify sufficient export file names, export will prompt you to provide additional file names. So for your 22 GB you either need to give 11 different file names up front, or provide each file name when prompted.
    FILE
    Default: expdat.dmp
    Specifies the names of the export files. The default extension is .dmp, but you can specify any extension. Since Export supports multiple export files, you can specify multiple file names to be used.
    When Export reaches the value you have specified for the maximum FILESIZE, Export stops writing to the current file, opens another export file with the next name specified by the FILE parameter, and continues until the export is complete or the maximum value of FILESIZE is again reached. If you do not specify sufficient export file names to complete the export, Export will prompt you to provide additional file names.
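    As a quick check of the arithmetic, a small shell snippet (file names are made up for illustration) computes how many FILE entries a 22 GB export needs at a 2 GB FILESIZE, and builds the comma-separated list to pass to exp:

```shell
#!/bin/sh
total_gb=22      # total export size (from the post)
filesize_gb=2    # FILESIZE per dump file
# Round up: (22 + 2 - 1) / 2 = 11 files needed
files_needed=$(( (total_gb + filesize_gb - 1) / filesize_gb ))
echo "$files_needed"
# Build the FILE= list, e.g. FILE=exp_01.dmp,exp_02.dmp,...
for i in $(seq 1 "$files_needed"); do
  printf 'exp_%02d.dmp,' "$i"
done | sed 's/,$//'
echo
```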

  • Btrfs with different file systems for different mount points?

    Hey,
    I finally bought an SSD, and I want to format it to f2fs (plus a FAT partition to boot on UEFI) and install Arch on it, and on my old HDD I intend to have /home and /var and try btrfs on them. But I saw in the Arch wiki that btrfs "Cannot use different file systems for different mount points." Does that mean I cannot have / on f2fs and /home on btrfs? What can I do? Would XFS, ZFS or ext4 be better (I want the fastest one)?
    Thanks in advance, and sorry for my English.

    pedrofleck wrote:Gosh, what was I thinking, thank you! (I still have a doubt: is btrfs the best option?)
    Just a few weeks ago many of us were worrying about massive data loss due to a bug introduced in kernel 3.17 that caused corruption when using snapshots. Because btrfs is under heavy development, this sort of thing can be expected. That said, I have my entire system running on btrfs. I have 4 volumes: two RAID1, a RAID0 and a JBOD. I also rsync to an ext4 partition and NTFS. Furthermore, I make offline backups as well.
    If you use btrfs, make sure you have backups and make sure you are ready to use them. Also, make sure you checksum your backups. rsync has the option to use checksums in place of access times to determine what to sync.

  • Unexpected disconnection external disk and different mount points

    Dear community,
    I have an application that needs to read and write data on an external disk called "external".
    If the volume is accidentally unmounted (by improperly unplugging it), it will remount as "external_1" in the terminal,
    and my app won't see it as the original valid destination.
    According to this documentation:
    https://support.apple.com/en-us/HT203258
    it needs a reboot to be solved, optionally removing the wrong unused mount points before rebooting.
    Would there be a way to force OS X to remount the volume on the original mount point automatically,
    or to check the disk UUID and bypass the different mount point name (at the app or OS level)?
    Thanks for any clue on that.


  • Taking export on different mount point.

    We have two databases on one server. I want to take an export of one schema from one database and store it directly on the other database's mount point due to a space crunch. How can I do that? We are using Solaris 5.10 as the OS and the database version is 11.2.0.3.

    Thanks for your quick reply. Here is what i tried:
    Server Name - unixbox02
    Source database name - DV01
    Target database name - DV02
    I want to take an export of the test schema from DV01 to "/orabkup01/DV02/data_dump". The test schema is 100 GB+ in size and I don't have enough space on /orabkup01/DV01.
    I have created a directory on DV01 named datadir1 as 'unixbox02:/orabkup01/DV02/data_dump'.
    Then granted read and write privileges to system.
    (Not sure who else I need to grant this privilege to.)
    After that I ran the below script:
    expdp "'/ as sysdba'"  schemas=test directory=datadir1 dumpfile=a1.dmp logfile=a2.log grants=y indexes=y rows=y constraints=y
    But I have received the below error:
    ORA-39002: invalid operation
    ORA-39070: Unable to open the log file.
    ORA-29283: invalid file operation
    ORA-06512: at "SYS.UTL_FILE", line 536
    ORA-29283: invalid file operation
    I am new to Oracle DBA work, hence I am trying to explain as much as possible.
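    A likely cause of the ORA-29283 above is the 'unixbox02:' host prefix in the directory path: CREATE DIRECTORY takes a plain local path, and the Oracle OS user must be able to write there. A sketch using the names from the post (run as a privileged user on DV01):

```
CREATE OR REPLACE DIRECTORY datadir1 AS '/orabkup01/DV02/data_dump';
GRANT READ, WRITE ON DIRECTORY datadir1 TO system;
-- Also verify at the OS level that the oracle user can write to the path:
--   touch /orabkup01/DV02/data_dump/t && rm /orabkup01/DV02/data_dump/t
```

    Since both mount points are on the same server, the DV02 path is just a local directory as far as DV01 is concerned.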

  • How datafile being extended over different mount point automatically

    Hello,
    I would like to understand: suppose I have 20 datafiles created across 2 mount points. All are set up with autoextend of 1 GB and a maxsize of 10 GB, and no file is at max size yet.
    10 datafiles at /mountpoint1 with free space of 50 GB
    10 datafiles at /mountpoint2 with free space of 200 MB
    Since mountpoint2 has absolutely no space for autoextend, will Oracle keep extending the datafiles at mountpoint1 until each hits its maxsize?
    Will the fact that files on mountpoint2 cannot be extended cause any issue?

    Girish Sharma wrote:
    In general, extents are allocated in a round-robin fashion
    Not necessarily true. I used to believe that, and even published a 'proof demo'. But then someone (it may have been Jonathan Lewis) pointed out that there were other variables I didn't control for that can cause Oracle to completely fill one file before moving to the next. Sorry, I don't have a link to that conversation, but it occurred in this forum, probably some time in 2007-2008.
    Ed,
    I guess you are looking for the below thread(s)?
    Re: tablespaces or datafile
    or
    Re: tablespace with multiple files , how is space consumed?
    Regards
    Girish Sharma
    Yes, but even those weren't the first 'publication' of my test results; as you see in those threads, I refer to an earlier demo. That may have been on Usenet in comp.databases.oracle.server.
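    Whatever the allocation order turns out to be, a query along these lines (standard DBA_DATA_FILES columns; the tablespace name is an assumption) shows how much autoextend headroom each file still has on each mount point:

```
SELECT file_name,
       bytes/1024/1024    AS current_mb,
       maxbytes/1024/1024 AS max_mb,
       autoextensible
FROM   dba_data_files
WHERE  tablespace_name = 'USERS'
ORDER  BY file_name;
```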

  • Moving the database to different mount point.

    Hi,
    I have R12 app tier on one server and 10gR2 database on separate server.
    I want to move the database files from /u01/oradata/PROD to /u02/oradata/PROD on the same server.
    I know how to do this for the database using alter database rename file ...
    But what do I have to do with the application and context files?
    Do I just need to edit the values s_dbhome1, s_dbhome2, s_dbhome3 and s_dbhome4 to point to /u02/oradata/PROD and then run adconfig?
    Many thanks
    Skulls.

    I am not cloning, just moving the database
    I know, but Rapid Clone also helps here (use the same hostname, domain name, port pool, etc., except for the directory paths) -- please note you do not need to use "alter database rename file", as Rapid Clone will take care of that.
    so I think manually changing datafile names is needed, followed by editing the context file and running autoconfig?
    This is also valid.
    Thanks,
    Hussein

  • Connecting to two different database instances from a swing application.

    Hi All,
    I am developing a Swing application which needs to interact with two different database instances of two different WebLogic servers.
    More elaborately,
    I have some data in DB_Instance1 running on Weblogic_Server1 and I need to insert the same data into DB_instance2 running on Weblogic_server2. Is it possible? Could someone explain to me how to do that?
    Thanks in advance...
    Sreekanth.

    Hi Rick,
    Try logging onto both servers first. You'll have to use either 2 separate ODBC DSNs or 2 separate OLE DB connections. Set them both for Trusted Authentication (you'll have to configure that on the servers also), then try your query.
    If that doesn't work, then you'll have to create a Stored Procedure or View that links the two server sides.
    Thank you
    Don

  • Multiple oracle database instances with different characterset on  the same server

    Hello,
    Is it possible to have 2 database instances running with different character sets, one with AL32UTF8 and the other with WE8MSWIN1252?
    Are there any setup requirements to be performed prior to setting up the database instances?
    The 3rd-party utility that we want to use does not support AL32UTF8 and insists on using a database with character set WE8MSWIN1252.
    Kindly help.
    Thanks,
    Ram.

    Hello Zhe,
    I guess I posted my question in the wrong forum. I tried my best to find a suitable forum and thought this was the best and closest I found. Apparently not. Can you please let me know the right forum for my question?
    Below is a brief summary of what we are currently facing:
    We are in the process of finalizing plans to install Automic for our Retail applications to schedule jobs. In the process we came to know that Automic does not support AL32UTF8.
    Right now we have RMS and UC4 (now called Automic) running on the database server, as UC4 supports AL32UTF8 and the schema for UC4 is inside the RMS database.
    Going forward it is recommended to have a separate database instance on the same server as the RMS database, with a different character set, which is WE8MSWIN1252.
    Please let me know what forum to post in, I will repost the question.
    Thanks,
    Ram.
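    On the original question itself: yes, two instances on one server can use different character sets, because each database's character set is fixed at its own creation time (via DBCA or CREATE DATABASE). A sketch of the relevant clause only (the database name is made up; a full CREATE DATABASE statement needs file and log clauses as well):

```
CREATE DATABASE uc4db
  CHARACTER SET WE8MSWIN1252
  NATIONAL CHARACTER SET AL16UTF16;
```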

  • Split datafiles in multiple mount points

    Hi,
    I would like to split my Oracle Applications datafiles across multiple mount points to improve performance, but I didn't find any document (best practices) discussing which tablespaces should be put on the same or different mount points.
    For instance:
    I have mount points data1, data2 and data3. How do I split the "data" datafiles across these 3 mount points?
    I have already split data, index, logs and system across different mount points.
    Has anyone already done something like that, or do you have any document/information about it?
    Thanks,
    Marcelo.

    Hi,
    It's a simple concept.
    You have to move the datafiles to the respective mount points.
    Then recreate the control file based on the new datafile locations.
    That's it.
    Thanks and Regards
    A.Riyas
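    For one datafile, the move can be sketched as follows (paths and tablespace name are made up; the tablespace must be offline while the file is copied, and note that ALTER DATABASE RENAME FILE updates the control file itself, so recreating the control file is an alternative rather than a requirement):

```
ALTER TABLESPACE apps_data OFFLINE;
-- Copy the file at the OS level, e.g.: cp /data1/apps_data01.dbf /data2/apps_data01.dbf
ALTER DATABASE RENAME FILE '/data1/apps_data01.dbf' TO '/data2/apps_data01.dbf';
ALTER TABLESPACE apps_data ONLINE;
```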

  • To Extend the mount point in RHEL 4

    Hi Guys,
    I have Oracle EBS R12 and 11i on one hard disk but on different mount points. Now my requirement is to extend the mount point for EBS R12.
    Below is my mount point layout and the available space on the server:
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda1 49G 13G 34G 28% /
    /dev/sda3 145G 97G 41G 71% /apps1
    /dev/sdb2 289G 272G 2.8G 100% /apps12
    /dev/sda2 145G 80G 58G 59% /apps2
    none 1.3G 0 1.3G 0% /dev/shm
    /dev/sda7 981M 517M 415M 56% /home
    /dev/sda5 58G 51G 4.6G 92% /stage
    /dev/sda8 2.0G 36M 1.8G 2% /tmp
    /dev/sdb1 115G 75G 34G 69% /u01
    I want to extend the /apps12 mount point. Please help me.
    Thanks in advance.
    Regards,
    Nikunj

    You cannot simply extend a partition or volume under RHEL4 unless:
    - it is a virtual disk
    - it uses LVM
    - physical free space is available
    Your available options may depend on the output of the following commands:
    <pre>
    mount
    parted -s /dev/sdb print
    </pre>

  • Question about changing zonepath from one mount point to another

    Hi all
    A local zone is currently running with its zonepath, say /export/home/myzone, mounted on a Veritas volume. Is it possible to change the zonepath to a different mount point, say /zone/myzone, which is mounted on the same volume, without re-installing the entire zone?
    Regards
    Sunny

    Yes.
    You can use zonecfg to reconfigure the zone,
    which is Sun's supported way.
    I just usually edit /etc/zones/zone-name.xml.
    There are several ways to move a zone, but this has
    always worked for me:
    - stop the zone
    - tar the zone
    - move the zone
    - edit /etc/zones/zone-name.xml (to reflect the new path)
    - detach the zone
    - attach the zone (so it regenerates the hash)
    - boot the zone
    hth
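    The supported zonecfg route above can be sketched as commands, per standard Solaris zoneadm/zonecfg usage (the zone name myzone and both paths come from the question; adjust to your layout):

```
zoneadm -z myzone halt
zoneadm -z myzone detach
# copy the zone root to the new location
cd /export/home && tar cf - myzone | (cd /zone && tar xf -)
zonecfg -z myzone set zonepath=/zone/myzone
zoneadm -z myzone attach
zoneadm -z myzone boot
```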

  • Fixed mount points for USB disks via HAL

    I've been trying to figure this out on my own but I haven't been able to achieve my goal.
    I want to set up my system so that it auto-mounts a specific hard drive and makes its contents available at a specific file system location.
    I'd love to do that by identifying my hard drives by UUID or ID and assigning each one a different mount point.
    I've tried the approach described in the ArchWiki:
    File: /etc/hal/fdi/policy/20-$device_name.fdi
    <?xml version="1.0" encoding="UTF-8"?>
    <deviceinfo version="0.2">
      <device>
        <match key="volume.uuid" string="$device_uuid">
          <merge key="volume.label" type="string">$device_name</merge>
        </match>
      </device>
    </deviceinfo>
    http://wiki.archlinux.org/index.php/HAL#About_volumes_mount_points
    and this one:
    <device>
      <match key="info.udi" string="/org/freedesktop/Hal/devices/volume_uuid_E265_5E6A">
        <merge key="volume.policy.desired_mount_point" type="string">ipod</merge>
        <merge key="volume.policy.mount_option.iocharset=iso8859-15" type="bool">true</merge>
        <merge key="volume.policy.mount_option.sync" type="bool">true</merge>
      </match>
    </device>
    http://www.mythic-beasts.com/~mark/random/hal/
    I restart HAL each time I change the configuration in 20-$device_name.fdi or preferences.fdi (the second code example). Nothing at all can be found in /var/log/messages. It just silently refuses to mount the devices.
    It works okay without these configurations, and HAL auto-mounts all these hard drives, but only if I do not mess with the configs in /etc/hal/fdi/policy.
    Can someone please explain what could be wrong here?

    Dehir wrote:
    I'm actually having similar difficulties. I have created /etc/hal/fdi/policy/20-$device_name.fdi files for each device. But when I'm trying to mount them from pcmanfm they get mounted every single time in random order, which is not what I want. I'd prefer HAL mounting over fstab but still want them to be mounted with specific names.
    Yeah, that's the whole point - I want to have it done automatically with only one condition - fixed mount point names.
