Taking an export on a different mount point

We have two databases on one server. I want to take an export of one schema from one database and store it directly on the other database's mount point due to a space crunch. How can I do that? We are using Solaris 5.10 as the OS, and the database version is 11.2.0.3.

Thanks for your quick reply. Here is what I tried:
Server name - unixbox02
Source database name - DV01
Target database name - DV02
I want to take an export of the TEST schema from DV01 to "/orabkup01/DV02/data_dump". The TEST schema is 100 GB+ in size and I don't have enough space on /orabkup01/DV01.
I created a directory object on DV01 named DATADIR1 as 'unixbox02:/orabkup01/DV02/data_dump'.
Then I granted read and write privileges on it to SYSTEM.
(Not sure who else I need to grant this privilege to.)
After that I ran the script below:
expdp "'/ as sysdba'"  schemas=test directory=datadir1 dumpfile=a1.dmp logfile=a2.log grants=y indexes=y rows=y constraints=y
But I received the error below:
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
I am new to Oracle DBA work, hence I am trying to explain as much as possible.
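The ORA-39070/ORA-29283 stack usually means Data Pump cannot write to the operating-system path behind the directory object. Two likely causes here: the path in CREATE DIRECTORY must be a plain local path with no "unixbox02:" host prefix, and the OS user that runs the database must be able to write to it (connecting as "/ as sysdba" for Data Pump is also discouraged). A minimal sketch of the fix, reusing the names from the post and assuming the oracle OS user can write to /orabkup01/DV02/data_dump:

```
sqlplus / as sysdba <<'EOF'
-- The path must be a plain local path: no "unixbox02:" host prefix.
CREATE OR REPLACE DIRECTORY datadir1 AS '/orabkup01/DV02/data_dump';
GRANT READ, WRITE ON DIRECTORY datadir1 TO system;
EOF

# Verify the OS-level permissions before retrying the export:
touch /orabkup01/DV02/data_dump/t.tmp && rm /orabkup01/DV02/data_dump/t.tmp

expdp system schemas=test directory=datadir1 dumpfile=a1.dmp logfile=a2.log
```

Since both databases live on the same server, DV01 can write to any local filesystem, including the one normally used by DV02; the directory object does not care which database the mount point "belongs" to.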

Similar Messages

  • Is it possible in 9i to take an export backup across two different mount points

    Hello Team,
    Is it possible in 9i to take an export across two different mount points with a file size of 22 GB?
    exp owner=PERFSTAT FILE =/global/nvishome5/oradata/jlrvista/PERFSTAT_exp01.dmp,/global/nvishome4/oradata/jlrvista/export/PERFSTAT_exp02.dmp FILESIZE=22528
    I tried the above but had no luck, so I later killed the session:
    prs72919-oracle:/global/nvishome5/oradata/jlrvista$ exp owner=SLENTON FILE =/global/nvishome5/oradata/jlrvista/PERFSTAT_exp01.dmp,/global/nvishome4/oradata/jlrvista/export/PERFSTAT_exp02.dmp FILESIZE=2048
    Export: Release 9.2.0.8.0 - Production on Thu Nov 14 13:25:54 2013
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Username: / as sysdba
    Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.8.0 - Production
    Export done in US7ASCII character set and UTF8 NCHAR character set
    server uses UTF8 character set (possible charset conversion)
    About to export specified users ...
    . exporting pre-schema procedural objects and actions
    . exporting foreign function library names for user SLENTON
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions for user SLENTON
    continuing export into file /global/nvishome4/oradata/jlrvista/export/PERFSTAT_exp02.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    About to export SLENTON's objects ...
    . exporting database links
    . exporting sequence numbers
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    . exporting cluster definitions
    . about to export SLENTON's tables via Conventional Path ...
    . . exporting table                      G_AUTHORS
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp > ps -ef | grep exp
    continuing export into file ps -ef | grep exp.dmp
    Export file: expdat.dmp > ^C
    continuing export into file expdat.dmp
    EXP-00056: ORACLE error 1013 encountered
    ORA-01013: user requested cancel of current operation
    . . exporting table                        G_BOOKS
    Export file: expdat.dmp > ^C
    continuing export into file expdat.dmp
    EXP-00056: ORACLE error 1013 encountered
    ORA-01013: user requested cancel of current operation
    . . exporting table                 G_BOOK_AUTHORS
    Export file: expdat.dmp > ^C
    continuing export into file expdat.dmp
    Export file: expdat.dmp > Killed

    See the repeated "Export file: expdat.dmp >" prompts in the session above: if you do not specify sufficient export file names, export will prompt you to provide additional file names. So for your 22 GB you either need to give 11 different file names (at a 2 GB FILESIZE) or provide each filename when prompted.
    FILE
    Default: expdat.dmp
    Specifies the names of the export files. The default extension is .dmp, but you can specify any extension. Since Export supports multiple export files, you can specify multiple filenames to be used.
    When Export reaches the value you have specified for the maximum FILESIZE, Export stops writing to the current file, opens another export file with the next name specified by the parameter FILE and continues until complete or the maximum value of FILESIZE is again reached. If you do not specify sufficient export filenames to complete the export, Export will prompt you to provide additional filenames.
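    One detail worth adding: in 9i, FILESIZE without a unit suffix is taken as bytes, so FILESIZE=2048 rolled over to a new file every 2 KB, which is why exp prompted constantly. The stray space in "FILE =" also likely kept the parameter from being parsed at all, so exp fell back to the default expdat.dmp. A sketch of what should work for a ~22 GB schema, splitting into 2 GB pieces across both mount points (file names are illustrative, and more can be listed as needed):

```
# 11 x 2 GB covers ~22 GB; exp prompts for extra names if these run out.
# Note: no spaces around "=", and a unit suffix on FILESIZE.
exp owner=PERFSTAT FILESIZE=2GB \
    FILE=/global/nvishome5/oradata/jlrvista/PERFSTAT_exp01.dmp,\
/global/nvishome5/oradata/jlrvista/PERFSTAT_exp02.dmp,\
/global/nvishome4/oradata/jlrvista/export/PERFSTAT_exp03.dmp,\
/global/nvishome4/oradata/jlrvista/export/PERFSTAT_exp04.dmp
```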

  • Btrfs with different file systems for different mount points?

    Hey,
    I finally bought an SSD, and I want to format it as f2fs (plus FAT to boot on UEFI) and install Arch on it; on my old HDD I intend to have /home and /var and try btrfs on them. But I saw in the Arch wiki that btrfs "cannot use different file systems for different mount points." Does that mean I cannot have / on f2fs and /home on btrfs? What can I do? Would XFS, ZFS or ext4 be better (I want the fastest one)?
    Thanks in advance, and sorry for my English.

    pedrofleck wrote: Gosh, what was I thinking, thank you! (I still have a doubt: is btrfs the best option?)
    Just a few weeks ago many of us were worrying about massive data loss due to a bug introduced in kernel 3.17 that caused corruption when using snapshots. Because btrfs is under heavy development, this sort of thing can be expected. That said, I have my entire system running on btrfs: I have 4 volumes, two raid1, a raid0 and a jbod. I also rsync to an ext4 partition and to ntfs, and I make offline backups as well.
    If you use btrfs, make sure you have backups and make sure you are ready to use them. Also, checksum your backups: rsync has an option to use checksums in place of access times to determine what to sync.

  • Unexpected disconnection external disk and different mount points

    Dear community,
    I have an application that needs to read and write data on an external disk called "external".
    If the volume is accidentally unmounted (by improperly unplugging it), it remounts as "external_1",
    and my app won't see it as the original valid destination.
    According to this documentation:
    https://support.apple.com/en-us/HT203258
    it needs a reboot to be solved, optionally removing the stale mount points before rebooting.
    Would there be a way to force OS X to remount the volume on the original mount point automatically,
    or to check the disk UUID and bypass the different mount point name (at the app or OS level)?
    Thanks for any clue on that.


  • Clone A Database Instance on a different mount point.

    Hello Gurus,
    I need your help: I need to clone an 11.1.0.7 database instance to a NEW mount point on the same host. The host is an HP-UX box, and my question is: do I need to install the Oracle database software on this new mount point and then clone, or will cloning to the new mount point itself create all the necessary software? Please provide any documents that will be helpful for the process.
    Thanks in advance.

    882065 wrote:
    do I need to install oracle database software in this new mount point and then clone ??
    No.
    or cloning to the NEW MOUNT point itself will create all the necessary software?
    No: cloning a database on the same host means cloning the database files; it does not mean cloning the Oracle executables. You don't need to clone the ORACLE_HOME on the same host.
    Please provide me any documents that will be helpful for the process.
    Try: http://www.oracle-base.com/articles/11g/DuplicateDatabaseUsingRMAN_11gR2.php
    Edited by: P. Forstmann on 29 nov. 2011 19:53
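    Following the oracle-base link above, the whole clone can be done with RMAN DUPLICATE. A minimal sketch for the same host, with purely illustrative instance names and mount points (the auxiliary instance needs its own password file and must be started NOMOUNT first):

```
# Connect to the source as target and the new instance as auxiliary.
rman target sys@SRCDB auxiliary sys@CLONEDB <<'EOF'
DUPLICATE TARGET DATABASE TO CLONEDB
  FROM ACTIVE DATABASE
  SPFILE
    SET db_file_name_convert  '/u01/oradata/SRCDB','/u02/oradata/CLONEDB'
    SET log_file_name_convert '/u01/oradata/SRCDB','/u02/oradata/CLONEDB';
EOF
```

    The two *_name_convert settings are what redirect the cloned files to the new mount point; no second software install is involved.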

  • How datafiles get extended across different mount points automatically

    Hello,
    I would like to understand the following: I have 20 datafiles created across 2 mount points, all set up with autoextend of 1 GB and a maxsize of 10 GB, and none of the files is at max size yet.
    10 datafiles at /mountpoint1, which has 50 GB free
    10 datafiles at /mountpoint2, which has 200 MB free
    Since mountpoint2 has absolutely no space for autoextend, will Oracle keep extending the datafiles at mountpoint1 until each hits its maxsize?
    Will the lack of space on mountpoint2 cause any issue?

    Girish Sharma wrote:
    In general, extents are allocated in a round-robin fashion
    Not necessarily true. I used to believe that, and even published a 'proof demo'. But then someone (it may have been Jonathan Lewis) pointed out that there were other variables I didn't control for that can cause Oracle to completely fill one file before moving to the next. Sorry, I don't have a link to that conversation, but it occurred in this forum, probably some time in 2007-2008.
    Ed,
    I guess you are looking for the below thread(s)?
    Re: tablespaces or datafile
    or
    Re: tablespace with multiple files, how is space consumed?
    Regards
    Girish Sharma
    Yes, but even those weren't the first 'publication' of my test results; as you can see in those threads, I refer to an earlier demo. That may have been on Usenet in comp.databases.oracle.server.

  • Moving the database to a different mount point

    Hi,
    I have R12 app tier on one server and 10gR2 database on separate server.
    I want to move the database files from /u01/oradata/PROD to /u02/oradata/PROD on the same server.
    I know how to do this for the database using alter database rename file .....
    But what do I have to do with the application and context files?
    Do I just need to edit the values s_dbhome1, s_dbhome2, s_dbhome3 and s_dbhome4 to point to /u02/oradata/PROD and then run adconfig?
    Many thanks
    Skulls.

    I am not cloning, just moving the database
    I know, but Rapid Clone does also help (use the same hostname, domain name, port pool, etc., except for the directory paths). Please note you do not need to use "alter database rename file", as Rapid Clone will take care of that.
    so I think manual changing of datafile names is needed, followed by editing the context file and running autoconfig?
    This is also valid.
    Thanks,
    Hussein
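    The manual route the two of you agree on can be sketched as a cold move (file names are illustrative, and every datafile, redo log and relocated control file needs its own rename):

```
sqlplus / as sysdba <<'EOF'
SHUTDOWN IMMEDIATE
EOF

cp /u01/oradata/PROD/*.dbf /u02/oradata/PROD/

sqlplus / as sysdba <<'EOF'
STARTUP MOUNT
ALTER DATABASE RENAME FILE '/u01/oradata/PROD/system01.dbf'
                        TO '/u02/oradata/PROD/system01.dbf';
-- ...repeat for each datafile and redo log...
ALTER DATABASE OPEN;
EOF

# Then update s_dbhome1..s_dbhome4 in the database context file and run
# AutoConfig so the EBS side picks up the new paths.
```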

  • Question about changing zonepath from one mount point to another

    Hi all
    A local zone is currently running with its zonepath, say /export/home/myzone, mounted on a Veritas volume. Is it possible to change the zonepath to a different mount point, say /zone/myzone, which is mounted on the same volume, without re-installing the entire zone?
    Regards
    Sunny

    Yes, you can use zonecfg to reconfigure the zone, which is Sun's supported way.
    I usually just edit /etc/zones/zone-name.xml. There are several ways to move a zone, but this has always worked for me:
    - stop the zone
    - tar the zone
    - move the zone
    - edit /etc/zones/zone-name.xml (to reflect the new path)
    - detach the zone
    - attach the zone (so it regenerates the hash)
    - boot the zone
    hth
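    The list above, as commands, assuming a zone named myzone moving from /export/home/myzone to /zone/myzone on the same volume (zonecfg, rather than editing the XML by hand, is the supported route for the path change):

```
zoneadm -z myzone halt
zoneadm -z myzone detach
mv /export/home/myzone /zone/myzone           # use tar/cpio instead across filesystems
zonecfg -z myzone 'set zonepath=/zone/myzone'
zoneadm -z myzone attach                      # regenerates the hash for the new path
zoneadm -z myzone boot
```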

  • To Extend the mount point in RHEL 4

    Hi Guys,
    I have Oracle EBS R12 and 11i on one hard disk but on different mount points. Now my requirement is to extend the mount point for EBS R12.
    Below are my mount points and the available space on the server:
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda1 49G 13G 34G 28% /
    /dev/sda3 145G 97G 41G 71% /apps1
    /dev/sdb2 289G 272G 2.8G 100% /apps12
    /dev/sda2 145G 80G 58G 59% /apps2
    none 1.3G 0 1.3G 0% /dev/shm
    /dev/sda7 981M 517M 415M 56% /home
    /dev/sda5 58G 51G 4.6G 92% /stage
    /dev/sda8 2.0G 36M 1.8G 2% /tmp
    /dev/sdb1 115G 75G 34G 69% /u01
    How can I extend the /apps12 mount point? Please help me.
    Thanks in advance.
    Regards,
    Nikunj

    You cannot simply extend a partition or volume under RHEL 4 unless it is:
    - a virtual disk
    - LVM
    - backed by available physical free space
    Your available options may depend on the output of the following commands:
    mount
    parted -s /dev/sdb print
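    For example, if /apps12 turned out to sit on an LVM logical volume, the grow could look like the sketch below (the volume group and device names are purely illustrative):

```
pvcreate /dev/sdc1                   # a new disk or a freed partition
vgextend appsvg /dev/sdc1            # add it to the volume group
lvextend -L +50G /dev/appsvg/appslv  # grow the logical volume
ext2online /dev/appsvg/appslv        # RHEL 4's online ext3 grow tool
```

    A plain partition such as /dev/sdb2 cannot be grown in place; that case means repartitioning and restoring the data.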

  • Split datafiles in multiple mount points

    Hi,
    I would like to split my Oracle Applications datafiles across multiple mount points to improve performance, but I didn't find any document (best practices) about which tablespaces should be put on the same or different mount points.
    For instance:
    I have mount points data1, data2 and data3. How do I split the "data" datafiles across these 3 mount points?
    I have already split data, index, logs and system onto different mount points.
    Has anyone already done something like this, or does anyone have any document/information about it?
    Thanks,
    Marcelo.

    Hi,
    It's a simple concept: move the datafiles to their respective mount points, then recreate the control file with the datafiles' new locations. That's it.
    Thanks and Regards
    A.Riyas
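    Recreating the control file is not strictly required for this; a gentler sketch is to take each tablespace offline, copy its files to the new mount point, and rename them in the dictionary (tablespace and file names are illustrative):

```
sqlplus / as sysdba <<'EOF'
ALTER TABLESPACE apps_data OFFLINE NORMAL;
EOF

cp /data1/apps_data01.dbf /data2/apps_data01.dbf

sqlplus / as sysdba <<'EOF'
ALTER DATABASE RENAME FILE '/data1/apps_data01.dbf'
                        TO '/data2/apps_data01.dbf';
ALTER TABLESPACE apps_data ONLINE;
EOF
```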

  • Fixed mount points for USB disks via HAL

    I've been trying to figure this out on my own, but I haven't been able to achieve my goal.
    I want to set up my system so that it auto-mounts a specific hard drive and makes its contents available at a specific filesystem location.
    I'd love to do that by identifying my hard drives by UUID or ID and assigning each one a different mount point.
    I've tried the approach described in the ArchWiki:
    File: /etc/hal/fdi/policy/20-$device_name.fdi
    <?xml version="1.0" encoding="UTF-8"?>
    <deviceinfo version="0.2">
    <device>
    <match key="volume.uuid" string="$device_uuid">
    <merge key="volume.label" type="string">$device_name</merge>
    </match>
    </device>
    </deviceinfo>
    http://wiki.archlinux.org/index.php/HAL#About_volumes_mount_points
    and this one:
    <device>
    <match key="info.udi" string="/org/freedesktop/Hal/devices/volume_uuid_E265_5E6A">
    <merge key="volume.policy.desired_mount_point" type="string">ipod</merge>
    <merge key="volume.policy.mount_option.iocharset=iso8859-15" type="bool">true</merge>
    <merge key="volume.policy.mount_option.sync" type="bool">true</merge>
    </match>
    </device>
    http://www.mythic-beasts.com/~mark/random/hal/
    I restart HAL each time I change the configuration in 20-$device_name.fdi or preferences.fdi (the second code example). Nothing at all shows up in /var/log/messages; it just silently refuses to mount the devices.
    It works fine without these configurations - HAL auto-mounts all these hard drives - but only if I do not mess with the configs in /etc/hal/fdi/policy.
    Can someone please explain what could be wrong here?

    Dehir wrote: I'm actually having similar difficulties. I have created /etc/hal/fdi/policy/20-$device_name.fdi files for each device, but when I try to mount them from pcmanfm they get mounted in random order every single time, which is not what I want. I'd prefer HAL mounting over fstab, but I still want them mounted with specific names.
    Yeah, that's the whole point - I want to have it done automatically with only one condition: fixed mount point names.

  • Which file defines mount point

    In Linux, /etc/fstab tells the OS the details of a volume when mounting it. I wanted to edit the equivalent to have certain devices use different mount points than they get automatically, but either I'm not doing it properly or OS X works a little differently in this respect.
    Thanks.
    p.s. I have Tiger, I just haven't updated my info (below)

    Brad,
    See if this info helps you.
    Beavis2084

  • R3load cannot export more than 100 mount points for Oracle?

    We have a DB with more than 390 sapdata### mount points (HP-UX PA-RISC). They are truly mount points, NOT directories under some other mount point.
    After the export using R3load (i.e., NOT database-specific), the keydb.xml generated for the import contains only sapdata1 through sapdata100.
    Is there a limit in R3load?
    Thanks!

    R3load doesn't copy the filesystem structure; it unloads the content of the database after checking its size and then distributes it across the files.
    Why do you have so many different mount points? Is there a technical reason behind it? Just curious...
    Markus

  • Mounting multiple directories with the same name on different servers to a single mount point on another server

    We have a requirement wherein we have multiple Solaris servers, and each server has a directory with the same name.
    The files in these directories will differ.
    These same-named directories on the multiple servers have to be mounted to a single directory on another server.
    We are planning to use NFS, but it seems we cannot mount multiple same-named directories from different servers onto a single mount point using NFS; we would need to create multiple mount points.
    Is there any way we can achieve this so that all the directories are mounted at a single mount point?

    You can try to mount all these mount points via NFS on one additional server and then export this new tree again via NFS to all your servers.
    Not sure if this works; if it does, you will just have an additional level in the tree.
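    NFS cannot merge several exports into one directory, but mounting each server's directory under its own subdirectory of a single parent gets close. A sketch with illustrative hostnames and paths, using Solaris mount syntax:

```
mkdir -p /data/all/server1 /data/all/server2
mount -F nfs server1:/export/appdata /data/all/server1
mount -F nfs server2:/export/appdata /data/all/server2
```

    Everything is then reachable under the one parent /data/all, at the cost of one extra directory level per server.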

  • Checking the space for /archlog mount point script

    I have the below shell script, which checks space on the /archlog mount point on the cappire (Solaris 10) server. When the space usage is above 80% it should e-mail. When I tested this script interactively it worked as expected.
    #!/usr/bin/ksh
    export MAIL_LIST="[email protected]"
    export ARCH_STATUS=`df -k /archlog | awk '{ print $5 }' | grep -v Use%`
    echo $ARCH_STATUS
    if [[ $ARCH_STATUS > 80% ]]
    then echo "archive destination is $ARCH_STATUS full please contact DBA"
    echo "archive destination /archlog is $ARCH_STATUS full on Cappire." | mailx -s "archive destination on cappire is $ARCH_STATUS full" $MAIL_LIST
    else
    exit 1
    fi
    exit
    When I scheduled it as a cron job it gives a different result. Right now /archlog is at 6%, so it should exit without e-mailing anything. But I am getting the below e-mail from the cappire server, which is strange.
    subject: archive destination on cappire is capacity
    Below is the e-mail content:
    6% full
    Content-Length: 62
    archive destination /archlog is capacity 6% full on Cappire.
    Please help me in resolving this issue - why am I getting the above e-mail when, by the script's logic, I should not get any e-mail at all?
    Is there any issue with cron? Please let me know.

    user01 wrote:
    Please help me in resolving this issue - why i am getting the above e-mail, i should not get any e-mail with the logic. Is there any issue with the cron. Please let me know.
    Not a problem with cron, but possibly an issue with the fact that you are doing a string comparison on something that you are thinking of as a number.
    Also, when I'm piping a bunch of stuff together and get unexpected results, I find it useful to break it down at the command line to confirm that each step returns what I expect:
    df -k /archlog
    df -k /archlog | awk '{ print $5 }'
    df -k /archlog | awk '{ print $5 }' | grep -v Use%
    A common mistake is to forget that jobs submitted from cron don't source the owning user's .profile. You need to make sure the script takes care of setting its environment, but that doesn't look to be the issue for this particular problem.
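    A sketch of a fixed script along those lines: the header line is skipped inside awk itself (the Solaris /usr/bin/df header says "capacity", so `grep -v Use%` never matched it under cron's environment), and the comparison is done on an integer rather than a string. The paths and address are kept from the post.

```shell
#!/bin/sh
MAIL_LIST="[email protected]"

# Pull the use% column out of `df -k` output read on stdin; skip the
# header line by line number instead of grepping for "Use%", which the
# Solaris df header ("capacity") never matched.
parse_usage() {
    awk 'NR > 1 { gsub(/%/, "", $5); used = $5 } END { print used }'
}

check_archlog() {
    usage=$(df -k "$1" | parse_usage)
    # -gt compares integers; the original [[ $ARCH_STATUS > 80% ]] compared
    # strings, so a leaked header word sorted above "80%" and always alerted.
    if [ "$usage" -gt 80 ]; then
        echo "archive destination $1 is ${usage}% full on cappire." |
            mailx -s "archive destination on cappire is ${usage}% full" "$MAIL_LIST"
    fi
}

# check_archlog /archlog
```

    Running the pipeline by hand from a shell started with cron's minimal environment (e.g. `env -i /bin/sh`) is a quick way to confirm which df the job actually picks up.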
