Best Approach for using Data Pump

Hi,
I configured a new database which I set up with schemas that I imported from another production database. Now, before this database becomes the new production database, I need to re-import the schemas so that the data is up to date.
Is there a way to use Data Pump so that I don't have to drop all the schemas first? Can I just export the schemas and somehow just overwrite what's in there already?
Thanks,
Nora

Hi, you can use the NETWORK_LINK parameter to import data directly from another (remote) database.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#i1007380
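For example, a network import might look like the sketch below (a minimal sketch only - the link name, connect string, schema names and passwords are placeholders, not taken from your post). NETWORK_LINK pulls the data straight from the source database over a database link, and TABLE_EXISTS_ACTION=REPLACE drops and re-creates each table that already exists, so you don't have to drop the schemas first.
On the new (target) database, create a link pointing at the current production database:
SQL> CREATE DATABASE LINK prod_link CONNECT TO system IDENTIFIED BY password USING 'PRODDB';
Then run the import from the target host (no dump file is written in a network import):
impdp system/password SCHEMAS=scott,hr NETWORK_LINK=prod_link TABLE_EXISTS_ACTION=REPLACE LOGFILE=refresh_schemas.log
Keep in mind that REPLACE only refreshes tables that exist in the source; any objects created only in the new database are left as they are.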
Regards.

Similar Messages

  • What is the best article for understanding Data Pump

    Hi,
    What is the best article for understanding the relationship / dependency of NETWORK_LINK with FLASHBACK_SCN or FLASHBACK_TIME?
    Why is it mandatory to have NETWORK_LINK when we are using FLASHBACK_SCN or FLASHBACK_TIME?
    Can someone explain the internals of that dependency?
    Thanks
    Naveen

    There's no direct dependency between NETWORK_LINK and FLASHBACK_SCN or FLASHBACK_TIME.
    As noted in this Oracle doc, if the NETWORK_LINK parameter is specified, the SCN refers to the SCN of the source database.
    FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm#sthref120
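    For example (the schema name and SCN value below are just placeholders), you can take a consistent SCN from the source database and pass it to a normal file-based export - no NETWORK_LINK is needed for that:
    SQL> SELECT current_scn FROM v$database;
    expdp system/password SCHEMAS=scott DUMPFILE=scott_consistent.dmp FLASHBACK_SCN=3043769
    The same FLASHBACK_SCN parameter can also be given to an impdp command that uses NETWORK_LINK; in that case, as the quote above says, the SCN is interpreted against the source database reached through the link.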

  • Best Approach for Use Case

    Experts,
    I am creating a small POC for a search engine. I am thinking about the best way to achieve the scenario below.
    1) Assume that, like in Google, I am entering some data in the text box. I want the area below the text box to show the records (leaving aside where the records are fetched from - DB or flat files) based on the user input values, and to change as and when new characters are entered or deleted.
    I guess contextual events clubbed together with regions will help me with this, but I need to know if I am thinking in the right direction.
    2) My other page has a registration form which contains 3 sections (logically), say Emp, Dept, Country. All the EOs have validations defined on them. Now the user can enter only Emp details and proceed, or only Dept details and proceed. So in such a case, how do I skip the other sections' validations?
    I guess subforms can help me with this.
    Please advise.
    Jdev 11.1.1.4 and beyond.

    Hi,
    You did not get the article. It explains a wizard-type interface with Next functionality, where validation is not fired at first but at commit time.
    "When I commit, how do I make sure that only transaction-level validation is fired for the tab/section of one entity only?" - That article doesn't meet your use case. SkipValidation will only skip validations when you need to navigate (together with immediate="false", the default), but it won't allow you to commit unless and until your EO validates.
    "I guess SkipValidation set to Custom would help me." - Wrong.
    "Let's say I have a page with 3 tabs, or a single page with 3 sections, each showing data from a different EO, and each EO has validations defined on it." - In that case you need to suppress the validation of that EO using the flag workaround I explained in my previous post: just set the flag to 'DRAFT' to suppress the validation of that EO before commit. Using it as an entity-level script validator would be better.
    Regards,
    Edited by: Santosh Vaza on Jun 29, 2012 10:51 AM

  • Best approach for using Faces with growing children?

    Hi all,
    I'm a recent Aperture user with young kids (four-year-old twins), and I'm wondering how best to use Faces to identify the kids' faces.  I started working with about 6 months' worth of recent photos in Aperture (3.2.3) before importing my full iPhoto library.  Faces did quite a good job of identifying the kids in that sample.  I've just imported all of my iPhoto library, which includes photos back to when the kids were first born.  Faces is now making suggestions that seem pretty reasonable, but far from perfect.
    I suspect that if I go through the process of training Faces to do a better job with the "baby faces", its performance would improve on the old photos I just added, but is that a bad idea?  I'm afraid that training Faces to recognize the baby version of someone will "broaden" the definition it's using, making recognition less accurate for new photos I add. I could tell Faces that the baby versions are different people, but that might be worse -- then I'd have two very similar face profiles that are competing to "claim" new faces.  Does anyone have any experience that might help?
    Thanks,
    Brad

    Let me start by stating:
    - I don't know
    - I don't think you'll get any help from Apple.
    That said, here's what I suggest.  The Faces parameters are biometric.  The human head changes the least of any body part over the course of life.  Still, there is bound to be an age prior to which Faces identification works less-well because the data is "smeared".  Similarly, after that age, the Faces identification should work with the same level of accuracy.
    I would, for the present, ignore that.  Identify all Faces you have.  If Faces identification is sub-optimal, pick an age that you think corresponds to what I've laid out above, and create a new Face for all pictures of the individual prior to that age.  At that point you'll have two "Faces" for each individual: let's say "Robin Infant" and "Robin (post-infant)".
    While Aperture makes it easy to combine Faces (drag-drop in Faces View), I don't know an easy way to split named Faces.  It's easy enough to group the Images you want (filter for "Face is ... " and for "Date is before ... ").  From there, you will have to rename the Faces one-by-one.  This goes quickly by pasting the name in the name field.
    My guess is that the identification algorithm rejects the data from included faces that are outliers.  IOW, I don't think you can train Faces to be sloppy.
    Let us know what you find out.

  • What is the best approach for transferring DAT to Hard Drive?

    Hi,
    I've got ten 60-minute DATs of my 1970s rock band, transferred from live reel-to-reel recordings of gigs. I want to get rid of the DAT machine on eBay, and before I do, I want to get these DATs transferred for storage on CDs or DVDs.
    I have an external hard drive to transfer to before burning to disc.
    Does it make sense to use Logic Pro to transfer the DATS through the computer to the hard drive? Or is there a better / easier / faster way to do it?
    I'll want to go back and find songs later and use Logic Pro to EQ and mix them onto a CD by track at some point.
    Each DAT is 60 minutes worth of material and ideally I'd like to be able to run through the transfer and mark where songs start as the "in between" stuff is there also.
    Thanks,
    Bob

    Hey, thanks for the reply.
    Few more comments....
    Could you suggest a Mac stereo editor/recorder.
    What is s/pdif? I'm new to computer recording.
    Slave recorder to the DAT....I'm hooking the DAT into the iMac through an interface...is this what you mean?
    Got you on separate recordings....I want to do that later...Just get the whole tape onto the hard drive then onto a CD or DVD. When I have more time I'll run through and find songs, etc.
    Again, thanks,
    Bob

  • What are the 'gotchas' for exporting using Data Pump (10205) from HPUX to Windows?

    Hello,
    I have to export a schema using Data Pump from 10205 on HPUX 64-bit to a Windows 64-bit database of the same 10205 version. What gotchas can I expect from doing this? I mean, Data Pump export is cross-platform, so this sounds straightforward. But are there issues I might face when exporting with Data Pump on the HPUX platform and then importing the dump on the Windows 2008 platform, same database version 10205? Thank you in advance.

    On the HPUX database, run this statement and look for the value of NLS_CHARACTERSET:
    SQL> select * from NLS_DATABASE_PARAMETERS;
    http://docs.oracle.com/cd/B19306_01/server.102/b14237/statviews_4218.htm#sthref2018
    When creating the database on Windows, you have two options - manually create the database or use DBCA. If you plan to create the database manually, specify the database characterset in the CREATE DATABASE statement - http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_5004.htm#SQLRF01204
    If using DBCA, see http://docs.oracle.com/cd/B19306_01/server.102/b14196/install.htm#ADMQS0021 (especially http://docs.oracle.com/cd/B19306_01/server.102/b14196/install.htm#BABJBDIF)
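    For reference, the file-based transfer itself might look roughly like this (directory object, schema and file names are placeholders). On HPUX:
    expdp system/password SCHEMAS=scott DIRECTORY=dp_dir DUMPFILE=scott.dmp LOGFILE=scott_exp.log
    Copy scott.dmp to the Windows server (use binary mode if transferring via FTP), then on Windows:
    impdp system/password SCHEMAS=scott DIRECTORY=dp_dir DUMPFILE=scott.dmp LOGFILE=scott_imp.log
    The dump file format itself is platform-independent, so no conversion step is needed; the character set of the target database, as described above, is the main thing to check.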
    HTH
    Srini

  • Best Practice for Using Static Data in PDPs or Project Plan

    Hi There,
    I want to make custom reports using PDPs & Project Plan data.
    What is the Best Practice for using "Static/Random Data" (which is not available in MS Project 2013 columns) in PDPs & MS Project 2013?
    Should I add that data in Custom Field (in MS Project 2013) or make PDPs?
    Thanks,
    EPM Consultant
    Noman Sohail

    Hi Dale,
    I have a Project Level custom field "Supervisor Name" that is used for Project Information.
    For the purpose of viewing that project-level custom field data in Project views, I created a task-level custom field "SupName" and used the formula:
    [SupName] = [Supervisor Name]
    That shows the Supervisor Name in Schedule.aspx.
    ============
    Question: I want that project-level custom field "Supervisor Name" in My Work views (Tasks.aspx).
    The field is enabled in Tasks.aspx, but the data is not present - the column is blank.
    How can I get the data into the My Work views?
    Noman Sohail

  • Best approach for uploading document using custom web part-Client OM or REST API

    Hi,
    I am using a custom upload Visual Web Part for uploading documents into my document library with a lot of metadata.
    These columns include single line of text, drop-down list, lookup columns and managed metadata (taxonomy) columns.
    So I would like to know which is the best approach for uploading.
    Currently I am trying to use the traditional SSOM (server object model). I would like to know which is the best approach for uploading files into document libraries.
    I have hundreds of sub-sites with 30+ document libraries within those sub-sites. Currently it takes a few minutes to upload the files in my dev environment. I am just wondering what would happen if the number of sub-sites reaches a hundred!
    I am looking at this from a performance perspective.
    my thought process is :
    1) Implement Client OM
    2) REST API
    Has anyone tried these approaches before, and which approach provides better performance?
    If anyone has sample source code or links, please share them.
    Also, are there any restrictions on the size of the uploaded file?
    Any suggestions are appreciated!

    Try below:
    http://blogs.msdn.com/b/sridhara/archive/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010.aspx
    http://stackoverflow.com/questions/9847935/upload-a-document-to-a-sharepoint-list-from-client-side-object-model
    http://www.codeproject.com/Articles/103503/How-to-upload-download-a-document-in-SharePoint
    public void UploadDocument(string siteURL, string documentListName,
        string documentListURL, string documentName, byte[] documentStream)
    {
        using (ClientContext clientContext = new ClientContext(siteURL))
        {
            // Get the document library by title
            List documentsList = clientContext.Web.Lists.GetByTitle(documentListName);
            var fileCreationInformation = new FileCreationInformation();
            // Assign the content byte[] i.e. documentStream
            fileCreationInformation.Content = documentStream;
            // Allow overwrite of the document
            fileCreationInformation.Overwrite = true;
            // Upload URL
            fileCreationInformation.Url = siteURL + documentListURL + documentName;
            Microsoft.SharePoint.Client.File uploadFile =
                documentsList.RootFolder.Files.Add(fileCreationInformation);
            // Update the metadata for a field having name "DocType"
            uploadFile.ListItemAllFields["DocType"] = "Favourites";
            uploadFile.ListItemAllFields.Update();
            clientContext.ExecuteQuery();
        }
    }
    If this helped you resolve your issue, please mark it Answered

  • What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for the various types of users. (We can store only the relevant data in a particular schema, and the number of records per table can be reduced by replicating tables, but then we also have to maintain all of the data in another schema, so we would need to update two schemas in a given session - a separate schema for each user plus another schema holding all the data - and there may be update problems.)
    OR
    2. Using single schema for all users.
    Note: All users may access the same tables, and there may be many more records than in the previous case.
    Which is the best case?
    Please give your valuable ideas.

    "It is true, but I want a solution from you all." - And I want you to tell me how to fix my friend's car.

  • Best approach for IDOC - JDBC scenario

    Hi,
    In my scenario I am creating a sales order (ORDERS04) in the R/3 system, which needs to be replicated to a SQL Server system. I am sending the order to XI as an IDoc and want to use JDBC for sending the data to SQL Server. I need to insert data into two tables (header & details). Is it possible without BPM? Or what is the best approach for this?
    Thanks,
    Sri.

    Yes, this is possible without BPM.
    Just create the corresponding data type for the insertion.
    If the records to be inserted are different, then there will be 2 different data types (one for the header and one for the detail).
    Do a multimapping where your source is mapped into the header and detail data types, and then send the data using the JDBC receiver adapter.
    For the structure of your data type for the insertion, just check this link:
    http://help.sap.com/saphelp_nw04/helpdata/en/7e/5df96381ec72468a00815dd80f8b63/content.htm
    To access any Database from XI, you will have to install the corresponding Driver on your XI server.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3867a582-0401-0010-6cbf-9644e49f1a10
    Regards,
    Bhavesh

  • Best Practice for Master Data Reporting

    Dear SAP-Experts,
    We face a challenge at the moment and we are still trying to find the right approach to it:
    Business requirement is to analyze SAP Material-related Master Data with the BEx Analyzer (Master Data Reporting)
    Questions they want to answer here are for example:
    - How many active Materials/SKUs do we have?
    - Which country/Sales Org has adopted certain Materials?
    - How many Series do we have?
    - How many SKUs belong to a specific season
    - How many SKUs are in a certain product lifecycle
    - etc.
    The challenge is, that the Master Data is stored in tables with different keys in the R/3.
    The keys in these tables are on various levels (a selection below):
    - Material
    - Material / Sales Org / Distribution Channel
    - Material / Grid Value
    - Material / Grid Value / Sales Org / Distribution Channel
    - Material / Grid Value / Sales Org / Distribution Channel / Season
    - Material / Plant
    - Material / Plant / Category
    - Material / Sales Org / Category
    etc.
    So even though the information is available at different levels of detail, the business requirement is to have one query/report that combines all the information. We are currently struggling a bit to decide what would be the best approach for this requirement. Did anyone face such a requirement before, and what would be the best practice? We already tried to find information online, but it seems master data reporting is not very well documented. Thanks a lot for your valuable contribution to this discussion.
    Best regards
    Lukas

    Pass a reference to the parent into the modal popup. Then you can reference anything in the parent scope.
    I haven't done this in 2.0 yet, so I can't give you code. I'll post if I do.
    Oh, also, you can reference the parent using parentDocument. So in the popup you could do:
    parentDocument.myPublicVariable = "whatever";
    Tracy

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10 G (10.2.0.3.0). OWB is installed on Linux.
    We have number of mappings. We built process flows for mappings as well.
    I would like to know the best approaches to incorporate restart options in our process, i.e. handling the failure of a mapping in a process flow.
    How do we recycle failed rows?
    Are there any built-in features / best approaches in OWB to implement the above?
    Do the runtime audit tables help us to build a restart process?
    If not, do we need to maintain our own (custom) tables to hold such data?
    How did other forum members handle the above situations?
    Any idea ?
    Thanks in advance.
    RI

    Hi RI,
    "How many mappings (range) do you have in a process flow?" - Several hundred (100-300 mappings).
    "If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails?" - Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails, the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the m2 mapping execution from the Workflow monitor - open the diagram with the process flow, select mapping m2, click the Expedite button and choose the Repeat option.
    "On restart, will it run m1 again and then m2 and so on, or will it restart at row 1 of m2?" - You can specify the restart point. "At row 1 of m2" - I don't understand what you mean; all mappings run in set-based mode, so in case of an error all table updates are rolled back (there are several exceptions - for example multiple target tables in a mapping without correlated commit, or an error in a post-mapping - in which case you must carefully analyze the results of the error).
    "What will happen if m3 fails?" - The process is suspended and you can restart execution from m3.
    "By running without failover and with maximum number of errors = 0, do we achieve recycling of failed rows down to zero (0)?" - These settings guarantee only two possible results of a mapping - SUCCESS or ERROR.
    "What is the impact if we have a large volume of data?" - In my opinion, for large volumes set-based mode is the preferred data processing mode. With this mode you have the full range of enterprise features of the Oracle database - parallel query, parallel DML, nologging, etc.
    Oleg

  • Best approach for RFC call from Adapter module

    What is the best approach for making an RFC call from a receiver file adapter module?
    1. JCo
    2. Is it possible to make use of the MappingLookupAPI classes to achieve this, or do those run in the mapping runtime environment only?
    3. Any other way?
    Has anybody ever tried this? Any pointers????
    Regards,
    Amol

    Hi,
    The JCo lookup is internally the same as the JCo call; the only difference is that you are not hardcoding the system-related data in the code, so it is easier to maintain during transports.
    Also, the JCo lookup code is more readable.
    Regards
    Vijaya

  • Best Approach for Reporting on SAP HANA Views

    Hi,
    Kindly provide information regarding the best approach for reporting on HANA views for the architecture shown below.
    We are mainly looking for information around the following points.
    There are two reporting options known to us:
    1) Reporting on HANA views through SAP BW (View > VirtualProvider > BEx > BI 4.1)
    2) Reporting on HANA views in ECC using BI 4.1 tools
    Which is the best option for reporting (please provide supporting reasons, i.e. advantages and limitations)? In case a better approach exists, please let us know.
    Also, what is the best approach for reporting in a mixed scenario where data from BW and HANA views is to be used together?

    Hi Alston,
    To be honest, I did not fully understand the architecture that you have laid out in your message.
    As far as I understand, you have one HANA instance with ERP and BW both running on HANA - or there might be two HANA instances, with ERP and BW running independently.
    Anyway, if you have HANA you have many options to present data using analytic views. You also have BW on HANA as the EDW. So in both cases you can use BO, and Lumira as well, for presenting data.
    Check this document as well: http://scn.sap.com/docs/DOC-34403

  • What is the best approach to process data on row by row basis ?

    Hi Gurus,
    I need to code a stored procedure to process sales_orders into invoices. I think I must do a row-by-row operation, but if possible I don't want to use a cursor. The algorithm is below:
    for all sales_orders with status = "open"
        check the credit limit
        if over the credit limit -> insert a row into log_table; process the next order
        check for overdue invoices
        if there is an overdue invoice -> insert a row into log_table; process the next order
        check all order_items for stock availability
        if there is an item without enough stock -> insert a row into log_table; process the next order
        if all checks above are passed:
            create the Invoice (header + details)
    end_for
    What is the best approach to process data on a row-by-row basis like the above?
    Thank you for your help,
    xtanto

    Processing data row by row is not the fastest method out there. You'll be sending many more SQL statements to the database than needed. The advice is to use SQL, and if that's not possible or too complex, use PL/SQL with bulk processing.
    In this case a SQL only solution is possible.
    The example below is oversimplified, but it shows the idea:
    SQL> create table sales_orders
      2  as
      3  select 1 no, 'O' status, 'Y' ind_over_credit_limit, 'N' ind_overdue, 'N' ind_stock_not_available from dual union all
      4  select 2, 'O', 'N', 'N', 'N' from dual union all
      5  select 3, 'O', 'N', 'Y', 'Y' from dual union all
      6  select 4, 'O', 'N', 'Y', 'N' from dual union all
      7  select 5, 'O', 'N', 'N', 'Y' from dual
      8  /
    Table created.
    SQL> create table log_table
      2  ( sales_order_no number
      3  , message        varchar2(100)
      4  )
      5  /
    Table created.
    SQL> create table invoices
      2  ( sales_order_no number
      3  )
      4  /
    Table created.
    SQL> select * from sales_orders
      2  /
            NO STATUS IND_OVER_CREDIT_LIMIT IND_OVERDUE IND_STOCK_NOT_AVAILABLE
             1 O      Y                     N           N
             2 O      N                     N           N
             3 O      N                     Y           Y
             4 O      N                     Y           N
             5 O      N                     N           Y
    5 rows selected.
    SQL> insert
      2    when ind_over_credit_limit = 'Y' then
      3         into log_table (sales_order_no,message) values (no,'Over credit limit')
      4    when ind_overdue = 'Y' and ind_over_credit_limit = 'N' then
      5         into log_table (sales_order_no,message) values (no,'Overdue')
      6    when ind_stock_not_available = 'Y' and ind_overdue = 'N' and ind_over_credit_limit = 'N' then
      7         into log_table (sales_order_no,message) values (no,'Stock not available')
      8    else
      9         into invoices (sales_order_no) values (no)
    10  select * from sales_orders where status = 'O'
    11  /
    5 rows created.
    SQL> select * from invoices
      2  /
    SALES_ORDER_NO
                 2
    1 row selected.
    SQL> select * from log_table
      2  /
    SALES_ORDER_NO MESSAGE
                 1 Over credit limit
                 3 Overdue
                 4 Overdue
                 5 Stock not available
    4 rows selected.
    Hope this helps.
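    If the real checks ever get too complex for a single SQL statement, a PL/SQL version that at least fetches in bulk could look roughly like this (same tables as above; the LIMIT of 100 is an arbitrary batch size, and the conditional inserts are kept row by row for clarity):
    declare
      cursor c_orders is
        select no, ind_over_credit_limit, ind_overdue, ind_stock_not_available
          from sales_orders
         where status = 'O';
      type t_orders is table of c_orders%rowtype;
      l_orders t_orders;
    begin
      open c_orders;
      loop
        -- fetch a batch of open orders
        fetch c_orders bulk collect into l_orders limit 100;
        exit when l_orders.count = 0;
        for i in 1 .. l_orders.count loop
          if l_orders(i).ind_over_credit_limit = 'Y' then
            insert into log_table (sales_order_no, message) values (l_orders(i).no, 'Over credit limit');
          elsif l_orders(i).ind_overdue = 'Y' then
            insert into log_table (sales_order_no, message) values (l_orders(i).no, 'Overdue');
          elsif l_orders(i).ind_stock_not_available = 'Y' then
            insert into log_table (sales_order_no, message) values (l_orders(i).no, 'Stock not available');
          else
            insert into invoices (sales_order_no) values (l_orders(i).no);
          end if;
        end loop;
      end loop;
      close c_orders;
    end;
    /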
    Regards,
    Rob.
