Is the only way to import a large amount of data and database objects into a primary database to shut down the standby, turn off archive log mode, do the import, and then rebuild the standby?

I have a primary database into which I need to import a large amount of data and database objects. 1.) Do I shut down the standby? 2.) Turn off archive log mode? 3.) Perform the import? 4.) Rebuild the standby? Or is there a better way or best practice?

Instead of rebuilding the (whole) standby, you can take an incremental (from SCN) backup from the primary and restore it on the standby. That way, for example:
a. If only two out of 12 tablespaces are affected by the import, the incremental backup effectively contains only the blocks changed in those two tablespaces (plus some changes in SYSTEM and undo), provided there are no other changes in the other ten tablespaces.
b. If the size of the import is only 15% of the database, the incremental backup to restore to the standby is correspondingly small.
Hemant K Chitale
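For reference, the roll-forward described above is typically done with RMAN's incremental-from-SCN backup. A minimal sketch, in which the SCN, the backup path, and the tag are placeholders you would substitute (check the Data Guard documentation for your release for the full procedure):

```sql
-- On the standby: find the SCN to roll forward from
SELECT CURRENT_SCN FROM V$DATABASE;

-- On the primary, in RMAN: take an incremental backup starting at that SCN
BACKUP INCREMENTAL FROM SCN 1234567 DATABASE
  FORMAT '/backups/roll_fwd_%U' TAG 'STANDBY_ROLL_FWD';

-- Copy the backup pieces to the standby host; then on the standby, in RMAN:
CATALOG START WITH '/backups/roll_fwd';
RECOVER DATABASE NOREDO;
```

Because the backup contains only blocks changed since the given SCN, its size tracks the import rather than the full database, which is exactly the saving described above.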

Similar Messages

  • What is the best way to migrate large amounts of data from a g3 to an intel mac?

    I want to help my mom transfer her photos and other info from an older blueberry G3 iMac to a new Intel one. There appears to be no migration provision on the older Mac. Also, the FireWire connections are different. Somebody must have done this before.

    Hello
    The cable above can be used to enable Target Disk Mode for the data transfer.
    See http://support.apple.com/kb/ht1661 for more info.
    To enable Target Disk Mode, hold down the "T" key on the keyboard just after the startup sound, until you see the FireWire symbol on the screen (it looks like a screen saver), then plug the FireWire cable in between the 2 Macs.
    HTH
    Pierre

  • With iOS 7 and certain apps, I get a message that asks me to connect to iTunes to receive notifications. After clicking "OK" it comes back repeatedly and freezes the app: Friends, Dropbox, among others. The only way to escape is to shut down and restart.

    With iOS 7 and certain apps, I get a message that asks me to connect to iTunes to receive notifications. After clicking "OK", it comes back repeatedly and freezes the app: Friends, Dropbox, among others. The only way to escape is to shut down and restart. I think this is a bug with older apps. How can I solve it in order to use the apps? It doesn't occur with my iPad, only with my iPhone 5.

    Thank you for your kind answer: with Dropbox, it solved the problem; with Friends, it's still the same.

  • In Yahoo Mail, when composing an email I get an "Asking to leave this page" panel; it does not matter whether I choose stay or leave, the panel pops up again... and again... The only way out is to Force Quit Firefox and start all over. Help please...

    In Yahoo Mail, when composing an email I get an "Asking to leave this page" panel; it does not matter whether I choose stay or leave, the panel pops up again... and again... The only way out is to Force Quit Firefox and start all over. Help please...

    Thanks for responding... but isn't your potential solution designed for Windows users? I have a Mac.
    If it does apply to the Mac, I am also hoping for something simpler, as this could take weeks to isolate the problem, and it would be more efficient to use another browser for mail.
    Also, I should have added that this problem occurs only occasionally, and seemingly only when I am taking some time to compose a message.

  • What is the best way to extract a large volume of data from a BW InfoCube?

    Hello experts,
    Wondering if someone can suggest the best method available in SAP BI 7.0 to extract a large amount of data (approx. 70 million records) from an InfoCube. I've tried OpenHub and APD, but neither is working: I always need to separate the extracts into small datasets. Any advice is greatly appreciated.
    Thanks,
    David

    Hi David,
    We had the same issue, but that was loading from an ODS into a cube, with over 50 million records. I don't think there is an option like parallel loading using DTPs. As suggested earlier in the forum, the best option is to split the extraction by calendar year or fiscal year.
    But remember that even with the above criteria, some calendar years might still have a lot of data, and even that becomes a problem.
    What I can suggest is that, apart from just the calendar/fiscal year, you also include some other selection criteria, such as company code or sales org.
    Yes, you will end up loading more requests, but the data loads will go smoothly with the smaller volumes.
    Regards
    BN

  • Best way to pass large amounts of data to subroutines?

    I'm writing a program with a large amount of data, around 900 variables.  What is the best way for me to pass parts of this data to different subroutines?  I have a main loop on a PXI RT Controller that is controlling hydraulic cylinders and the loop needs to be 1ms or better.  How on earth should I pass 900 variables through a loop and keep it at 1ms?  One large cluster??  Several smaller clusters??  Local Variables?  Global Variables??  Help me please!!!

    My suggestion, similar to Altenbach and Matt above, is to use a Functional Global Variable (FGV) and use a 1D array of 900 values to store the data in the FGV. You can retrieve individual data items from the FGV by passing in the index of the desired variable and the FGV returns the value from the array. Instead of passing in an index you could also use a TypeDef'd Enum with all of your variables as element of the Enum, which will allow you to place the Enum constant on the diagram and make selecting variables, as well as reading the diagram, simpler.
    My group is developing a LabVIEW component/example code with this functionality that we plan to publish on DevZone in a month or two.
    The attached RTF file shows the core piece of this implementation. This VI, of course, is non-reentrant. The Init case could be changed to allocate the internal 1D array as part of this VI rather than passing it in from another VI.
    Message Edited by Christian L on 01-31-2007 12:00 PM
    Christian Loew, CLA
    Principal Systems Engineer, National Instruments
    Please tip your answer providers with kudos.
    Any attached Code is provided As Is. It has not been tested or validated as a product, for use in a deployed application or system,
    or for use in hazardous environments. You assume all risks for use of the Code and use of the Code is subject
    to the Sample Code License Terms which can be found at: http://ni.com/samplecodelicense
    Attachments:
    CVT_Double_MemBlock.rtf ‏309 KB

  • Dynamic PDF with up to 200 images: the PDF gets large, and I cannot save data and images to the form

    Hi all,
    My client would like to add up to 200 dynamic images to my forms. It works fine at first; however, once I have added about 35 images, I cannot add any more. Then, when I save the data on the form, all the images and the data typed on the form disappear, and I don't know the reason.
    If I only add 10 images, I can save the data and images on the form; the size of the PDF is then 15,456 KB, and I am unable to add any more pictures or data.
    Maybe the problem is the size of the PDF? What size limit does a dynamic PDF have?
    Can we save as much information and as many images as we want?
    I have spent 2 weeks working on this and trying to figure out the problem, without success.
    Please help,
    Cindy

    You should ensure that your users do not import big images.
    To do that, you can use a script on the change event of an image field which checks the data size and warns the user if the file is too big.
    function formatNumber(number) {
        var num = number + '',
            x = num.split('.'),
            x1 = x[0],
            x2 = x.length > 1 ? '.' + x[1] : '',
            rgx = /(\d+)(\d{3})/;
        // Insert thousands separators
        while (rgx.test(x1)) {
            x1 = x1.replace(rgx, '$1' + ',' + '$2');
        }
        return x1 + x2 + " KB";
    }
    var sizeLimit = 200, // allow up to 200 KB images
        // Approximate decoded size of the Base64-encoded image data, in KB
        thisSize = Math.round((this.value.oneOfChild.value.length * 3 / 4) / 1024);
    if (sizeLimit > 0 && thisSize > sizeLimit) {
        xfa.host.messageBox("Note: With " + formatNumber(thisSize) + " the size of the imported image is greater than the recommended maximum of " + formatNumber(sizeLimit) + ".\nLarge images can cause insufficient performance.\n\nIf possible, use images with the following recommended specs:\nFormat:\t\tPNG or JPG\nColor depth:\t8 bit (higher is not supported)\nColor space:\tRGB (CMYK is not supported)\nFile size:\t\t" + formatNumber(sizeLimit), "Recommended image size exceeded", 3, 0);
    }

  • What Java collection for a large amount of data and user-customizable records?

    I'm trying to write an application which operates on a large amount of data. I want the user to be able to customize the data structure (record) from different types of variables (float, int, bool, string, enums). These records should be stored in some kind of array. Size of a record: 1-200 variables; size of the array of those records: about 100,000 items (one record every second throughout a whole day). I want these data stored in some embedded database (SQLite, HSQLDB), accessed using plain JDBC. Could you give me some advice on how to design those data structures? Sincerely yours :)
    OK, maybe I should give an example. This is some C++ code.
    I made an interface:
    class CParamI {
    public:
         virtual string toString() = 0;
         virtual void addValue( CParamI * ) = 0;
         virtual void setValue( CParamI * ) = 0;
         virtual BYTE getType() = 0;
    };
    Then I made a template class derived from the interface CParamI:
    template <class T>
    class CParam : public CParamI {
    public:
         void setValue( T val );
         T getValue();
         string toString();
         void setValue( CParamI *src ) {
              if ( itemType == src->getType() ) {
                   CParam<T> *ptr = (CParam<T>*)src;
                   value = ptr->value;
              }
         }
    private:
         BYTE itemType;
         T value;
    };
    A sample constructor of the <int> template:
    template<> CParam<int>::CParam() {
         itemType = ParamType::INTEGER;
    }
    This solution makes it possible for me to build a collection of CParamI:
    std::vector<CParamI*> myCollection;
    CParam<int> *pi = new CParam<int>();
    pi->setValue(10);
    myCollection.push_back((CParamI*)pi);
    Is this a correct solution? My main problem is getting the data back out of the collection: I have to check the data type using the getType() method of the CParamI interface.
    Please could you give me some advice, some idea of how to do this right in Java.

    If you have the requirement that you have to be able to configure on the fly, then what I've done in the past is just put everything into data pairs in a list: something along the lines of (<Vector>, <String>), where the Vector would store your data and the String would contain the data type. I would then write a checker to validate the input against the SQL data types that I want to support in the project. It's not a big deal with the amount of data you are talking about.
    The problem you're going to have is when you try to allow dynamic definition, on the fly, of data being input to a table that has already been defined. Your DB will not support that unless you just store that data pair, which I do not suggest.
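    As a rough Java translation of the C++ idea in the question (the class and method names here are mine, not from the thread), the type tag plus value can live in one small class, and a List<Param> then plays the role of the std::vector<CParamI*>:

```java
import java.util.ArrayList;
import java.util.List;

enum ParamType { INTEGER, FLOAT, BOOL, STRING }

// One user-defined variable: a type tag plus its boxed value.
class Param {
    private final ParamType type;
    private final Object value;

    Param(ParamType type, Object value) {
        this.type = type;
        this.value = value;
    }

    ParamType getType() { return type; }
    Object getValue()   { return value; }

    @Override
    public String toString() { return type + "=" + value; }
}

public class RecordDemo {
    public static void main(String[] args) {
        // A "record" is just a list of params, assembled at runtime,
        // so the user can define its shape on the fly.
        List<Param> record = new ArrayList<>();
        record.add(new Param(ParamType.INTEGER, 10));
        record.add(new Param(ParamType.STRING, "pump A"));

        for (Param p : record) {
            System.out.println(p);
        }
    }
}
```

    Unlike the C++ version, no casts are needed to read the value back; the ParamType tag tells you how to interpret the Object when writing it out via JDBC.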

  • Importing Large amounts of Data

    Hello-
    I have an old website that I am both redesigning and
    developing... It is for a doctor's office website.
    Currently, they have 71 different HTML pages, one for each of
    their 71 doctors. All of the pages are structured exactly the same
    way except for the content. I have been able to extract all text
    and images out of the 71 pages and put them into a text document.
    So, the data is still in order and in a repeating format. Now, I am
    trying to figure out how to automate the task of putting this data
    into my new design.
    So, in a nutshell, I have a text document that has all the
    site's physician info. I have a SPRY data region I am wanting to
    import this data into (either via XML data sets or HTML table data
    sets). I would like to do this without cutting and pasting from the
    text document, as it would take forever! Is there a way to automate
    this process?
    Here is the old page:
    Old Page
    Here is the new:
    New
    Page
    Finally, here are the text and xml files:
    Text File
    XML File
    Any suggestions on automation would be much appreciated!
    Thanks!

    "fast and easy"? Ha, ha, ha, ha, ha, ha, ha, ha, ha, ha, ha,
    ha, ha, ha!
    But seriously, folks... open the rhbag.apj file in Notepad, see how the baggage file entries are formatted, and format each of your new entries in the proper format in another file (you might need a good Replace tool like FAR); then add them to the rhbag.apj file.
    An alternate method might be to perform the new entry formatting in Word, but then filter those results through Notepad first, and then copy the straight text into the rhbag.apj file.
    Good luck,
    Leon

  • Best way of handling large amounts of data movement

    Hi
    I'd like to know the best way to handle data in the following scenario:
    1. We have to create Medical and Rx claims tables holding 36 months of data, about 150 million records each (months 1, 2, 3, 4, ... 34, 35, 36).
    2. We then have to add the DELTA of month two to the 36-month baseline, even though the application requirement is ONLY 36 months; at that point the tables hold 37 months.
    3. Similarly, in the 3rd month we will have 38 months, and in the 4th month, 39 months.
    4. At the end of the 4th month, how can I delete the first three months of data from the claim tables without affecting the performance of what is a 24x7 online system?
    5. Is there a way to create partitions of 3 months each that can be deleted (e.g. drop partition number 1)? If this is possible, what kind of maintenance activity needs to be done after dropping a partition?
    6. Is there any better way of handling the above scenario? What other options do I have?
    7. My goal is to eliminate the initial months of data from the system, as the requirement is ONLY 36 months of data.
    Thanks in advance for your suggestion
    sekhar

    Hi,
    You should use table partitioning to keep your data in monthly partitions. Search on table partitioning for detailed examples.
    Regards
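    A minimal sketch of what monthly range partitioning and the rolling purge could look like (the table, column, and partition names here are illustrative, not from the original posts):

```sql
CREATE TABLE claims (
  claim_id   NUMBER       NOT NULL,
  service_dt DATE         NOT NULL,
  amount     NUMBER(12,2)
)
PARTITION BY RANGE (service_dt) (
  PARTITION p_2007_01 VALUES LESS THAN (TO_DATE('2007-02-01', 'YYYY-MM-DD')),
  PARTITION p_2007_02 VALUES LESS THAN (TO_DATE('2007-03-01', 'YYYY-MM-DD')),
  PARTITION p_2007_03 VALUES LESS THAN (TO_DATE('2007-04-01', 'YYYY-MM-DD'))
);

-- Purging the oldest month is then a data dictionary operation,
-- not a long-running DELETE over 150 million rows:
ALTER TABLE claims DROP PARTITION p_2007_01 UPDATE GLOBAL INDEXES;
```

    The UPDATE GLOBAL INDEXES clause keeps any global indexes usable after the drop (at some extra cost); local indexes are dropped along with the partition and need no separate maintenance.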

  • Best way to store large amounts of data

    Greetings!
    I have some code that will parse through XML data one character at a time, determine if it's an opening or closing tag, what the tag name is, and what the value between the tags is. All of the results are saved in a 2D string array. Each parent result can have a variable number of child results associated with it and it is possible to have over 2,000 parent results.
    Currently, I initialize a new string that I will use to store the values at the beginning of the method.
    String[][] initialXMLValues = new String[2000][45]
    I have no idea how many results will actually be returned when the method is called, so I don't know what to do besides make initialXMLValues around the maximum size I expect.
    As I parse through the XML, I look for a predefined tag that signifies the start of a result. Each tag/value pair that follows is stored in a single element of an ArrayList in the form "tagname,value". When I reach the closing parent tag, I convert the ArrayList to a String[], store the size of the array if it is bigger than the previous array (to track the maximum size of the child results), store it in initialXMLValues[i], then increment i.
    When I'm all done parsing, I create a new String[][] XMLValues = new String[i][j], where i is equal to the total number of parent results (from the last paragraph) and j is equal to the maximum number of child results. I then use a nested for loop to copy all of the values from initialXMLValues into XMLValues. The whole point of this is to minimize the overall size of the returned string array and minimize the number of null-valued fields.
    I know this is terribly inefficient, but I don't know a better way to do it. The problem is having to initialize the size of the array before I really know how many results I'm going to end up storing. Is there a better way to do this?

    So I'm starting to rewrite my code. I was shocked at how easy it was to implement the SAX parser, but it works great. Now I'm on to doing away with my nasty string arrays.
    Of course, the basic layout of the XML is like this:
    <result1>
    <element1>value</element1>
    <element2>value</element2>
    </result1>
    <result2>
    I thought about storing each element/value in a HashMap for each result. This works great for a single result. But what if I have 1000 results? Do I store 1000 HashMaps in an ArrayList (if that's even possible)? Is there a way to define an array of HashMaps?
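    Storing one HashMap per result in an ArrayList is indeed possible and is the usual idiom for this; both collections grow on demand, so no sizes need to be guessed up front. A minimal sketch (the tag names match the XML layout above, the class name is made up):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class XmlResults {
    public static void main(String[] args) {
        // One Map per parsed <resultN> element, keyed by tag name.
        // The outer list grows as results arrive.
        List<Map<String, String>> results = new ArrayList<>();

        Map<String, String> result1 = new HashMap<>();
        result1.put("element1", "value");
        result1.put("element2", "value");
        results.add(result1);

        System.out.println(results.size());             // number of parent results
        System.out.println(results.get(0).get("element1"));
    }
}
```

    In the SAX handler, you would create a new HashMap on the startElement of each parent result, put each child tag/value into it, and add it to the list on endElement.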

  • Since yesterday, I have been facing the same problem on my iPhone and my wife's iPhone: "cannot connect to iTunes Store". I have a strong internet connection. I tried every possible fix I could, like Settings > General > Date & Time, etc.

    I have the problem of "cannot connect to iTunes Store". After I updated to the new iOS 6 software, the problem arose on one of my phones. On the second phone, I did not update the software, yet this morning the problem arose on that phone as well. I cannot open iTunes and don't have access to the App Store. I tried to solve the problem by going to Settings > General > Date & Time, turning the automatic setting off, and setting the date one year ahead. I tried all the possible options I could find on Google. How do I get rid of this problem?

    No - I have not tried other routers on my network.
    What I meant was that I could use my iPod touch on other users' wireless networks without any issues.
    I'll update the router's DNS to try using Google's 8.8.8.8 instead of my ISP's.
    Is the issue DNS performance with the local ISP, or are there other possible issues when using the local ISP's DNS?

  • Method to move large amounts of data out of CS into a Windows fileshare

    I am looking for the best method that I can script moving files from a Content Services library to a Windows fileshare while preserving the permissions. My intent was to copy data using a WebDAV mount and then using SQL queries to retrieve the permissions set for the library. However, I don't know what tables to query in such a way that it can determine the level of permissions easily.
    Does anyone know of any doc anywhere that might explain the best method to perform this task? Any help would be appreciated.
    Thank you,
    Dustin

    Thank you again for your help with this. I will check these out. I have very little experience with the APIs. The only experience I have is editing someone else's code to do what I want. It may take some time for me to really dive into this, but I appreciate the good starting point! This should get me on my way.
    Thank you.

  • I set up Keychain and do not have access to the telephone I gave. Now I can no longer connect my iPad to my Windows computer. I cannot turn off Keychain... it is awaiting authorization from the SMS sent to the telephone I do not have access to.

    I set up Keychain and do not have access to the telephone I gave. Now I can no longer connect my iPad to my Windows computer; iTunes does not automatically open when I plug in the USB. I cannot turn off Keychain on my iPad, since authorization is pending, awaiting the code sent by SMS, which I cannot get. Can someone advise how to get rid of Keychain? Should I cancel my iCloud account?

    I appreciate your help.
    I have reset the iPad, but that changed nothing. I cannot turn off Keychain; it remains "pending approval". Probably the only thing to do is to close my iCloud account... which I hate to do, since there is a lot of data saved there. But I am unable to back the iPad up on my Windows 8 computer because iTunes no longer opens when the iPad is attached.
    Oh well... computers are computers, and it is impossible to cover 100% of the situations.
    Keychain is not the best app for someone who lives in 3 different countries.
    Thanks

  • What is the most efficient way of passing large amounts of data through several subVIs?

    I am acquiring data at a rate of once every 30 ms. The data is sorted into clusters, with relevant information grouped together. These clusters are then added to a queue. I have a cluster of queue references to keep track of all the queues, and I pass this cluster around to the various subVIs, where I dequeue the data. Is this the most efficient way of moving the data around? I could also use "Obtain Queue" and the queue name to create the reference whenever I need it.
    Or would it be more efficient to create one large cluster which I pass around? Then I can use unbundle by index to pick off the values I need. This large cluster can have all the values individually, or it could be composed of the previously mentioned clusters (i.e. a large cluster of clusters).

    > I am acquiring data at a rate of once every 30mS. This data is sorted
    > into clusters with relevant information being grouped together. These
    > clusters are then added to a queue. I have a cluster of queue
    > references to keep track of all the queues. I pass this cluster
    > around to the various sub VIs where I dequeue the data. Is this the
    > most efficient way of moving the data around? I could also use
    > "Obtain Queue" and the queue name to create the reference whenever I
    > need it.
    > Or would it be more efficient to create one large cluster which I pass
    > around? Then I can use unbundle by index to pick off the values I
    > need. This large cluster can have all the values individually or it
    > could be composed of the previously mentioned clusters (i.e. a large
    > cluster of clusters).
    It sounds pretty good the way you have it. In general, you want to sort
    these into groups that make sense to you. Then if there is a
    performance problem, you can arrange them so that it is a bit better for
    the computer, but lets face it, our performance counts too. Anyway,
    this generally means a smallish number of groups with a reasonable
    number of references or objects in them. If you need to group them into
    one to pass somewhere, bundle the clusters together and unbundle them on
    the other side to minimize the connectors needed. Since the references
    are four bytes, you don't need to worry about the performance of moving
    these around anyway.
    Greg McKaskle
