Restore single datafile from source database to target database.

Here's my issue:
Database Release : 11.2.0.3 across both the source and targets. (ARCHIVELOG mode)
O/S: RHEL 5 (Tikanga)
Database Storage: Using ASM on a stand-alone server (NOT RAC)
Using Oracle GoldenGate (GG) to replicate changes from the Source to the Targets.
My scenario:
We utilize sequences to keep the primary keys intact, and these are replicated using GG. All of my schema tables are located in one tablespace/datafile, and all of my indexes are in a separate tablespace (nothing is partitioned).
In the event of media failure on the Target or my target schema being completely out of whack, is there a method where I can copy the datafile/tablespace from my source (which is intact) to my target?
I know there are a few possibilities:
1) Restore/recover the tablespace to an SCN or timestamp in the past and then use GoldenGate to replay the transactions (but this could take time depending on how far back I need to recover the tablespace and how many transactions GG has processed; this is not fool-proof).
2) Use Data Pump to move the data from the Source schema to the Target schema (but the sequences are usually out of order if they haven't fired on the source; you get that 'sequence is not yet defined in this session' message). I've tried this scenario.
3) Alter the sequences to get them to the proper number using the start/increment-by feature (again, this could take time depending on how many sequences are out of order).
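For what it's worth, option 3 is usually done with a temporary INCREMENT BY bump, since 11.2 has no ALTER SEQUENCE ... RESTART. A minimal sketch, assuming a made-up sequence name and gap:

```sql
-- Hypothetical: target sequence ORDER_SEQ is at 100, source is at 5000.
ALTER SEQUENCE order_seq INCREMENT BY 4900;   -- the gap between target and source
SELECT order_seq.NEXTVAL FROM dual;           -- consume one value to jump forward
ALTER SEQUENCE order_seq INCREMENT BY 1;      -- put the normal increment back
```

Scripting this across every out-of-sync sequence is exactly what makes this option time-consuming.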
I would think you could:
1) back up the datafile/tablespace on the source,
2) then copy the datafile to the target,
3) STARTUP MOUNT;
4) SET NEWNAME for the new file copied from the source (this is ASM),
5) RESTORE the datafile/tablespace,
6) RECOVER the datafile/tablespace,
7) ALTER DATABASE OPEN;
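As a sketch, steps 3 through 7 would look roughly like the RMAN run block below for a same-database restore. The tablespace name APP_DATA, file number 5, and disk group +DATA are all placeholders. Note the catch for the cross-database idea, though: RMAN ties backups to the database's DBID, so a backup piece taken on the source will not restore into the target with a plain RESTORE; moving a tablespace between databases normally goes through transportable tablespaces (or DUPLICATE), which also bears on Questions 1 and 2.

```
-- Sketch only; object names are hypothetical.
STARTUP MOUNT;
RUN {
  SET NEWNAME FOR DATAFILE 5 TO '+DATA';  -- let ASM/OMF name the restored copy
  RESTORE TABLESPACE app_data;
  SWITCH DATAFILE ALL;                    -- record the new names in the control file
  RECOVER TABLESPACE app_data;
}
ALTER DATABASE OPEN;
```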
Question 1: Do I need to also copy the backup piece from the source when I execute the backup tablespace on the source as indicated in my step 1?
Question 2: Do I need to include "plus archivelog" when I execute the backup tablespace on the source as indicated in my step 1?
Question 3: Do I need to execute an 'alter system switch logfile' on the Target when the recover in step 6 is completed?
My scenario sounds like a cold backup, but since we're running in ARCHIVELOG mode the backup could be taken hot, so the source database could stay online the whole time.
Just looking for alternate methods of recovery.
Thanks,
Jason

Let me take another stab at sticking a fork into this myth about separating tables and indexes.
Let's assume you have a production Oracle database environment with multiple users making multiple requests at the exact same time. This assumption mirrors reality everywhere except in a classroom where a student is running a simple demo.
Let's further assume that the system looks anything like a real Oracle database system where the operating system has caching, the SAN has caching, and the blocks you are trying to read are split between memory and disk.
Now you want to do some simple piece of work and assume there is an index on the ename column...
SELECT * FROM emp WHERE ename = 'KING';
The myth is that Oracle is going to read the index and the table segments in parallel (better, faster, whatever) if they are in separate physical files mapped by separate logical tablespaces somehow to separate physical spindles.
Apply some synapses to this myth and it falls apart.
You issue your SQL statement and Oracle does what? It looks for those index blocks where? In memory. If it finds them it never goes to disk. If it does not it goes to disk.
While all this is happening the hundreds or thousands of other users on the database are also making requests. Oracle is not going to stop doing work while it tries to find your index blocks.
Now it finds the index block and decides to use the ROWID value to read the block containing the row with KING's data. Did it freeze the system? Did it lock out everyone else while it did this? Of course not. It puts your read request into the queue and, again, first checks memory to see if it needs to go to disk.
Where in here is there anything that indicates an advantage to having separate physical files?
And even if there was some theoretical reason why separate files might be better ... are they separate in the SAN's cache? No. Are they definitely located on separate stripes or separate physical disks? Of course not.
Oracle uses logical mappings (tables and tablespaces) and SANs use logical mappings, so you, the DBA or developer, have no clue where anything is physically located.
PS: Ouija Boards don't work either.

Similar Messages

  • Is it possible to have duplicate columns from source List to target List while copying data items by Site Content and Structure Tool in SharePoint 2010

    Hi everyone,
    Recently, I have one publishing site template that has a lot of sub sites, which contain a large amount of content.
    On the root publishing site, I created a custom list including many custom fields and saved it as a template so that any sub site will be able to reuse it later on. My scenario is as follows.
    I need to use the Site Content and Structure Tool to copy a lot of items from one list to another. Both lists were created from the same template. I used the Site Content and Structure Tool to copy data from the source list to the target list as in the figure below.
    Once copied, all items are complete, but many columns in the target list have been duplicated from the source list, such as PublishDate, NumOrder, and Detail, as in the figure below.
    What is the impact of this duplication? Users can input data into this list successfully, but several values of some columns, like the "Link" column, won't display on the "AllItems.aspx" page, despite showing on the edit item form and view item form pages. In addition, any newly added item won't appear on the "AllItems.aspx" page at all, despite these items actually existing in the database (I verified by querying with PowerShell).
    Please recommend how to resolve this column duplication problem.

    Hi,
    According to your description, my understanding is that many duplicated columns appeared after you copied items from one list to another in the Site Content and Structure Tool.
    I have tested this in my environment and it worked fine. I created a ListA and created several columns in it. Then I saved it as a template and created a ListB from that template. Then I copied items from ListA to ListB in the Site Content and Structure Tool and it worked fine.
    Please create a new list and save it as template. Then create a list from this template and test whether this issue occurs.
    Please operate in other site collections and test whether this issue occurs.
    As a workaround, you could copy items from one list to another list by coding using SharePoint Object Model.
    More information about SharePoint Object Model:
    http://msdn.microsoft.com/en-us/library/ms473633.ASPX
    A demo about copying list items using SharePoint Object Model:
    http://www.c-sharpcorner.com/UploadFile/40e97e/sharepoint-copy-list-items-in-a-generic-way/
    Best Regards,
    Dean Wang
    TechNet Community Support

  • To verify all the values are properly transferred from source db to target db which command works better?

    Hi,
    I need to validate that all the source values were transferred properly from the source DB to the target DB, but I'm confused about which command to use between EXCEPT and EXISTS.
    If there are 100 records in the source, then I need to validate that all 100 records are in the target DB.
    I used the queries below.
    Using EXISTS:
    SELECT upc_desc FROM dbo.MIC_TARGET t1
    WHERE NOT EXISTS
      (SELECT 1 FROM produ_stg
       WHERE descri3 = t1.upc_desc);
    Using EXCEPT:
    SELECT descri3 FROM produ_stg
    EXCEPT
    SELECT upc_desc FROM MIC_TARGET;
    Please help me determine which query is correct to validate that the source and target DB values are exact replicas.
    Regards, 
    Jagadeesh

    You can use EXCEPT here.
    NOT EXISTS will not match NULL values (NULL = NULL evaluates to UNKNOWN), whereas EXCEPT treats NULLs as equal when comparing rows. (EXCEPT is essentially DISTINCT plus NOT EXISTS, but with NULL-safe comparison.)
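    To see the NULL difference concretely, here is a small sketch with made-up tables (src and tgt are illustrative names, not the poster's):

    ```sql
    CREATE TABLE src (val VARCHAR(10));
    CREATE TABLE tgt (val VARCHAR(10));
    INSERT INTO src VALUES ('A');
    INSERT INTO src VALUES (NULL);
    INSERT INTO tgt VALUES ('A');
    INSERT INTO tgt VALUES (NULL);

    -- EXCEPT compares NULLs as equal (and de-duplicates), so both rows
    -- match and the result is empty: src and tgt agree.
    SELECT val FROM src
    EXCEPT
    SELECT val FROM tgt;

    -- NOT EXISTS uses ordinary equality, where NULL = NULL is UNKNOWN,
    -- so the NULL row in src finds no match and is falsely reported missing.
    SELECT val FROM src s
    WHERE NOT EXISTS (SELECT 1 FROM tgt t WHERE t.val = s.val);
    ```

    So for a straight "is everything in the source also in the target" check, EXCEPT is the safer choice when NULLs are possible.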

  • Error Occures while loading data from Source system to Target ODS

    Hi..
    I started loading records from the source system to the target ODS. While running the job I got the following errors:
    Record 18211: ERROR IN HOLIDAY_GET 20011114 00000000
    Record 18212: ERROR IN HOLIDAY_GET 20011114 00000000
    Please help me with these errors.
    Thanks in advance,

    Hello,
    How are you?
    I think this problem is at the ODS level; ZCAM_O04 is your ODS name.
    Could you check the ODS settings, and whether "Unique Data Records" is checked or not?
    Best Regards....
    Sankar Kumar
    +91 98403 47141

  • Restoring single mailbox from exchange edb backup

    Hello everyone
    Could anyone please let me know how to restore a single mailbox in case it is corrupted? The mails are stored in a PST file on the local PC, and I have a backup, since the emails to that particular address are being forwarded to a backup mailbox in Exchange 2007, which is a normal mailbox but used for backup purposes. How can I restore the mails from the EDB file to the Exchange server mailbox whose emails were being stored in the PST?
    Thanks.

    Hello,
    There are some tools that can extract individual mailboxes in .pst format from corrupted .edb databases.
    One such tool can be downloaded from: http://www.exchangeserverrecovery.com

  • Exporting and Importing Portal users from Source system to Target system

    Hi All,
    I have exported all portal users from the source portal into the file Users.txt. Do I need to convert this file into some other format so that I can import these users into the target portal?
    Any links or documents?
    Regards,
    Murali

    Hi,
    If you look into Users.txt, I had a role in it as well. I deleted the role in Users.txt and uploaded the file with the rest of the data, including the group, and it was able to create the users.
    So, in a nutshell, let's say:
    1. UID: Murali
       Role: Manager
       Group: HRGroup
    The user exists in DEV and I want to transfer the data to PRD. The role "Manager" should exist in PRD, and the group is optional, not mandatory.
    But the link http://help.sap.com/saphelp_nw70/helpdata/EN/ae/7cdf3dffadd95ee10000000a114084/frameset.htm says that while uploading users the role is optional and only throws a warning, yet I got an error.
    I am a bit confused.
    Now let's say there are 10 users, 10 roles, and 2 groups in the source system. If I want to export all users, roles, and groups to the target system, what sequence do I have to follow without getting any error or warning? Is there any restriction on the number of users, roles, and groups? I know the file size should be less than 1 MB.
    Points are on the way.
    Regards,
    Murali

  • Restoring single project from vault after Aperture 2.0 failure.

    Is it possible to restore a single project from my vault?
    Suddenly, one of my projects just died on me after I had imported a very large TIFF, producing nothing but "cut-out picture frames", and even after applying the newest update, which I didn't have previously, all my photos are still gone! It also caused Aperture 2.0 to quit spontaneously! If I double-click one of the missing photos, it states: "Unsupported Image Format". Uh oh!
    If anyone knows how this error came about (maybe the imported image was corrupted somehow?), that would be of great help!
    Any help is much appreciated! Thank you.
    Mc

    If I wanted to restore a single image personally, I would go manually grab it from the vault using Finder. You can right click on the vault and do "show package contents," then navigate to the projects, right click and again "show package contents" for your project, and then go find it.

  • Restore single image from managed library

    How does one restore a single image from a library, either managed or referenced? It seems that in Time Machine one does not have access to the individual files in a managed library as one does from the Finder.
    Is my conclusion then that, in order to be able to recover individual files, one should use a referenced library?

    The Aperture Restore from Vault as you know saves your current library and restores all managed photos from the Vault. Note, previews are not stored on the Vault so they need to be regenerated (at least that was true in Aperture 2 but I have not done a restore in Aperture 3).
    Referenced images are not stored on the Aperture Library or Aperture Vault, just the adjustments you make to the image. So if your image is mangled, but you know the external referenced image displays fine, say, in Preview, then something may have gone wrong with the adjustments or raw conversion etc. You can remove all adjustments on the mangled image and see if that corrects the display problem.
    If the external image itself is corrupted, then Time Machine will restore it hopefully to a prior 'good' version as you know. The contents of an Aperture Library can be individually referenced via the Finder but it's not something where you can replace any of the contents as I understand that will damage your library integrity.

  • 10.2.0.4 catalog database, 11.2 target database, and OEM

    catalog database is 10.2.0.4 on server CSERV
    OEM server is 10.2.0.4 OSERV
    target database is 11.2 on server TSERV
    I ran the RMAN command from RSERV (v10.2.0.4) and registered the target database (v11.2) without error.
    I went into OEM -> TARGET page -> Maintenance -> Recovery Catalog Settings ->
         selected Use recovery catalog
         (the correct catalog was listed in the drop-down: CSERV:1521:RMANUSER-rman_schema1),
         added the OS username/password for TSERV,
         pressed 'OK'.
    The error that was returned was:
    "Recovery catalog scheme version 10.02 is not compatible with this version of RMAN."
    I'm sure i'm doing something stupid, but would really appreciate any help.....
    Regards,
    Susan

    Hi Susan,
    Probably you used a 10g RMAN client on the command line to connect to both the target (11.2) and the catalog (10.2.0.4) during registration (according to the compatibility matrix this works).
    Be aware: when using an older version of the RMAN client with a newer version of the database, you do not get the features of the newer version. According to your own information, Grid uses the 11gR2 software.
    > added the OS username/password for TSERV
    Which is incompatible with the 10.2 catalog.
    > Unfortunately, we can't upgrade the catalog at this time.
    You can upgrade the catalog schema without upgrading the catalog RDBMS (but you probably do not want to execute any change during Xmas and New Year).
    Please check:
    http://download.oracle.com/docs/cd/B28359_01/backup.111/b28273/rcmsynta052.htm
    Regards,
    Tycho

  • Significant slowness in data transfer from source DB to target DB

    Hi DB Wizards,
         My customer is noticing significant slowness in Data copy from the Source DB to the Target DB. The copy process itself is using PL/SQL code along with cursors. The process is to copy across about 7M records from the source DB to the target DB as part of a complicated Data Migration process (this will be a onetime Go-Live process). I have also attached the AWR reports generated during the Data Migration process. Are there any recommendations to help improve the performance of the Data transfer process.
    Thanks in advance,
    Nitin

    > multiple COMMITs will take longer to complete the task than a single COMMIT at the end!
    Let's check how much longer it is:
    create table T1 as
    select OWNER,TABLE_NAME,COLUMN_NAME,DATA_TYPE,DATA_TYPE_MOD,DATA_TYPE_OWNER,DATA_LENGTH,DATA_PRECISION,DATA_SCALE,NULLABLE,COLUMN_ID,DEFAULT_LENGTH,NUM_DISTINCT,LOW_VALUE,HIGH_VALUE,DENSITY,NUM_NULLS,NUM_BUCKETS,LAST_ANALYZED,SAMPLE_SIZE,CHARACTER_SET_NAME,CHAR_COL_DECL_LENGTH,GLOBAL_STATS,USER_STATS,AVG_COL_LEN,CHAR_LENGTH,CHAR_USED,V80_FMT_IMAGE,DATA_UPGRADED,HISTOGRAM
    from DBA_TAB_COLUMNS;
    insert /*+ APPEND */ into T1 select * from T1;
    commit;
    -- repeat until it is > 7M rows
    select count(*) from T1;
    9233824
    create table T2 as select * from T1;
    set autotrace on timing on;
    truncate table t2;
    declare r number:=0;
    begin
    for t in (select * from t1) loop
    insert into t2 values ( t.OWNER,t.TABLE_NAME,t.COLUMN_NAME,t.DATA_TYPE,t.DATA_TYPE_MOD,t.DATA_TYPE_OWNER,t.DATA_LENGTH,t.DATA_PRECISION,t.DATA_SCALE,t.NULLABLE,t.COLUMN_ID,t.DEFAULT_LENGTH,t.NUM_DISTINCT,t.LOW_VALUE,t.HIGH_VALUE,t.DENSITY,t.NUM_NULLS,t.NUM_BUCKETS,t.LAST_ANALYZED,t.SAMPLE_SIZE,t.CHARACTER_SET_NAME,t.CHAR_COL_DECL_LENGTH,t.GLOBAL_STATS,t.USER_STATS,t.AVG_COL_LEN,t.CHAR_LENGTH,t.CHAR_USED,t.V80_FMT_IMAGE,t.DATA_UPGRADED,t.HISTOGRAM);
    r:=r+1;
    if mod(r,10000)=0 then commit; end if;
    end loop;
    commit;
    end;
    -- call that a couple of times, with and without the "if mod(r,10000)=0 then commit; end if;" line commented out.
    Results:
    One commit
    anonymous block completed
    Elapsed: 00:11:07.683
    Statistics
    18474603 recursive calls
    0 spare statistic 4
    0 ges messages sent
    0 db block gets direct
    0 calls to kcmgrs
    0 PX remote messages recv'd
    0 buffer is pinned count
    1737 buffer is not pinned count
    2 workarea executions - optimal
    0 workarea executions - onepass
    10000 rows commit
    anonymous block completed
    Elapsed: 00:10:54.789
    Statistics
    18475806 recursive calls
    0 spare statistic 4
    0 ges messages sent
    0 db block gets direct
    0 calls to kcmgrs
    0 PX remote messages recv'd
    0 buffer is pinned count
    1033 buffer is not pinned count
    2 workarea executions - optimal
    0 workarea executions - onepass
    one commit
    anonymous block completed
    Elapsed: 00:10:39.139
    Statistics
    18474228 recursive calls
    0 spare statistic 4
    0 ges messages sent
    0 db block gets direct
    0 calls to kcmgrs
    0 PX remote messages recv'd
    0 buffer is pinned count
    1123 buffer is not pinned count
    2 workarea executions - optimal
    0 workarea executions - onepass
    10000 rows commit
    anonymous block completed
    Elapsed: 00:11:46.259
    Statistics
    18475707 recursive calls
    0 spare statistic 4
    0 ges messages sent
    0 db block gets direct
    0 calls to kcmgrs
    0 PX remote messages recv'd
    0 buffer is pinned count
    1000 buffer is not pinned count
    2 workarea executions - optimal
    0 workarea executions - onepass
    What have we got?
    Single commit at the end, avg elapsed: 10:53.4
    Commit every 10,000 rows (923 commits), avg elapsed: 11:20.5
    Difference: 27.1s (3.98%)
    Multiple commits are just 4% slower, but safer regarding the undo consumed.

  • Restoring single photo from Time Machine

    My iphoto library appears to function differently in Time Machine than it does when I access it from my MacHD, and I'm wondering if this is normal or if I screwed up a setting somewhere. From my Mac HD, when I click on my iPhoto library it launches iphoto and shows me all my pictures. Alternatively, when I right click the iphoto library icon I can scan the source files and drill down to a specific photo. This all works great.
    Problem is, when I navigate to my iPhoto library on my backup drive using Time Machine, I can't seem to open it and view my photos. Right-clicking does nothing, and single- or double-clicking the iPhoto icon does nothing either. I know there are photos in there because it says the library is 15 GB.
    How do I go about restoring a single lost photo? Is my only option to restore the entire library? If that is true, it would be a huge hassle and not a great solution because I still wouldn't be sure if the library I was restoring contained the single missing photo I was looking for.
    My iphoto library is kept in a shared user folder on my IMac so that my wife can access it over our wireless network from her Imac. Not sure if this has anything to do with my problem, but figured I should mention it.
    Any help would be much appreciated.

    My Time Machine iPhoto Library doesn't work the same as the one on my hard drive either. That said, I wouldn't expect it to. Time Machine uses hard links within the HFS+ file structure to keep track of previously backed up files. I would think this would make programming some of the features available in just plain vanilla HFS+ difficult. You can read about how hard links are implemented in Time Machine in this Ars Technica review.
    http://arstechnica.com/reviews/os/mac-os-x-10-5.ars/14
    However, iPhoto is integrated with Time Machine so you can browse through things that way although I'm not positive it will work with a shared user folder. Launch iPhoto and without doing anything else click on the Time Machine icon in the dock. Then use the white ladder-like lines on the right hand side of the screen to navigate to the date you want to view. You can then click on the item(s) you want to restore and click the Restore button.

  • Database Project Schema Compare Source Control to target database Not working

    I am trying to use the schema compare feature of database project / TFS.  I don't understand why it is trying to DELETE a stored procedure that exists in TFS.
    The Degenerate Dimension

    Hi MM,  
    Thanks for  your post.
    If you open this solution from a local folder (not stored in TFS) and try to compare that same file, what's the result? We need to confirm first whether this issue relates to the VS Schema Compare tool or to TFS Source Control.
    http://blogs.msdn.com/b/ssdt/archive/2012/03/23/schema-compare-improvements.aspx

  • Retrieve Data for the Variables from Source System in Target System

    Hello... I created a variable in the source system in the table TVARVC, and I would like to get the data from this table in the source system and use it for the calendar-day selection in an InfoPackage in the BW target system. Can anyone suggest a useful way to do this? How do I read the data from table TVARVC when it is not in the same box, but a different box?
    THANKS alot =)

    Hi  FluffyToffee,
    Try creating an RFC function module to read the values from the source table, and use this FM in the InfoPackage selection routine.
    Hope it Helps
    Srini

  • Trigger Process Chain from Source System into Target System

    Hello... in the source system BZD we have a process chain, and in the target system BWD we have another process chain. We want to combine these two process chains: when the process chain in the source system completes successfully, the system should send some sort of signal to start the process chain in BWD. I am not sure how this can be done. I tried to use the function module "BP_EVENT_RAISE" in BZD, but when this module is called it does not start the process chain in BWD after the chain in BZD completes. I'm just wondering whether I am using the correct method. It would be good if anyone here could help... thanks a lot.

    Hi Fluffy,
    Check the RFC connection between the two systems.
    Try using the Remote Process Chain process type in the process chain. The following link may help you:
    http://help.sap.com/saphelp_nw70/helpdata/EN/86/6ff03b166c8d66e10000000a11402f/frameset.htm
    Regards,
    sunmit.

  • Restore single photo from time machine

    Quote from apple help
    "To restore, select the file/folder and click the "Restore" button. The file will automatically be copied to the desktop or appropriate folder.  If the file you are restoring has another file in the same location with the same name, you will be prompted to choose which file to keep or keep both...."
    So to restore one lost photo from a month ago, and presuming the entire Aperture library must be restored, do I have to "keep both" restored and current libraries and short term find another 150GB space in Pictures, export the missing photo, then delete the old library? Because if I keep only the restored library I lose all recent photos surely, but it seems a long process.
    Or is there a clever workaround?  (apart from Vault which I don't intend using at this stage. )
    Still considering using Aperture, not yet purchased. Can't find this fully explained in search. Thanks.

    Peter -- adding (I hope) the excellent responses already given:
    One of the downsides of using Aperture is that "Photo" is no longer easily defined.  When you ask, "So to restore one lost photo from a month ago ... ", you must define, in terms relevant to Aperture, what you mean by "Photo".
    In Aperture you import image-format files.  On import, Aperture makes a note of where your file is located (and by default stores your file where it wants).  Thereafter your imported file is known by Aperture as an Original (prior to v. 3.3: a "Master").  Aperture also creates a text file to hold instructions on what metadata changes you make and what adjustments you make.  This text file is called a Version.  Aperture creates an image you can see, and displays this image to you as a thumbnail in the Browser, as a Preview in the Viewer (when Quick Preview is turned on) or as a fully-rendered image in the Viewer.  Aperture refers to this (somewhat fitfully) as an "Image".
    Here's what you need to know:
    Original + the instructions in the Version = the Image you see.
    When discussing Aperture it is helpful to differentiate between
    - the image-format file you imported:  the Original
    - the text file of instructions that Aperture keeps in order to modify your Original to show you what you've done to it:  the Version
    - the picture showing you all your adjustments, with all your metadata applied: the Image
    There is a one-to-one correspondence between Image and Version:  one and only one Version is used to create each Image.  Throughout Aperture, "Version" and "Image" are used synonymously to mean "the thing you see".  Only "Version" is used to specify the text file of instructions.
    (There are excellent reasons for this set-up -- primarily among them Aperture's non-destructive workflow and excellent performance doing the tasks we ask of it -- but they are beyond the scope of this thread.)
    Some of the responses you've gotten here are about restoring your Original.  Some are about restoring your Image.  Those are, in Aperture, different tasks, dealing with different (or in one case, more) files.
