Replication solutions

Hi,
I have a requirement for data to be synchronized at different locations. One will be the centralized database and the other databases are at different locations, something like three to four locations. So, what would be the best solution in this case? Should I use replication, Streams, or some other third-party tool?
Thank you for your cooperation.
Kind Regards,
Adnan Hamdus Salam

Hi,
I have been using Oracle Streams for the last 2 years.
Oracle Streams is the best Oracle replication solution for synchronizing multiple remote databases.
There are various other alternatives as well, such as Oracle GoldenGate, Oracle Advanced Replication, and Oracle materialized views.
Regards
Hitgon
Edited by: hitgon on Jan 11, 2012 2:40 AM

Similar Messages

  • Replication solutions with Oracle 8i

    Hello,
    I'm looking for solutions to make real time replications from an Oracle 8i database.
    For more details :
    - we have an existing database with Oracle 8i
    - we want to replicate this database: the replicated database will be used by a web application.
    - we need the replication to be in real-time
    For the moment I didn't find much information on the web, and it seems that replication solutions for the 8i version are quite limited.
    If someone has any idea on how I can do this, it would be helpful.
    Thanks,

    Among the problems, though, is that there are numerous ways to architect a replication process. That architecture requires you to make various trade-offs. Without understanding what you're attempting to gain in a business sense from the replication process, though, it's impossible to even attempt to figure out which trade-offs to make.
    One option would be to create a new database for the reporting app and create a fast-refreshable materialized view in the reporting database for every table in the source system. In the source system, you'd then create materialized view logs on every table in the source database. If you want the reporting system to be transactionally consistent, you'd need to put the various materialized views into one or more refresh groups and schedule those refresh groups to run every 5 minutes.
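    The setup described above can be sketched roughly as follows (the table name, database link, and refresh group name are placeholders invented for illustration, not from the original post):

    ```sql
    -- On the source database: a materialized view log enables fast refresh.
    CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;

    -- On the reporting database: a fast-refreshable materialized view
    -- pulling from the source over a database link.
    CREATE MATERIALIZED VIEW orders_mv
      REFRESH FAST
      AS SELECT * FROM orders@source_db;

    -- Group related MViews so they refresh as a transactionally
    -- consistent set, every 5 minutes.
    BEGIN
      DBMS_REFRESH.MAKE(
        name      => 'reporting_grp',
        list      => 'orders_mv',
        next_date => SYSDATE,
        interval  => 'SYSDATE + 5/(24*60)');
    END;
    /
    ```

    Every table you want on the reporting side would get its own log and MView; the refresh group is what gives you transactional consistency across them.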
    That would probably get close to having data on the reporting system within 10 minutes of it being created on the source system. The downsides, though, would include
    - Every transaction on the source system now needs to synchronously write to the materialized view logs. This is roughly equivalent to creating a new trigger on every table in your system, which can be a decent load.
    - The reporting system will have to poll the source system for data changes every few minutes. That's likely to put a pretty decent load on the source system.
    - Making DDL or large DML changes to the source requires the source system admins to be conscious of the destination system's needs (the destination system's materialized view may need to be recreated and an initial full refresh may need to be done which may impact the timing of the change on the source system).
    - The source system admins (and the destination system admins) have to monitor the refresh process, the size of the materialized view logs, etc.
    - Spreading information across multiple systems increases the likelihood of improper data disclosure because you have less ability to figure out who has access to any particular piece of data, for example.
    For a reporting application, I'm really hard pressed to imagine that these downsides are worth whatever benefit you're trying to achieve.
    Justin

  • Replication solution required

    I have a requirement where a single table needs to be replicated one-way from source site S1, say, to destination site D1. My initial thought is to use a simple fast-refresh materialized view, but I'm not sure how the replication would fail over if the source site failed over to its DR site at S2. Would the replication be able to fail over with it? If an mview isn't suitable, what would be a better choice of technology? Oracle Streams maybe?
    Any advice gratefully received
    James

    When a failure of the primary DB happens, people usually do not care about replication. :) Everyone is busy fixing the issue and bringing the primary DB up.
    However, if replication is important, you can configure your DR solution to fail a virtual IP over to the secondary site.
    The next replication session will then be transparently connected to the secondary DB.

  • Replication solution for MSCS Failover Cluster?

    That is definitely the plan, just a matter of finding the best software solution to achieve that. I imagine StarWind free version is an option...

    I am setting up a multi-site Windows failover cluster (Windows Server 2008 R2) to be used as a print services cluster. My sites are connected via an MPLS network, but I've run into a bit of a snag: I need to have shared storage (a cluster disk), and since it spans multiple sites, it is going to have to be 2 volumes with replication between them that complies with MSCS (removing options like VMware replication and MS DFS). Each location has SAN storage, but they are from different providers and cannot replicate between each other at that level, so I am looking for a (free or low-cost) software solution that I can put inside a VM at each location to serve as my cluster disk.
    Any suggestions are greatly appreciated, thanks.
    This topic first appeared in the Spiceworks Community

  • Replication solution

    Hi folks,
    I need to figure out a solution for the following scenario:
    Site A and Site B connect to Site C, which runs a 9i Standard Edition database; they enter their data into C's database. Twice a day Site C connects to Site D through a dial-up line, and a particular schema in Site C's database must get synchronized with the corresponding schema in Site D's database (also 9i Standard Edition). Both sides run Linux, of course. The cheapest, fastest, and most painless solution is what I'm looking for 8).
    Thanks, Sandor

    Since you have Standard Edition, you're limited in your options. If there is a minimal number of tables, you might consider using triggers to fill another table with the changes. Then when you connect to do the update, grab everything from the extra table, followed by a truncate to empty it again. This would rank high for cheap, but it will take some PL/SQL work, so it's not painless.
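    A rough sketch of that trigger approach (all table, column, and link names are invented for illustration):

    ```sql
    -- Side table that records which rows changed and how.
    CREATE TABLE orders_changes (
      order_id   NUMBER,
      change_op  VARCHAR2(1),            -- 'I', 'U' or 'D'
      changed_at DATE DEFAULT SYSDATE
    );

    -- Trigger on the replicated table fills the side table.
    CREATE OR REPLACE TRIGGER orders_track_chg
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW
    BEGIN
      INSERT INTO orders_changes (order_id, change_op)
      VALUES (NVL(:NEW.order_id, :OLD.order_id),
              CASE WHEN INSERTING THEN 'I'
                   WHEN UPDATING  THEN 'U'
                   ELSE 'D' END);
    END;
    /

    -- During the dial-up window, pull the captured changes across
    -- (e.g. from Site D over a database link), then empty the table:
    --   INSERT INTO changes_staging SELECT * FROM orders_changes@site_c;
    --   TRUNCATE TABLE orders_changes;  -- run on Site C afterwards
    ```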

  • GoldenGate Replication or Oracle 11g Streams?

    We are working with a customer that is interested in implementing Oracle 11g Streams.
    I don’t know the GoldenGate products, and
    I am not really familiar with Streams other than hearing it can be tough to tune and, if something goes wrong, it can be a pain to get right.
    They recently heard of Oracle’s purchase of GoldenGate and became interested when they read that the company may possibly offer a simpler replication strategy than Streams as far as setup, tuning, maintaining, etc.
    Has anyone heard what Oracle’s future plans/direction are for integrating GoldenGate into their fold, specifically the replication side like Streams?
    Thanks

    I predict that in the future the GoldenGate Oracle users are going to get the short end of the stick now that Oracle has purchased them. I do not believe that Oracle purchased GoldenGate for their Oracle to Oracle capabilities. I assume that they wanted to acquire their ability to replicate from non-Oracle databases.
    There is nothing inherently wrong (design-wise) that makes the Oracle Streams product inferior to GoldenGate or Quest's SharePlex product. But both companies have continued to sell their products (which are not cheap) against a product which is basically free.
    Why is this? Do you think that Oracle does not have the technical capability to build a great replication solution? If Oracle were interested in making a great Oracle-to-Oracle replication product, they could have invested a small fraction of the amount they paid for GoldenGate and added the necessary resources to build the best replication product on the market. But they don't, because that is not what they are interested in doing.
    There are a couple of press releases that Oracle put out about the GoldenGate acquisition, and one of the advantages they claim GoldenGate will benefit from is Oracle's $30 billion R&D budget. But why didn't Oracle use some of that $30 billion to make Streams a great product?
    Oracle purchased GG for reasons that have nothing to do with Oracle to Oracle replication. Future resources and priorities will go towards those goals.

  • Replication on the SE

    hello
    thanks for help
    what replication solution is available on the SE?
    If I created materialized views on the replication site, based on the primary site tables, and then built my tables based on these views, would that work in case of primary site failure?
    Is that considered a kind of disaster recovery solution?
    thanks again

    I would need more information to assess how expensive your refresh is. Things like the size of your tables, update/insert/delete rate, etc.
    As for the standby setup, I didn't have a doc handy that shows how to do it. Oracle probably doesn't have one, since they want you to just buy Enterprise Edition. I did a Google search on "Oracle Standby SE" and it turned up some results. There is a product named dbvisit that might be worth looking into (I have not used it).
    The reason I know how to do this is because in the early days of Standby/Dataguard you had to do this all yourself.

  • N-way replication

    hi,
    can someone please provide a script enabling an n-way/mesh/update-anywhere topology HA/DR system?
    I am able to setup a hot standby two-way replicating system;
    MACH1<---->MACH2 [SITE A]
    these two servers replicate using the two way hot standby replication scheme.
    MACH3<----->MACH4 [SITE B]
    also these two servers replicate using the two way hot standby replication scheme.
    but now I need the two sites to replicate with each other as well, for example;
    MACH2<------>MACH3 [SITE A]+[SITE B]
    Is it possible to do this with a secondary replication scheme? (It is not a problem whether the two sites replicate one-way or two-way; all solutions will be appreciated :) )
    Thanks.

    Thanks for the extra detail. My first comment is that unless you have a compelling reason to use TimesTen 7.0 you should really consider using TimesTen 11.2.1 for any new developments; it has many new features compared to 7.0 plus better performance in many scenarios.
    The configuration that you want to achieve can easily be realised using classic or legacy replication. Here is a 4-way example using database level replication:
    CREATE REPLICATION myrepscheme
      ELEMENT e1 DATASTORE
        MASTER mystore ON mach1
        SUBSCRIBER mystore ON mach2
        SUBSCRIBER mystore ON mach3
        SUBSCRIBER mystore ON mach4
      ELEMENT e2 DATASTORE
        MASTER mystore ON mach2
        SUBSCRIBER mystore ON mach1
        SUBSCRIBER mystore ON mach3
        SUBSCRIBER mystore ON mach4
      ELEMENT e3 DATASTORE
        MASTER mystore ON mach3
        SUBSCRIBER mystore ON mach1
        SUBSCRIBER mystore ON mach2
        SUBSCRIBER mystore ON mach4
      ELEMENT e4 DATASTORE
        MASTER mystore ON mach4
        SUBSCRIBER mystore ON mach1
        SUBSCRIBER mystore ON mach2
        SUBSCRIBER mystore ON mach3
      STORE mystore ON mach1 PORT 10000
      STORE mystore ON mach2 PORT 10001
      STORE mystore ON mach3 PORT 10002
      STORE mystore ON mach4 PORT 10003;
    Looks easy but there are some important caveats:
    1. If you use the default asynchronous mode of operation, then when the currently 'active' datastore (the one receiving application updates) or the machine hosting it fails, there will, in general, be some trapped transactions. If you immediately fail the application over to another datastore and then recover the failed store, you need to consider what will happen to those transactions... You really have two choices: allow them to flow across after the store has been recovered, but then you run the risk of 'old' updates overwriting 'newer' updates, or discard the failed datastore entirely and re-create it by duplicating the new active datastore. Either way you will typically 'lose' some transactions and/or have diverged datastores. Using conflict resolution may help (this is only available with table-level replication, not datastore-level replication) but it is not always a foolproof solution.
    2. You need to take care with the geographic links to make sure that the bandwidth and latency of the connections are sufficient for replication to operate successfully. You will almost certainly have to activate the software compression option (COMPRESS TRAFFIC ON) if you are replicating across geographic distances. You also need to ensure that the O/S TCP/IP settings are tuned suitably for high-latency connections.
    There is an alternative (and preferred) solution for this requirement based on active/standby pair replication. Rather than explain it here (it is quite detailed) I have written a whitepaper that describes this (and other) replication solutions; if you are interested and can give me an e-mail address I will send it to you.
    Chris

  • Solution for DR site?

    Dear Experts,
    We are building a standby DR site for our oracle database servers.
    I know that there is Oracle DataGuard technology that can be used. And also I know that for oracle ebs there is a standard document about 'business continuity' that tells about DR setup for oracle ebs.
    But our company is opting for 'DOUBLE TAKE' replication/high-availability software that replicates the whole server from the primary to the standby site. This replication is at the server level, hence whatever changes occur on the source server are replicated to the target server through this 'DOUBLE TAKE' solution. Even the services, database services, database file updates, software, folders, files, registry for Windows, Linux OS, etc., everything is replicated from source to target.
    1. Can you please advice what the advantages and disadvantages of using such replication solution?
    2. Are there any chances of data corruption?
    3. And most important, What are the scenarios that I should test?
    Thanks in advance.

    Sami,
    "But our company is opting for 'DOUBLE TAKE' replication/high-availability software that replicates the whole server from the primary to the standby site..."
    You need to confirm with the software vendor whether this solution is supported with Oracle EBS or not.
    1. Can you please advise on the advantages and disadvantages of using such a replication solution?
    What software are we talking about here?
    2. Are there any chances of data corruption?
    Not sure what the capabilities of the software are, so it is hard to answer. At the application level, I believe there should be no issues, but at the database level you need to verify with the software vendor.
    3. And most important, what are the scenarios that I should test?
    Follow the same scenarios covered in the Data Guard docs (which you mentioned you are aware of).
    Thanks,
    Hussein

  • VCenter SRM Installation and Configuration

    Hello,
    We have installed and configured SRM 5.5.1 at the main site and the DR site. At the DR site we have two hosts and no shared storage.
    Each site has one vSphere replication appliance deployed, both sites are connected, and we have also replicated two virtual machines successfully.
    Next, we configured a protection group and a recovery plan as well.
    But when we test the recovery plan it gives an error: "unable to access virtual machine configuration file".
    Is it necessary that we have shared storage? Can't we use the local datastore on each host to power on the machines?
    Regards
    Karthik

    Hi,
    No, shared storage is not required:
    - Array-based replication: requires the same storage replication solution at both sites (e.g. EMC RecoverPoint, NetApp vFiler, IBM SVC, etc.)
    - vSphere Replication: can support any storage solution at either end, including local storage, as long as it is covered by the vSphere HCL
    See: SRM - Array Based Replication vs. vSphere Replication | VMware vSphere Blog - VMware Blogs
    Michael.

  • Best practices for setting up virtual servers on Windows Server 2012 R2

    I am creating a Web server from scratch with Windows Server 2012 R2. I expect to have a host server, and then 3 virtual servers: one that runs all of the web apps as a web server, another as a database server, and one for session state. I expect to use Windows Server 2012 R2 for the Web Server and Database Server, but Windows 7 for the session state.
    I have a SATA2 Intel SROMBSASMR RAID card with battery backup to which I am attaching a small SSD drive that I expect to use for session state, and an IBM Server RAID M1015 SATA3 card running Intel 520 Series SSDs that I expect to use for the Web server and Database server.
    I have some questions. I am considering using the internal USB with a flash drive to boot the Host off of, and then using two small SSD's in a Raid 0 for the Web server (theory being that if something goes wrong, session state is on a different drive), and
    then 2 more for the Database server in a RAID 1 configuration.
    please feel free to poke holes in this and tell me of a better way to do it.
    I am assuming that having the host running on a slow USB drive that is internal has no effect on the virtual servers after it is booted up, and the virtual servers are booted up?
    DCSSR

    There are two issues with RAID0:
    1) It's not as fast as people think. With a general-purpose file system like NTFS or ReFS (the choice for Windows is limited) you're not going to see any great benefit, as the chances of a whole RAID stripe being updated at the same time are very low (I/Os need to touch all SSDs in a set, so 256KB+ in real life). A web server workload is quite far from sequential reads or writes, so RAID0 is not going to shine here. A log-structured file system (or at least an FS with logging capabilities, think ZFS with ZIL enabled) *will* benefit from SSDs in RAID0 properly assigned.
    2) RAID0 is dangerous. One lost SSD renders the whole RAID set useless. So unless you build a network RAID1-over-RAID0 (mirroring RAID sets between multiple hosts with a virtual SAN or a synchronous replication solution), you'll be sitting on a time bomb.
    Not good :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Using DFS-R to replicate VHD and VHDX Files

    Hello -
    I am trying to find a solution to replicate VHD files and VHDX files between two data centers. I'm deploying the solution using Windows Server 2012.
    The VHD files are not virtual machines, they are simply storage areas (one for each user) that contain user documents and other files.
    The users will potentially log on to terminal servers in two locations and I'd like the VHD files to be accessible from both locations. The VHD files will mount during the logon sequence into a mount point within the users profile (or some other accessible
    area within the file system). This is a bit like user profile disks, but more user data disks.
    Has anyone attempted the replication of VHD files before using DFS-R, and are there any things I should be aware of?
    Will DFS-R support delta replication of VHDs? If a user has a 500 MB presentation, within the VHD file, and changes a single line of text, I'd prefer for only the changed blocks to synchronize.
    I don't believe DFS-R supports replicating open files, so I understand that the data won't be in sync until the VHD file is unmounted, but that's okay.
    Cheers, I'd welcome any feedback.
    Mitch

    If you don't own a pretty fat pipe between the data centres (and one that comes free of charge), that's not the best idea, for a reason: DFS-R uses compression only, paired with a per-file changed-block tracker, and no deduplication; see...
    http://technet.microsoft.com/en-us/library/cc771058.aspx
    ...so at the end of the day you'll replicate TONS of unneeded information. What you really need to do: deploy a third-party replication solution layered on top of the volume(s) where your VHD/VHDX files are stored.
    Will DFS-R work to replicate VHD/VHDX? YES
    Is DFS-R optimal for VHD/VHDX replication scenario? NO
    Hope this helped :)
    StarWind iSCSI SAN & NAS

  • Cannot add cluster disk on windows 2012

    I have 2 servers connected to a shared storage array using Fibre Channel.  Currently there are 2 luns on the storage device.  
    From the Failover Cluster Manager, when I select “Add Disk” from Storage – Disks, both luns show up and I can add them without problems.  But if I choose to only
    add one of them first (it doesn't make a difference which one is added first), then it will not allow adding the second one later.  I get a message: "No disks suitable for cluster disks were found. For diagnostic information about disks available to the
    cluster, use Validate a Configuration Wizard to run Storage tests."
    When I do add one (or both disks at the same time), they work just fine for failover clustering.
    I can’t imagine this is by design. Is this a known/unknown issue, or is there something special that needs to be done?
    Thanks

    Ok, no problem. I ran the validation tests (Validate Cluster - Storage only) and received the details shown below. I can clearly see that the Cluster Validation tool does not find any disk suitable for a 2012 cluster, and I can even see why (it is plain English).
    Unfortunately this still does not explain how to fix the problem. Recommendations like "run the Clear-ClusterDiskReservation PowerShell cmdlet to remove the Persistent Reservation from the disk" do not work; this is discussed in another thread where people are having the same issue with no luck so far. I guess I'm using an old SAN (and I know that) that simply does not support SCSI-3 persistent reservations, so those disks cannot be used in a 2012 cluster. Oh well, I will have to wait for a new SAN and then I can play again...
    Validation test results:
    Physical disk with identifier {95059d50-a173-49bd-b029-fec326acd78c} has an existing Persistent Reservation placed on it, and will be removed from the validation set. This disk may already be clustered, on a different cluster or in use by another system.
    Please verify the storage configuration and LUN zoning. If you wish to cluster this disk, you can use the Clear-ClusterDiskReservation PowerShell cmdlet to remove the Persistent Reservation from the disk.
    Physical disk with identifier {0b301c72-c88f-47ee-9389-11efdeed6cdb} has an existing Persistent Reservation placed on it, and will be removed from the validation set. This disk may already be clustered, on a different cluster or in use by another system.
    Please verify the storage configuration and LUN zoning. If you wish to cluster this disk, you can use the Clear-ClusterDiskReservation PowerShell cmdlet to remove the Persistent Reservation from the disk.
    Physical disk with identifier {eec0be37-5dfe-47f7-a149-4f1dec18450b} has an existing Persistent Reservation placed on it, and will be removed from the validation set. This disk may already be clustered, on a different cluster or in use by another system.
    Please verify the storage configuration and LUN zoning. If you wish to cluster this disk, you can use the Clear-ClusterDiskReservation PowerShell cmdlet to remove the Persistent Reservation from the disk.
    Physical disk with identifier {dcb8b1fe-7671-41c9-b996-da26c0b93751} has an existing Persistent Reservation placed on it, and will be removed from the validation set. This disk may already be clustered, on a different cluster or in use by another system.
    Please verify the storage configuration and LUN zoning. If you wish to cluster this disk, you can use the Clear-ClusterDiskReservation PowerShell cmdlet to remove the Persistent Reservation from the disk.
    No disks were found on which to perform cluster validation tests. To correct this, review the following possible causes:
    * The disks are already clustered and currently Online in the cluster. When testing a working cluster, ensure that the disks that you want to test are Offline in the cluster.
    * The disks are unsuitable for clustering. Boot volumes, system volumes, disks used for paging or dump files, etc., are examples of disks unsuitable for clustering.
    * Review the "List Disks" test. Ensure that the disks you want to test are unmasked, that is, your masking or zoning does not prevent access to the disks. If the disks seem to be unmasked or zoned correctly but could not be tested, try restarting the servers
    before running the validation tests again.
    * The cluster does not use shared storage. A cluster must use a hardware solution based either on shared storage or on replication between nodes. If your solution is based on replication between nodes, you do not need to rerun Storage tests. Instead, work
    with the provider of your replication solution to ensure that replicated copies of the cluster configuration database can be maintained across the nodes.
    * The disks are Online in the cluster and are in maintenance mode. No disks were found on which to perform cluster validation tests.

  • How to replicate schema to other database

    Hi,
    I have a database of version 11.2.0.3, running on Enterprise Linux version 5.
    On this database, I have an application which resides in a schema called APPS (its size being 8G).
    I need to setup the same schema in a DR site and replicate the whole data continuously so that if there is an issue in primary site, I can switch to the DR site.
    I cannot setup a Dataguard (physical standby) configuration in this database as there is another application which resides on this database and that application basically does the replication by itself to the DR site (The application is Oracle Content Management)
    I am thinking of 3 options. Appreciate if you could please suggest me the best option and also pointers on how to implement them.
    1. Set up Materialized View in both Primary and DR database schema so that the schema gets refreshed continuously
    2. Implement streams and replicate the data across the schema's (Both from Primary db to DR and viceversa)
    3. Move this application (schema) to a new database with Physical Standby Dataguard configuration.
    Since the schema is of a very small size (8G), I want to try options 1 and 2 before trying option 3, i.e. creating a new db altogether.
    Thanks!

    877343 wrote:
    "I need to setup the same schema in a DR site and replicate the whole data continuously so that if there is an issue in the primary site, I can switch to the DR site."
    If you need a disaster recovery solution, go with option 3: move this application (schema) to a new database with a physical standby Data Guard configuration. Oracle Data Guard provides the management, monitoring, and automation software infrastructure to create and maintain one or more standby databases to protect Oracle data from failures, disasters, errors, and data corruption. Data Guard is unique among Oracle replication solutions in supporting both synchronous (zero data loss) and asynchronous (near-zero data loss) configurations. Administrators can choose either manual or automatic failover of production to a standby system if the primary system fails, in order to maintain high availability for mission-critical applications.
    If you have good knowledge of materialized views (more a developer role than a DBA one), option 1 is workable: set up materialized views in both the primary and DR database schemas so that the schema gets refreshed continuously.
    Option 2 (implement Streams and replicate the data across the schemas in both directions) is complicated for beginners.
    Try Oracle GoldenGate!!!

  • Data MIgration from Oracle to SQL Server 2005

    Hi Gurus,
    Kindly advise me how to migrate data from Oracle to MS SQL Server, or vice versa.
    I came to know about 2 methods:
    1) Using SQL Developer
    2) Using ODBC.
    Kindly let me know which method is better; I am confused about the two options.
    Kindly advise on the same.
    Thanks

    Such questions are usually asked and answered on the forums of the target system, in this case the MS SQL forums.
    But I will answer.
    You should create a LINKED SERVER in MS SQL that connects to Oracle.
    Then issue a couple of SELECT * INTO <TARGET_TABLE> FROM <ORACLE LINKED SERVER>..<SRC SCHEMA>.<SRC TABLE> statements.
    Install Oracle Client and OLE DB driver on SQL Server machine.
    Also, Oracle is case sensitive by default. MS SQL is case insensitive by default. If there are primary/unique keys that have mixed case values in Oracle, then in MS SQL you need to set case sensitive collation for them.
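    The linked-server steps above might look roughly like this in T-SQL (the server alias, login, and table names are placeholders, not from the original post):

    ```sql
    -- Run on the SQL Server machine, after installing the Oracle
    -- client and OLE DB provider there.
    EXEC sp_addlinkedserver
         @server     = 'ORA_SRC',
         @srvproduct = 'Oracle',
         @provider   = 'OraOLEDB.Oracle',
         @datasrc    = 'ORCL';            -- TNS alias

    EXEC sp_addlinkedsrvlogin
         @rmtsrvname  = 'ORA_SRC',
         @useself     = 'FALSE',
         @rmtuser     = 'scott',
         @rmtpassword = 'tiger';

    -- Copy one table across using the four-part name syntax.
    SELECT *
    INTO   dbo.EMP_COPY
    FROM   ORA_SRC..SCOTT.EMP;
    ```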
    PS. If you need not only to migrate data one time, but also to have real-time replication during an application transition period, you can take a look at heterogeneous replication solutions like GoldenGate or DataCurrents.
