Moving a large database

Hello, I have a customer that is looking to move a 1.7 TB database from an old server to a new one. The source database is an old 8i database that was upgraded to 10.2.x. The move will be to a new 64-bit Windows 2008 server with lots of memory and a SAN. I am looking for suggestions on how to best move this database. A backup/restore or cold backup is NOT an option, as there are some SYS schema limitations migrated from the 8i database that cannot be carried over. Are transportable tablespaces the way to go? Is there any reasonable way to do an export/import with a database this size? Any thoughts/suggestions, please. Thank you.
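On the export/import question: with 10.2 on both sides, Data Pump is usually more tractable than classic exp/imp at this size. A rough sketch only, with directory name, paths, and parallel degree all hypothetical:

```sql
-- On each database, create a directory object for the dump files:
CREATE DIRECTORY dp_dir AS 'E:\dp_export';
-- Then, from the OS shell (not SQL*Plus):
--   source:  expdp system FULL=Y DIRECTORY=dp_dir DUMPFILE=full_%U.dmp PARALLEL=4 LOGFILE=exp.log
--   target:  impdp system FULL=Y DIRECTORY=dp_dir DUMPFILE=full_%U.dmp PARALLEL=4 LOGFILE=imp.log
```

Even with PARALLEL, budget a long window for 1.7 TB; transportable tablespaces avoid most of the data movement and are worth testing first.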

Hi,
Please refer to these MOS tech notes:
*Master Note For Oracle Database Upgrades and Migrations [ID 1152016.1]*
*How to Perform a Full Database Export Import during Upgrade, Migrate, Copy, or Move of a Database [ID 286775.1]*
*Cross-Platform Migration on Destination Host Using Rman Convert Database [ID 414878.1]*
*How to Migrate Oracle 10.2 32bit to 10.2 64bit on Microsoft Windows [ID 403522.1]*
*Different Upgrade Methods For Upgrading Your Database [ID 419550.1]*
Hope this helps :)
thanks,
X A H E E R

Similar Messages

  • Move large database to other server using RMAN in less downtime

    Hi,
    We have a large database, around 20 TB. We want to migrate (move) the database from one server to another. We do not want to use the standby option.
    1)     How can we move the database using RMAN with minimal downtime?
    2)     Other than RMAN, is there any other option available for moving the database to the new server?
    For option 1 (restore using RMAN), is the approach below valid? If so, how do we implement it?
    a)     Take a full backup from the source (source DB is up)
    b)     Restore the full backup on the target (source DB is up)
    c)     Take an incremental backup from the source (source DB is up)
    d)     Restore the incremental backup on the target (source DB is up)
    e)     Repeat steps c and d until the downtime window (source DB is up)
    f)     Shut down and mount the source DB, and take a final incremental backup (source DB is down)
    g)     Restore the last incremental backup and start the target database (target is up and the application is accessing the new DB)
    database version: 10.2.0.4
    OS: SUN solaris 10
    Edited by: Rajak on Jan 18, 2012 4:56 AM
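    The a-g loop above maps roughly onto these RMAN commands (backup destination and the NOREDO recovery are a sketch under assumed defaults, not a tested script):

```sql
-- on the source (open): one level 0 backup, then periodic level 1 backups
BACKUP INCREMENTAL LEVEL 0 DATABASE FORMAT '/backup/mig_%U';
BACKUP INCREMENTAL LEVEL 1 DATABASE FORMAT '/backup/mig_%U';
-- on the target (mounted): restore the level 0 once, then roll forward
RESTORE DATABASE;
RECOVER DATABASE NOREDO;   -- applies incrementals without archived logs
-- repeat the level 1 / RECOVER ... NOREDO cycle until the downtime window,
-- take the final level 1 with the source mounted, recover again, then:
ALTER DATABASE OPEN RESETLOGS;
```

    The backup pieces would need to be made visible on the target (e.g. CATALOG START WITH '/backup/') before each RECOVER.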

    Simple:
    I do this all the time to relocate file system files, but the principle is the same. You can do this in iterations, so you do not need to do it all at once:
    Starting at 8 AM, move the less-used files, then the more active files in the afternoon, using the following backup method.
    SCRIPT-1
    RMAN> BACKUP AS COPY
    DATAFILE 4   # "/some/orcl/datafile/users.dbf"
    FORMAT "+USERDATA";
    Do as many files as you think you can handle during your downtime window.
    During your downtime window: stop all applications so there is no contention in the database
    SCRIPT-2
    ALTER DATABASE DATAFILE 4 offline;
    SWITCH DATAFILE 4 TO COPY;
    RECOVER DATAFILE 4;
    ALTER DATABASE DATAFILE 4 online;
    I then delete the original file at some point later, after we make sure everything has recovered and been successfully brought back online.
    SCRIPT-3
    DELETE DATAFILECOPY "/some/orcl/datafile/users.dbf";
    For datafiles/tablespaces that are really busy, I typically copy them later in the afternoon, as there are fewer archive logs to go through in order to make them consistent. The ones copied in the morning have more to go through, but less likelihood of there being anything to do.
    Using this method, we have moved upwards of 600 GB at a time, and the actual downtime to do the switchover is < 2 hrs. YMMV. As I said, this can be done in stages to minimize overall downtime.
    If you need some documentation support see:
    http://docs.oracle.com/cd/E11882_01/server.112/e18951/asm_rman.htm#CHDBDJJG
    And before you do ANYTHING... TEST TEST TEST TEST TEST. Create a dummy tablespace on QFS and use this procedure to move it to ASM to ensure you understand how it works.
    Good luck! (hint: scripts to generate these scripts can be your friend.)

  • Moving A Confluence Database

    Has anyone moved a Confluence Database before?
    I need to move it, as the server it is currently sitting on is predominantly a development and test platform.
    Both SQL Server instances are the same version, and both instances are on clustered environments hosted by the same nodes.
    Thanks
    Please click "Mark As Answer" if my post helped. Tony C.

    Please check the below link:
    https://confluence.atlassian.com/display/DOC/Migrating+Confluence+Between+Servers
    Method one – standard procedure
    Step 1: Take note of your add-ons
    Take note of the add-ons (plugins) currently installed and enabled in Confluence, so that you can reinstate them later. Make a note of the following for each add-on:
    Add-on name
    Version
    Enabled or disabled status. This is useful if you have enabled or disabled modules yourself, making your configuration differ from the default.
    Step 2: Back up your data
    Create an XML backup of your existing data, via the Confluence administration console. See Manually Backing Up the Site. Make a note of the location where you put the XML file. You will need it later to import your Confluence data into your new database.
    Shut down Confluence.
    Make a copy of the Confluence Home and other important directories. This is a precautionary measure, to ensure you can recover your data if it is mistakenly overwritten.
    If you are using an external database, make a separate backup using the utilities that were installed with that database. This also is a precautionary measure.
    Step 3: Set up the new database
    Choose the database setup instructions for your new database, and follow those instructions to do the following:
    Install the database server.
    Perform any required configuration of the database server, as instructed.
    Add the Confluence database and user. Make a note of the username and password that you define in this step. You will need them later, when running the Confluence Setup Wizard.
    Step 4. Install Confluence (same version number) in a new location
    Now you will install Confluence again, with a different home directory path and installation path.
    Note: You must use the same version of Confluence as the existing installation. (If you want to upgrade Confluence, you must do it as a separate step.) For example, if your current site is running Confluence 5.1.2, your new installation must also be Confluence 5.1.2.
    When running the Confluence installer:
    Choose Custom Install. (Do not choose to upgrade your existing installation.)
    Choose a new destination directory. This is the installation directory for your new Confluence. It must not be the same as the existing Confluence installation.
    Choose a new home directory. This is the data directory for your new Confluence. It must not be the same as the existing Confluence installation.
    Step 5. Download and install the database driver if necessary
    Note that Confluence bundles some database drivers, but you'll need to install the driver yourself if it is not bundled. Follow the database setup instructions for your new database, to download and install the database driver if necessary.
    Step 6. Run the Confluence setup wizard and copy your data to your new database
    When running the Confluence setup wizard:
    Enter your license key, as usual.
    Choose Production Installation as the installation type.
    In the database configuration step, choose your new database type from the dropdown menu, then choose External Database.
    Choose the connection type: Direct JDBC or Datasource. If you are not sure which, choose 'Direct JDBC'. This is the most common connection type.
    When prompted for the database user and password, supply the credentials you defined earlier when adding the Confluence database to your database server.
    On the load content step, choose Restore From Backup. This is where you will import the data from your XML backup. There are two options for accessing the XML file:
    Browse to the location of your XML backup on your network, and choose Upload and Restore.
    Alternatively, put the XML file in the Confluence home directory of the new site (<CONFLUENCE-HOME-DIRECTORY>\restore), then choose Restore.
    Note: If you choose not to restore during the Confluence setup wizard, you can do the import later. Go to the Confluence administration console and choose to restore an XML backup. See Site Backup and Restore.
    Step 7. Re-install your add-ons
    Re-install any add-ons (plugins) that are not bundled with Confluence.
    Use the same version of the add-on as on your old Confluence site.
    The data created by the add-ons will already exist in your new Confluence site, because it is included in the XML backup.
    Step 8. Check settings for new machine
    If you are moving Confluence to a different machine, you need to check the following settings:
    Configure your new base URL. See Configuring the Server Base URL.
    Check your application links. See Linking to Another Application.
    Update any gadget subscriptions from external sites pointing to this Confluence site. For example, if your JIRA site subscribes to Confluence gadgets, you will need to update your JIRA site. See Adding JIRA Gadgets to a Confluence Page.
    Review any other resources that other systems are consuming from Confluence.
    Method two – for installations with a large volume of attachments
    Before you start
    Before proceeding with these instructions please check the following.
    Your existing installation must be Confluence 2.2 or later.
    Your attachments must be stored in the file system, not in your database. (To migrate between attachment storage systems, see
    Attachment Storage Configuration.)
    The instructions below will only work if both of the above are true.
    Step 1: Take note of your add-ons
    Take note of the add-ons (plugins) currently installed and enabled in Confluence, so that you can reinstate them later. Make a note of the following for each add-on:
    Add-on name
    Version
    Enabled or disabled status. This is useful if you have enabled or disabled modules yourself, making your configuration differ from the default.
    Step 2: Back up your data
    Create an XML backup of your existing data, via the Confluence administration console. See Manually Backing Up the Site. Make a note of the location where you put the XML file. You will need it later to import your Confluence data into your new database.
    Shut down Confluence.
    Make a copy of the attachments directory (<CONFLUENCE-HOME-DIRECTORY>\attachments) in your Confluence Home directory. You will need it later to copy your Confluence attachments data into your new Confluence installation.
    If you are using an external database, make a separate backup using the utilities that were installed with that database. This also is a precautionary measure.
    Step 3: Set up the new database
    Choose the database setup instructions for your new database, and follow those instructions to do the following:
    Install the database server.
    Perform any required configuration of the database server, as instructed.
    Add the Confluence database and user. Make a note of the username and password that you define in this step. You will need them later, when running the Confluence Setup Wizard.
    Step 4. Install Confluence (same version number) in a new location
    Now you will install Confluence again, with a different home directory path and installation path.
    Note: You must use the same version of Confluence as the existing installation. (If you want to upgrade Confluence, you must do it as a separate step.) For example, if your current site is running Confluence 5.1.2, your new installation must also be Confluence 5.1.2.
    When running the Confluence installer:
    Choose Custom Install. (Do not choose to upgrade your existing installation.)
    Choose a new destination directory. This is the installation directory for your new Confluence. It must not be the same as the existing Confluence installation.
    Choose a new home directory. This is the data directory for your new Confluence. It must not be the same as the existing Confluence installation.
    Step 5. Download and install the database driver if necessary
    Note that Confluence bundles some database drivers, but you'll need to install the driver yourself if it is not bundled. Follow the database setup instructions for your new database, to download and install the database driver if necessary.
    Step 6. Run the Confluence setup wizard and copy your data to your new database
    When running the Confluence setup wizard:
    Enter your license key, as usual.
    Choose Production Installation as the installation type.
    In the database configuration step, choose your new database type from the dropdown menu, then choose External Database.
    Choose the connection type: Direct JDBC or Datasource. If you are not sure which, choose 'Direct JDBC'. This is the most common connection type.
    When prompted for the database user and password, supply the credentials you defined earlier when adding the Confluence database to your database server.
    On the load content step, choose Restore From Backup. This is where you will import the data from your XML backup. There are two options for accessing the XML file:
    Browse to the location of your XML backup on your network, and choose Upload and Restore.
    Alternatively, put the XML file in the Confluence home directory of the new site (<CONFLUENCE-HOME-DIRECTORY>\restore), then choose Restore.
    Note: If you choose not to restore during the Confluence setup wizard, you can do the import later. Go to the Confluence administration console and choose to restore an XML backup. See Site Backup and Restore.
    Step 7: Copy your attachments across
    Copy the contents of the attachments directory (<CONFLUENCE-HOME-DIRECTORY>\attachments) from your old Confluence Home directory to your new Confluence Home directory.
    Step 8. Re-install your add-ons
    Re-install any add-ons (plugins) that are not bundled with Confluence.
    Use the same version of the add-on as on your old Confluence site.
    The data created by the add-ons will already exist in your new Confluence site, because it is included in the XML backup.
    Step 9. Check settings for new machine
    If you are moving Confluence to a different machine, you need to check the following settings:
    Configure your new base URL. See Configuring the Server Base URL.
    Check your application links. See Linking to Another Application.
    Update any gadget subscriptions from external sites pointing to this Confluence site. For example, if your JIRA site subscribes to Confluence gadgets, you will need to update your JIRA site. See Adding JIRA Gadgets to a Confluence Page.
    Review any other resources that other systems are consuming from Confluence.
    Regards, Pradyothana DP. Please Mark This As Answer if it solved your issue. Please Mark This As Helpful if it helps to solve your issue. ========================================================== http://www.dbainhouse.blogspot.in/

  • Problem with  large databases.

    Lightroom doesn't seem to like large databases.
    I am playing catch-up using Lightroom to enter keywords to all my past photos. I have about 150K photos spread over four drives.
    Even placing a separate database on each hard drive is causing problems.
    The program crashes when importing large numbers of photos from several folders. (I do not ask it to render previews.) If I relaunch the program, and try the import again, Lightroom adds about 500 more photos and then crashes, or freezes again.
    I may have to go back and import them one folder at a time, or use iView instead.
    This is a deal-breaker for me.
    I also note that it takes several minutes after opening a database before the HD activity light stops flashing.
    I am using XP on a dual-core machine with 3 GB of RAM.
    Anyone else finding this?
    What is your work-around?

    Christopher,
    True, but given the number of posts where users have had similar problems ingesting images into LR (where LR runs without crashes and further trouble once the images are in), the probative evidence points to some LR problem ingesting large numbers.
    It may also be that users are attempting to use LR for editing during the ingestion of large numbers. I found that I simply could not do that without a crash occurring. When I limited it to 2k at a time, leaving my hands off the keyboard while the import occurred, everything went without a hitch.
    However, as previously pointed out, it shouldn't require that; none of my other DAMs using SQLite do that, and I can multitask while they are ingesting.
    But you are right: multiple single causes, and complexly interrelated multiple causes, could account for it on a given configuration.

  • How can we suggest a new DBA OCE certification for very large databases?

    How can we suggest a new DBA OCE certification for very large databases?
    What web site or phone number can we use to suggest creating a VLDB OCE certification?
    The largest databases that I have ever worked with were barely over 1 terabyte.
    Some people told me that the work of a DBA totally changes when you have a VERY LARGE DATABASE.
    I could guess that some of the following configuration topics might be on it:
    * Partitioning
    * parallel
    * bigger block size - DSS vs OLTP
    * etc
    Where could I send in a recommendation?
    Thanks Roger

    I wish there were some details about the OCE data warehousing.
    Look at the topics for 1Z0-515. Assume that the 'lightweight' topics will go (like Best Practices) and that there will be more technical topics added.
    Oracle Database 11g Data Warehousing Essentials | Oracle Certification Exam
    Overview of Data Warehousing
      Describe the benefits of a data warehouse
      Describe the technical characteristics of a data warehouse
      Describe the Oracle Database structures used primarily by a data warehouse
      Explain the use of materialized views
      Implement Database Resource Manager to control resource usage
      Identify and explain the benefits provided by standard Oracle Database 11g enhancements for a data warehouse
    Parallelism
      Explain how the Oracle optimizer determines the degree of parallelism
      Configure parallelism
      Explain how parallelism and partitioning work together
    Partitioning
      Describe types of partitioning
      Describe the benefits of partitioning
      Implement partition-wise joins
    Result Cache
      Describe how the SQL Result Cache operates
      Identify the scenarios which benefit the most from Result Set Caching
    OLAP
      Explain how Oracle OLAP delivers high performance
      Describe how applications can access data stored in Oracle OLAP cubes
    Advanced Compression
      Explain the benefits provided by Advanced Compression
      Explain how Advanced Compression operates
      Describe how Advanced Compression interacts with other Oracle options and utilities
    Data integration
      Explain Oracle's overall approach to data integration
      Describe the benefits provided by ODI
      Differentiate the components of ODI
      Create integration data flows with ODI
      Ensure data quality with OWB
      Explain the concept and use of real-time data integration
      Describe the architecture of Oracle's data integration solutions
    Data mining and analysis
      Describe the components of Oracle's Data Mining option
      Describe the analytical functions provided by Oracle Data Mining
      Identify use cases that can benefit from Oracle Data Mining
      Identify which Oracle products use Oracle Data Mining
    Sizing
      Properly size all resources to be used in a data warehouse configuration
    Exadata
      Describe the architecture of the Sun Oracle Database Machine
      Describe configuration options for an Exadata Storage Server
      Explain the advantages provided by the Exadata Storage Server
    Best practices for performance
      Employ best practices to load incremental data into a data warehouse
      Employ best practices for using Oracle features to implement high performance data warehouses

  • Best approach to archival of large databases?

    I have a large database (~300 gig) and have a data/document retention requirement that requires me to take a backup of the database once every six months to be retained for 5 years. Other backups only have to be retained as long as operationally necessary, but twice a year, I need these "reference" backups to be available, should we need to restore the data for some reason - usually historical research for data that extends beyond what's currently in the database.
    What is the best approach for making these backups? My initial response would be to do a full export of the database, as this frees me from any dependencies on software versions, etc. However, an export takes a VERY long time. I can manage it by doing multiple concurrent exports by tablespace; this can be completed in < 1 day. Or I can back up the software directory plus the database files in a cold backup.
    Or is RMAN well-suited for this? So far, I've only used RMAN for my operational-type backups - for short-term data recovery needs.
    What are other people doing?

    Thanks for your input. How would I do this? My largest table is in monthly partitions, each in its own tablespace. Would the process have to be something like: alter table ... exchange the partition-to-be-rolled-off with a non-partitioned table, then export that tablespace?
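    A sketch of that exchange-then-export idea (table, partition, and tablespace names are invented, and the stage table must match the partition's structure):

```sql
-- swap the partition being rolled off into a plain table
ALTER TABLE big_table EXCHANGE PARTITION p_2007_01 WITH TABLE big_table_stage;
-- make the tablespace read-only and export it as a transportable set
ALTER TABLESPACE ts_2007_01 READ ONLY;
-- from the OS shell (classic exp syntax):
--   exp system TRANSPORT_TABLESPACE=y TABLESPACES=ts_2007_01 FILE=ts_meta.dmp
-- then archive ts_2007_01's datafiles alongside ts_meta.dmp
```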

  • Can we query through 5-6gb large database with AIR

    As it creates one single file for the whole database, can we have a 5-6 GB database if an AIR application requires it?

    There's no arbitrary limit to the database size. It would depend on performance and the user's file system, I suspect. Only you can judge the performance aspect, as it will depend on the complexity of your database and queries.

  • Moving the zcm database from RC2 to General Release.

    Is there any documentation that outlines moving a zcm database from RCX to the General Release?
    Thanks,
    Patrick

    Gray,
    not supported - do not even try it for fun...
    Shaun Pond

  • SAP EHP Update for Large Database

    Dear Experts,
    We are planning for the SAP EHP7 update for our system. Please find the system details below
    Source system: SAP ERP6.0
    OS: AIX
    DB: Oracle 11.2.0.3
    Target System: SAP ERP6.0 EHP7
    OS: AIX
    DB: 11.2.0.3
    RAM: 32 GB
    The main concern here is the DB size: it is approximately 3 TB. I have already gone through forums and notes, and it is mentioned that the DB size does not have any impact on the SAP EHP update using SUM. However, I am still thinking it will have an impact in the downtime phase.
    Please advise on this.
    Regards,
    Raja. G

    Hi Raja,
    The main concern here is the DB size: it is approximately 3 TB. I have already gone through forums and notes, and it is mentioned that the DB size does not have any impact on the SAP EHP update using SUM. However, I am still thinking it will have an impact in the downtime phase.
    Although a 3 TB DB size may not have a direct impact on the upgrade process, the downtime of the system may vary with a larger database size.
    Points to consider
    1)     DB backup before entering the downtime phase
    2)     Number of programs & tables stored in the database. ICNV table conversions and XPRA execution will depend on these parameters.
    Hope this helps.
    Regards,
    Deepak Kori

  • Truncating a large database failed

    int ret;
    u_int32_t countp;
    if ((ret = dbp->truncate(dbp, NULL, &countp, 0)) != 0) {
        out_string(c, "ERROR");
    } else {
        out_string(c, "OK");
    }
    If there are only a few records in the database, it works. But with millions of records, it fails.
    Do I need to set the following three parameters?
    DB_ENV->set_lk_max_lockers     Set maximum number of lockers
    DB_ENV->set_lk_max_locks     Set maximum number of locks
    DB_ENV->set_lk_max_objects     Set maximum number of lock objects
    Regards
    Steve Chu

    Hi Steve,
    It's highly likely that you haven't configured the locking subsystem properly. Locking is indeed involved when truncating a database that is part of an environment with the locking subsystem initialized. A lock is needed for each database page, so if you know the size of your database and the page size, you can estimate how many locks you will need (you can also use the "db_stat" utility to find out this information). The default value for the maximum number of locks, lockers, and lock objects is 1000, which may be insufficient in the context of your large database.
    Information on how to configure the locking subsystem is in the Berkeley DB Reference Guide, under ref/lock/max.html.
    Also, I would suggest checking the value returned by DB->truncate() and printing out the eventual error using the error-reporting means you've configured.
    Regards,
    Andrei

  • Check if table, column exist in large database

    Dear All!
    I'm writing a Perl script in which I manipulate an Oracle database, using the DBI module.
    How can I check whether a given table name exists in a large database? The command:
    select table_name from user_tables where table_name = 'name I look for'
    always returns 0 rows. It doesn't work for me.
    If I use select * from "my table name", how does it behave memory-wise, because of the * ?
    Is there any way that, from the table (whose existence I am trying to check), I can select a random column, or just the first one?
    thank you!!
    nagybaly

    Try ALL_TABLES instead of USER_TABLES:
    select table_name from ALL_TABLES where table_name = 'name you look for'
    USER_TABLES only examines the tables in your personal schema; ALL_TABLES examines all tables you have privileges against.
    -cf
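    One common reason such a query returns nothing is case: the data dictionary stores unquoted identifiers in upper case. Illustrative queries (names invented) for both the table and the column checks:

```sql
SELECT table_name  FROM all_tables
WHERE  table_name  = UPPER('name_you_look_for');

SELECT column_name FROM all_tab_columns
WHERE  table_name  = UPPER('some_table')
AND    column_name = UPPER('some_column');
```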

  • Startup restrict for export of large database ?

    Hello,
    the Oracle Admin guide suggests that one possible use for the "restricted" mode of an Oracle database is to do a consistent export of a large database.
    But is this necessary, given that the option CONSISTENT=Y exists in the exp tool? I understand that using CONSISTENT=Y may need a lot of undo space on a large database, but could there be any reason other than this to do an export in restricted mode rather than using the CONSISTENT=Y parameter?

    I believe the primary reason is, as you mentioned, that CONSISTENT=Y is going to need a lot of undo space for a busy large database. Depending on your situation, it might be feasible to allocate such undo space.
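    The restricted-mode alternative amounts to quiescing the database yourself instead of paying the undo cost; roughly (10g-era exp, connect details and file names hypothetical):

```sql
SHUTDOWN IMMEDIATE;
STARTUP RESTRICT;    -- only users with RESTRICTED SESSION can connect
-- from the OS shell:  exp system FULL=Y FILE=full.dmp LOG=full.log
-- (CONSISTENT=Y is unnecessary here: nothing else is changing data)
ALTER SYSTEM DISABLE RESTRICTED SESSION;   -- reopen to normal use afterwards
```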

  • RMAN Tips for Large Databases

    Hi Friends,
    I'm starting to administer a large 10.2.0.1.0 database on Windows Server.
    Do you guys have tips or docs on best practices for large databases? I mean large as in 2 TB of data.
    I'm good at administering small and medium DBs, but some of them just got bigger and bigger!!!
    Thanks a lot

    I would like to mention the links below:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/partconc.htm
    http://download.oracle.com/docs/cd/B28359_01/server.111/b32024/vldb_backup.htm
    For some good advice and considerations for RMAN with a VLDB:
    http://sosdba.wordpress.com/2011/02/10/oracle-backup-and-recovery-for-a-vldb-very-large-database/
    Google "vldb AND RMAN in oracle"
    Regards
    Girish Sharma

  • Sql Server Management Assistant (SSMA) Oracle okay for large database migrations?

    All:
    We don't have much experience with the SSMA (Oracle) tool and need some advice from those of you familiar with it. We must migrate an Oracle 11.2.0.3.0 database to SQL Server 2014. The Oracle database consists of approximately 25,000 tables and 30,000 views and related indices. The database is approximately 2.3 TB in size.
    Is this doable using the latest version of SSMA-Oracle? If so, how much horsepower would you throw at this to get it done?
    Any other gotchas and advice appreciated.
    Kindest Regards,
    Bill
    Bill Davidson

    Hi Bill,
    SSMA supports migrating large Oracle databases. To migrate an Oracle database to SQL Server 2014, you could use the latest version: Microsoft SQL Server Migration Assistant v6.0 for Oracle. Before the migration, you should pay attention to the points below.
    1. The account that is used to connect to the Oracle database must have at least CONNECT permissions. This enables SSMA to obtain metadata from schemas owned by the connecting user. To obtain metadata for objects in other schemas, and then convert objects in those schemas, the account must have the following permissions: CREATE ANY PROCEDURE, EXECUTE ANY PROCEDURE, SELECT ANY TABLE, SELECT ANY SEQUENCE, CREATE ANY TYPE, CREATE ANY TRIGGER, SELECT ANY DICTIONARY.
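    Point 1's permission list translates to grants along these lines on the Oracle side (user name hypothetical):

```sql
GRANT CONNECT TO ssma_user;
GRANT CREATE ANY PROCEDURE, EXECUTE ANY PROCEDURE,
      SELECT ANY TABLE, SELECT ANY SEQUENCE,
      CREATE ANY TYPE, CREATE ANY TRIGGER,
      SELECT ANY DICTIONARY TO ssma_user;
```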
    2. Metadata about the Oracle database is not automatically refreshed. The metadata in Oracle Metadata Explorer is a snapshot of the metadata when you first connected, or the last time that you manually refreshed metadata. You can manually update metadata for all schemas, a single schema, or individual database objects. For more information about the process, please refer to the similar article:
    https://msdn.microsoft.com/en-us/library/hh313203(v=sql.110).
    3.The account that is used to connect to SQL Server requires different permissions depending on the actions that the account performs as the following:
     • To convert Oracle objects to Transact-SQL syntax, to update metadata from SQL Server, or to save converted syntax to scripts, the account must have permission to log on to the instance of SQL Server.
     • To load database objects into SQL Server, the account must be a member of the sysadmin server role. This is required to install CLR assemblies.
     • To migrate data to SQL Server, the account must be a member of the sysadmin server role. This is required to run the SQL Server Agent data migration packages.
     • To run the code that is generated by SSMA, the account must have Execute permissions for all user-defined functions in the ssma_oracle schema of the target database. These functions provide equivalent functionality of Oracle system functions, and are used by converted objects.
     • If the account that is used to connect to SQL Server is to perform all migration tasks, the account must be a member of the sysadmin server role.
    For more information about the process, please refer to the  similar article: 
    https://msdn.microsoft.com/en-us/library/hh313158(v=sql.110)
    4. Metadata about SQL Server databases is not automatically updated. The metadata in SQL Server Metadata Explorer is a snapshot of the metadata when you first connected to SQL Server, or the last time that you manually updated metadata. You can manually update metadata for all databases, or for any single database or database object.
    5. If the engine being used is the Server Side Data Migration Engine, then, before you can migrate data, you must install the SSMA for Oracle Extension Pack and the Oracle providers on the computer that is running SSMA. The SQL Server Agent service must also be running. For more information about how to install the extension pack, see Installing Server Components (OracleToSQL). When SQL Server Express edition is used as the target database, only client-side data migration is allowed; server-side data migration is not supported. For more information about the process, please refer to the similar article:
    https://msdn.microsoft.com/en-us/library/hh313202(v=sql.110)
    For how to migrate Oracle databases to SQL Server, please refer to the similar article:
    https://msdn.microsoft.com/en-us/library/hh313159(v=sql.110).aspx
    Regards,
    Michelle Li

  • Zooming, scrolling and moving in large images

    I'd like to create a document viewer that works like Google Maps or Safari. Basically the "image" of the document is so big that I only want to load parts of it when the user actually goes over to that part of it.
    How can I enable zooming, scrolling and moving in "large" images like that?
    I looked for a tutorial or sample code, but I wasn't able to find anything.
    Any help would be appreciated,
    -Chris.

    I have 4 GB of RAM. I tried closing every application, but the problem remains. You are right about web browser problems, but unfortunately my problem is related to Finder windows too. I have 800 MB of free RAM.
    Another thing I can tell you is that when I move a Finder window (or other windows, as I wrote before), the area outside the border of the window is repainted badly, with a puzzle effect. I think the problem is related to desktop refresh and repaint. It seems that graphics acceleration doesn't work for some windows!
