Data Migration Best Practice

Is there a clear-cut best-practice procedure for conducting data migration from one company to a new one?

I don't think there is a clear-cut procedure for that. Best practice is always relative; it varies dramatically depending on many factors, so there is no magic bullet here.
One exception to the above: you should always use tab-delimited text format, since that is the most DTW-friendly format (a quick sketch of producing such a file follows below).
Thanks,
Gordon
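
A minimal sketch of what such a tab-delimited export could look like in Java; the file name and column headers are only placeholders, since each DTW template defines its own columns:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class TabDelimitedExport {
    public static void main(String[] args) throws IOException {
        // Hypothetical column set; a real DTW template defines its own headers.
        List<String[]> rows = List.of(
                new String[] { "RecordKey", "CardCode", "CardName" },
                new String[] { "1", "C0001", "Acme Ltd" },
                new String[] { "2", "C0002", "Globex Corp" });

        StringBuilder sb = new StringBuilder();
        for (String[] row : rows) {
            // One record per line, fields separated by a tab character.
            sb.append(String.join("\t", row)).append(System.lineSeparator());
        }
        Files.writeString(Path.of("BusinessPartners.txt"), sb.toString());
    }
}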

Similar Messages

  • Migration Best Practice When Using an Auth Source

    Hi,
    I'm looking for some advice on migration best practices or more specifically, how to choose whether to import/export groups and users or to let the auth source do a sync to bring users and groups into each environment.
    One of our customers is using an LDAP auth source to synchronize users and groups. I'm trying to help them do a migration from a development environment to a test environment. I'd like to export/import security on each object as I migrate it, but does this mean I have to export/import the groups on each object's ACLs before I export/import each object? What about users? I'd like to leave users and groups out of the PTE files and just export/import the auth source and let it run in each environment. But I'm afraid the UUIDs for the newly created groups will be different and they won't match up with object ACLs any more, causing all the objects to lose their security settings.
    If anyone has done this before, any suggestions about best practices and gotchas when using the migration wizard in conjunction with an auth source would be much appreciated.
    Thanks,
    Chris Bucchere
    Bucchere Development Group
    [email protected]
    http://www.bucchere.com

    The best practice here would be to migrate only the auth source through the migration wizard, and then do an LDAP sync on the new system to pull in the users and groups. The migration wizard will then just "do the right thing" in matching up the users and groups on the ACLs of objects between the two systems.
    Users and groups are actually a special case during migration -- they are resolved first by UUID, but if that is not found, then a user with the same auth source UUID and unique auth name is also treated as a match. Since you are importing from the same LDAP auth source, the unique auth name for the user/group should be the same on both systems. The auth source's UUID will also match on the two systems, since you just migrated that over using the migration wizard.

  • Unicode Migration using National Characterset data types - Best Practice ?

    I know that Oracle discourages the use of the national character set and the national character set data types (NCHAR, NVARCHAR), but that is the route my company has decided to take, and I would like to know what the best practice is regarding this, specifically in relation to stored procedures.
    The database schema is being converted by changing all CHAR, VARCHAR and CLOB data types to NCHAR, NVARCHAR and NCLOB data types respectively and I would appreciate any suggestions regarding the changes that need to be made to stored procedures and if there are any hard and fast rules that need to be followed.
    Specific questions that I have are:
    1. Do CHAR and VARCHAR parameters need to be changed to NCHAR and NVARCHAR types?
    2. Do CHAR and VARCHAR variables need to be changed to NCHAR and NVARCHAR types?
    3. Do string literals need to be prefixed with N in all cases? e.g.
    in variable assignments - v_module_name := N'ABCD'
    in variable comparisons - IF v_sp_access_mode = N'DL'
    in calls to other procedures passing string parameters - proc_xyz(v_module_name, N'String Parameter')
    in database column comparisons - WHERE COLUMN_XYZ = N'ABCD'
    If anybody has been through a similar exercise, please share your experience and point out any additional changes that may be required in other areas.
    Database details are as follows. The application is written in COBOL and is also being changed to be Unicode compliant:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    NLS_CHARACTERSET = WE8MSWIN1252
    NLS_NCHAR_CHARACTERSET = AL16UTF16

    ##1. While doing a test conversion I discovered that VARCHAR parameters need to be changed to NVARCHAR2 and not VARCHAR2, and the same for VARCHAR variables.
    VARCHAR columns/parameters/variables should not be used, as Oracle reserves the right to change their semantics in the future. You should use VARCHAR2/NVARCHAR2.
    ##3. Not sure I understand, are you saying that Unicode columns (NVARCHAR2, NCHAR) in the database will only be able to store character strings made up of WE8MSWIN1252 characters?
    No, I meant literals. You cannot include non-WE8MSWIN1252 characters into a literal. Actually, you can include them under certain conditions but they will be transformed to an escaped form. See also the UNISTR function.
    ## Reason given for going down this route is that our application works with SQL Server and Oracle and this was the best option
    ## to keep the code/schemas consistent between the two databases
    First, you have to keep two sets of scripts anyway, because the DDL syntax differs between SQL Server and Oracle. There is therefore little benefit in keeping just the data type names the same while so many other things have to differ. If I designed your system, I would use a DB-agnostic object repository and a script generator to produce either SQL Server or Oracle scripts with the appropriate data types, or at least use some placeholder syntax and have the application installer substitute the appropriate data types for each target system.
    ## I don't know if it is possible to create a database in SQL Server with a Unicode characterset/collation like you can in Oracle, that would have been the better option.
    I am not an SQL Server expert but I think VARCHAR data types are restricted to Windows ANSI code pages and those do not include Unicode.
    -- Sergiusz
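
    For what it's worth, a hedged JDBC-side illustration: standard JDBC can bind NVARCHAR2/NCHAR values with setNString, which keeps N'...' literals out of dynamically built SQL. The connection details, table and column names below are made up for the example.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class NVarcharBindExample {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app_user", "secret");
                 PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO customer_nls (cust_id, cust_name) VALUES (?, ?)")) {
                ps.setInt(1, 42);
                // setNString binds the value as national character data, so no N'...'
                // literal is needed and the text is not converted through the
                // database character set (WE8MSWIN1252 here).
                ps.setNString(2, "Grüße aus Zürich");
                ps.executeUpdate();
            }
        }
    }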

  • Win XP to Win 7 migration -- best practice

    Hi,
    What are the best practices that need to be followed when we migrate XP to Win 7 using Configuration Manager?
    - Computer name: should we rename it during migration or keep it as is?
    - USMT: what should be migrated?
    Pls. share any pointers/suggestions. Thanks in advance.
    Regards,

    First determine your needs: do you really need to capture the user data at all? Perhaps the users can save their own data before the OS upgrade, so you don't need to worry about it. If you can make that kind of political decision, then you don't need USMT. The same goes for the computer name: you can keep the old names if you please. It's a policy decision; you should use a unique name for every computer, and some prefer the PC's serial number while others prefer something else. It's really up to you to decide.
    Some technical pointers to consider:
    Clients should have the ConfigMgr client installed on them before the migration (so that they appear in the console and can be instructed to do things, like running a task sequence...).
    If the clients use static IP addresses, you need to configure your TS to capture those settings and use them during the upgrade process...

  • NLS data conversion – best practice

    Hello,
    I have several tables originating from a database with a single-byte character set. I want to load the data into a database with a multi-byte character set like UTF-8 and, in the future, be able to use the Unicode version of Oracle XE.
    When I'm using DDL scripts to create the tables on the new database, and after that trying to load the data, I receive a lot of error messages regarding the size of the VARCHAR2 fields (which, of course, makes sense).
    As I understand it, I can solve the problem by doubling the size of the VARCHAR2 fields: VARCHAR2(20) will become VARCHAR2(40) and so on. Another option is to use the NVARCHAR2 datatype and retain the correlation with the number of characters in the field.
    I have never used NVARCHAR2 before, so I don't know if there are any side effects on the pre-built APEX processes like Automatic DML, Automatic Row Fetch and the like, or on the APEX data import mechanism.
    What will be the best practice solution for APEX?
    I'll appreciate any comments on the subjects,
    Arie.

    Hello,
    Thanks Maxim and Patrick for your replies.
    I started to answer Maxim when Patrick's post came in. It's interesting, as I tried to change this nls_length_semantics parameter once before, but without any success. I even wrote an APEX procedure to run over all my VARCHAR2 columns and change them to something like VARCHAR2(20 CHAR). However, I wasn't satisfied with this solution, partially because of what Patrick said about developers forgetting the full syntax, and partially because I read that some of the internal procedures (mainly those dealing with LOBs) do not support this character mode and always work in byte mode.
    Changing the nls_length_semantics parameter seems like a very good solution, mainly because, as Patrick wrote, " The big advantage is that you don't have to change any scripts or PL/SQL code."
    I'm just curious, what technique does APEX use to run on all the various single-byte and multi-byte character sets?
    Thanks,
    Arie.
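
    As an illustration of the length-semantics approach Patrick described, here is a rough sketch that switches a session to character semantics before running DDL over JDBC; the connection details and the table are placeholders, and note that ALTER SESSION only affects that one session:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CharSemanticsDdl {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/XE", "app_owner", "secret");
                 Statement stmt = conn.createStatement()) {
                // With CHAR semantics, VARCHAR2(20) means 20 characters, not 20 bytes.
                stmt.execute("ALTER SESSION SET NLS_LENGTH_SEMANTICS = 'CHAR'");
                // DDL run in this session now uses character semantics without
                // spelling out "CHAR" on every column definition.
                stmt.execute("CREATE TABLE demo_names (name VARCHAR2(20))");
            }
        }
    }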

  • Data access best practice

    The Oracle web site has an article about 9iAS best practices. Predefining column types in the select statement is one of the topics. The detail is as follows.
    3.5.5 Defining Column Types
    Defining column types provides the following benefits:
    (1) Saves a roundtrip to the database server.
    (2) Defines the datatype for every column of the expected result set.
    (3) For VARCHAR, VARCHAR2, CHAR and CHAR2, specifies their maximum length.
    The following example illustrates the use of this feature. It assumes you have
    imported the oracle.jdbc.* and java.sql.* interfaces and classes.
    //ds is a DataSource object
    Connection conn = ds.getConnection();
    PreparedStatement pstmt = conn.prepareStatement("select empno, ename, hiredate from emp");
    //Avoid a roundtrip to the database and describe the columns
    ((OraclePreparedStatement)pstmt).defineColumnType(1,Types.INTEGER);
    //Column #2 is a VARCHAR, we need to specify its max length
    ((OraclePreparedStatement)pstmt).defineColumnType(2,Types.VARCHAR,12);
    ((OraclePreparedStatement)pstmt).defineColumnType(3,Types.DATE);
    ResultSet rset = pstmt.executeQuery();
    while (rset.next())
    System.out.println(rset.getInt(1)+","+rset.getString(2)+","+rset.getDate(3));
    pstmt.close();
    Since I'm new to 9iAS, I'm not sure whether it's true that 9iAS really does an extra roundtrip to the database just to describe the column data types and then another roundtrip to get the data. Can anyone confirm this? Besides, the above example uses Oracle proprietary APIs.
    Is there any way to trace the db activities on the application server side without using enterprise monitor tool? Weblogic can dump all db activities to a log file so that they can be reviewed.
    thanks!
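
    On the tracing question, one generic option (not specific to 9iAS or WebLogic) is to wrap the JDBC Connection in a dynamic proxy that logs every SQL string handed to prepareStatement/prepareCall. A minimal sketch, with the class name being my own invention:

    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Proxy;
    import java.sql.Connection;

    public final class LoggingConnection {
        // Wraps a real Connection so the SQL text of every prepared statement is printed.
        public static Connection wrap(final Connection real) {
            return (Connection) Proxy.newProxyInstance(
                    Connection.class.getClassLoader(),
                    new Class<?>[] { Connection.class },
                    (proxy, method, args) -> {
                        String name = method.getName();
                        if (args != null && args.length > 0
                                && ("prepareStatement".equals(name) || "prepareCall".equals(name))) {
                            System.out.println("[SQL] " + args[0]);
                        }
                        try {
                            return method.invoke(real, args);
                        } catch (InvocationTargetException e) {
                            throw e.getCause(); // rethrow the original SQLException etc.
                        }
                    });
        }
    }

    Usage would simply be Connection conn = LoggingConnection.wrap(ds.getConnection()); the rest of the code stays unchanged.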

    Dear Srini,
    Data-level security is not an issue for me at all. I have already implemented it, and so far not a single bug has been caught in testing.
    It's about object-level security, and that too for 6 different types of user demanding different reports, i.e. the columns and detailed drill-downs are different.
    Again, these 6 types of users can be read-only users or power users (who can do ad hoc analysis), perhaps BICONSUMER and BIAUTHOR.
    So I need help regarding that, as we have to take a decision soon.
    thanks,
    Yogen

  • Native Toplink to EclipseLink to JPA - Migration Best Practice

    I am currently looking at the future technical stack of our developments, and would appreciate any advice concerning best-practice migration paths.
    Our current platform is as follows:
    Oracle 10g AS -> Toplink 10g -> Spring 2.5.x
    We have (approx.) 100 separate Toplink Mapping Workbench projects (one per DDD Aggregate object, in effect) and therefore 100 Repositories (or DAOs), some using Toplink code (e.g. Expression Builder, Class Extractors etc.) on top of the mappings to support the object-to-RDB (legacy) mismatch.
    Future platform is:
    Oracle 11g AS -> EclipseLink -> Spring 3.x
    Migration issues are as follows:
    Spring 3.x does not provide any Native Toplink ORM support
    Spring 2.5.x requires Toplink 10g to provide Native Toplink ORM support
    My current plan is as follows:
    1. Migrate Code and Mappings to use EclipseLink (as per Link:[http://wiki.eclipse.org/EclipseLink/Examples/MigratingFromOracleTopLink])
    2. Temporarily re-implement the Spring 2.5.x -> Toplink 10g support code to use EclipseLink (e.g. TopLinkDaoSupport etc.) to enable testing of this step.
    3. Refactor all Repositories/DAOs and Support code to use JPA engine (i.e. Entity Manager etc.)
    4. Move to Spring 3.x
    5. Move to 11g (when available!)
    Step 2 is only required to enable testing of the mapping changes, without changing to use the JPA engine.
    Step 3 will only work if my understanding of the following statement is correct (i.e. I can use the JPA engine to run native Toplink mappings and associated code):
    Quote:"Deployment XML files from Oracle TopLink 10.1.3 and above can be read by EclipseLink."
    Specific questions are:
    Is my understanding correct regarding the above?
    Is there any other path to achieve the goal of using 11g, EclipseLink (and Spring 3.x)?
    Is this achievable without refactoring all XML mappings from Native -> JPA?
    Many thanks for any assistance.
    Marc

    It is possible to use the native/MW TopLink/EclipseLink deployment xml files with JPA in EclipseLink, this is correct. You just need to pass a persistence property giving your sessions.xml file location. The native API is also still supported in EclipseLink.
    James : http://www.eclipselink.org
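
    For completeness, a rough sketch of the bootstrap code that goes with this, assuming the eclipselink.sessions-xml persistence property; the persistence unit name and the sessions.xml location are placeholders:

    import java.util.HashMap;
    import java.util.Map;

    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class NativeMappingsViaJpa {
        public static void main(String[] args) {
            // Point the JPA persistence unit at the deployment XML produced by the
            // TopLink Mapping Workbench (location is a placeholder).
            Map<String, String> props = new HashMap<>();
            props.put("eclipselink.sessions-xml", "META-INF/sessions.xml");

            EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("legacy-pu", props);
            // ... obtain EntityManagers and run queries as usual ...
            emf.close();
        }
    }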

  • New white paper: Character Set Migration Best Practices

    This paper can be found on the Globalization Home Page at:
    http://technet.oracle.com/tech/globalization/pdf/mwp.pdf
    This paper outlines the best practices for database character set
    migration that have been used successfully on behalf of hundreds of
    customers. Following these methods will help determine which strategies
    are best suited to your environment and will help minimize risk and
    downtime. The paper also highlights migration to Unicode.
    Many customers today are finding Unicode to be essential to supporting
    their global businesses.

    Sorry about that. I posted that too soon. It should become available today (Monday Aug 22nd).
    Doug

  • GRC AACG/TCG and CCG control migration best practice.

    Are there any best-practice documents which illustrate the step-by-step migration of AACG/TCG and CCG controls from the development instance to production? Also, how should one take a backup of the same?
    Thanks,
    Arka

    There are no automated out-of-the-box tools to migrate anything from CCG. In AACG/TCG you can export and import Access Models (which include the Entitlements) and Global Conditions. You will have to manually set up roles, users, path conditions, etc.
    You can't clone AACG/TCG or CCG.
    Regards,
    Roger Drolet
    OIC

  • Manage Portal data? Best-practice type thing

    Hello,
    I am looking at how best to set up a Portal system in production, in particular a
    good process to back up and re-create a Portal system.
    I list some possibilities below. Does anyone know if this is how it is typically
    done? Does this cover all the data which should be backed up / migrated?
    thanks!
    1- 'Entitlements' data. As far as I know, this is stored in the embedded LDAP.
    Can this be extracted ?
    2- DataSynch data.
    DataSynch web application.
    extract with ftp-like command
    upload as jar file
    3- Users and Groups.
    Export to a dat file. (Suddenly I forget how to do this, though I think I saw it
    somewhere.)

    Okay, and then use an RFC call from the Web Dynpro application to fetch data from the SAP database?
    This answered my question.
    Best regards
    Øyvind Isaksen

  • Data Mining Best Practices

    Our organization is just beginning to use Data Mining in BI. We're trying to understand the typical protocol for moving data models into production. Is it standard practice to create the data models directly in the Production system, or are these changes typically transported into the Production system? I have been unable to find any information on this in my research and would appreciate any input to help guide our decisions.
    Thanks,
    Nicole Daley

    Hi There,
    You're on the right track, here are a few additional guidelines:
    1. Determine your coverage levels along with the desired minimum data rate required by your application(s). Disabling lower data rates does have a significant impact on your coverage area.
    2. You have already prevented 802.11b clients by disabling 1, 2, 5.5 and 11 Mbps, so that piece is taken care of.
    3. Typically, we see deployments having the lowest enabled data rate set to mandatory. This allows for the best client compatibility. You can also have higher mandatory rates, but then you need to confirm that all client devices will in fact support those higher rates. (Most clients do, but there are some exceptions.) Worth noting here is that multicast traffic will be sent out at the highest mandatory data rate, so if you need higher-bandwidth multicast traffic, you may want to set another data rate (or rates) as mandatory.
    -Patrick Croak
    Wireless TAC

  • Help!  (Data Recovery best practices question)

    Recently my fiancée's MacBook (the first white model that used an Intel chipset) running 10.6.2 began to behave strangely (slow response time, hanging while applications launch, etc.). I decided to take an old external USB HD I had lying around and format it on my MBP in order to Time Machine her photos and iTunes library. Time Machine would not complete a backup and I could not get any of the folders to copy through Finder (various file-corrupt errors). I assumed it could be a permission issue, so I inadvertently fired up my 10.5 disk and did a permission repair. Afterwards the disk was even more flaky (which I believe was self-inflicted when I repaired with 10.5).
    I've since created a 10.6.2 bootable flash key and went out and bought Disk Warrior (4.2). I ran a directory repair and several Disk Utility repairs but was still unable to get the machine to behave properly (and unable to get Time Machine to complete). Attempting to run permission repairs while booted to USB or the Snow Leopard install disk resulted in it hanging at '1 minute remaining' for well over an hour. My next step was to re-install Snow Leopard, but the install keeps failing after the progress bar completes.
    As it stands now, the volume on the internal HD is not bootable and I'm running off my USB key boot drive, using 'cp -R *' in Terminal to copy her user folder onto the external USB hard drive. It seems to be working, but it's painfully slow (somewhere along the lines of maybe 10 MB per half hour, with 30 GB to copy). I'm guessing this speed has to do with my boot volume running off a flash drive.
    I'm thinking of running out and grabbing a FireWire cable and doing a target boot from my MBP, hoping that would be a lot faster than what I'm experiencing now. My question is, would that be the wisest way to go? My plan of action was to grab her pictures and music, then erase and reformat the drive. Is it possible that I could try something else with Disk Warrior? I've heard a lot of good things about it, but I fear that I did a number on it when I accidentally ran the 10.5 permission repair on the volume.
    Any additional help would be appreciated, as she has years of pictures on there that I'd hate to see her lose.

    That sounds like a sensible solution, although you need not replace the original drive. Install OS X on the external drive, boot from it and copy her data. Then erase her drive and use Disk Utility's Restore option to clone the external drive to the internal drive. If that works then she should continue using the external drive as a backup so the next time this happens she can restore from the backup.
    For next time: Repairing permissions is not a troubleshooting tool. It's rarely of any use and it does not repair permissions in a Home folder. If a system is becoming unresponsive or just slower then there's other things you should do. See the following:
    Kappy's Personal Suggestions for OS X Maintenance
    For disk repairs use Disk Utility. For situations DU cannot handle, the best third-party utilities are: Disk Warrior (DW only fixes problems with the disk directory, but most disk problems are caused by directory corruption; Disk Warrior 4.x is now Intel Mac compatible); TechTool Pro (provides additional repair options including file repair and recovery, system diagnostics, and disk defragmentation; TechTool Pro 4.5.1 or higher is Intel Mac compatible); and Drive Genius (similar to TechTool Pro in terms of the various repair services provided; versions 1.5.1 or later are Intel Mac compatible).
    OS X performs certain maintenance functions that are scheduled to occur on a daily, weekly, or monthly period. The maintenance scripts run in the early AM only if the computer is turned on 24/7 (no sleep.) If this isn't the case, then an excellent solution is to download and install a shareware utility such as Macaroni, JAW PseudoAnacron, or Anacron that will automate the maintenance activity regardless of whether the computer is turned off or asleep. Dependence upon third-party utilities to run the periodic maintenance scripts had been significantly reduced in Tiger and Leopard. These utilities have limited or no functionality with Snow Leopard and should not be installed.
    OS X automatically defrags files less than 20 MBs in size, so unless you have a disk full of very large files there's little need for defragmenting the hard drive. As for virus protection there are few if any such animals affecting OS X. You can protect the computer easily using the freeware Open Source virus protection software ClamXAV. Personally I would avoid most commercial anti-virus software because of their potential for causing problems.
    I would also recommend downloading the shareware utility TinkerTool System that you can use for periodic maintenance such as removing old logfiles and archives, clearing caches, etc. Other utilities are also available such as Onyx, Leopard Cache Cleaner, CockTail, and Xupport, for example.
    For emergency repairs install the freeware utility Applejack (not compatible with Snow Leopard.) If you cannot start up in OS X, you may be able to start in single-user mode from which you can run Applejack to do a whole set of repair and maintenance routines from the commandline. Note that AppleJack 1.5 is required for Leopard. AppleJack is not compatible with Snow Leopard.
    When you install any new system software or updates be sure to repair the hard drive and permissions beforehand. I also recommend booting into safe mode before doing system software updates.
    Get an external Firewire drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility. You can also make and maintain clones with good backup software. My personal recommendations are (order is not significant):
    1. Retrospect Desktop (Commercial - not yet universal binary)
    2. Synchronize! Pro X (Commercial)
    3. Synk (Backup, Standard, or Pro)
    4. Deja Vu (Shareware)
    5. Carbon Copy Cloner (Donationware)
    6. SuperDuper! (Commercial)
    7. Intego Personal Backup (Commercial)
    8. Data Backup (Commercial)
    9. SilverKeeper 2.0 (Freeware)
    10. MimMac (Commercial)
    11. Tri-Backup (Commercial)
    Visit The XLab FAQs and read the FAQs on maintenance, optimization, virus protection, and backup and restore.
    Additional suggestions will be found in Mac Maintenance Quick Assist.
    Referenced software can be found at www.versiontracker.com and www.macupdate.com.

  • Servlet - xml data storage best practice

    Hello - I am creating a webapp that is a combination of servlets and JSP. The app will access, store and manipulate data in an XML file. I hope to deploy and distribute the webapp as a WAR file. I have been told that it is a bad idea to assume that the XML file, if included in a directory of the WAR file, will be writable, as the servlet spec does not guarantee that WARs are "exploded" into real file space. For that matter, it does not guarantee that the file space is writable at all.
    So, what is the best idea for the placement of this XML file? Should I have users create a home directory for the XML file to sit in, so it can be guaranteed to be writable? And, if so, how should I configure the webapp so that it will know where this file is kept?
    Any advice would be gratefully welcomed...
    Paul Phillips

    Great Question, but I need to take it a little further.
    First of all, my advice is to use some independent home directory for the xml file that can be located via a properties file or the like.
    This will make life easier when trying to deploy to a server such as JBoss (with Catalina/Tomcat) which doesn't extract the WAR file into a directory. In that case you would need to access your XML file while it resides inside the WAR file. I haven't tried this (it sounds painful), but I suspect there may be security/access problems when trying to get a FileOutputStream on a file inside the WAR.
    Anyway... so I recommend an independent directory away from the hustle and bustle of the servers' directories. Having said that, I have a question in return: where do you put a newly created (on-the-fly) JSP that you want accessed via your webapp?
    In Tomcat it's easy... just put it in the tomcat/webapps/myapp directory, but this can't be done for JBoss with integrated Tomcat (jboss-3.0.0RC1_tomcat-4.0.3).
    Anyone got any ideas on that one?
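
    To make the independent-directory suggestion concrete, here is a rough sketch that resolves the writable XML file from a small properties file on the classpath, falling back to a system property; every name in it (webapp-config.properties, data.home, webapp.data.home, appdata.xml) is a placeholder to adapt:

    import java.io.File;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public final class DataFileLocator {
        // Resolves the writable XML data file from configuration kept outside the WAR.
        public static File locateDataFile() throws IOException {
            Properties props = new Properties();
            try (InputStream in = DataFileLocator.class
                    .getResourceAsStream("/webapp-config.properties")) {
                if (in != null) {
                    props.load(in);
                }
            }
            String dir = props.getProperty("data.home",
                    System.getProperty("webapp.data.home", System.getProperty("user.home")));
            File dataDir = new File(dir, "myapp-data");
            if (!dataDir.exists() && !dataDir.mkdirs()) {
                throw new IOException("Cannot create data directory: " + dataDir);
            }
            return new File(dataDir, "appdata.xml");
        }
    }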

  • Data Load Best Practice.

    Hi,
    I need to know the best way to load data from a source. Is SQL load the best way, or is using data files better? What are the inherent advantages and disadvantages of the two approaches?
    Thanks for any help.

    I have faced a scenario that I will explain here.
    I had an ASO cube; data was loaded from a text file on a daily basis, and the data volume was huge. There were problems in the data file as well as in the master file (the file used for dimension building).
    The data and master files contained special characters like ' , : ~ ` # $ %, blank spaces and tab spaces; even the ETL process could not remove them because they occur within the data itself.
    Sometimes comments or database errors were also present in the data file.
    I had problems building a rule file with a different delimiter; most of the time I would find the same character within the data that was being used as the delimiter, so it increased the number of data fields and Essbase gave an error.
    So I used a SQL table for the data load: a launch (staging) table is created and the data is populated into this table. All errors are removed there before the data is loaded into Essbase.
    That was my scenario (in this case I found the SQL load to be the better of the two options).
    Thanks
    Dhanjit G.
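
    For anyone hitting the same delimiter collisions, a rough sketch of scrubbing fields before they are written to a load or staging file; the tab delimiter and the set of characters being stripped are assumptions based on the post above:

    import java.util.List;

    public final class LoadFileSanitizer {
        private static final String DELIMITER = "\t";

        // Builds one load-file record, making sure no field can break the delimiter.
        public static String toRecord(List<String> fields) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < fields.size(); i++) {
                String clean = fields.get(i)
                        .replace("\t", " ")        // embedded tabs would create extra fields
                        .replaceAll("[~`#$%]", "") // characters reported as problematic above
                        .trim();
                if (i > 0) {
                    sb.append(DELIMITER);
                }
                sb.append(clean);
            }
            return sb.toString();
        }
    }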

  • Data Model best Practices for Large Data Models

    We are currently rolling out Hyperion IR 11.1.x and are trying to establish best practices for BQYs and how to display these models to our end users.
    So far, we have created an OCE file that limits the selectable tables to only those that are within the model.
    Then, we created a BQY that brings in the tables to a data model, created metatopics for the main tables and integrated the descriptions via lookups in the meta topics.
    This seems to be OK; however, any time I try to add items to a query, as soon as I add columns from different tables, the app freezes up, hogs a bunch of memory and then closes itself.
    Obviously, this isn't acceptable to hand to our end users as it is, so I'm asking for suggestions.
    Are there settings I can change to get around this memory issue? Do I need to use a smaller model?
    And in general, how are you all deploying this tool to your users? Our users are accustomed to a pre-built data model so they can just click, add the fields they want and hit submit. How do I get close to that ideal with this tool?
    thanks for any help/advice.

    I answered my own question. In the case of the large data model, the tool by default was attempting to calculate every possible join path to get from Table A to Table B (even though there is a direct join between them).
    In the data model options, I changed the join setting to use the join path with the least number of topics. This skipped the extraneous steps and allowed me to proceed as normal.
    hope this helps anyone else who may bump into this issue.
