DB2 to Oracle conversion best practices

My company is enhancing an existing application, adding a new J2EE web interface with DB2 as the database. I am new to J2EE. If we want to migrate the database to Oracle in the future, what are the best things to do now?
Which J2EE framework is good with respect to JDBC connectivity and a future migration from DB2 to Oracle (minimal changes at migration time)?
It is a medium-size application with 5,000 users. What other best practices should we follow during development, keeping the migration in mind? Thanks.
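
A minimal sketch of one common precaution, assuming a db.properties file, an app_users table, and the DB2 JCC driver class (none of which come from this thread): keep every vendor-specific detail (driver class, URL, credentials) in external configuration and stick to ANSI SQL with bind variables, so a DB2-to-Oracle switch is a configuration change rather than a code change.

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Properties;

public class PortableQuery {
    public static void main(String[] args) throws Exception {
        // All vendor specifics live in db.properties, e.g.:
        //   jdbc.driver=com.ibm.db2.jcc.DB2Driver
        //   jdbc.url=jdbc:db2://dbhost:50000/APPDB
        // Migrating to Oracle means editing this file, not the code.
        Properties cfg = new Properties();
        try (FileInputStream in = new FileInputStream("db.properties")) {
            cfg.load(in);
        }
        Class.forName(cfg.getProperty("jdbc.driver"));
        try (Connection con = DriverManager.getConnection(
                cfg.getProperty("jdbc.url"),
                cfg.getProperty("jdbc.user"),
                cfg.getProperty("jdbc.password"));
             // Stick to ANSI SQL and bind variables; avoid vendor-only syntax.
             PreparedStatement ps = con.prepareStatement(
                 "SELECT user_id, user_name FROM app_users WHERE user_id = ?")) {
            ps.setInt(1, 42);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("user_id") + " "
                            + rs.getString("user_name"));
                }
            }
        }
    }
}

Frameworks that keep SQL and connection details out of the code (a DAO layer, or an ORM with a configurable dialect) extend the same idea.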

Yes, you should log in as SYSTEM, create a user (appowner, or whatever you call it), and assign that user a default tablespace of USERS or whatever tablespace you decide. Then grant that user all the privileges needed to create objects, i.e., CREATE TABLE, CREATE PROCEDURE, CREATE SYNONYM, etc.
Then log out as SYSTEM, log in as appowner, and do all your object creation from there.
A user is a set of credentials that allow you access to the system. It defines your identity and your privileges and authority to do various things. A schema is the set of objects owned by a particular user. As soon as a user owns at least one object, that implicitly defines his schema. It's not possible for a user to own or control multiple schemas. If you want multiple schemas, that's fine, but you'll need multiple users, and each user will manage his own schema.
Hope that's clear,
-Mark
PS I strongly suggest you review the Concepts Guide; it really is quite good. It can be found here: http://download.oracle.com/docs/cd/E11882_01/server.112/e10713/toc.htm
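
A JDBC sketch of the steps above, run as SYSTEM; the user name, password, tablespace, and connection URL are placeholders, not values from this thread.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateAppOwner {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XE", "system", "manager");
             Statement st = con.createStatement()) {
            // Create the application owner with a default tablespace...
            st.execute("CREATE USER appowner IDENTIFIED BY secret "
                    + "DEFAULT TABLESPACE users QUOTA UNLIMITED ON users");
            // ...and grant it the privileges needed to create its objects.
            st.execute("GRANT CREATE SESSION, CREATE TABLE, CREATE PROCEDURE, "
                    + "CREATE SYNONYM, CREATE VIEW, CREATE SEQUENCE TO appowner");
        }
        // From here on, connect as appowner and create all objects there.
    }
}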

Similar Messages

  • Oracle BPM Best Practices

    Hi all,
    Does anybody have any information on the Oracle BPM Best Practices?
    Any guide?

    All,
    I was trying to find a developer's guide for using Oracle BPM Suite (11g). I found the one at the following link; however, this looks like a pretty detailed one...
    http://download.oracle.com/docs/cd/B31017_01/integrate.1013/b28981/toc.htm
    Can someone help me find any other flavors of the developer's guide? I am looking for the following...
    1. Methods of work - Best Practices for design and development of BPM process models.
    2. Naming Conventions for Process Modeling - Best Practices
    3. Coding standards for Process Modeling (J Developer)
    4. Guide with FAQ's for connecting / Publishing Process Models to the MDS Database.
    5. Deployment Standards - best practices....
    6. Infrastructure - Recommendations for Scale out deployment in Linux v/s Windows OS.
    Regards,
    Dinesh Reddy

  • NLS data conversion – best practice

    Hello,
    I have several tables that originate from a database with a single-byte character set. I want to load the data into a database with a multi-byte character set like UTF-8 and, in the future, be able to use the Unicode version of Oracle XE.
    When I'm using DDL scripts to create the tables on the new database, and after that trying to load the data, I receive a lot of error messages regarding the size of the VARCHAR2 fields (which, of course, makes sense).
    As I understand it, I can solve the problem by doubling the size of the VARCHAR2 fields: VARCHAR2(20) will become VARCHAR2(40) and so on. Another option is to use the NVARCHAR2 datatype and retain the correlation with the number of characters in the field.
    I never used NVARCHAR2 before, so I don't know if there are any side effects on the pre-built APEX processes like Automatic DML, Automatic Row Fetch and the like, or on the APEX data import mechanism.
    What would be the best practice solution for APEX?
    I'll appreciate any comments on the subject,
    Arie.
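
    To see where the VARCHAR2 size errors come from: with BYTE length semantics, VARCHAR2(20) holds 20 bytes, and a UTF-8 (AL32UTF8) character can occupy more than one byte. A small standalone Java illustration (the sample strings are arbitrary):

    import java.nio.charset.StandardCharsets;

    public class ByteVsChar {
        public static void main(String[] args) {
            String ascii = "abcdefghij";                  // 10 characters
            String hebrew = "\u05e9\u05dc\u05d5\u05dd";   // 4 characters
            // VARCHAR2(n) with BYTE semantics counts these byte lengths:
            System.out.println(ascii.getBytes(StandardCharsets.UTF_8).length);  // 10
            System.out.println(hebrew.getBytes(StandardCharsets.UTF_8).length); // 8
            // With CHAR semantics (nls_length_semantics=CHAR) the column
            // counts characters instead: 10 and 4.
        }
    }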

    Hello,
    Thanks Maxim and Patrick for your replies.
    I started to answer Maxim when Patrick's post came in. It's interesting, as I tried to change this nls_length_semantics parameter once before, but without any success. I even wrote an APEX procedure to run over all my VARCHAR2 columns and change them to something like VARCHAR2(20 CHAR). However, I wasn't satisfied with this solution, partially because of what Patrick said about developers forgetting the full syntax, and partially because I read that some of the internal procedures (mainly with LOBs) do not support this character mode and always work in byte mode.
    Changing the nls_length_semantics parameter seems like a very good solution, mainly because, as Patrick wrote, " The big advantage is that you don't have to change any scripts or PL/SQL code."
    I'm just curious: what technique does APEX use to run on all the various single-byte and multi-byte character sets?
    Thanks,
    Arie.

  • Oracle Statistics - Best Practice?

    We run stats with brconnect weekly:
    brconnect -u / -c -f stats -t all
    I'm trying to understand why some of our stats are old or stale.  Where's my gap?  We are running Oracle 11g and have Table Monitoring set on every table.  My user_tab_modifications is tracking changes in just over 3,000 tables.  I believe that when those entries surpass 50% changed, they will be flagged for the above brconnect to update their stats.  Correct?
    Plus, we have our DBSTATC entries.  A lot of those entries were last analyzed some 10 years ago.  Does the above brconnect consider DBSTATC at all?  Or do we need to regularly run the following as well?
    brconnect -u / -c -f stats -t dbstatc_tab
    I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    SQL> select count(*) from dba_tab_statistics
      2  where owner = 'SAPR3' and stale_stats = 'YES';
      COUNT(*)
          1681
    I realize that stats last analyzed some ten years ago are not necessarily no longer good, but I am curious whether the weekly stats collection we are doing is sufficient.  Any best practices for me to consider?  Is there some kind of one-time scan I should do to check the health of all stats?

    Hi Richard,
    > We are running Oracle 11g and have Table Monitoring set on every table.
    The table monitoring attribute is not necessary anymore; better said, it is deprecated, because these metrics are controlled by STATISTICS_LEVEL nowadays. The table monitoring attribute is only relevant for Oracle versions lower than 10g.
    > I believe that when those entries surpass 50% changed, then they will be flagged for the above brconnect to update their stats.  Correct?
    Correct, if the BR*Tools parameter stats_change_threshold is set to its default. Brconnect reads the modifications (number of inserts, deletes, and updates) from DBA_TAB_MODIFICATIONS and compares the sum of these changes to the total number of rows. It gathers statistics if the number of changes is larger than stats_change_threshold.
    > Does the above brconnect consider DBSTATC at all?
    Yes, it does.
    > I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    The column STALE_STATS in view DBA_TAB_STATISTICS is calculated differently. This flag is used by the standard Oracle DBMS_STATS implementation, which is not considered by SAP; for more details, check the Oracle documentation "13.3.1.5 Determining Stale Statistics".
    The GATHER_DATABASE_STATS or GATHER_SCHEMA_STATS procedures gather new statistics for tables with stale statistics when the OPTIONS parameter is set to GATHER STALE or GATHER AUTO. If a monitored table has been modified more than 10%, then these statistics are considered stale and gathered again.
    STALE_PERCENT determines the percentage of rows in a table that have to change before the statistics on that table are deemed stale and should be regathered. The valid domain for STALE_PERCENT is non-negative numbers. The default value is 10%. Note that if you set STALE_PERCENT to zero, the AUTO STATS gathering job will gather statistics for this table every time a row in the table is modified.
    SAP has its own mechanism (as described above with brconnect and stats_change_threshold) to identify stale statistics and to decide how to collect them (percentage, histograms, etc.), and does not use or rely on the corresponding Oracle default mechanism.
    > Any best practices for me to consider?  Is there some kind of onetime scan I should do to check the health of all stats?
    No performance issues? No additional and unnecessary load on the system (e.g., dynamic sampling)? No brconnect runtime issues? Then you don't need to think about the brconnect implementation or special settings. Sometimes you need to tweak it (e.g., histograms, sample sizes), but then you have a specific issue that needs to be solved.
    Regards
    Stefan
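
    As an illustration of the check Stefan describes (a sketch, not brconnect's actual implementation), this JDBC snippet flags tables whose accumulated changes exceed the 50% default of stats_change_threshold; the connection URL, credentials, and the SAPR3 owner are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class StaleCheck {
        public static void main(String[] args) throws Exception {
            final double threshold = 0.50; // default stats_change_threshold
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/SID", "system", "secret");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT m.table_name, m.inserts + m.updates + m.deletes, t.num_rows "
                   + "FROM dba_tab_modifications m JOIN dba_tables t "
                   + "ON t.owner = m.table_owner AND t.table_name = m.table_name "
                   + "WHERE m.table_owner = ?")) {
                ps.setString(1, "SAPR3");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        long changes = rs.getLong(2);
                        long rows = Math.max(rs.getLong(3), 1); // guard against 0
                        if ((double) changes / rows > threshold) {
                            System.out.println(rs.getString(1) + " needs new stats");
                        }
                    }
                }
            }
        }
    }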

  • Oracle VM Best Practices

    Hello,
    Can anybody share best practices for an Oracle VM implementation, and the test plan executed to validate the implementation before moving to production?
    Thanks in advance
    Dani

    NAME TARGET STATE SERVER STATE_DETAILS
    Local Resources
    ora.DATA.dg
    ONLINE ONLINE orarac001
    ONLINE ONLINE orarac002
    ora.LISTENER.lsnr
    ONLINE ONLINE orarac001
    ONLINE ONLINE orarac002
    ora.asm
    ONLINE ONLINE orarac001 Started
    ONLINE ONLINE orarac002 Started
    ora.eons
    ONLINE ONLINE orarac001
    ONLINE ONLINE orarac002
    ora.gsd
    OFFLINE OFFLINE orarac001
    OFFLINE OFFLINE orarac002
    ora.net1.network
    ONLINE ONLINE orarac001
    ONLINE ONLINE orarac002
    ora.ons
    ONLINE ONLINE orarac001
    ONLINE ONLINE orarac002
    ora.registry.acfs
    ONLINE ONLINE orarac001
    ONLINE ONLINE orarac002
    Cluster Resources
    ora.LISTENER_SCAN1.lsnr
    1 ONLINE ONLINE orarac001
    ora.LISTENER_SCAN2.lsnr
    1 ONLINE ONLINE orarac002
    ora.LISTENER_SCAN3.lsnr
    1 ONLINE ONLINE orarac002
    ora.oc4j
    1 OFFLINE OFFLINE
    ora.orarac001.vip
    1 ONLINE ONLINE orarac001
    ora.orarac002.vip
    1 ONLINE ONLINE orarac002
    ora.orcl.db
    1 ONLINE ONLINE orarac001 Open
    2 ONLINE ONLINE orarac002 Open
    ora.scan1.vip
    1 ONLINE ONLINE orarac001
    ora.scan2.vip
    1 ONLINE ONLINE orarac002
    ora.scan3.vip
    1 ONLINE ONLINE orarac002
    I'm assuming the "offline" ones are bad :)
    EDIT:
    it turns out that the offlines in the list are completely normal for 11gR2
    Anyone have any idea what's going on?
    Thanks!

  • UOM Conversion Best Practices

    Hello All-
    Could someone provide some best practices for converting between different units of measure in MDM?
    We have an English repository with several numeric attributes defined in English units (inches, horsepower, etc.). We want to add a second language that inherits these measurements but converts them to metric (cm, kW, etc.).
    From what I can see, neither the MDM clients (except Publisher) nor the Java API support this dynamic conversion. That leads me to believe that we have to programmatically convert the units as we access the data via the Java API.
    Is there built-in functionality to handle this?
    thanks
    Tim
    MDM 5.5 SP06 Patch 1

    Hello Tim,
    By using the MDM UOM Manager you can maintain system dimensions and user-defined dimensions.
    System dimension:
    You can modify system units.
    You can add new user-defined units to a system dimension.
    User-defined dimension:
    You can add a new user-defined dimension.
    You can add new user-defined units.
    After unloading and reloading the MDM repository, MDS builds new indexes for the added/modified UOMs, and the MDM API treats those changes just as it treats the system MDM dimensions and units.
    You can utilise the "new" MDM java API to convert values from one dimension unit to another:
    Package: com.sap.mdm.util
    Class: MeasurementUtils
    Method: public static double convertMeasurementValue(DimensionId dimensionId,
                                                 UnitId unitIdFrom,
                                                 UnitId unitIdTo,
                                                 double value,
                                                 DimensionsManager dimensionManager)
    Good luck, Nimrod.
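
    A hedged usage sketch for the method above. Only MeasurementUtils.convertMeasurementValue and its signature come from this post; the import locations of DimensionId, UnitId, and DimensionsManager, and how those instances are obtained, are assumptions to verify against the MDM API javadoc:

    // Sketch only: imports for DimensionId, UnitId and DimensionsManager are
    // omitted because their package locations are an assumption; see the javadoc.
    import com.sap.mdm.util.MeasurementUtils;

    public class UomConversion {
        // Convert a stored English value (e.g. inches) to its metric counterpart.
        static double toMetric(DimensionId lengthDimension,
                               UnitId inch, UnitId centimeter,
                               double valueInInches,
                               DimensionsManager dimensionsManager) {
            return MeasurementUtils.convertMeasurementValue(
                    lengthDimension, inch, centimeter,
                    valueInInches, dimensionsManager);
        }
    }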

  • DB2 to Oracle conversion using SQL Developer Migration Wizard - different schemas

    I am performing a conversion from DB2 to Oracle 11 XE using the SQL Developer Migration Wizard. Specifically, I am trying to migrate the DB2User schema over to Oracle.
    Using the migration wizard, when I pick the Oracle target connection to be the same schema (the DB2User schema), the migration is successful and all data is converted.
    However, if I pick a different Oracle target connection (say, OracleUser), I run into issues.
    Firstly, the table schema is not created. When I check the project output directory, the .out file has the following errors:
       CREATE USER DB2User IDENTIFIED BY DB2User DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP
            SQL Error: ORA-01031: insufficient privileges
            01031. 00000 -  "insufficient privileges"
        connect DB2User/DB2User
        Error report:
        Connection Failed
        Commit
        Connection created by CONNECT script command disconnected
    I worked around this by manually executing the .sql in the project output directory using the OracleUser id  in the new DB.
    Then I continued with the migration wizard and performed the Move Data step.
    Now the message appears as successful; however, when I review the Migrationlog.xml file, I see errors as follows:
    <level>SEVERE</level>
      <class>oracle.dbtools.migration.workbench.core.logging.MigrationLogUtil</class>
      <message>Failed to disable constraints: Data Move</message>
      <key>DataMove.DISABLE_CONSTRAINTS_FAILED</key>
      <catalog>&lt;null&gt;</catalog>
      <param>Data Move</param>
      <param>oracle.dbtools.migration.workbench.core.logging.LogInfo@753f827a</param>
      <exception>
        <message>java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist</message>
      <level>WARNING</level>
      <class>oracle.dbtools.migration.datamove.online.TriggerHandler</class>
      <message>ORA-01031: insufficient privileges
    </message>
    I think what is happening is that the wizard is attempting to perform the 'move data' process using the DB2User id.
    How do I tell the wizard that the target schema is different from my source schema?
    My requirement is that I need to be able to migrate the DB2User schema to different schemas in the same Oracle database
    ( since we will have multiple test environments under the same database ) .
    Thanks in advance .
    K.

    Perhaps the following from the SQL Developer documentation is helpful for you:
    Command-Line Interface for Migration
    As an alternative to using the SQL Developer graphical interface for migration operations, you can use the migration batch file (Windows) or shell script (Linux) on the operating system command line. These files are located in the sqldeveloper\sqldeveloper\bin folder or sqldeveloper/sqldeveloper/bin directory under the location where you installed SQL Developer.
    migration.bat or migration.sh accepts these commands: capture, convert, datamove, delcaptured, delconn, delconverted, driver, generate, guide, help, idmap, info, init, lscaptured, lsconn, lsconverted, mkconn, qm, runsql, and scan. For information about the syntax and options, start by running migration without any parameters at the system command prompt. For example:
    C:\Program Files\sqldeveloper\sqldeveloper\bin>migration
    You can use the -help option for information about one or more actions. For the most detailed information, including some examples, use the -help=guide option. For example:
    C:\Program Files\sqldeveloper\sqldeveloper\bin>migration -help=guide
    Regards
    Wolfgang

  • I want to create checklist for rpd that tells about best practices

    Hi all,
    I want to create a checklist for the RPD that covers all the best practices we have to follow.
    We have to write a script, and the checklist has to be created based on that script alone.
    Thanks in advance

    Hi,
    Please refer to the following link...
    http://www.peakindicators.com/media_pi/Knowledge/25%20-%20twenty%20golden%20rules%20for%20rpd%20design.pdf
    Thanks,
    Jprakash

  • Set filter criteria on page 1 for page 2 OData model - "best practice"?

    Hello, I have a problem with an app where I want to filter data on a second page based on settings from the first page. I use an OData model.
    The collections on both pages are not related in terms of "navigation" properties; that is my problem, and I cannot change the data source...
    So I am looking for ideas/best practices to solve this, because sometimes my filtering doesn't work; the following problem occurred: Request aborted.
    I have a page with a sap.m List with items="{/tabWorkPlace}" and a local JSON model where I store relevant data during the app lifecycle.
    handleListSelect - first page
    var context = evt.getParameter("listItem").getBindingContext();
    var dataModel = sap.ui.getCore().getModel("dataModel");
    var workplace = context.getProperty("WORKPLACE_ID");
    dataModel.setProperty("/WORKPLACE_ID", workplace);
    this.nav.to("SubMaster", context);
    The general App.controller.js handles the nav.to function:
    var app = this.getView().app;
    var page = app.getPage(pageId);
    if (pageId == "secondPage") {
         page.getController().filterData();
    }
    And the controller of the second page:
    filterData: function() {
    var oModel = sap.ui.getCore().getModel("odata");
    var dataModel = sap.ui.getCore().getModel("dataModel");
    var workplace = dataModel.getProperty("/WORKPLACE_ID");
    var items = this.getView().byId("list");
    var oFilter = new sap.ui.model.Filter("WORKPLACE_ID", sap.ui.model.FilterOperator.EQ, workplace);
    items.getBinding("items").filter(oFilter);
    }
    I don't write this code in the onInit() or beforeRendering() functions, because they are called only once and I am navigating back and forth between the two pages; the pages are created only once and "just" the data is changed.
    The desired page looks like this - with an other collection bound to it:
    <List
      id="list"
      select="handleListSelect"
      items="{/tabWorkstep_Status}"
    >
    But when I call it, the request gets aborted:
    The following problem occurred: Request aborted
    But despite the fact that the request is aborted, the list on the second page is filtered!
    The filter criteria for the model work when I type them into the browser as a URL. Maybe this fails because the data binding for the list didn't take place at this phase?
    I use this pattern (filter criteria on one page, result on the second page) several times. (A data model with navigation properties would be better, but I cannot change it.)
    But in another constellation the filtering doesn't work - same error: the following problem occurred: Request aborted.
    I also don't want to change the pattern (page 1 to page 2) into popup lists or these fancy new filtering possibilities, because it is not suitable for my use case.
    Is there maybe a more elegant solution? Sometimes filtering works, sometimes it doesn't... do I have an error in my solution (general approach)?
    Many thanks for any input!
    BR,
    Denise

    Hello, yeah, you are right, but it works without the "odata>" prefix because of this in App.controller.js:
    var uri = "http://localhost:32006/JsonOdataService.svc";
    var oModelMS = new sap.ui.model.odata.ODataModel(uri);
    sap.ui.getCore().setModel(oModelMS, "odata");
    oView.setModel(oModelMS);
    So my question is: how do I navigate from one page to another, and on the other page first bind a collection to a select, and then, on selection, bind certain elements (a text field) to the selected, filtered entity?
    The approach with context and binding won't work, because the two collections don't have a navigation/association property between them...
    So for example:
    page1
    Select a list item with property color: red and year 1985. Press one of the buttons and pass these criteria to another page.
    page 2:
    Show a dropdown box with all car names which fulfill these criteria, and when one car is selected, display the data for THIS car in several text fields.
    This is not a master->detail navigation example, because on page 1 I select certain criteria, and then with buttons I navigate to several pages with those criteria.
    But since the OData model has no relationships, it is really hard to do manually... With a dummy mock.json, as in DJ Adams' Fiori-like SAPUI5 apps, it is no problem... But with OData and nothing related to each other, it is hard...

  • Oracle 10G Best practice Installation

    Hi all,
    Does somebody have a document on Oracle 10g tuning in Solaris 10?
    Thanks

    Oops, sorry, that's best practices and not tuning. But there may be some stuff in there.

  • Oracle Cluster Best Practice

    Is there a "Best Practice" to follow concerning Oracle and clustering. Currently we are using VCS trying to cluster a box running one Oracle engine and multiple instances. This is not working well. Is it best to cluster a box running one Oracle engine and one instance?, or is the multi-instance thing ok? Also, is VCS the best solution? Please respond to my email below.
    TIA
    James Qualls
    [email protected]

    Is there a "Best Practice" to follow concerning Oracle and clustering. Currently we are using VCS trying to cluster a box running one Oracle engine and multiple instances. This is not working well. Is it best to cluster a box running one Oracle engine and one instance?, or is the multi-instance thing ok? Also, is VCS the best solution? Please respond to my email below.
    TIA
    James Qualls
    [email protected]

  • Trade offs for spreading oraganizatons across suffixes - best practices?

    Hey everyone, I am trying to figure out some best practices here. I've looked through the docs but have not found anything that quite touches on this.
    In the past, here is how I created my directory (basically using dsconf create-suffix for each branch I needed):
    dsconf list-suffixes
    dc=example,dc=com
    ou=People,dc=example,dc=com
    ou=Groups,dc=example,dc=com
    o=Services,dc=example,dc=com
    ou=Groups,o=Services,dc=example,dc=com
    ou=People,o=Services,dc=example,dc=com
    o=listserv,dc=example,dc=com
    ou=lists,o=listserv,dc=example,dc=com
    A few years later, after learning more and setting up replication, it seems I may have made my life a bit more complicated than it should be. It seems I would need many more replication agreements to get every branch of the tree replicated. It also seems that different parts of the directory are stored in different backend database files.
    It seems like I should have something like this:
    dsconf list-suffixes
    dc=example,dc=com
    Instead of creating all the branches as suffixes or sub-suffixes, maybe I should have just created organization and organizational unit entries within a single suffix, "dc=example,dc=com". This way I can replicate all data by replicating just one suffix. Is there a downside to having one backend db file containing all the data instead of spreading it across multiple files (we're talking possibly 90K entries across the entire directory)?
    Can anyone confirm the logic here or provide any insight?
    Thanks much in Advance,
    Deejam

    Well, there are a couple of dimensions to this question. The first is simply whether your DIT ought to have more or less depth. This is an old design debate that goes back to problems with changing DNs in X.500-style DITs with lots of organizational information embedded in the DN. Nowadays DITs tend to be flatter, even though there are more tools for renaming entries. You still can't rename entries across backends, though. The second dimension is, given a DIT, how you should distribute the containers in your DIT across the backend databases.
    As you have already determined, the principal design consideration for your backend configuration will be replication, though scalability and backup configuration might also come into it. From what you have posted, though, it does not look like you have that much data. So yes, you should configure database backends and associated suffixes with sufficient granularity to support your replication requirements. So, if a particular suffix needs to be replicated differently than another suffix, they need to be defined as distinct suffixes/backends. Usually we define the minimal number of suffixes and backends needed to satisfy the topological requirements, though I can imagine there might be cases where suffixes might be more fine grained.
    For large, extensible Directory topologies, I usually look for data that's sensibly divisible into "building blocks". So for instance you might have a top-level suffix "dc=example,dc=com" with a bunch of global ACIs, system users and groups that are going to need to be everywhere. Then you might have a large chunk of external customer data, and a small amount of internal employee data. I would consider putting the external users in a distinct suffix from the employees, because the two types of entries are likely to be quite different. If I have a need to build a public Directory somewhere, all I have to do is configure the external suffix and replicate it. The basic question I would be asking there is if I might ever need to expose a subset of the Directory, will the data already be partitioned for me or will I have to do data reorganization.
    In your case, it does not look likely you will need to chop up your data much, so it's probably simpler to stay monolithic and use only one backend.

  • Looking for a hardening guide or best practices in production for WLS 8.1.6

    Hi,
    I'm working to deliver to my government customer best practices in the form of a hardening guide that conforms to NIST SP800-44. I am aware of http://edocs.bea.com/wls/docs81/lockdown/practices.html which adds some great operational tips.
    Is anyone aware of other resources, or has delivered anything similar to their customer? I would greatly appreciate any guidance here.
    Thanks,
    Rich

    Hi, I would take a guide that covers any version, or other Oracle products.
    -Rich

  • Oracle URM - best practices?

    Does anyone know where I can find best practices for RM?
    Thanks in advance.

    Not sure about best practices, but I have created a demo script which can guide you through a few basic concepts. Drop me an email ([email protected]) and I can send you a copy.

  • Oracle's best practice for avoiding concurrency data problems

    What is Oracle's recommendation to avoid concurrency problems in the database? Is the timestamp data type good enough to avoid having a record updated twice?
    Any feedback is welcome.

    It means you need to lock the records all by yourself.
    Three months ago, I tried to simulate Oracle Developer/2000 Forms 6i, and I found that it uses "select .. for update nowait where .." to lock the record in the Text_Change event. Then I did the same, and it seems to work OK now in my VB.NET program.
    Jimy Ho
    [email protected]
