Prerequisites required for ASM Migration

Hi ,
Could you please let me know the prerequisites required before going ahead with an ASM implementation?
Also, are there any best practices for disk group configuration (I mean, how many groups for good performance)?
Details of our database:
1) Oracle version : 11.2.0.3
2) OS : Sun solaris 10
3) DB size : 1 TB
I know ASM is part of the Grid Infrastructure, but the current setup was handed over from a different company and we now plan to implement ASM here.
Could you please help me out?
Regards,
Steven

It depends. I'm sure that's what they would tell you in the Oracle training class. :)
For starters Oracle recommends two diskgroups named DATA and FRA. If you are using RAC then you will probably create a third diskgroup named CRS.
The redundancy level depends on your storage. If the storage array is providing RAID protection, then you can configure the ASM diskgroups with External Redundancy. But this is not always the case. You might want to mirror between an external SAN and an internal ioDrive using ASM Normal Redundancy and then set the parameter asm_preferred_read_failure_groups to ensure all reads come from the flash drive. So you see there are countless possibilities that make us say "it depends".
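As a sketch of that flash-preferred setup (the diskgroup and failure group names here are hypothetical), on the ASM instance you would set something like:

```sql
-- Hypothetical names: diskgroup DATA mirrored across failure groups FLASH
-- (the internal ioDrive) and SAN (the external array). Direct this
-- instance's reads to the flash failure group:
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.FLASH' SCOPE=BOTH;
```

The parameter takes a comma-separated list of diskgroup.failure_group pairs, one entry per diskgroup; it exists precisely for mixed-speed mirrors like this one (and for stretch clusters).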
In larger organizations with Enterprise Storage it is rarely so simple. Recall that a diskgroup is a collection of like storage. If you have LUNs of different sizes and performance levels, then you should group the LUNs according to those attributes. Each grouping becomes a diskgroup. For example, never mix RAID 5 LUNs from a Clariion with RAID 5 LUNs from a Symmetrix, and never mix any of those with RAID 10 LUNs. Also, within a diskgroup you want all storage of the same size due to the impact on striping and mirroring. This is all discussed in the Oracle on-line documentation.
If your storage team carves up the array into different classes of storage, then you will probably want to create separate ASM diskgroups. The I/O profiles are completely different for log files versus datafiles, and a well trained storage admin knows how to address each case. So now, if the storage admin gives you high performance LUNs designed for redo logs, lower performance LUNs marked for archivelogs, and a third set of LUNs marked for datafiles, then you will want to create three separate ASM diskgroups named DATA, REDO, and FRA to isolate the storage based on performance characteristics.
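A minimal sketch of those three diskgroups, assuming the array provides the RAID protection (hence External Redundancy) and using placeholder device paths:

```sql
-- LUN paths are placeholders; one diskgroup per storage class.
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/rdsk/data_lun1', '/dev/rdsk/data_lun2';
CREATE DISKGROUP REDO EXTERNAL REDUNDANCY DISK '/dev/rdsk/redo_lun1', '/dev/rdsk/redo_lun2';
CREATE DISKGROUP FRA  EXTERNAL REDUNDANCY DISK '/dev/rdsk/fra_lun1',  '/dev/rdsk/fra_lun2';
```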
Oftentimes with Enterprise Storage you cannot avoid I/O conflicts, so there's no point in separating Oracle's files into their own diskgroups (data, index, redo, etc.). Back on the array you will find the stripes overlay each other. And the array internally ties the storage devices together with either copper or fiber loops, so the I/Os end up mixed and competing for resources. All I am saying is: talk to your storage team and see if they think it will make any difference to separate tables and indexes into different files, given the way they set up the array.
For a small 1 TB database like yours I generally use two ioDrive 1.2 TB PCIe flash memory cards and create one ASM diskgroup named DATA with Normal Redundancy. Bang, done. RAID 10 protection at 5x the performance of a well tuned SAN, and I don't have to wait two weeks for the Enterprise Storage team to carve up LUNs. This takes a great load off the SAN and helps the other enterprise users with their performance issues, and frees up capacity thereby extending the life of the enterprise storage.
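For that two-card layout, the diskgroup creation might look like this (the device names are assumptions for how the cards are presented to the OS):

```sql
-- Two flash cards as two failure groups; Normal Redundancy mirrors
-- every extent across them, giving the RAID 10-style protection.
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  FAILGROUP card1 DISK '/dev/rdsk/flash1'
  FAILGROUP card2 DISK '/dev/rdsk/flash2';
```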
The above "answers" are a very generic introduction to planning ASM storage. If you have more specific questions, let us know.
-R

Similar Messages

  • What all the prerequisites required for Combined Delivery & Invoice Process

    Dear Guru's
    Can you please tell me all the prerequisites required for the Combined Delivery Process (VL10A / VL10C) and the Combined Invoice Process?
    Regards,
    Varma

    Hi ,
    For combined deliveries you have to make some settings:
    In the customer master (XD01/VD01), check the ORDER COMBINATION indicator on the Shipping tab.
    In the copy controls from order to delivery (VTLA), maintain combination requirement 051 at the header level.
    And also the data like
    INCO TERMS,
    SHIP TO PARTY
    DELIVERY DATE
    ROUTE
    SHIPPING POINT
    should be same for all the Orders
    To combine the deliveries into one invoice,
    you have to maintain the copy controls from delivery to billing (VTFL) with copying requirement 003 at the item level.
    And also certain data like
    PAYMENT TERMS
    PAYER
    BILLING DATE
    ACTUAL GI DATE
    INCO TERMS
    should be same for all the deliveries
    Please revert if you need anything more.
    regards,
    santosh

  • Filesystem to ASM migration in streams setup

    Hi All,
    We have a Bi Directional replication setup between databases A and B.
    Database A [ 10.2.0.4 ] is already in ASM
    Database B [ 10.2.0.4 ] was on a filesystem and we have now migrated it to ASM.
    Steps taken...
    1. Stopped Bi Directional replication
    2. Migrated Database B to ASM [ this includes temp and redo log files - we dropped and recreated the INACTIVE redo logs ]. All datafiles, temp files, redo logs, and the archive log destination were changed to the ASM disk group.
    3. Started the Bi directional streams
    4. Updates from Database A are coming to B without any issues and the apply process is updating the Database B tables.
    5. But the Database A apply process is not applying the changes made in Database B.
    All the capture, propagation and apply processes are running on both sides.
    Please advise.
    Thanks & Regards,
    Rakesh
    Edited by: Oracleappsdba on Jan 14, 2011 9:28 AM

    Hi All,
    We found that the propagation is ending in error with
    ORA-00604: error occurred at recursive SQL level 4
    ORA-12154: TNS:could not resolve the connect identifier specified
    ORA-06512: at "SYS.DBMS_AQADM_SYS", line 1087
    ORA-06512: at "SYS.DBMS_AQADM_SYS", line 7639
    ORA-06512: at "SYS.DBMS_AQADM", line 631
    ORA-06512: at line 1
    This is happening only after the ASM migration. We are able to connect to the target database over the DB link directly from the user without any issues. It errors only through the propagation process.
    Please advise.
    Thanks & regards,
    Rakesh

  • Prerequisites required to set Deletion Flag for an Order

    Hi Experts,
    I need to know all the prerequisites required to set the deletion flag for an order.
    i.e. can I set the deletion flag for any existing order?
    Kavin

    Hi Kevin,
    The system will allow you to set the deletion flag if:
    - No goods movement has been carried out.
    - No confirmation has been carried out.
    - As a consequence of points 1 and 2, the order should not carry any cost.
    - If in-process inspection is active, then results recording should not have started.
    - In the case of a process order, the CRCR (Control Recipe Created) status is not allowed.
    Regards,
    Dhaval

  • Requirement For Migration Project

    Dear All,
    We have a requirement to migrate a project that was developed in ATG 9.0 into ATG 10.1.
    Please provide some information ASAP.
    Regards,
    Ram

    What information do you require ASAP?
    The migration kits are available by following:
    All patches, fixpacks and migration kits have been migrated to MOS. Here are the rough steps that you or customers may use to access these:
    1. In MOS or ISP, go to the Patches & Updates tab
    2. In the Patch Search panel, click on Product or Family (Advanced)
    3. Start typing the product name "Oracle ATG Web" and that should auto-filter the results to the ATG products
    4. Select the appropriate patch(es) from the list
    5. Select the Release that you are interested in (e.g. 9.3)
    6. Click Search to see the list of matching patch sets
    7. Click on the patch you want and you should be able to view its Readme or download it
    The migration kits are listed with the target release, so the 10.0.1 migration kits are visible if you search for 10.0.1.
    Gareth

  • Prerequisite required for Learning APO

    Hi,
    I am from an ABAP background. Can anyone please let me know, if I want to learn APO, what my first approach towards learning this functional technology should be? I want to know the prerequisites required for learning APO.
    Thanks
    Message was edited by:
            Abhai

    Hi Somnath,
    How are you?
    Currently I am working on SAP testing - SD, MM, CRM modules. I would like to move to the functional side. I am confused about whether I should move to CRM or APO. Can you please suggest which one is best for me? If I move to APO, what are the basics I need to learn?
    Is APO suitable for me? Please advise me on this.
    Thanks,
    Jogendra

  • Single 10g Instance to RAC - without ASM migration

    Is it possible to comment on the following steps to convert an existing 10.1.0.5 single-instance database to RAC? We're using OCFS2 and it is not possible to perform an ASM migration.
    1. Copy the existing $ORACLE_HOME/dbs/init<SID1>.ora to
    $ORACLE_HOME/dbs/init<db_name>.ora. Add the following parameters to
    $ORACLE_HOME/dbs/init<db_name>.ora:
    *.cluster_database=TRUE
    *.cluster_database_instances=2
    *.undo_management=AUTO (add if you don't have it)
    <SID1>.undo_tablespace=undotbs (undo tablespace which already exists)
    <SID1>.instance_name=RAC1
    <SID1>.instance_number=1
    <SID1>.thread=1
    <SID1>.local_listener=LISTENER_RAC1
    where LISTENER_RAC1 is an entry in the tnsnames.ora file like:
    LISTENER_RAC1 =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <node1>)(PORT = 1521))
    Keep only one line in $ORACLE_HOME/dbs/init<SID1>.ora:
    ifile=$ORACLE_HOME/dbs/init<db_name>.ora
    You could also create a common spfile from this pfile and add a line
    like spfile=$ORACLE_HOME/dbs/spfile<db_name>.ora in each init<SIDn>.ora
    file. For more details about how to do this, please refer to Note 136327.1.
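    As a sketch, the common spfile mentioned above can be created from the merged pfile along these lines (the paths are placeholders; note that $ORACLE_HOME is not expanded inside the SQL string, so use the literal path):

    ```sql
    -- Run once, from either instance; both init<SIDn>.ora files then point at it.
    CREATE SPFILE='/u01/app/oracle/product/10.1.0/db_1/dbs/spfileRACDB.ora'
      FROM PFILE='/u01/app/oracle/product/10.1.0/db_1/dbs/initRACDB.ora';
    ```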
    2. Open your database and run $ORACLE_HOME/rdbms/admin/catclust.sql to create
    cluster database specific views within the existing instance.
    3. Recreate control file if you defined maxinstances to be 1 when you created
    the single instance database.
    To check your current setting of maxinstances, run the following command
    while the database is mounted or open and connected as a user with DBA
    privileges:
    % sqlplus /nolog
    SQL> connect / as sysdba
    SQL> startup mount
    SQL> alter database backup controlfile to trace;
    The trace file is located in the udump directory. Check the maxinstances value
    in the CREATE CONTROLFILE statement. Please refer to Note 118931.1,
    Recreating the Controlfile in RAC and OPS.
    4. Add instance specific parameters in the init<db_name>.ora for the second
    instance on the second node and set appropriate values for it:
    *** Names may need to be modified
    <SID2>.instance_name=RAC2
    <SID2>.instance_number=2
    <SID2>.local_listener=LISTENER_RAC2
    <SID2>.thread=2
    <SID2>.undo_tablespace=UNDOTBS2
    <SID2>.cluster_database=TRUE
    <SID2>.cluster_database_instances=2
    where LISTENER_RAC2 is an entry in the tnsnames.ora file like:
    LISTENER_RAC2 =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <node2>)(PORT = 1521))
    5. From the first instance, mount the database and run the following command:
         *** Path names, file names, and sizes will need to be modified
    alter database
    add logfile thread 2
    group 3 ('/dev/RAC/redo2_01_100.dbf') size 100M,
    group 4 ('/dev/RAC/redo2_02_100.dbf') size 100M;
    alter database enable public thread 2;
    6. Create a second Undo Tablespace from the existing instance:
         *** Path names, file names, and sizes will need to be modified
    CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE
         '/dev/RAC/undotbs_02_210.dbf' SIZE 200M ;
    7. Set ORACLE_SID and ORACLE_HOME environment variables on the second node.
    8. Start the second Instance.

    In order not to affect the current home on node1, another user, oracle2, was created, which belongs to the racinstall group. On node2, which doesn't have any Oracle home installed, CRS will be installed so that node2 becomes the primary node.
    I have been looking into this CRS install for some time, as the installation failed due to an OCFS2 kernel panic upon running root.sh on the second node. I performed the cleanup described in Doc ID Note 239998.1 (10g RAC: How to Clean Up After a Failed CRS Install), but now the installation is failing with the error "File not found /ClusterCRS/install/rootadd". This message didn't appear on the first attempt, which could indicate that the cleanup described in the note was not sufficient.

  • Need help to check prerequisites for Document migration to SharePoint

    Hi All,
    I would like to confirm the prerequisites I need to follow before migrating content to SharePoint Online, so that I can analyze everything and ensure the job completes without any interruption.
    Any suggestions/recommendations are highly appreciated.

    Hi Jyotirmoy Deb,
    Last time, you might have been the one asking about using a join statement on 3 tables.
    Now look at what you are doing in your code: you are joining almost 18 tables. This will not work well for anyone, and it will slow down the processing time of the report.
    First of all, analyze what the report actually needs, and then
    first read the header data - in your case you are using the sales document header data.
    On the basis of the header data, fetch the line item data.
    Don't process all the tables at one time using the join clause, because it decreases the efficiency of the program.
    Reward points if useful.
    Regards,
    Manoj Kumar

  • Configuration Manager 2012 SP1 Prerequisite Checker returns "Migration active source hierarchy" error

    Hello, 
    I am getting a "Migration active source hierarchy" error in the SCCM 2012 SP1 prerequisite checker on the CM 2012 primary site. The error says "There is an active hierarchy configuration for the migration. Please stop data gathering for each source site in the source hierarchy." However, there is actually no migration source hierarchy configured and there are no migration jobs displayed in the CM 2012 console. We used this feature two years ago to migrate CM 2007 to the current CM 2012 (without a service pack), and we have since deleted these configurations.
    In migmctrl.log, the following record is being generated every 60 minutes. 
    ======================================================
    Connection string = Data Source=xxxxx;Initial Catalog=xxxxx;Integrated Security=True;Persist Security Info=False;MultipleActiveResultSets=True;Encrypt=True;TrustServerCertificate=False;Application Name="Migration Manager".~~  $$<SMS_MIGRATION_MANAGER><04-07-2014 01:01:53.710+300><thread=6992 (0x1B50)>
    Created new sqlConnection to xxxxx~~  $$<SMS_MIGRATION_MANAGER><04-07-2014 01:01:53.741+300><thread=6992 (0x1B50)>
    [Worker]: Start two step scheduling for MIG_Job~~  $$<SMS_MIGRATION_MANAGER><04-07-2014 01:01:53.741+300><thread=6992 (0x1B50)>
    [Worker]: Step 1. Query the schedule items that was running or requested to start immediately ...~~  $$<SMS_MIGRATION_MANAGER><04-07-2014 01:01:53.741+300><thread=6992 (0x1B50)>
    [Worker]: Step 2. Query the first item in order of DateNextRun ...~~  $$<SMS_MIGRATION_MANAGER><04-07-2014 01:01:53.819+300><thread=6992 (0x1B50)>
    [Worker]: No item found. Sleep until the next event.~~  $$<SMS_MIGRATION_MANAGER><04-07-2014 01:01:53.882+300><thread=6992 (0x1B50)>
    [Worker]: End two step scheduling for MIG_Job~~  $$<SMS_MIGRATION_MANAGER><04-07-2014 01:01:53.882+300><thread=6992 (0x1B50)>
    [MigMCtrl]: the workitem queue is full!~  $$<SMS_MIGRATION_MANAGER><04-07-2014 01:01:53.882+300><thread=6992 (0x1B50)>
    [MigMCtrl]: WAIT 3 event(s) for 60 minute(s) and 0 second(s).~  $$<SMS_MIGRATION_MANAGER><04-07-2014 01:01:53.882+300><thread=6992 (0x1B50)>
    ======================================================
    I am not really sure where to look into in this case. I would appreciate any advice. 

    Hi,
    Could you please upload the full migmctrl.log?
    How about specifying a source hierarchy with the same name and password of the hierarchy before in the console, then click Clean Up Migration Data?
    Best Regards,
    Joyce Li

  • Changes required for Migration from CR 8.5 to CR 11

    Dear All,
    We have successfully migrated all the reports which were created in CR 8.5 to CR 11. But those same reports developed in CR 8.5 are called through an .exe built with Microsoft Visual Studio .NET 2003. The old CR 8.5 reports work fine, while we are unable to open the CR 11 reports through the .exe. It reports a missing DLL. I guess a higher version of p2sodbc.dll and some other DLLs are required.
    Can you please help me out.
    Warm regards,
    Santosh Kumar Prashant

    If you have VS .NET 2003, then all you need to do is install CR XI Developer. I would not recommend using the RDC in .NET for two basic reasons:
    1) The RDC has not been tested in .NET (thus no escalation is available for any fix issues)
    2) The RDC has been retired in CR 12. This may have implications for the lifecycle of your project...
    To see how to use the RDC in .NET, check out the article [Using the Report Designer Component in Microsoft Visual Studio .NET|https://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/f0751a7f-a81d-2b10-55a0-e6df0e1bab6d&overridelayout=true]
    My recommendation would be to use the CR assemblies for .NET. The following resources should get you going:
    [Sample applications|https://wiki.sdn.sap.com/wiki/display/BOBJ/CrystalReportsfor.NETSDK+Samples]
    [Crystal Reports For Visual Studio 2005 Walkthroughs|https://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/2081b4d9-6864-2b10-f49d-918baefc7a23&overridelayout=true] (most of the concepts will apply to .NET 2003)
    [Advanced use of Crystal Reports for Visual Studio .NETu2019s feature set and object model|https://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/702ab443-6a64-2b10-3683-88eb1c3744bc&overridelayout=true]
    Developer libraries:
    https://boc.sdn.sap.com/developer/library
    http://devlibrary.businessobjects.com
    Also, ensure you apply the latest CR XI r1 [Service Pack|https://smpdl.sap-ag.de/~sapidp/012002523100006008952008E/crXIwin_sp4.zip]
    Ludek

  • MBAM 2.5 Recovery DB Prerequisite - Required Permission - instance login server roles "processadmin"

    technet.microsoft.com/en-us/library/dn645331.aspx
    We have successfully deployed MBAM 2.5 Stand-alone topology in a lab environment.
    Now we are moving to install / deploy MBAM 2.5 in a QA DB server.
    Prerequisites for the Recovery Database
    Required SQL Server permissions
     Required permissions:
      SQL Server instance login server roles:
       dbcreator
       processadmin
      SQL Server Reporting Services instance rights:
       Create Folders
       Publish Reports
    Question: The QA team is reluctant to assign the processadmin permission.
    1. Is there any document which explains exactly WHY we need to assign "processadmin"?
    2. If we assign "processadmin", will it by any chance have any effect on other DB instances running on that SQL Server?
    Thank you in Advance for your help.

    1. I haven't seen any document on why it needs to be assigned the processadmin permission.
    2. The processadmin role grants rights to terminate processes (sessions) within that SQL Server instance; it does not affect other instances running on the same server.
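    For reference, the two server roles from the prerequisite list can be granted like this (a T-SQL sketch; the login name is a placeholder):

    ```sql
    -- sp_addsrvrolemember works on SQL Server 2008/2008 R2;
    -- on SQL Server 2012+ you can use ALTER SERVER ROLE ... ADD MEMBER instead.
    EXEC sp_addsrvrolemember @loginame = N'DOMAIN\MBAMAppPool', @rolename = N'dbcreator';
    EXEC sp_addsrvrolemember @loginame = N'DOMAIN\MBAMAppPool', @rolename = N'processadmin';
    ```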

  • Single node file system to 3 node rac and asm migration

    hi,
    We have several UTL_FILE and external table applications running on a 10.2 single-node Veritas filesystem, and we want to migrate to a 3-node RAC ASM environment. What are the best practices to make this migration succeed? Thanks.

    1. Patch to 10.2.0.3 or 10.2.0.4 if not already there.
    2. Dump Veritas from any future consideration.
    3. Build and validate the new RAC environment and then plug in your data using transportable tablespaces.
    Do not expect the first part of step 3 to work perfectly the first time if you do not have experience building RAC clusters.
    This means have appropriate hardware in place for perfecting your skills.
    Be sure, too, that you are not trying to do this with blade or 1U servers. You need a minimum of 2U servers to be able
    to plug in sufficient hardware to have redundant paths to storage and for cache fusion and public access (a minimum of 6 ports).
    And don't let any network admin try to convince you that they can virtualize the network paths: they cannot do so successfully
    for RAC.

  • What version of Forms is required to migrate using JHeadstart ?

    Hi,
    I currently have a client who is looking to move away from their existing Forms 4.5 character mode application. These are the options:
    1) Forms 4.5 -> Forms 6i -> Forms 9i ( stop here ) or...
    2) Forms 4.5 -> using JHeadstart migrate to a Java solution
    Question
    ========
    1) Do you have to upgrade Forms 4.5 character mode to Forms 6i and then Forms 9i before migrating to Java, or can the migration be done directly from either Forms 4.5 or 6i ?
    2) What version of Designer is also required ?
    Any help would be much appreciated,
    Sandy

    Question
    ========
    1) Do you have to upgrade Forms 4.5 character mode to Forms 6i and then Forms 9i before migrating to Java, or can the migration be done directly from either Forms 4.5 or 6i ?
    JHeadstart (JDG) only uses the Designer repository as input, not Forms. So if you have forms generated out of the Designer repository, you can use the Designer form modules as JHeadstart input. If you don't, you have to reverse engineer (capture) your forms into Designer first.
    2) What version of Designer is also required ?
    You need designer 6i patch 4.2, see also http://otn.oracle.com/consulting/9iServices/install.html#compatibility
    Gerrit

  • ASM migration

    I have a Linux server with an ASM instance up and running and a 10.2 database instance up and running that is presently not using ASM, each in a different ORACLE_HOME.
    I also have a 10.2 database on a different (Windows) server that I want to migrate to the Linux server, creating the tablespaces (datafiles) from the Windows server within the ASM diskgroup on the Linux server.
    I am thinking all I need to do is change the init.ora parameters on the LINUX ORCL database to reference the ASM diskgroup (+DATA) for datafile creation, then run a script on ORCL to create the tablespaces in ASM on the LINUX server, then take an export from the Windows server and import it into the LINUX server.
    Any comments? WIll this work?

    "and create the tablespaces (datafiles) from the Windows server within the ASM diskgroup on the LINUX server" - You mean you want to migrate the instance on Windows COMPLETELY to Linux, right? If yes, that should be fine.
    "I am thinking all I need to do is change the init.ora parameters on the LINUX ORCL database to reference the ASM diskgroup (+DATA) for datafile creation, then run a script on ORCL to create the tablespaces in ASM on the LINUX server, then take an export from the Windows server and import it into the LINUX server." - Are you referring to the db_create_file_dest* init parameters? If yes, that should be fine. As such these are optional; you can create tablespaces on the database (where the ASM instance is running) without these parameters by just specifying the datafile as '+DATA/...'. BTW, with this you will have a mix of ASM and non-ASM datafiles, since you are planning on creating ASM datafiles in a database which already has non-ASM files. You can convert the non-ASM datafiles into ASM using RMAN.
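    That RMAN conversion for a single non-ASM datafile might look like this (a sketch; the tablespace name, path, and diskgroup name are placeholders):

    ```
    RMAN> BACKUP AS COPY DATAFILE '/u01/oradata/orcl/users01.dbf' FORMAT '+DATA';
    RMAN> SQL 'ALTER TABLESPACE users OFFLINE';
    RMAN> SWITCH DATAFILE '/u01/oradata/orcl/users01.dbf' TO COPY;
    RMAN> RECOVER TABLESPACE users;
    RMAN> SQL 'ALTER TABLESPACE users ONLINE';
    ```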
    The overall/high-level plan looks fine...Good Luck.
    Chandra

  • Filesystem to asm migration

    I am running 10.2.0.3 on Oracle Linux "OEL4" with an ext3 filesystem.
    I just received a brand new server and I want to move my current database onto it.
    I want to run my 10.2.0.3 database on 11g ASM on the new server.
    Once ASM testing is over, I'll migrate my 10.2.0.3 database to 11g.
    I saw documentation on migrating from a filesystem to ASM, but it seems the example given is for a database whose filesystem resides on the same machine.
    Is there documentation somewhere on how to migrate my database into ASM on a different machine?
    Or do you know the steps I need to perform to do this?

    What makes it complex is the following:
    1. Lack of space on my new server to run both the filesystem copy and the ASM migration.
    2. 32-bit to 64-bit migration.
    3. The database is 1.1 TB in size, a little too much for an export/import.
    Here is what I am going to do. I will try that today.
    I have 3 servers.
    Server A: I have my actual database on OEL4 32 bits with oracle 10g on filesystem
    Server B: I will install OEL5 64 bits running 10g filesystem
    Server C: I will install OEL5 64 bits running 10g database with ASM
    Steps:
    1) Restore an RMAN backup of my database on server B
    2) Migrate the database on server B to 64 bits
    3) NFS-mount server B from server C, mount the database, perform an RMAN backup with location = ASM, and switch the database to copy
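    Step 3 might look like this in RMAN on server C (a sketch; it assumes a diskgroup named +DATA and that the NFS-mounted files are visible at their original paths):

    ```
    RMAN> STARTUP MOUNT;
    RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';
    RMAN> SWITCH DATABASE TO COPY;
    RMAN> RECOVER DATABASE;
    RMAN> ALTER DATABASE OPEN;
    ```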
