Communication between RAC instances

Hi,
I see different answers on Google for how communication between RAC instances works:
1. UDP
2. Distributed Lock Manager (DLM)
Which one is correct?

Both are correct; they just operate at different layers. The transport protocol used over the interconnect is UDP, and one of the protocols/mechanisms implemented on top of it is the Distributed Lock Manager (DLM).
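
If you want to verify this on a running system, the IPC protocol in use (e.g. "Oracle UDP/IP") is printed in each instance's alert log at startup, and the interconnect NICs can be listed by querying GV$CLUSTER_INTERCONNECTS. A minimal JDBC sketch, assuming a session privileged to read the GV$ views; the connection URL and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch: list the interconnect NIC(s) each RAC instance is using.
// URL, user, and password below are hypothetical placeholders.
public class ShowInterconnects {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//rac-scan:1521/orcl";
        try (Connection con = DriverManager.getConnection(url, "system", "manager");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT inst_id, name, ip_address, source FROM gv$cluster_interconnects")) {
            while (rs.next()) {
                System.out.printf("inst %d: %s %s (%s)%n",
                        rs.getInt(1), rs.getString(2), rs.getString(3), rs.getString(4));
            }
        }
    }
}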

Similar Messages

  • Communication between RAC instances

    Hi,
    I see different answers on Google for how communication between RAC instances works:
    1. UDP
    2. Distributed Lock Manager (DLM)
    Which one is correct?

    Please do not post duplicate threads - Communication between RAC instances
    Srini

  • Communication between Two WebLogic instances on the same machine

    Hi,
    We're having a problem with communication between two copies of WebLogic on
    the same machine. They are configured with separate ports (regular and SSL).
    Independently, they run fine. I can access EJBs running on either of them.
    The problem is that a bean in one of them has code which attempts to access
    an EJB on the other one. The procedure fails when trying to obtain the initial
    context. This same code works if compiled independently of WebLogic on the
    same machine.
    Are there any known issues regarding communication between two running
    instances of WebLogic on the same machine?
    Thanks in advance,
    Randy Yarger
    marchFIRST
    [email protected]

    Thanks for the prompt reply.
    There is one IP address (internal address 10.227.1.34) on the machine. WLS1
    is set up on ports 7001 and 5133. WLS2 is set up on ports 7004 and 7005.
    When WLS1 attempts to obtain a context to WLS2 with the URL
    t3://10.227.1.34:7004/, it pauses for a long period of time. Running truss
    on both WLS processes shows communication occurring between the two,
    followed by long periods of silence. Finally WLS2 spits out the error
    ConnectionException[7001,7001,5133,5133,7001,7001] (paraphrased, I can get
    the entire error if it would help).
    After another long pause, WLS1 quits trying with the error 'Server
    10.227.1.34:7004 not found' (again paraphrased).
    Among the things we've tried:
    * Changing the URL from the IP to 127.0.0.1
    * Enabling/disabling SSL on either or both WLSs.
    * Changing the server name in WLS2's copy of weblogic.properties from
    'myserver' to 'myserver2' (previously they were both 'myserver')
    * Upgrading WLS2 to 5.1.0sp5 (Tried upgrading WLS1, but was getting class
    not found errors and quit because that WLS is being used by other people)
    This is a Solaris server. WLS1 is running 5.1.0 and WLS2 is running 5.1.0sp5
    Any suggestions would be appreciated.
    Best,
    Randy Yarger
    marchFIRST
    [email protected]
    "Michael Girdley" <[email protected]> wrote in message
    news:[email protected]...
    There should not be. What is your network configuration? Are they on
    separate IP addresses?
    Thanks,
    Michael
    Michael Girdley
    BEA Systems Inc
    "Randy Jay Yarger" <[email protected]> wrote in message
    news:[email protected]...
    Hi,
    We're having a problem with communication between two copies of WebLogic on
    the same machine. They are configured with separate ports (regular and SSL).
    Independently, they run fine. I can access EJBs running on either of them.
    The problem is that a bean in one of them has code which attempts to access
    an EJB on the other one. The procedure fails when trying to obtain the initial
    context. This same code works if compiled independently of WebLogic on the
    same machine.
    Are there any known issues regarding communication between two running
    instances of WebLogic on the same machine?
    Thanks in advance,
    Randy Yarger
    marchFIRST
    [email protected]
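
    For reference, a context on the second server is usually obtained along these lines. A minimal sketch, assuming the weblogic.jndi.WLInitialContextFactory class shipped with that WLS version, the address from the thread, and a hypothetical JNDI name:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    // Sketch: obtain an InitialContext on the second WebLogic instance from
    // code running in the first one. The JNDI name is a hypothetical placeholder.
    public class RemoteLookup {
        public static Object lookupRemoteHome() throws NamingException {
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://10.227.1.34:7004");
            Context ctx = new InitialContext(env);
            try {
                return ctx.lookup("my.ejb.HomeName"); // hypothetical JNDI name
            } finally {
                ctx.close();
            }
        }
    }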

  • Communication between two clients

    In Java networking, communication between a server and a client, or between a server and multiple clients, is common. But how does communication between two clients work, i.e. how can two clients send and receive messages to and from each other (relayed by the server)?

    Sorry,
    I didn't understand your reply clearly.
    What I want is this: the server is running, and two clients (two instances of the same program) are open. Whatever message (string) client1 sends to the server, the server needs to send on to client2. Then the response should come back from client2 to client1 through the server.
    How can I implement this?
    If you have an idea, or some sample code for this, please let me know.
    Regards.
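
    A minimal sketch of that relay idea, assuming exactly two clients, blocking line-oriented I/O, and a hypothetical port 5000:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Sketch: the server accepts exactly two clients and forwards each line it
    // receives from one client to the other. Port 5000 is a placeholder.
    public class RelayServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(5000)) {
                Socket client1 = server.accept();
                Socket client2 = server.accept();
                Thread t1 = new Thread(() -> forward(client1, client2));
                Thread t2 = new Thread(() -> forward(client2, client1));
                t1.start();
                t2.start();
                t1.join();
                t2.join();
            }
        }

        // Copy lines from 'from' to 'to' until the sender disconnects.
        private static void forward(Socket from, Socket to) {
            try (BufferedReader in = new BufferedReader(
                         new InputStreamReader(from.getInputStream()));
                 PrintWriter out = new PrintWriter(to.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    out.println(line);
                }
            } catch (Exception e) {
                // Connection closed; let this relay thread end.
            }
        }
    }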

  • Communication between a driver and application.

    I am writing a driver for a PCI card. I have found very good examples of how the driver should be built.
    Where do I find information about how the user-mode application should talk to the driver?
    If you have any idea that will help me, please let me know.

    Hi,
    This is for your reference.
    The attach routine below creates a PCI adapter device node under /devices.
    Your application can then use the ioctl() system call to communicate with the device driver.
    static int
    xxattach(dev_info_t *dip, ddi_attach_cmd_t cmd)
    {
        int instance = ddi_get_instance(dip);

        switch (cmd) {
        case DDI_ATTACH:
            /*
             * Allocate a state structure and initialize it.
             * Map the device's registers.
             * Add the device driver's interrupt handler(s).
             * Initialize any mutexes and condition variables.
             * Create power manageable components.
             *
             * Create the device's minor node. Note that the node_type
             * argument is set to DDI_NT_TAPE.
             */
            if (ddi_create_minor_node(dip, "minor_name", S_IFCHR,
                instance, DDI_NT_TAPE, 0) == DDI_FAILURE) {
                /* Free resources allocated so far. */
                /* Remove any previously allocated minor nodes. */
                ddi_remove_minor_node(dip, NULL);
                return (DDI_FAILURE);
            }
            /*
             * Create driver properties like "Size." Use "Size"
             * instead of "size" to ensure the property works
             * for large bytecounts.
             */
            xsp->Size = size of device in bytes;    /* pseudocode */
            maj_number = ddi_driver_major(dip);
            if (ddi_prop_update_int64(makedevice(maj_number, instance),
                dip, "Size", xsp->Size) != DDI_PROP_SUCCESS) {
                cmn_err(CE_CONT, "%s: cannot create Size property\n",
                    ddi_get_name(dip));
                /* Free resources allocated so far. */
                return (DDI_FAILURE);
            }
  • Communication between TNS_Listener and Apex Listener

    Dear all,
    How does the communication between the TNS Listener, APEX Listener, and APEX work? For some reason I got the following error from the APEX Listener:
    Caused by: oracle.dbtools.rt.web.WebException: Unable to start the Universal Connection Pool: oracle.ucp.UniversalConnectionPoolException: Cannot get Connection from Datasource
    Caused by: oracle.net.ns.NetException: Listener refused the connection with the following error:
    ORA-12505, TNS:listener does not currently know of SID given in connect descriptor
    (I truncated some of the Java object messages.)
    This occurred when I tried to connect to APEX. Then I could fix it by going to:
    http://localhost:8080/apex/listenerConfigure
    and then everything worked fine.
    My listener.ora file:
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
        )
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = firefly.snowdrop.com)(PORT = 1521))
        )
      )
    ADR_BASE_LISTENER = /usr/local/oracle
    ADR_BASE_ORACLE_LISTENER = /usr/local/oracle
    My tnsnames.ora file:
    INARA =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = firefly.snowdrop.com)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = inara.snowdrop.com)
        )
      )
    where inara is the instance name and firefly is the hostname.
    Does it matter whether the TNS Listener or the APEX Listener is started first?
    Best regards,
    Val
    Edited by: Valerie Debonair on Aug 1, 2011 5:47 AM

    Hmm, I'm not able to reproduce the error. There was one time when I started the APEX Listener and got:
    java.net.BindException: Address already in use: 8080=com.sun.grizzly.http.SelectorThreadHandler@46d999a
    and when I ran netstat I got:
    root@firefly:~# netstat -apnl |grep 8080
    tcp6       0      0 :::8080                 :::*                    LISTEN      4530/tnslsnr
    I had no idea why port 8080 was held by the TNS listener, as I had started it before the APEX Listener. I am very curious how this happened.
    There was one time netstat -a showed me this (this happened after I stopped Apex Listener, stopped TNS Listener, shutdown the database, started database, started TNS Listener, and started Apex Listener):
    root@firefly:~# netstat -a |grep 8080
    tcp        0      0 firefly.snowdrop.:22110 firefly.snowdrop.c:8080 ESTABLISHED
    tcp        0      0 firefly.snowdrop.:22111 firefly.snowdrop.c:8080 ESTABLISHED
    tcp        0      0 firefly.snowdrop.:22107 firefly.snowdrop.c:8080 ESTABLISHED
    tcp        0      0 firefly.snowdrop.:22109 firefly.snowdrop.c:8080 ESTABLISHED
    tcp        0      0 firefly.snowdrop.:22112 firefly.snowdrop.c:8080 ESTABLISHED
    tcp        0      0 firefly.snowdrop.:22108 firefly.snowdrop.c:8080 ESTABLISHED
    tcp6       0      0 [::]:8080               [::]:*                  LISTEN    
    tcp6       0      0 firefly.snowdrop.c:8080 firefly.snowdrop.:22108 ESTABLISHED
    tcp6       0      0 firefly.snowdrop.c:8080 firefly.snowdrop.:22107 ESTABLISHED
    tcp6       0      0 firefly.snowdrop.c:8080 firefly.snowdrop.:22112 ESTABLISHED
    tcp6       0      0 firefly.snowdrop.c:8080 firefly.snowdrop.:22109 ESTABLISHED
    tcp6       0      0 firefly.snowdrop.c:8080 firefly.snowdrop.:22110 ESTABLISHED
    tcp6       0      0 firefly.snowdrop.c:8080 firefly.snowdrop.:22111 ESTABLISHED
    What are ports 22108, 22107, etc.? I cannot reproduce this either.
    Now I only get:
    root@firefly:~# netstat -a |grep 8080
    tcp6       0      0 [::]:8080               [::]:*                  LISTEN
    root@firefly:~# netstat -pnl |grep 8080
    tcp6       0      0 :::8080                 :::*                    LISTEN      5035/java
    which is normal, right?
    Edited by: Valerie Debonair on Aug 1, 2011 9:43 PM

  • Encrypting communication between an app that uses an ODBC/DSN (with ADODB) and SQL Server 2008 R2

    I've been doing a lot of reading over the last couple of days on how we can encrypt db communication between our product app and a customer's SQL Server db, but cannot make it work as expected. The app uses an ODBC/DSN to connect to the SQL Server db. I use this
    ODBC app to set up the DSN (on a Windows 7 PC):
    C:\Windows\SysWOW64\odbcad32.exe
    The DSN connection uses the SQL Server driver 6.01.7601.17514 and has these properties:
    - WinNT authentication.
    - Client Config button: TCP/IP to <server-name>\<instance.-name>
    - Change to default db: <name-of-app-db>
    - Everything else is default setting.
    SQL Server is on the same Windows 7 PC and has a self-signed cert installed (I used IIS to generate it) and has Force Encryption set to "yes".
    I have a test C# program that uses the ADODB 2.7.0.0 COM wrapper, made by Visual Studio after adding a reference to the ADO 2.7 library version 6.1.7601.17857. The program creates an ADODB.Connection object that has a simple connection string: "DSN=<dsn-name>;UID=<user>;PWD=<password>". The
    program then creates an ADODB.Recordset object and reads and displays a field from a table.
    Works fine.
    If I go into SQL Server and set Force Encryption to "no," clear the cert, restart the SQL service, and then re-run the program, it works fine.
    Here's the kicker. If I go into the DSN and select "Use strong encryption for data", the Test button on the DSN works. Why does it work? The SQL Server is no longer encrypting the connection, so an error should occur. If I run the test program, it works
    as well. Why? I can look at the connection properties in the test program and see that ADODB has added the ";Encrypt=yes" stuff to the end of the connection string. Yet that option seems to have no effect.
    If I set the SQL instance back to Force Encryption: yes, enable the cert, restart the SQL service, and clear the DSN's "Use strong encryption for data" option, I can still connect to the db with the
    test program. Why?
    What am I doing wrong? I need to be able to ensure that the communication between our app product and the SQL instance is encrypted, and that we get an error if the SQL instance does not support encrypted communications. We really don't want the customer
    to have to enable Force Encryption because they have other db's on their SQL Server that do not use encrypted communication, but they want to know that our product's communication channel with the db is encrypted.
    No, I can't change the app product's code at this point in time. I'm stuck with working with what a DSN called from ADODB has to offer.
    Also, how can I be sure that communications are encrypted? I mean, I've tried things like "SELECT * FROM sys.dm_exec_connections" but that doesn't help because I have no idea how to tie the list of sessions shown back to my test program, although
    I guess it's a good thing that some of the sessions listed show encrypt_option as TRUE.
    -glenn-

    Ah yes, very good point. It's easy to miss because you have to delete then recreate the DSN in order to change drivers. So I switched the DSN over to the SQL Native 11 driver.
    Now when I try to connect to the SQL instance as <computer-name>\<instance-name>, and without a cert on the server, I get "the target principal name is incorrect". Perfect; now we're getting somewhere!
    Change my DSN to use <fqdn>\<instance-name> and it works. This tells me that SQL Server has auto-generated a cert and named it <fqdn>. I would have expected the "cert fail" error, not a cert
    name mismatch, because I'm not using Trust Server Cert.
    So I load up my self-signed cert, and that works too.
    I am still confused as to why I'm not seeing the "cert fail" error when I have no cert loaded on the SQL Server. I am not using Force Encryption on the server at all, so wouldn't expect SQL Server to auto-create a cert when an Encrypt=yes request comes in,
    but apparently it does?
    I also ran into a problem with this:
    select c.session_id, c.encrypt_option, s.client_interface_name
    from sys.dm_exec_connections c
    join sys.sysprocesses s
      on c.session_id = s.session_id
    where s.dbid = db_id('MyDatabase')
    There is no s.client_interface_name, probably should be s.hostname. There's also no s.session_id. I thought maybe this should be s.sid, but then no rows ever come back. The c.session_id looks like 51 and 52, but the s.sid looks like a very long binary number,
    so these two fields cannot be joined. I don't know how to convert the sid's properly so that the join would work. Ah wait, I just found the s.spid column; the join works when that column is used (I assume that's correct anyhow).
    If I add a Thread.Sleep(30 seconds) to my C# program just before the connection is closed, this query shows me the session for the correct hostname has encrypt_option=TRUE.
    And I have to keep my fingers crossed that all the app I/O will still work properly after
    changing the driver. Probably a safe bet though.
    Think I'm ready to throw in the towel on getting the "cert fail/no SSL" error to appear.
    It does look like I am able to sufficiently show that the connection is encrypted when Encrypt=yes is used with the newer driver.
    Thank for all the help!
    -glenn-

  • Communication between thread in the same process using file interface.

    Hi,
    I am developing a driver and I need to communicate between two threads.
    > Can anyone guide me on implementing communication between two threads in the same process using a file interface? The first thread will be the driver and the second will be the application. I need to send IOCTL-like commands using the file interface, i.e. WriteFile()/ReadFile(),
    from the host process to the driver through the file interface (which runs in driver context). The host process should not be blocked for the duration of the driver processing the command.
    > The file interface will run in driver context and it will be responsible for receiving commands from the application and passing them to the driver.
    What complexity does this introduce?
    > Can anyone also give me a link/reference to get more information on this topic?
    > How do I replace IOCTL commands, for instance a baud_rate change command, with a file interface, for example with an IRP?

    Here is the detailed query:
    The Hardware Abstraction Layer will interact with the driver (both currently run in completely different processes). There is an IOCTL for commands and a file interface for read and write.
    My requirement is:
    Both should run in the same process, so the HAL will run as one thread and the driver as another thread in the same process. I don't want the HAL to wait for completion of a request, and I also don't want the driver to be blocked.
    We are planning to use a file interface for communication between the Hardware Abstraction Layer and the driver.
    The HAL will send the command or read/write operation to the file interface, and the driver will get the command or read/write request from the file interface.
    There is flexibility to change the Hardware Abstraction Layer and also the driver.
    Is it possible to use IOCTL between two threads in the same process? If not, what other options do we have?
    Can we use a file interface to send commands (like IOCTL) between two threads?

  • Data conversion for communication between Java and C/C++ program

    The real problem, I guess, is about data type conversion between Java and C programs. For instance, an int is supposed to be 4 bytes in Java, and also in C/C++. But, as far as I know, the size of an int in C depends on the processor architecture, so on a 64-bit architecture the size of an int, double, etc. in C could change. The real scenario would be communication between a 32-bit machine (Java) and a 64-bit machine (C/C++).
    First of all, is this assumption correct?
    If so,
    how is it possible to deal with this problem in Java?
    Is there any way to know 'type length' automatically? or
    would it be necessary to modify the Java program to work with a C program in 32-bit or 64-bit arch?
    thx in advance

    cotton.m wrote:
    Yes you should develop the C part of the protocol first and then build from there.
    See http://forum.java.sun.com/thread.jspa?threadID=5243547 for a previous discussion of this topic.
    I understand what you mean, and also the topic you referenced. I will explain the situation a bit more. The protocol is already defined (16-bit message size, 16-bit message id, 32-bit time, etc.); it is also defined in C (structures and so on). So it would be easy to write my Java code against the protocol specified in bytes, and C types wouldn't need to be considered. But let's say that the C code can't be modified, and I thought (I don't know yet if it's correct) that the size of C types CAN change depending on the architecture. So if the sizes of the C types change (the code is specified in short, int, etc., and not in bit format), it would all be a mess if I don't implement something in my Java code.
    cotton.m wrote:
    I don't believe that 32 or 64 bit enters the equation here at all.
    Maybe 32 vs. 64 bit doesn't matter, but SPARC vs. PC could come into play here. Communication would be between a Sun Solaris SPARC machine and a Windows x86 PC. But if type sizes in C/C++ do not change, I wouldn't really have any problem then...
    The situation is not ideal, I know, but it is what I have; and I think I could have problems if I run the C program on other machines, so I would have to try to solve it within my Java program.
    I hope it is clear now what I need.
    thx
    Edited by: MGasa on Jan 16, 2008 1:51 AM
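
    Since the wire format is already fixed in bits (16-bit message size, 16-bit message id, 32-bit time, ...), one way to keep the Java side independent of C type sizes and struct padding is to read fixed-width fields with an explicit byte order. A minimal sketch, assuming the C side writes the header in network byte order (big-endian); the class and field names are placeholders:

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Sketch: read the header described above (16-bit size, 16-bit id, 32-bit time)
    // as fixed-size fields, so the C compiler's int size and padding never matter.
    public class MessageHeader {
        public final int size;   // unsigned 16-bit on the wire
        public final int id;     // unsigned 16-bit on the wire
        public final long time;  // unsigned 32-bit on the wire

        private MessageHeader(int size, int id, long time) {
            this.size = size;
            this.id = id;
            this.time = time;
        }

        // Switch to ByteOrder.LITTLE_ENDIAN if the C peer writes x86-native order.
        public static MessageHeader read(InputStream in) throws IOException {
            byte[] raw = new byte[8];
            new DataInputStream(in).readFully(raw);
            ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.BIG_ENDIAN);
            int size = buf.getShort() & 0xFFFF;        // widen to avoid sign issues
            int id = buf.getShort() & 0xFFFF;
            long time = buf.getInt() & 0xFFFFFFFFL;
            return new MessageHeader(size, id, time);
        }
    }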

  • Error during VM container communication between ABAP and JAVA

    Hello,
    While creating a shopping cart (SC), I am getting the error "Error during VM container communication between ABAP and JAVA".
    Based on earlier responses in this forum, I checked the following:
    1. T Code - BBP_CND_CHECK_CUST
    Result - IPC Pricing is Active and IPC is now running in VMC
    2. Run Report - RSVMCRT_HEALTH_CHECK
    Result - The Java component is deactivated
    3. As per OSS note 854170, the profile parameters exist as below:
    a) vmcj/enable - on
    b) vmcj/option/maxJavaHeap = 200M
    So, how do I get the Java component activated?
    Thanks,
    Rahul Mandale

    Thanks Markus,
    For SM53, I am getting the result "Java is not active on this application server - Message no. SVMCRT011".
    Can you suggest what I need to do? I can't use the SRM shopping cart because of it. Thanks in advance, Rahul
    and dev_w0 trace ....
    trc file: "dev_w0", trc level: 1, release: "700"
    ACTIVE TRACE LEVEL           1
    ACTIVE TRACE COMPONENTS      all, MJ

    B Wed Aug 31 15:45:40 2011
    B  create_con (con_name=R/3)
    B  Loading DB library 'D:\usr\sap\CUS\DVEBMGS04\exe\dboraslib.dll' ...
    B  Library 'D:\usr\sap\CUS\DVEBMGS04\exe\dboraslib.dll' loaded
    B  Version of 'D:\usr\sap\CUS\DVEBMGS04\exe\dboraslib.dll' is "700.08", patchlevel (0.46)
    B  New connection 0 created
    M sysno      04
    M sid        CUS
    M systemid   560 (PC with Windows NT)
    M relno      7000
    M patchlevel 0
    M patchno    52
    M intno      20050900
    M make:      multithreaded, Unicode, optimized
    M pid        456
    M
    M  kernel runs with dp version 210000(ext=109000) (@(#) DPLIB-INT-VERSION-210000-UC)
    M  length of sys_adm_ext is 572 bytes
    M  ***LOG Q0Q=> tskh_init, WPStart (Workproc 0 456) [dpxxdisp.c   1293]
    I  MtxInit: 30000 0 0
    M  DpSysAdmExtCreate: ABAP is active
    M  DpSysAdmExtCreate: VMC (JAVA VM in WP) is not active
    M  DpShMCreate: sizeof(wp_adm)          18304     (1408)
    M  DpShMCreate: sizeof(tm_adm)          3954072     (19672)
    M  DpShMCreate: sizeof(wp_ca_adm)          24000     (80)
    M  DpShMCreate: sizeof(appc_ca_adm)     8000     (80)
    M  DpCommTableSize: max/headSize/ftSize/tableSize=500/8/528056/528064
    M  DpShMCreate: sizeof(comm_adm)          528064     (1048)
    M  DpFileTableSize: max/headSize/ftSize/tableSize=0/0/0/0
    M  DpShMCreate: sizeof(file_adm)          0     (72)
    M  DpShMCreate: sizeof(vmc_adm)          0     (1452)
    M  DpShMCreate: sizeof(wall_adm)          (38456/34360/64/184)
    M  DpShMCreate: sizeof(gw_adm)     48
    M  DpShMCreate: SHM_DP_ADM_KEY          (addr: 05C00040, size: 4613144)
    M  DpShMCreate: allocated sys_adm at 05C00040
    M  DpShMCreate: allocated wp_adm at 05C01E28
    M  DpShMCreate: allocated tm_adm_list at 05C065A8
    M  DpShMCreate: allocated tm_adm at 05C065D8
    M  DpShMCreate: allocated wp_ca_adm at 05FCBB70
    M  DpShMCreate: allocated appc_ca_adm at 05FD1930
    M  DpShMCreate: allocated comm_adm at 05FD3870
    M  DpShMCreate: system runs without file table
    M  DpShMCreate: allocated vmc_adm_list at 06054730
    M  DpShMCreate: allocated gw_adm at 06054770
    M  DpShMCreate: system runs without vmc_adm
    M  DpShMCreate: allocated ca_info at 060547A0
    M  DpShMCreate: allocated wall_adm at 060547A8
    X  EmInit: MmSetImplementation( 2 ).
    X  MM diagnostic options set: 0
    X  <ES> client 0 initializing ....
    X  Using implementation flat
    M  <EsNT> Memory Reset disabled as NT default
    X  ES initialized.

    M Wed Aug 31 15:45:41 2011
    M  ThInit: running on host crmsys

    M Wed Aug 31 15:45:42 2011
    M  calling db_connect ...
    C  Prepending D:\usr\sap\CUS\DVEBMGS04\exe to Path.

    C Wed Aug 31 15:45:47 2011
    C  Client NLS settings: AMERICAN_AMERICA.UTF8
    C  Logon as OPS$-user to get SAPSR3's password
    C  Connecting as /@CRM on connection 0 (nls_hdl 0) ... (dbsl 700 240106)
    C  Nls CharacterSet                 NationalCharSet              C      EnvHp      ErrHp ErrHpBatch
    C    0 UTF8                                                      1   0619F158   061A46F4   061A3F7C
    C  Attaching to DB Server CRM (con_hdl=0,svchp=061A3EC8,svrhp=061B5794)
    C  Starting user session (con_hdl=0,svchp=061A3EC8,srvhp=061B5794,usrhp=061CA558)

    C Wed Aug 31 15:45:48 2011
    C  Now '/@CRM' is connected (con_hdl 0, nls_hdl 0).
    C  Got SAPSR3's password from OPS$-user
    C  Disconnecting from connection 0 ...
    C  Closing user session (con_hdl=0,svchp=061A3EC8,usrhp=061CA558)
    C  Now I'm disconnected from ORACLE
    C  Connecting as SAPSR3/<pwd>@CRM on connection 0 (nls_hdl 0) ... (dbsl 700 240106)
    C  Nls CharacterSet                 NationalCharSet              C      EnvHp      ErrHp ErrHpBatch
    C    0 UTF8                                                      1   0619F158   061A46F4   061A3F7C
    C  Starting user session (con_hdl=0,svchp=061A3EC8,srvhp=061B5794,usrhp=061CA558)
    C  Now 'SAPSR3/<pwd>@CRM' is connected (con_hdl 0, nls_hdl 0).
    C  Database NLS settings: AMERICAN_AMERICA.UTF8
    C  Database instance CRM is running on CRMSYS with ORACLE version 10.2.0.1.0 since 20110831

    B Wed Aug 31 15:45:49 2011
    B  Connection 0 opened (DBSL handle 0)
    B  Wp  Hdl ConName          ConId     ConState     TX  PRM RCT TIM MAX OPT Date     Time   DBHost         
    B  000 000 R/3              000000000 ACTIVE       NO  YES NO  000 255 255 20110831 154542 CRMSYS         
    M  db_connect o.k.
    M  ICT: exclude compression: .zip,.cs,.rar,.arj,.z,.gz,.tar,.lzh,.cab,.hqx,.ace,.jar,.ear,.war,.css,.pdf,.js,.gzip,.uue,.bz2,.iso,.sda,.sar,.gif

    I Wed Aug 31 15:46:12 2011
    I  MtxInit: 0 0 0
    M  SHM_PRES_BUF               (addr: 0A7C0040, size: 4400000)
    M  SHM_ROLL_AREA          (addr: 788A0040, size: 61440000)
    M  SHM_PAGING_AREA          (addr: 0AC00040, size: 32768000)
    M  SHM_ROLL_ADM               (addr: 0CB50040, size: 615040)
    M  SHM_PAGING_ADM          (addr: 0CBF0040, size: 525344)
    M  ThCreateNoBuffer          allocated 544152 bytes for 1000 entries at 0CC80040
    M  ThCreateNoBuffer          index size: 3000 elems
    M  ThCreateVBAdm          allocated 12160 bytes (50 server) at 0CD10040
    X  EmInit: MmSetImplementation( 2 ).
    X  MM diagnostic options set: 0
    X  <ES> client 0 initializing ....
    X  Using implementation flat
    X  ES initialized.
    B  db_con_shm_ini:  WP_ID = 0, WP_CNT = 13, CON_ID = -1
    B  dbtbxbuf: Buffer TABL  (addr: 10E700C8, size: 30000000, end: 12B0C448)
    B  dbtbxbuf: Buffer TABLP (addr: 12B100C8, size: 10240000, end: 134D40C8)
    B  dbexpbuf: Buffer EIBUF (addr: 0FBA00D0, size: 4194304, end: 0FFA00D0)
    B  dbexpbuf: Buffer ESM   (addr: 134E00D0, size: 4194304, end: 138E00D0)
    B  dbexpbuf: Buffer CUA   (addr: 138F00D0, size: 3072000, end: 13BDE0D0)
    B  dbexpbuf: Buffer OTR   (addr: 13BE00D0, size: 4194304, end: 13FE00D0)
    M  rdisp/reinitialize_code_page -> 0
    M  icm/accept_remote_trace_level -> 0
    M  rdisp/no_hooks_for_sqlbreak -> 0
    M  CCMS: AlInitGlobals : alert/use_sema_lock = TRUE.

    S Wed Aug 31 15:46:15 2011
    S  *** init spool environment
    S  initialize debug system
    T  Stack direction is downwards.
    T  debug control: prepare exclude for printer trace
    T  new memory block 1946AA80

    S Wed Aug 31 15:46:16 2011
    S  spool kernel/ddic check: Ok
    S  using table TSP02FX for frontend printing
    S  1 spool work process(es) found
    S  frontend print via spool service enabled
    S  printer list size is 150
    S  printer type list size is 50
    S  queue size (profile)   = 300
    S  hostspool list size = 3000
    S  option list size is 30
    S      found processing queue enabled
    S  found spool memory service RSPO-RCLOCKS at 1D6D00A8
    S  doing lock recovery
    S  setting server cache root
    S  found spool memory service RSPO-SERVERCACHE at 1D6D0430
    S    using messages for server info
    S  size of spec char cache entry: 297028 bytes (timeout 100 sec)
    S  size of open spool request entry: 2132 bytes
    S  immediate print option for implicitely closed spool requests is disabled

    A Wed Aug 31 15:46:18 2011

    A  -PXA--
    A  PXA INITIALIZATION
    A  PXA: Fragment Size too small: 73 MB, reducing # of fragments
    A  System page size: 4kb, total admin_size: 5132kb, dir_size: 5076kb.
    A  Attached to PXA (address 688A0040, size 150000K)
    A  abap/pxa = shared protect gen_remote
    A  PXA INITIALIZATION FINISHED
    A  -PXA--


    A Wed Aug 31 15:46:20 2011
    A  ABAP ShmAdm attached (addr=57A40000 leng=20955136 end=58E3C000)
    A  >> Shm MMADM area (addr=57EB5E58 leng=126176 end=57ED4B38)
    A  >> Shm MMDAT area (addr=57ED5000 leng=16150528 end=58E3C000)
    A  RFC Destination> destination crmsys_CUS_04 host crmsys system CUS systnr 4 (crmsys_CUS_04)

    A Wed Aug 31 15:46:21 2011
    A  RFC Options> H=crmsys,S=04,d=2,
    A  RFC FRFC> fallback activ but this is not a central instance.
    A   
    A  RFC rfc/signon_error_log = -1
    A  RFC rfc/dump_connection_info = 0
    A  RFC rfc/dump_client_info = 0
    A  RFC rfc/cp_convert/ignore_error = 1
    A  RFC rfc/cp_convert/conversion_char = 23
    A  RFC rfc/wan_compress/threshold = 251
    A  RFC rfc/recorder_pcs not set, use defaule value: 2
    A  RFC rfc/delta_trc_level not set, use default value: 0
    A  RFC rfc/no_uuid_check not set, use default value: 0
    A  RFC rfc/bc_ignore_thcmaccp_retcode not set, use default value: 0
    A  RFC Method> initialize RemObjDriver for ABAP Objects
    M  ThrCreateShObjects          allocated 13730 bytes at 0FFB0040

    N Wed Aug 31 15:46:22 2011
    N  SsfSapSecin: putenv(SECUDIR=D:\usr\sap\CUS\DVEBMGS04\sec): ok

    N  =================================================
    N  === SSF INITIALIZATION:
    N  ===...SSF Security Toolkit name SAPSECULIB .
    N  ===...SSF trace level is 0 .
    N  ===...SSF library is D:\usr\sap\CUS\DVEBMGS04\exe\sapsecu.dll .
    N  ===...SSF hash algorithm is SHA1 .
    N  ===...SSF symmetric encryption algorithm is DES-CBC .
    N  ===...sucessfully completed.
    N  =================================================

    N Wed Aug 31 15:46:23 2011
    N  MskiInitLogonTicketCacheHandle: Logon Ticket cache pointer retrieved from shared memory.
    N  MskiInitLogonTicketCacheHandle: Workprocess runs with Logon Ticket cache.
    M  JrfcVmcRegisterNativesDriver o.k.
    W  =================================================
    W  === ipl_Init() called
    B    dbtran INFO (init_connection '<DEFAULT>' [ORACLE:700.08]):
    B     max_blocking_factor =  15,  max_in_blocking_factor      =   5,
    B     min_blocking_factor =  10,  min_in_blocking_factor      =   5,
    B     prefer_union_all    =   0,  prefer_join                 =   0,
    B     prefer_fix_blocking =   0,  prefer_in_itab_opt          =   1,
    B     convert AVG         =   0,  alias table FUPD            =   0,
    B     escape_as_literal   =   1,  opt GE LE to BETWEEN        =   0,
    B     select *            =0x0f,  character encoding          = STD / <none>:-,
    B     use_hints           = abap->1, dbif->0x1, upto->2147483647, rule_in->0,
    B                           rule_fae->0, concat_fae->0, concat_fae_or->0

    W Wed Aug 31 15:46:24 2011
    W    ITS Plugin: Path dw_gui
    W    ITS Plugin: Description ITS Plugin - ITS rendering DLL
    W    ITS Plugin: sizeof(SAP_UC) 2
    W    ITS Plugin: Release: 700, [7000.0.52.20050900]
    W    ITS Plugin: Int.version, [32]
    W    ITS Plugin: Feature set: [10]
    W    ===... Calling itsp_Init in external dll ===>
    W  === ipl_Init() returns 0, ITSPE_OK: OK
    W  =================================================
    E  Enqueue Info: rdisp/wp_no_enq=1, rdisp/enqname=<empty>, assume crmsys_CUS_04
    E  Replication is disabled
    E  EnqCcInitialize: local lock table initialization o.k.
    E  EnqId_SuppressIpc: local EnqId initialization o.k.
    E  EnqCcInitialize: local enqueue client init o.k.

    B Wed Aug 31 15:46:48 2011
    B  table logging switched off for all clients

    M Wed Aug 31 15:47:55 2011
    M  SecAudit(RsauShmInit): WP attached to existing shared memory.
    M  SecAudit(RsauShmInit): addr of SCSA........... = 05BD0040
    M  SecAudit(RsauShmInit): addr of RSAUSHM........ = 05BD07A8
    M  SecAudit(RsauShmInit): addr of RSAUSLOTINFO... = 05BD07E0
    M  SecAudit(RsauShmInit): addr of RSAUSLOTS...... = 05BD07EC

    A Wed Aug 31 15:48:44 2011
    A  RFC FRFC> fallback on the central gateway crmsys sapgw04 activ

    B Wed Aug 31 15:49:47 2011
    B  dbmyclu : info : my major identification is 3232288873, minor one 4.
    B  dbmyclu : info : Time Reference is 1.12.2001 00:00:00h GMT.
    B  dbmyclu : info : my initial uuid is D98FA690E8AA314D9B69930868792664.
    B  dbmyclu : info : current optimistic cluster level: 0
    B  dbmyclu : info : pessimistic reads set to 2.

  • Synchronous Communication between two Decoupled Orchestrations

    Hi All,
    Is there a way I can achieve synchronous communication between two decoupled orchestrations, where I need to pass multiple parameters from the calling orchestration to the called orchestration?
    Thanks,
    Sumit
    Sumit Verma - MCTS BizTalk 2006/2010 - Please indicate "Mark as Answer" or "Mark as Helpful" if this post has answered the question

    Hi Sumit,
    If you have hundreds of orchestrations, that means a lot of small business processes which are tightly coupled.
    It is better to publish a process/orchestration as a service, so it will be SOA-based and work in isolation. But remember that if you publish many web-based services, it will be a burden on IIS: there is only one host instance to handle them, and we can't create more host instances the way we can with in-process hosts on a single server, so we would have to go for a clustered environment. So the first approach would be a Windows-based service (if the business process can be achieved that way) handled by an in-process host instance.
    If you have many small services, then you need one master service to watch/process all of them.
    Example:
    Process A needs to execute five small services/processes: a1, b1, a2, c1, c3,
    and Process B needs to execute four small services/processes: b1, b2, a1, c1.
    You have to go for a canonical schema and a master entry table where you can dynamically change, attach, or remove small services.
    Regards
    Suman

  • Rac Instance Crashes

    Dear all,
    My version is 11.2.0.2.5. One of my RAC instances crashes with the message "ORA-00240: control file enqueue held for more than 120 seconds" followed by "Received an instance abort message from instance 1".
    Here are the contents of the alert log file:
    IPC Send timeout detected. Receiver ospid 27423 [[email protected] (LMON)]
    2013-03-22 22:30:05.644000 -07:00
    Errors in file /u01/app/oracle/diag/rdbms/lfgoimdb/LFGoimdb2/trace/LFGoimdb2_lmon_27423.trc:
    2013-03-22 22:31:08.734000 -07:00
    Errors in file /u01/app/oracle/diag/rdbms/lfgoimdb/LFGoimdb2/trace/LFGoimdb2_arc2_27691.trc (incident=15905):
    ORA-00240: control file enqueue held for more than 120 seconds
    Incident details in: /u01/app/oracle/diag/rdbms/lfgoimdb/LFGoimdb2/incident/incdir_15905/LFGoimdb2_arc2_27691_i15905.trc
    2013-03-22 22:31:13.409000 -07:00
    Received an instance abort message from instance 1
    Please check instance 1 alert and LMON trace files for detail.
    LMS0 (ospid: 27427): terminating the instance due to error 481
    System state dump requested by (instance=2, osid=27427 (LMS0)), summary=[abnormal instance termination].
    System State dumped to trace file /u01/app/oracle/diag/rdbms/lfgoimdb/LFGoimdb2/trace/LFGoimdb2_diag_27413.trc
    2013-03-22 22:31:18.376000 -07:00
    Dumping diagnostic data in directory=[cdmp_20130322223113], requested by (instance=2, osid=27427 (LMS0)), summary=[abnormal instance termination].
    ORA-1092 : opitsk aborting process
    Instance terminated by LMS0, pid = 27427

    Thanks for the reply.
    My redo log size is the default 50 MB. There is currently no load on the system since we are not using this environment for the time being. The log switches average about 8 per day. I think increasing the redo size will cause further problems, since the archiver may again hold the lock for more time.
    Since there is no dedicated connection between the nodes and the storage, is increasing the hardware and network configuration the only solution to this? Or am I still missing something...
    As far as the configuration is concerned, I cannot add more resources to this environment. How can I solve this issue?

  • How to determine which RAC-instance the appl. is logged onto?

    Dear all,
    I need to have my application server determine which RAC instance it is currently logged onto. I have a tnsnames.ora file with a primary and a secondary RAC instance configured, and failover/failback between the instances works fine. However, I would be interested in determining which instance I am currently using.
    Does the Oracle Net protocol have support for letting me "read" this out, or...?
    Thanks.
    Regards, Eldor R.

    Thank you for the prompt reply.
    Is there, in the Oracle Net protocol, a function for reading out this information directly, without "parsing" the trace file?
    I would like to read this information from my application at run time.
    Thanks.
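
    One simple way to do this from application code is to ask the session itself once it is connected, rather than going through Oracle Net. A minimal JDBC sketch, with hypothetical connection details:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Sketch: ask the current session which RAC instance it landed on.
    // The JDBC URL, user, and password are placeholders.
    public class WhichInstance {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:oracle:thin:@//dbhost:1521/myservice";
            try (Connection con = DriverManager.getConnection(url, "scott", "tiger");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT sys_context('USERENV', 'INSTANCE_NAME') FROM dual")) {
                if (rs.next()) {
                    System.out.println("Connected to instance: " + rs.getString(1));
                }
            }
        }
    }

    Running the same query again on a re-established session after a failover shows the instance currently in use.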

  • New RAC Instance 10.2.0.2 - won't start

    Hi There,
    I am not a DBA, though I have a good understanding of UNIX. For a work project I need to create a RAC instance on Solaris 10 x86 u6... I have installed the software and created the instance using DBCA, but unfortunately it will not start.
    I am receiving the following errors when I run the command below to start the instance:
    bash-3.00$ srvctl start instance -d orcldb -i orcldb1
    PRKP-1001 : Error starting instance orcldb1 on node sol002
    CRS-0215: Could not start resource 'ora.orcldb.orcldb1.inst'.
    ===== alert log ======
    Fri Jul 17 10:24:05 2009
    WARNING: EINVAL creating segment of size 0x000000001a402000
    fix shm parameters in /etc/system or equivalent
    I have the following projects defined for oracle on both nodes in the cluster.
    sol002:
    bash-3.00$ prctl -n project.max-shm-memory -i project oracle
    project: 100: oracle
    NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
    project.max-shm-memory
    privileged 700MB - deny -
    system 16.0EB max deny -
    bash-3.00$ cat /etc/project
    system:0::::
    user.root:1::::
    noproject:2::::
    default:3::::
    group.staff:10::::
    oracle:100::::project.max-shm-memory=(priv,734003200,deny)
    sol003:
    # prctl -n project.max-shm-memory -i project oracle
    project: 100: oracle
    NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
    project.max-shm-memory
    privileged 700MB - deny -
    system 16.0EB max deny -
    # cat /etc/project
    system:0::::
    user.root:1::::
    noproject:2::::
    default:3::::
    group.staff:10::::
    oracle:100::::project.max-shm-memory=(priv,734003200,deny)
    After trying to tweak these values between 700 MB and 4 GB (I only have 1500 MB of RAM), I added some values to /etc/system (which I understand to be deprecated on Solaris 10) as recommended in the post below, due to bug 5340239 on Metalink, but it still did not resolve the problem.
    http://posulliv.com/?cat=12
    When using DBCA to create the database, I specified to use 40% of available system memory. I am not sure where those values get defined and whether that may also be playing a part in this issue.
    Any help would be greatly appreciated.

    bash-3.00$ srvctl start instance -d orcldb -i orcldb1
    PRKP-1001 : Error starting instance orcldb1 on node sol002
    CRS-0215: Could not start resource 'ora.orcldb.orcldb1.inst'.
    Please check the imon* file at $ORACLE_HOME/log/<nodename>/racg/
    Or check by debugging:
    $ export SRVM_TRACE=TRUE
    $ srvctl start instance -d orcldb -i orcldb1
    and show us your error.
    If you have a problem with the project settings, can you make a project for "system" as well,
    giving "system" more memory than "oracle" ("system" can have more physical memory),
    and try to start the instance again? ;)

  • Rconfig: converting a single instance to RAC instance

    Hi,
    I am trying to use the "rconfig" utility to convert a single instance to a RAC instance in an existing RAC cluster.
    I have modified the .xml file, and am trying to run the conversion from the 1st node in the 2 node cluster (where the single instance resides).
    The only error message I seem to be getting is below:
    <Response>
    <Result code="1" >
    Operation Failed
    </Result>
    <ErrorDetails>
    ORCL_DATA_ORCLCLN The specified diskgroup is not mounted.
    </ErrorDetails>
    </Response>
    </Convert>
    </ConvertToRAC></RConfig>
    Now I don't really understand why I would be getting that message, as the instance is up and running and the ASM disk group is mounted on node1 at the time I run the rconfig command. Though it's not clear to me whether I also need to somehow mount the ASM disk group on the second node prior to running the rconfig command?
    node1:
    bash-3.00$ asmcmd -p
    ASMCMD [+] > lsdg
    State Type Rebal Unbal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
    MOUNTED EXTERN N N 512 4096 1048576 10181 7442 0 7442 0 ORCL_DATA_ORCLCLN/
    node2:
    ASMCMD [+] > lsdg
    State Type Rebal Unbal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
    I have attached the output of the alert log during the rconfig conversion of the target database, but it all looks pretty standard to me (keep in mind I am an Oracle novice!).
    alert.log
    Completed: ALTER DATABASE OPEN
    Thu Jul 23 13:51:55 2009
    Shutting down instance (abort)
    License high water mark = 2
    Instance terminated by USER, pid = 15030
    Thu Jul 23 13:51:57 2009
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Interface type 1 e1000g1 10.128.113.0 configured from OCR for use as a cluster interconnect
    Interface type 1 e1000g0 10.128.113.0 configured from OCR for use as a public interface
    Picked latch-free SCN scheme 2
    Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/oracle/product/10.2.0/db_1/dbs/arch
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =18
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.2.0.
    System parameters with non-default values:
    processes = 150
    __shared_pool_size = 121634816
    __large_pool_size = 4194304
    __java_pool_size = 4194304
    __streams_pool_size = 0
    sga_target = 440401920
    control_files = +ORCL_DATA_ORCLCLN/control01.ctl
    db_block_size = 8192
    __db_cache_size = 306184192
    compatible = 10.2.0.2.0
    log_archive_format = %t_%s_%r.dbf
    db_file_multiblock_read_count= 16
    cluster_database = FALSE
    cluster_database_instances= 1
    db_recovery_file_dest_size= 2147483648
    norecovery_through_resetlogs= TRUE
    undo_management = AUTO
    undo_tablespace = UNDOTBS1
    remote_login_passwordfile= EXCLUSIVE
    db_domain = netapp.com
    job_queue_processes = 10
    background_dump_dest = /u01/app/oracle/admin/orcldb/bdump/ORCLCLN
    user_dump_dest = /u01/app/oracle/admin/orcldb/udump/ORCLCLN
    core_dump_dest = /u01/app/oracle/admin/orcldb/cdump/ORCLCLN
    db_name = ORCLCLN
    open_cursors = 300
    pga_aggregate_target = 145752064
    Cluster communication is configured to use the following interface(s) for this instance
    10.128.113.200
    Thu Jul 23 13:51:59 2009
    cluster interconnect IPC version:Oracle UDP/IP (generic)
    IPC Vendor 1 proto 2
    PMON started with pid=2, OS id=15085
    DIAG started with pid=3, OS id=15091
    PSP0 started with pid=4, OS id=15094
    LMON started with pid=5, OS id=15097
    LMD0 started with pid=6, OS id=15102
    MMAN started with pid=7, OS id=15112
    DBW0 started with pid=8, OS id=15114
    LGWR started with pid=9, OS id=15116
    CKPT started with pid=10, OS id=15125
    SMON started with pid=11, OS id=15128
    RECO started with pid=12, OS id=15130
    CJQ0 started with pid=13, OS id=15134
    MMON started with pid=14, OS id=15143
    MMNL started with pid=15, OS id=15146
    Thu Jul 23 13:52:03 2009
    lmon registered with NM - instance id 1 (internal mem no 0)
    Thu Jul 23 13:52:04 2009
    Reconfiguration started (old inc 0, new inc 2)
    List of nodes:
    0
    Global Resource Directory frozen
    * allocate domain 0, invalid = TRUE
    Communication channels reestablished
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    Resources and enqueues cleaned out
    Resources remastered 0
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Post SMON to start 1st pass IR
    Submitted all GCS remote-cache requests
    Post SMON to start 1st pass IR
    Reconfiguration complete
    Thu Jul 23 13:52:04 2009
    ALTER DATABASE MOUNT
    Thu Jul 23 13:52:04 2009
    Starting background process ASMB
    ASMB started with pid=17, OS id=15157
    Starting background process RBAL
    RBAL started with pid=18, OS id=15169
    Thu Jul 23 13:52:09 2009
    SUCCESS: diskgroup ORCL_DATA_ORCLCLN was mounted
    Thu Jul 23 13:52:13 2009
    Setting recovery target incarnation to 2
    Thu Jul 23 13:52:13 2009
    Successful mount of redo thread 1, with mount id 4437636
    Thu Jul 23 13:52:13 2009
    Database mounted in Exclusive Mode
    Completed: ALTER DATABASE MOUNT
    Thu Jul 23 13:52:14 2009
    ALTER DATABASE OPEN
    Thu Jul 23 13:52:14 2009
    Beginning crash recovery of 1 threads
    Thu Jul 23 13:52:14 2009
    Started redo scan
    Thu Jul 23 13:52:14 2009
    Completed redo scan
    105 redo blocks read, 32 data blocks need recovery
    Thu Jul 23 13:52:14 2009
    Started redo application at
    Thread 1: logseq 2, block 929
    Thu Jul 23 13:52:15 2009
    Recovery of Online Redo Log: Thread 1 Group 2 Seq 2 Reading mem 0
    Mem# 0 errs 0: +ORCL_DATA_ORCLCLN/redo_2_1.log
    Mem# 1 errs 0: +ORCL_DATA_ORCLCLN/redo_2_0.log
    Thu Jul 23 13:52:15 2009
    Completed redo application
    Thu Jul 23 13:52:15 2009
    Completed crash recovery at
    Thread 1: logseq 2, block 1034, scn 613579
    32 data blocks read, 25 data blocks written, 105 redo blocks read
    Thu Jul 23 13:52:15 2009
    Thread 1 advanced to log sequence 3
    Thread 1 opened at log sequence 3
    Current log# 1 seq# 3 mem# 0: +ORCL_DATA_ORCLCLN/redo_1_1.log
    Current log# 1 seq# 3 mem# 1: +ORCL_DATA_ORCLCLN/redo_1_0.log
    Successful open of redo thread 1
    Thu Jul 23 13:52:15 2009
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Thu Jul 23 13:52:15 2009
    SMON: enabling cache recovery
    Thu Jul 23 13:52:17 2009
    Successfully onlined Undo Tablespace 1.
    Thu Jul 23 13:52:17 2009
    SMON: enabling tx recovery
    Thu Jul 23 13:52:17 2009
    Database Characterset is WE8ISO8859P1
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=21, OS id=15328
    Thu Jul 23 13:52:23 2009
    Completed: ALTER DATABASE OPEN
    Any help would be greatly appreciated!!!!

    Ok,
    So I managed to get the disk group mounted on the second node, and re-ran the rconfig process.
    I got a little further, but encountered another error which is displayed below:
    -bash-3.00$ rconfig racconv.xml
    <?xml version="1.0" ?>
    <RConfig>
    <ConvertToRAC>
    <Convert>
    <Response>
    <Result code="1" >
    Operation Failed
    </Result>
    <ErrorDetails>
    /u01/app/oracle/product/10.2.0/db_1/dbs Data File is not shared across all nodes in the cluster
    </ErrorDetails>
    </Response>
    </Convert>
    </ConvertToRAC></RConfig>
    I am not using a shared Oracle home; each node in the cluster has its own Oracle installation residing on local disk. Is a shared Oracle home a prerequisite for using rconfig?
    I have provided the .xml file I am using below:
    -bash-3.00$ cat racconv.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <n:RConfig xmlns:n="http://www.oracle.com/rconfig"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.oracle.com/rconfig">
    <n:ConvertToRAC>
    <!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY -->
    <n:Convert verify="YES">
    <!--Specify current OracleHome of non-rac database for SourceDBHome -->
    <n:SourceDBHome>/u01/app/oracle/product/10.2.0/db_1</n:SourceDBHome>
    <!--Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome -->
    <n:TargetDBHome>/u01/app/oracle/product/10.2.0/db_1</n:TargetDBHome>
    <!--Specify SID of non-rac database and credential. User with sysdba role is required to perform conversion -->
    <n:SourceDBInfo SID="ORCLCLN">
    <n:Credentials>
    <n:User>oracle</n:User>
    <n:Password>password</n:Password>
    <n:Role>sysdba</n:Role>
    </n:Credentials>
    </n:SourceDBInfo>
    <!--ASMInfo element is required only if the current non-rac database uses ASM Storage -->
    <n:ASMInfo SID="+ASM1">
    <n:Credentials>
    <n:User>oracle</n:User>
    <n:Password>password</n:Password>
    <n:Role>sysdba</n:Role>
    </n:Credentials>
    </n:ASMInfo>
    <!--Specify the list of nodes that should have rac instances running. LocalNode should be the first node in this nodelist. -->
    <n:NodeList>
    <n:Node name="sol002"/>
    <n:Node name="sol003"/>
    </n:NodeList>
    <!--Specify prefix for rac instances. It can be same as the instance name for non-rac database or different. The instance number will be attached to this prefix. -->
    <n:InstancePrefix>ORCLCLN</n:InstancePrefix>
    <!--Specify port for the listener to be configured for rac database.If port="", alistener existing on localhost will be used for rac database.The listener will be extended to all nodes in the nodelist -->
    <n:Listener port=""/>
    <!--Specify the type of storage to be used by rac database. Allowable values are CFS|ASM. The non-rac database should have same storage type. -->
    <n:SharedStorage type="ASM">
    <!--Specify Database Area Location to be configured for rac database.If this field is left empty, current storage will be used for rac database. For CFS, this field will have directory path. -->
    <n:TargetDatabaseArea></n:TargetDatabaseArea>
    <!--Specify Flash Recovery Area to be configured for rac database. If this field is left empty, current recovery area of non-rac database will be configured for rac database. If current database is not using recovery Area, the resulting rac database will not have a recovery area. -->
    <n:TargetFlashRecoveryArea></n:TargetFlashRecoveryArea>
    </n:SharedStorage>
    </n:Convert>
    </n:ConvertToRAC>
    </n:RConfig>
