Does OWF support a cluster environment?

I have a cluster environment. I have an IP for Node 1 (xxx.xxx.xxx.001), another IP for Node 2 (xxx.xxx.xxx.002), another IP for the cluster (xxx.xxx.xxx.003) and another IP for the database (xxx.xxx.xxx.004).
I have installed OWF on both nodes and I created the OWF repository in Oracle (user OWF_MGR). Is that right?
Does OWF support a cluster environment?
What do I have to configure if I want high availability for Oracle Workflow?
Thanks in advance!

Hi,
Every server node runs in its own Java process.
Every Java process works in its own heap; with normal use it is not possible to share memory between server nodes, and therefore also not the caches that are stored in memory.
The only components that are shared in a Java "cluster" are the SCS instance that contains the Message and Enqueue Server, the DB, and the shared file system /sapmnt.
Try investigating JNI... search for "Java JNI Shared Memory". This will need custom development and seems to be very low-level and complex. If it were easy, I guess SAP would have implemented it in their WAS Java.
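For illustration only, here is a very rough sketch of what the Java side of such a JNI bridge could look like. This is purely hypothetical: the native library name "shmbridge" and its functions are assumptions, and the corresponding C code (e.g. using shmget or mmap) would still have to be written and maintained by you.

public class SharedMemoryBridge {
    static {
        // Hypothetical native library implementing the shared-memory access in C.
        System.loadLibrary("shmbridge");
    }

    // Attach to (or create) a named shared-memory segment and return a handle.
    public native long attach(String segmentName, int sizeBytes);

    // Copy raw bytes into / out of the segment at a given offset.
    public native void write(long handle, int offset, byte[] data);
    public native byte[] read(long handle, int offset, int length);

    // Detach from the segment, e.g. when the server node shuts down.
    public native void detach(long handle);
}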
Also take a look at:
http://iguanaworks.net/~shucker/ppt/JaDiSM.ppt
and
http://iguanaworks.net/~shucker/papers/JaDiSMPaper.doc
Good luck!
Benjamin Houttuin

Similar Messages

  • Does the Appliance support an HA/cluster install?

    Hi
    I want to make the iPrint Appliance work in HA mode... Does the Appliance support HA?
    Which document could provide such information so I can learn the configuration?
    Thanks!
    wyld

    On 09/12/2013 08:16, wyld wrote:
    > I want to make the iPrint Appliance work in HA mode... Does the
    > Appliance support HA?
    Not that I'm aware of.
    > Which document could provide such information so I can learn the
    > configuration?
    What exactly are you wanting to achieve?
    HTH.
    Simon
    Novell Knowledge Partner

  • Will it support a non-cluster environment?

    Dear Coherent,
    How do I test my configuration of Oracle Coherence in a non-clustered Oracle database environment?
    Thanks,
    Biplab.

    There is no relation between the Coherence in-memory data grid and the Oracle DB (with or without RAC).

  • Windows Cluster Environment

    Hi all,
    Do you know if B1 supports a Windows Cluster environment or a Network Load Balancing environment?
    The customer wants to run SAP B1 in a failover environment so that if the B1 server dies, it fails over to a secondary SAP B1 server. Can you let me know if this is a possibility and also what configuration needs to be done on the server side?
    Any advice greatly appreciated.
    Thanks,
    John O'Brien

    Hi John,
    To be able to answer your enquiry, I would like to point you to the Portal > Product Availability > Supported Platforms.
    Link:
    https://websmp206.sap-ag.de/smb/knowledge/
    At this location you will find all the supported platforms: server, client, and database.
    It is recommended to implement all potential new systems in accordance with this list of supported platforms, since they have been tested, verified and passed.
    Hope you will find it helpful.
    Kind regards,
    Willy
    SAP Business One Forums Team

  • TimesTen database in Sun Cluster environment

    Hi,
    Currently we have our application together with the TimesTen database installed at the customer on two different nodes (running on Sun Solaris 10). The second node acts as a backup to provide failover functionality, although right now only manual failover is supported.
    We are now looking into a hot-standby / high availability solution using Sun Cluster software. As understood from the documentation, applications can be 'plugged-in' to the Sun Cluster using Agents to monitor the application. Sun Cluster Agents should be already available for certain applications such as:
    # MySQL
    # Oracle 9i, 10g (HA and RAC)
    # Oracle 9iAS Application Server
    # PostgreSQL
    (See http://www.sun.com/software/solaris/cluster/faq.jsp#q_19)
    Our question is whether Sun Cluster Agents are already (freely) available for TimesTen. If so, where can we find them? If not, should we write a specific agent for TimesTen ourselves, or handle database problems from the application?
    Does someone have any experience using TimesTen in a Sun Cluster environment?
    Thanks in advance!

    Yes, we use 2-way replication, but we don't use cache connect. The replication is created like this on both servers:
    create replication MYDB.REPSCHEME
    element SERVER01_DS datastore
    master MYDB on "SERVER01_REP"
    transmit nondurable
    subscriber MYDB on "SERVER02_REP"
    element SERVER02_DS datastore
    master MYDB on "SERVER02_REP"
    transmit nondurable
    subscriber MYDB on "SERVER01_REP"
    store MYDB on "SERVER01_REP"
    port 16004
    failthreshold 500
    store MYDB on "SERVER02_REP"
    port 16004
    failthreshold 500
    The application runs on SERVER01 and is standby on SERVER02. If an invalid state is detected in the application, the application on SERVER01 is stopped and the application on SERVER02 is started.
    In addition to this, we want to fail over if the database on SERVER01 is in an invalid state. What should we have monitored by the clustering agent to detect an invalid state in TT?
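    As a very rough sketch of what such an agent probe could check, assuming a DSN named MYDB matching the datastore above (the exact ttIsql/ttStatus options and output format differ between TimesTen releases, so treat this only as an illustration, not a verified agent implementation):
    # 1. Can we still connect to the datastore and run a trivial statement?
    ttIsql -connStr "DSN=MYDB" -e "select count(*) from dual; quit;" || exit 1
    # 2. Is the replication agent for this datastore still running?
    ttStatus | grep -q "Replication" || exit 1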

  • IIS proxy 5.1 and WebLogic 6.1 do not support sticky sessions

    Dear Sir,
    Our system is migrating from WebLogic 5.1 to WebLogic 6.1. After testing in the development environment, we found that the IIS proxy plug-in from 5.1 together with the WebLogic 6.1 server is a perfect match for our case, since our application system hit some bugs in the IIS proxy for 6.1. In the development environment, one IIS is matched with one WebLogic server.
    During the production launch, another problem was found. It seems that the IIS proxy 5.1 plug-in with WebLogic 6.1 does not support sticky load balancing. A sticky service is one where a client sends its requests to the same instance and those requests are not redirected to other instances. In production, two IIS servers are matched with two WebLogic servers. Below is our configuration:
    #WebLogicHost=10.0.3.12
    #WebLogicPort=8012
    WebLogicCluster=10.0.3.12:8012,10.0.3.13:8012
    COnnectionTimeoutSecs=10
    ConnectionRetrySecs=2
    ErrorPage=https://www.xxxx.com/eBank/sysnotready.htm
    CookieName=eBankingWebLogicSession
    Does anyone have an idea about our case?
    Thanks,
    KAI

    My test was with 6.1 SP3.
    The way to tell is by analyzing the cookie (JSESSIONID).
    Perhaps the behaviour changed post SP1. I can't say for sure.
    Eric
    "Gary Rudolph" <[email protected]> wrote in message news:[email protected]...
    Is that entirely true concerning not needing the persistence set to replicated in the weblogic.xml to gain sticky load balancing?
    The reason I ask is that in our situation sticky wouldn't work without having the persistence set to replicated. This was with NSAPI and WLS 6.1 SP1. The WebLogic servers were configured in a WebLogic cluster. So, based on this statement we should not have needed to set the persistence, but in practice we did for it to work.
    Gary
    "Eric Gross" <[email protected]> wrote in message
    news:[email protected]...
    I just checked, and you are correct. You just need to have clustering
    enabled in 6.1. You do not necessarily need to have persistence set to
    replicated.
    Of course, you won't get failover, but you will get the sticky load
    balancing.
    Regards,
    Eric
    "Ricky Wong" <[email protected]> wrote in message
    news:[email protected]...
    Why do we need to set session persistence to replicate in order to
    perform
    sticky load balancing ? There is no such requirement in WebLogic 5.1.
    As
    far
    as I know, the IIS plugin simply interprets the value of the sessioncookie,
    which should be embedded with the application server address, then
    forward
    the request to that particular application server.
    We didn't use session replication in our environment because not allsession
    variables are serializable.
    "Eric Gross" <[email protected]> wrote in message
    news:[email protected]...
    The problem you mentionned in the other newsgroup post has been
    fixed
    and
    will be in SP4. If you are in production or nearing production and
    need
    a
    resolution now, then please open a case with support.
    You should not need any other parameters to do the load balancing.
    But
    to
    have the sticky load balancing, you must make sure you have session
    persistence set to replicated for the webapp in question.
    I'm not sure I am understanding your 3rd question.
    In any case, my advice is to either wait for SP4 to bereleased(scheduled
    sometime this month) or if you really need to go into production
    soon,
    contact support to obtain the latest IIS plugin.
    Regards,
    Eric
    "Mike" <[email protected]> wrote in message
    news:[email protected]...
    Dear Eric,
    Thanks very much for you kindly information, but we still have thefollowing issues
    regarding the WL IIS proxy:
    1. We have already tried the IIS proxy that comes with WL6.1 SP3.However, the
    result from that version of IIS proxy is not satisfactory, as weexperienced cases
    where the web page is not displayed correctly (as in
    http://newsgroups.bea.com/cgi-bin/dnewsweb?cmd=article&group=weblogic.develo
    per.interest.plug-in&item=994&utag=).
    If there is any IIS proxy released after WL6.1 SP3, Could you
    kindly
    give
    us
    a pointer to the plugin?
    2. In WL5.1 case, we are only required to have "WebLogicCluster"
    parameter
    set
    to two weblogic servers in order use the load balancing features.
    In
    WL6.1, we
    do not come across any additional settings required to support
    load
    balancing.
    Is there any such settings required (e.g. in
    config.xml,weblogic.xml,
    application.xml,
    etc?)
    3. Does WL IIS proxy problem has anything to do with the version
    of
    the
    IIS server/windows
    versions that are using? we have already tried with IIS4 and IIS5
    and
    have
    different
    kinds of issues.
    Thanks in advance for your kind assistance.
    Mike
    "Eric Gross" <[email protected]> wrote:
    Yes, the session format has changed when using clustering and you
    cannot
    use
    the 5.1 plugin to proxy to 6.1.
    What problems did you have using the 6.1 plugin? Maybe you need
    the
    latest
    6.1 plugin.
    Regards,
    Eric
    "KAI" <[email protected]> wrote in message
    news:[email protected]...
    Dear Sir,
    Our system is migrating from Weblogic 5.1 to Weblogic 6.1.
    After
    testing on
    development environment, it is found that IIS proxy for 5.1
    plug-in
    and
    Weblogic
    6.1 server is perfect match for our case. Since our appliction
    system
    hit
    some
    bugs of IIS proxy for 6.1. In development environment, one IIS
    match
    with
    one
    Weblogic.
    During production launch, another problem found. It seems
    that
    IIS
    proxy 5.1
    plug-in with Weblogic 6.1 does not support the sticky load
    balancing.
    A
    sticky
    service is one where a client sends its requests to the same
    instance
    and
    those
    requests are not redirected to other instances. In production,
    two
    IIS
    match with
    two Weblogic. Below is
    #WebLogicHost=10.0.3.12
    #WebLogicPort=8012
    WebLogicCluster=10.0.3.12:8012,10.0.3.13:8012
    COnnectionTimeoutSecs=10
    ConnectionRetrySecs=2
    ErrorPage=https://www.xxxx.com/eBank/sysnotready.htm
    CookieName=eBankingWebLogicSession
    Anyone have idea on out case?
    Thanks,
    KAI
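    For reference, the session persistence discussed above is configured in weblogic.xml roughly as follows in WLS 6.1 (verify the exact element names against the weblogic.xml DTD of your service pack; as Eric notes, this is only needed for failover, not for stickiness itself):
    <weblogic-web-app>
      <session-descriptor>
        <session-param>
          <param-name>PersistentStoreType</param-name>
          <param-value>replicated</param-value>
        </session-param>
      </session-descriptor>
    </weblogic-web-app>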

  • Deploying Java Web Application (WAR-File) into a cluster environment

    Hi,
    we have a web application which has to read from and write to the file system.
    Since a short time ago we have had a cluster environment (2 parallel servers), and since then we have the problem that files are processed twice in the cluster. The application is now running on both servers, and so we have this problem.
    Does anybody know how we have to deploy the application correctly in a cluster environment or do we have to change anything in our source code of the application?
    I didn't find any documentation about this.
    At the moment we have deployed the application on one of the two servers only, but I think there must be a better way to solve this problem.
    Thanks for your replies.
    Regards
    Thorsten

    Hi,
    I think first you need to wrap it into an EAR file, then you can deploy it.
    As far as I know standalone deployment of WAR is deprecated as of 640.
    similar threads:
    How to deploy .war on NWDI
    Deploying an existing WAR file into the Portal
    Hopefully this tutorial also gives some idea:
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/70/13353094af154a91cbe982d7dd0118/frameset.htm
    Regards,
    Ervin
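    As a rough illustration of the EAR wrapping Ervin mentions, a minimal META-INF/application.xml for such an EAR could look like this (the module name and context root are placeholders; NWDS or the NWDI build would normally generate this descriptor for you):
    <application>
      <display-name>myapp</display-name>
      <module>
        <web>
          <web-uri>myapp.war</web-uri>
          <context-root>/myapp</context-root>
        </web>
      </module>
    </application>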

  • PI 7.1 in a cluster environment (multiple IP addresses): P4 port

    We want to install PI 7.1 on Unix in a cluster environment. Therefore we also installed DEV+QA with virtual hostnames, like the prod system that will be installed later.
    On all sapinst installation screens we used only the virtual hostname <virtual-hostname-server interface>. We have also set SAPINST_USE_HOSTNAME=<virtual-hostname-server interface>. Nevertheless, the P4 port seems to have used the physical hostname: in step 57 of sapinst we got problems, and dev_icm contained:
    [Thr 05] *** ERROR => client with this banner already exists:
    1:<physical-hostname>:35644 {000306f5} [p4_plg_mt.c 2495]
    After we have set
    icm/server_port_1 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-server interface>
    icm/server_port_6 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-user interface>
    icm/server_port_7 = PROT=P4,PORT=5$$04, HOST=<physical hostname>
    icm/server_port_8 = PROT=P4,PORT=5$$04, HOST=127.0.0.1
    the sapinst was successful.
    Now we are not sure how to set these P4 parameters in our future productive cluster environment.
    Our productive system PX1 will live in a HA environment, so we don't want to use the physical hostnames in any profile.
    Our environment will look like:
    HOST-A (<physical-hostname-A>):
    <virtual-hostname-server interface>
    <virtual-hostname-user interface>
    HOST-B (<physical-hostname-B>):
    Normally our prod system will live on HOST-A (physical-hostname-A). All parameters should only use the virtual hostname <virtual-hostname-server interface>. During switchover, the virtual hostnames (server and user interface) will be taken over to HOST-B, while the physical hostnames of HOST-A and HOST-B stay as they are.
    How do the parameters have to be set here? Do the physical hostnames of both cluster nodes also have to be set in the instance profile, e.g.:
    instance profile, e.g:
    icm/server_port_1 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-server interface>
    icm/server_port_6 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-user interface>
    icm/server_port_7 = PROT=P4,PORT=5$$04, HOST=<physical-hostname-A>
    icm/server_port_8 = PROT=P4,PORT=5$$04, HOST=<physical-hostname-B>
    icm/server_port_9 = PROT=P4,PORT=5$$04, HOST=<localhost>
    Any recommendations? Note 1158626 contains some information regarding P4 ports with multiple network interfaces, but it's not 100% clear to us.
    Best regards,
    Uta

    Hi Uta!
    Obviously we are the only human beings in the SAP community having this problem. Nevertheless, let's give it another try with a hopefully simpler problem description (and maybe it will also be helpful to copy and paste this description into the open SAP CSN message).
    So here comes the scenario:
    We have one physical host:
    Physical hostname: physhost
    Physical IP address: 1.1.1.1
    On this physical host there is running one OS: SUN Solaris 10/SPARC
    On top of this we have two virtual hosts where we install two completely independent PI 7.1 instances with separate virtual hostnames, separate virtual IP addresses and separate DB2 9.1 databases. That is, this is not an MCOD installation.
    Virtual Host no. 1 is PI 7.1 Development System:
    Virtual hostname: virthostdev
    Virtual IP address: 2.2.2.2
    Java Port numbers: 512xx
    Virtual Host no. 2 is PI 7.1 QA System:
    Virtual hostname: virthostqa
    Virtual IP address: 3.3.3.3
    Java Port numbers: 522xx
    With this constellation we face serious problems with the P4 port. Currently for example the JSPM for virthostdev does not start, because JSPM cannot connect to the P4 port.
    In SAP note 1158626 we have learned that by default always the physical hostname/IP address is used to address the P4 port and that we have to configure instance profile parameter icm/server_port_xx to avoid this.
    So how do we have to configure the instance profile parameter icm/server_port_xx for both systems to resolve these P4 port conflicts?
    Additionally: Is it important to use distinct server port slot numbers xx in both systems?
    Additionally: Is it possible to configure this parameter with hostnames instead of using IP addresses?
    So far we have tried several combinations, but with each combination at least one or even both systems have problems with that f.... P4 port.
    Please help! Thanx a lot in advance!
    Regards,
    Volker
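    For what it's worth, a minimal sketch of how the two instance profiles might be kept apart, following the same icm/server_port pattern shown earlier in this thread (the port values and slot numbers here are illustrative assumptions, not a verified SAP recommendation; note 1158626 remains the authoritative reference):
    # profile of the DEV instance (binds P4 only to its own virtual host)
    icm/server_port_1 = PROT=P4, PORT=51204, HOST=virthostdev
    # profile of the QA instance (binds P4 only to its own virtual host)
    icm/server_port_1 = PROT=P4, PORT=52204, HOST=virthostqa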

  • Your Client Does not Support Opening this list with Windows Explorer on Windows Server 2012 R2

    I am trying to open a document library in Internet Explorer 11 on a Windows 2012 R2 server and I am receiving "Your client does not support opening this list with Windows Explorer".

    Hi,
    According to your post, my understanding is that you wanted to open a document library with Internet Explorer 11 on a Windows 2012 R2 server, but you received the following error message: "Your client does not support opening this list with Windows Explorer."
    There can be multiple reasons for it, so I recommend you check the following steps:
    1. Go to "Server Manager" in "Administrative Tools" and enable the "Desktop Experience" feature in your environment.
    2. Go to "Services" in "Administrative Tools" and ensure the "WebClient" service is started.
    3. Ensure that your computer has a supported web browser.
    4. Ensure that Internet Explorer is configured correctly: add https://*.sharepoint.com to the Local intranet sites in the "Security" tab of "Internet Options".
    5. Ensure that you have applied the latest updates on the system.
    For more information, you can refer to the following articles:
    http://blogs.technet.com/b/asiasupp/archive/2011/06/13/error-message-quot-your-client-does-not-support-opening-this-list-with-windows-explorer-quot-when-you-try-to-quot-open-with-explorer-quot-on-a-sharepoint-document-library-in-office-365-site.aspx
    http://mcgeeky.blogspot.com/2010/02/your-client-does-not-support-opening.html
    Thanks & Regards,
    Jason
    Jason Guo
    TechNet Community Support

  • Trying to open a sharepoint library in Windows Explorer via IE 8 results in the error "your client does not support opening this list with windows explorer"

    I am attempting to connect to a Microsoft Sharepoint library, via the "Actions -> Open in Windows Explorer" option from the Sharepoint page.  When I do so, I get the error message "your client does not support opening this list with windows explorer".  In short, I am trying to get the "drag and drop" functionality of a Sharepoint library.
    I am running Windows 7 64-bit and IE8.  The target Sharepoint environment is a MOSS 2007 EE.  My Windows "WebClient" service is running, and I have no problem connecting to this exact library via a Windows XP computer running IE8.

    Verify that the WebClient service is set to start automatically in Services.msc.
    Verify that your portal is in the Local intranet security zone (you can check this from the Security tab of Internet Explorer's Internet Options); if not, add it to the list, then restart the WebClient service and check whether it works.
    If not, you have to create a network share pointing to your portal URL directly, not to your document library:
    1. Open Windows Explorer or My Computer from the Windows Start Menu.
    2. From the Tools menu, click Map Network Drive…. A new Map Network Drive window opens.
    3. In the Map Network Drive window, choose an available drive letter from the dropdown list located next to the "Drive:" option. Any drives already mapped will have a shared folder name displayed inside the dropdown list, next to the drive letter.
    4. Type the name of the folder to map, which in this case is your SharePoint portal URL (don't include the document libraries).
    5. Check the "Reconnect at login" checkbox if this network drive should be mapped permanently. Otherwise, this drive will be un-mapped when the user logs out of this computer.
    6. If the remote computer that contains the shared folder requires a different username and password to log in, click the "different user name" hyperlink to enter this information.
    7. Click Finish.
    I am still looking for a more suitable method :)
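    Equivalently, the same mapping can be scripted; a rough sketch (the drive letter and portal URL are placeholders, and the WebClient service must be running for this to work):
    net use Z: "https://portal.example.com/sites/yoursite" /persistent:yes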

  • [unixODBC][Driver Manager]Driver does not support this function {IM001}

    Hello,
    I will start from the end, with details shown below - this is the error message I got in a SQL session:
    SQL> select count(*) from EnergyType@ENERGOPLAN;
    select count(*) from EnergyType@ENERGOPLAN
    ERROR at line 1:
    ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
    [unixODBC][Driver Manager]Driver does not support this function {IM001}
    ORA-02063: preceding 2 lines from ENERGOPLAN
    SQL>
    First question: are Oracle Heterogeneous Services licensed for Standard Edition? I can't find this information, and my database is SE 11.2.0.3.0 - 64bit.
    If that is OK and HS is licensed for SE, then please see the details of my problem:
    ----OS and packages version
    [oracle@aris_sv_db log]$ uname -a
    Linux aris_sv_db 2.6.18-308.24.1.el5 #1 SMP Tue Dec 4 17:43:34 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
    [oracle@aris_sv_db log]$
    [oracle@aris_sv_db log]$ rpm -qa | grep odbc
    [oracle@aris_sv_db log]$ rpm -qa | grep unixodbc
    [oracle@aris_sv_db log]$ rpm -qa | grep unixODBC
    unixODBC-libs-2.2.11-10.el5
    unixODBC-libs-2.2.11-10.el5
    unixODBC-devel-2.2.11-10.el5
    unixODBC-2.2.11-10.el5
    unixODBC-devel-2.2.11-10.el5
    [oracle@aris_sv_db log]$ rpm -qa | grep freetds
    freetds-0.91-1.el5.rf
    [oracle@aris_sv_db log]$
    -----ODBC.INI, ODBCINST.INI and FREETDS.CONF
    [oracle@aris_sv_db log]$ more /home/oracle/.odbc.ini
    [ENERGOPLAN]
    Driver = FreeTDS
    Servername = ENERGOPLAN
    Database = ess2
    [oracle@aris_sv_db log]$
    [oracle@aris_sv_db log]$ more /etc/odbcinst.ini
    # Example driver definitions
    [FreeTDS]
    Description = MSSQL Driver
    Driver = /usr/lib64/libtdsodbc.so.0
    #Setup = /usr/lib64/libtdsodbc.so.0
    #Driver = /usr/lib64/libodbc.so
    #Driver = /usr/lib/libodbc.so
    UsageCount = 1
    Trace = Yes
    TraceFile = /tmp/freetds.log
    [ODBC]
    DEBUG = 1
    TraceFile = /tmp/sqltrace.log
    Trace = Yes
    [oracle@aris_sv_db log]$
    [oracle@aris_sv_db log]$ more /etc/freetds.conf
    # A typical Microsoft server
    [ENERGOPLAN]
    host = 192.168.10.64
    port = 1433
    tds version = 8.0
    # client charset = UTF-8
    client charset = cp1251
    [oracle@aris_sv_db log]$
    ----CHECK CONNECT from ODBC
    [oracle@aris_sv_db log]$ isql -v ENERGOPLAN user pass
    | Connected! |
    | |
    | sql-statement |
    | help [tablename] |
    | quit |
    | |
    SQL> select count(*) from EnergyType;
    | |
    | 8 |
    SQLRowCount returns 1
    1 rows fetched
    SQL> [oracle@aris_sv_db log]$ tsql -S ENERGOPLAN -U user -P pass
    locale is "en_US.UTF-8"
    locale charset is "UTF-8"
    using default charset "cp1251"
    1> select count(*) from EnergyType;
    2> go
    8
    (1 row affected)
    1> [oracle@aris_sv_db log]$
    ----LISTENER.ORA, TNSNAMES and initENERGOPLAN.ora
    [oracle@aris_sv_db log]$ more /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
    # listener.ora Network Configuration File: /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
    # Generated by Oracle configuration tools.
    LISTENER =
    (DESCRIPTION_LIST =
    (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    SID_LIST_ENERGOPLAN =
    (SID_LIST =
    (SID_DESC=
    (SID_NAME=ENERGOPLAN)
    (ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1)
    (PROGRAM=dg4odbc)
    (ENVS="LD_LIBRARY_PATH=/usr/lib64:/u01/app/oracle/product/11.2.0/dbhome_1/lib")
    ENERGOPLAN =
    (DESCRIPTION_LIST =
    (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = PNPKEY))
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.10.72)(PORT = 1523))
    ADR_BASE_LISTENER = /u01/app/oracle
    [oracle@aris_sv_db log]$ more /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora
    # tnsnames.ora Network Configuration File: /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora
    # Generated by Oracle configuration tools.
    ORCL =
    (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
    (SERVER = DEDICATED)
    (SERVICE_NAME = orcl)
    ENERGOPLAN =
    (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.10.72)(PORT=1523))
    (CONNECT_DATA=(SID=ENERGOPLAN))
    (HS=OK)
    [oracle@aris_sv_db log]$ more /u01/app/oracle/product/11.2.0/dbhome_1/hs/admin/initENERGOPLAN.ora
    # This is a sample agent init file that contains the HS parameters that are
    # needed for the Database Gateway for ODBC
    # HS init parameters
    HS_FDS_CONNECT_INFO = ENERGOPLAN
    #HS_FDS_CONNECT_INFO = 192.168.0.199:1433//test
    HS_FDS_TRACE_LEVEL = DEBUG
    #HS_FDS_TRACE_FILE_NAME = /tmp/hs1.log
    HS_FDS_TRACE_FILE_NAME = /u01/app/oracle/product/11.2.0/dbhome_1/hs/log/mytrace.log
    HS_FDS_SHAREABLE_NAME = /usr/lib64/libodbc.so #/usr/lib64/libtdsodbc.so.0
    #HS_FDS_SHAREABLE_NAME = /usr/lib64/libtdsodbc.so.0
    #HS_FDS_SHAREABLE_NAME = /usr/lib/libodbc.so
    #HS_LANGUAGE=american_america.we8iso8859p1
    #HS_LANGUAGE=AMERICAN_AMERICA.AL32UTF8
    #HS_LANGUAGE=AMERICAN_AMERICA.CL8MSWIN1251
    #HS_LANGUAGE=RUSSIAN_RUSSIA.UTF8
    #HS_LANGUAGE=Russian_CIS.AL32UTF-8
    #HS_FDS_FETCH_ROWS=1
    HS_NLS_NCHAR = UCS2
    HS_FDS_SQLLEN_INTERPRETATION=32
    # ODBC specific environment variables
    set ODBCINI=/home/oracle/.odbc.ini
    set ODBCINSTINI=/etc/odbcinst.ini
    #HS_KEEP_REMOTE_COLUMN_SIZE=ALL
    #HS_NLS_LENGTH_SEMANTICS=CHAR
    #HS_FDS_SUPPORT_STATISTICS=FALSE
    # Environment variables required for the non-Oracle system
    #set <envvar>=<value>
    [oracle@aris_sv_db log]$
    [oracle@aris_sv_db log]$ tnsping ENERGOPLAN
    TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 01-APR-2013 16:27:49
    Copyright (c) 1997, 2011, Oracle. All rights reserved.
    Used parameter files:
    /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/sqlnet.ora
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.10.72)(PORT=1523)) (CONNECT_DATA=(SID=ENERGOPLAN)) (HS=OK))
    OK (0 msec)
    [oracle@aris_sv_db log]$
    ----CREATE DBLINK and test from sqlplus
    CREATE DATABASE LINK "ENERGOPLAN" CONNECT TO "user" IDENTIFIED BY "pass" USING 'ENERGOPLAN';
    [oracle@aris_sv_db log]$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Mon Apr 1 16:30:14 2013
    Copyright (c) 1982, 2011, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
    SQL> select count(*) from EnergyType@ENERGOPLAN;
    select count(*) from EnergyType@ENERGOPLAN
    ERROR at line 1:
    ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
    [unixODBC][Driver Manager]Driver does not support this function {IM001}
    ORA-02063: preceding 2 lines from ENERGOPLAN
    SQL>
    ----logs from hs and odbc
    [oracle@aris_sv_db log]$ tail -50 ENERGOPLAN_agt_12117.trc
    12 VARCHAR N 100 100 0/ 0 1000 0 200 ConsumptionYearCostUOM
    3 DECIMAL N 24 24 9/ 3 0 0 0 ConsumptionYearFactorAmount
    -7 BIT N 1 1 0/ 0 0 0 20 NeedToBeApprovedByREK
    Exiting hgodtab, rc=0 at 2013/04/01-16:30:42
    Entered hgodafr, cursor id 0 at 2013/04/01-16:30:42
    Free hoada @ 0x14e5fd20
    Exiting hgodafr, rc=0 at 2013/04/01-16:30:42
    Entered hgopars, cursor id 1 at 2013/04/01-16:30:42
    type:0
    SQL text from hgopars, id=1, len=36 ...
    00: 53454C45 43542043 4F554E54 282A2920 [SELECT COUNT(*) ]
    10: 46524F4D 2022454E 45524759 54595045 [FROM "ENERGYTYPE]
    20: 22204131 [" A1]
    Exiting hgopars, rc=0 at 2013/04/01-16:30:42
    Entered hgoopen, cursor id 1 at 2013/04/01-16:30:42
    hgoopen, line 87: NO hoada to print
    Deferred open until first fetch.
    Exiting hgoopen, rc=0 at 2013/04/01-16:30:42
    Entered hgodscr, cursor id 1 at 2013/04/01-16:30:42
    Allocate hoada @ 0x14e5fd80
    Entered hgodscr_process_sellist_description at 2013/04/01-16:30:42
    Entered hgopcda at 2013/04/01-16:30:42
    Column:1(): dtype:4 (INTEGER), prc/scl:10/0, nullbl:1, octet:0, sign:1, radix:0
    Exiting hgopcda, rc=0 at 2013/04/01-16:30:42
    Entered hgopoer at 2013/04/01-16:30:42
    hgopoer, line 231: got native error 0 and sqlstate IM001; message follows...
    [unixODBC][Driver Manager]Driver does not support this function {IM001}
    Exiting hgopoer, rc=0 at 2013/04/01-16:30:42
    hgodscr, line 407: calling SQLSetStmtAttr got sqlstate IM001
    Free hoada @ 0x14e5fd80
    hgodscr, line 464: NO hoada to print
    Exiting hgodscr, rc=28500 at 2013/04/01-16:30:42 with error ptr FILE:hgodscr.c LINE:407 FUNCTION:hgodscr() ID:Set array fetch size
    Entered hgoclse, cursor id 1 at 2013/04/01-16:31:24
    Exiting hgoclse, rc=0 at 2013/04/01-16:31:24
    Entered hgocomm at 2013/04/01-16:31:24
    keepinfo:0, tflag:1
    00: 4F52434C 2E343535 32623466 342E362E [ORCL.4552b4f4.6.]
    10: 32322E37 363237 [22.7627]
    tbid (len 20) is ...
    00: 4F52434C 5B362E32 322E3736 32375D5B [ORCL[6.22.7627][]
    10: 312E345D [1.4]]
    cmt(0):
    Entered hgocpctx at 2013/04/01-16:31:24
    Exiting hgocpctx, rc=0 at 2013/04/01-16:31:24
    Exiting hgocomm, rc=0 at 2013/04/01-16:31:24
    Entered hgolgof at 2013/04/01-16:31:24
    tflag:1
    Exiting hgolgof, rc=0 at 2013/04/01-16:31:24
    Entered hgoexit at 2013/04/01-16:31:24
    Exiting hgoexit, rc=0
    [oracle@aris_sv_db log]$
    [oracle@aris_sv_db log]$ tail -50 /tmp/sqltrace.log
    Native = 0x7fff6ca974f4
    Message Text = 0x14e5f968
    Buffer Length = 510
    Text Len Ptr = 0x7fff6ca97750
    [ODBC][12117][SQLGetDiagRecW.c][582]
    Exit:[SQL_SUCCESS]
    SQLState = IM001
    Native = 0x7fff6ca974f4 -> 0
    Message Text = [[unixODBC][Driver Manager]Driver does not support this function]
    [ODBC][12117][SQLGetDiagRecW.c][540]
    Entry:
    Statement = 0x14e399f0
    Rec Number = 2
    SQLState = 0x7fff6ca97700
    Native = 0x7fff6ca974f4
    Message Text = 0x14e5f908
    Buffer Length = 510
    Text Len Ptr = 0x7fff6ca97750
    [ODBC][12117][SQLGetDiagRecW.c][582]
    Exit:[SQL_NO_DATA]
    [ODBC][12117][SQLEndTran.c][315]
    Entry:
    Connection = 0x14dbd4b0
    Completion Type = 0
    [ODBC][12117][SQLGetInfo.c][214]
    Entry:
    Connection = 0x14dbd4b0
    Info Type = SQL_CURSOR_COMMIT_BEHAVIOR (23)
    Info Value = 0x7fff6ca9781e
    Buffer Length = 8
    StrLen = 0x7fff6ca9781c
    [ODBC][12117][SQLGetInfo.c][528]
    Exit:[SQL_SUCCESS]
    [ODBC][12117][SQLEndTran.c][488]
    Exit:[SQL_SUCCESS]
    [ODBC][12117][SQLDisconnect.c][204]
    Entry:
    Connection = 0x14dbd4b0
    [ODBC][12117][SQLDisconnect.c][341]
    Exit:[SQL_SUCCESS]
    [ODBC][12117][SQLFreeHandle.c][268]
    Entry:
    Handle Type = 2
    Input Handle = 0x14dbd4b0
    [ODBC][12117][SQLFreeHandle.c][317]
    Exit:[SQL_SUCCESS]
    [ODBC][12117][SQLFreeHandle.c][203]
    Entry:
    Handle Type = 1
    Input Handle = 0x14dbb0c0
    [oracle@aris_sv_db log]$

    To see which ODBC function DG4ODBC is looking for and unixODBC isn't supporting, it would be best to get an ODBC trace file. However, your unixODBC driver manager (unixODBC-2.2.11-10.el5) is outdated, and these old releases had a lot of issues when used on 64-bit operating systems (for example a wrong sizeof(int)). So the best approach would be to update the unixODBC driver manager to release 2.3.x. More details can be found on the website: www.unixodbc.org
    - Klaus
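    A rough sketch of such an update from source (the version number and install prefix are assumptions; check www.unixodbc.org for the current 2.3.x release, and remember to point the gateway at the new driver manager library afterwards):
    tar xzf unixODBC-2.3.1.tar.gz && cd unixODBC-2.3.1
    ./configure --prefix=/usr/local/unixODBC && make && make install
    # then, in initENERGOPLAN.ora:
    # HS_FDS_SHAREABLE_NAME = /usr/local/unixODBC/lib/libodbc.so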

  • Steps to upgrade kernel patch in AIX cluster environment

    Hello All,
    We are going to perform a kernel upgrade in an AIX cluster environment.
    Please let me know the locations to which the new kernel files have to be copied:
    default location
    CI+DB server
    APP1
    Regards
    Subbu

    Hi Subbu
    Refer to the SAP link
    Executing the saproot.sh Script - Java Support Package Manager (OBSOLETE) - SAP Library
    1. Extract the downloaded files to a new location using SAPCAR -xvf <file_name> as <sid>adm.
    2. Copy the extracted files to /sapmnt/<SID>/exe.
    3. Start the DB & Application.
    Regards
    Sriram
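    A rough command-line sketch of steps 1 and 2 above (run as <sid>adm; the archive names and paths are placeholders to be adjusted to your SID and kernel release, and SAP and the database should be stopped on all cluster nodes before copying):
    SAPCAR -xvf SAPEXE_<patch>-<platform>.SAR -R /tmp/kernel_new
    SAPCAR -xvf SAPEXEDB_<patch>-<platform>.SAR -R /tmp/kernel_new
    cp -pr /tmp/kernel_new/* /sapmnt/<SID>/exe/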

  • Installation of CRM 2007 on Windows with Oracle in a cluster environment

    Dear Experts,
    We are about to start the installation of CRM 2007 (ABAP+JAVA) with Oracle 10g on Windows x64 in a cluster environment. In the SAPINST dialog box under the high availability option, I can see installation options like ASCS Instance, SCS Instance, First MSCS Node, Database Instance, Additional MSCS Node, Enqueue Replication Server, Central Instance and Dialog Instance.
    I have gone through the installation guide, but I have the following queries. Can you please clarify them?
    1) Our requirement is an ACTIVE-ACTIVE cluster setup with the SAP service running on Node A and the database running on Node B. Can we have this setup with an ABAP+JAVA installation?
    2) Also, of the SAPINST options listed above, all except the last two (Central Instance and Dialog Instance) are, per the standard installation guide, to be installed on shared drives. The Central and Dialog Instances, however, are to be installed locally on one of the MSCS nodes or on a separate server. As I understand it, the Dialog Instance is used for load balancing, which we do not require now, so I feel it is optional in our case. But I do not understand what the Central Instance option is for. Is it mandatory or optional? If I install it on one of the MSCS nodes, how does a failover affect the system? As per my understanding, ASCS and SCS comprise the central instance.
    Please clarify and thanks in advance.
    Regards,
    Sharath Babu M
    +91-9003072891

    I am following the standard installation guide.
    Regards
    Sharath

  • How to Process Files in Cluster Environment

    Hi all,
    We are facing the below situation, and would like to know your opinions on how to proceed.
    We have a cluster environment ( server a and server b). A ESP Job is picking the files from a windows location and placing it in the unix location( server a or server b).
    The problem is that the ESP job can place the file on only one server. This defeats the basic purpose of the cluster environment (the file will then always be processed by that particular server only).
    If we place the file on both servers, there is a chance that the same file will be processed multiple times.
    Is there a way for the load balancer to direct the file to either one of the servers based on the load of the servers (just like it does with the BPEL processes)?
    Or are there any other suggestions/solutions for this?
    Thanks in Advance !!
    Regards
    Mohan

    Hi,
    Which version of SOA are you using? ... Have a look at this: Re: Duplicate instance created in BPEL

  • Progress bars don't work in a cluster environment

    When you go to the host:port it works but when you go to the link in a clustered environment they stop working.
    Any suggestions?

    I am using JDeveloper 11.1.2.1. I have 3 progress bars on the page; two of them render and the third one does not.
    It renders perfectly in a non-cluster environment. I am not seeing where the problem is.
