Connectivity for SAN replication

I am a newbie in the SAN world, so I hope I am asking the right questions.
We are looking into replacing our aging SAN. As part of the replacement, management is thinking of placing a secondary SAN at one of our remote locations for backup and DR purposes.
My question revolves around providing connectivity between the two SANs. I have been asked to look into a dedicated point-to-point connection for the SAN replication. Not counting the actual SAN hardware, what other hardware would I need? Am I basically building an independent network for this? Why this route instead of over our corporate WAN? Any good recommended reading for learning about the SAN world?
Brent

Hi, Brent,
I agree with Mike. I originally posted this before I saw his reply.
Here are a couple of good Cisco-specific documents to start with:
http://www.cisco.com/en/US/solutions/ns340/ns517/ns224/ns378/net_design_guidance0900aecd800ed146.pdf
http://www.cisco.com/go/storage
Your options are either native FC over some sort of fiber transport (dark fiber, CWDM, DWDM, SONET) or FCIP over a WAN link. Native FC is usually used for synchronous replication at distances up to about 200 km with large bandwidth requirements; FCIP is usually used for asynchronous replication over longer distances with less bandwidth. The transport largely dictates the hardware, and FCIP is generally cheaper.
If you go the FCIP route, a dedicated link is preferable so that replication traffic has predictable latency and bandwidth, but you can share an existing link if you have to; basic QoS is available in that case. Also, FCIP on Cisco devices has special capabilities to help you get the most out of your bandwidth (TCP optimizations, compression, write acceleration, etc.).
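To make the FCIP option concrete, a minimal Cisco MDS-style configuration sketch looks roughly like the following (addresses, interface numbers, and profile IDs are illustrative only; see the documents above for the real procedure):

```
feature fcip
interface GigabitEthernet2/1
  ip address 192.0.2.1 255.255.255.0
  no shutdown
fcip profile 10
  ip address 192.0.2.1
interface fcip1
  use-profile 10
  peer-info ipaddr 192.0.2.2
  no shutdown
```

The same configuration is mirrored on the remote switch with the addresses reversed; the fcip interface then behaves like any other ISL between the two fabrics.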
Regards,
Phil Lowden

Similar Messages

  • 6140 replication - zone configuration for SAN?

    Hi All,
    The following is the setup of two 6140s for replication; I want to know the best practice for SAN zoning in this scenario.
    Primary site:
    6140 Cntrl A and B - port 4 is connected to primary site switch B port 4
    DR site:
    6140 Cntrl A and B - port 4 is connected to dr site switch A port 4
    Primary switch B port 7 is connected to DR switch A port 7, and the link is up.
    Sun documentation says,
    "Make sure the FC Switch(s) has been correctly zoned to allow the arrays to communicate over the SAN"
    So we can create a zone for replication with switch A port 4 (6140 DR, port 4), switch B port 4 (6140 primary, port 4), switch A port 7 (link), and switch B port 7 (link).
    Is this the best way of doing that?
    If I want to mount the replicated (secondary) volumes read-only, should I create a separate zone for the DR site with switch A port 4 (6140 DR, port 4) and my DR host ports on switch A?
    There may be several zoning scenarios; if somebody has come across a best way of doing the zoning, or a simple way of looking at it, please share.
    Thanks for the time,
    Aruna

    Hi maytlsserver
    I usually create an alias in the switch which includes the 6140 controllers, then add that into my zone for the hosts. I'm not doing any replication yet (i.e., 6140 to 6140), but I have multiple 6140s across my locations and this method has worked fine for me.
    David
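    As an illustrative sketch of that alias-plus-zone approach on a Cisco MDS-class switch (the WWNs, names, and VSAN number below are placeholders, not taken from this thread):

```
fcalias name 6140_CTRLS vsan 10
  member pwwn 20:04:00:a0:b8:NN:NN:NN
  member pwwn 20:05:00:a0:b8:NN:NN:NN
zone name REPL_ZONE vsan 10
  member fcalias 6140_CTRLS
zoneset name FABRIC_A vsan 10
  member REPL_ZONE
zoneset activate name FABRIC_A vsan 10
```

    Host zones can then reference the same alias, so a controller port change only has to be made in one place.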

  • Do we need to create two zones for Two HBA for a host connected with SAN ?

    Hi. While creating zones, do we need to create two zones for the two HBAs of a host connected to the SAN, or is one zone enough for a host that has two HBAs? We have two 9124s for our SAN fabric.
    As I found only one zone (below), I am a little confused: if a host has two HBAs connected to the SAN, should I expect two zones for every host?
    From the zone set, I ran the command show zoneset:
    zone name SQLSVR-X-NNN_CX4 vsan 1
        pwwn 50:06:NN:NN:NN:NN:NN:NN
        pwwn 50:06:NN:NN:NN:NN:NN:NN
        pwwn 10:00:NN:NN:NN:NN:NN:NN
    But I found only one zone for the server's HBA2; at the same time, in the fabric I found switches A and B showing the WWNs of those HBAs on their connected N_Ports. It's not just this server; it is the same for all hosts. Can you help clarify this for me, please: do we need to create one zone per HBA?

    If you have two independent fabrics between hosts and storage, I think the configurations below are recommended.
    Scenario 1: 2 HBAs with a single port each (redundancy across HBAs / storage ports)
    HBA1 - port 0 ---------> Fabric A ----------> Storage port ( FAx/CLx )
    HBA2 - port 0 ---------> Fabric B ----------> Storage port ( FAy/CLy )
    Scenario 2: 2 HBAs with dual ports each
    HBA1 - port 0 ---------> Fabric A ---------> Storage port ( FAx/CLx )
    HBA2 - port 0 ---------> Fabric A ---------> Storage port ( FAs/CLs )
    HBA1 - port 1 ---------> Fabric B --------> Storage port ( FAy/CLy )
    HBA2 - port 1 ---------> Fabric B --------> Storage port ( FAt/CLt )
    The zone in your output is in VSAN 1. If that is a production VSAN, note that Cisco does not recommend using VSAN 1 (the default VSAN) for production.
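    If you do need to move production ports off the default VSAN, a minimal NX-OS sketch looks like this (the VSAN number, name, and interfaces are illustrative):

```
vsan database
  vsan 10 name PROD_A
  vsan 10 interface fc1/1
  vsan 10 interface fc1/2
```

    Zones and zonesets are then defined per VSAN, so the zoning above would be recreated in VSAN 10.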

  • Not able to start the Connection Digital Networking Replication Agent Service

    Hi All,
     I am getting the following alert through RTMT, and when I try to start the service in Unity Connection Serviceability, it doesn't start (Unity Connection 8.5.1). It remains in stopped status. Although the service is not important, it is pushing a lot of RTMT alerts. Could you please help me start this Connection Digital Networking Replication Agent service?
    Service operational status is DOWN. Connection Digital Networking Replication Agent.
    Thanks
    Barry

    Thanks to Nadeem and Aokanlawon for the quick reply.
    1. You are right, I have only one cluster, and I understand it is not working. But how come this service was started and working on the pub UC? I will disable it now anyway, since I have only one cluster. It all started when I tried to fix the following alerts, which were being pushed by the sub Unity Connection (branch site):
    Failed to retrieve DbSpace usage information when running the "Monitor the Unity Connection databases" SysAgent task.
    This alert started coming after a network connection drop between the HQ CUCM and the branch sub UC. After the drop, all ports disconnected between the two servers and services were deactivated on the sub UC; after the network connection came back, the ports reconnected and most of the services reactivated. However, the following services were not in the running state on the sub UC. I found this information in the SysAgent trace log. I thought the way to fix the error was to start the Connection Database Proxy on the sub UC; however, this stopped the 'Connection Database Proxy' and 'Connection Digital Networking Replication Agent' services on the pub UC. I managed to start the Connection Database Proxy on the pub UC, but the Connection Digital Networking Replication Agent is now in deactivated status.
    Critical Services
    connection exchange notification web service
    connection mailbox sync
    connection message transfer agent
    connection notifier
    Optional services
    Connection Database Proxy
    Connection Digital Networking Replication Agent
    Connection Speechview processor
    2. How do I stop the following alert from the sub UC?
    Failed to retrieve DbSpace usage information when running the "Monitor the Unity Connection databases" SysAgent task.
    Thanks

  • L3 Link for Storage Replication

    Hi,
    Has anyone had experience provisioning an L3 MPLS link for storage replication? I have two DCs at different locations and EMC storage.
    We will have 2 MPLS L3 links between the two DCs to replicate the data. We have 4 RPAs on each side and 2 MPLS links, but I don't have an idea of how to connect all of these things together. I have a Nexus 7k in the new DC and a 6509 in the old DC.
    If anyone has any idea how to provision these links, it would be a great help to me.

    Hi Micky,
    Such a table doesn't exist in standard SAP! You should try to accomplish this with a view!

  • SMOEAC site settings for material replication

    SRM Gurus,
    While doing the settings for material replication in the SMOEAC transaction,
    I am trying to create a site 'SRM' of type CRM. In the site attributes, if I enter the RFC destination 'SRMCLNT001' and click "get values", I get the following message: "RFC destination SRMCLNT001 does not exist. No valid backend system".
    Can someone please suggest what I need to check/configure to fix this issue?
    Info:
    R/3 side : RFC destination SRMCLNT001 (for connecting to SRM 5.0 system) created and remote logon tested OK.
    Completed all CRM* table settings
    SRM side: RFC destination R3CLNT800 (for connecting to ECC 6.0) created and remote logon tested OK
    Thanks,
    Arun

    Make sure you have created the RFC destination in SRM as well. You need to create the following:
    the RFC destination of SRM in both SRM and R/3, and
    the RFC destination of R/3 in both R/3 and SRM.
    Regards, IA

  • Can i use 3750 for SAN

    hi.
    My question is: can I use the Catalyst 3750 switch for a SAN? And what must I do to make it work?
    Thanks

    The Catalyst 3750 cannot be used for fibre channel SANs at this time. It can be used in conjunction with the MDS9500 switch with an IP storage services module (IPS module) or an SN5428-2 storage router to connect hosts to storage using iSCSI or geographically separate switches using FCIP. The MDS9500 and SN5428-2 have gigabit Ethernet interfaces that could be connected to a 3750 with GigE.

  • DFSR - This member is waiting for initial replication for replicated folder SYSVOL Share - How long should it take?

    Yesterday we were forced to perform a non-authoritative sync of the SYSVOL folder, as replication had stopped because one of the DCs had been disconnected from its replication partner for more than 60 days (caused by an unexpected shutdown; we did not pick up on the fact that replication had stopped until now).
    I performed the non-authoritative sync of the SYSVOL folder and now the folder is in state 2
    ReplicatedFolderName  ReplicationGroupName  State
    SYSVOL Share          Domain System Volume  2
    and has been for more than 12 hours. The DFS Replication health report says "This member is waiting for initial replication for replicated folder SYSVOL Share".
    How long should it take, and is there any way to force it so that replication can resume?

    I'm fairly sure it's not tombstoned.  Here is the DCDIAG output:
    Directory Server Diagnosis
    Performing initial setup:
       Trying to find home server...
       Home Server = RC-CURDC-02
       * Identified AD Forest. 
       Done gathering initial info.
    Doing initial required tests
       Testing server: Default-First-Site-Name\RC-CURDC-02
          Starting test: Connectivity
             ......................... RC-CURDC-02 passed test Connectivity
    Doing primary tests
       Testing server: Default-First-Site-Name\RC-CURDC-02
          Starting test: Advertising
             ......................... RC-CURDC-02 passed test Advertising
          Starting test: FrsEvent
             ......................... RC-CURDC-02 passed test FrsEvent
          Starting test: DFSREvent
             There are warning or error events within the last 24 hours after the
             SYSVOL has been shared.  Failing SYSVOL replication problems may cause
             Group Policy problems. 
             ......................... RC-CURDC-02 passed test DFSREvent
          Starting test: SysVolCheck
             ......................... RC-CURDC-02 passed test SysVolCheck
          Starting test: KccEvent
             ......................... RC-CURDC-02 passed test KccEvent
          Starting test: KnowsOfRoleHolders
             ......................... RC-CURDC-02 passed test KnowsOfRoleHolders
          Starting test: MachineAccount
             ......................... RC-CURDC-02 passed test MachineAccount
          Starting test: NCSecDesc
             ......................... RC-CURDC-02 passed test NCSecDesc
          Starting test: NetLogons
             ......................... RC-CURDC-02 passed test NetLogons
          Starting test: ObjectsReplicated
             ......................... RC-CURDC-02 passed test ObjectsReplicated
          Starting test: Replications
             ......................... RC-CURDC-02 passed test Replications
          Starting test: RidManager
             ......................... RC-CURDC-02 passed test RidManager
          Starting test: Services
             ......................... RC-CURDC-02 passed test Services
          Starting test: SystemLog
             A warning event occurred.  EventID: 0x000003FC
                Time Generated: 06/12/2014   18:26:05
                Event String:
                Scope, 10.59.96.64, is 83 percent full with only 7 IP addresses remaining.
             A warning event occurred.  EventID: 0x000003FC
                Time Generated: 06/12/2014   18:26:05
                Event String:
                Scope, 10.59.98.0, is 95 percent full with only 5 IP addresses remaining.
             A warning event occurred.  EventID: 0x00000560
                Time Generated: 06/12/2014   18:26:05
                Event String:
                IP address range of scope 10.59.96.64 is 83 percent full with only 7 IP addresses available.
             A warning event occurred.  EventID: 0x00000560
                Time Generated: 06/12/2014   18:26:05
                Event String:
                IP address range of scope 10.59.98.0 is 95 percent full with only 5 IP addresses available.
             A warning event occurred.  EventID: 0x000016AF
                Time Generated: 06/12/2014   18:39:43
                Event String:
                During the past 4.23 hours there have been 95 connections to this Domain Controller from client machines whose IP addresses don't map to any of the existing sites in the enterprise. Those clients, therefore, have undefined
    sites and may connect to any Domain Controller including those that are in far distant locations from the clients. A client's site is determined by the mapping of its subnet to one of the existing sites. To move the above clients to one of the sites, please
    consider creating subnet object(s) covering the above IP addresses with mapping to one of the existing sites.  The names and IP addresses of the clients in question have been logged on this computer in the following log file '%SystemRoot%\debug\netlogon.log'
    and, potentially, in the log file '%SystemRoot%\debug\netlogon.bak' created if the former log becomes full. The log(s) may contain additional unrelated debugging information. To filter out the needed information, please search for lines which contain text
    'NO_CLIENT_SITE:'. The first word after this string is the client name and the second word is the client IP address. The maximum size of the log(s) is controlled by the following registry DWORD value 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\LogFileMaxSize';
    the default is 20000000 bytes.  The current maximum size is 20000000 bytes.  To set a different maximum size, create the above registry value and set the desired maximum size in bytes.
             ......................... RC-CURDC-02 passed test SystemLog
          Starting test: VerifyReferences
             ......................... RC-CURDC-02 passed test VerifyReferences
       Running partition tests on : curriculum
          Starting test: CheckSDRefDom
             ......................... curriculum passed test CheckSDRefDom
          Starting test: CrossRefValidation
             ......................... curriculum passed test CrossRefValidation
       Running partition tests on : Schema
          Starting test: CheckSDRefDom
             ......................... Schema passed test CheckSDRefDom
          Starting test: CrossRefValidation
             ......................... Schema passed test CrossRefValidation
       Running partition tests on : Configuration
          Starting test: CheckSDRefDom
             ......................... Configuration passed test CheckSDRefDom
          Starting test: CrossRefValidation
             ......................... Configuration passed test CrossRefValidation
       Running enterprise tests on : riddlesdown.local
          Starting test: LocatorCheck
             ......................... riddlesdown.local passed test LocatorCheck
          Starting test: Intersite
             ......................... riddlesdown.local passed test Intersite
    Everything is passing except for the bits about the SystemLog and DFSR; it all seems good to me.
    Event 2213 is in the logs. I will look at changing MaxOfflineTimeInDays and see if that gets it going.
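    In case it helps anyone searching, these are the standard DFSR WMI commands for the two things mentioned above (the volume GUID is a placeholder; take the real one from the 2213 event text):

```
rem Raise the allowed disconnection window (the default is 60 days)
wmic.exe /namespace:\\root\microsoftdfs path DfsrMachineConfig set MaxOfflineTimeInDays=120

rem Resume replication after event 2213 (dirty shutdown recovery)
wmic.exe /namespace:\\root\microsoftdfs path DfsrVolumeConfig where volumeGuid="<volume GUID>" call ResumeReplication
```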

  • How-to use Cisco DCNM for SAN to manage storage fabric

    I recently purchased DCNM for SAN (and LAN), have installed and licensed it. The software is up and running, and I have installed the necessary features & licenses to each of my Nexus 5596UP devices. Unfortunately, I'm not able to make any changes to the fabric from the DCNM SAN client. Am I missing some steps here? Do I need to have fiber connections in place from end points, and the SAN, in order to see/manage the fabric?
    Thanks in advance!

    Did you do a fabric discovery? Have you set up the proper accounts on the N5k?
    see
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/sw/5_2/configuration/guides/fund/DCNM-SAN-LAN_5_2/DCNM_Fundamentals/fmaaa.html

  • Connecting two san switch in cascade

    I am planning to connect two SAN switches in cascade, because the 2/16 SAN switch I am using now has only one unused port and I plan to add four more servers. So my least expensive solution is to use a second 4/16 switch that I already have.
    - My questions
    How do I do that?
    Do I need a special license?
    Do I need to make any other changes to the port configuration for this purpose?
    Is there anything in particular I should be careful about?
    Thank you
    Ehab

    The answers to your questions are really going to depend on the make and model of your switches.
    Some switch types require a license to configure E_Port (switch-to-switch) functionality, but most do not. Whichever port(s) you use to make the connection will need to either be configured as an E_Port or configured in a mode that will allow it to auto-negotiate itself to E (this could be a GL_Port, Gx_Port, or U_Port depending on switch vendor) after the switches establish communication.
    If the switch that you are adding has some zoning defined on it that is different than the existing switch then you may run into some conflicts so it is often best to just clear the zoning config on the switch that is being added. Finally, you would want to make sure that each switch has a unique Domain ID otherwise the two switches will segment and not merge. By default, most switches are set to Domain ID 1 so just make sure they are both not set to 1.
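    As one concrete example, on Cisco MDS switches the domain ID can be pinned per VSAN so the two switches never collide (the IDs below are illustrative, not from this thread):

```
fcdomain domain 1 static vsan 1    ! existing switch
fcdomain domain 2 static vsan 1    ! switch being added
```

    On Brocade switches the equivalent setting is reached through the configure dialog after disabling the switch.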
    hth.
    -j

  • SAN replication from ASM to NON ASM

    Hi,
    We have a requirement:
    PROD site: 2 RAC nodes on ASM with 11g Release 2. ASM will be on SAN.
    Requirement: for the DR site, create a non-ASM single-instance database to which the SAN from PROD will be replicated. No Data Guard.
    Question: Can we do this? Will the standalone DB be able to read all the data files, given that the DR-site DB will be non-ASM? What does Oracle recommend? Another option would be to create a cron job that applies archive log files to the DR-site database at some interval.
    Regards.

    You cannot do it with SAN replication, because SAN replication will give you an exact replica of the LUN at the storage level, so you will still need ASM to read it.
    You can "simulate" Data Guard instead: restore a database backup onto the standalone database's file system (you need to explicitly set the restore destination directory, otherwise RMAN will try to restore the database onto ASM) and apply archivelogs on a schedule.
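    A minimal RMAN sketch of that restore-to-filesystem approach (the destination path is an illustrative assumption, not from this thread):

```
run {
  set newname for database to '/u01/oradata/DR/%b';
  restore database;
  switch datafile all;
  recover database;
}
```

    Subsequent archivelog application can then be scheduled with RECOVER DATABASE runs as new logs arrive from production.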

  • Getting Error:closing a connection for you. Please help

    Hello All,
    I'm using JBoss 3.2.6 with EJB 2.1 (session beans and a BMP entity bean). I do a few database transactions through CMP and a few over a plain data source connection.
    I'm getting the errors below occasionally:
    No managed connection exception
    java.lang.OutOfMemoryError: Java heap space
    [CachedConnectionManager] Closing a connection for you. Please close them yourself: org.jboss.resource.adapter.jdbc.WrappedConnection@11ed0d5
    My DAO connection code is given below:
    package com.drtrack.util;
    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;
    public class DAOUtil {
        private static DataSource _ds;
        public Connection con;
        public DAOUtil() throws SQLException {
            try {
                if (_ds == null) {
                    assemble();
                }
                if (_ds != null && con == null) {
                    con = _ds.getConnection();
                }
            } catch (SQLException ex) {
                ex.printStackTrace();
            }
        }
        private void assemble() {
            Context ic = null;
            try {
                ic = new InitialContext();
                DrTrackUtil drutil = new DrTrackUtil();
                _ds = (DataSource) ic.lookup("java:/" + drutil.getText("SOURCE_DIR"));
                drutil = null;
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                try {
                    if (ic != null) ic.close();
                } catch (NamingException ne) {}
            }
        }
        public void closeConnection() throws SQLException {
            if (con != null) {
                con.close();
                con = null;
            }
        }
    }
    Below is the code that gets a connection and does a transaction with it:
    public static AccountMasterValueBean getAccountMasterByAcctId(String acctId) {
        AccountMasterValueBean bean = null;
        DAOUtil dao = null;
        CallableStatement cst = null;
        ResultSet rs = null;
        try {
            dao = new DAOUtil();
            cst = dao.con.prepareCall(DrTrackConstants.MSSQL_USP_ACCOUNTMASTER_BY_ACCTID);
            cst.setObject(1, acctId);
            rs = cst.executeQuery();
            if (rs != null && rs.next()) {
                bean = new AccountMasterValueBean(
                    Integer.valueOf(rs.getString("accountkeyid")),
                    rs.getString("latitude"),
                    rs.getString("longitude"));
            }
        } catch (SQLException se) {
            logger.info("SQL Error: " + se);
        } finally {
            if (rs != null) {
                try { rs.close(); } catch (SQLException se) { logger.info("SQL Error: " + se); }
                rs = null;
            }
            if (cst != null) {
                try { cst.close(); } catch (SQLException se) { logger.info("SQL Error: " + se); }
                cst = null;
            }
            if (dao != null) {
                try { dao.closeConnection(); } catch (SQLException se) { logger.info("SQL Error: " + se); }
                dao = null;
            }
        }
        return bean;
    }
    I closed connections, result sets, and statements properly. Why am I getting these errors? Where am I going wrong? Please help me; I have to fix them ASAP.
    Thanks.


  • How do I set miminum # of connections for pool with Oracle and Tomcat?

    Hi,
    I can't seem to find any attribute to initialize the number of connections for my connection pool. Here is my current context.xml file under my /App1 directory:
    <Context path="/App1" docBase="App1"
    debug="5" reloadable="true" crossContext="true">
    <Resource name="App1ConnectionPool" auth="Container"
    type="oracle.jdbc.pool.OracleDataSource"
    driverClassName="oracle.jdbc.driver.OracleDriver"
    factory="oracle.jdbc.pool.OracleDataSourceFactory"
    url="jdbc:oracle:thin:@127.0.0.1:1521:oddjob"
    user="app1" password="app1" />
    </Context>
    I've been googling and reading forums, but I haven't found a way to establish the minimum number of connections. I've tried all sorts of parameters (InitialLimit, MinLimit, MinActive, etc.) with no success.
    Here is some sample code that I am testing:
    package web;
    import oracle.jdbc.pool.OracleDataSource;
    import oracle.jdbc.OracleConnection;
    import javax.naming.*;
    import java.sql.SQLException;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Properties;
    public class ConnectionPool {
        String message = "Not Connected";
        public void init() {
            OracleConnection conn = null;
            ResultSet rst = null;
            Statement stmt = null;
            try {
                Context initContext = new InitialContext();
                Context envContext = (Context) initContext.lookup("java:/comp/env");
                OracleDataSource ds = (OracleDataSource) envContext.lookup("App1ConnectionPool");
                message = "Here.";
                String user = ds.getUser();
                if (envContext == null)
                    throw new Exception("Error: No Context");
                if (ds == null)
                    throw new Exception("Error: No DataSource");
                if (ds != null) {
                    message = "Trying to connect...";
                    conn = (OracleConnection) ds.getConnection();
                    Properties prop = new Properties();
                    prop.put("PROXY_USER_NAME", "adavey/xxx");
                    if (conn != null) {
                        message = "Got Connection " + conn.toString() + ", ";
                        conn.openProxySession(OracleConnection.PROXYTYPE_USER_NAME, prop);
                        stmt = conn.createStatement();
                        rst = stmt.executeQuery("SELECT username, server from v$session where username is not null");
                        while (rst.next()) {
                            message = "DS User: " + user + "; DB User: " + rst.getString(1) + "; Server: " + rst.getString(2);
                        }
                        rst.close();
                        rst = null;
                        stmt.close();
                        stmt = null;
                        conn.close(); // Return to connection pool
                        conn = null;  // Make sure we don't close it twice
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                // Always make sure result sets and statements are closed,
                // and the connection is returned to the pool
                if (rst != null) {
                    try { rst.close(); } catch (SQLException e) {}
                    rst = null;
                }
                if (stmt != null) {
                    try { stmt.close(); } catch (SQLException e) {}
                    stmt = null;
                }
                if (conn != null) {
                    try { conn.close(); } catch (SQLException e) {}
                    conn = null;
                }
            }
        }
        public String getMessage() {
            return message;
        }
    }
    I'm using a utility to repeatedly call a JSP page that uses this class and displays the message variable. The utility allows me to specify the number of concurrent web requests and an overall number of requests to try. While it is running, I look at V$SESSION in Oracle, and occasionally I will see a brief entry for app1 or adavey, depending on the timing of my query and how far along the code has gotten in this example. So it seems that I am only using one connection at a time and not a true connection pool.
    Is it possible that I need to use the OCI driver instead of the thin driver? I've looked at the javadoc for OCI, and OCIConnectionPool has a setPoolConfig method to set initial, min, and max connections. However, it appears that this can only be set via Java code and not as a parameter in my context.xml resource file. If I have to set it each time I get a database connection, that seems to defeat the purpose of having Tomcat maintain the connection pool for me, and it would mean implementing my own connection pool. I'm a newbie to this technology, so I really don't want to go that route.
    Any advice on setting up a proper connection pool that works with Tomcat and Oracle proxy sessions would be greatly appreciated.
    Thanks,
    Alan

    Well, I did some more experiments, and I am able to at least create a connection pool within my example code:
    package web;
    import oracle.jdbc.pool.OracleDataSource;
    import oracle.jdbc.OracleConnection;
    import javax.naming.*;
    import java.sql.SQLException;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Properties;
    public class ConnectionPool {
        String message = "Not Connected";
        public void init() {
            OracleConnection conn = null;
            ResultSet rst = null;
            Statement stmt = null;
            try {
                Context initContext = new InitialContext();
                Context envContext = (Context) initContext.lookup("java:/comp/env");
                OracleDataSource ds = (OracleDataSource) envContext.lookup("App1ConnectionPool");
                message = "Here.";
                String user = ds.getUser();
                if (envContext == null)
                    throw new Exception("Error: No Context");
                if (ds == null)
                    throw new Exception("Error: No DataSource");
                if (ds != null) {
                    message = "Trying to connect...";
                    boolean cache_enabled = ds.getConnectionCachingEnabled();
                    if (!cache_enabled) {
                        ds.setConnectionCachingEnabled(true);
                        Properties cacheProps = new Properties();
                        cacheProps.put("InitialLimit", "5");
                        cacheProps.put("MinLimit", "5");
                        cacheProps.put("MaxLimit", "10");
                        ds.setConnectionCacheProperties(cacheProps);
                    }
                    conn = (OracleConnection) ds.getConnection();
                    Properties prop = new Properties();
                    prop.put("PROXY_USER_NAME", "adavey/xyz");
                    if (conn != null) {
                        message = "Got Connection " + conn.toString() + ", ";
                        conn.openProxySession(OracleConnection.PROXYTYPE_USER_NAME, prop);
                        stmt = conn.createStatement();
                        //rst = stmt.executeQuery("SELECT 'Success obtaining connection' FROM DUAL");
                        rst = stmt.executeQuery("SELECT user, SYS_CONTEXT ('USERENV', 'SESSION_USER') from dual");
                        while (rst.next()) {
                            message = "DS User: " + user + "; DB User: " + rst.getString(1) + "; sys_context: " + rst.getString(2);
                            message += "; Was cache enabled?: " + cache_enabled;
                        }
                        rst.close();
                        rst = null;
                        stmt.close();
                        stmt = null;
                        conn.close(OracleConnection.PROXY_SESSION); // Return to connection pool
                        conn = null; // Make sure we don't close it twice
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                // Always make sure result sets and statements are closed,
                // and the connection is returned to the pool
                if (rst != null) {
                    try { rst.close(); } catch (SQLException e) {}
                    rst = null;
                }
                if (stmt != null) {
                    try { stmt.close(); } catch (SQLException e) {}
                    stmt = null;
                }
                if (conn != null) {
                    try { conn.close(); } catch (SQLException e) {}
                    conn = null;
                }
            }
        }
        public String getMessage() {
            return message;
        }
    }
    In my context.xml file, I tried to specify the same Connection Cache Properties as attributes, but no luck:
    <Context path="/App1" docBase="App1"
    debug="5" reloadable="true" crossContext="true">
    <Resource name="App1ConnectionPool" auth="Container"
    type="oracle.jdbc.pool.OracleDataSource"
    driverClassName="oracle.jdbc.OracleDriver"
    factory="oracle.jdbc.pool.OracleDataSourceFactory"
    url="jdbc:oracle:thin:@127.0.0.1:1521:oddjob"
    user="app1" password="app1"
    ConnectionCachingEnabled="1" MinLimit="5" MaxLimit="20"/>
    </Context>
    These attributes seemed to have no effect:
    ConnectionCachingEnabled="1" ; also tried "true"
    MinLimit="5"
    MaxLimit="20"
    So basically if I could find some way to get these attributes set within the context.xml file instead of my code, I would be a happy developer :-)
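    One thing I may try next (caveat: I haven't verified these attribute names against the OracleDataSourceFactory documentation, so treat them as guesses): supposedly the factory wants bean-style property names starting with a lowercase letter, and the cache limits passed as a single connectionCacheProperties string rather than as individual attributes:

    ```xml
    <Context path="/App1" docBase="App1"
             debug="5" reloadable="true" crossContext="true">
      <Resource name="App1ConnectionPool" auth="Container"
                type="oracle.jdbc.pool.OracleDataSource"
                driverClassName="oracle.jdbc.OracleDriver"
                factory="oracle.jdbc.pool.OracleDataSourceFactory"
                url="jdbc:oracle:thin:@127.0.0.1:1521:oddjob"
                user="app1" password="app1"
                connectionCachingEnabled="true"
                connectionCacheProperties="(InitialLimit=5, MinLimit=5, MaxLimit=20)"/>
    </Context>
    ```

    If that works, the init() code above could drop the programmatic setConnectionCacheProperties() call entirely.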
    Oh well, it's almost Miller time here on the east coast. Maybe a few beers will help me find the solution I'm looking for.

  • Performance degradation using Jolt ASP Connectivity for TUXEDO

    We have a customer that uses Jolt ASP Connectivity for TUXEDO and is suffering
    from a severe performance degradation over time.
    Initial response times are fine (1 s.), but they tend to increase to 3 minutes
    after some time (well, eh, a day or so).
    Data:
    - TUXEDO 7.1
    - Jolt 1.2.1
    - Relatively recent rolling patch installed (so there are probably no JSH performance
    issues or memory leaks like those fixed in earlier patches)
    The ULOG shows that during the night the JSH instances notice a timeout on behalf
    of the client connection and do a forced shutdown of the client:
    040911.csu013.cs.kadaster.nl!JSH.234333.1.-2: JOLT_CAT:1185: "INFO: Userid:
    [ZZ_Webpol], Clientid: [AP_WEBSRV3] timed out due to inactivity"
    040911.csu013.cs.kadaster.nl!JSH.234333.1.-2: JOLT_CAT:1198: "WARN: Forced
    shutdown of client; user name 'ZZ_Webpol'; client name 'AP_WEBSRV3'"
    This happens every 10 minutes as per configuration of the JSL (-T flag).
    The customer "solved" the problem for the time being by increasing the connection
    pool size on the IIS web server.
    However, they didn't find a "smoking gun" - no definite cause for the problem.
    So, it is debatable whether their "solution" suffices.
    It is my suspicion the problem might be located in the Jolt ASP classes running
    on the IIS.
    Maybe the connection pool somehow loses connections over time, causing subsequent
    users to queue before they get served (although an exception should be
    raised if no connections are available).
    However, there's no documentation on the functioning of the connection pool for
    Jolt ASP.
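    Just to make my suspicion concrete, here is a generic sketch in plain Java (purely illustrative; this is not the actual Jolt ASP pool code): if the pool does not validate connections when they are borrowed, the sessions killed by the JSH inactivity timeout stay in the pool as dead handles, and callers drawing them must fail and retry, which shows up as queuing delay.

    ```java
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Illustrative model of a connection pool whose backing sessions the
    // server may kill after an inactivity timeout (like the JSL -T setting).
    public class StalePoolDemo {
        static class Conn {
            boolean alive = true;
            void serverTimeout() { alive = false; } // simulates JSH forced shutdown
        }

        static class Pool {
            private final Deque<Conn> idle = new ArrayDeque<>();
            private final boolean validateOnBorrow;

            Pool(int size, boolean validateOnBorrow) {
                this.validateOnBorrow = validateOnBorrow;
                for (int i = 0; i < size; i++) idle.push(new Conn());
            }

            Conn borrow() {
                while (!idle.isEmpty()) {
                    Conn c = idle.pop();
                    if (!validateOnBorrow || c.alive) return c;
                    // a validating pool silently discards dead handles here
                }
                return new Conn(); // all idle handles dead: re-establish
            }

            void giveBack(Conn c) { idle.push(c); }
        }

        public static void main(String[] args) {
            // Naive pool: the dead session is handed straight back to the caller.
            Pool naive = new Pool(2, false);
            Conn a = naive.borrow();
            a.serverTimeout();       // JSH kills the idle session
            naive.giveBack(a);
            System.out.println("naive pool hands out dead connection: " + !naive.borrow().alive);

            // Validating pool: the dead session is discarded and replaced.
            Pool safe = new Pool(2, true);
            Conn c = safe.borrow();
            c.serverTimeout();
            safe.giveBack(c);
            System.out.println("validating pool hands out live connection: " + safe.borrow().alive);
        }
    }
    ```

    If the real pool behaves like the naive variant, enlarging the pool (as the customer did) merely dilutes the dead handles, which would explain why it helps without fixing the cause.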
    My questions:
    1) What's the algorithm used for managing connections with Jolt ASP for TUXEDO?
    2) If connections are terminated by a JSH, will a new connection be established
    from the web server automatically? (this is especially interesting, because the
    connection policy can be configured in the JSL CLOPT, but there's no info on how
    this should be handled/configured by Jolt ASP connectivity for TUXEDO)
    Regards,
    Winfried Scheulderman

    Hi,
    For ASP connectivity I would suggest looking at the .Net client facility provided in Tuxedo 9.1 and later.
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect

  • We have an AirPort Extreme wifi network in the house, but also want to set up a hardwired Ethernet connection for gaming and streaming Netflix. The computer only sees one or the other, not both at the same time.

    We have a new iMac 2.9 GHz and are running an AirPort Extreme set up in the house with several Express extenders. The problem is that our son is streaming Netflix and doing online gaming and hogging all the bandwidth. A Time Warner tech suggested a hardwired Ethernet connection from the base station next to the computer so that it can have a direct connection to the internet. After much fishing of wire the connection worked great, but the wifi connection for the house is gone. I unplugged the Ethernet connection and everything is fine. I read the articles about adjusting network preferences, but the issue seems to be in the Extreme, not in the computer. The connection goes from cable modem to Extreme, from Extreme to the house, and to the hardwired connection to the other computer. Do we need a splitter before the Extreme?

    No. Something else is going on.
    Your son may be hogging all the bandwidth, but your wireless network should never simply disappear. Moreover, if your son isn't doing anything, the available bandwidth for other devices should remain unaffected.
    I suspect that something is miswired, and from what you describe the likely culprit is the link between the Extreme and the "other computer".
    The way to accomplish what you propose is
    Modem > Ethernet cable to Extreme's WAN port
    Extreme's LAN ports > wired Ethernet devices.
    There should be nothing but an Ethernet cable linking an Extreme LAN port and any other wired device. If you run out of available LAN ports on the Extreme, you need to buy an "Ethernet switch" - they are not expensive, but don't call it a "splitter" or you will only confuse yourself. The switch would be connected to one of the Extreme's LAN ports, and you would connect additional devices to it. You can also use one of your Expresses for that purpose, assuming it is the current generation model with two Ethernet ports.
