Disk replication for shared storage in WebLogic Server

Hi,
Why do we need disk replication in WebLogic Server for shared storage systems? What is the advantage of it, and how can this disk replication be achieved in WebLogic for shared storage that contains the common configuration and software used by a pool of client machines? Please clarify.
Thanks.

Hi,
I am not a middleware expert. However, ACFS (Oracle Cloud File System) is a cluster filesystem that also provides replication functionality:
http://www.oracle.com/technetwork/database/index-100339.html
Maybe you will also find the information you need on the MAA website: www.oracle.com/goto/maa
Regards
Sebastian
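
If ACFS is used for the shared filesystem, its replication is driven with the acfsutil repl commands described in the ACFS documentation. Independent of ACFS, the basic idea - keeping a second, regularly refreshed copy of the shared WebLogic installation and domain configuration so the client machines are not stranded if the primary storage fails - can be illustrated with a periodic rsync. This is only a sketch; the paths and the standby host name are assumptions, not something from this thread:

    # Hypothetical paths/host: mirror the shared install and configuration to a second storage location.
    rsync -az --delete /u01/app/shared_weblogic/ standbyhost:/u01/app/shared_weblogic/
    # Run from cron, e.g. every 15 minutes:
    # */15 * * * * rsync -az --delete /u01/app/shared_weblogic/ standbyhost:/u01/app/shared_weblogic/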

Similar Messages

  • OCR and vote disk Allocation for shared raw storage with Solaris 10 question

    Hi all,
    Current environment is Solaris 10 SPARC 64 bit OS with Hitachi SAN for shared storage and Sun E6900 servers.
    For Oracle 10g RAC (10.2) and ASM, I am setting up the vote disk and OCR files on shared raw SAN storage.
    Assume that I have a 35 GB LUN carved out on a raw device for the vote disk and OCR for a 2-node Oracle 10g RAC cluster. Since the vote disk and OCR only require 120 MB of storage, is there a way to use only a 120 MB slice from the LUN, or do I need to allocate the entire LUN/raw device to the vote disk?
    I am looking for a way to avoid wasting space for the OCR and vote disk with my 10g RAC cluster. We are using raw devices and Oracle 10g RAC Clusterware with the Veritas filesystem for disk management. Can I have several slices on Solaris 10 with raw shared storage (Hitachi) to triple-mirror my OCR and vote disk files?
    It seems odd to have to use an entire 35 GB LUN just for 2 small files: the vote disk and OCR. Is there a way to partition a 120 MB slice on one of the devices and allocate the rest to ASM?

    Hi
    In our RAC environment we have a 1 GB LUN for the OCR and voting disks and a 100 GB LUN for the ASM disk. We have a 4-node RAC environment on AIX with 2 Hitachi SAN storage systems for our application.
    We kept the 1st voting disk on SAN-1, the 2nd voting disk on SAN-2, and the third voting disk on SAN-3.
    The OCR disk is on SAN-1 and the OCR_MIRROR disk is on SAN-2.
    We have the ASM disk with normal redundancy: failure group 1 on SAN-1 and failure group 2 on SAN-2.
    Regards
    Bharat
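
    For reference, on a 10gR2 cluster the extra OCR mirror and voting disk copies described above are registered with the Clusterware command-line tools. A hedged sketch with hypothetical raw-device paths (run as root, and check for your release whether the stack must be down while adding voting disks):

        # Add an OCR mirror on the second SAN (device path is an assumption):
        ocrconfig -replace ocrmirror /dev/rdsk/c3t1d0s1
        # Add further voting disks so each SAN holds a copy:
        crsctl add css votedisk /dev/rdsk/c3t1d0s3
        crsctl add css votedisk /dev/rdsk/c4t1d0s3
        # Verify:
        ocrcheck
        crsctl query css votedisk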

  • Inqmy resource adapter for SAP with Bea Weblogic Server

              Hi everybody,
              Has anybody tried to use the IN-Q-MY resource adapter for SAP with BEA WebLogic Server?
              It works well with the IN-Q-MY server, but with BEA I'm getting a lot of problems creating
              the connections.
              Thanks in advance.
              Xavi.
              

    All,
              Here are the steps we (used for internal testing) had to perform to get
              IN-Q-MY adapter for SAP to work with WebLogic:
              The wli.adapter.inqmy.sapr3.spi package contains extensions to the
              In-Q-My J2EE Connector Architecture classes to overcome some limitations
              in the base implementation classes. To get around these issues, we had
              to extend their R3ManagedConnectionFactory, R3ConnectionManager, and
              R3ConnectionFactory classes.
              * The javax.resource.spi.ManagedConnectionFactory implementation
              does not over-ride the equals and hashCode methods correctly. This
              causes problems with WLS 6.1.
              * There is a bug in their javax.resource.spi.ConnectionManager
              implementation for non-managed uses of the adapter. Consequently, their
              adapter cannot be used in a non-managed scenario.
              * The javax.resource.cci.ConnectionFactory class does not support
              the getConnection() method that does not take any arguments (it throws a
              null pointer exception).
              I am attaching the classes discussed above.
              Cheers,
              Chris
              Torsten Friebe wrote:
              > Hi,
              > does anybody know where to get a trial version - if one exists - of the IN-Q-MY
              > application server or the resource adapter?
              > Thanks, regards
              > Torsten
              > "Xavi" <[email protected]> schrieb im Newsbeitrag
              > news:[email protected]...
              >
              >>Hi everybody,
              >>
              >>Anybody have tried to use INQMY resource adapter for SAP with Bea weblogic
              >>
              > server
              >
              >>?
              >>
              >>It works well with INQMY server, but with BEA I'm getting a lot of
              >>
              > problems creating
              >
              >>the connections.
              >>
              >>Thanks in advance.
              >>Xavi.
              >>
              >>
              >
              >
    // File: R3ConnectionFactory.java
    package wli.adapter.inqmy.sapr3.spi;

    import java.io.Serializable;
    import javax.resource.ResourceException;
    import javax.resource.cci.Connection;
    import javax.resource.cci.ConnectionSpec;
    import javax.resource.spi.ConnectionManager;
    import javax.resource.spi.ConnectionRequestInfo;
    import javax.resource.spi.ManagedConnection;
    import javax.resource.spi.ManagedConnectionFactory;
    import com.inqmy.r3adapter.R3ConnectionSpec;
    import com.inqmy.r3adapter.R3ManagedConnectionFactory;

    /**
     * Extends the In-Q-My implementation to allow for getConnection() with no
     * connection spec, i.e. use the default configured connection parameters.
     */
    public class R3ConnectionFactory
            extends com.inqmy.r3adapter.R3ConnectionFactory
            implements com.bea.connector.IProxyMarker {

        private R3ConnectionSpec m_cspec;

        public R3ConnectionFactory(ConnectionManager cm, R3ManagedConnectionFactory mcf)
                throws ResourceException {
            super(cm, mcf);
            String strClientNumber = mcf.getClientNumber();
            if (strClientNumber == null) {
                throw new javax.resource.spi.IllegalStateException("ClientNumber not set for " + mcf);
            }
            String strLanguage = mcf.getLanguage();
            if (strLanguage == null) {
                throw new javax.resource.spi.IllegalStateException("Language not set for " + mcf);
            }
            String strUserName = mcf.getUserName();
            if (strUserName == null) {
                throw new javax.resource.spi.IllegalStateException("UserName not set for " + mcf);
            }
            String strPassword = mcf.getPassword();
            if (strPassword == null) {
                throw new javax.resource.spi.IllegalStateException("Password not set for " + mcf);
            }
            m_cspec = new R3ConnectionSpec(strClientNumber, strLanguage, strUserName, strPassword);
        }

        /** Fall back to the default connection spec when none is supplied. */
        public Connection getConnection(ConnectionSpec connectionSpec)
                throws ResourceException {
            if (connectionSpec == null) connectionSpec = m_cspec;
            return super.getConnection(connectionSpec);
        }
    }

    // File: R3DefaultConnectionManager.java
    package wli.adapter.inqmy.sapr3.spi;

    import java.io.Serializable;
    import javax.resource.ResourceException;
    import javax.resource.spi.ConnectionManager;
    import javax.resource.spi.ConnectionRequestInfo;
    import javax.resource.spi.ManagedConnection;
    import javax.resource.spi.ManagedConnectionFactory;

    /**
     * Extends the In-Q-My implementation to override the allocateConnection
     * method to return a CCI connection vs. a ManagedConnection.
     */
    public class R3DefaultConnectionManager
            implements ConnectionManager, Serializable {

        public R3DefaultConnectionManager() {}

        public Object allocateConnection(ManagedConnectionFactory mcf, ConnectionRequestInfo cri)
                throws ResourceException {
            ManagedConnection mc = mcf.createManagedConnection(null, cri);
            return mc.getConnection(null, cri);
        }
    }

    // File: R3ManagedConnectionFactory.java
    package wli.adapter.inqmy.sapr3.spi;

    import javax.resource.spi.ConnectionManager;
    import javax.resource.spi.ConnectionRequestInfo;
    import javax.resource.spi.ManagedConnection;
    import javax.security.auth.Subject;

    /**
     * Extends the In-Q-My implementation to get around some problems encountered
     * while running on WebLogic:
     * <ul>
     * <li>Must override the default implementation of the equals and hashCode methods</li>
     * <li>Needed to provide my version of the CCI ConnectionFactory</li>
     * <li>Needed to provide my version of the default ConnectionManager for the
     * non-managed scenario use case</li>
     * </ul>
     */
    public class R3ManagedConnectionFactory
            extends com.inqmy.r3adapter.R3ManagedConnectionFactory {

        private int m_iHashCode;
        private transient com.inqmy.r3adapter.R3ConnectionRequestInfo t_cri = null;

        public R3ManagedConnectionFactory() {
            super();
            java.rmi.server.UID uid = new java.rmi.server.UID();
            m_iHashCode = uid.hashCode();
        }

        public Object createConnectionFactory() {
            // need to install our own default connection manager because the In-Q-My
            // version causes a ClassCastException in CCI ConnectionFactory getConnection
            return createConnectionFactory(new R3DefaultConnectionManager());
        }

        public ManagedConnection createManagedConnection(Subject subject, ConnectionRequestInfo cri)
                throws javax.resource.ResourceException {
            // need to check for null on the ConnectionRequestInfo object because the
            // In-Q-My R3ManagedConnection ctor does not check for null
            if (cri == null) cri = getDefaultConnectionRequestInfo();
            return new com.inqmy.r3adapter.R3ManagedConnection(this, subject, cri);
        }

        public Object createConnectionFactory(ConnectionManager connectionManager) {
            // need to supply a connection factory that can deal with getConnection
            // that does not take a ConnectionSpec
            try {
                return new R3ConnectionFactory(connectionManager, this);
            } catch (javax.resource.ResourceException re) {
                re.printStackTrace();
                throw new java.lang.IllegalStateException(re.getMessage());
            }
        }

        com.inqmy.r3adapter.R3ConnectionRequestInfo getDefaultConnectionRequestInfo()
                throws javax.resource.spi.IllegalStateException {
            if (t_cri == null) {
                String strClientNumber = this.getClientNumber();
                if (strClientNumber == null) {
                    throw new javax.resource.spi.IllegalStateException("ClientNumber not set for " + this);
                }
                String strLanguage = this.getLanguage();
                if (strLanguage == null) {
                    throw new javax.resource.spi.IllegalStateException("Language not set for " + this);
                }
                String strUserName = this.getUserName();
                if (strUserName == null) {
                    throw new javax.resource.spi.IllegalStateException("UserName not set for " + this);
                }
                String strPassword = this.getPassword();
                if (strPassword == null) {
                    throw new javax.resource.spi.IllegalStateException("Password not set for " + this);
                }
                t_cri = new com.inqmy.r3adapter.R3ConnectionRequestInfo(strClientNumber, strLanguage, strUserName, strPassword);
            }
            return t_cri;
        }

        /** The base class does not implement equals/hashCode correctly, which breaks WLS 6.1. */
        public boolean equals(Object obj) {
            if (obj == null) return false;
            if (obj == this) return true;
            if (!this.getClass().isInstance(obj)) return false;
            R3ManagedConnectionFactory mcf = (R3ManagedConnectionFactory) obj;
            return compare(getClientNumber(), mcf.getClientNumber()) &&
                   compare(getLanguage(), mcf.getLanguage()) &&
                   compare(getUserName(), mcf.getUserName()) &&
                   compare(getPassword(), mcf.getPassword()) &&
                   compare(getServerName(), mcf.getServerName()) &&
                   compare(getSystemNumber(), mcf.getSystemNumber());
        }

        /** Null-safe equality check used by equals(). */
        protected final boolean compare(final Object obj1, final Object obj2) {
            if (obj1 == obj2) return true;
            if (obj1 != null) {
                return obj1.equals(obj2);
            } else {
                return obj2 == null;
            }
        }

        public int hashCode() { return m_iHashCode; }
    }
              

  • Searching for shared files over a server network using a client computer

    I think spotlight can only be used to search for files on a local computer. Is there a way to search for shared files on a server from a logged in client computer?

    It should work on shared HFS volumes. You would have to start the search from the Finder, not from the Spotlight menu.

  • DFSr supported cluster configurations - replication between shared storage

    I have a very specific configuration for DFSr that appears to be suffering severe performance issues when hosted on a cluster, as part of a DFS replication group.
    My configuration:
    3 Physical machines (blades) within a physical quadrant.
    3 Physical machines (blades) hosted within a separate physical quadrant
    Both quadrants are extremely well connected, local, 10GBit/s fibre.
    There is local storage in each quadrant, no storage replication takes place.
    The 3 machines in the first quadrant are MS clustered with shared storage LUNs on a 3PAR filer.
    The 3 machines in the second quadrant are also clustered with shared storage, but on a separate 3PAR device.
    8 shared LUNs are presented to the cluster in the first quadrant, and an identical storage layout is connected in the second quadrant. Each LUN has an HAFS application associated with it, which can fail over onto any machine in the local cluster.
    DFS replication groups have been set up for each LUN, and data is replicated from an "Active" cluster node entry point to a "Passive" cluster node that provides no entry point to the data via DFSn and holds a Read-Only copy on its shared cluster storage.
    For the sake of argument, assume that all HAFS application instances in the first quadrant are "Active" in a read/write configuration, and all "Passive" instances of the HAFS applications in the other quadrants are Read-Only.
    This guide: http://blogs.technet.com/b/filecab/archive/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-i.aspx defines how to add a clustered service to a replication group. It clearly shows using "Shared storage" for the cluster, which is common sense; otherwise there is effectively no application fail-over possible, which removes the entire point of using a resilient cluster.
    This article: http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_061 defines the following:
    DFS Replication in Windows Server 2012 and Windows Server 2008 R2 includes the ability to add a failover cluster
    as a member of a replication group. The DFS Replication service on versions of Windows prior to Windows Server 2008 R2
    is not designed to coordinate with a failover cluster, and the service will not fail over to another node.
    It then goes on to state, quite incredibly: DFS Replication does not support replicating files on Cluster Shared Volumes.
    Stating quite simply that DFSr does not support Cluster Shared Volumes makes absolutely no sense at all after stating clusters
    are supported in replication groups and a technet guide is provided to setup and configure this configuration. What possible use is a clustered HAFS solution that has no shared storage between the clustered nodes - none at all.
    My question: I need some clarification - is the text meant to read "between" Cluster Shared Volumes?
    The storage configuration must be shared in order to form a clustered service in the first place. What we are seeing from experience is a serious degradation of performance when attempting to replicate / write data between two clusters running a HAFS configuration, in a DFS replication group.
    If, for instance, as a test, local / logical storage is mounted to a physical machine, the performance of a DFS replication group between the unshared, logical storage on the physical nodes approaches 15k small files per minute on initial write, and is even higher for file amendments. When replicating between two nodes in a cluster with shared clustered storage, the solution manages a weak 2,500 files per minute on initial write and only 260 files per minute when attempting to update data / amend files.
    By testing various configurations we have effectively ruled out the SAN, the storage, drivers, firmware, DFSr configuration, and replication group configuration - the only factor left that makes any difference is replicating from shared clustered storage to another shared clustered storage LUN.
    So in summary:
    Logical Volume ---> Logical Volume = Fast
    Logical Volume ---> Clustered Shared Volume = ??
    Clustered Shared Volume ---> Clustered Shared Volume = Pitifully slow
    Can anyone explain why this might be?
    The guidance in the article is in clear conflict with all other evidence provided around DFSr and clustering, however it seems to lean towards why we may be seeing a real issue with replication performance.
    Many thanks for your time and any help/replies that may be received.
    Paul

    Hello Shaon Shan,
    I am also having the same scenario at one of my customer place.
    We have two file servers running on Hyper-V 2012 R2 as guest VMs using Cluster Shared Volumes. Even the data partition drive is also part of a CSV.
    It's really confusing whether DFS replication on CSV is supported or not, and what the consequences of using it would be.
    To my knowledge we have some customers using Hyper-V 2008 R2 with DFS configured and running fine on CSV for more than 4 years without any issue.
    I would appreciate it if you could elaborate and explain in detail the limitations of using CSV.
    Thanks in advance,
    Abul
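
    One way to quantify the slow replication described above, independent of whether the volumes are CSVs, is to watch the DFSR backlog between the two members. A sketch with placeholder group, folder and server names, run from an elevated PowerShell prompt:

        # Placeholder names - substitute your replication group, replicated folder and member servers.
        dfsrdiag backlog /rgname:"RG-LUN01" /rfname:"Data" /sendingmember:CLUSTERA-NODE1 /receivingmember:CLUSTERB-NODE1
        # A backlog that keeps growing while files-per-minute stays low points at the receiving side.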

  • How to Create Shared Storage using VM-Server 2.1 Red Hat Enterprise Linux 5

    Thanks in advance.
    Describe in sequence how to create shared storage for a two-guest/node Red Hat Enterprise Linux cluster using Oracle VM Server 2.1 on Red Hat Enterprise Linux 5, using the command line or an appropriate interface.
    How do I create shared storage using Oracle VM Server 2.1?
    How do I configure the network for a two-node cluster (Oracle Clusterware)?

    Hi Suresh Kumar,
    Oracle Application Server 10g Release 2, Patch Set 3 (10.1.2.3) is required to be fully certified on OEL 5.x or RHEL 5.x.
    Oracle Application Server 10g Release 2 10.1.2.0.0 or 10.1.2.0.1 versions are not supported with Oracle Enterprise Linux (OEL) 5.0 or Red Hat Enterprise Linux (RHEL) 5.0. It is recommended that version 10.1.2.0.2 be obtained and installed.
    Which implies Oracle AS 10.1.2.x is somewhat certified on RHEL 5.x.
    I think it would be better if you get in touch with Oracle Support regarding this.
    Sorry, I am not aware of any document on migration from Sun Solaris to RH Linux 5.2.
    Thanks,
    Sutirtha
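
    Coming back to the original question about shared storage for two Oracle VM 2.1 guests: one common approach is a file-backed image on the pool's shared /OVS repository, attached to both guests with Xen's shareable-write disk mode. This is only a hedged sketch with invented paths and sizes, so verify it against the Oracle VM 2.1 documentation:

        # On the VM server: create a disk image on the shared repository (path and size are assumptions).
        dd if=/dev/zero of=/OVS/sharedDisk/rac_asm1.img bs=1M count=5120
        # In each guest's vm.cfg, attach the image with the 'w!' (shareable write) mode, e.g.:
        #   disk = [ ..., 'file:/OVS/sharedDisk/rac_asm1.img,xvdb,w!' ]
        # For the Clusterware interconnect, give each guest a second vif on its own bridge, e.g.:
        #   vif = [ 'bridge=xenbr0', 'bridge=xenbr1' ]
        # Restart the guests, then verify the shared device inside each guest:
        fdisk -l /dev/xvdb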

  • Using JDriver for Oracle 8i from Weblogic server 6.1

    Hi
    We have a Solaris machine running WebLogic Server 6.1 SP1 and another Solaris machine running Oracle 8i. I have installed the client libraries on the machine running WLS 6.1 and set LD_SERVER_PATH in WebLogic's setEnv.sh file. When I create a connection pool from the WLS console and restart it, the following error is thrown:
    <Jan 3, 2002 4:48:21 PM GMT-05:00> <Error> <JDBC> <Cannot startup connection pool "OraclePool"
    weblogic.common.ResourceException: Could not create pool connection. The DBMS driver exception was:
    java.sql.SQLException: System.loadLibrary(weblogicoci37) threw java.lang.UnsatisfiedLinkError:
    /oracle8i/CAPS/bea/wlserver6.1/lib/solaris/oci816_8/libweblogicoci37.so: ld.so.1:
    /oracle8i/CAPS/bea/jdk131/jre/bin/../bin/sparc/native_threads/java:
    fatal: /oracle8i/product/8.1.6/lib64/libclntsh.so.8.0: wrong ELF class: ELFCLASS64
    at weblogic.jdbc.oci.Driver.loadLibraryIfNeeded(Driver.java:226)
    at weblogic.jdbc.oci.Driver.connect(Driver.java:76)
    at weblogic.jdbc.common.internal.ConnectionEnvFactory.makeConnection(ConnectionEnvFactory.java:192)
    at weblogic.jdbc.common.internal.ConnectionEnvFactory.createResource(ConnectionEnvFactory.java:134)
    at weblogic.common.internal.ResourceAllocator.makeResources(ResourceAllocator.java:698)
    at weblogic.common.internal.ResourceAllocator.<init>(ResourceAllocator.java:282)
    at weblogic.jdbc.common.internal.ConnectionPool.startup(ConnectionPool.java:629)
    at weblogic.jdbc.common.JDBCService.addDeployment(JDBCService.java:107)
    at weblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentTarget.java:329)
    at weblogic.management.mbeans.custom.DeploymentTarget.addDeployments(DeploymentTarget.java:279)
    at weblogic.management.mbeans.custom.DeploymentTarget.updateServerDeployments(DeploymentTarget.java:233)
    at weblogic.management.mbeans.custom.DeploymentTarget.updateDeployments(DeploymentTarget.java:193)
    at java.lang.reflect.Method.invoke(Native Method)
    at weblogic.management.internal.DynamicMBeanImpl.invokeLocally(DynamicMBeanImpl.java:608)
    at weblogic.management.internal.DynamicMBeanImpl.invoke(DynamicMBeanImpl.java:592)
    at weblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBeanImpl.java:352)
    at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1555)
    at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1523)
    at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:449)
    at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:190)
    at $Proxy33.updateDeployments(Unknown Source)
    at weblogic.management.configuration.ServerMBean_CachingStub.updateDeployments(ServerMBean_CachingStub.java:2734)
    at weblogic.management.mbeans.custom.ApplicationManager.startConfigManager(ApplicationManager.java:362)
    at weblogic.management.mbeans.custom.ApplicationManager.start(ApplicationManager.java:154)
    at java.lang.reflect.Method.invoke(Native Method)
    at weblogic.management.internal.DynamicMBeanImpl.invokeLocally(DynamicMBeanImpl.java:608)
    at weblogic.management.internal.DynamicMBeanImpl.invoke(DynamicMBeanImpl.java:592)
    at weblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBeanImpl.java:352)
    at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1555)
    at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1523)
    at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:449)
    at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:190)
    at $Proxy46.start(Unknown Source)
    at weblogic.management.configuration.ApplicationManagerMBean_CachingStub.start(ApplicationManagerMBean_CachingStub.java:480)
    at weblogic.management.Admin.startApplicationManager(Admin.java:1151)
    at weblogic.management.Admin.finish(Admin.java:570)
    at weblogic.t3.srvr.T3Srvr.start(T3Srvr.java:506)
    at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:203)
    at weblogic.Server.main(Server.java:35)
    at weblogic.jdbc.common.internal.ConnectionEnvFactory.makeConnection(ConnectionEnvFactory.java:208)
    at weblogic.jdbc.common.internal.ConnectionEnvFactory.createResource(ConnectionEnvFactory.java:134)
    at weblogic.common.internal.ResourceAllocator.makeResources(ResourceAllocator.java:698)
    at weblogic.common.internal.ResourceAllocator.<init>(ResourceAllocator.java:282)
    at weblogic.jdbc.common.internal.ConnectionPool.startup(ConnectionPool.java:629)
    at weblogic.jdbc.common.JDBCService.addDeployment(JDBCService.java:107)
    at weblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentTarget.java:329)
    at weblogic.management.mbeans.custom.DeploymentTarget.addDeployments(DeploymentTarget.java:279)
    at weblogic.management.mbeans.custom.DeploymentTarget.updateServerDeployments(DeploymentTarget.java:233)
    at weblogic.management.mbeans.custom.DeploymentTarget.updateDeployments(DeploymentTarget.java:193)
    at java.lang.reflect.Method.invoke(Native Method)
    at weblogic.management.internal.DynamicMBeanImpl.invokeLocally(DynamicMBeanImpl.java:608)
    at weblogic.management.internal.DynamicMBeanImpl.invoke(DynamicMBeanImpl.java:592)
    at weblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBeanImpl.java:352)
    at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1555)
    at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1523)
    at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:449)
    at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:190)
    at $Proxy33.updateDeployments(Unknown Source)
    at weblogic.management.configuration.ServerMBean_CachingStub.updateDeployments(ServerMBean_CachingStub.java:2734)
    at weblogic.management.mbeans.custom.ApplicationManager.startConfigManager(ApplicationManager.java:362)
    at weblogic.management.mbeans.custom.ApplicationManager.start(ApplicationManager.java:154)
    at java.lang.reflect.Method.invoke(Native Method)
    at weblogic.management.internal.DynamicMBeanImpl.invokeLocally(DynamicMBeanImpl.java:608)
    at weblogic.management.internal.DynamicMBeanImpl.invoke(DynamicMBeanImpl.java:592)
    at weblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBeanImpl.java:352)
    at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1555)
    at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1523)
    at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:449)
    at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:190)
    at $Proxy46.start(Unknown Source)
    at weblogic.management.configuration.ApplicationManagerMBean_CachingStub.start(ApplicationManagerMBean_CachingStub.java:480)
    at weblogic.management.Admin.startApplicationManager(Admin.java:1151)
    at weblogic.management.Admin.finish(Admin.java:570)
    at weblogic.t3.srvr.T3Srvr.start(T3Srvr.java:506)
    at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:203)
    at weblogic.Server.main(Server.java:35)
    I would appreciate if you can offer some help.
    Thanks,
    S Gopikrishna

    It works fine for me.
    Script that starts the WLS:
    CTP=t3://wls_ip:port
    JAVA_HOME=/usr/j2se
    WL_HOME=/weblogic61
    ORACLE_HOME=/oracle/oracle_client
    NLS_LANG=american_america.WE8ISO8859P1
    ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
    PATH=$PATH:$JAVA_HOME/bin:/usr/ccs/bin:/usr/ucb:$ORACLE_HOME/bin
    LD_LIBRARY_PATH=$WL_HOME/lib/solaris/oci815_8:$ORACLE_HOME/lib
    export LD_LIBRARY_PATH PATH ORA_NLS33 NLS_LANG ORACLE_HOME
    SET CLASSPATH
    START WLS
    weblogic.properties
    weblogic.jdbc.connectionPool.MyOraPool=\
    url=jdbc:weblogic:oracle,\
    driver=weblogic.jdbc.oci.Driver,\
    loginDelaySecs=1,\
    initialCapacity=3,\
    maxCapacity=9,\
    capacityIncrement=2,\
    allowShrinking=true,\
    shrinkPeriodMins=12,\
    refreshMinutes=10,\
    props=user=orausr;password=orafoousr;server=ORASERVER;weblogic.oci.min_bind_size=1000
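
    For reference, the "wrong ELF class: ELFCLASS64" in the original error means the 32-bit JVM that WLS 6.1 runs in is trying to load the 64-bit libclntsh.so from $ORACLE_HOME/lib64. The script above avoids that because LD_LIBRARY_PATH points at the 32-bit client libraries; a sketch of the relevant lines (paths taken from the original post and may differ on your machine):

        # Point the runtime linker at the 32-bit Oracle client libraries, not lib64.
        ORACLE_HOME=/oracle8i/product/8.1.6
        WL_HOME=/oracle8i/CAPS/bea/wlserver6.1
        LD_LIBRARY_PATH=$WL_HOME/lib/solaris/oci816_8:$ORACLE_HOME/lib
        export ORACLE_HOME WL_HOME LD_LIBRARY_PATH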

  • Location for dll files in weblogic server 6.1 sp2

    "where to keep *.dll files in weblogic server 6.1 sp2". Application is deployed on weblogic server 6.1 sp2. Application is being integrated with webmethod to publish data. For that we need to put awssl40jn.dll in weblogic server. Could not able to locate where to put this dll file.

    You can have the DLLs anywhere; you just need to mention the location in java.library.path.
    For example, if you have your DLLs in C:\bea\lib, you need to mention the following in your command-line args:
    java -Djava.library.path=C:\bea\lib ****Other Command Line Args**** weblogic.Server
    Hope this helps.
    -Kiran
    "Dave Martin" <[email protected]> wrote in message news:<[email protected]>...
    I don't see a DLL like yours in a fresh install of WLS 6.1 SP2.
    This is the complete list of DLLs I find in my fresh install of WLS 6.1 SP2 (starting from the BEA home) on Windows:
    /jdk131/bin/dt_shmem.dll
    /jdk131/bin/dt_socket.dll
    /jdk131/bin/jdwp.dll
    /jdk131/jre/bin/ActPanel.dll
    /jdk131/jre/bin/agent.dll
    /jdk131/jre/bin/awt.dll
    /jdk131/jre/bin/classic/jvm.dll
    /jdk131/jre/bin/cmm.dll
    /jdk131/jre/bin/dcpr.dll
    /jdk131/jre/bin/dt_socket.dll
    /jdk131/jre/bin/fontmanager.dll
    /jdk131/jre/bin/hotspot/jvm.dll
    /jdk131/jre/bin/hpi.dll
    /jdk131/jre/bin/hprof.dll
    /jdk131/jre/bin/ioser12.dll
    /jdk131/jre/bin/java.dll
    /jdk131/jre/bin/jawt.dll
    /jdk131/jre/bin/jcov.dll
    /jdk131/jre/bin/JdbcOdbc.dll
    /jdk131/jre/bin/jdwp.dll
    /jdk131/jre/bin/jpeg.dll
    /jdk131/jre/bin/jpins32.dll
    /jdk131/jre/bin/jpishare.dll
    /jdk131/jre/bin/jsound.dll
    /jdk131/jre/bin/msvcrt.dll
    /jdk131/jre/bin/net.dll
    /jdk131/jre/bin/NPJava11.dll
    /jdk131/jre/bin/NPJava12.dll
    /jdk131/jre/bin/NPJava131.dll
    /jdk131/jre/bin/NPJava32.dll
    /jdk131/jre/bin/NPOJI600.dll
    /jdk131/jre/bin/packager.dll
    /jdk131/jre/bin/server/jvm.dll
    /jdk131/jre/bin/server/jvm_g.dll
    /jdk131/jre/bin/verify.dll
    /jdk131/jre/bin/zip.dll
    /wlserver6.1/bin/fastfile.dll
    /wlserver6.1/bin/iisforward.dll
    /wlserver6.1/bin/iisproxy.dll
    /wlserver6.1/bin/jsafe.dll
    /wlserver6.1/bin/md5.dll
    /wlserver6.1/bin/md5_g.dll
    /wlserver6.1/bin/nodemanager.dll
    /wlserver6.1/bin/oci816_7/weblogicoci37.dll
    /wlserver6.1/bin/oci816_7/weblogicoxa37.dll
    /wlserver6.1/bin/oci816_8/weblogicoci37.dll
    /wlserver6.1/bin/oci816_8/weblogicoxa37.dll
    /wlserver6.1/bin/oci817_8/weblogicoci37.dll
    /wlserver6.1/bin/oci817_8/weblogicoxa37.dll
    /wlserver6.1/bin/oci901_8/weblogicoci37.dll
    /wlserver6.1/bin/oci901_8/weblogicoxa37.dll
    /wlserver6.1/bin/proxy30.dll
    /wlserver6.1/bin/proxy35.dll
    /wlserver6.1/bin/proxy36.dll
    /wlserver6.1/bin/stackdump.dll
    /wlserver6.1/bin/stackdump_g.dll
    /wlserver6.1/bin/terminalio.dll
    /wlserver6.1/bin/wlenv.dll
    /wlserver6.1/bin/wlntio.dll
    /wlserver6.1/bin/wlntio_g.dll
    /wlserver6.1/bin/wlntrealm.dll
    /wlserver6.1/bin/wlntrealm_ms.dll
    /wlserver6.1/uninstaller/resource/iawin32.dll
    /wlserver6.1/uninstaller_servicepack/resource/iawin32.dll

  • Moving shared storage repository between server pools

    Is there a way to move a Fibre Channel storage repository between server pools? The documentation says it's possible only with NFS, but we are moving to a new server pool, and for this we need to be able to take a loaded SR, remove it from one server pool, and discover and use it on another pool.

    Exactly how are you trying to do this? Are you choosing to clone and then running the clone customizer to map your storage and vnics?

  • Encryption for properties file in Weblogic server 6.1

    My application needs to connect to a file server, and I need to encrypt the password in a properties file. Is there a built-in encryption tool in WebLogic Server 6.1 to achieve that?
    Thanks


  • CVU report for shared storage

    During RAC configuration I am running CVU and getting this error message:
    Verification of shared storage accessibility was unsuccessful on all the nodes.
    Need suggestions.

    I don't know. I don't know whether my knowledge is useful in your situation.
    I have really no idea what kind of environment you are using. After pulling teeth, I finally found out you are using EMC - of some sort. I think you might be running Linux, but I'm not quite sure because you don't tell us. I think you have a logical volume visible, and hope that volume is visible as raw devices in a shared environment or via a commercial cluster file system.
    Perhaps you might open a service request with Oracle. Or just assume it is and proceed.
    However - I, as a volunteer, hereby walk away from this question.
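
    For reference, the failing check can also be run on its own so the nodes and devices being tested are explicit. A sketch with placeholder node names (the optional storage-ID flag varies by CVU release, so check cluvfy comp ssa -help first):

        # Re-run just the shared storage accessibility component check (node names are placeholders).
        cluvfy comp ssa -n node1,node2 -verbose
        # Some releases also accept the device being checked, e.g.:
        # cluvfy comp ssa -n node1,node2 -s /dev/rdsk/c2t0d0s4 -verbose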

  • State databases for shared storage

    When adding a disk to a disk set (Solaris Volume Manager), it automatically adds a state database replica to slice 7, and of course does the same for the next disk added.
    Sun Cluster recommends 3 state databases on each internal disk. Is there a recommendation for the shared disks? I know one is added automatically; should there be more?

    No, you don't need to add any normally. SVM will automatically balance the replicas across the disks added to the set. The only time you might want to mess with this default allocation is if you want to create a preferred site. In the case where you have two sites and an even number of disk arrays located across these sites, i.e. nodeA & arrayA in siteA, nodeB & arrayB in siteB, you might want to make site A preferred.
    SVM works on the principle of requiring a majority of replicas to allow a diskset to be imported read/write. Mediators help if you have a system failure and then, at some point later, an array failure, by allowing the internally held mediator to go 'golden'. However, if you have an instantaneous site failure, then the array and the server are lost simultaneously. The mediator on the remaining site does not go golden, so any diskset previously held by the failed site cannot be brought in read/write because only 50% of the replicas are available (on, say, arrayB). You can manually delete some 'dead' replicas and get the set imported, but that's not very highly available. Alternatively, you can add an additional replica to siteB's array at configuration time, and this would give it the majority it needs. However, if site B failed, you would be in the same situation! The best option is to have a 3rd site with some of the storage on it.
    Why does it work this way? Because if you were in the middle of a change to the configuration at the point of failure you might get the wrong info about the disk arrangement on a set switchover and possibly corrupt data. Data corruption is highly undesirable, so SVM protects you by forcing a manual intervention to resolve the issue.
    Don't know if that helps?
    Regards,
    Tim
    ---
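
    To see how SVM has actually laid out the replicas before deciding whether to add more, the set can be inspected directly. A sketch with a hypothetical diskset name:

        # Replicas on the local (boot) disks:
        metadb
        # Replicas, membership and status for a shared diskset (set name is an assumption):
        metadb -s oradg
        metaset -s oradg
        metastat -s oradg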

  • Access Denied Error For Shared Folder with Win Server 2008R2 Task Manager Scheduled Task

    Hi,
    I have scheduled a task with the Task Scheduler. It invokes an .EXE file every 5 minutes.
    The application is supposed to access some files lying on a different server's shared path, process them, and move them across folders on the shared path only.
    Problem: When the .EXE gets executed from the Task Scheduler, I get an "Access Denied to the shared path" error. I have already given Full Control to Everyone as well as to the account with which the task has been configured.
    Another important point to note: if I run the .EXE manually, the solution is able to do everything intended; I don't get any Access Denied error.
    Kindly help me with what needs to be done in order that this issue is resolved. This is really urgent for me.
    Thanks a lot in advance..
    AC

    Hello Alex,
    first of all, make sure your task was created correctly: How to Create Advanced Tasks with the Task Scheduler.
    Please read these:
    TechNet Library: Task Security Context
    TechNet Forums post: How does "Run with the highest privileges" really work in Task Scheduler? - Look at the answer "...When you want to run a program with admin rights from a standard user account, you have to select 'run whether the user is logged on or not' and select a user which is a member of the admin group."
    TechNet Forums post: Log on as batch job right (mentioned in the previous post)
    serverfault: Task Scheduler is not executing the program
    serverfault: unable to schedule a task (access denied)
    UAC: Do you receive the User Account Control "Windows needs your permission to continue" message to approve the scheduled application?
    If yes, maybe the "Run with highest privileges" option will not take precedence over UAC. While Admin Approval Mode for the built-in Administrator account is enabled, UAC will still ask for approval according to the "Behavior of the elevation prompt for administrators" setting. Check whether "User Account Control: Admin Approval Mode for built-in Administrator account" is enabled. If yes, disable it or change "User Account Control: Behavior of the elevation prompt for administrators" to elevate without prompting.
    Local Computer Policy ---> Computer Configuration ---> Windows Settings ---> Security Settings ---> Local Policies ---> Security Options (source: Task Scheduler "run with highest privileges": does not work on Windows Server 2008?)
    Bye,
    Luca
    Disclaimer: This posting is provided AS IS with no warranties or guarantees, and confers no rights. Whenever you see a helpful reply, click on [Vote As Help] and click on [Mark As Answer] if a post answers your question.
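
    For illustration only (names and paths below are placeholders, not from this thread): re-registering the task from the command line with stored credentials and highest privileges matches the "run whether the user is logged on or not" advice above, and the program should reach the share via a UNC path rather than a mapped drive, since drive mappings do not exist in the task's session.

        # Run from an elevated PowerShell prompt; task name, account and paths are placeholders.
        schtasks /Create /TN "MoveSharedFiles" /TR "C:\apps\mover.exe" /SC MINUTE /MO 5 /RU MYDOMAIN\svc_mover /RP * /RL HIGHEST /F
        # /RP * prompts for and stores the password, so the task can run whether the user is logged on or not.
        # Inside the application, use \\fileserver\share\... (UNC) instead of a mapped drive letter.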

  • Use Sparse Bundle Disk Image for sharing library among users?

    Apple's knowledge base article HT1198 (http://support.apple.com/kb/HT1198) on sharing iPhoto libraries among multiple users on the same Macintosh describes using a sparse disk image in the /Users/Shared/ directory. For a Mac that uses a Time Capsule for Time Machine backups, won't this require the entire iPhoto library to be backed up any time a picture is added or modified?
    Would using a sparse bundle disk image instead work better?
    Also, HT1198 doesn't say anything about the "Partitions" parameter setting in Disk Utility when creating a blank sparse image or sparse bundle disk image. Does it matter what setting is selected if the image is being kept on the Mac's internal disk drive?
    Is there any difference between iPhoto '08 and iPhoto '09 when attempting to share the iPhoto library among users?

    I believe it may require the entire bundle to be backed up. You'd best ask that question in the Time Machine forum. They would know more about the ins and outs of TM there.
    If you can afford an external FW hard drive that would be the best option by far. No worry about filling up the sparse bundle, and you could use the external HD as a work platform to help keep a minimum of 20 GB of free space on your boot drive for optimal performance of system and applications.
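
    For what it's worth on the sparse bundle question: a sparse bundle stores its data as many small band files (8 MB each by default), so a backup tool generally only has to copy the bands that changed rather than re-copy one monolithic image file. A sketch of creating one (size, volume name and path are only examples):

        # Example values only - adjust the size, volume name and location.
        hdiutil create -size 100g -type SPARSEBUNDLE -fs HFS+J -volname "iPhoto Library" /Users/Shared/iPhotoLibrary.sparsebundle
        # Mount it, then move the iPhoto library onto the mounted volume as described in HT1198.
        hdiutil attach /Users/Shared/iPhotoLibrary.sparsebundle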

  • Shared Storage is not showing in Server pool physical disk

    Hi, I have added a couple of images to give you an exact picture of what I am doing. Please have a look:
    OVM can see the shared storage from Openfiler:
    http://www.picpanda.com/viewer.php?file=u1pf8hre1oqufkx6vyr3.gif
    OVM Server Pool (physical disk) can see the shared storage name:
    http://www.picpanda.com/viewer.php?file=0h2dn48x6gh5avr6fob.gif
    OVM does not show the physical disk from the shared storage:
    http://www.picpanda.com/viewer.php?file=c3njd7bg4exyjpymsit.gif
    If I remember correctly, when I tried an NFS share from Openfiler it worked before, but this time I am trying an iSCSI target.
    Is there anything I have to do to make it work?
    Thanks

    Fosiul wrote:
    If I remember correctly, when I tried an NFS share from Openfiler it worked before, but this time I am trying an iSCSI target. Is there anything I have to do to make it work?
    Check to see if the physical disk appears under the default volume group for your SAN. If not, refresh the SAN and wait until it does. Also, make sure you have an admin server configured for that SAN and that your Oracle VM servers are in the default access group.
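
    For illustration (the portal address below is a placeholder): before refreshing the storage in the manager, it is worth confirming on each Oracle VM server that the OpenFiler iSCSI target has actually been discovered and logged in to, and that the LUN shows up as a block device.

        # Placeholder portal IP - use the OpenFiler address.
        iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260
        iscsiadm -m node -l          # log in to the discovered targets
        fdisk -l                     # the new LUN should appear as an extra block device
        multipath -ll                # if multipathing is in use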

Maybe you are looking for

  • How to delete un-needed Lion install files

    To recover some extra disk space I am wondering if there are any temporary install files that I can now safely remove.  I did a google search and wasn't able to find anything.  Maybe the Lion install software already cleaned up as part of the process

  • GRC 10.1 to Portal integration steps review

    Dear All Could you let me know the below points for integrating the Portal with GRC 10.1 is correct? a. GRC Portal Plug-in is deployed on Portal b. Check if you can find the Web Service in WS Navigator on your Portal c. Go to "Maintain Configuration

  • Polar plot scale labels and background colour

    Dear Labview Forum, I am trying to create a polar plot.  However when I put the polar plot indicator on my front panel and wire up some data and parameter control boxes, the scale labels are missing.  See polar plot basic.VI.  Are the scale labels th

  • Rollback Segment: dedicate it to a given session ?

    Dear Experts, A job is performing nightly deletion on some data. I created a "big" RBS for it and perform the following: SET TRANSACTION USE ROLLBACK SEGMENT RBS01; DELETE FROM TABLE1 WHERE (...); COMMIT; Is it possible that only this transaction use

  • I know its been asked before, but really, where is the restore button????!!

    OK clever people... lets get this straight: I start up itunes (updated, downloaded this morning) I plug in ipod Then i click on my ipod in the source list I then see all my music appear in a long greyed out list AND, where is the summary tab? What is