Clustering WLS 6.0 and WLS 5.1

Hi,
          Does anybody know if it is possible to cluster a WLS 6.0 server and a WLS 5.1 server?
          Thanks for your help,
          Olivier

No, that's not possible.
          All servers in a cluster must run the same version of WLS and be on
          the same Service Pack level.
          "Olivier Demin" <[email protected]> wrote in message
          news:[email protected]..
          > Hi,
          >
          > Does anybody know if it is possible to cluster a wls6.0 and a wls5.1
          servers
          > ?
          >
          > Thanks for your help,
          > Olivier
          >
          >
          

Similar Messages

  • What is RID in a non-clustered index and its use

    Hi All,
    I need help regarding the following topics on SQL Server:
    1) What is RID in a non-clustered index and what is its use?
    2) What are physical and virtual address space, and what is the difference between 32-bit and 64-bit virtual address space?
    Regards
    Rahul

    Next time, please ask a single question per thread; you will get a better response.
    1. RID is basically a Row ID: the location of a row in a heap. When you create a nonclustered index on a heap and
    a lookup is needed to fetch the remaining columns, the RID is used to locate the rows. That is the basic definition; please read
    this thread for more details.
    2. I have not heard of "physical address space" as a SQL Server term, but I can explain virtual address space (VAS).
    VAS, in simple terms, is the amount of (virtual) memory 'visible' to a process; the process can be the SQL Server process or any other Windows process. Its size depends on the architecture of the operating system: a 32-bit OS gives a process a maximum VAS of 4 GB, because
    a process running on a 32-bit system can address at most 2^32 locations (which is equivalent to 4 GB). Similarly, the theoretical maximum VAS for a 64-bit process is 2^64 bytes; to keep things practical, Windows caps the VAS of a 64-bit process at 8 TB. Now,
    VAS acts as a layer of abstraction, an intermediate step: instead of every request mapping directly to physical memory, it first maps to VAS and is then mapped to physical memory, so that memory requests can be managed in a more coordinated fashion than letting each process
    do it on its own; otherwise memory would soon run short. Any process created on Windows sees virtual memory according to its VAS limit.
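    To make those numbers concrete (simple powers of two; the 8 TB cap is the Windows limit mentioned above, as it stood at the time):
    2^32 bytes = 4,294,967,296 bytes = 4 GB (32-bit VAS limit)
    2^64 bytes = 18,446,744,073,709,551,616 bytes (about 16 EB), of which Windows x64 exposed 8 TB per process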
    Please read
    This Article for detailed information
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • Clustered indexes and deadlocks

    Hi,
    I have run into some problems with clustered indexes and deadlocks. I have found some breadcrumbs about this on the web but didn't really understand everything. I am a DBA by accident and mainly a BI and DWH developer. The article most relevant to the problem
    seems to be the following:
    SQL Server Deadlocks Caused By Clustered Index Scan.
    The database is running with the READ COMMITTED SNAPSHOT transaction isolation level. The second query seems to be the problematic one. First a row is inserted into table SubjectRevisionEntity. Second, a row is inserted into table Partner, which has a foreign key
    on SubjectRevisionEntity. This foreign key is validated using a Clustered Index Seek.
    Having done some research on the topic using the internet my hypothesis is as follows:
    - The insert from Query 1 in Session 1 locks a page in table SubjectRevisionEntity.
    - Now a new session (Session 2) is started. The insert from Query 1 in Session 2 probably locks the same page in table SubjectRevisionEntity.
    - The insert from Query 2 in Session 1 locks a page in table Partner.
      A lock on the page in table SubjectRevisionEntity is necessary to do the Clustered Index Seek, but this page is already locked by Session 2. Session 2 in turn needs to lock the page in table Partner, which is already locked by Session 1 --> a deadlock
    occurs.
    Does this make any sense? At the moment I do not have the means to test the hypothesis, but I will look into that.
    I am just thinking about countermeasures to take. What about configuring the index to avoid page locks? All other queries seem to be fine, I suppose, as they operate on only one table. My colleagues from software engineering favour replacing clustered indexes
    with nonclustered indexes, as they have already done in the past. However, I think the disadvantages of tables without a clustered index (i.e. heaps) regarding storage (forwarded records) and query performance are much bigger than their usefulness for problems like these.
    Regarding the
    article, I did not understand the author's point that two simultaneous table scans on one table by two sessions won't work. I thought that this is no problem, as the sessions would use shared locks on the table.
    Thank you very much for sharing your expertise in advance!
    Martin

    As you describe this that cannot be the cause of your deadlock.
    After session 1 executes query 1, it will have an IX (Intent Exclusive) lock on the clustered index of table SubjectRevisionEntity (note that a lock on a clustered index is a lock on the table since the table is contained in the clustered index), also an
    IX lock on the page in the index where the new row will be inserted and an X (exclusive) lock on the key that you just inserted.
    When session 2 executes it also needs an IX lock on the clustered index of table SubjectRevisionEntity, this is allowed because multiple sessions can have IX locks on the same resource at the same time, also an IX lock on the page in the index where the
    new row will be inserted (also allowed even if this entry is in the same page as the row inserted by session 1), and an X lock on the key that session 2 inserted.  This is also allowed UNLESS session 2 and session 1 are trying to insert a row with the
    SAME primary key value.  From your description, I gather that session 1 and session 2 are trying to insert different keys.
    Then session 1 attempts to insert a row in Partner which has a foreign key reference to the row in SubjectRevisionEntity.  That means it must check for the existence of the row that session 1 inserted.  But it can do this because all it needs is
    an S (shared) lock on the SubjectRevisionEntity table, and an S lock on the page.  It can get those even though session 2 has an IX lock on those resources, because S locks and IX locks are compatible.  It also needs an S lock on the row in SubjectRevisionEntity.
    That is no problem unless it is trying to reference the row which session 2 just entered.  (Once again, I assume this is not the case in your situation?)  It then inserts the row in Partner, getting an IX lock on the Partner table, an IX lock on the
    page and an X lock on the new key in Partner.
    Then session 2 attempts to insert a row in Partner.  That will work unless either it is inserting a row in Partner with the same primary key as the row inserted by session 1 or it is trying to reference the same row in SubjectRevisionEntity that session
    1 inserted.
    So this cannot be the cause of your deadlock unless both sessions are entering the same key values.  The fact that they may both be entering keys on the same page should not be causing your deadlock problems.
    Regarding your question about two scans of the entire table (or of the entire clustered index, if the table has a clustered index): you are correct, the scans do get shared locks and there is no deadlock problem UNLESS both sessions are holding locks
    that are incompatible with S locks.  For example, if you were in read committed mode and session 1 had inserted a row with clustered index key = 47 and session 2 had inserted a row with clustered index key = 23, and both sessions then attempt to do a complete
    clustered index scan (for example, by doing something like SELECT <blah blah> FROM <table> WHERE <some nonindexed column> = 0), then both sessions will try to get S locks on every row, so session 1 will be stopped at key = 23 and session
    2 will be stopped at key = 47, and that is a deadlock.  BUT
    you are using READ COMMITTED SNAPSHOT, not READ COMMITTED.  In READ COMMITTED SNAPSHOT, writes do not block reads.  So the situation above does not apply to you, since neither of the sessions would be blocked because it was attempting to read a
    row which was locked by an update from another session.
    Tom
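    If you do want to test the hypothesis once you have the means, here is a minimal JDBC sketch of the two-session insert sequence described in the question. The connection URL and column names are hypothetical; only the table names SubjectRevisionEntity and Partner come from the thread. As Tom explains, with different keys this should not deadlock on its own; it is just a harness for experimenting with the scenario.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class DeadlockRepro {

        // Hypothetical connection string; adjust to your environment.
        static final String URL =
                "jdbc:sqlserver://localhost;databaseName=TestDb;integratedSecurity=true";

        public static void main(String[] args) throws Exception {
            try (Connection s1 = DriverManager.getConnection(URL);
                 Connection s2 = DriverManager.getConnection(URL)) {
                s1.setAutoCommit(false);   // session 1 holds its locks until commit
                s2.setAutoCommit(false);   // session 2 holds its locks until commit

                // Query 1 in each session: insert the parent row.
                insert(s1, "INSERT INTO SubjectRevisionEntity (Id) VALUES (?)", 47);
                insert(s2, "INSERT INTO SubjectRevisionEntity (Id) VALUES (?)", 23);

                // Query 2 in each session: insert the child row; the foreign key is
                // validated with a clustered index seek on SubjectRevisionEntity.
                insert(s1, "INSERT INTO Partner (Id, SubjectRevisionEntityId) VALUES (?, ?)", 1, 47);
                insert(s2, "INSERT INTO Partner (Id, SubjectRevisionEntityId) VALUES (?, ?)", 2, 23);

                s1.commit();
                s2.commit();
            }
        }

        static void insert(Connection c, String sql, int... params) throws Exception {
            try (PreparedStatement ps = c.prepareStatement(sql)) {
                for (int i = 0; i < params.length; i++) {
                    ps.setInt(i + 1, params[i]);
                }
                ps.executeUpdate();
            }
        }
    }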

  • Install Real Application Clusters (RAC) and Automatic Storage Management (ASM)

    Do I have to install Real Application Clusters (RAC) and Automatic Storage Management (ASM) for ETL?
    I have installed Oracle 10g Release 1 on Windows 2000 with the data warehousing option.
    Is that OK,
    or do I need to install the Oracle database again?
    Please help.

    Greg,
    From what you describe you did exactly the right thing. The runtime platform service will by default run on DWH01, but you can shut it down on that node and have it run on DWH02 (in case you wanted to bring down DWH01 but still run Warehouse Builder processes). Note that we strongly recommend using the net service name in the location registration in the deployment manager (so that you automatically take advantage of the client-side load balancing feature of RAC (as well as the server-side load balancing)).
    Thanks,
    Mark.

  • What is the exact difference between a clustered table and a pooled table

    Hi,
       Can you tell me, Ravi, what is the exact difference between a clustered table and a pooled table?
    With regards,
    anilreddyg

    Hi Anil Reddy
    Pooled Tables, Table Pools, Cluster Tables, and Table Clusters
    These types of tables are not transparent in the sense that they are not legible or manageable directly using the underlying database system tools. They are managed from within the R/3 environment from the ABAP Dictionary and also at runtime, when they are loaded into application memory. Pool and cluster tables are logical tables. Physically, these logical tables are arranged as records of transparent tables. The pool and cluster tables are grouped together in other tables, which are of the transparent type. The tables that group together pool tables are known as table pools, or just pools; similarly, table clusters, or just
    clusters, are the tables which group cluster tables. Not all operations that can be performed over transparent tables can be executed over pool or cluster tables.
    For instance, you can manage these tables using Open SQL calls from ABAP, but not Native SQL. These tables are meant to be buffered and loaded in memory, because they are commonly used for storing internal control information and other types of data with no external (business) relevance. SAP recommends that tables of pool or cluster type be used exclusively for control information such as
    program parameters, documentation, and so on. Transaction and application data should be stored in transparent tables.
    Table Pools
    From the point of view of the underlying DBMS as from the point of view of the ABAP dictionary, a table pool is a transparent table containing a group of pooled tables which, when created, were assigned to this table pool.
    Field     Type       Description
    TABNAME   CHAR(10)   Table name
    VARKEY    CHAR(n)    Maximum key length n <= 110
    DATALN    INT2(5)    Length of the VARDATA record returned
    VARDATA   RAW(m)     Maximum length of the data; varies according to the DBMS
    Table Clusters
    Similarly to pooled tables, cluster tables are logical tables which, when created, are assigned to a table cluster. Therefore, a table cluster, or just cluster, groups together several tables of the cluster type. Several logical rows from different cluster tables are brought together in a single physical record. The records
    from the cluster tables assigned to a cluster are thus stored in a single common table in the database. A cluster contains a transparent cluster key which must be located at the start of the key of all logical cluster tables to be included in the cluster. As well, a cluster contains a long field (VARDATA), which contains the
    data of the cluster tables for this key. If the data does not fit into a field, continuation records are created.
    Field      Type       Description
    CLKEY1     CHAR(*)    First key field
    CLKEY2     CHAR(*)    Second key field
    CLKEYN     CHAR(*)    nth key field
    PAGENO     INT2(5)    Number of the next page
    TIMESTMP   CHAR(14)   Time stamp
    PAGELG     INT2(5)    Length of the VARDATA record returned
    VARDATA    RAW(*)     Maximum length of the data section; varies according to the database system
    Working with Tables
    The dictionary includes many functions for working with tables. There are five basic operations you can perform on tables: display, create, delete, modify, copy. Please do not confuse displaying a table with displaying the table entries (table contents). In order to display a table, it must previously exist; otherwise the system will display an error message in the status bar. For the following example, the table TABNA is used. To display this table, from the main dictionary screen, enter the table name in the Object name
    input field with the radio button selected next to Tables. Then, click on the Display button at the bottom of the screen, or press the F7 function key, or, alternatively,
    select Dictionary object Display from the menu.
    In this screen, you can see table information such as
    - Table type, shown next to the name of the object. In the example, it is a transparent table.
    - Short text description.
    - Name of the user who made the last change, and the date of the change.
    - Master language.
    - Table status. On the screen, you can see this table is saved and active.
    - Development class. For information on development classes, refer to Chap. 6.
    - Delivery class, which sets the maintenance group for the table. It controls how tables will behave during client copy procedures, upgrades, and so forth.
    - Tab. Maint. Allowed flag, which indicates whether you can generate a screen for maintaining table entries.
    Then, on the lower part of the screen, you can see the table fields with all associated characteristics such as:
    - Field name.
    - Key indicator. When set, this field is the primary key, or part of it.
    - Data element.
    - Basic data type.
    - Length.
    - Check table.
    - Short text, describing the field.
    Additional information about the table can be displayed by selecting the corresponding functions from the menu or directly from the application toolbar, such as keys, indexes, or technical settings.
    Standard table:
    The key access to a standard table uses a sequential search. The time required for an access is linearly dependent on the number of entries in the internal table.
    You should usually access a standard table with index operations.
    Sorted table:
    The table is always stored internally sorted by its key. Key access to a sorted table can therefore use a binary search. If the key is not unique, the entry with the lowest index is accessed. The time required for an access is logarithmically dependent on the number of entries in the internal table.
    Index accesses to sorted tables are also allowed. You should usually access a sorted table using its key.
    Hash table:
    The table is internally managed with a hash procedure. All the entries must have a unique key. The time required for a key access is constant; that is, it does not depend on the number of entries in the internal table.
    You cannot access a hash table with an index. Accesses must use generic key operations (SORT, LOOP, etc.).
    Index table:
    The table can be a standard table or a sorted table.
    Index access is allowed to such an index table. Index tables can be used to define the type of generic parameters of a FORM (subroutine) or a function module.
    Just have a look at these links:
    http://help.sap.com/saphelp_nw04/helpdata/en/90/8d7304b1af11d194f600a0c929b3c3/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/74/83015785d811d295a800a0c929b3c3/frameset.htm
    Regards
    Sreeni

  • Function modules to read Time clusters B1 and B2 from PCL1 and PCL2

    Hi All
    Are there any function modules or macros to read time clusters B1 & B2?
    I want to read the time data in these clusters for reporting purposes.
    Regards,
    Rupesh Mhatre

    You can also call the FM HR_TIME_RESULTS_GET and get the exact cluster table you need from B2, like WPBP, ZE, SALDO, etc.
    Otherwise, if you want to use the older FM, declare GET_TBUFF and GET_BUFFER_DIR with the structures below.
    DATA: BEGIN OF TBUFF OCCURS 5000.                           "XPMK014785
            INCLUDE STRUCTURE PCL1.
            DATA: SGART(2),
          END OF TBUFF.
    DATA: BEGIN OF BUFFER_DIR OCCURS 2000,                      "XPMK014785
            SGART(2),
            CLIENT LIKE PCL1-CLIENT,
            RELID LIKE PCL1-RELID,
            SRTFD LIKE PCL1-SRTFD,
            NTABX LIKE SY-TABIX,   "pointer to the current record
            OTABX LIKE SY-TABIX,   "pointer to the old record (if present)
            NNUXT LIKE PCL1-SRTF2, "number of continuation records, current record
            ONUXT LIKE PCL1-SRTF2, "number of continuation records, old record
            OFSET(3) TYPE P,       "offset within an entry
          END OF BUFFER_DIR.
    INT_TIME_RESULTS should be of type PTM_TIME_RESULTS.
    Regards
    Ranganath

  • Difference between the design of clusters PCLx and others like RFBLG etc.

    There are a few nagging questions to which I was not able to find answers in the forum, hence I have to post a new question.
    I am a little confused about the difference between the different clusters.
    If I start with RFBLG, i.e. the cluster for BSEG, BSEC etc., I can see that the tables which are part of this cluster
    can be viewed through different methods like
    1) the where-used list for RFBLG
    2) the DD02L table, giving the required parameters there
    Now when I compare this with another so-called cluster, PCL1, I find that PCL1 is not recognized as a cluster,
    and also I am not able to see it in the DD02L table when I give PCL1 and the tabtype as cluster, which I was able to see
    for RFBLG; there are other tables similar to RFBLG.
    1) So what is the difference between the RFBLG type of cluster and the PCLx type of cluster?
    2) Are the PCLx and RFBLG types of clusters the same?
    3) Why does PCL1 show up as a transparent table, whereas RFBLG shows up in a different way in SE11?
    4) I know we access data from PCL1 using IMPORT and EXPORT statements; can we do the same for RFBLG?
    5) I found that each and every cluster table had different fields, which was kind of surprising for me as I had been thinking
    that all cluster tables need to follow a certain rule. So who decides the fields of a table cluster?
    6) PCL1 has the index button enabled, which again I think is not according to the cluster table rules. How?
    7) I understand that we can save data in the form of internal tables in the PCL1 cluster; can we do the same in RFBLG?
    8) Can I think along the lines that the PCL1 and RFBLG types of cluster are two totally different types of data dictionary objects,
    and that the usage and implementation of both of them is different, and that the design and the base of both such objects
    is different?
    I know this is a long list, but I am sure that answering these questions really requires someone who has worked really hard and invested a lot of time in understanding the dictionary system. I am awaiting a few answers, a few hints, and a healthy discussion until we get them.
    Thanks ...
    a

    Hello,
    1/
    BSEG is a typical Cluster Table.
    This means that the physical table BSEG does NOT exist in the database, physical data for BSEG is stored in the database (table) cluster RFBLG.
    In ABAP, however, you can perform selects on BSEG (with all fields from the SAP repository structure, see SE11 on BSEG); during execution the SAP database layer will translate these statements into physical selects on the RFBLG database table, so in ABAP this is transparent.
    More info :
    [http://help.sap.com/saphelp_nw04/helpdata/en/cf/21f083446011d189700000e8322d00/content.htm|http://help.sap.com/saphelp_nw04/helpdata/en/cf/21f083446011d189700000e8322d00/content.htm]
    2/
    PCL1, PCL2, ... are normal SAP transparent tables; however, in HR they are often called HR cluster tables.
    Transparent tables are SAP objects where there is also a database table with the same name that contains the physical data.
    However, the PCL tables are somewhat different from normal transparent tables (data is compressed, external programs cannot interpret the data, ...).
    This means that in ABAP you cannot use simple SQL statements to access data in PCL tables (because of the compressed format).
    Instead, statements like EXPORT TO DATABASE and IMPORT FROM DATABASE need to be used.
    More info :
    [http://fuller.mit.edu/hr/cluster_tables.html|http://fuller.mit.edu/hr/cluster_tables.html]
    Wim

  • Can we rename the clustered instance and change its IP address?

    Hi,
    We have a SQL Server 2008 R2 clustered production instance named 'ProdVir', configured on 2 nodes (active-passive) with
    Windows Server 2008 R2. We also have another clustered instance for disaster recovery named 'VirDr', configured again
    on another 2 nodes of Windows Server 2008 R2. Every morning there is a maintenance plan which backs up all the databases
    in production, and another maintenance plan on the disaster recovery server 'VirDr' which restores the backups into the
    VirDr instance. I would like to know whether, in the eventuality of a disaster in the clustered production instance 'ProdVir',
    we could rename the instance VirDr (meant for disaster recovery) to ProdVir and also change the IP addresses accordingly,
    so that the application programs do not have to change the data source details in their connection strings.
    Thanking you in advance,
    Binny Mathew

    Binny - yes, this is quite achievable. The high-level steps are (you also need to consider other operational checks):
    In the Cluster Administrator landing page, select your SQL Server Network Name, press F2 or right-click and select Rename, give it a new name, and then
    take the resource OFFLINE and then
    bring it ONLINE again; test your application.
    Changing the IP address:
    Add one more network resource with the new IP address.
    Then add this IP address to the SQL Server group and set the dependency accordingly.
    Take the SQL Server application OFFLINE and ONLINE again.
    You will end up with 2 IP addresses; you can keep both of them or remove the old one later.
    Good Luck! Please Mark This As Answer if it solved your issue. Please Vote This As Helpful if it helps to solve your issue

  • Automation failing in OSM clustered env and getting -unread block data

    One of our customers is getting the following exceptions while trying to place orders in a clustered environment. The same issue is also reported by two others and is discussed in the communities (https://communities.oracle.com/portal/server.pt?open=514&objID=187443&mode=2&threadid=367195)
    <04-Jun-2012 11:20:11,369 ICT AM> <ERROR> <message.ClusterMessageHandlerBean> <ExecuteThread: '37' for queue: 'oms.automation'> <Failed to process cluster request for order ID [100739]>
    java.lang.IllegalStateException: unread block data
         at java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2376)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1360)
         at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1946)
         at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1870)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
         at weblogic.rmi.extensions.server.CBVInputStream.readObject(CBVInputStream.java:64)
         at weblogic.rmi.internal.ServerRequest.copy(ServerRequest.java:261)
         at weblogic.rmi.internal.ServerRequest.sendReceive(ServerRequest.java:166)
         at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:222)
         at com.mslv.oms.security.base.OMSRequestBalancer_y7pdy3_EOImpl_1033_WLStub.routeRequestToRemoteJMSDestination(Unknown Source)
         at com.mslv.oms.automation.plugin.l.a(Unknown Source)
         at oracle.communications.ordermanagement.cluster.message.ClusterMessageHandlerBean.onMessage(Unknown Source)
         at weblogic.ejb.container.internal.MDListener.execute(MDListener.java:466)
         at weblogic.ejb.container.internal.MDListener.transactionalOnMessage(MDListener.java:371)
         at weblogic.ejb.container.internal.MDListener.onMessage(MDListener.java:327)
         at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:4659)
         at weblogic.jms.client.JMSSession.execute(JMSSession.java:4345)
         at weblogic.jms.client.JMSSession.executeMessage(JMSSession.java:3821)
         at weblogic.jms.client.JMSSession.access$000(JMSSession.java:115)
         at weblogic.jms.client.JMSSession$UseForRunnable.run(JMSSession.java:5170)
         at weblogic.work.ExecuteRequestAdapter.execute(ExecuteRequestAdapter.java:21)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:145)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:117)
    <Jun 4, 2012 11:20:11 AM ICT> <Error> <oms> <BEA-000000> <message.ClusterMessageHandlerBean: Failed to process cluster request for order ID [100739]
    java.lang.IllegalStateException: unread block data
         at java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2376)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1360)
         at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1946)
         at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1870)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
         Truncated. see log file for complete stacktrace
    >

    The logs clearly indicate that OSM is running in legacy mode. "OMSRequestBalancer_y7pdy3_EOImpl_1033_WLStub.routeRequestToRemoteJMSDestination" would not come into the picture in optimized mode.
    Please make sure of the settings below:
    1) Studio - optimized mode.
    2) Cartridge target version 7.0.3

  • How do you transfer different types of data through a data socket (data types: clusters, images and boolean states)

    I am attempting to transfer positional data (on a predetermined route) overlaid on a map, and indications of Boolean output states, to a secondary computer through DataSocket (LabVIEW 8.6, DataSocket 4.5).  Is it required to compress all these parts into an array which is bundled and transmitted to the secondary computer, which then would have to unbundle and separate the data out of the array?  Is this the only option, or are there other methods? Also, how would we go about these methods?

    Hi Maruti,
    It seems like the way you described would be the way to do it, unless you wanted to pass all the data separately.
    Check out this KB relating to passing clusters through datasocket: http://digital.ni.com/public.nsf/allkb/1085057DB6F930058625672400646805?OpenDocument
    Kind Regards,
    Owen.S
    Applications Engineer
    National Instruments

  • UCCX Historical Reports client on my pc with two clusters 7x and 9x

    I have two separate clusters: a 7x cluster running UCCX 7x, and we are building a new 9.1 cluster running UCCX 9x.
    Can I install the new 9x UCCX Historical Reporting Client and talk to both clusters?  I don't think both the old and new HRC clients can be installed on the same PC, or can they?

    Why would you use HRC with version 9.X?  Is CUIC not an option?

  • Clustering problems and load balancing question

              I am using Weblogic 6.1. My Windows NT environment consists of 10 web client-simulator
              machines, 2 App. Server machines and one database server machine. I have defined
              one cluster on each app. server. Each cluster is running 3 Weblogic instances, or
              so it should be when I fix my problems!
              My questions/problems are the following:
              1. Can I use a software dispatcher to perform workload balancing between the 2 weblogic
              clusters? That is, the client-simulator machines send the requests to the software
              dispatcher which performs workload balancing between the 2 Weblogic clusters. The
              clusters perform round-robin amongst all instances. Note that the documentation only
              talks about Hardware Balancing.
              2. I am having problems with my multicast IP addresses. For instance, on one App.
              Server machine, I am using the multicast IP address: 239.0.0.1 for MyCluster. When
              I start the Admin Server, I get a JDBC error: "... multicast socket error: Request
              Time Out". I have used the utils.MulticastTest utility which shows the packets not
              being received:
              I (S1) sent message num 1
              I (S1) sent message num 2
              I (S1) sent message num 3
              I (S1) sent message num 4
              What am I doing wrong?
              3. Re. the cluster configuration:
              NOTE: I have executed my workload using 2 independent App. Server machines with a
              software dispatcher - no clustering. Each App. Server used a jdbc connection pool
              of 84 database connections. The db connections happened to become my bottleneck.
              When I tried to increase the number of connections in the jdbc pool, throughput decreased
              dramatically. Thus, I decided to add a cluster of Weblogic instances to each one
              of my 8 x 900Mhz machines in order to scale up. Unfortunatly, adding clusters have
              not been that simple a task - probably because I am totally new to the Web Application
              Server world!
              Here is what I've got so far:
              I have obtained 3 static IP addresses for the 3 Weblogic instances that
              I wish to run within the cluster. All servers in the cluster use port number 80.
              There is a corresponding DNS entry for each IP address. My base assumption is that
              one of these instances will double up as the Administration Server... Is it true,
              or do I need to define a separate Admin server if I wish to run 3 Weblogic instances
              (each with a connection pool of 84 database connections for a total of 252 database
              connections)?
              Do I need to re-deploy my applications for the cluster? And if so, would this explain
              why I am having problem starting my Admin Server?
              I think this is it for now. Any help will be greatly appreciated!
              Thanks in advance,
              Guylaine.
              

              Guylaine Cantin wrote:
              > I am using Weblogic 6.1. My Windows NT environment consists of 10 web client-simulator
              > machines, 2 App. Server machines and one database server machine. I have defined
              > one cluster on each app. server. Each cluster is running 3 Weblogic instances, or
              > so it should be when I fix my problems!
              >
              > My questions/problems are the following:
              >
              > 1. Can I use a software dispatcher to perform workload balancing between the 2 weblogic
              > clusters? That is, the client-simulator machines send the requests to the software
              > dispatcher which performs workload balancing between the 2 Weblogic clusters. The
              > clusters perform round-robin amongst all instances. Note that the documentation only
              > talks about Hardware Balancing.
              >
              We also support software load balancers (e.g. Resonate).
              The software dispatcher should be intelligent enough to decode the
              cookie and route the request to the appropriate server. This is
              necessary to maintain sticky load balancing.
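    To illustrate what decoding the cookie involves, here is a minimal, hypothetical Java sketch (not WebLogic code): WebLogic appends routing information to the session cookie value, delimited by '!' (session id, then primary and secondary server identifiers), and the dispatcher has to keep sending a session back to its primary server. The cookie value below is made up for demonstration.

    public class StickyRouter {

        /** Returns the primary server identifier encoded in a WebLogic session
         *  cookie value, or null if no routing information is present. */
        public static String primaryServerId(String cookieValue) {
            if (cookieValue == null) {
                return null;
            }
            String[] parts = cookieValue.split("!");
            return parts.length >= 2 ? parts[1] : null;
        }

        public static void main(String[] args) {
            // Hypothetical cookie value, for demonstration only.
            String cookie = "AxyQ9weRtUv!7001876766!2076994793";
            System.out.println("Route to primary server: " + primaryServerId(cookie));
        }
    }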
              > 2. I am having problems with my multicast IP addresses. For instance, on one App.
              > Server machine, I am using the multicast IP address: 239.0.0.1 for MyCluster. When
              > I start the Admin Server, I get a JDBC error: "... multicast socket error: Request
              > Time Out". I have used the utils.MulticastTest utility which shows the packets not
              > being received:
              >
              > I (S1) sent message num 1
              > I (S1) sent message num 2
              > I (S1) sent message num 3
              > I (S1) sent message num 4
              > ...
              >
              > What am I doing wrong?
              >
              You should run the above utility from multiple windows and see whether each
              of them receives the other's messages, i.e.
              java utils.MulticastTest -N S1 -A 239.0.0.1
              java utils.MulticastTest -N S2 -A 239.0.0.1
              > 3. Re. the cluster configuration:
              >
              > NOTE: I have executed my workload using 2 independent App. Server machines with a
              > software dispatcher - no clustering. Each App. Server used a jdbc connection pool
              > of 84 database connections. The db connections happened to become my bottleneck.
              > When I tried to increase the number of connections in the jdbc pool, throughput decreased
              > dramatically. Thus, I decided to add a cluster of Weblogic instances to each one
              > of my 8 x 900Mhz machines in order to scale up. Unfortunatly, adding clusters have
              > not been that simple a task - probably because I am totally new to the Web Application
              > Server world!
              >
              You have to stress test your application several times and set the
              maxCapacity of the connection pool accordingly.
              > Here is what I've got so far:
              >
              > I have obtained 3 static IP addresses for the 3 instances of Weblogic instances that
              > I wish to run within the cluster. All servers in the cluster use port number 80.
              > There is a corresponding DNS entry for each IP address. My base assumption is that
              > one of these instances will double up as the Administration Server... Is it true,
              > or do I need to define a separate Admin server if I wish to run 3 Weblogic instances
              > (each with a connection pool of 84 database connections for a total of 252 database
              > connections)?
              BEA recommends using the Admin server for administrative tasks only,
              like configuring new deployments, JDBC connection pools, adding users, etc.
              It's not a good idea to have the Admin server be part of the cluster.
              >
              > Do I need to re-deploy my applications for the cluster? And if so, would this explain
              > why I am having problem starting my Admin Server?
              >
              You have to target all your apps to the Cluster.
              > I think this is it for now. Any help will be greatly appreciated!
              >
              > Thanks in advance,
              >
              > Guylaine.
              >
              

  • Common reason for Mirage Server Failure in Clustered Environment and how clients will be switched to other server in a cluster

    Hi,
    Can anybody share with me information regarding common reasons for Mirage server failure in a clustered environment,
    and how clients will be switched to another server in the cluster to continue their operations after a server fails?
    Regards,
    Bathesha C

    Hello,
    if you have more than one Mirage server configured with load balancing (LB or MSFT NLB), the client will disconnect from the faulting server and then reconnect to another server to continue the action as before.
    All Mirage servers are stateless and share the same SIS (single instance store), so any server can update or create the CVD file set for a client.
    Hope that helps.

  • JMS/clustering design and configuration question

    Hi:
              Any help or feedback is much appreciated, because this topic (no pun intended)
              seems to be complicated.
              I have an application that uses stateless session beans for mainline
              business functionality (heavy database writing) that is, say, then
              audited. The auditing is taken care of asynchronously by the SS bean's
              business method (after completing whatever it is doing) using a JMS
              connection factory to write to a queue which is consumed by an MDB. So,
              control is returned to the user ostensibly before the auditing is done,
              for performance reasons. No transactional semantics are expected between
              the mainline business logic and the audit of same. So, our main use of
              JMS/MDB is for asynchronous operation. This design seems to be fine on a
              single server as long as a persistent store is configured.
              Now the hard part. We need to cluster - again for load-balancing and
              performance. As I've read through this newsgroup and experimented some,
              I've quickly realized that it's not as simple as just configuring a
              cluster of EJBs.
              Since the queues are basically used for asynch processing, would the
              following work?
              1) Set up a JMS server config for each WL server in the cluster targeted
              only for each WL server
              2) Set up a different queue for each JMS server with prefixed JNDI name
              (myserver1_someJNDIname)
              3) Configure/code the producer SS EJBs so that they ask the connection
              factory for a connection with a prefixed JNDI name (e.g
              myserver1_someJNDIname) based on the server they are running on (known
              perhaps from a property file local to each server)
              4) Target MDBs to be deployed on all servers in the cluster just like
              regular EJBs
              5) Finally, deploy the MDBs unjarred on each WL server and edit the
              weblogic-ejb-jar.xml on each server to set the -destination name-
              prefixed by the server on which it is deployed (e.g.
              myserver1_someJNDIname)
              Will the above basically accomplish what I'm after, which is a cluster
              of machines each equally capable of locally offloading asynchronous
              processing? I understand that for the most part WL server will use local
              EJBs, etc. if they are available. I'm not particularly interested in
              failover at either the EJB or JMS level, but rather at the HttpSession level, so
              that if a WL server goes down another could pick up, perhaps by the
              user re-requesting whatever operation failed.
              If this is not possible because of the way clustering works (homogeneous),
              then is there some other way to solve this problem? I basically do not
              want a cluster where there is a single point of failure (other than DB
              server) for any given piece of functionality.
              Thanks for any help.
              Alex
              

    First and easiest,
              just create a set of JMS tables for each server in your DB. You can specify
              a JMS table name prefix in your weblogic.properties for each server using
              weblogic.jms.tableNamePrefix=W5
              as long as the prefix is only two chars. This worked great for us until we
              decided to drop JMS because of its memory leaks. We were having to restart
              our servers every 8 hours.
              Second, get rid of JMS. Dirty little secret: you don't need JMS to do
              asynchronous work! You can use threads inside of J2EE and not break J2EE.
              Can't tell you how for proprietary reasons, but you can.
              Mica Cooper
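    As an aside for later readers, here is a minimal sketch of step 3 from the question above: the producer session bean builds the queue's JNDI name from a per-server prefix. The system property name and connection factory JNDI name below are hypothetical; the thread only specifies the myserver1_someJNDIname naming pattern and suggests reading the prefix from a property file local to each server.

    import javax.jms.Queue;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import javax.jms.QueueSender;
    import javax.jms.QueueSession;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class AuditPublisher {

        // Called by the stateless session bean after its mainline work is done.
        public void publishAudit(String auditText) throws Exception {
            // Per-server prefix, read here from a hypothetical system property.
            String prefix = System.getProperty("audit.queue.prefix", "myserver1");

            InitialContext ctx = new InitialContext();
            QueueConnectionFactory factory = (QueueConnectionFactory)
                    ctx.lookup("javax.jms.QueueConnectionFactory");    // hypothetical factory JNDI name
            Queue queue = (Queue) ctx.lookup(prefix + "_someJNDIname"); // e.g. myserver1_someJNDIname

            QueueConnection connection = factory.createQueueConnection();
            try {
                QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueSender sender = session.createSender(queue);
                TextMessage message = session.createTextMessage(auditText);
                sender.send(message);
            } finally {
                connection.close();
            }
        }
    }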
              

  • Standard Java RMI and WLS5.1?

     

    I would surmise that the problem you are seeing is the result of the fact
    that we do not accept standard Java RMI calls using the functionality in the
    JDK. You must use the WebLogic implementation of RMI. It is exactly the
    same in terms of APIs, but we have gone through and optimized the underlying
    protocol.
    Please see the documentation for more details.
    We offer both RMI over IIOP and RMI over T3.
    Thanks,
    Michael
    Michael Girdley
    Product Manager, WebLogic Server & Express
    BEA Systems Inc
    Mario Felarca <[email protected]> wrote in message
    news:[email protected]..
    Hello,
    I was trying to get a simple callback demo working using standard java
    RMI and the WLS5.1. Unfortunately, although things seemed to compile
    and launch smoothly, when the client started up and tried to talk to the
    WLS I would get the following error:
    weblogic.rmi.server.ExportException: A description for CallbackImpl was
    found but it could not be read due to: [Failed to find a stub for [class
    CallbackImpl] implements at least one interface [interface Callback]
    which extends Remote.]
    weblogic.rmi.StubNotFoundException: Failed to find a stub for [class
    CallbackImpl] implements at least one interface [interface Callback]
    which extends Remote.
    I tried determining if this was a classpath problem, but all my efforts
    kept producing this result.
    On the flip side, if I retool my objects slightly in order to make them
    use weblogic.rmi.*, then everything works perfectly.
    Is there a tradeoff to using weblogic.rmi over java.rmi?
    Also, does anyone have any ideas as to what might be causing my error
    when using standard rmi?
    Thanks so much in advance,
    Mario-
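    For readers hitting the same error: below is a minimal, hypothetical sketch of the retooling Mario describes, assuming (as the answer above says) that weblogic.rmi mirrors the java.rmi API. The method name is made up, since the original post does not show the interface body, and the stubs would be generated with WebLogic's RMI compiler rather than the JDK's rmic.

    // Callback.java - remote interface declared against WebLogic RMI instead of java.rmi.
    import weblogic.rmi.Remote;
    import weblogic.rmi.RemoteException;

    public interface Callback extends Remote {
        // Hypothetical callback method, for illustration only.
        void notifyClient(String message) throws RemoteException;
    }

    // CallbackImpl.java - with WebLogic RMI the implementation class typically does not
    // need to extend UnicastRemoteObject as it would with java.rmi.
    public class CallbackImpl implements Callback {
        public void notifyClient(String message) throws RemoteException {
            System.out.println("Callback received: " + message);
        }
    }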
