Is replication asynchronous?

I have a client with a near cache, configured as shown under "Client Configuration" below, and two servers configured as shown under "Server Configuration" below (note: I added the backup-count element). When an object is put into the cache on the client side, it is reasonable to expect the object to be sent synchronously to one of the servers.
The question is: is the replication of that object to the second server done synchronously or asynchronously? Is there a way to specify that explicitly?
Thanks
Adsen
Client Configuration:
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>near-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <near-scheme>
      <scheme-name>near-scheme</scheme-name>
      <front-scheme>
        <local-scheme>
          <scheme-ref>local-cache</scheme-ref>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-ref>distributed-cache</scheme-ref>
        </distributed-scheme>
      </back-scheme>
    </near-scheme>
    <!--
    Local caching scheme (front tier of the near cache).
    -->
    <local-scheme>
      <scheme-name>local-cache</scheme-name>
      <service-name>LocalCache</service-name>
      <eviction-policy>LRU</eviction-policy>
      <high-units>0</high-units>
      <low-units>0</low-units>
      <unit-calculator>FIXED</unit-calculator>
      <expiry-delay>0</expiry-delay>
      <flush-delay>0</flush-delay>
      <pre-load>false</pre-load>
    </local-scheme>
    <!--
    Default distributed caching scheme (back tier of the near cache).
    -->
    <distributed-scheme>
      <scheme-name>distributed-cache</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>unlimited-backing-map</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <local-scheme>
      <scheme-name>unlimited-backing-map</scheme-name>
    </local-scheme>
  </caching-schemes>
</cache-config>
Server Configuration:
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>distributed-cache</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <!--
    Distributed caching scheme.
    -->
    <distributed-scheme>
      <scheme-name>distributed-cache</scheme-name>
      <service-name>DistributedCache</service-name>
      <!-- To use POF serialization for this partitioned service,
           uncomment the following section -->
      <!--
      <serializer>
        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
      </serializer>
      -->
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>unlimited-backing-map</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
      <backup-count>1</backup-count>
    </distributed-scheme>
    <!--
    Backing map scheme definition used by all the caches that do
    not require any eviction policies.
    -->
    <local-scheme>
      <scheme-name>unlimited-backing-map</scheme-name>
    </local-scheme>
  </caching-schemes>
</cache-config>

user10307225 wrote:
Hi Aleks,
Thanks for your reply.
Is there any way to configure Coherence to do the replication asynchronously, to avoid the replication delays?

Hi Adsen,
that would be a very bad idea, as it means your code could return before your data is safe. If the primary node died after your code returned but before the data was backed up, your data would be lost.
The main strength of Coherence is that it safeguards your data, and what you propose would make that guarantee theoretically impossible.
If you have concerns about performance, Coherence has a multitude of options for tuning the system; but until those performance issues actually manifest, don't worry about them. And of course, stress test your system frequently during development, so that if the problems are real they do manifest :-)
Also, the usual way of improving throughput in high-performance systems is batching.
For improving latency, you could:
- minimize the traffic you need, e.g. send an entry processor that performs the modification in place instead of sending your entire cached value over the network (see the sketch below)
- keep your synchronous code minimal while still retaining data safety, e.g. execute the logic asynchronously (with data safety ensuring that it will be executed): send a command, which is enqueued synchronously but executed asynchronously (the command pattern can be found in the Coherence Incubator: http://coherence.oracle.com/display/INCUBATOR/Command+Pattern )
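For illustration, here is a minimal sketch of the entry-processor idea. The Order value class, the cache name "orders" and the key "order-42" are hypothetical placeholders; only the Coherence classes used (AbstractProcessor, InvocableMap, NamedCache, CacheFactory) are the real API:

import java.io.Serializable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

// Hypothetical cached value: several fields, of which only one changes.
class Order implements Serializable {
    private String status = "NEW";
    public void setStatus(String s) { status = s; }
    public String getStatus() { return status; }
}

// The processor is serialized to the storage member that owns the entry and
// runs there, so only this small object crosses the network, not the Order.
public class UpdateStatusProcessor extends AbstractProcessor {
    private final String newStatus;

    public UpdateStatusProcessor(String newStatus) {
        this.newStatus = newStatus;
    }

    public Object process(InvocableMap.Entry entry) {
        Order order = (Order) entry.getValue();
        order.setStatus(newStatus);
        entry.setValue(order); // the backup copy is still made before the call returns
        return null;
    }

    // Hypothetical client-side usage:
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("orders");
        cache.invoke("order-42", new UpdateStatusProcessor("SHIPPED"));
    }
}

Data safety is retained (the service still backs up the change synchronously); you simply move less data per update.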
Best regards,
Robert

Similar Messages

  • Asynchronous replication

              Hi,
              I couldn't find a way to make session replication asynchronous in weblogic.xml.
              How can I do that?
              thanks,
              Vishu
              

              No...
              S
              "Vishu" <[email protected]> wrote:
              >
              >Hi,
              >I couldn't find a way to make session replication asynchronous in weblogic.xml.
              >How can I do that?
              >
              >thanks,
              >Vishu
              

  • Oracle OCP

    Here’s the scoop. I am pursuing an Oracle 9i OCP certification, though I have not yet taken the tests for OCA. I am planning on taking the SQL (007) test this weekend on the Internet. I have the Sybex book to study with for this course. Well, I skimmed through the book in a day, really only reading the differences from 8i to 9i. I took the practice test that came with the book last night; it gave me well over an hour to take the test, and after I took it I still had over an hour left to review. I didn’t review the test and went straight to grading: I only got a little over 60% correct. After reviewing the incorrect answers, I found I already knew almost all of the correct answers, but since I was under a time constraint, I zoomed through the test to ensure I finished in time. Of the questions I missed, most were tricky, which led me to select the incorrect answer (now I know better). So I have the following questions:
    1)     Is the real test just as tricky as the practice test?
    2)     Are the questions on the practice test very similar to the real test?
    Secondly, I saw something on the Oracle site stating that to achieve the OCP certification, the applicant must have taken at least one Oracle training course. The question is: I have had several Oracle training courses in the past; do these count? If not, no problem, because I am attending the Replication course at the end of next month. But for this course to count, do I have to have passed all the OCP exams prior to taking the course?

    355099 OK, none of the courses I have taken were for 9i, only for 8i and 7. So I see I need to take the approved 9i courses. No problem, because I am planning to take the replication course at the end of next month; it is listed as one of the approved 9i DBA courses.
    By tricky, yes, of course only one answer is correct, though at first glance they make it look like several answers are correct. After looking closely, you may notice the missing comma. Yes, we should definitely be able to identify the syntax errors, but from experience Oracle does an okay job identifying those kinds of errors at compile time (but not in dynamic SQL), and they are easily fixed.
    Similar: I meant similar, as in comparable. Are they as tricky as the practice test? I don’t mean they are difficult.
    404045 I don’t think the course needs to be taken after you have completed the OCA certification; the reason being that the Oracle courses that count towards the DBA certification include many different courses, among which the SQL and Fundamentals I courses are listed.
    All in all, I have about 15 years of experience with Oracle DBA tasks. I use Multimaster Replication (asynchronous propagation) extensively, and am very familiar with SQL and Pro*C. Thanks for your input, guys/gals.

  • Problem with asynchronous replication

    Hi All,
    In asynchronous replication, I am getting a huge number of records in the DEFCALL view.
    It is very difficult to take care of each row manually.
    Is there any way to create the INSERT/UPDATE/DELETE script from DEFCALL, or with the help of other views, by which we can keep all the databases (in asynchronous replication) in sync?
    Thanks in advance.

    If the PUSH job has been executed and yet the calls have failed, they will have moved to DEFERROR.
    If they are in DEFERROR, you can execute DBMS_DEFER_SYS.EXECUTE_ERROR.
    Here's a script that I obtained a long time ago. It worked in Oracle 8 and 8i; it is from Charles Dye's book "Oracle Distributed Systems":
    http://oreilly.com/catalog/9781565924321
    -- Filename:  deferror.sql
    -- Purpose:   Reports on deferred transactions with errors, and generates
    --            the calls to dbms_defer_sys.execute_error needed to clear them.
    -- Author:    Chas. Dye ([email protected])
    -- Date:      28-Jun-1996
    column ORIGIN_TRAN_DB   heading "Origin|Tran|DB"    format a15
    column DEFERRED_TRAN_ID heading "Deferred|Tran|ID"  format a15
    column DESTINATION      heading "Destination"       format a15
    column ERROR_TIME       heading "Error Time"        format a22
    column ERROR_NUMBER     heading "Error#"            format 999999
    column FIX              heading "Run This to Clear" format a80
    SELECT deferred_tran_id,
           deferred_tran_db,
           destination,
           error_number
    FROM   deferror
    /
    SELECT 'EXECUTE dbms_defer_sys.execute_error(' || chr(39) ||
           deferred_tran_id || chr(39) || ', ' || chr(39) ||
           deferred_tran_db || chr(39) || ',  - ' || chr(10) || chr(39) ||
           destination || chr(39) || ' )' fix
    FROM   deferror
    /
    Hemant K Chitale
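    As for generating work from DEFCALL itself: rather than reconstructing INSERT/UPDATE/DELETE statements row by row, the usual approach is to push the whole deferred queue to each destination. A minimal sketch, assuming the standard DEFCALL dictionary view and a hypothetical destination link name OTHER_DB.WORLD:

    REM What is pending in the deferred queue, in execution order
    SELECT deferred_tran_id, callno, schemaname, packagename, procname, argcount
    FROM   defcall
    ORDER  BY deferred_tran_id, callno
    /
    REM Push everything to one destination in FIFO order
    DECLARE
      rc  BINARY_INTEGER;
    BEGIN
      rc := DBMS_DEFER_SYS.PUSH(destination => 'OTHER_DB.WORLD');
    END;
    /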

  • Load Balancing and Asynchronous Multimaster Replication

    We are planning a new project with 2 server lines for load-balancing reasons. We also plan to use sessions that are valid for 30 minutes. Each of the 2 server lines has its own Oracle database (identical tables) for the transactions. Within one session, the load balancer always sends the user to the same server line.
    Independent transactions are executed on both server lines.
    Is asynchronous multimaster replication the adequate solution for keeping the two databases synchronized? What has to be considered (network, memory, CPU), or what else can (or has to) be used?
    We want to avoid a RAC solution because of the high costs.

    The two options that come to mind here are asynchronous multimaster replication and Oracle Streams.
    - What version of the Oracle database will you be using?
    - What sort of transaction volume do you anticipate? How much redo generation?
    - What sort of connection exists between the two machines?
    - Is there an upper limit on the time that can be taken to replicate a transaction from one machine to the other?
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Partial transaction for multi-master asynchronous replication

    I have a fundamental question about multi-master asynchronous replication.
    Let's consider a situation where we have 2 servers participating in multi-master asynchronous replication.
    3 tables are part of an Oracle transaction. Now suppose I mark one of these tables for replication and the other 2 are not part of any replication group.
    Say, as part of the Oracle transaction, one record is inserted into each of the 3 tables.
    Now if I start replicating, will the change made to the table marked for replication be replicated to the other server, given that the changes made to the other 2 tables are not propagated by the deferred queue?
    Please reply.

    Mr. Bradd Piontek is very much correct. If the tables involved are interdependent, you have to place them in a group, and all of them should exist at all sites in a multi-master replication.
    If the data is updated (pushed) from a snapshot to a table at a master site, it may get updated if it is not a child table in a relationship.
    But in a multi-master replication environment even this is not possible.
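    To make the scenario above replicate consistently, here is a sketch of placing all three tables in one master group. The group, schema and table names are hypothetical, the group is assumed to have been created already with DBMS_REPCAT.CREATE_MASTER_REPGROUP, and this is run as the replication administrator:

    BEGIN
      DBMS_REPCAT.CREATE_MASTER_REPOBJECT(gname => 'APP_GROUP', type => 'TABLE',
                                          sname => 'APP', oname => 'TABLE_A');
      DBMS_REPCAT.CREATE_MASTER_REPOBJECT(gname => 'APP_GROUP', type => 'TABLE',
                                          sname => 'APP', oname => 'TABLE_B');
      DBMS_REPCAT.CREATE_MASTER_REPOBJECT(gname => 'APP_GROUP', type => 'TABLE',
                                          sname => 'APP', oname => 'TABLE_C');
    END;
    /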

  • DFS-R 2012 file access in asynchronous replication

    Here is my scenario: asynchronous replication is set with a long period, like 2 days, in a 2-node DFS-R setup. For load balancing, client A accessing files on DFS will be redirected to node A, while client B will be redirected to node B. There is a text file called "test.txt", dated 2014-1-1, in the DFS shared folder. After client A modifies "test.txt" (now dated 2014-2-2) on node A, and the replication has not run yet, client B wants to read test.txt. Which version of the file will client B read?

    Hi,
    When a client accesses a namespace root or folder with targets, the client attempts to access the first target in the referral ordering list. If the target is not available, the client attempts to access the next target.
    Even though the latest test.txt is located on node A, client B will still access the test.txt located on node B.
    The latest test.txt will be replicated to node B when DFS replication runs again.
    Best Regards,
    Mandy 
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • Asynchronous multimaster replication and DB shutdown

    Hi
    I have a question:
    What happens if, in a replication environment (asynchronous multimaster), the master definition site or another master site shuts down?
    Do the transactions remain in the deferred queue?

    Thank you. So I should write in the documentation that SHUTDOWN ABORT is forbidden.

  • Slow replication to asynchronous replica

    Are there any guidelines as to what a 'normal' throughput for replication is when using AlwaysOn?
    I have a 100-megabit connection that is able to replicate over 1000 databases using log shipping; however, the same connection frequently builds up a backlog for a system using AlwaysOn.
    The connection has a ping time of approximately 200 ms, but can always achieve 100 megabits when pushed using UDP.
    Are there any settings to increase the number of TCP connections, or can communications between replicas use UDP instead?

    Hi,
    Your question falls into the paid support category which requires a more in-depth level of support. Please visit the below link to see the various paid support options that are
    available to better meet your needs.
    http://support.microsoft.com/default.aspx?id=fh;en-us;offerprophone 
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Jun Zh - MSFT
    Microsoft Online Community Support

  • Error while creating MV replication group object

    Hi,
    I am getting an error while creating a replication group object. I tried to create it using both OEM and SQL*Plus.
    OEM error (while creating the MV replication group object):
    There is a table or view named SCOTT.EMP.
    It must be dropped before a materialized view can be created.
    In SQL*Plus:
    SQL> CONNECT MVIEWADMIN/MVIEWADMIN@SWEET
    Connected.
    SQL>
    SQL> BEGIN
    2 DBMS_REPCAT.CREATE_MVIEW_REPOBJECT (
    3 gname => 'SCOTT',
    4 sname => 'KARTHIK',
    5 oname => 'emp_mv',
    6 type => 'SNAPSHOT',
    7 min_communication => TRUE);
    8 END;
    9 /
    BEGIN
    ERROR at line 1:
    ORA-23306: schema KARTHIK does not exist
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 2840
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 773
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 5570
    ORA-06512: at "SYS.DBMS_REPCAT_SNA", line 82
    ORA-06512: at "SYS.DBMS_REPCAT", line 1332
    ORA-06512: at line 2
    Please note: I have already created the KARTHIK schema.

    Arthik,
    I think I know what may have happened.
    As I can see you are trying to create support for an updateable materialized view.
    You have to make sure the name of the schema that owns the materialized view is the same as the schema owner of the master table (at master site).
    From the code you have shown, I bet the owner of table EMP is SCOTT.
    From the other hand, you want to create materialized view EMP_MV under schema KARTHIK that refers to table SCOTT.EMP at master site.
    According to the documentation, the schema name used in DBMS_REPCAT.CREATE_MVIEW_REPOBJECT must be same as the schema that owns the master table.
    Please check the documentation at the link below
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14227/rarrcatpac.htm#i109228
    I tried to reproduce your example in my environment, and I got exactly the same error, which confirms my assumption: the error occurs because you tried to create the materialized view in a schema with a different name than the one where the master table exists.
    I'll skip some of the steps that I used to create the replication environment.
    I have two databases, DB1.world and DB2.world
    On DB2.world I will generate replication support for table EMP which belongs to user SCOTT
    SQL> conn scott/*****@DB2.world
    Connected.
    SQL>create materialized view log on EMP with primary key;
    Materialized view log created.
    SQL>
    SQL>conn repadmin/*****@DB2.world
    Connected.
    SQL>BEGIN
      2       DBMS_REPCAT.CREATE_MASTER_REPGROUP(
      3         gname => 'GROUPA',
      4         qualifier => '',
      5         group_comment => '');
      6    END;
      7  /
    PL/SQL procedure successfully completed.
    SQL>BEGIN
      2       DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
      3         gname => 'GROUPA',
      4         type => 'TABLE',
      5         oname => 'EMP',
      6         sname => 'SCOTT',
      7         copy_rows => TRUE,
      8         use_existing_object => TRUE);
      9    END;
    10  /
    PL/SQL procedure successfully completed.
    SQL> BEGIN
      2       DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT(
      3         sname => 'SCOTT',
      4         oname => 'EMP',
      5         type => 'TABLE',
      6         min_communication => TRUE);
      7    END;
      8  /
    PL/SQL procedure successfully completed.
    SQL>execute DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'GROUPA');
    PL/SQL procedure successfully completed.
    SQL> select status from dba_repgroup;
    STATUS                                                                         
    NORMAL

    Now let's create the updateable materialized view at DB1. Before that, I want to let you know that I created one sample user in DB1 named MYUSER. MVIEWADMIN is the materialized view administrator.
    SQL>conn mviewadmin/****@DB1.world
    Connected.
    SQL>   BEGIN
      2       DBMS_REFRESH.MAKE(
      3         name => 'MVIEWADMIN.MV_REFRESH_GROUPA',
      4         list => '',
      5         next_date => SYSDATE,
      6         interval => '/*1:Hr*/ sysdate + 1/24',
      7         push_deferred_rpc => TRUE,
      8         refresh_after_errors => TRUE,
      9         parallelism => 1);
    10    END;
    11  /
    PL/SQL procedure successfully completed.
    SQL>   BEGIN
      2       DBMS_REPCAT.CREATE_SNAPSHOT_REPGROUP(
      3         gname => 'GROUPA',
      4         master => 'DB2.world',
      5         propagation_mode => 'ASYNCHRONOUS');
      6    END;
      7  /
    PL/SQL procedure successfully completed.
    SQL>conn myuser/*****@DB1.world
    Connected.
    SQL>CREATE MATERIALIZED VIEW MYUSER.EMP_MV
      2    REFRESH FAST
      3    FOR UPDATE
      4    AS SELECT EMPNO, ENAME, JOB, MGR, SAL, COMM, DEPTNO, HIREDATE
      5       FROM   EMP@DB2.world;
    Materialized view created.
    SQL>conn mviewadmin/******@DB1.world
    Connected.
    SQL> BEGIN
      2       DBMS_REFRESH.ADD(
      3         name => 'MVIEWADMIN.MV_REFRESH_GROUPA',
      4         list => 'MYUSER.EMP_MV',
      5         lax => TRUE);
      6    END;
      7  /
    PL/SQL procedure successfully completed.

    And now let's run CREATE_MVIEW_REPOBJECT:
    SQL>   BEGIN
      2       DBMS_REPCAT.CREATE_MVIEW_REPOBJECT(
      3         gname => 'GROUPA',
      4         sname => 'MYUSER',
      5         oname => 'EMP_MV',
      6         type => 'SNAPSHOT',
      7         min_communication => TRUE);
      8    END;
      9  /
      BEGIN
    ERROR at line 1:
    ORA-23306: schema MYUSER does not exist
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 2840
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 773
    ORA-06512: at "SYS.DBMS_REPCAT_SNA_UTL", line 5570
    ORA-06512: at "SYS.DBMS_REPCAT_SNA", line 82
    ORA-06512: at "SYS.DBMS_REPCAT", line 1332
    ORA-06512: at line 3

    I reproduced exactly the same error message.
    So the problem is clearly in the schema name that owns the materialized view.
    Now let's see what happens if I create the MV under schema SCOTT, which has the same name as the schema on DB2.world where the master table exists.
    SQL>conn scott/****@DB1.world
    Connected.
    SQL>CREATE MATERIALIZED VIEW SCOTT.EMP_MV
      2    REFRESH FAST
      3    FOR UPDATE
      4    AS SELECT EMPNO, ENAME, JOB, MGR, SAL, COMM, DEPTNO, HIREDATE
      5       FROM   EMP@DB2.world;
    Materialized view created.
    SQL>conn mviewadmin/******@DB1.world
    Connected.
    SQL> BEGIN
      2       DBMS_REFRESH.ADD(
      3         name => 'MVIEWADMIN.MV_REFRESH_GROUPA',
      4         list => 'SCOTT.EMP_MV',
      5         lax => TRUE);
      6    END;
      7  /
    PL/SQL procedure successfully completed.

    And now let's run CREATE_MVIEW_REPOBJECT:
    SQL>   BEGIN
      2       DBMS_REPCAT.CREATE_MVIEW_REPOBJECT(
      3         gname => 'GROUPA',
      4         sname => 'SCOTT',
      5         oname => 'EMP_MV',
      6         type => 'SNAPSHOT',
      7         min_communication => TRUE);
      8    END;
      9  /
    PL/SQL procedure successfully completed.

    As you can see, everything works fine when the schema owner of the MV at DB1.world has the same name as the schema owner of the master table at DB2.world.
    -- Mihajlo

  • Question on replication in Oracle 10G Release 2

    Good day,
    I have a few questions on setting up replication that fits my scenario described below. Thank you in advance for reading and answering my post.
    Scenario
    I need to replicate 100-200 tables from the first OLTP server to the second DSS server that is read-only. The servers are physically located in different countries. Both servers use Oracle 10G Release 2. Required frequency of refreshes is 1-3 hours.
    Questions
    1. Is it optimal to use materialized views with fast/force refreshes for implementation of this scenario? If no, what are the better options?
    2. How do network interruptions and latency affect stability of work of replication with materialized views?
    3. How big is additional performance overhead at OLTP (source) server due to setting up replication with materialized views?

    1) I guess it depends on how you define "optimal". It's certainly a reasonable option. You might also look at Streams or even logical standby databases. There are various trade-offs involved, so it really depends on your environment.
    2) What does "stability of work of replication" mean, exactly? Obviously, if the network fails, the replication job(s) will generate errors. Depending on how you set things up, the replication process will be retried after increasing intervals until it succeeds.
    3) Maintaining materialized view logs on the OLTP system could certainly impact performance: the logs have to be maintained synchronously with the OLTP transactions. That may or may not noticeably impact OLTP transaction performance; it's probably roughly equivalent to putting a trigger on each of the 100-200 tables. Something like Streams is designed to put less load on the source system, because changes are captured asynchronously.
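    For concreteness, a minimal sketch of the materialized view approach (the table name orders and the database link oltp_link are hypothetical):

    -- on the OLTP source: a log, so fast refresh ships only changed rows
    CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;

    -- on the DSS server: a fast-refreshed read-only copy, refreshed every 2 hours
    CREATE MATERIALIZED VIEW orders_mv
      REFRESH FAST
      START WITH SYSDATE NEXT SYSDATE + 2/24
      AS SELECT * FROM orders@oltp_link;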
    Justin

  • Query regarding replication process in oracle 8i

    While replicating the master database I needed some data. For that I stopped the replication process, but during that time the application using the master database was not able to modify the database content, because stopping the replication process put the master definition site in read-only mode.
    So my query is: does Oracle 8i put the master definition site in read-only mode if someone stops replication? If yes, is there any way to avoid this?
    Thanks

    Hi
    First of all, why are you using such an old version (Oracle 8i)?
    With synchronous Oracle database replication (multi-master replication), before saving a transaction at the source site, Oracle communicates with all the other database servers over database links; if it finds a link breakage at any site, it will not save the transaction at the source site, and the transaction will be rolled back. (This means that in synchronous replication your database links must be up at all times. This method is used where there are ample network resources, or where replication runs over a LAN.)
    In replication, 90% of people use asynchronous replication.
    In asynchronous replication, if a database link goes down, transactions become pending in the queue; when the link is restored, the transactions are pushed one by one, FIFO (First In, First Out), to their destinations at all the other master or master definition sites.
    Second:
    When we stop replication, there are two methods to do it.
    Method A is to quiesce the replication group at the master definition site. When you quiesce a replication group, all tables in that group become read-only; you cannot perform DML statements on those tables at any site. This is done only by the DBA, to perform administration activities such as adding a new table to a group, regenerating replication support for existing tables, or adding a new replication site. (A sketch follows below.)
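    A minimal sketch of Method A (the group name 'GROUPA' is hypothetical; run as the replication administrator at the master definition site):

    EXECUTE DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY(gname => 'GROUPA');
    REM ... perform the administration work, then resume ...
    EXECUTE DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'GROUPA');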
    Method B is to stop replication entirely, so that this server will no longer take part in replication. For this we use a stored procedure, like this:
    connect repadmin/repadmin@hr
    SQL> Execute DBMS_REPUTIL.REPLICATION_OFF;
    PL/SQL procedure successfully completed.
    If you want to resume replication on that server, connect as the replication administrator and apply the following command:
    SQL> Execute DBMS_REPUTIL.REPLICATION_ON;
    PL/SQL procedure successfully completed.
    For further problems you can contact me any time.
    Best regards
    Abdul Hameed Malik
    [email protected]
    Islamabad, Pakistan

  • Queries related to Replication concepts

    Hi,
    I have the following queries w.r.t. replication:
    1) What is a subscriber database, and what is its purpose?
    2) What's the difference between a standby database and a subscriber database?
    3) If I have a standby database, then why do we need a subscriber database?
    4) Can I have more than one standby database?
    Regards,
    Harmeet Kaur

    The A/S pair concept goes as follows:
    - Two 'master' databases as a tightly coupled pair; at any moment one has the 'active' role and the other has the 'standby' role.
    - There can only ever be two master databases within the overall configuration
    - Updates are only allowed at the active, the standby is read only
    - Cache tables (if used) are present in both masters as actual cache groups
    - Cache autorefresh drives into the active master and is replicated to the standby, updates to AWT cache groups happen at the active and are replicated to the standby; the standby propagates them to Oracle
    - Role switch within the pair is easy and can be as a result of a failover or a managed switchover
    - The two masters must be on the same LAN or at least the same network with LAN characteristics and their system clocks must be aligned to within 250 ms
    - Replication between the masters can be asynchronous, return receipt (on request) or return twosafe (on request)
    - Subscribers are optional. You can have 0 to 128 of them.
    - Subscribers are always read only
    - Subscribers can reside on the same LAN as the masters or remotely (e.g. across a WAN)
    - Replication to the subscribers is always asynchronous
    - Cache tables (if used) are present on the subscribers as regular (non-cache) tables
    - While it is possible to convert a subscriber to a master, this will mean that it is no longer part of this A/S pair setup. This is usually only done in a DR scenario where a remote master is being used to instantiate a new A/S pair at the disaster recovery site.
    I hope this clarifies the use of subscribers. They are primarily used for:
    1. Read scale out (reader farm)
    2. Simple DR
    3. Oracle/AWT DR (advanced use case)
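    For reference, a sketch of the replication scheme described above. The datastore and host names are hypothetical, and the syntax is a rough rendering of the TimesTen CREATE ACTIVE STANDBY PAIR statement, so verify the details against your version's documentation:

    CREATE ACTIVE STANDBY PAIR
      repds ON "host1",
      repds ON "host2"
      RETURN RECEIPT
      SUBSCRIBER repds ON "host3";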
    Chris

  • Value Mapping replication issue

    Hi  PI Experts,
    I am working on the Value mapping replication scenario using Z-table created in R/3 system.
    I have configured the value mapping Replication Out Abap proxy.
    I am getting the following error :
    Audit Log for Message: 4d404b41-39e4-0083-e100-80008b3557e6
    Time Stamp Type Description
    2011-01-27 07:56:19 Information The message was successfully received by the messaging system. Protocol: XI URL: http://gendevhrcx51.unix.appliarmony.net:54000/MessagingSystem/receive/JPR/XI Credential (User): PIAPPLUSER
    2011-01-27 07:56:19 Information Using connection JPR. Trying to put the message into the receive queue.
    2011-01-27 07:56:19 Information Message successfully put into the queue.
    2011-01-27 07:56:19 Information The message was successfully retrieved from the receive queue.
    2011-01-27 07:56:19 Information The message status was set to DLNG.
    2011-01-27 07:56:19 Information Java Proxy Runtime (JPR) accepted the message.
    2011-01-27 07:56:19 Error JPR could not process the message. Reason: Cannot locate proxy bean ValueMappingApplication.
    2011-01-27 07:56:19 Error Delivering the message to the application using connection JPR failed, due to: com.sap.engine.interfaces.messaging.api.exception.MessagingException: Error processing inbound message. Exception: Cannot locate proxy bean ValueMappingApplication.
    2011-01-27 07:56:19 Information The message status was set to WAIT.
    2011-01-27 07:56:19 Information The asynchronous message was successfully scheduled to be delivered at Thu Jan 27 08:01:19 CET 2011.
    I have followed the threads :
    1) /people/udo.martens/blog/2009/04/03/value-mapping-replication-scenario
    2) ValueMappingReplication in PI 7.1 
    3) How to Perform Value Mapping – A Walkthrough -> Sarath Chandra Kandadai
    which had a similar issue, but I could not make any headway.
    Questions:
    1) Are there any specific PIAPPLUSER authorizations required? I have configured the CC as per the 3rd thread.
    2) There is an issue with SLD access when I look at the JPR monitoring, which could be the possible reason:
      SLD access SLD host:port = gendevhrcx51:54000
    Error getting JPR configuration from SLD. Exception: No entity of class SAP_BusinessSystem for DHX.SystemHome.gendevhrcx51 found
    No access to get JPR configuration
    I have referred to Note 809420 and asked the Basis team to look into this.
    I am running out of ideas; I request you guys to help with this issue.
    Thanks
    -Alok

    Hi Alok,
    I have a similar error. Here is the error:
    30.12.2013 20:40:17.789
    Information
    Java Proxy Runtime (JPR) accepted the message.
    30.12.2013 20:40:17.871
    Error
    JPR could not process the message. Reason: No remote bean found for reference of class com.sun.proxy.$Proxy352.
    30.12.2013 20:40:17.876
    Error
    Delivering the message to the application using connection JPR failed, due to: com.sap.engine.interfaces.messaging.api.exception.MessagingException: Error processing inbound message. Exception: No remote bean found for reference of class com.sun.proxy.$Proxy352.
    30.12.2013 20:40:17.911
    Information
    The asynchronous message was successfully scheduled to be delivered at Mon Dec 30 20:45:17 CET 2013.
    I have registered the inbound interfaces:
    http://sap.com/xi/XI/System#ValueMappingReplication = localejbs/sap.com/com.sap.xi.services/ValueMappingApplication:valueMappingReplication
    http://sap.com/xi/XI/System#ValueMappingReplicationSynchronous = localejbs/sap.com/com.sap.xi.services/ValueMappingApplicationSynchronous:valueMappingReplicationSynchronous
    2 interfaces found
    But somehow the bean is not found, and it gives me the error:
    JPR could not process the message. Reason: No remote bean found for reference of class com.sun.proxy.$Proxy352.
    Can you tell me in detail what you did to resolve the problem? I send the test data from SoapUI using the outbound interface ValueMappingReplicationOut provided by the content in SAP BASIS 7.11.
    Thanks,
    Ly-Na

  • Multimaster replication survival & overhead

    Hi
    We are planning a system which should have a very high survivability rate. To this aim we consider using 3 oracle servers (8.1.7), located in 3 different sites, replicated in an asynchronous multimaster configuration, with a short refresh time. The nature of the application is that many small update transactions are happening constantly.
    Several questions:
    1. We are trying to assess the required bandwidth of the site-to-site network connection we will need to support this configuration. What is the approx. network overhead of a small (~500 bytes) transaction?
    2. Provided we come up with a smart enough conflict resolving mechanism, will this system survive? It is vital that committed transactions will never, ever, be lost.
    3. Can the 3-DB architecture be left in an inconsistent state? For example, can a committed transaction in database A be propagated to database B, and then A fail before it can be propagated to database C, leaving inconsistencies in the working databases B & C (at least until A is restored)?
    4. How difficult would it be to return such a system to a normal state after one (or more) sites fail?
    Thanks,
    Idan

    Originally posted by Patricia McElroy ([email protected]):
    This requires a bit more detail - what indicates that replication is hung... is there anything in any of the database logs that indicates a problem? You indicate that even changes to the local database cannot be made - is this true also for non-replicated tables? Also, run "SELECT * FROM dba_repprop;" and confirm that the method for propagation is indeed ASYNCHRONOUS.

    thanks patricia
    here are the details:
    1. I have 2 DB servers at remote locations running Oracle 8.1.6 on Solaris 2.7.
    2. When the link between these two servers fails, I am not able to put the job in the deferred queue for transactions, though I have scheduled the job and the link as well.
    3. It does not allow me to commit an insert statement on the local server when the link is down.

  • Unstable session replication in a HA cluster (CF10)

    Hi,
    We have tried to create an HA cluster with requests being distributed round-robin to N instances of ColdFusion. We are NOT using sticky sessions, as we are replicating session state to all CF instances. What we are seeing is that all is fine with low to moderate load; however, under heavy load and at random times, the replication fails and leads to things in session scope not working. This manifests in users not being able to log in to our application (we store a token in session scope to store logged-in status).
    Again, key point: under low to moderate load it all works fine; users are directed to random nodes in the cluster and their session is picked up fine, as the session is distributed to all nodes, so we are pretty confident the config is right.
    Linux servers using CF10 with update 12 applied. Also running is FusionReactor 5.04 on all instances. Each instance has a 64GB heap, Java 7.0.15 (latest certified).
    First, the Apache setup.
    workers.properties
    worker.list=balancer, jkstatus
    worker.jkstatus.type=status
    worker.balancer.type=lb
    worker.balancer.balance_workers=cfusion_master,cfusion_slave2,cfusion_slave1
    worker.balancer.method=R
    worker.balancer.sticky_session=False
    worker.balancer.ping_mode=A
    worker.cfusion_master.type=ajp13
    worker.cfusion_master.host=localhost
    worker.cfusion_master.port=8012
    worker.cfusion_master.max_reuse_connections=250
    worker.cfusion_master.lbfactor=100
    worker.cfusion_slave2.reference=worker.cfusion_master
    worker.cfusion_slave2.port=8014
    worker.cfusion_slave1.reference=worker.cfusion_master
    worker.cfusion_slave1.port=8013
    Now the server.xml from 2 nodes (as an example, when I run a 2-node cluster).
    One of the configs from a server in the cluster:
    <Server port="8007" shutdown="SHUTDOWN">
      <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on">
      </Listener>
      <Listener className="org.apache.catalina.core.JasperListener">
      </Listener>
      <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener">
      </Listener>
      <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener">
      </Listener>
      <GlobalNamingResources>
        <Resource description="User database that can be updated and saved" name="UserDatabase" pathname="conf/tomcat-users.xml" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" type="org.apache.catalina.UserDatabase" auth="Container">
        </Resource>
      </GlobalNamingResources>
      <Service name="Catalina">
        <Executor name="tomcatThreadPool" minSpareThreads="4" maxThreads="150" namePrefix="catalina-exec-">
        </Executor>
        <Connector port="8012" protocol="AJP/1.3" connectionTimeout="600000" redirectPort="8445" tomcatAuthentication="false">
        </Connector>
        <Engine jvmRoute="cfusion" name="Catalina" defaultHost="localhost">
          <Realm className="org.apache.catalina.realm.LockOutRealm">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase">
            </Realm>
          </Realm>
          <Host name="localhost" autoDeploy="false" unpackWARs="true" appBase="webapps">
            <Valve pattern="%h %l %u %t &quot;%r&quot; %s %b" directory="logs" prefix="localhost_access_log." className="org.apache.catalina.valves.AccessLogValve" suffix=".txt" resolveHosts="false">
            </Valve>
          </Host>
          <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
            <Manager notifyListenersOnReplication="true" expireSessionsOnShutdown="false" className="org.apache.catalina.ha.session.DeltaManager">
            </Manager>
            <Channel className="org.apache.catalina.tribes.group.GroupChannel">
              <Membership port="45564" dropTime="3000" address="228.0.0.4" className="org.apache.catalina.tribes.membership.McastService" frequency="500">
              </Membership>
              <Receiver port="4001" autoBind="100" address="auto" selectorTimeout="5000" maxThreads="6" className="org.apache.catalina.tribes.transport.nio.NioReceiver">
              </Receiver>
              <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
                <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender">
                </Transport>
              </Sender>
              <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector">
              </Interceptor>
              <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor">
              </Interceptor>
            </Channel>
            <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter="">
            </Valve>
            <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve">
            </Valve>
            <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener">
            </ClusterListener>
            <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener">
            </ClusterListener>
          </Cluster>
        </Engine>
        <Connector port="8499" protocol="org.apache.coyote.http11.Http11NioProtocol" connectionTimeout="20000" redirectPort="8443" executor="tomcatThreadPool">
        </Connector>
      </Service>
    </Server>
    Config from one of the other nodes
    <Server port="8008" shutdown="SHUTDOWN">
      <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on">
      </Listener>
      <Listener className="org.apache.catalina.core.JasperListener">
      </Listener>
      <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener">
      </Listener>
      <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener">
      </Listener>
      <GlobalNamingResources>
        <Resource description="User database that can be updated and saved" name="UserDatabase" pathname="conf/tomcat-users.xml" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" type="org.apache.catalina.UserDatabase" auth="Container">
        </Resource>
      </GlobalNamingResources>
      <Service name="Catalina">
        <Executor name="tomcatThreadPool" minSpareThreads="4" maxThreads="150" namePrefix="catalina-exec-">
        </Executor>
        <Connector port="8013" protocol="AJP/1.3" connectionTimeout="600000" redirectPort="8446" tomcatAuthentication="false">
        </Connector>
        <Engine jvmRoute="cfusion" name="Catalina" defaultHost="localhost">
          <Realm className="org.apache.catalina.realm.LockOutRealm">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase">
            </Realm>
          </Realm>
          <Host name="localhost" autoDeploy="false" unpackWARs="true" appBase="webapps">
            <Valve pattern="%h %l %u %t &quot;%r&quot; %s %b" directory="logs" prefix="localhost_access_log." className="org.apache.catalina.valves.AccessLogValve" suffix=".txt" resolveHosts="false">
            </Valve>
          </Host>
          <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
            <Manager notifyListenersOnReplication="true" expireSessionsOnShutdown="false" className="org.apache.catalina.ha.session.DeltaManager">
            </Manager>
            <Channel className="org.apache.catalina.tribes.group.GroupChannel">
              <Membership port="45564" dropTime="3000" address="228.0.0.4" className="org.apache.catalina.tribes.membership.McastService" frequency="500">
              </Membership>
              <Receiver port="4002" autoBind="100" address="auto" selectorTimeout="5000" maxThreads="6" className="org.apache.catalina.tribes.transport.nio.NioReceiver">
              </Receiver>
              <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
                <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender">
                </Transport>
              </Sender>
              <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector">
              </Interceptor>
              <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor">
              </Interceptor>
            </Channel>
            <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter="">
            </Valve>
            <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve">
            </Valve>
            <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener">
            </ClusterListener>
            <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener">
            </ClusterListener>
          </Cluster>
        </Engine>
        <Connector port="8500" protocol="org.apache.coyote.http11.Http11NioProtocol" connectionTimeout="20000" redirectPort="8443" executor="tomcatThreadPool">
        </Connector>
      </Service>
    </Server>
    So what do I see in the logs? Well, sometimes I see exceptions like this:
    Mar 05, 2014 9:55:19 PM org.apache.catalina.ha.session.DeltaManager messageReceived
    SEVERE: Manager [localhost#/]: Unable to receive message through TCP channel
    java.lang.IllegalStateException: removeAttribute: Session already invalidated
              at org.apache.catalina.ha.session.DeltaSession.removeAttribute(DeltaSession.java:617)
              at org.apache.catalina.ha.session.DeltaRequest.execute(DeltaRequest.java:171)
              at org.apache.catalina.ha.session.DeltaManager.handleSESSION_DELTA(DeltaManager.java:1347)
              at org.apache.catalina.ha.session.DeltaManager.messageReceived(DeltaManager.java:1293)
              at org.apache.catalina.ha.session.DeltaManager.messageDataReceived(DeltaManager.java:1014)
              at org.apache.catalina.ha.session.ClusterSessionListener.messageReceived(ClusterSessionListener.java:92)
              at org.apache.catalina.ha.tcp.SimpleTcpCluster.messageReceived(SimpleTcpCluster.java:897)
              at org.apache.catalina.ha.tcp.SimpleTcpCluster.messageReceived(SimpleTcpCluster.java:878)
              at org.apache.catalina.tribes.group.GroupChannel.messageReceived(GroupChannel.java:278)
              at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:84)
              at org.apache.catalina.tribes.group.interceptors.TcpFailureDetector.messageReceived(TcpFailureDetector.java:113)
              at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:84)
              at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:84)
              at org.apache.catalina.tribes.group.ChannelCoordinator.messageReceived(ChannelCoordinator.java:253)
              at org.apache.catalina.tribes.transport.ReceiverBase.messageDataReceived(ReceiverBase.java:287)
              at org.apache.catalina.tribes.transport.nio.NioReplicationTask.drainChannel(NioReplicationTask.java:212)
              at org.apache.catalina.tribes.transport.nio.NioReplicationTask.run(NioReplicationTask.java:101)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
              at java.lang.Thread.run(Thread.java:722)
    I'm unsure why this happens, as Tribes uses certified messaging, so it should have been resent, right? In any case, I believe I can change the config so messages are not sent asynchronously, which should sort this out.
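    For reference, a sketch of that change. The channelSendOptions attribute on the Cluster element is a bit mask of the Tribes send-option flags (8 = asynchronous, 4 = use ack, 2 = synchronized ack), so replacing the current value of 8 with 6 should make delivery synchronous with acknowledgement; verify against your Tomcat version's clustering documentation:

    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="6">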
    I see (good) messages like this
    Mar 05, 2014 9:42:19 PM org.apache.catalina.ha.session.DeltaManager startInternal
    INFO: Register manager localhost#/ to cluster element Engine with name Catalina
    Mar 05, 2014 9:42:19 PM org.apache.catalina.ha.session.DeltaManager startInternal
    INFO: Starting clustering manager at localhost#/
    Mar 05, 2014 9:42:19 PM org.apache.catalina.ha.session.DeltaManager getAllClusterSessions
    INFO: Manager [localhost#/], requesting session state from org.apache.catalina.tribes.membership.MemberImpl[tcp://{192, 168, 128, 50}:4001,{192, 168, 128, 50},4001, alive=68824148, securePort=-1, UDP Port=-1, id={123 126 89 39 96 -59 69 8 -113 79 51 122 25 108 -11 -110 }, payload={}, command={}, domain={}, ]. This operation will timeout if no session state has been received within 60 seconds.
    Mar 05, 2014 9:42:20 PM org.apache.catalina.ha.session.DeltaManager waitForSendAllSessions
    INFO: Manager [localhost#/]; session state send at 3/5/14 9:42 PM received in 929 ms.
    Mar 05, 2014 9:42:20 PM org.apache.catalina.ha.session.JvmRouteBinderValve startInternal
    INFO: JvmRouteBinderValve started
    So session state does appear to be flying around the cluster. I do nightly restarts of some of the nodes due to another issue I have with an ever-growing heap (a separate issue). Interestingly, I also see nodes leave and join the cluster; again, this is good (it shows that the multicast is working, and also that replication should be working).
    Mar 05, 2014 2:30:16 AM org.apache.catalina.tribes.group.interceptors.TcpFailureDetector memberDisappeared
    INFO: Verification complete. Member disappeared[org.apache.catalina.tribes.membership.MemberImpl[tcp://{192, 168, 128, 50}:4001,{192, 168, 128, 50},4001, alive=18629101, securePort=-1, UDP Port=-1, id={-2 65 10 -79 53 -75 76 52 -99 63 -90 -120 34 -89 -14 100 }, payload={}, command={66 65 66 89 45 65 76 69 88 ...(9)}, domain={}, ]]
    Mar 05, 2014 2:30:16 AM org.apache.catalina.ha.tcp.SimpleTcpCluster memberDisappeared
    INFO: Received member disappeared:org.apache.catalina.tribes.membership.MemberImpl[tcp://{192, 168, 128, 50}:4001,{192, 168, 128, 50},4001, alive=18629101, securePort=-1, UDP Port=-1, id={-2 65 10 -79 53 -75 76 52 -99 63 -90 -120 34 -89 -14 100 }, payload={}, command={66 65 66 89 45 65 76 69 88 ...(9)}, domain={}, ]
    Mar 05, 2014 2:35:16 AM org.apache.catalina.ha.tcp.SimpleTcpCluster memberAdded
    INFO: Replication member added:org.apache.catalina.tribes.membership.MemberImpl[tcp://{192, 168, 128, 50}:4001,{192, 168, 128, 50},4001, alive=1083, securePort=-1, UDP Port=-1, id={123 126 89 39 96 -59 69 8 -113 79 51 122 25 108 -11 -110 }, payload={}, command={}, domain={}, ]
    So I am stuck now on how to proceed, to establish why at random times the replication fails, leading to cluster collapse. Could it be the size of the session? I have a few CFCs stuffed into session scope, but perhaps when the load is high there are too many? Things fail even with a cluster of 2 on one server; initially I had an 8-node cluster on 2 separate machines, but when it failed I rolled it back to a cluster of 2 instances on the one server to see if that was stable (it's not 100%, which is what I need).
    Any advice or pointers gratefully received.

    So Lynux, that’s an interesting sounding solution. Would be great if it made some difference for Guitsboy. We’ll see. I notice that you say you’ve not yet tried it, though, and fair enough. Thanks for offering it.
    But I’m curious: did you ever resolve your original problem? And if not, hopefully you saw the note I just wrote to Guitsboy, asking him something that may well interest you if you still have your problem. On rereading this thread, from back in April, I’ve also had some new thoughts come to mind which I’ll share, if it may help either of you, or others with this seeming same issue.
    To remind readers who may not want to review the whole thread, you had said originally that “all is fine with low to moderate load, however under heavy load and at random times the replication fails“, and that this failure “manifests in users not being able to login to our application (we store a token in session scope to store logged in status)”.  Then it seems you may have concluded that things were down to the error you were seeing in the logs:
    Mar 05, 2014 9:55:19 PM org.apache.catalina.ha.session.DeltaManager messageReceived
    SEVERE: Manager : Unable to receive message through TCP channel
    java.lang.IllegalStateException: removeAttribute: Session already invalidated
    And now guitsboy reports seeing the same error.
    But here’s the thing that came to mind for me tonight as I read this: you know, there can be a lot of other reasons that users can feel that they “lose their session”, even without using clustering and replication.
    There are issues related sometimes to folks having duplicate session tokens (which can happen for various reasons, including perhaps ones in your code, and maybe only when people visit pages in a certain pattern, so that it happens only occasionally and not always).
    Then there is an issue that can arise if you are supporting both http and https requests, where Tomcat (not CF) balks at that (see http://www.petefreitag.com/item/817.cfm, and though he shows a solution in IIS you should be able to implement a similar one in mod_rewrite if that was indeed perhaps your issue).
    So I’d be curious if either of you may be in a position to have a failing client use any sort of client tool (like Chrome’s dev tools, or Firebug or Firefox’s new builtin tools, or IE’s f12 dev tools) to watch the communication between the client and the server, and especially to watch the cookies being sent. You guys both mention using jsessionid. Are they the same cookie value on each request? And/or are there more than jsessionid? I’ve seen it happen. There could be differences in the domain property reported for the cookie, the httponly property, the secure property, and so on.  And you really do want to view the value sent from the client to the server, because if you view the cookie scope on the server a) it may show values set ON the server rather than sent TO the server, and b) it won’t show these additional cookie properties that were in play on the client. CF only sees the cookie name and value.
    I’ve helped many people find out that this was the reason for the seeming session loss (and sometimes it was not all requests by all clients but perhaps only some requests for some clients, all on the same server). At least if this is the crux of the problem, you can then tackle WHY it’s happening.  There can be many reasons, from code to configuration, so I won’t belabor them now.
    But if either of you may be able to confirm this, perhaps we can help you both get a little closer to a real explanation and solution for your problem. Again, I’m just guessing a bit based on what you’ve written. I realize it may be that none of this is the problem and you have hit some other real unrelated bug. But I really feel confident that you ought to try to check this out first, as it’s indeed been the crux of problems for others, without respect to clustering.  It seems worth ruling out, so that you don’t get misled chasing the problem on the assumption that it is about clustering.
    As always, hope that helps.
    /charlie
