OS Clustering in Oracle

Hi all,
We have a client who runs Oracle databases on IBM Power. He now wants to set up OS clustering (HACMP) in the same environment, and he wants us to coordinate that activity with the system admin team. What is the DBA's role in deploying HACMP?
Any ideas?
Kai

As with any question about roles, it really depends on the organization.
At a minimum, the DBA would probably need to
- install the appropriate binaries on the failover server (assuming the Oracle Home is on local disks and the database files are on the SAN)
- identify what scripts need to be run on a failover
- work with the admins to test the failover, etc
If you don't already have the data files on shared storage (e.g. a SAN), you probably have a lot more work to do. And different organizations may want you to do far more than the minimum, depending on how well the admins understand Oracle.
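For example, once the admins fail the resource group over, a quick DBA-side sanity check might look like this (a minimal sketch; it just assumes the instance is expected to come up OPEN on the surviving node):
-- Run on the node that just took over the resource group:
-- the instance should report OPEN and the database READ WRITE.
SELECT instance_name, host_name, status FROM v$instance;
SELECT name, open_mode FROM v$database;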
Justin

Similar Messages

  • How to create clustering on oracle application server on windows

Hi,
I want to know how I can set up clustering for Oracle Application Server on a Windows machine.
Regards

I think you need to configure Web Cache as a frontend for this, with two or more servers behind it.
Try reading:
Metalink Doc ID 312834.1
    http://www.oracle.com/technetwork/developer-tools/forms/documentation/advancedformsconf-128186.pdf

  • Linux Clustering with oracle

Hi all,
Please forward good PDF documents and links describing Red Hat Linux clustering with Oracle.
Please help...
    Shiju

Check the following link:
    http://www.oracle.com/pls/db102/to_pdf?pathname=install.102%2Fb15660.pdf&remark=portal+%28Getting+Started%29
    Virag

  • Linux clusters and oracle

    Hello gentlemen!
Does anybody have experience with Linux clusters and Oracle?
Does this combination exist? Is Linux a good fit for this role?
    Any links on this topic?
    Any help would be greatly appreciated.
    Best regards,
    San.

    This contradicts the information in the Oracle® Database Release Notes 10g Release 2 (10.2) for Linux x86-64:
    In Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide, Chapter 2, "Preinstallation," in the section "Oracle Clusterware Home Directory," it incorrectly lists the path /u01/app/oracle/product/crs as a possible Oracle Clusterware home (or CRS home) path. This is incorrect. A default Oracle base path is /u01/app/oracle, and the Oracle Clusterware home must never be a subdirectory of the Oracle base directory.
A possible CRS home directory is a path outside of the Oracle base directory. For example, if the Oracle base directory is /u01/app/oracle, then the CRS home can be similar to one of the following:
/u01/crs/
    /u01/crs/oracle/product/10/crs
    /crs/home
    This issue is tracked with Oracle bug 5843155.

  • Clustering of Oracle AS 10.1.3.0 nodes in active-active topology

    Hi,
    The following requirements for clustering in our environment exist:
    * Active-active topology
    * External hardware load-balancer
    * Fail-over
    * Application session-state replication
    We use multiple Oracle AS 10.1.3.0 nodes. Applications are deployed on 2 or more OC4J instances on those nodes.
    I've read the Oracle docs on clustering and I have some questions regarding the clustering configuration to use:
1. When an external hardware load-balancer is used, is it still necessary to configure an Oracle AS cluster? The docs talk about clusters, but in some cases I'm not sure whether an Oracle AS cluster or a hardware cluster is meant.
2. It is not entirely clear which type of application clustering to use in combination with an Oracle AS cluster: multicast or peer-to-peer? If I read the docs correctly, they suggest dynamic peer-to-peer application clustering when an Oracle AS cluster is used, and multicast for standalone OC4J. Is that correct?
    Thanks,
    Ronald

1. Well, the idea is to have all clients route their HTTP requests to the physical load-balancer. The load-balancer routes/forwards the requests to one of the Oracle AS nodes based on the load on those machines. If application state is synchronized between the application(s) on the OC4J instances, would it still be necessary to configure an Oracle AS cluster? Or are there advantages to having such a cluster? One of the pros I can think of is that groups are created automatically, so deployment/management of the application(s) becomes easier. Are there any others? Or is this configuration without an Oracle AS cluster simply not a good idea?
    2. Clear.
    3. JMS, thanks for the tip.
    4. Yes we use Oracle RAC. Does that impose constraints on the Oracle AS clustering and/or application clustering?
    Ronald

  • Problem with clustering of oracle application server 10.1.3.1

    Hi
I have clustered two instances of Oracle Application Server 10.1.3.1 running on Solaris.
    Instance 1
<notification-server interface="ipv4">
  <port local="6100" remote="6200" request="6003"/>
  <ssl enabled="true" wallet-file="$ORACLE_HOME/opmn/conf/ssl.wlt/default"/>
  <topology>
    <nodes list="10.x.x.1:6201"/>
  </topology>
</notification-server>
    Instance 2
<notification-server interface="ipv4">
  <port local="6101" remote="6201" request="6004"/>
  <ssl enabled="true" wallet-file="$ORACLE_HOME/opmn/conf/ssl.wlt/default"/>
  <topology>
    <nodes list="10.x.x.2:6200"/>
  </topology>
</notification-server>
But when I log on to Enterprise Manager I get the following message at the top of the page:
    You have more than one instance of the Application Server Control application running in this cluster. This is not a recommended configuration and could lead to unexpected problems. Please stop the additional instances of Application Server Control or disable routing to these instances.
When I click on one of the instances, I get a session expired error.
What is the problem behind this?
    regards,

    In the doc, it states that you should only ever have one active instance of application server control running in a cluster at a time -- ASC was not designed to operate as a session sharing application, so your session state is specific to one instance. If for some reason you were directed to the ASC application on the other instance, you get errors.
    You can use the OC4J admin_client.jar utility or the opmnctl command to shut down the ascontrol application in one of the instances.
    -steve-

  • Simon Greener's Morton Key Clustering in Oracle Spatial

    Hi folks,
    Apologies for the rambling.  With mattyschell heading for greener open source big apple pastures I am looking for new folks to bounce ideas and code off.  I was thinking this week about the discussion last autumn over spatial clustering.
    https://community.oracle.com/thread/3617887
During the course of the thread we all kind of pooh-poohed spatial clustering as not much of a solution, myself being one of the primary poohers.  Yet the concept certainly remains as something to consider regardless of our opinions.  The yellow book, the Greener/Ravada book, Simon's recent treatise (http://download.oracle.com/otndocs/products/spatial/pdf/biwa_2015/biwa2015_uc_comparativeperformance_greener.pdf), they all put forward clustering such that at the very least we should consider it a technique we should be able as professionals to do - a tool in the toolbox whether or not it is always the right answer.  I am mildly (very mildly) curious to see if Kothuri, Godfrind and Beinat will recycle their section on spatial clustering with the locked-down MD.HHENCODE into their 12c revision out this summer.  If they don't, then what is the replacement for this technique?  If they do, then we return to all of our griping about this ancient routine that Simon implies may date back to the CHS and their hhcode indexes - at least it's not written in Java!
    Anyhow, so I've been in the midst this month of refreshing some of the datasets I manage and considering clustering the larger tables whilst I am at it.  Do I really expect to see huge performance gains?   Well... not really.  But it does seem like something that should be easy to accomplish, certainly something that "doesn't hurt" and shows that I am on top of things (e.g. "checks the box").  But returning to the discussion from last fall, just what is the best way to do this in Oracle Spatial?
    So if we agree to ignore poor old MD.HHENCODE, then what?  Hilbert curves look nifty but no one seems to be stepping up with the code for them.  And this reroutes us back around to Simon and his Morton key code.
    http://www.spatialdbadvisor.com/oracle_spatial_tips_tricks/138/spatial-sorting-of-data-via-morton-key
    So who all is using Simon's code currently?  If you read that discussion from last fall there does not seem to be anyone doing so and we never heard back from Cat Person on either what he decided to do or what his name is.
    I thought I could take a stab at streamlining Simon's process somewhat to make things easier for myself to roll this onto many tables.  I put together the following small package
    https://github.com/pauldzy/DZ_SDO_CLUSTER/tree/master/Packages
In particular I wanted to bundle up the side issues of how to convert your lines and polygons into points, automate things somewhat, and provide a little verification function to see what the results look like.  So again, nothing that Simon does not already walk through on his webpage, just making it a bit easier to bang out on your tables without writing a separate long SQL process for each one.
So for example, to use Simon's Morton key logic you need to know the extent envelope of the data (in order to define a proper grid).  So if it's a large table, you'd want to stash the envelope info in the metadata.  You can do this with the update_metadata_envelope procedure, or just suffer through sdo_aggr_mbr each time if you don't want to go that route (I have one table of small watershed polygons that takes about 9 hours to run sdo_aggr_mbr upon).  So just run things at the SQL prompt:
SELECT
DZ_SDO_CLUSTER.MORTON_UPDATE(
    p_table_name  => 'CATCHMENT_NP21'
   ,p_column_name => 'SHAPE'
   ,p_grid_size   => 1000
)
FROM dual;
    This will return the update clause populated with the values to use with the morton_key wrapper function, e.g. "morton_key(SHAPE,160.247133275879,-17.673722530871,.0956820001136141,.0352063207508021)".  So then just paste that into an update statement
UPDATE foo
SET my_morton_key = dz_sdo_cluster.morton_key(
    SHAPE
   ,160.247133275879
   ,-17.673722530871
   ,.0956820001136141
   ,.0352063207508021
);
    Then rebuild your table sorting on the morton_key.  I just use the TOAD rebuild table tool and manually add the order by clause to the rebuild script.  I let TOAD do all the work of moving the indexes, constraints and grants to the new table.  I imagine there are other ways to do this.
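One alternative is a plain CTAS (a minimal sketch; foo and my_morton_key are the placeholder names from the update above, and you would still need to recreate indexes, constraints and grants before swapping the tables):
-- Build a physically reordered copy of the table.
CREATE TABLE foo_sorted AS
SELECT *
FROM foo
ORDER BY my_morton_key;
-- Recreate indexes/constraints/grants on foo_sorted,
-- then rename the two tables to swap the sorted copy in.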
    The final function is meant to be popped into Oracle mapviewer or something similar to show your family and friends the results.
SELECT
dz_sdo_cluster.morton_visualize(
    'NHDPLUS'
   ,'NHDFLOWLINE_NP21_ACU'
   ,'SHAPE'
   ,'OBJECTID'
   ,'100'
   ,10000
   ,'MORTON_KEY'
)
FROM dual;
    Look Mom, there it is!
So anyhow, this is a first stab at things and I am interested in feedback or suggestions for improvement.  Did I get the logic correct?  Don't spare my feelings if I botched something.  Note that like Simon I passed on the matter of just how to determine the proper grid size.  I've been using 1000 for the continental US + Hawaii/PR/VI, and sitting here this morning I think that is probably too large.  Of course it depends on the size of the geometries and thus the density of the resulting points.  With water features this can vary a lot from place to place, so perhaps 1000 is okay.  What would the algorithm be to determine a decent grid size?  It occurs to me I could tell you the average feature count per Morton key value - okay, well, it's about 10.  That seems small to me.  So I could see another function in this package that returns some kind of summary on the results of the keying, to tell you if your grid size estimate was reasonable.
    Cheers and Happy Saturday,
    Paul

    I've done some spatial clustering testing this week.
    Firstly, to reiterate the purpose of spatial clustering as I see it:  spatial clustering can be of benefit in situations where frequent window based spatial queries are made.  In particular it can be very useful in web mapping scenarios where a map server is requesting data using SDO_FILTER or SDO_ANYINTERACT and there is a need to return the data as quickly as possible.  If the data required to satisfy the query can be squeezed into as few blocks as possible, then the IO overhead is clearly reduced.
    As Bryan mentioned above, once the data is in the buffer cache, then the advantage of spatial clustering is reduced.  However it is not always possible to get/keep enough of the data in the buffer cache, so I believe spatial clustering still has merits, particularly if it can be implemented alongside spatial partitioning.
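(As a rough way to see how much of a table is currently sitting in the buffer cache, you can count its blocks in v$bh - a sketch, assuming SELECT privilege on the v$ views and the PARCELS table described below:)
-- Count buffer cache blocks currently holding PARCELS data.
SELECT COUNT(*) AS cached_blocks
FROM v$bh b
JOIN dba_objects o ON b.objd = o.data_object_id
WHERE o.owner = USER
AND o.object_name = 'PARCELS';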
    I ran the tests using an 11.2.0.4 database on my laptop.  I have a hard disk rather than SSD, so the effects of excessive IO are exaggerated.  The database is configured with the default 8kb block size.
    Initially, I created a table PARCELS:
create table parcels (
  id           integer,
  created_date date,
  x            number,
  y            number,
  val1         varchar2(20),
  val2         varchar2(100),
  val3         varchar2(200),
  geometry     mdsys.sdo_geometry,
  hilbert_key  number);
    I inserted 2.8 million polygons into this table.  The CREATED_DATE is the actual date the polygons were captured.  I populated val1, val2 and val3 with string values to pad the rows out to simulate some business data sitting alongside the sdo_geometry.
    I set X,Y to the first ordinate of the polygon and then set hilbert_key = sdo_pc_pkg.hilbert_xy2d(power(2,31), x, y).
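That step might look something like the following (a sketch rather than the actual script; it assumes 2D geometries and takes the first vertex via SDO_UTIL.GETVERTICES):
-- Populate x,y from the first vertex of each geometry,
-- then derive the Hilbert key from those coordinates.
UPDATE parcels p
SET (x, y) = (SELECT v.x, v.y
              FROM TABLE(SDO_UTIL.GETVERTICES(p.geometry)) v
              WHERE v.id = 1);
UPDATE parcels
SET hilbert_key = sdo_pc_pkg.hilbert_xy2d(POWER(2,31), x, y);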
    I then created 4 tables to base the tests upon:
    PARCELS_RANDOM:  Ordered by dbms_random.random - an absolute worst case scenario.  Unrealistic, but worthwhile as a benchmark.
    PARCELS_BASE_DATE:  Ordered by CREATED_DATE.  This is probably pretty close to how the original source data is structured on disk.
    PARCELS_RTREE:  Ordered by RTree.  Achieved by inserting based on an SDO_FILTER query
    PARCELS_HILBERT:  Ordered by the hilbert_key attribute
    As a first test, I counted the number of blocks required to satisfy an SDO_FILTER query.  E.g.
    select count(distinct(dbms_rowid.rowid_block_number(rowid)))
    from parcels_rtree
    where sdo_filter(geometry,
                    sdo_geometry(2003, 2157, null, sdo_elem_info_array(1, 1003, 3),
                                    sdo_ordinate_array(644232,773809, 651523,780200))) = 'TRUE';
    I'm assuming dbms_rowid.rowid_block_number(rowid) is suitable for this.
    I ran this on each table and repeated it over three windows.
    Results:
So straight off we can see that the random ordering gave pretty horrific results, as the data required to satisfy the query is spread over a large number of blocks.  The natural date based clustering was far better.  RTree and Hilbert based clustering reduced this by a further 50%, with Hilbert just nosing out RTree.
Since web mapping is the use case I am most likely to target, I then set up a test case as follows:
- Set up layers in GeoServer for each of the tables.
- Used a script to generate 1,000 random squares over the extent of the data, ranging from 200m to 500m in width and height.
- Used JMeter to make a WMS request for a png of each of the 1,000 windows.  JMeter was run sequentially with just one thread, so it waited for each request to complete before starting the next.  I ran these tests 3 times to balance out the results, flushing the buffer cache before each run.
    Results:
Again the random ordering performed woefully badly - somewhat exacerbated by the quality of the disk on my laptop.  The natural date based clustering performed far better.  RTree and Hilbert based clustering further reduced the time by more than half.
In summary, the results suggest that spatial clustering is worth the effort if:
- the data is not already reasonably well clustered
- you've got a decent quantity of data
- you're expecting a lot of window based queries which need to be returned as quickly as possible
- you don't expect to be able to fit all the data in the buffer cache
    When it comes to deciding between RTree and Hilbert (or Morton/z-order or any other space filling curve method).... I found that the RTree method can be a bit slow on large datasets, although this may not matter as a one off task.  Plus it requires a spatial index on the source table to start off with.  The key based methods are based on an xy, so for lines and polygons there is an intermediate step to extract an xy.  I would tend to recommend this approach if you also partition the data based on a subset of the cluster key.
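For example (a sketch only; the column list is trimmed and the partition bounds are made-up values that would need to come from the actual key distribution):
-- Range-partition on the cluster key so each partition covers
-- a contiguous stretch of the space-filling curve.
CREATE TABLE parcels_part (
  id          integer,
  geometry    mdsys.sdo_geometry,
  hilbert_key number
)
PARTITION BY RANGE (hilbert_key) (
  PARTITION p1 VALUES LESS THAN (536870912),
  PARTITION p2 VALUES LESS THAN (1073741824),
  PARTITION p3 VALUES LESS THAN (1610612736),
  PARTITION p4 VALUES LESS THAN (MAXVALUE)
);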
    Scripts are available here: https://github.com/john-otoole/oracle_spatial_cluster_test
    John

  • SOA Suite 11g Clustering error/oracle bug ?

Hi, I have been experimenting with SOA Suite 11g clustering in my company. At the moment we still use development mode on our application server. Now I'm trying to move to production and I also want to cluster the servers. Here is the chronology of what I did.
I have 3 servers:
My plan is:
server #1 becomes a proxy server, let's say named soa_proxy
server #2 becomes a cluster node, let's say named soa_server1
server #3 becomes a cluster node, let's say named soa_server2
So I started installing and configuring (the applications run under Windows Server 2003):
    1.*soa_proxy*
    - Installing WebLogic server
    - Installing SOA
    - Create Repository with RCU (oracle 10g database)
    2.*soa_server1*
    - Installing WebLogic server
    - Installing SOA
    3.*soa_server2*
    - Installing WebLogic server
    - Installing SOA
After the installation finished, I created a domain on the soa_proxy server, let's say named 'MyDomain', assigned soa_server1 and soa_server2 to a cluster named 'MyCluster', and assigned the soa_proxy server as an HTTP Proxy Server. I followed the instructions in this link: [http://download.oracle.com/docs/cd/E12839_01/doc.1111/e13925/config_screens.htm#CJAEABGD]
The whole installation seemed fine, but when I tried to deploy my composite application to the proxy server 'soa_proxy' with JDev, I got an error like this:
Image: !http://img9.imageshack.us/img9/2122/error2z.jpg!
And when I go to http://hostname:7001/em,
I see my composite deployed only on 'soa_server2'.
UPDATE: The deployment error is now gone, but we still get a slightly odd result. When I deploy with JDev it seems fine, until I go to Enterprise Manager: yes, still, my composite is deployed only on 'soa_server1'. I found a reference saying we can copy the domain directory from soa_proxy and paste it onto the server that has not been deployed to yet. That seems to work.
But I've been thinking: what if I had 10 cluster nodes? 50 cluster nodes? Should I just copy the directory like that? Is there another, more charmed way to do it?
Can somebody help me with this simple case?
    regards
    Wildan
    =======
Here is my last attempt at deploying the cluster environment, and I got the same issues; our team here thinks the issue is caused by an Oracle bug in the code behind it. As a reminder, here is what I did in the last experiment:
1. We deployed/configured the domain (admin console, cluster nodes), then started all the services smoothly with no issues, and as we saw in the console monitor, the cluster was synchronizing.
2. Then we created a simple composite.
3. We tried to deploy the composite to the cluster domain, and what we got was the composite deployed on only one cluster node.
Let me describe it to make it clear:
* First trial: when we deploy the composite, here is what we get in the EM console
    *[-] SoaInfra (SoaServer1)*
    [+] CompositeLabTest
    *[-] SoaInfra (SoaServer2)*
In JDev we also saw a message along the lines of (the project CompositeLabTest Deploy Process is skipped).
* Second trial: we shut SoaServer2 down, then restarted it; here is what we got in the EM console
    *[-] SoaInfra (SoaServer1)*
    [+] CompositeLabTest
    *[-] SoaInfra (SoaServer2)*
    [+] CompositeLabTest
So to this day my SOA Suite 11g clustering experience has never gone well. Have any of you had experience with clustering? How can this be done? Please respond...
    thanks
    regards
    Wildan Abdat

    Hi there.
I too have a similar problem with cluster deployment.
I have two different domains on two different physical machines on the same LAN.
These domains are identical; the sole difference is the IP configuration:
    domain 1
    machine 172.0.0.1
    soaadmin 172.0.0.1
    soa1 172.0.0.2
soa2 172.0.0.3
    domain 2
    machine 172.0.1.1
    soaadmin 172.0.1.1
    soa1 172.0.1.2
soa2 172.0.1.3
When I try to deploy a new composite from the EM, it fails (it remains waiting for a response, like the following):
    <Jun 18, 2010 10:42:00 AM CEST> <Warning> <org.apache.myfaces.trinidadinternal.context.RequestContextImpl> <BEA-000000> <Could not find partial trigger idArchiveFileBrowserDialog from RichInputText[UIXEditableFacesBeanImpl, id=idArchiveLoc]>
    Processing sar=/tmp/dir2448127768139530528tmp/sca_SOAComposite1_rev1.0.jar
    Adding sar file - /tmp/dir2448127768139530528tmp/sca_SOAComposite1_rev1.0.jar
    Creating HTTP connection to host:172.0.0.2, port:10002
    What I checked:
In multicast configuration, soa1 (domain1) sent the request to all SOA servers on the LAN (even the ones that aren't configured in its domain).
In unicast configuration, soa1 (domain1) sent the request only to the soa2 (domain1) server, but it kept waiting for a response the whole time.
What is strange is that if I stop domain2, the request completes and my composite is deployed on both servers.
    Any idea about this strange behaviour?
    N.B.
    I'm not using Coherence.

  • Real Time Application Clusters in Oracle 9i

I want to know what exactly Real Application Clusters is in Oracle9i.

    Hello again;
Real-time apply is a new feature introduced in Oracle Database 10g.
Without real-time apply, log apply services wait for the full archived redo log to arrive before applying it to the standby database.
If a standby redo log is used, real-time apply can be enabled, allowing Data Guard to recover redo data from the current standby redo log as it is being filled by the RFS process.
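For reference, real-time apply is switched on when starting managed recovery on the standby (standard 10g syntax):
-- On the physical standby: start managed recovery with real-time
-- apply, reading the current standby redo log as RFS fills it.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;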
    Best Regards
    mseberg

  • Clustering in Oracle 11g/10g

    Hi
We have 40,000 records in our database (Oracle 11g/10g); each record holds 2 to 3 images, so each record is several MB in size.
Do we need to perform clustering to get better performance?
    Thanks
    Shoba Anandhan

    Thanks for the immediate replies.
We have a Windows 2003 Server, on which we have Microsoft Visual Studio 2008 and an Oracle 11g database.
Our web application will be developed using Microsoft Visual Studio 2008 with Oracle as the backend.
We are assuming 100 users at a time working with the web application. We connect to the database every time we display data on a web page.
Full performance is expected.
The Windows 2003 Server has 4 GB of RAM. Is 4 GB enough to process these database requests, or do we need more?
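One way to judge that empirically once the system is under load is the buffer cache advisory (a sketch; requires SELECT privilege on the v$ views):
-- Estimate how physical reads would change at different cache sizes.
SELECT size_for_estimate AS cache_mb, estd_physical_reads
FROM v$db_cache_advice
WHERE name = 'DEFAULT'
ORDER BY size_for_estimate;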

  • Problem with Clustering in Oracle AS10.1.3.1.0

I have Oracle Application Server 10.1.3.1 installed with a multicast IP, along with a home instance which is an administrative instance. I installed another instance on another system with only the J2EE Server installation, with its home not an administrative instance, which is basically an active-active topology.
I created 2 home instances on the 2 application servers within the same group and deployed my application on both instances in peer-to-peer dynamic discovery mode. The application runs fine when I access it via the localhost address, but when I use the internal IP or the public IP of the system, the session fails. I suspect the request is being transferred between the 2 instances when I specify the IP, so I set the system property -Doracle.j2ee.rmi.loadBalance=client, which makes a request stick to a single instance until its life ends. But this also throws the same error. The error is:
    07/08/26 12:47:08 java.io.StreamCorruptedException
    07/08/26 12:47:08      at java.io.ObjectInputStream.readTypeString(ObjectInputStream.java:1372)
    07/08/26 12:47:08      at java.io.ObjectStreamClass.readNonProxy(ObjectStreamClass.java:607)
    07/08/26 12:47:08      at java.io.ObjectInputStream.readClassDescriptor(ObjectInputStream.java:778)
    07/08/26 12:47:08      at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1528)
    07/08/26 12:47:08      at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1460)
    07/08/26 12:47:08      at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1693)
    07/08/26 12:47:08      at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1299)
    07/08/26 12:47:08      at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1912)
    07/08/26 12:47:08      at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1836)
    07/08/26 12:47:08      at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1713)
    07/08/26 12:47:08      at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1299)
    07/08/26 12:47:08      at java.io.ObjectInputStream.readObject(ObjectInputStream.java:339)
    07/08/26 12:47:08      at com.evermind.server.ejb.EJBInputStream.readSession(EJBInputStream.java:263)
    07/08/26 12:47:08      at com.evermind.server.ejb.EJBInputStream.readSession(EJBInputStream.java:185)
    07/08/26 12:47:08      at com.oracle.bricks.j2ee.EJBLiveSession.unmarshall(EJBLiveSession.java:228)
    07/08/26 12:47:08      at com.oracle.bricks.j2ee.EJBSessionFacade.unmarshall(EJBSessionFacade.java:95)
    07/08/26 12:47:08      at com.evermind.server.ejb.JGroupEJBService.requestSession(JGroupEJBService.java:193)
    07/08/26 12:47:08      at com.evermind.server.ejb.StatefulSessionEJBHome.getSession(StatefulSessionEJBHome.java:670)
    07/08/26 12:47:08      at com.evermind.server.ejb.StatefulSessionEJBHome.getSession(StatefulSessionEJBHome.java:653)
    07/08/26 12:47:08      at com.evermind.server.ejb.StatefulSessionEJBHome.getSession(StatefulSessionEJBHome.java:648)
    07/08/26 12:47:08      at com.evermind.server.ejb.StatefulSessionEJBHome.getRemoteSession(StatefulSessionEJBHome.java:686)
    07/08/26 12:47:08      at com.evermind.server.ejb.StatefulSessionHandle.getEJBObject(StatefulSessionHandle.java:40)
    07/08/26 12:47:08      at ezc.client.EzcUtilManager.<init>(EzcUtilManager.java:30)
    07/08/26 12:47:08      at ezcommerce.ezLoginUser._jspService(_ezLoginUser.java:259)
    07/08/26 12:47:08      at com.orionserver.http.OrionHttpJspPage.service(OrionHttpJspPage.java:59)
    07/08/26 12:47:08      at oracle.jsp.runtimev2.JspPageTable.service(JspPageTable.java:453)
    07/08/26 12:47:08      at oracle.jsp.runtimev2.JspServlet.internalService(JspServlet.java:591)
    07/08/26 12:47:08      at oracle.jsp.runtimev2.JspServlet.service(JspServlet.java:515)
    07/08/26 12:47:08      at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
    07/08/26 12:47:08      at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:711)
    07/08/26 12:47:08      at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:368)
    07/08/26 12:47:08      at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:866)
    07/08/26 12:47:08      at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:448)
    07/08/26 12:47:08      at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:302)
    07/08/26 12:47:08      at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:190)
    07/08/26 12:47:08      at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
    07/08/26 12:47:08      at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
    07/08/26 12:47:08      at java.lang.Thread.run(Thread.java:595)
Could someone please kindly help me in this regard...
    Thanks in advance
    Satya

    Vander,
Please open a support request if you can. It will allow us to track this important issue better and provide us a channel in case we need to see your JDev files.
    Doug

  • Clustering of Oracle 10gAS (Forms and Reports Standalone edition)

    Hi,
Does anyone have experience clustering Forms/Reports in a 10g environment with the Forms and Reports standalone edition?
Clustering at the application server level?
Clustering at the hardware level (load balancing / SAN for code storage)?
Clustering at the OS level (MS Clustering)?
I am interested to know about others' experiences/thoughts.
    John

Since the 10g version (9.0.4) you can only cluster the infrastructure (Cold Failover Cluster and Active Failover Cluster); HA for Forms and Reports is achieved only by frontending with a load balancer, and I'd suggest not using Web Cache as this frontend because Forms runs into a lot of problems with it.
    Regards.

  • Clustering using Oracle ADF - CHECK_STATE_SERIALIZATION=all

We are using Oracle ADF 11.1.1.6. We have started testing our Oracle ADF application for cluster compatibility using the following document. We have implemented all recommendations given in the document, including making all applicable Java classes serializable, and set up adf-config.xml with <adfc:adf-scope-ha-support>true</adfc:adf-scope-ha-support> and <adf:adf-scope-ha-support>true</adf:adf-scope-ha-support>, exactly as given in the document.
    http://docs.oracle.com/cd/E12839_01/core.1111/e10106/adf.htm
Now, to check high availability on a standalone WebLogic server for debugging purposes, we configured the Java option "-Dorg.apache.myfaces.trinidad.CHECK_STATE_SERIALIZATION=all" as per the recommendation.
After setting this flag, the system started throwing a bunch of errors similar to the one given below.
    [2012-07-26T14:17:43.602-04:00] [OPERA_ADF1] [ERROR] [] [org.apache.myfaces.trinidadinternal.config.CheckSerializationConfigurator$MutatedBeanChecker] [tid: [ACTIVE].ExecuteThread: '17' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: edozer] [ecid: 64915ee7c1860d85:40a983d1:138bf4550b2:-7ffd-0000000000005fcb,0] [APP: OperaApplications_o90_smoke#V9.0.0] Failover error: Serialization of Session attribute:data has changed from oracle.adf.model.servlet.HttpBindingContext@365f8709 to oracle.adf.model.servlet.HttpBindingContext@36b6565b without the attribute being dirtied
The question is: what are we missing here?
Please note that our application is still missing ControllerContext.getInstance().markScopeDirty(pageFlowScope); but when I created a sample application and deliberately left out the markScopeDirty call, using the code below, the above-mentioned error was not reproducible.
public class myBackingBean
{
    public void changeValueButton( ActionEvent ae )
    {
        Map<String, Object> pageFlowScope = AdfFacesContext.getCurrentInstance().getPageFlowScope();
        pageFlowTestBean obj = ( pageFlowTestBean )pageFlowScope.get( "pageFlowTestBean" );
        System.out.println( "Old Value " + obj.getYVar() );
        obj.setYVar( "New Value" );
        /* ControllerContext.getInstance().markScopeDirty(pageFlowScope); */
    }
}
Can anyone shed more light on it?
Also, can someone guide me on the best practice for calling ControllerContext.getInstance().markScopeDirty(pageFlowScope)? Developers can easily miss this statement, and I haven't found any easy way to search for code that is missing it (I have already tried JDeveloper's Audit feature and it works for very few cases).

We checked the ADF source (pasted below). It reads as though markScopeDirty is skipped if ADF_SCOPE_HA_SUPPORT is not configured, but when we debug our application, serialization fires after every pageFlowScope/viewScope variable change regardless of whether markScopeDirty() is called. That suggests the markScopeDirty() call is not needed. Do we really need markScopeDirty() everywhere in our application, or is this just a public helper method to manually trigger replication for some exceptional case? Can anyone from team Oracle explain this behavior?
@Override
public void markScopeDirty(Map<String, Object> scopeMap)
{
    // checks for adf-scope-ha-support
    Boolean handleDirtyScopes =
        (Boolean) ControllerConfig.getProperty(ControllerProperty.ADF_SCOPE_HA_SUPPORT);
    if (handleDirtyScopes.booleanValue())
    {
        if (!(scopeMap instanceof SessionBasedScopeMap))
        {
            String msg = AdfcExceptionsArb.get(AdfcExceptionsArb.NOT_AN_ADF_SCOPE);
            throw new IllegalArgumentException(msg);
        }
        ((SessionBasedScopeMap) scopeMap).markDirty();
    }
}

  • Clustering on Oracle AS Portal categories

    Does anyone know how the category information is formatted when passed to the crawler? When clustering you can specify the delimiters for tokenization and hierarchy, but I am not sure what these are. I am having problems getting the clustering to mirror the portal category hierarchy, but I don't know if I am configuring it correctly without knowing how the metadata is passed to SES.
    Thanks,
    Jennifer

Just in case anyone else has this same question - this functionality is not currently available in SES; it will only index the final subcategory from Portal. We have submitted a request to add this as an enhancement.

  • Sun Clustering for Oracle database


    Ambrose,
    Big question! The requirements are somewhat simplistic,
    1 Admin workstation (SS20 +, we use Ultra5),
    1 Terminal Concentrator - Special from Sun,
    2 Ultra 2 or better servers. We have 2 E450's,
    1+ Public network interface (hme0, hme1, etc, etc),
    1 Redundant private cluster interface (hme2, hme3),
    1 Shared mirrored disk storage. (we use A1000/D1000),
    2 Veritas File system licenses,
    2 Veritas Volume Manager Licenses,
Solaris 2.6 only (unless you use SDS, in which case you don't need Veritas),
    SUN Cluster 2.2 CD's,
    If you use the A1000's for storage you will need the PCI/SBUS dual channel UltraScsi cards with the Raid Manager Software.
    Now for the bad news.
And of course don't forget to pay Sun Micro $$$ for the platinum maintenance on the cluster. Not to mention $$$$ upfront to have them install the cluster! Even if you go to the ES-330 class and know just as much about the environment as the guy who comes out to install it, you're not supported if you do it yourself. Thanks SUN!
$20,000+ for the install and $2,995 for the class!
$20K+ for the platinum maintenance, for which they never come out for their monthly "account review".
    Hope this helps. Email me at [email protected] for a more thorough discussion.
    Heath
