Clustering - is it worth it?

For about one week a month, during close, we kill our HFM server. We currently have a single instance of HFM on one server, plus another server that isn't doing much, and I have been reading up on HFM clustering. We tried it back on HFM 3 but haven't revisited it since...
So... on HFM 9.2.0.3, is clustering worth the effort of setting up?

Hi,
What I mean is that I guess you must have a core of HFM application administrators, power users if you like, who do the biggest consolidations. You can set up your second HFM application server so that only these power users know about it. Normal users will do what they have always done, but the power users will run their big consolidations on the second server; that way their big consolidations will not hamper the other users too much.
Yes, the 300-second sync delay can be a problem; 5 minutes is a long time to wait. The rule I have always followed is never to go below 60 seconds. If you go too low on busy servers, the cubes will start getting flushed so often that you get no caching benefit, and the checking for changes carries a CPU overhead. However, if you are only running 2 application servers it's not really a problem. If you are running a lot more application servers then this setting may well start to become critical and cause problems.
The mechanics used for multi-server operation and clustering have not changed that much since 3.5. I'm not sure if the function already existed in 3.5, but in later versions HFM has a UseStickyServer setting which is enabled by default. That means a user logging on to a server cluster and running reports and comparing them to data forms, for example, or to Excel Smart View retrievals, will always use the same HFM application server as the data source. That way a user will always see the same data when using his ID multiple times. However, this will not hold across multiple users: if they land on different application servers then they can see different data until the cubes get refreshed.

Similar Messages

  • QMASTER hints 4 usual trouble (QM NOT running / CLUSTERED nodes / Networks etc.)

    All, I just posted this with some hints & workarounds for very common issues people have on this forum and keep asking about concerning the use of APPLE QMASTER with FCP, SHAKE, COMPRESSOR and MOTION. I've hit many of them over the last 2 years and see them coming up frequently.
    Perhaps these symptoms are fixed in FCS2 as of MAY 2007 (now). However, if not, here are some rules of thumb that I used for FCP to Compressor via a QMASTER cluster, for example. NO special order, but they might help someone get around the issues with QMASTER V2.3, FCP V5.1.4, compressor.app V2.3.
    I saw the latest QMASTER UI and usage at NAB2007 and it looked a little more solid, with some "EASY SETUP" stuff. I hope it has been reworked underneath... I guess I will know soon if it has.
    For most FCP, COMPRESSOR, SHAKE and MOTION work:
    • provide access from ALL nodes to ALL the source and target objects (files) on their VOLUMES. Simply MOUNT those volumes through the APPLE file system (via NFS) using cmd+K or Finder/Go/Connect to Server, OR use an SSAFS such as XSAN™ where the file systems are all shared over FC, not the network. You will notice the CPUs going very busy for a short while. This is the APPLE FILE SYSTEM task... I guess it's doing "spotlight stuff". This goes away after a few minutes.
    • set the COMPRESSOR preferences for "CLUSTER OPTIONS" to "Never copy source to Cluster". This means that all nodes can access your source and target objects (files) over NFS (as above). Failure to do this means LENGTHY times to COPY material back and forth, in some cases undermining the pleasure gained from initially using clustering (reduced job times).
    • DON'T mix the PHYSICAL or LOGICAL networks in your local cluster. I don't know why, but I could never get this to work. Physical means stick with either ETHERNET or FIREWIRE or your other option (AirPort etc., which will generally be way too slow and useless); logical means keeping all nodes on the SAME subnet. You can do this simply by setting it up in the System Preferences/QMASTER/Advanced tab under "Use Network Interfaces". In my current QUAD I set this to use BUILT-IN ETHERNET 1, and in the MBPs I set this to their BUILT-IN ETHERNET.
    • LOGICAL NETWORKS (subnet): simply HARDCODE an IP address on the ETHERNET (for example) for your cluster nodes and the service controller. For example 3.1.1.x... it will all connect fine.
    • PHYSICAL NETWORKS: as above, (1) DON'T MIX FireWire (IPoFW) and Ethernet (IPoE). (2) If you have more than one extra service node, USE A HUB or SWITCH. I went and bought a 10-port GbE HUB for about HK$400 (€40) and it worked fine. I was NEVER able to get a stable QMASTER system mixing FW and ETHERNET. (3) FWIW, using IP over FW caused me a LOAD of DISK errors and timeouts (I/O errors) on those DISKs that were FW400 (all gone now), which showed this was not stable overall.
    • for the cluster controller node, MAKE SURE you set the CLUSTER STORAGE (System Preferences/QMASTER/Shared Cluster Storage) so that the CLUSTER CONTROLLER NODE's storage IS ON A SHARED volume (see above). This seems essential for SHAKE to work (if not, check the Qmaster errors in console.app [see below]). IF you have an SSAFS like XSAN™ then just add this cluster storage on a shared file path. Note that QMASTER does not permit the cluster storage to be on a NETWORK NODE for some reason. So in short, just MOUNT the volume where the SHARED CLUSTER file is maintained for the CLUSTER controller.
    • FCP - avoid EXPORT to COMPRESSOR from the TIMELINE - it never seems to work properly (see later). Instead, EXPORT FROM the SEQUENCE in the BROWSER for consistent results.
    • FCP - "media missing" messages on EXPORT to COMPRESSOR: this seems to be a defect in FCP 5.1 when you EXPORT using a sequence that is NOT in the "root" or primary tree in the FCP PROJECT BROWSER. Simply, if you have browser/bin A contains (Bin B (contains Bin C (contains sequence X))), this will FAIL (won't work) for "EXPORT TO COMPRESSOR" if you use EXPORT to COMPRESSOR in an FCP browser PANE that is separately OPEN. To get around this, simply OPEN/EXPOSE the triangles/trees in the BROWSER PANE for the PROJECT and select the SEQUENCE you want and "EXPORT to COMPRESSOR" from there. This has been documented in a few places in this forum I think.
    • FCP -> COMPRESSOR -> .M2V (for DVDSP3): some things here. EXPORTING from an FCP SEQUENCE with CHAPTER MARKERS to an MPEG2 .M2V encoding USING A CLUSTER causes errors in the placement of the chapter markers when it is imported to DVDSP3. In fact, CONSISTENTLY, ALL the chapter markers are PLACED AT THE END of the TRACK in DVDSP3 - somewhat useless. This seems to happen ALSO when the source is an FCP reference movie, although inconsistently. A simple workaround, if you have the machines, is to TURN OFF SEGMENTING in the COMPRESSOR ENCODER inspector and let each .M2V transcode run on the same service node. For the jobs at hand just set up a CLUSTER and controller for each machine and then SELECT the cluster (myclusterA, hisclusterb, herclusterc) for each transcode job... anyway, for me, in the time spent resolving all this I could have TRANSCODED all of it on my QUAD and it would all have been done sooner! (LOL)
    • CONSOLE logs: IF QMASTER fails, I would suggest your first port of call for diagnosis should be /Library/Logs/Qmaster. In there you will see (on the controller node) compressor.log, jobcontroller.com.apple.qmaster.cluster.admin.log, and lots of others, including service controller.com.apple.qmaster.executorX.log (for each cpu/core and node) and qmasterca.log. All these are worth a look, and for me they helped solve 90% of my qmaster errors and failures.
    • MOTION 3 - FWIW... EXPORT USING COMPRESSOR to a CLUSTER seems to fail EVERY TIME... it seems MOTION is writing stuff out to /var/spool/qmaster.
    TROUBLESHOOTING QMASTER: IF QMASTER seems buggered up (hosed), then follow these steps PRIOR to restarting your machines.
    First, go read the TROUBLESHOOTING sections in the published APPLE docs for COMPRESSOR, SHAKE and "SET UP FOR DISTRIBUTED PROCESSING", and search these forums CAREFULLY... the answer is usually there somewhere.
    ELSE, try these steps...
    You'll know that QMASTER is in trouble when you
    • see that the QMASTER ICON at the top of the screen says "NO SERVICES" even though that node is started, and
    • the APPLE QMASTER ADMINISTRATOR is VERY SLOW after an "APPLY" (like minutes with a SPINNING BEACHBALL), or it WON'T LET YOU DELETE a cluster, or you see 'undefined' nodes in your cluster (meaning that one was shut down or had a network failure)... all this means it's going to get worse and worse. SO DON'T submit any more work to QMASTER... best to count your gains and follow this list next.
    (a) in COMPRESSOR.app, RESET BACKGROUND PROCESSES (it's under the COMPRESSOR name list box) and see if things get kick-started, but you will lose all the work that has been done up to that point in COMPRESSOR.app.
    (b) if that's no good, then on EACH node in that cluster, STOP QMASTER (System Preferences/QMASTER/Setup; set 0 minutes in the prompt and OK). Then, when STOPPED, RESET the shared services by OPTION+CLICKING the "START" button to reveal "RESET SERVICES". Then click "START" on each node to start the services. This has the action of REMOVING, or in the case where the CLUSTER CONTROLLER node is "RESET", of terminating the cluster that's under its control. If so, simply go to APPLE QMASTER ADMINISTRATOR and REDEFINE it. Then restart your cluster.
    (c) if step (b) is no help, consult the QMASTER logs in /Library/Logs/Qmaster (using console.app) for any FILE MISSING or FILE NOT FOUND or FILE ERROR entries. Look carefully for the NODENAME (the machine_name.local) where the error may have occurred. Sometimes it's very chatty; other times it is not. Also look in the BATCH MONITOR OUTPUT for error messages. Often these are NEVER written (or I can't find them) in /var/log... try to resolve any issues you can see (mostly VOLUME or FILE path issues in my experience).
    (d) if still no joy, then try removing all the 'dead' cluster files from /var/tmp/qmaster, /var/spool/qmaster and also the directory that you specified above for the controller to share the clustering. For SHAKE issues, do the same (note also where the SHAKE shared cluster file path is - it can also be specified in the RENDER FILEOUT node's prompt).
    (e) if all this WON'T help you, it's time to get the BIG hammer out. Simply STOP all nodes if not stopped (if status/mode is "STOPPING" then it [QMASTER] is truly buggered), DISMOUNT the network volumes you had mounted, and RESTART ALL YOUR NODES. This has the effect of RESTARTING all the QMASTERD tasks. Yes, sure, you can go in and SUDO restart them, but it is dodgy at best because they never seem to terminate cleanly (kill -9 etc.), or FORCE QUIT is what one ends up doing and then you STILL have to restart.
    (f) after the restart, perform the steps from (b) again and it will usually (but not always) be right after that.
    Lastly - here are some posts I have made that may help others with QMASTER 2.3... and not the NEW QMASTER as of May 2007...
    Topic "qmasterd not running" - how this happened and what we did to fix it. - http://discussions.apple.com/message.jspa?messageID=4168064#4168064
    Topic: IP over Firewire AND Ethernet connected cluster? http://discussions.apple.com/message.jspa?messageID=4171772#4171772
    Also, spend some DEDICATED time using OBJECTIVE keywords to search the FINAL CUT PRO, SHAKE, COMPRESSOR, MOTION and QMASTER forums.
    Hope that helps.
    G5 QUAD 8GB ram w/3.5TB + 2 x 15in MBPCore   Mac OS X (10.4.9)   FCS1, SHAKE 4.1

    Warwick,
    Thanks for joining the forum and for doing all this work and posting your results for our benefit.
    As FCP2 arrives in our shop, we will try once again to make sense of it and to see if we can boost our efficiencies in rendering big projects and getting Compressor to embrace five or six idle Macs.
    Nonetheless, I am still in "Major Disbelief Mode" that Apple has done so little to make this software actually useful.
    bogiesan

  • Failover is not working in clustering

    We installed the infrastructure on one system and added 2 instances, app1.mycompany.com and app2.mycompany.com, to it.
    For load balancing we are using Web Cache.
    We configured origin servers, site definitions and site-to-server mappings.
    In the cluster, the two instances show up.
    We can see that in the health monitor, in the Up/Down parameter of the Web Cache administrator console.
    We deployed the same EAR to both instances.
    But when I bring down one instance, say app1.mycompany.com,
    the health monitor does not show the DOWN state for host app1.mycompany.com; the same goes for UP.
    It does not show the changes immediately when I am testing failover.
    Is Web Cache load balancing round-robin based?
    When I bring down one of the instances, session replication does not happen properly; sometimes "session expired" appears.
    When 2 instances are up and a user accesses the application, all the requests go to one instance; if that instance goes down, "session expired" appears.
    I think failover is not working in the cluster.
    I checked the replication properties and added the <distributable> tag in both instances.
    In the Web Cache console page, what does session binding do? I have not configured anything for it.

    Why are you using Webcache?
    Web Cache will certainly work, but its more common role is to act as a simple load balancer in front of HTTP servers, not OC4J instances.
    What I'd do is simplify your situation to verify you have the servers set up correctly.
    That means using the Oracle HTTP Server, which will be part of your cluster, as the common routing point. OHS and mod_oc4j are session-state aware and know about all the OC4J instances. In the situation where an OC4J instance dies for some reason, mod_oc4j will know to which other OC4J instance(s) the request can be routed to pick up the replicated session state.
    Once you have verified that failover is working on the backend, you can then configure another OHS instance and position Web Cache in front of them to act as a request router and failover handler for when the OHS instances are inactive.
    The Enterprise Deployment Guide offers some guidance in typical architectures, well worth a read.
    cheers
    -steve-

  • How to Install 9i real application clusters on a PC!

    How to install and deploy Oracle Real Application Clusters on a single Linux
    server with a minimal configuration (less than $1000 worth of hardware):
    First of all, to install an Oracle cluster database you DON'T HAVE to have a cluster; a
    single PC may do just as well (of course this kind of installation will not be of
    much use for a production DB). A minimal server that I HAVE TESTED is: Celeron 750 MHz,
    512 MB of RAM, 2 IDE HDs, Linux SuSE 7.2.
    This document contains the steps needed to deploy a working Oracle 9i database with
    minimal comment. For a complete discussion refer to the Oracle documentation, namely:
    Oracle 9i Administrator's Reference, part number A90347-02
    Oracle 9i Linux Release Notes, part number A90356-01
    Oracle 9i Real Application Clusters Administration, part number A89869-01
    1) Set up the partitions to use for the Oracle software and the ones to use as raw devices
    for the cluster DB. Suppose you want to use an HD mounted as the 3rd IDE device
    (/dev/hdc) for the cluster DB. Then you have to partition it with fdisk with a large
    extended partition (say hdc1). Create a large number of logical partitions inside hdc1
    (say hdc5 through hdc20) of about 300 MB in size (all of the same size for simplicity).
    2) Real Application Clusters wants to store the DB structures on raw devices or a
    cluster filesystem. Create the raw devices using the following commands (as superuser):
    raw /dev/raw1 /dev/hdc5
    raw /dev/raw2 /dev/hdc6 ... and so on up to /dev/hdc20
    You will need to repeat these bindings after every boot.
    3) Set up the Oracle user (already done with the Suse distribution), environment variables
    and mount point.
    Install Oracle software enterprise edition
    4) Complete the installation with a custom install of the real application cluster option.
    This will add a directory called oracm under your oracle home, which contains the
    cluster manager software
    5) edit $ORACLE_HOME/oracm/admin/nmcfg.ora; it contains 3 lines for the set-up of the
    cluster manager software:
    DefinedNodes=localhost
    CmDiskFiles=/dev/raw2
    CmHostName=localhost
    6) edit /var/opt/oracle/srvConfig.loc. It contains 1 line with the location of a raw device
    used to sync the cluster nodes:
    srvconfig_loc=/dev/raw1
    7) start the cluster manager software (as superuser):
    $ORACLE_HOME/oracm/bin/ocmstart.sh
    8) as the oracle user, start the global services daemon:
    gsd
    9) you can now create a cluster DB. To use the configuration assistant you need to set
    an extra environment variable:
    export THREADS_FLAG=native
    10) start the configuration assistant: dbca
    expect some errors in the scripts that dbca generates; it is best to review them before
    execution
    11) after the DB creation you'll be able to start two instances on the same DB,
    which means you will have a cluster DB!
    12) the environment variable ORACLE_SID determines the instance to which you
    connect; instance-specific parameters use a special syntax: SID.parameter=value.
    This is used for example for parameters like instance_number, thread, etc.
    (see the sketch below); also the parameter cluster_database must be set to true.
    All these settings are normally handled by the dbca.
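    As an illustration of that SID.parameter syntax, a minimal init.ora fragment might look like
    the following (a sketch only: the instance names RACDB1/RACDB2 are made up, and dbca normally
    writes these entries for you):
    cluster_database=true
    RACDB1.instance_number=1
    RACDB1.thread=1
    RACDB2.instance_number=2
    RACDB2.thread=2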
    Have fun,
    Luca Canali
    OCP-DBA

    On RedHat 7.1 with the same configuration I run a production 9.0.1
    database: Compaq ProLiant hardware with RA4100 storage.
    The script created by the dbca seems fine, but the dbca gives
    me the
    same error I get from:
    srvconfig -init
    [...] stop all daemons and Oracle programs on both machines, then
    start only $ORACLE_HOME/oracm/bin/ocmstart.sh,
    then run the line
    srvconfig -init
    you can check it by running
    srvconfig -version
    it should output something like "9.0.0.0.0"
    before running dbca, make sure ocmstart.sh and gsd are
    running on both machines ("lsnodes -v" should output dbshadow1
    and dbshadow2).
    BTW - I can't run clusca... Can someone give a sample nmcfg.ora,
    just to check my parameters? For dbshadow1:
    DefinedNodes=dbshadow1 dbshadow2
    CmDiskFiles=/dev/raw/raw2
    CmHostName=dbshadow1
    for dbshadow2:
    DefinedNodes=dbshadow1 dbshadow2
    CmDiskFiles=/dev/raw/raw2
    CmHostName=dbshadow2
    Saludos
    --Marcos.
    PS: can you contact me by e-mail? I have some questions regarding
    your HW.

  • Simon Greener's Morton Key Clustering in Oracle Spatial

    Hi folks,
    Apologies for the rambling.  With mattyschell heading for greener open source big apple pastures I am looking for new folks to bounce ideas and code off.  I was thinking this week about the discussion last autumn over spatial clustering.
    https://community.oracle.com/thread/3617887
    During the course of the thread we all kind of pooh-poohed spatial clustering as not much of a solution, myself being one of the primary poohers.  Yet the concept certainly remains as something to consider regardless of our opinions.  The yellow book, the Greener/Ravada book, Simon's recent treatise (http://download.oracle.com/otndocs/products/spatial/pdf/biwa_2015/biwa2015_uc_comparativeperformance_greener.pdf), they all put forward clustering such that at the very least we should consider it a technique we should be able as professionals to do - a tool in the toolbox whether or not it always is the right answer.  I am mildly (very mildly) curious to see if Kothuri, Godfrind and Beinat will recycle their section on spatial clustering with the locked-down MD.HHENCODE into their 12c revision out this summer.  If they don't then what is the replacement for this technique?  If they do then we return to all of our griping about this ancient routine that Simon implies may date back to the CHS and their hhcode indexes - at least it's not written in Java!
    Anyhow, so I've been in the midst this month of refreshing some of the datasets I manage and considering clustering the larger tables whilst I am at it.  Do I really expect to see huge performance gains?   Well... not really.  But it does seem like something that should be easy to accomplish, certainly something that "doesn't hurt" and shows that I am on top of things (e.g. "checks the box").  But returning to the discussion from last fall, just what is the best way to do this in Oracle Spatial?
    So if we agree to ignore poor old MD.HHENCODE, then what?  Hilbert curves look nifty but no one seems to be stepping up with the code for them.  And this reroutes us back around to Simon and his Morton key code.
    http://www.spatialdbadvisor.com/oracle_spatial_tips_tricks/138/spatial-sorting-of-data-via-morton-key
    So who all is using Simon's code currently?  If you read that discussion from last fall there does not seem to be anyone doing so and we never heard back from Cat Person on either what he decided to do or what his name is.
    I thought I could take a stab at streamlining Simon's process somewhat to make things easier for myself to roll this onto many tables.  I put together the following small package
    https://github.com/pauldzy/DZ_SDO_CLUSTER/tree/master/Packages
    In particular I wanted to bundle up the side issues of how to convert your lines and polygons into points, automate things somewhat and provide a little verification function to see what the results look like.  So again, nothing that Simon does not already walk through on his webpage, just making it a bit easier to bang out on your tables without writing a separate long SQL process for each one.
    So for example to use Simon's Morton key logic, you need to know the extent envelope of the data (in order to define a proper grid).  So if it's a large table, you'd want to stash the envelope info in the metadata.  You can do this with the update_metadata_envelope procedure or just suffer through the sdo_aggr_mbr each time if you don't want to go that route (I have one table of small watershed polygons that takes about 9 hours to run sdo_aggr_mbr upon).  So just run things at the SQL prompt:
    SELECT
    DZ_SDO_CLUSTER.MORTON_UPDATE(
        p_table_name => 'CATCHMENT_NP21'
       ,p_column_name => 'SHAPE'
       ,p_grid_size => 1000
    )
    FROM dual;
    This will return the update clause populated with the values to use with the morton_key wrapper function, e.g. "morton_key(SHAPE,160.247133275879,-17.673722530871,.0956820001136141,.0352063207508021)".  So then just paste that into an update statement
    UPDATE foo
    SET my_morton_key = dz_sdo_cluster.morton_key(
        SHAPE
       ,160.247133275879
       ,-17.673722530871
       ,.0956820001136141
       ,.0352063207508021
    );
    Then rebuild your table sorting on the morton_key.  I just use the TOAD rebuild table tool and manually add the order by clause to the rebuild script.  I let TOAD do all the work of moving the indexes, constraints and grants to the new table.  I imagine there are other ways to do this.
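    For what it's worth, a bare-bones SQL alternative to the TOAD rebuild, sketched with a
    hypothetical table FOO (the table and key column names are assumptions):
    CREATE TABLE foo_sorted AS
    SELECT *
      FROM foo
     ORDER BY my_morton_key;
    -- after recreating indexes, constraints and grants on FOO_SORTED, swap the tables
    DROP TABLE foo;
    RENAME foo_sorted TO foo;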
    The final function is meant to be popped into Oracle mapviewer or something similar to show your family and friends the results.
    SELECT
    dz_sdo_cluster.morton_visualize(
        'NHDPLUS'
       ,'NHDFLOWLINE_NP21_ACU'
       ,'SHAPE'
       ,'OBJECTID'
       ,'100'
       ,10000
       ,'MORTON_KEY'
    )
    FROM dual;
    Look Mom, there it is!
    So anyhow, this is a first stab at things and I am interested in feedback or suggestions for improvement.  Did I get the logic correct?  Don't spare my feelings if I botched something.  Note that like Simon I passed on the matter of just how to determine the proper grid size.  I've been using 1000 for the continental US + Hawaii/PR/VI and sitting here this morning I think that probably is too large.  Of course it depends on the size of the geometries and thus the density of the resulting points.  With water features this can vary a lot from place to place, so perhaps 1000 is okay.  What would the algorithm be to determine a decent grid size?  It occurs to me I could tell you the average feature count per morton key value, okay, well it's about 10.  That seems small to me.  So I could see another function in this package that returns some kind of summary on the results of the keying to tell you if your grid size estimate was reasonable.
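    As a rough check on that grid size question, something like the following (a sketch, reusing the
    hypothetical FOO table and MY_MORTON_KEY column from the update above) returns the average
    feature count per morton key value:
    SELECT AVG(cnt) AS avg_features_per_key
      FROM (SELECT my_morton_key, COUNT(*) AS cnt
              FROM foo
             GROUP BY my_morton_key);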
    Cheers and Happy Saturday,
    Paul

    I've done some spatial clustering testing this week.
    Firstly, to reiterate the purpose of spatial clustering as I see it:  spatial clustering can be of benefit in situations where frequent window based spatial queries are made.  In particular it can be very useful in web mapping scenarios where a map server is requesting data using SDO_FILTER or SDO_ANYINTERACT and there is a need to return the data as quickly as possible.  If the data required to satisfy the query can be squeezed into as few blocks as possible, then the IO overhead is clearly reduced.
    As Bryan mentioned above, once the data is in the buffer cache, then the advantage of spatial clustering is reduced.  However it is not always possible to get/keep enough of the data in the buffer cache, so I believe spatial clustering still has merits, particularly if it can be implemented alongside spatial partitioning.
    I ran the tests using an 11.2.0.4 database on my laptop.  I have a hard disk rather than SSD, so the effects of excessive IO are exaggerated.  The database is configured with the default 8kb block size.
    Initially, I created a table PARCELS:
    create table parcels (
    id            integer,
    created_date  date,
    x            number,
    y            number,
    val1          varchar2(20),
    val2          varchar2(100),
    val3          varchar2(200),
    geometry      mdsys.sdo_geometry,
    hilbert_key  number);
    I inserted 2.8 million polygons into this table.  The CREATED_DATE is the actual date the polygons were captured.  I populated val1, val2 and val3 with string values to pad the rows out to simulate some business data sitting alongside the sdo_geometry.
    I set X,Y to the first ordinate of the polygon and then set hilbert_key = sdo_pc_pkg.hilbert_xy2d(power(2,31), x, y).
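    For what it's worth, a minimal sketch of those two updates against the PARCELS table above; the
    use of SDO_UTIL.GETVERTICES to pull the first vertex is my assumption about the extraction step,
    not necessarily how it was actually done:
    -- take the first vertex of each polygon as its representative point
    UPDATE parcels p
       SET (x, y) = (SELECT v.x, v.y
                       FROM TABLE(SDO_UTIL.GETVERTICES(p.geometry)) v
                      WHERE v.id = 1);
    -- derive the key from that point, as described above
    UPDATE parcels
       SET hilbert_key = sdo_pc_pkg.hilbert_xy2d(POWER(2,31), x, y);
    COMMIT;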
    I then created 4 tables to base the tests upon:
    PARCELS_RANDOM:  Ordered by dbms_random.random - an absolute worst case scenario.  Unrealistic, but worthwhile as a benchmark.
    PARCELS_BASE_DATE:  Ordered by CREATED_DATE.  This is probably pretty close to how the original source data is structured on disk.
    PARCELS_RTREE:  Ordered by RTree.  Achieved by inserting based on an SDO_FILTER query
    PARCELS_HILBERT:  Ordered by the hilbert_key attribute
    As a first test, I counted the number of blocks required to satisfy an SDO_FILTER query.  E.g.
    select count(distinct(dbms_rowid.rowid_block_number(rowid)))
    from parcels_rtree
    where sdo_filter(geometry,
                    sdo_geometry(2003, 2157, null, sdo_elem_info_array(1, 1003, 3),
                                    sdo_ordinate_array(644232,773809, 651523,780200))) = 'TRUE';
    I'm assuming dbms_rowid.rowid_block_number(rowid) is suitable for this.
    I ran this on each table and repeated it over three windows.
    Results:
    So straight off we can see that the random ordering gave pretty horrific results as the data required to satisfy the query is spread over a large number of blocks.  The natural date based clustering was far better. RTree and Hilbert based clustering reduced this by a further 50% with Hilbert just nosing out RTree.
    Since web mapping is the use case I am most likely to target, I then setup a test case as follows:
    Setup layers in GeoServer for each of the tables
    Used a script to generate 1,000 random squares over the extent of the data, ranging from 200m to 500m in width and height.
    Used JMeter to make a WMS request for a png of the each of the 1,000 windows.  JMeter was run sequentially with just one thread, so it waited for each request to complete before starting the next.  I ran these tests 3 times to balance out the results, flushing the buffer cache before each run.
    Results:
    Again the random ordering performed woefully badly - somewhat exacerbated by the quality of the disk on my laptop.  The natural date-based clustering performed far better.  RTree and Hilbert based clustering further reduced the time by more than half.
    In summary, the results suggest that spatial clustering is worth the effort if:
    the data is not already reasonably well clustered
    you've got a decent quantity of data
    you're expecting a lot of window based queries which need to be returned as quickly as possible
    you don’t expect to be able to fit all the data in the buffer cache
    When it comes to deciding between RTree and Hilbert (or Morton/z-order or any other space filling curve method).... I found that the RTree method can be a bit slow on large datasets, although this may not matter as a one off task.  Plus it requires a spatial index on the source table to start off with.  The key based methods are based on an xy, so for lines and polygons there is an intermediate step to extract an xy.  I would tend to recommend this approach if you also partition the data based on a subset of the cluster key.
    Scripts are available here: https://github.com/john-otoole/oracle_spatial_cluster_test
    John

  • Clustered Oracle 9i AS and OC4J Problem

    Hi,
    I am working with two Oracle 9i AS (9.0.2.0.1), serverA and serverB, in a clustered environment. I have published a servlet to the cluster under the URL of /MyApp and the main servlet is MyServlet. I have noticed that when I go to either URL:
    http://serverA/MyApp/MyServlet
    or
    http://serverB/MyApp/MyServlet
    my servlet will show up fine, but sometimes when going to serverA it will actually use the servlet running on serverB, and vice versa. I am able to verify this by looking at the log files. At first I wasn't sure why it would do this, but it makes sense that in a clustered environment it would pick either of the servers to load balance the application. Reading some Oracle documentation reinforced that this is the way the AS works.
    Recently I have noticed some problems with the cluster though but can't seem to pin-point the problem. When I go to either URL it will sometimes bring up the servlet web page but other times it will give an error of:
    No Response from Application Web Server
    Upon further investigation I noticed that the page would come up fine any time it tried to use the servlet on serverA but it seems that when it tries to use the servlet on serverB it returns the error. The problem is I am having a hard time verifying that serverB is causing problems or if it is something with the communication between the two servers. As far as I can tell in the Oracle EM it shows both OC4J containers running fine and all the other standard containers are running as well. Some testing that I have done is turning off the OC4J container on serverB. When I do this I stop getting any errors. Checking the opmn log for the containers doesn't give any information on the errors.
    Has anyone had similar problems? Might someone know how to properly test / troubleshoot the problem to see where things are messing up? Anyone have a solution?
    Thanks

    This is the way it works in 903 as well.
    There is a more general forum for 9iAS questions on this site, where other product managers who look after other areas of the product answer questions. It might be worth your while to post your question there and one of the PMs from the management team might be able to shed some light on the issue with respect to a future release.
    -steve-

  • OSB jms clustering - load balancing seems to be not working

    Hi All,
    I have one admin server and two managed servers running in a cluster (one of these managed servers runs on a remote Linux machine).
    I have a connection factory created with load balancing enabled, round robin,
    and server affinity disabled.
    I have a queue created as a uniform distributed queue.
    I have a proxy service with load balancing set to round robin and the endpoint URI below:
    jms://rdoelapp001011:61703,rdoelapp001013:61703/synergyConnectionFactory1/MM_gridQ0
    If I execute this proxy, sending messages, they always go to one server only. No messages go to the other server.
    If I shut down the server that receives the messages, then the other server starts receiving them. It seems like failover is working but not load balancing.
    One point that may be worth mentioning: from the admin console, if I look at the servers for the cluster, it shows the information below.
    Name      State      Drop-out Frequency      Remote Groups Discovered      Local Group Leader      Total Groups      Discovered Group Leaders      Groups      Primary      
    synergyOSBServer1     RUNNING     Never     0     synergyOSBServer1     1     synergyOSBServer1     *{synergyOSBServer1}*     0          
    synergyOSBServer2     RUNNING     Never     0     synergyOSBServer1     1     synergyOSBServer1     *{synergyOSBServer1, synergyOSBServer2}* 0
    One server has its groups as {synergyOSBServer1} instead of {synergyOSBServer1, synergyOSBServer2}. Does that look correct?
    Here is my JMS XML file:
    <?xml version='1.0' encoding='UTF-8'?>
    <weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms" xmlns:sec="http://xmlns.oracle.com/weblogic/security" xmlns:wls="http://xmlns.oracle.com/weblogic/security/wls" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-jms http://xmlns.oracle.com/weblogic/weblogic-jms/1.1/weblogic-jms.xsd">
    *<connection-factory name="synergyConnectionFactory1">*
    *<sub-deployment-name>synergySubDeploy1</sub-deployment-name>*
    *<default-targeting-enabled>false</default-targeting-enabled>*
    *<jndi-name>synergyConnectionFactory1</jndi-name>*
    *<client-params>*
    *<client-id-policy>Restricted</client-id-policy>*
    *<subscription-sharing-policy>Exclusive</subscription-sharing-policy>*
    *<messages-maximum>10</messages-maximum>*
    *</client-params>*
    *<transaction-params>*
    *<xa-connection-factory-enabled>false</xa-connection-factory-enabled>*
    *</transaction-params>*
    *<load-balancing-params>*
    *<load-balancing-enabled>true</load-balancing-enabled>*
    *<server-affinity-enabled>false</server-affinity-enabled>*
    *</load-balancing-params>*
    *<security-params>*
    *<attach-jmsx-user-id>false</attach-jmsx-user-id>*
    *</security-params>*
    *</connection-factory>*
    <uniform-distributed-queue name="errorQ">
    <sub-deployment-name>synergySubDeploy1</sub-deployment-name>
    <default-targeting-enabled>false</default-targeting-enabled>
    <jndi-name>errorQ</jndi-name>
    <load-balancing-policy>Round-Robin</load-balancing-policy>
    <forward-delay>-1</forward-delay>
    <reset-delivery-count-on-forward>true</reset-delivery-count-on-forward>
    </uniform-distributed-queue>
    <uniform-distributed-queue name="undlvQ">
    <sub-deployment-name>synergySubDeploy1</sub-deployment-name>
    <default-targeting-enabled>false</default-targeting-enabled>
    <jndi-name>undlvQ</jndi-name>
    <load-balancing-policy>Round-Robin</load-balancing-policy>
    <forward-delay>-1</forward-delay>
    <reset-delivery-count-on-forward>true</reset-delivery-count-on-forward>
    </uniform-distributed-queue>
    *<uniform-distributed-queue name="MM_gridQ0">*
    *<sub-deployment-name>synergySubDeploy1</sub-deployment-name>*
    *<default-targeting-enabled>false</default-targeting-enabled>*
    *<jndi-name>MM_gridQ0</jndi-name>*
    *<load-balancing-policy>Round-Robin</load-balancing-policy>*
    *<forward-delay>5</forward-delay>*
    *<reset-delivery-count-on-forward>true</reset-delivery-count-on-forward>*
    *</uniform-distributed-queue>*
    <saf-imported-destinations name="synergySAFImportedDest1">
    <sub-deployment-name>synergySubDeploy1</sub-deployment-name>
    <default-targeting-enabled>false</default-targeting-enabled>
    <saf-queue name="gridQ0">
    <remote-jndi-name>MB_gridQ0</remote-jndi-name>
    <local-jndi-name>gridQ0</local-jndi-name>
    <non-persistent-qos>At-Least-Once</non-persistent-qos>
    <time-to-live-default>0</time-to-live-default>
    <use-saf-time-to-live-default>false</use-saf-time-to-live-default>
    <unit-of-order-routing>Hash</unit-of-order-routing>
    </saf-queue>
    <jndi-prefix>MB_</jndi-prefix>
    <saf-remote-context>synergySAFContext1</saf-remote-context>
    <saf-error-handling>synergySAFErrorHndlr1</saf-error-handling>
    <time-to-live-default>0</time-to-live-default>
    <use-saf-time-to-live-default>false</use-saf-time-to-live-default>
    <unit-of-order-routing>Hash</unit-of-order-routing>
    </saf-imported-destinations>
    <saf-remote-context name="synergySAFContext1">
    <saf-login-context>
    <loginURL>t3://rdoelapp001013:7001</loginURL>
    <username>weblogic</username>
    <password-encrypted>{AES}z9VY/K4M7ItAr2Vedvhx+j9htR/HkbY2LRh1ED+Cz5Y=</password-encrypted>
    </saf-login-context>
    <compression-threshold>2147483647</compression-threshold>
    </saf-remote-context>
    <saf-error-handling name="synergySAFErrorHndlr1">
    <policy>Log</policy>
    <log-format xsi:nil="true"></log-format>
    <saf-error-destination xsi:nil="true"></saf-error-destination>
    </saf-error-handling>
    </weblogic-jms>
    Any help will be greatly appreciated
    Edited by: 818591 on Feb 16, 2011 11:28 AM

    I am not getting you here: "the right approach is to make OSB run on the managed server cluster and not on the admin server."
    I have a JMS proxy service that I created from the admin console.
    And I have also gone through step 5 in the link below:
    http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/deploy/config.html#wp1524235
    If I am not wrong, the proxy service endpoint URI determines where it is pointing to. If it is a cluster environment, it should point to the cluster address.
    My proxy has the endpoint URI below:
    jms://rdoelapp001011:61703,rdoelapp001013:61703/synergyConnectionFactory1/MM_gridQ0
    and rdoelapp001011:61703,rdoelapp001013:61703 is my cluster address
    As per your suggestion, "To fix your problem, *make osb to run on the cluster* and specify the same URL for the jms proxy service":
    could you please provide some instructions on how I would "make the OSB JMS proxy service run in a cluster"?
    As a note, I have the queue defined as a distributed queue and the connection factory targeted to the cluster. The UDQ also targets the cluster.
    Just for testing, I created another managed server running locally on the machine where my admin server is running,
    and I created a proxy by following the steps mentioned above, with the endpoint URI below:
    jms://rdoelapp001011:61703,rdoelapp001013:61703,*rdoelapp001011:61700*/synergyConnectionFactory1/MM_gridQ0
    where the new address of my cluster is rdoelapp001011:61703,rdoelapp001013:61703,rdoelapp001011:61700
    It did create consumers in both the managed servers in the cluster that are running locally, but no consumers in the remote managed server.
    So I am leaning towards thinking that there is some incorrect setup for the remote managed server, and maybe the admin server is not able to communicate with the remote server for some reason, but I am not sure about it.
    As a note, the cluster is set up to communicate using a "unicast" channel,
    and I created a channel in each managed server with the same name.
    Here is the cluster configuration:
    <name>synergyCluster1</name>
    <cluster-address>rdoelapp001011:61703,rdoelapp001013:61703,rdoelapp001011:61700</cluster-address>
    <default-load-algorithm>round-robin</default-load-algorithm>
    *<cluster-messaging-mode>unicast</cluster-messaging-mode>*
    *<cluster-broadcast-channel>synergyChannel1</cluster-broadcast-channel>*
    *<number-of-servers-in-cluster-address>3</number-of-servers-in-cluster-address>*
    </cluster>
    Here are the configurations of the two OSB servers:
    <server>
    <name>synergyOSBServer1</name>
    <machine xsi:nil="true"></machine>
    <listen-port>61703</listen-port>
    <cluster>synergyCluster1</cluster>
    <web-server>
    <web-server-log>
    <number-of-files-limited>false</number-of-files-limited>
    </web-server-log>
    </web-server>
    <server-debug>
    <debug-scope>
    <name>weblogic.jms.saf</name>
    <enabled>true</enabled>
    </debug-scope>
    <debug-jmssaf>true</debug-jmssaf>
    <debug-saf-sending-agent>true</debug-saf-sending-agent>
    </server-debug>
    <listen-address>localhost</listen-address>
    <network-access-point>
    *<name>synergyChannel1</name>*
    *<protocol>cluster-broadcast</protocol>*
    *<listen-address>localhost</listen-address>*
    *<listen-port>61702</listen-port>*
    <http-enabled-for-this-protocol>true</http-enabled-for-this-protocol>
    <tunneling-enabled>false</tunneling-enabled>
    *<outbound-enabled>true</outbound-enabled>*
    *<enabled>true</enabled>*
    <two-way-ssl-enabled>false</two-way-ssl-enabled>
    <client-certificate-enforced>false</client-certificate-enforced>
    </network-access-point>
    <jta-migratable-target>
    <user-preferred-server>synergyOSBServer1</user-preferred-server>
    <cluster>synergyCluster1</cluster>
    </jta-migratable-target>
    </server>
    <server>
    <name>synergyOSBServer2</name>
    <ssl>
    <enabled>false</enabled>
    </ssl>
    <machine xsi:nil="true"></machine>
    <listen-port>61703</listen-port>
    <listen-port-enabled>true</listen-port-enabled>
    <cluster>synergyCluster1</cluster>
    <web-server>
    <web-server-log>
    <number-of-files-limited>false</number-of-files-limited>
    </web-server-log>
    </web-server>
    <listen-address>rdoelapp001013</listen-address>
    <network-access-point>
    *<name>synergyChannel1</name>*
    *<protocol>cluster-broadcast</protocol>*
    *<listen-address>rdoelapp001013</listen-address>*
    *<listen-port>61702</listen-port>*
    <http-enabled-for-this-protocol>true</http-enabled-for-this-protocol>
    <tunneling-enabled>false</tunneling-enabled>
    *<outbound-enabled>true</outbound-enabled>*
    *<enabled>true</enabled>*
    <two-way-ssl-enabled>false</two-way-ssl-enabled>
    <client-certificate-enforced>false</client-certificate-enforced>
    </network-access-point>
    <java-compiler>javac</java-compiler>
    <jta-migratable-target>
    <user-preferred-server>synergyOSBServer2</user-preferred-server>
    <cluster>synergyCluster1</cluster>
    </jta-migratable-target>
    <client-cert-proxy-enabled>false</client-cert-proxy-enabled>
    </server>
    <server>
    Edited by: 818591 on Feb 18, 2011 11:26 AM

  • Pros and Cons of Clustering switches

    Can someone tell me the pros and cons of clustering switches? It sounds like it's just for managing multiple switches - or is it more trouble than it's worth?
    TIA

    Hi,
    Well, it has advantages as well as some flaws, IMHO.
    Advantages:
    - Very scalable at low cost. You can add a lot of models onto a stack, ranging from 12-port to 48-port, from 100Base-X uplinks to 10Gig uplinks, with or without PoE, just by buying a new switch. With modular switches, you have to buy the chassis and then add modules to it, which might not come in as diverse a range as standalone switches.
    - Somewhat higher redundancy. Spread the uplinks over different switches in the stack and you'll have nearly no single point of failure (environmental errors such as power not considered), whereas modular switches always have the backplane as a (very rare) single point of failure and, if you're not Rockefeller, the single supervisor engine.
    - You can spread a stack (with 3750s) over some distance, at least more than a modular switch. That gives you enough room to implement some cable-routing facilities in between the switches.
    Disadvantages:
    - surely the need for more power outlets, one for each switch
    - management, as stated in a previous post (well, it's quite similar to modular switches but not exactly; just have a look at SNMP, it's a mess on stacks :-( )
    - heat dissipation (not really checked on that): X power supplies generate more heat than Y, if Y < X...
    Mathias

  • Detect clicked cluster in mouse down event for clusters within multiple stacked clusters

    With the help of Ben (see http://forums.ni.com/t5/LabVIEW/Determine-cluster-element-clicked-in-mouse-down-event/td-p/1245770)
    I could easily find out what sub-cluster had been clicked on (mouse down event) within the main cluster, by using the Label.Text Property.
    However if you have a cluster within a cluster within a cluster then you probably have to use individual mouse down events for each sub-sub cluster.
    I just wanted to use one "Main Cluster":Mouse Down event and from that determine which of the sub-sub clusters had been clicked on - is this even remotely possible?
    Chris.

    Chris Reed wrote:
    With the help of Ben (see http://forums.ni.com/t5/LabVIEW/Determine-cluster-element-clicked-in-mouse-down-event/td-p/1245770)
    I could easily find out what sub-cluster had been clicked on (mouse down event) within the main cluster, by using the Label.Text Property.
    However if you have a cluster within a cluster within a cluster then you probably have to use individual mouse down events for each sub-sub cluster.
    I just wanted to use one "Main Cluster":Mouse Down event and from that determine which of the sub-sub clusters had been clicked on - is this even remotely possible?
    Chris.
    Yes but... you will have to pass through 26 Kudos worth of Nuggets to get there (Well maybe you can skip the last 5 or so).
    This Nugget by Ton teaches us how to use Dynamic Event Registration. (15 Kudos, must read and understand)
    This Nugget by me talks about getting at references inside arbitrary data structures. (11 Kudos, You don't have to read the whole thing, only enough to get at nested objects).
    SO use the stuff I wrote about to gather up the references to the clusters. Build them into an array and then use dynamic event registration and what you learned in that thread you linked in your question.
    So Possible? Yes!
    Easy? YOU tell me.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Telemetry & Zone Clusters

    Does anyone know a good source for configuring cluster telemetry, specifically with zone clusters? I can't find much in the cluster documentation or by searching oracle's website. The sctelemetry man page wasn't very useful. The sun cluster essentials book provides a brief example but not with zone clusters.
    I'm wanting to use this feature to monitor memory/cpu usage in my various zone clusters. In our environment, we will have a few three node clusters with all applications running inside zone clusters with the "active" cluster nodes being staggered across the 3 nodes.
    Lastly, is telemetry really worth the hassle? We are also deploying Ops Center (whose capabilities I don't really know yet). I briefly used an older version of xVM Ops Center at my last gig, but only as a provisioning tool. So with Ops Center and the myriad of dtrace tools available, is telemetry worth messing with?
    Thx for any info,
    Chuck

    That's correct. I checked with the feature's author, and telemetry pre-dates the introduction of zone clusters. So "SC only can do cpu.idle monitoring for a zone itself. Anything below that are not monitored, include RG/RS configured inside zones." is what I got back.
    Tim
    ---

  • JRun multi-svr clustering seems to disconnect on reboot

    I have recently upgraded to MX 7 and have the multi-server
    configuration set up with 2 identical instances on the same
    machine, added to a cluster. These each have 5 websites using IIS
    6.
    After the cluster was made, I ran the web server config tool
    and chose all sites and the cluster name, and all is well.
    Until I have to reboot, in which case the sites display an
    IIS error saying the page cannot be displayed, a 403 error.
    If I open the web server config tool and remove the cluster,
    then re-add it, it works fine - until the next reboot, etc. etc.
    Does anyone know why this would be happening?
    For what it's worth, I am also fine-tuning the Windows NLB
    and need to reboot to test, which is how I found this problem.
    I have never had a clustering question answered here, so
    maybe my luck will change this time...
    Thanks so much

    Hi \HDL and welcome to Apple Discussions!
    Do you know someone with another power adapter you can use to test with? I would try that first, to isolate the problem to one or the other, before you make your purchase.
    If not, I would suggest carrying the MacBook in to an Apple retail store to let them make the diagnosis. Be sure to [make an appointment|www.apple.com/retail] first or you may spend a long time waiting to talk to a Mac Genius.
    Best of luck.

  • Clustering Spatial Data to minimize disk I/O

    Hi,
    I use Oracle Spatial on an Oracle RDBMS 11gR2 EE.
    I have read that with huge spatial tables (a few GB, for instance), it is possible to "cluster" the rows having an SDO_GEOMETRY data type.
    This can minimize I/O by having spatial geometries grouped in contiguous data blocks.
    I have also found that there is a function, MD.HHENCODE, that encodes a two-dimensional point into a raw string, and this string can be used to sort/reorganize the points in a table.
    I have not found out how this function works. Does someone know the principle on which it works?
    How does this apply to polygons or lines, other than by computing a centroid for these geometries?
    I don't know if this is a good approach, but generally speaking, does someone know how to "cluster" spatial data to minimize I/O?
    Thanks in advance for any tips.
    Kind Regards.

    Hi Unnamed Cat Person,
    Sorry I am late responding to your questions.  You should sign your postings with the name of your cat.  I pester posters to provide a name not just to be petulant, but unless we have names to reference it can be difficult to tell if I am responding to you or to Stefan or to Bryan.  It certainly does not have to be your real name, the cat's name would work just fine.
    So you are looking for information not on how to do spatial sorting but rather the "why to do this" and "how does it work" questions surrounding spatial sorting.  This forum is generally not a deep well of such information as it tends to focus on practical implementations.  As a group we often should be more introspective on the reasons why things work the way they do and if there are better ways to do them, but then to some degree that is what our employers pay Oracle to do.  You might get better answers for such questions over on stack exchange (Geographic Information Systems Stack Exchange) or you might not.
    MD.HHENCODE is an undocumented, fully wrapped, by default unexecutable function in Oracle Spatial - so you won't find anything on this from Oracle.  Exactly how it works is "beyond our ken".  The 12c version of Pro Oracle Spatial is due out next year and I personally wonder if they should keep this section in the book.  I should also note it's quite the pain in the backside to use, returning a RAW datatype of all things.  So as the book says, the function "more or less" encodes a point in spatially sorted order.  So it's very difficult for any of us to say if this function works well or sensibly or follows some established formula, etc.  It's a proprietary black box.
    So taking a step back and cribbing from Simon's discussion, you also could just sort on the X and Y values.  This is what PostGIS "order by geometry" does and you could set up something similar in Oracle quite easily.  But I don't really see much value in this approach if you are trying to cluster things together.
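    For what it's worth, a minimal sketch of that X/Y sort, assuming point geometries with SDO_POINT populated and a hypothetical table FOO with geometry column GEOM:
    CREATE TABLE foo_sorted AS
    SELECT f.*
      FROM foo f
     ORDER BY f.geom.sdo_point.x, f.geom.sdo_point.y;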
    Now Simon's article says that sorting on the Morton key is better.  Does he say why Morton keys are better than the above methods?  I guess he does not have a summary section at the top but I feel the results speak for themselves, particularly the illustrations.  Imagine this article without those illustrations and it would be most difficult to visualize.  There are also good illustrations on the wikipedia page
    Z-order curve - Wikipedia, the free encyclopedia
    Note that as none of us can say how HHENCODE functions, none of us can really state definitively that Simon's Morton Curves are better.  Oracle could quietly change the function tomorrow to produce Morton curves (for all I know it does already, as I haven't used the thing in years).  But I think Simon's work is fantastic, so in my opinion his way is better!  Take that for what it's worth.  Show his nifty illustrations to your boss and that will end any discussion of your rationale.
    But wikipedia suggests that an even better way to go is using Hilbert curves.  So hey, does anyone on the list have this implemented in oracle spatial?
    Hilbert curve - Wikipedia, the free encyclopedia
    So there you go, four approaches:
    X Y striping = useless for clustering
    HHENCODE = "more or less" works okay in unknown manner
    Greener Morton Curves = works well, fully explained, coded for you
    Hilbert Curves = might be better, fully explained, you need to code it yourself
    Does anyone on the forum have more entries to add?
    If you are heck-bent on spatially sorting your data, I'd suggest using Simon's code.  You could also do a head-to-head test against HHENCODE and report back to us your findings.  My suspicion though is that, as Bryan and Stefan both allude, spatial sorting won't cure the problem that brought you to the forum.  I am not saying there could not be a modest little boost, but it's dependent on many other factors I mentioned in my first post.  Anyhow, I have appreciated your posting, it led me to think more about a topic I've not paid much attention to for years.  We could spend more time in this forum on such issues.  But most often things here are just about how to do a task or bug chasing.  Perhaps there is a better place for these things.  Maybe if a question leads beyond the "how" and "bug" topics, we should cross-post into stack exchange?  I dunno.
    Cheers,
    Paul

  • Help for RPD Clustering

    Hello Gurus,
    I have a setup as below:
    VM1-------------------SAN Loc 1-------------------VM3
    PS 1-------------------WC----------------------------PS 2
    VM2-------------------SAN Loc 2-------------------VM4
    AS 1-------------------RPD---------------------------AS 2
    Also there is a Apache based load balancer at the top.
    I always expected VM2-AS1 and VM4-AS2 to talk with RPD, as I have enabled clustering and specified SAN Loc 2 as the location for storing RPD.
    I have two questions:
    1. What is the process of replicating across the cluster, i.e. do we have to copy the RPD to all three locations, i.e. VM2 AS1, SAN Loc 2 and VM4 AS2?
    2. How do we ensure the RPD on SAN Loc 2 is being used? When updates are done using a DSN in online mode, which RPD do they go to, and how do they get replicated across all three RPDs?
    Please help
    Edited by: user6718192 on Feb 24, 2010 2:44 PM

    VM1-------------------SAN Loc 1-------------------VM3
    PS 1-------------------WC----------------------------PS 2
    VM2-------------------SAN Loc 2-------------------VM4
    AS 1-------------------RPD---------------------------AS 2
    I'm assuming VM = Server Instance, PS = Presentation Services, AS = BI Server?
    A key wouldn't have gone amiss :)
    I always expected VM2-AS1 and VM4-AS2 to talk with RPD, as I have enabled clustering and specified SAN Loc 2 as the location for storing RPD.
    I have two questions:
    1. What is the process of replicating across the cluster, i.e. do we have to copy RPD at all the three locations, i.e. VM2 AS 1, SAN Loc2 and VM 4AS 2?
    2. How to ensure RPD on SAN Loc 2 is being used, and when updates are done using DSN in online mode, in which RPD they are going, and how they get replicated across all the three RPDs?
    This is a very common misunderstanding when clustering the BI Server.
    The RPD is LOCAL to each BI Server instance. It is NOT SHARED.
    The SAN / shared folder comes into play when you make an online update to the RPD. When this happens you are connected to your Master BI Server, and the change is propagated to the other BI Server(s) in the cluster via the Repository Publishing Directory, which is on shared storage. The other BI Servers in the cluster get the updated RPD, and update their LOCAL COPY.
    Repository Publishing Directory
    This directory is shared by all Oracle BI Servers participating in a cluster. It holds the master copies of repositories edited in online mode. The clustered Oracle BI Servers examine this directory upon startup for any repository changes.
    The Master BI Server must have read and write access to this directory. All other BI Servers must have read access."
    See http://download.oracle.com/docs/cd/E10415_01/doc/bi.1013/b40058.pdf
    It's worth noting that online updates of an RPD are not particularly advised in a production environment. It's better, if you can take the downtime, to take each server offline in turn and manually update their local RPD copy.

  • Clustering over a distance

    Hi Guys,
    Forgive me if this has been addressed before but I could not find anything quickly.
    When I did my cluster training, I was told that you could not satisfactorily run a cluster over a distance due to latency issues in the interconnect. So, if you are using long-wave fibre, you can get about 10 kilometers, and if you are really keen, you can use Dense Wave Division Multiplexing (DWDM) for up to about 400 kilometers.
    I suppose someone has done that and I would like to know if it is as good as they say. I will have lots of data flowing between these sites, and DWDM sounds worth investigating, especially with something like TrueCopy running on an HDS and lots of IP traffic that might be using the same link.
    Comments welcome before I get Sun's sales pitch.
    Regards
    Stephen

    Tim,
    Thanks for the reply. I worked for Sun for some time and dealt with companies that obviously had demands for business continuity where campus clustering in particular was not an acceptable choice. Power grids and water supplies needed an appropriate distance factor to provide a service to meet their demands.
    When I raised this just over two years ago, Sun kept harping on about the ability of DWDM to make long-distance clustering and replication of data achievable.
    We currently have a requirement for a minimum of 400 million transactions per day (plus whatever we do now) and I am still waiting to see if this is possible even in our building. Even at 50 million transactions, logging is proving to be a bottleneck on our SAN.
    I just wonder where we can go with this.
    Speaking of blueprints: someone really needs to ensure the highest level of quality control in blueprints, as there are some people at Sun who just want to get their name in print. After reading some of the blueprints, there is a (thankfully small) minority whose authors don't have a clue about the real world.
    Regards
    Stephen

  • Same Certificate for WLS Clustering?

    Hi all.
    First of all, is it worth having 2-way authentication SSL connections among
    the WebLogic servers in the same WebLogic cluster?
    I'm wondering if I can set up the same certificate/private key pair on all
    WebLogic instances running on the same machine under
    WebLogic clustering, or do they require unique key pairs?
    To make 2-way authentication SSL work between IIS and the WebLogic server, does IIS
    have to obtain a certificate from
    a trusted CA such as Verisign? Can IIS and a WebLogic server running on the
    same machine share the same certificate and
    private key?
    Thanks in advance.

    > First of all, is it worth having 2-way authentication SSL connections among the WebLogic servers in the same WebLogic cluster?
    There really is no reason to do so.
    > I'm wondering if I can set up the same certificate/private key pair on all WebLogic instances running on the same machine under WebLogic clustering, or do they require unique key pairs?
    Digital certificates are typically tied to the machine name. Each digital
    certificate is associated with a unique private key. So, unless all the
    machines have the same machine name (a la machine.com), you'll need a
    different digital certificate for each.
    > To make 2-way authentication SSL work between IIS and the WebLogic server, does IIS have to obtain a certificate from a trusted CA such as Verisign?
    Are you trying to do an SSL connection from the plug-in to WLS?
    > Can IIS and a WebLogic server running on the same machine share the same certificate and private key?
    I do not believe that this is possible. Each vendor uses their own
    mechanism for storage of the private key.
    Thanks,
    Michael
    Michael Girdley
    BEA Systems Inc
    "Won H. Cho" <[email protected]> wrote in message
    news:[email protected]..
    Hi all.
    First of all, is it worth to have 2 way authentication SSL connectionsamong
    the weblogic servers under the same weblogic clustering?
    I'm wondering if I can set up same certificate/private key pair into all
    weblogics running on the same machine under
    weblogic clustering? or do they require to have unique key pairs?
    To make 2 way authentication SSL between IIS and weblogic server, does IIS
    have to obtain certificate from
    trusted CA such as Verisign? Can IIS and weblogic server running at the
    same machine share same certificate and
    private key?
    Thanks in advance.
