Benefit of clusters

Hi
Just want to understand why we need clusters in WebLogic environments. If we have, say, 4 managed servers, the load balancer can balance across all 4 of them without knowing about the cluster. I have read about the various benefits at
http://download.oracle.com/docs/cd/E11035_01/wls100/cluster/overview.html#wp1011562
but I need to know how it is practically beneficial, please.
thanks

Hi,
If you are integrating a WebLogic cluster with Apache using the WebLogic plug-in, then Apache acts as the load balancer and distributes requests across the WebLogic cluster.
If you do not want to create a cluster in WebLogic but still want to load balance requests among your WebLogic managed servers, you can use the mod_proxy_balancer.so module, newly introduced as of Apache 2.2.x.
This module will load balance requests to the WebLogic managed servers, and it will even maintain sticky sessions.
For detailed configuration please refer to the link below:
http://middlewareforum.com/weblogic/?p=285
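For illustration, here is a minimal sketch of the kind of httpd.conf fragment that approach involves, assuming Apache 2.2+ with mod_proxy and mod_proxy_balancer loaded; the host names, ports, route names and paths are placeholders, not taken from the article:

# Hypothetical httpd.conf fragment: round-robin balancing across two
# WebLogic managed servers, with cookie-based sticky sessions.
<Proxy balancer://wlservers>
    BalancerMember http://managed1.example.com:7001 route=ms1
    BalancerMember http://managed2.example.com:7003 route=ms2
    ProxySet stickysession=JSESSIONID lbmethod=byrequests
</Proxy>
ProxyPass /app balancer://wlservers/app
ProxyPassReverse /app balancer://wlservers/app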
Thanks,
Kartheek

Similar Messages

  • Clustering on a single box.

    Is there typically a performance benefit from clustering EJBs on a single box? That is, running a number of smaller (memory footprint) WebLogic Servers on a single machine vs. one larger instance. In its simplest form, we have an application where stateless session beans are used to perform transactions on behalf of the client.
    I think this boils down to JVM performance characteristics. Can the JVM provide adequate memory management for object creation, GC, etc.?
    We are using NT, have a gigabyte of memory and are using Sun's JVM version 1.2.2.

              For starters, check: http://www.enteract.com/~bradapp/links/
              Search for "Garbage Collection & Memory Management"
              Srikant, [email protected], http://www.weblogic.com/, etc.
    Chaminda Peries wrote:
    > Rob could you please tell me where I can find the research you mentioned?
    >
    > Thanks
    >
    > Chaminda Peries
    >
    > Rob Woollen wrote in message <[email protected]>...
    > > In general, there is a benefit to clustering multiple java vms on a single multi-processor box.
    > >
    > > Most (if not all) current java vms will stop all the threads, initiate a garbage collection, and then restart each thread. During the gc pause, you will generally see 1 cpu at 100% while the others idle.
    > >
    > > By running multiple jvms, you will stagger the collection times.
    > >
    > > If you are interested, there is a large body of research on concurrent and incremental gc algorithms.
    > >
    > > -- Rob
    > >
    > > Todd Shutts wrote:
    > >
    > > > Is there typically a performance benefit from clustering EJBs on a single box? That is, running a number of smaller (memory footprint) Weblogic Servers on a single machine vs. one larger instance. In its simplest form, we have an application where stateless session beans are used to perform transactions on behalf of the client.
    > > >
    > > > I think this boils down to JVM performance characteristics. Can the JVM provide adequate memory management for object creation, GC, etc.?
    > > >
    > > > We are using NT, have a gigabyte of memory and are using Sun's JVM version 1.2.2.

  • What is the benefit if we add coherence in soa clustering?

    Hi,
    What is the benefit if we implement coherence in SOA clustering?
    For example, consider two applications:
    1. SOA clustering with coherence implemented
    2. SOA clustering without coherence implemented.
    Features like high availability, load balancing and performance improvement can be achieved with SOA clustering alone. But if Coherence is used in a SOA clustered environment, what is the value-add that Coherence provides?
    Your inputs are highly appreciated.
    Regards,
    Sree

    Sree,
    Coherence caching is embedded in SOA 11g. The product uses it internally for various purposes (deployment distribution, cluster communication, etc.). What do you mean by SOA clustering without Coherence implemented?
    Regards,
    Anuj

  • What is RID in non clustered index and its use

    Hi All,
    I need help regarding the following topics on SQL Server:
    1) What is the RID in a nonclustered index, and what is its use?
    2) What is physical and virtual address space? The difference between 32-bit and 64-bit virtual address space.
    Regards
    Rahul

    Next time, please ask a single question per thread; you will get better responses.
    1. The RID is the location of a row in a heap. When you create a nonclustered index on a heap and a lookup happens to fetch extra columns, the RID is used to locate the rows. RID is basically short for Row ID. That is the basic definition; please read this thread for more details. (See the sketch below.)
    2. I have not heard of "physical address space"; I know virtual address space (VAS). VAS, in simple terms, is the amount of (virtual) memory "visible" to a process; the process can be the SQL Server process or a Windows process. Its size depends on the architecture of the operating system. A 32-bit OS has a maximum VAS of 4 GB: a process running on a 32-bit system can address at most 2^32 locations (which is equivalent to 4 GB). Similarly, for 64-bit the theoretical maximum VAS is 2^64, which is practically unbounded; to keep things feasible, the maximum VAS on a 64-bit system is capped at 8 TB. VAS acts as a layer of abstraction, an intermediary: instead of all requests mapping directly to physical memory, they first map to the VAS and are then mapped to physical memory, so memory requests can be managed in a more coordinated fashion than if each process did it itself, which would soon cause a memory crunch. Any process created on Windows will see virtual memory according to its VAS limit.
    Please read this article for detailed information.
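    As a minimal T-SQL sketch of point 1 (the table, index and column names here are hypothetical, not from the thread): a heap has no clustered index, so its nonclustered indexes store a RID per row, and fetching columns that are not in the index shows up as a RID Lookup in the execution plan.

    -- A heap: a table without a clustered index.
    CREATE TABLE dbo.OrdersHeap (
        OrderID  int         NOT NULL,
        Customer varchar(50) NOT NULL,
        Amount   money       NOT NULL
    );

    CREATE NONCLUSTERED INDEX IX_OrdersHeap_OrderID
        ON dbo.OrdersHeap (OrderID);

    -- The index holds (OrderID, RID). Amount is not in the index, so each
    -- matching row costs a RID lookup back into the heap.
    SELECT OrderID, Amount
    FROM dbo.OrdersHeap
    WHERE OrderID = 42;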
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • Clustered Index Vs Reverse Index

    I am searching against a column of a table (4 million rows) which has datatype varchar(19). Which index is recommended, clustered or reverse?
    any thoughts on this would help
    Thanks

    Reverse indexes are used for write performance.
    If you insert a lot of rows quickly and you use an incrementing value as the index key, typically a sequence, then your insert performance will benefit.
    An index is simply ordered data, so if you need to insert into the index
    9567843
    9567844
    9567845
    9567846
    9567847
    9567848
    They will all write to the same block.
    Whereas with the reversed key values
    3487659
    4487659
    5487659
    6487659
    7487659
    8487659
    they will not, as there are big gaps between the index values.
    If anything, reversing the value back may have some overhead, so read performance could be very slightly degraded, but I must emphasize that this is untested speculation.
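    Assuming Oracle (where REVERSE is index syntax), a minimal sketch of the idea described above; the table and column names are made up for illustration:

    -- Values from a sequence arrive in order (9567843, 9567844, ...) and
    -- would all hit the same rightmost leaf block of a normal B-tree index.
    CREATE TABLE orders (
        order_id  NUMBER       NOT NULL,
        order_ref VARCHAR2(19) NOT NULL
    );

    -- A reverse key index stores the key bytes reversed (9567843 becomes
    -- 3487659), scattering consecutive inserts across many blocks.
    CREATE INDEX orders_ref_rev_ix ON orders (order_ref) REVERSE;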

  • QMASTER hints 4 usual trouble (QM NOT running / CLUSTERed nodes / Networks etc.)

    All, I just posted this with some hints & workarounds for very common issues people have on this forum and keep asking about concerning the use of APPLE QMASTER with FCP, SHAKE, COMPRESSOR and MOTION. I've had many over the last 2 years and see them coming up frequently.
    Perhaps these symptoms are fixed in FCS2 as at MAY 2007 (now). However, if not, here are some rules of thumb that I used for FCP to Compressor via a QMASTER cluster, for example. In no special order, but they might help someone get around the issues with QMASTER V2.3, FCP V5.1.4, compressor.app V2.3.
    I saw the latest QMASTER UI and usage at NAB2007 and it looked a little more solid with some "EASY SETUP" stuff. I hope it has been reworked underneath. I guess I will know soon if it has.
    For most FCP, SHAKE, MOTION and COMPRESSOR work:
    • Provide access from ALL nodes to ALL the source and target objects (files) on their VOLUMES. Simply MOUNT those volumes through the APPLE file system (via NFS) using cmd+K or Finder/Go/Connect to Server, OR use an SSAFS such as XSAN™ where the file systems are all shared over FC, not the network. You will notice the CPUs getting very busy for a short while; this is the APPLE FILE SYSTEM task - I guess it's doing "spotlight stuff". It goes away after a few minutes.
    • Set the COMPRESSOR preferences for "CLUSTER OPTIONS" to "Never copy source to Cluster". This means that all nodes can access your source and target objects (files) over NFS (as above). Failure to do this means LENGTHY times to COPY material back and forth, in some cases undermining the pleasure gained from using clustering in the first place (reduced job times).
    • DON'T mix the PHYSICAL or LOGICAL networks in your local cluster. I don't know why, but I could never get this to work. Physical means: stick with either ETHERNET or FIREWIRE or your other interface (AirPort etc., which will generally be way too slow and useless). Logical means: keep all nodes on the SAME subnet. You can do this simply by setting it up in System Preferences/QMASTER/Advanced tab under "Use Network Interfaces". In my current QUAD I set this to use BUILT-IN ETHERNET 1, and on the MBP DCs I set this to their BUILT-IN ETHERNET.
    • LOGICAL NETWORKS (subnet): simply HARDCODE an IP address on the ETHERNET interface (for example) for your cluster nodes and the service controller. For example 3.1.1.x... it will all connect fine.
    • PHYSICAL NETWORKS: as above, (1) DON'T MIX FireWire (IPoFW) and Ethernet (IPoE). (2) If you have more than one extra service node, USE A HUB or SWITCH. I went and bought a 10-port GbE HUB for about HK$400 (€40) and it worked fine. I was NEVER able to get a stable QMASTER system mixing FW and ETHERNET. (3) FWIW, using IP over FW caused me a LOAD of DISK errors and timeouts (I/O errors) on those DISKs that were FW400 (all gone now), which showed this was not stable overall.
    • For the cluster controller node, MAKE SURE you set the CLUSTER STORAGE (System Preferences/QMASTER/shared cluster storage) for the CLUSTER CONTROLLER NODE to a SHARED volume (see above). This seems essential for SHAKE to work (if not, check the Qmaster errors in the console.app [see below]). IF you have an SSAFS like XSAN™ then just add this cluster storage on a shared file path. Note that QMASTER does not permit the cluster storage to be on a NETWORK NODE for some reason. So, in short, just MOUNT the volume where the SHARED CLUSTER file is maintained for the CLUSTER controller.
    • FCP - avoid EXPORT to COMPRESSOR from the TIMELINE - it never seems to work properly (see later). Instead, EXPORT FROM the SEQUENCE in the BROWSER - consistent results.
    • FCP - "media missing" messages on EXPORT to COMPRESSOR: this seems to be a defect in FCP 5.1 when you EXPORT using a sequence that is NOT in the "root" or primary tree in the FCP PROJECT BROWSER. Simply put, if your browser has Bin A (contains Bin B (contains Bin C (contains sequence X))), "EXPORT TO COMPRESSOR" will FAIL (won't work) if you use it in an FCP browser PANE that is separately OPEN. To get around this, simply OPEN/EXPOSE the triangles/trees in the BROWSER PANE for the PROJECT, select the SEQUENCE you want, and "EXPORT to COMPRESSOR" from there. This has been documented in a few places in this forum, I think.
    • FCP -> COMPRESSOR -> .M2V (for DVDSP3): a few things here. EXPORTING from an FCP SEQUENCE with CHAPTER MARKERS to an MPEG2 .M2V encoding USING A CLUSTER causes errors in the placement of the chapter markers when it is imported into DVDSP3. In fact, CONSISTENTLY, ALL the chapter markers are PLACED AT THE END of the TRACK in DVDSP3 - somewhat useless. This seems to happen ALSO when the source is an FCP reference movie, although inconsistently. A simple workaround, if you have the machines, is to TURN OFF SEGMENTING in the COMPRESSOR ENCODER inspector and let each .M2V transcode run on the same service node. For the jobs at hand, just set up a CLUSTER and controller for each machine and then SELECT the cluster (myclusterA, hisclusterB, herclusterC) for each transcode job. Anyway, for me, in the time spent resolving all this I could have TRANSCODED everything on my QUAD and it would all have been done sooner! (LOL)
    • CONSOLE logs: IF QMASTER fails, I would suggest your first port of diagnosis should be /Library/Logs/Qmaster. In there you will see (on the controller node) compressor.log, jobcontroller.com.apple.qmaster.cluster.admin.log, and lots of others, including servicecontroller.com.apple.qmaster.executorX.log (for each cpu/core and node) and qmasterca.log. All of these are worth a look, and for me they helped solve 90% of my qmaster errors and failures.
    • MOTION 3 - FWIW, EXPORT USING COMPRESSOR to a CLUSTER seems to fail EVERY TIME. It seems MOTION is writing stuff out to /var/spool/qmaster.
    TROUBLESHOOTING QMASTER: IF QMASTER seems buggered up (hosed), then follow these steps PRIOR to restarting your machines.
    Go read the TROUBLESHOOTING sections in the published APPLE docs for COMPRESSOR, SHAKE and "SET UP FOR DISTRIBUTED PROCESSING", and search these forums CAREFULLY - the answer is usually there somewhere.
    ELSE, try these steps...
    You'll know that QMASTER is in trouble when you:
    • see that the QMASTER ICON at the top of the screen says "NO SERVICES" even though that node is started, and
    • find that the APPLE QMASTER ADMINISTRATOR is VERY SLOW after an "APPLY" (like minutes, with SPINNING BEACHBALL), or it WON'T LET YOU DELETE a cluster, or you see "undefined" nodes in your cluster (meaning one was shut down or had a network failure). All this means it's going to get worse and worse, SO DON'T submit any more work to QMASTER; best count your gains and follow this list next.
    (a) In COMPRESSOR.app, RESET BACKGROUND PROCESSES (it's under the COMPRESSOR name list box) and see if things get kick-started, but you will lose all the work that has been done up to that point in COMPRESSOR.app.
    (b) If not OK, then on EACH node in that cluster, STOP QMASTER (System Preferences/QMASTER/Setup [set 0 minutes in the prompt and OK]). Then, when STOPPED, RESET the shared services by OPTION+CLICKING the "START" button to reveal "RESET SERVICES", then click "START" on each node to start the services. This has the action of REMOVING - or, in the case where the CLUSTER CONTROLLER node is RESET, of terminating - the cluster that's under its control. If so, simply go to APPLE QMASTER ADMINISTRATOR and REDEFINE it. Go restart your cluster.
    (c) If step (b) is no help, consult the QMASTER logs in /Library/Logs/Qmaster (using the console.app) for any FILE MISSING or FILE NOT FOUND or FILE ERROR. Look carefully for the NODENAME (the machine_name.local) where the error may have occurred. Sometimes it's very chatty; other times it is not. Also look in the BATCH MONITOR OUTPUT for error messages. Often these are NEVER written (or I can't find them) in /var/logs. Try to resolve any issues you can see (mostly VOLUME or FILE path issues, from my experience).
    (d) If still no joy, then try removing all the 'dead' cluster files from /var/tmp/qmaster and /var/spool/qmaster, and also from the file directory that you specified above for the controller to share the clustering. For SHAKE issues, do the same (note also where the shake shared cluster file path is - it can also be specified in the RENDER FILEOUT node's prompt).
    (e) If all this WON'T help you, it's time to get the BIG hammer out. Simply STOP all nodes if not stopped (if status/mode is "STOPPING" then it [QMASTER] is truly buggered), DISMOUNT the network volumes you had mounted, and RESTART ALL YOUR NODES. This has the effect of RESTARTING all the QMASTERD tasks. Yes, sure, you can go in and SUDO restart them, but that is dodgy at best because they never seem to terminate cleanly; Kill -9 etc. or FORCE QUIT is what one ends up doing, and then you STILL have to restart.
    (f) After the restart, perform the steps from (b) again and it will usually (but not always) be right after that.
    Lastly - here are some posts I have made that may help others, for QMASTER 2.3 and not for the NEW QMASTER as at MAY 2007...
    Topic: "qmasterd not running" - how this happened and what we did to fix it - http://discussions.apple.com/message.jspa?messageID=4168064#4168064
    Topic: IP over Firewire AND Ethernet connected cluster? - http://discussions.apple.com/message.jspa?messageID=4171772#4171772
    Lastly, spend some DEDICATED time using OBJECTIVE keywords to search the FINAL CUT PRO, SHAKE, COMPRESSOR, MOTION and QMASTER forums.
    Hope that helps.
    G5 QUAD 8GB ram w/3.5TB + 2 x 15in MBPCore   Mac OS X (10.4.9)   FCS1, SHAKE 4.1

    Warwick,
    Thanks for joining the forum and for doing all this work and posting your results for our benefit.
    As FCP2 arrives in our shop, we will try once again to make sense of it and to see if we can boost our efficiencies in rendering big projects and getting Compressor to embrace five or six idle Macs.
    Nonetheless, I am still in "Major Disbelief Mode" that Apple has done so little to make this software actually useful.
    bogiesan

  • Need recommendation - iis with two physical servers and clusters

    Hello,
    I was hoping to get a recommendation/opinion or two on the configuration I've inherited. I'm looking at the following setup, using WebLogic 8.1 SP4 (to be upgraded to SP6), running on two physical machines:
    MachineA:
    Domain: myDomain
    Cluster: cluster1
    Servers: Admin Server:9001 (not part of cluster)
    ManagedServer1:7001
    ManagedServer2:7010
    MachineB:
    Domain: myDomain
    Cluster: cluster2
    Servers: Admin Server:9001 (not part of cluster)
    ManagedServer3:7001
    ManagedServer4:7010
    On MachineA: IIS Proxy Plugin with parameter in .ini file
    WeblogicCluster: MachineA:7001,MachineA:7010,MachineB:7001,MachineB:7010
    Naturally, IIS is acting as a round-robin load balancer.
    The domains were created using the wizard on both machines and the basic domain template.
    For proper failover to be taken advantage of, does something different need to be configured, or is it fine as is?
    And if so, does group replication have something to do with it?
    Should either of the clusters contain managed servers from both physical machines?
    Or should this just be one cluster of 4 managed servers across the two machines?
    I really am not sure about any of this, even though I've read through much of the clustering documentation... twice.
    I should mention that there is only a .war file deployed, with 4 connection pools and 4 data sources. No EJBs or JMS.
    Would really appreciate any feedback or opinions/advice..
    Thanks

    This is what I'd do
    MachineA:
    Domain: myDomain
    Cluster: cluster1
    Servers: Admin Server:9001 (not part of cluster)
    ManagedServer1:7010
    ManagedServer2:7020
    MachineB:
    Domain: myDomain
    Cluster: cluster2
    Servers: Admin Server:9001 (not part of cluster)
    ManagedServer3:8010
    ManagedServer4:8020
    I just changed the port numbers so they flow and have a pattern - 70xx port numbers belong to cluster 1, 80xx port numbers belong to cluster 2. No technical benefit, apart from making it tidier in my opinion.
    Port 9001 for the admin server is normally the domain-wide admin port - you could be using that, so that's fine, but my admin server runs on 7001.
    You also only need one admin server per domain; this can run on MachineA.
    Replication groups are used to help WLS figure out where to place the secondary HTTP session. It will try servers on a different machine/replication group first; if none is available, it will create the secondary on the same machine.
    What you will end up with is a domain with 4 managed servers in it. The admin server will run on MachineA, say using port 7001, and each of the managed servers will contact the admin server to download its config.
    cluster1 will contain 2 managed servers, and the same goes for cluster2.
    Providing you have your cluster multicast addresses set up and the cluster is working, that should be all you need to do for session failover.
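    For completeness, a hypothetical iisproxy.ini fragment for the layout above; with two separate clusters the plug-in simply lists all four managed servers (ports following the scheme suggested here), while session replication still happens only within each cluster:

    # Hypothetical iisproxy.ini fragment - adjust host names and ports.
    WebLogicCluster=MachineA:7010,MachineA:7020,MachineB:8010,MachineB:8020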
    There's some BEA sample code somewhere which will deploy a web-app and show which managed server you've attached to. If you find that and then connect through IIS, shut down the server that the JSP tells you, you should be able to see the session fail-over nicely.
    Hope that helps,
    Pete

  • Simon Greener's Morton Key Clustering in Oracle Spatial

    Hi folks,
    Apologies for the rambling.  With mattyschell heading for greener open source big apple pastures I am looking for new folks to bounce ideas and code off.  I was thinking this week about the discussion last autumn over spatial clustering.
    https://community.oracle.com/thread/3617887
    During the course of the thread we all kind of pooh-poohed spatial clustering as not much of a solution, myself being one of the primary poohers.  Yet the concept certainly remains as something to consider regardless of our opinions.  The yellow book, the Greener/Ravada book, Simon's recent treatise (http://download.oracle.com/otndocs/products/spatial/pdf/biwa_2015/biwa2015_uc_comparativeperformance_greener.pdf), they all put forward clustering such that at the very least we should consider it a technique we should be able as professionals to do - a tool in the toolbox whether or not it always is the right answer.  I am mildly (very mildly) curious to see if Kothuri, Godfrind and Beinat will recycle their section on spatial clustering with the locked-down MD.HHENCODE into their 12c revision out this summer.  If they don't, then what is the replacement for this technique?  If they do, then we return to all of our griping about this ancient routine that Simon implies may date back to the CHS and their hhcode indexes - at least it's not written in Java!
    Anyhow, so I've been in the midst this month of refreshing some of the datasets I manage and considering clustering the larger tables whilst I am at it.  Do I really expect to see huge performance gains?   Well... not really.  But it does seem like something that should be easy to accomplish, certainly something that "doesn't hurt" and shows that I am on top of things (e.g. "checks the box").  But returning to the discussion from last fall, just what is the best way to do this in Oracle Spatial?
    So if we agree to ignore poor old MD.HHENCODE, then what?  Hilbert curves look nifty but no one seems to be stepping up with the code for them.  And this reroutes us back around to Simon and his Morton key code.
    http://www.spatialdbadvisor.com/oracle_spatial_tips_tricks/138/spatial-sorting-of-data-via-morton-key
    So who all is using Simon's code currently?  If you read that discussion from last fall there does not seem to be anyone doing so and we never heard back from Cat Person on either what he decided to do or what his name is.
    I thought I could take a stab at streamlining Simon's process somewhat to make things easier for myself to roll this onto many tables.  I put together the following small package
    https://github.com/pauldzy/DZ_SDO_CLUSTER/tree/master/Packages
    In particular I wanted to bundle up the side issues of how to convert your lines and polygons into points, automate things somewhat, and provide a little verification function to see what the results look like.  So again, nothing that Simon does not already walk through on his webpage; it just makes it a bit easier to bang out on your tables without writing a separate long SQL process for each one.
    So, for example, to use Simon's Morton key logic you need to know the extent envelope of the data (in order to define a proper grid).  So if it's a large table, you'd want to stash the envelope info in the metadata.  You can do this with the update_metadata_envelope procedure, or just suffer through sdo_aggr_mbr each time if you don't want to go that route (I have one table of small watershed polygons that takes about 9 hours to run sdo_aggr_mbr on).  So just run things at the SQL prompt:
    SELECT
    DZ_SDO_CLUSTER.MORTON_UPDATE(
        p_table_name  => 'CATCHMENT_NP21'
       ,p_column_name => 'SHAPE'
       ,p_grid_size   => 1000
    )
    FROM dual;
    This will return the update clause populated with the values to use with the morton_key wrapper function, e.g. "morton_key(SHAPE,160.247133275879,-17.673722530871,.0956820001136141,.0352063207508021)".  So then just paste that into an update statement
    UPDATE foo
    SET my_morton_key = dz_sdo_cluster.morton_key(
        SHAPE
       ,160.247133275879
       ,-17.673722530871
       ,.0956820001136141
       ,.0352063207508021
    );
    Then rebuild your table sorting on the morton_key.  I just use the TOAD rebuild table tool and manually add the order by clause to the rebuild script.  I let TOAD do all the work of moving the indexes, constraints and grants to the new table.  I imagine there are other ways to do this.
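    As a rough sketch of that rebuild step (outside TOAD), with hypothetical names - create an ordered copy, then move the indexes, constraints and grants across and swap the names:

    -- Recreate the table ordered by the Morton key so spatially nearby
    -- rows land in nearby blocks.
    CREATE TABLE foo_sorted AS
    SELECT *
    FROM foo
    ORDER BY my_morton_key;
    -- then rebuild indexes/constraints/grants on foo_sorted and rename it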
    The final function is meant to be popped into Oracle mapviewer or something similar to show your family and friends the results.
    SELECT
    dz_sdo_cluster.morton_visualize(
        'NHDPLUS'
       ,'NHDFLOWLINE_NP21_ACU'
       ,'SHAPE'
       ,'OBJECTID'
       ,'100'
       ,10000
       ,'MORTON_KEY'
    )
    FROM dual;
    Look Mom, there it is!
    So anyhow, this is a first stab at things and I am interested in feedback or suggestions for improvement.  Did I get the logic correct?  Don't spare my feelings if I botched something.  Note that, like Simon, I passed on the matter of just how to determine the proper grid size.  I've been using 1000 for the continental US + Hawaii/PR/VI and, sitting here this morning, I think that is probably too large.  Of course it depends on the size of the geometries and thus the density of the resulting points.  With water features this can vary a lot from place to place, so perhaps 1000 is okay.  What would the algorithm be to determine a decent grid size?  It occurs to me I could tell you the average feature count per Morton key value - okay, well, it's about 10.  That seems small to me.  So I could see another function in this package that returns some kind of summary on the results of the keying, to tell you if your grid size estimate was reasonable.
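    That per-key average is easy to eyeball with a one-off query (hypothetical table/column names, matching the UPDATE example above):

    -- Average number of rows sharing each Morton key value; a very low
    -- average suggests the grid size is too large for the data density.
    SELECT COUNT(*) / COUNT(DISTINCT my_morton_key) AS avg_rows_per_key
    FROM foo;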
    Cheers and Happy Saturday,
    Paul

    I've done some spatial clustering testing this week.
    Firstly, to reiterate the purpose of spatial clustering as I see it:  spatial clustering can be of benefit in situations where frequent window based spatial queries are made.  In particular it can be very useful in web mapping scenarios where a map server is requesting data using SDO_FILTER or SDO_ANYINTERACT and there is a need to return the data as quickly as possible.  If the data required to satisfy the query can be squeezed into as few blocks as possible, then the IO overhead is clearly reduced.
    As Bryan mentioned above, once the data is in the buffer cache, then the advantage of spatial clustering is reduced.  However it is not always possible to get/keep enough of the data in the buffer cache, so I believe spatial clustering still has merits, particularly if it can be implemented alongside spatial partitioning.
    I ran the tests using an 11.2.0.4 database on my laptop.  I have a hard disk rather than SSD, so the effects of excessive IO are exaggerated.  The database is configured with the default 8kb block size.
    Initially, I created a table PARCELS:
    create table parcels (
    id            integer,
    created_date  date,
    x            number,
    y            number,
    val1          varchar2(20),
    val2          varchar2(100),
    val3          varchar2(200),
    geometry      mdsys.sdo_geometry,
    hilbert_key  number);
    I inserted 2.8 million polygons into this table.  The CREATED_DATE is the actual date the polygons were captured.  I populated val1, val2 and val3 with string values to pad the rows out to simulate some business data sitting alongside the sdo_geometry.
    I set X,Y to the first ordinate of the polygon and then set hilbert_key = sdo_pc_pkg.hilbert_xy2d(power(2,31), x, y).
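    As a sketch, the keying step just described amounts to something like this (assuming the PARCELS table above with X and Y already populated):

    -- Map each row's x,y onto a Hilbert curve over a 2^31 x 2^31 grid.
    UPDATE parcels
    SET hilbert_key = sdo_pc_pkg.hilbert_xy2d(POWER(2,31), x, y);
    COMMIT;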
    I then created 4 tables to base the tests upon:
    PARCELS_RANDOM:  Ordered by dbms_random.random - an absolute worst case scenario.  Unrealistic, but worthwhile as a benchmark.
    PARCELS_BASE_DATE:  Ordered by CREATED_DATE.  This is probably pretty close to how the original source data is structured on disk.
    PARCELS_RTREE:  Ordered by RTree.  Achieved by inserting based on an SDO_FILTER query
    PARCELS_HILBERT:  Ordered by the hilbert_key attribute
    As a first test, I counted the number of blocks required to satisfy an SDO_FILTER query.  E.g.
    select count(distinct(dbms_rowid.rowid_block_number(rowid)))
    from parcels_rtree
    where sdo_filter(geometry,
                    sdo_geometry(2003, 2157, null, sdo_elem_info_array(1, 1003, 3),
                                    sdo_ordinate_array(644232,773809, 651523,780200))) = 'TRUE';
    I'm assuming dbms_rowid.rowid_block_number(rowid) is suitable for this.
    I ran this on each table and repeated it over three windows.
    Results:
    So straight off we can see that the random ordering gave pretty horrific results as the data required to satisfy the query is spread over a large number of blocks.  The natural date based clustering was far better. RTree and Hilbert based clustering reduced this by a further 50% with Hilbert just nosing out RTree.
    Since web mapping is the use case I am most likely to target, I then set up a test case as follows:
    • Set up layers in GeoServer for each of the tables.
    • Used a script to generate 1,000 random squares over the extent of the data, ranging from 200m to 500m in width and height.
    • Used JMeter to make a WMS request for a png of each of the 1,000 windows.  JMeter was run sequentially with just one thread, so it waited for each request to complete before starting the next.  I ran these tests 3 times to balance out the results, flushing the buffer cache before each run.
    Results:
    Again the random ordering performed woefully bad - somewhat exacerbated by the quality of the disk on my laptop.  The natural date based clustering performed far better.  RTree and hilbert based clustering further reduced the time by more than half.
    In summary, the results suggest that spatial clustering is worth the effort if:
    • the data is not already reasonably well clustered
    • you've got a decent quantity of data
    • you're expecting a lot of window based queries which need to be returned as quickly as possible
    • you don't expect to be able to fit all the data in the buffer cache
    When it comes to deciding between RTree and Hilbert (or Morton/z-order or any other space filling curve method).... I found that the RTree method can be a bit slow on large datasets, although this may not matter as a one off task.  Plus it requires a spatial index on the source table to start off with.  The key based methods are based on an xy, so for lines and polygons there is an intermediate step to extract an xy.  I would tend to recommend this approach if you also partition the data based on a subset of the cluster key.
    Scripts are available here: https://github.com/john-otoole/oracle_spatial_cluster_test
    John

  • Help on Clustering of tables

    Hi,
    Below is an example of the 3 tables which I want to cluster.
    I have the following structure of the 3 tables:
    Table A Primary key -> referenced by Table B
    Table A & B Primary key -> referenced by Table C
    I have done the following:
    1. This creates the cluster for table A's primary key ano:
    create cluster a_cl
    (ano number(10));
    2. This creates table A with the cluster clause:
    create table a
    (ano number(10) primary key,
    aname varchar2(20))
    cluster a_cl(ano);
    3. This creates another cluster for table B, as it is referenced by table C (I have no idea if this has to be done):
    create cluster b_cl
    (bno number(10));
    4. This is the command I used to create a table which is in 2 clusters - the cluster of table A and the cluster of table B:
    create table b
    (bno number(10) primary key,
    bname varchar2(20),
    ano number(10),
    foreign key(ano) references a(ano))
    cluster a_cl(ano), -- of table A
    cluster b_cl(bno); -- of table B
    The above command gave me the following error:
    ORA-01769: duplicate CLUSTER option specifications.
    I could not proceed further for the clustering of table C due to the above error.
    How do I use the cluster command for a table which is related to 2 or more tables?
    I have gone through many examples on clustering, but almost all of them show only the case in which a single table is related.
    Can anybody help me in this regard?
    Thanks in advance for any help.
    Regards,
    Rajashree.

    Rajashree,
    I understood the relationships between them. It's just strange because a many-to-many relationship includes many-to-one. If you have a reason, that's fine.
    Before creating a cluster, try to find out why you need it. Don't create one because you heard that clusters can improve performance dramatically. They may, but they are useful only in certain cases, whereas in others they may cause very poor performance. To find useful information about clusters, see the Oracle documentation: Oracle8 Concepts.
    Read my first answer carefully. It shows one possible cluster organization. How you want to organize your cluster depends on what you want. If you don't understand how a cluster may benefit you, don't do it. I recommend reading the Oracle docs carefully to find out the benefits of clusters.
    Aleksandr
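    For what it's worth, the ORA-01769 above arises because a table can be stored in at most one cluster, so only one CLUSTER clause is allowed per table. A hedged sketch of one possible arrangement (not necessarily the right design for this schema): keep table A in a_cl, and cluster B and C together on the key they share:

    -- Tables B and C share bno, so they can share one cluster.
    create cluster b_cl
    (bno number(10));
    create index b_cl_ix on cluster b_cl;  -- an index cluster needs this before DML
    create table b
    (bno number(10) primary key,
    bname varchar2(20),
    ano number(10) references a(ano))
    cluster b_cl(bno);  -- exactly one CLUSTER clause per table
    create table c
    (cno number(10) primary key,
    bno number(10) references b(bno),
    ano number(10) references a(ano))
    cluster b_cl(bno);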

  • Best Practices: Clustered Author Environment

    Hello,
    We are setting up our CQ 5.5 infrastructure in 3 datacenters, ultimately with an authoring instance in each (a total of three).  Our plan was to cluster the three machines using "share nothing", and each would replicate to the publish instances in all data centers.  To eliminate confusion within our organization, I'd like to create a single URL for our authors so they wouldn't have to remember to log into 3 separate machines.
    So instead of providing cqd1.acme.com, cqd2.acme.com, cqd3.acme.com, I would distribute something like "cq5.acme.com", which would resolve to one of the three author instances.  While that's certainly possible by putting a web server/load balancer in front of the three, I'm not so sure that's even a best practice for supporting internal users.
    I’m wondering what have other multi-datacenter companies done (or what does Adobe recommend) to solve this issue, did you:
    Only give one destination and let the other two serve as backups? (this appears to defeat the purpose of clustering)
    Place a web server/load balancer in front of each machine and distribute traffic that way?
    Do nothing, e.g., provide all 3 author URLs and let the end-user choose the one closest to them geographically?
    Something else???
    It would be nice if there were a master UI an author could use that communicated with the other author machines in a way that's transparent to the end user - so if Auth01 went down, the UI would continue to work with the remaining machines without the end user (author) even knowing the difference (e.g., not having to change machines).
    Any thoughts would be greatly appreciated.

    Day's documentation (for CRX 2.3) states in part, "whenever a write operation is received by a slave instance, it is redirected to the master instance ..."  So, all writes will always go to the master, regardless of which instance you hit.
    Day's documentation also states, "Perhaps surprisingly, clustering can also benefit the author environment because even in the author environment the vast majority of interactions with the repository are reads. In the usual case 97% of repository requests in an author environment are reads, while only 3% are writes."
    This being the case, it seems the latency of hitting a remote author would far outweigh other considerations.  If I were you, New2CQ, I would probably have my users hit the instance that's nearest to them (in terms of network latency, etc.) regardless of whether it's a master or a slave.

  • Compression on table and on clustered index

    Hi all,
    if I have a table with a clustered index on it, is rebuilding the table with compression the same as rebuilding the clustered index with compression?
    Is this
    ALTER TABLE dbo.TABLE_001 REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = ROW, ONLINE=ON)
    Equivalent to this?
    ALTER INDEX PK_TABLE_001 ON dbo.TABLE_001 REBUILD WITH (DATA_COMPRESSION=ROW, ONLINE=ON)
    where PK_TABLE_001 is the clustered index of the table TABLE_001

    Andrea,
    A clustered index is the table itself, organized according to the clustering key. So if you apply compression to the clustered index, the table is internally compressed; and because the clustered index includes all columns of the table, the complete table is compressed.
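    As a quick check (a sketch, using the object names from the question), either rebuild should leave the clustered index partitions reporting ROW compression:

    -- index_id = 1 is the clustered index.
    SELECT index_id, partition_number, data_compression_desc
    FROM sys.partitions
    WHERE object_id = OBJECT_ID('dbo.TABLE_001');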
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • IronPort Clustering questions

    Hello all,
    I have some questions about clustering in IronPort:
    Actually I have one IronPort C150 in "standalone mode" with an IP address that handles the mail flow (192.168.1.34).
    We received a second IronPort to set up a cluster configuration between them.
    My questions are:
    1) What happens to the mail flow if the first IronPort (192.168.1.34) moves to a cluster configuration?
    Do I have to configure a virtual address matching the original mail-flow IP address (192.168.1.34), or does the cluster take over the original configuration of the first IronPort?
    2) If one IronPort fails, does the second IronPort automatically take over the mail, or do I have to reconfigure the IP address manually?
    Thanks for your help.
    PS: Sorry for my English.

    I agree with your thoughts on MX records. The biggest benefit of using a load balancer is the management. Once you get a large number of hosts in an MX record, you start running into problems with senders correctly resolving your MX records due to improper DNS configuration on the internet (UDP vs TCP). Standing up a large number of hosts behind some load balancers is one potential solution. This of course comes with its own set of challenges.
    I'm still using MX records, but at some point I will need to look at having multiple machines behind each host in my MX records to cut down on the size of the returned record.
    I just wish I could get all of my application developers to write their apps to understand MX records. Load balancers have worked well for my outbound environment, where most applications point at a host name instead of an MX record.
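    To illustrate the MX-based approach being discussed (a hypothetical zone fragment; the names are placeholders): two equal-preference MX hosts give inbound load sharing and failover with no load balancer, at the cost of a larger DNS response as the host list grows.

    ; example.com receives mail on two equal-preference hosts
    example.com.    IN  MX  10  ironport1.example.com.
    example.com.    IN  MX  10  ironport2.example.com.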
    Joe

  • XI R2 clustering question

    I am adding new hardware to a BusinessObjects XI R2 SP3 FP3.4 environment.  We are not upgrading versions.  When running the install, I would like to configure the new servers with their own independent CMS database instead of creating an extended installation or simply using the existing CMS database at install time.  The benefit would be that I can install the product at any time without impact to users, and take a relatively smaller outage to cluster the new servers with the existing servers.  Are there any drawbacks to this method?

    Hi Alan,
    I wasn't able to find it at a glance, but I do believe that some clustering instructions are in the Admin guide.  But here are your basic steps:
    - Make sure that your File Repositories are on a network share.  This is a biggie... if they're not, your reports will not run correctly from both servers.
    - Make sure that your CMS database is on a shared database server.  The default installation, I believe, is to create a local MySQL repository.  That will probably not work (someone feel free to jump in here).
    - In the CCM, give your cluster a name... in the properties section.  The cluster name starts with @.  For instance, the cluster could be @BOE_CompanyName_PROD.
    - Go through your BusinessObjects servers and make sure they're all pointed to the cluster name instead of "servername:6400".
    Semi-optional step:
    - Change the default login on the InfoView and CMC logins to point to the cluster name instead of "servername:6400".  If you don't do this, it's pretty likely that only one CMS will be utilized.
    There are probably a few things that I'm missing, but that's the general idea.  Overall, clustering is surprisingly easy.  You'll know it's working when you enter the cluster name in the InfoView login screen (make sure you remember the @ at the login screen) and are able to log in.
    Good luck!

  • Maxdb new features - Page Clustering and Prefetch

    Hi to all,
    Two new features of MaxDB are available with the new patches: Page Clustering and Prefetch in OLTP systems.
    Are there any experiences with these features?
    READAHEAD_TABLE_THRESHOLD
    "This parameter is used to specify, if read operations of tables
    should be optimized, i.e. if during a table access more than the
    specified value in pages is affected, servertasks will be used to
    read ahead pages of this table."
    I activated this parameter in November. The system now starts more quickly and some expensive SQL statements may be faster, but the average DB time didn't change.
    Page Clustering can be activated with parameter: DATA_IO_BLOCK_COUNT
    Then tables can be migrated with SQL-Statement: ALTER TABLE <table name> CLUSTER
    I activated this only for one table. This table now is read much faster.
    My questions:
    Is there documentation available for these features?
    How can I "uncluster" clustered tables?
    When these features are activated, expensive SQL statements are processed much faster. Given the resulting high workload on the storage system, might all other transactions get slower?
    regards
    Franz

    > 2 new features of maxdb are available with new patches: Page Clustering and Prefetch in OLTP-Systems.
    First of all: both features, although technically available, are not yet generally released for OLTP usage.
    The current implementation of page prefetching is explicitly a proof-of-concept implementation which should be used only on recommendation from MaxDB support.
    There are several limitations to the functionality at the moment; e.g., it only works for table scans.
    It does not work for joins or for index builds, nor does it work on clustered tables.
    Prefetching is available only in the 7.6 release at the moment, so after an upgrade to 7.7 this feature would be gone...
    > I activated this parameter in November. The system now starts more quickly and some expensive sql-statements may be faster. But the average db-time didn't change.
    Well, nothing surprising here.
    At SAP start a lot of table scans happen, mostly attributable to the fully buffered tables for the ABAP internals (security, DDIC, etc.).
    These statements will of course benefit if the pages are now read into the buffer in parallel.
    As for the avg. DB time: be happy!
    It basically means that most of the statements running in your system aren't processed via table scans, which for MaxDB is nearly always a very good thing.
    > Page Clustering can be activated with parameter: DATA_IO_BLOCK_COUNT
    > Then tables can be migrated with SQL-Statement: ALTER TABLE <table name> CLUSTER
    Ok, page clustering is not documented yet (just as prefetching is not officially documented) and not released for OLTP.
    > I activated this only for one table. This table now is read much faster.
    >
    > My questions.
    > Is there a documentation available for this features?
    Hmm... you're using undocumented features - what do you expect?
    No, currently there is no official documentation.
    There is work going on to provide official documentation, but none of it is released yet (and it will take some more time!)
    > How can I "uncluster" clustered tables?
    ALTER TABLE <tablename> NOT CLUSTER
    > When this features are activated, expensive sql-statements are processed much faster. Because of the high workload on storagesystem, all other transaction may get slower?
    It's not that simple.
    Both features address specific issues and are useful only in very specific circumstances.
    The clustering feature will not improve performance if the table is not accessed via the cluster key.
    In fact, it may even decrease performance in that case.
    Also, at the moment clustered tables don't stay clustered on updates/inserts/deletes.
    Therefore it is not necessarily a good idea to cluster tables like REPOLOAD or REPOSRC on a development system.
    regards,
    Lars

  • Forms and Reports Feature/Benefit Demo

    Hi,
    I downloaded sample code, the Forms and Reports Feature/Benefit Demo, from the OTN sample code site. During installation it says that my system (NT 4.0 Workstation with Oracle8 Server Release 8.0.6) does not have the Oracle WebDB Listener. Where can I download it? Can I install it on an NT workstation (not a web server)? If not, is there another way to make that demo work? Thanks.
    hv

    That depends on what you mean by clustering and exactly what version you are talking about.
