Making use of a cluster to render

Hi everyone,
First post here! Hoping to get involved in the discussions.
Anyways, I've a cluster set up with two MBPs, two new Quad Xeons, and a fairly beefed-up G5. One of the MacBook Pros is acting as controller, with only one rendering service running on it so it doesn't get overloaded.
I am editing a 3.5 GB file in Final Cut Pro and was wondering how I can make the most of the cluster I've set up. Do I need to export it to Compressor and use it that way, or is there a simpler way?
Thanks a lot,
Ronan

            reader.setFeature("http://xml.org/sax/features/validation", true);
            reader.setFeature("http://apache.org/xml/features/validation/schema", true);
      reader.setProperty("http://apache.org/xml/properties/schema/external-noNamespaceSchemaLocation",  XSDSchemaString);
       

Similar Messages

  • The name 'rolename' is in use by the cluster already as network name or application name

    I removed the Windows cluster iSCSI Target Server role a few days back since it was not needed. Now it is needed again, because this is a clustered storage space and we need the role to provide iSCSI storage to an external server. When I add the role again I get this error:
    The name winiscsi is in use by the cluster already as network name or application name
    I double-checked that the role is not installed. I even rebooted both nodes.
    ad

    Hi Adnan-Vohra,
    Could you post the original error information or a screenshot of this error? I can't find any similar error explained. Also note that we cannot install the iSCSI target on any cluster node; we need a separate server as the shared storage.
    Failover Clustering Hardware Requirements and Storage Options
    https://technet.microsoft.com/en-us/library/jj612869.aspx

  • Compressor outputs pixelated H.264 video when using a virtual cluster

    Hey all,
    So for some reason, when I use Compressor to convert my HDV .mov file to H.264, if I am using a virtual cluster (8-core Mac Pro 5,1) the end product comes out pixelated. However, I just tried again without the virtual cluster, and it came out normal as usual. Anyone know what I am missing? Any help is greatly appreciated. Thanks!

    Whoops, I'm clearly not paying attention. The images are a little small so it's hard to see, but the first image is without the quick cluster, and the second is with the quick cluster

  • "ORA-1715 : UNIQUE may not be used with a cluster index" but why?

    "ORA-1715 : UNIQUE may not be used with a cluster index" but why and what "may" means here? Any comments will be welcomed, thank you and best regards;
    show rel
    release 1002000300

    CREATE CLUSTER sc_srvr_id (
      srvr_id NUMBER(10)) SIZE 1024;

    SELECT cluster_name, tablespace_name, hashkeys, degree, single_table
    FROM user_clusters;

    CREATE UNIQUE INDEX idx_sc_srvr_id ON CLUSTER sc_srvr_id;
    ERROR at line 1:
    ORA-01715: UNIQUE may not be used with a cluster index

    CREATE INDEX idx_sc_srvr_id ON CLUSTER sc_srvr_id;

    SELECT index_name, index_type, tablespace_name
    FROM user_indexes WHERE index_name LIKE '%SRVR%';

    CREATE TABLE cservers (
      srvr_id    NUMBER(10),
      network_id NUMBER(10),
      status     VARCHAR2(1),
      latitude   FLOAT(20),
      longitude  FLOAT(20),
      netaddress VARCHAR2(15))
    CLUSTER sc_srvr_id (srvr_id);

    ALTER TABLE cservers ADD CONSTRAINT pk_srvr_id PRIMARY KEY (srvr_id);

    SELECT index_name, index_type, tablespace_name
    FROM user_indexes WHERE index_name LIKE '%SRVR%';

    INDEX_NAME        INDEX_TYPE    TABLESPACE_NAME
    IDX_SC_SRVR_ID    CLUSTER       USERS
    PK_SRVR_ID        NORMAL        USERS

    Do we really need another pkey index here?

    "May" has different meanings, one of which is:
    (used to express opportunity or permission)
    Metalink note 19067.1 says:
    This is not permitted.
    ... which agrees with the above meaning of it.
    Besides these, it does not make any sense to me to create a unique index on a cluster. You can have primary keys in the tables you include in the cluster, it depends on your business requirement. But why do you need a unique index on a cluster?
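    As a side note, here is a minimal sketch of the permitted pattern, reusing the object names from the example above: the cluster index itself must be non-unique, and uniqueness is enforced by a primary key on each clustered table, which builds its own separate index.

    -- The only kind of index Oracle allows on a cluster: non-unique.
    CREATE INDEX idx_sc_srvr_id ON CLUSTER sc_srvr_id;

    -- Uniqueness lives on the table, not the cluster; the PK constraint builds
    -- its own NORMAL index, so seeing both indexes in USER_INDEXES is expected.
    ALTER TABLE cservers ADD CONSTRAINT pk_srvr_id PRIMARY KEY (srvr_id);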

  • Load LiDAR data into Spatial 11g making use of Point Cloud Type?

    Dear all,
    from an aerial LiDAR scan I have approximately 226 million points, spread over 9 files. I would like to load them into Oracle Spatial 11g, making use of the new point cloud data type. I have the book "Pro Oracle Spatial for Oracle Database 11g" here, and Appendix E explains how you have these two tables that together manage your point cloud. I find the example given in the book rather simplified though, as they have only x, y and z and a row id column. In addition to this I also have r, g, b and i (intensity) values.
    I was wondering if anyone could give me a hint on how to store all the information in one table and making use of the sdo_pc data type at the same time.
    Also, the example has the points already in a table, but I'd like to load it all directly from my files into the point cloud table (I know how to use sqlldr, but how do I get it into this point cloud table structure). What's the cleverest way to go about this?
    All ideas are greatly appreciated!!
    Regards,
    Bia.

    Hi,
    Our LAS converter supports the LAS 1.1 format.
    LAS version 1.0 has fewer entries for the data than LAS version 1.1, so you might just ignore those extra fields that do not exist in LAS 1.0.
    Therefore, I am expecting your data to be fine with our LAS to SDO_PC converter.
    Thanks
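    To make the SDO_PC side a bit more concrete, here is a rough sketch of the usual 11g workflow, assuming the documented SDO_PC_PKG.INIT and SDO_PC_PKG.CREATE_PC calls; every table name, column layout, extent and tolerance below is made up for illustration, and the extra r, g, b and intensity values are carried as additional point dimensions (7 in total) rather than as separate columns.

    -- Base table: one row per point cloud (all names here are hypothetical).
    CREATE TABLE lidar_pc (pc SDO_PC);

    -- Block table with the required SDO_PC block structure.
    CREATE TABLE lidar_pc_blk AS SELECT * FROM mdsys.sdo_pc_blk_table WHERE 1 = 0;

    -- Staging table loaded with sqlldr: one row per point, dimensions in VAL_D1..VAL_Dn.
    CREATE TABLE lidar_inp (
      rid    VARCHAR2(24),
      val_d1 NUMBER,   -- x
      val_d2 NUMBER,   -- y
      val_d3 NUMBER,   -- z
      val_d4 NUMBER,   -- r
      val_d5 NUMBER,   -- g
      val_d6 NUMBER,   -- b
      val_d7 NUMBER);  -- intensity

    DECLARE
      inp_pc SDO_PC;
    BEGIN
      -- Initialize a 7-dimensional point cloud; replace the extent, tolerance and
      -- block capacity with values that match your data and SRID.
      inp_pc := SDO_PC_PKG.INIT(
                  'LIDAR_PC', 'PC', 'LIDAR_PC_BLK', 'blk_capacity=10000',
                  SDO_GEOMETRY(2003, NULL, NULL,
                               SDO_ELEM_INFO_ARRAY(1, 1003, 3),
                               SDO_ORDINATE_ARRAY(0, 0, 100000, 100000)),
                  0.005, 7);
      INSERT INTO lidar_pc (pc) VALUES (inp_pc);
      -- Block the staged points into the point cloud.
      SDO_PC_PKG.CREATE_PC(inp_pc, 'LIDAR_INP');
      COMMIT;
    END;
    /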

  • Cluster Mini Render Farm

    I'm terrible at server things, I wanted to start with that and be rather upfront about it.  But that's why I'm reaching out to you in the community.  I'm a video editor, and in Final Cut Pro it sometimes takes literally forever for things to render when adding effects.  Unfortunately, rather than buying a Mac Pro, I instead bought a MacBook Pro.  Don't get me wrong, I love it wholeheartedly, except the graphics card inside is just embarrassing, and with how quickly the processors heat up it's hard for me to work constantly and quickly.  So I was curious: I asked in the general communities last week about putting some Minis together and using them as a small "render farm", and what I got back was more or less maybes... so before I continue on, I thought I should pose the question to those who are basically the lords of Mac tech services for help, or rather a "how to".  Finding out just how much work would have to go into it would really set the tone of whether or not I should do it.
    I was thinking of just purchasing two of the Mac Mini Servers (Quad Cores each) and using them.
    So finally, is it indeed possible to even use two Minis to process the graphical output of the Final Cut Pro X that I'm running?  And if so, just how do I go about doing such a project?
    Thanks so much!

    I think your best bet is simply to schedule it yourself. Set up a simple Automator script that reads from a folder and performs the conversion. Simply drop a bunch of files into the watch folder and let it run. You can experiment with how many concurrent processes you can run on each machine before it bogs down.
    Don't expect much out of the G3's.

  • Windows 2008 Cluster question on using a new cluster drive source from shrinking existing disk

    I have a two-node Windows 2008 R2 Enterprise SP1 cluster. It has a basic cluster setup of one quorum disk (Q:) and a data disk (E:) which is 2.7 TB in size. This cluster is connected to a shared Dell disk array.
    My question is: can I safely shrink the 2.7 TB drive down, carve out a 500 GB disk from the same disk, and use it as a new cluster disk resource? We want to install Globalscape SFTP software on this new disk for use as a cluster resource.
    Will this work without crashing the cluster?
    Thanks,
    Gonzolean

    Hi ,
    Thank you for posting your issue in the forum.
    I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.
    Thank you for your understanding and support.
    Best Regards,
    Andy Qi
    TechNet Community Support

  • Making use of the new MDB-Contract

    Dear friends,
    I would like to create a J2EE 1.4 MDB making use of the JMS resource adapter. I'm not able to find any documentation presenting an example. Could you please point me to some code samples? Or if you could provide me an example with the ejb-jar.xml and the weblogic-jar.xml, I would much appreciate it.
    Thanks!
    Pjrm

    The instructions for setting this up should be available through Yahoo/SBC. Someone else may have the instructions for you, but take a look through the Help sections for Yahoo/SBC and you should find what you need.
    If you get stuck along the way, feel free to ask.
    ~Lyssa

  • Any penalty to using one big cluster in a state machine?

    I've been working on a lot of small/medium sized state machines lately, and I've gotten into a habit of putting most of my data (single values, arrays, strings, even LVOOP objects) in one cluster that I pass from state to state with a shift register.  I just unbundle the data I need in each state, work on it, and bundle it back in for the next state.  Part of me says that this is a bad idea - that I should separate the big cluster into a set of smaller clusters that group the data by logical categories.  Another part of me says that if I do it that way, I'm just creating needless clutter on my diagram.
    So my question is simply this - is there any significant performance penalty for using a single cluster rather than multiple clusters in this fashion?  I never run the whole cluster into subVIs, and I never split the main cluster wire, so it doesn't seem like there should be... but I've been wrong before!
    Thanks,
    Jason

    As Norbert mentioned, the answer to your Q is dependent on your data structure. Provided you can do all of your data manipulation "in-place" there should not be any issues. The more complex structures can still be handled "in-place", but you may have to use the in-place operations to achieve this effect. Depending how comfortable you are with those operators, they can complicate the appearance of your code.
    If you can't do everything "in-place" and you have a Super-Cluster, then it's time for me to quote Rolf again when he wrote "Once all of your physical memory is filled up with a single cluster, your application is probably going to suck."
    So let's say you have cleared all of the above hurdles and still want to use a single SR that has all data for all states. I ask you to carefully examine where the app may go in the future and what type of animal it could turn into. If there is even a small chance that the app may turn into something that non-computer users will use (which requires a robust app to protect itself from dumb users), the single-cluster approach is going to get in your way when the app gets big.
    1) If you have to add another field to the cluster, every function that uses that cluster should be re-tested. We have an app in-house that was developed by our customer and that we support. There is an 800+ step procedure required to re-verify the app!
    2) As more functionality is added you will add more states to your state machine. Personally, I cringe when I see the support developer have to choose from a list of 300 states when working with the state machine.
    3) You will have a hard time re-using code aside from cutting-and-pasting.
    4) When you unbundle, manipulate and replace, you are in danger of creating duplicate data which will impact performance.
    I never went through the formal IS training but my wife did, and she has given me the short version of how to normalize a DB. There is actually a science to the process that results in only related items being grouped together, and if you take it to the full extreme of a "fully normalized DB", there are absolutely no duplicates of data. For LV apps a fully normalized DB adds some overhead, so I don't go that far.
    The following will ignore using an Object Oriented approach and stick with old-school ideas.
    So after I analyze my data structures and group related items together and review who touches what when, THEN I try to wrap up the data in Action Engines. More often than not, the action will replace some or most of the work done in some states. From your Q it sounds like your "read-mod-replace" constructs can be moved into AEs with little effort.
    Now for OOP
    I'm still learning OO system design, but I have found myself "turning my apps inside out" with LVOOP. By this I mean rather than think that the data AND the function are inside the AE, the data is outside and acted on by what is inside the LVOOP methods. I have been amazed at the degree to which LVOOP can operate in-place, but I digress. There is some argument/design pattern that says that if you have a function like a test that uses other classes, then you can create a class for the test and slam all of the required objects into it.
    Done rambling for now. As usual, if there is anyone out there that wants to correct me on any of the above, please do so!
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Paralleling or making use of extra CPUs

    Hi,
    Is it possible to improve the performance of the queries by setting certain parameters at the database level, making use of the maximum CPU available, without changing existing queries? Our servers have 8 CPUs.
    Regards
    Vijay Salian

    sybrand_b wrote:
    cpu_count is determined by Oracle and can not be set.
    That's wrong. See the docs.
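    As a concrete illustration (a sketch only, with a made-up table name and example values): the usual way to let existing queries use more of the 8 CPUs without rewriting them is to set a degree of parallelism on the relevant objects, or to adjust the instance-level parallel execution parameters.

    -- See what the instance currently has; cpu_count is an ordinary init parameter
    -- (normally left at the value Oracle detects, although it can be set).
    SELECT name, value FROM v$parameter
    WHERE name IN ('cpu_count', 'parallel_max_servers');

    -- Give a large table a default degree of parallelism so existing queries
    -- against it can run in parallel without any SQL changes.
    ALTER TABLE sales PARALLEL 8;

    -- Cap the number of parallel execution servers at the instance level.
    ALTER SYSTEM SET parallel_max_servers = 16 SCOPE = BOTH;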

  • PrintDataGrid's DataGridColumn - Embedded image not printing when you use TextFlow in the item rende

    I'm printing a datagrid using something like  this...
    <mx:PrintDataGrid
      id="printDataGrid" 
      width="100%" 
      height="100%"
      showHeaders="false"
      borderVisible="false"
      horizontalGridLines="false"
      variableRowHeight="true"
      dataProvider="{titles}"
      >
      <mx:columns>
       <mx:DataGridColumn 
        itemRenderer="renderer.TitlePrintRenderer" 
        />
      </mx:columns>
    </mx:PrintDataGrid>
    TitlePrintRenderer.mxml has s:RichText component. I use  RichText's textFlow property to render the text. The approach is working fine  except that if the textFlow has embedded images (<img source=... />), the  images are not printed!
    Is this a bug? Is it a limitation? Has anyone come  across this issue?
    I'm using Flex SDK 4.5.1

    After struggling for 4+ days on using timer / events for printing PrintDataGrid with embedded images in RichText's textFlow, I tried your other suggestion... to convert <img> tags to InlineGraphicElement and give it Bitmap from image loaded from a .gif file. The approach works but the printout skips images in a few rows!
    I've this test case in which, every time I print, it skips printing image in the second row! I also implemented this approach in a more complex test case and depending on the total number of rows, it would skip printing image in different number of rows. I'm suspecting that even if you construct InlineGraphicElement from bitmap loaded from an image, PrintDataGrid's renderer still skips printing image intermittently.
    I would very much appreciate it if you could create small project from my following code and verify this behavior. I'm at my wit's end in getting this printing to work.
    PrintImagesTest.mxml
    =================
    <?xml version="1.0" encoding="utf-8"?>
    <s:Application
        xmlns:fx="http://ns.adobe.com/mxml/2009"
        xmlns:s="library://ns.adobe.com/flex/spark"
        xmlns:mx="library://ns.adobe.com/flex/mx"
        minWidth="955" minHeight="600"
        initialize="initData();"
        viewSourceURL="srcview/index.html"
        >
        <s:layout>
            <s:VerticalLayout
                paddingLeft="20" paddingRight="20"
                paddingTop="20" paddingBottom="20"
                />
        </s:layout>
        <mx:Button
            label="Print"
            click="printClickHandler();"
            />
        <fx:Script>
            <![CDATA[
                import flash.utils.setTimeout;
                import flashx.textLayout.elements.InlineGraphicElement;
                import flashx.textLayout.elements.ParagraphElement;
                import flashx.textLayout.elements.SpanElement;
                import flashx.textLayout.elements.TextFlow;
                import mx.collections.ArrayCollection;
                import mx.printing.*;
                import mx.utils.OnDemandEventDispatcher;
                public var contentData:ArrayCollection;
                private var embeddedImages:ArrayCollection;
                private var numberOfImagesLoaded:int;
                public var printJob:FlexPrintJob;
                public var thePrintView:FormPrintView;
                public var lastPage:Boolean;
                private var textFlowNS:Namespace = new Namespace("http://ns.adobe.com/textLayout/2008");
                public function initData():void {
                    contentData = new ArrayCollection();
                    var page:int = 0;
                    for (var z:int=0; z<20; z++) {
                        var content:Object = new Object();
                        content.srNo = z+1;
                        content.contentText =
                            "<TextFlow whiteSpaceCollapse='preserve' xmlns='http://ns.adobe.com/textLayout/2008'>" +
                            "<span>some text</span>" +
                            "<img width='53' height='49' source='assets/images/formula.gif'/>" +
                            "</TextFlow>";
                        contentData.addItem(content);
                    }
                }
                public function printClickHandler():void {
                    convertToTextFlow();
                }
                private function convertToTextFlow():void {
                    embeddedImages = new ArrayCollection();
                    numberOfImagesLoaded = 0;
                    for each (var contentElement:Object in contentData) {
                        extractImageInfo(contentElement.contentText);
                    }
                    if (embeddedImages.length > 0) {
                        loadImage(embeddedImages.getItemAt(0).source);
                    } else {
                        printData();
                    }
                }
                private function extractImageInfo(contentText:String):void {
                    var textXml:XML = new XML(contentText);
                    var imageList:XMLList = textXml.textFlowNS::img;
                    for each (var img:XML in imageList) {
                        var embeddedImage:Object = new Object();
                        embeddedImage.source = String(img.@source);
                        embeddedImage.width = parseInt(img.@width);
                        embeddedImage.height = parseInt(img.@height);
                        embeddedImages.addItem(embeddedImage);
                    }
                }
                private function loadImage(imageSource:String):void {
                    var loader:Loader = new Loader();
                    var urlRequest:URLRequest = new URLRequest(imageSource);
                    loader.load(urlRequest);
                    loader.contentLoaderInfo.addEventListener(Event.COMPLETE, imageLoaded);
                }
                private function imageLoaded(e:Event):void {
                    embeddedImages.getItemAt(numberOfImagesLoaded).bitmap = (Bitmap)(e.target.content);
                    embeddedImages.getItemAt(numberOfImagesLoaded).width = ((Bitmap)(e.target.content)).width;
                    embeddedImages.getItemAt(numberOfImagesLoaded).height = ((Bitmap)(e.target.content)).height;
                    ++numberOfImagesLoaded;
                    if (numberOfImagesLoaded < embeddedImages.length) {
                        loadImage(embeddedImages.getItemAt(numberOfImagesLoaded).source);
                    } else {
                        // all the images have been loaded... convert to textflow
                        buildContent();
                        printData();
                    }
                }
                private function buildContent():void {
                    var contentIndex:int = 0;
                    for each (var contentElement:Object in contentData) {
                        if (hasImage(contentElement.contentText)) {
                            buildTextFlow(contentElement, contentIndex);
                            ++contentIndex;
                        }
                    }
                }
                private function buildTextFlow(content:Object, contentIndex:int):void {
                    var textXml:XML = new XML(content.contentText);
                    var p:ParagraphElement = new ParagraphElement();
                    for each (var child:XML in textXml.children()) {
                        switch (child.localName()) {
                            case "span":
                                var span:SpanElement;
                                span = new SpanElement();
                                span.text = child;
                                span.fontSize = 10;
                                p.addChild(span);
                                break;
                            case "img":
                                var image:InlineGraphicElement;
                                image = new InlineGraphicElement();
                                image.source = embeddedImages.getItemAt(contentIndex).bitmap;
                                image.width = embeddedImages.getItemAt(contentIndex).width;
                                image.height = embeddedImages.getItemAt(contentIndex).height;
                                p.addChild(image);
                                break;
                        }
                    }
                    content.textFlow = new TextFlow();
                    content.textFlow.addChild(p);
                }
                private function hasImage(contentText:String):Boolean {
                    var textXml:XML = new XML(contentText);
                    var imageList:XMLList = textXml.textFlowNS::img;
                    if (imageList.length() > 0) {
                        return true;
                    } else {
                        return false;
                    }
                }
                private function printData():void {
                    printJob = new FlexPrintJob();
                    lastPage = false;
                    if (printJob.start()) {
                        thePrintView = new FormPrintView();
                        addElement(thePrintView);
                        thePrintView.width = printJob.pageWidth;
                        thePrintView.height = printJob.pageHeight;
                        thePrintView.printDataGrid.dataProvider = contentData;
                        thePrintView.showPage("single");
                        if (!thePrintView.printDataGrid.validNextPage) {
                            printJob.addObject(thePrintView);
                        } else {
                            thePrintView.showPage("first");
                            printJob.addObject(thePrintView);
                            while (true) {
                                thePrintView.printDataGrid.nextPage();
                                thePrintView.showPage("last");
                                if (!thePrintView.printDataGrid.validNextPage) {
                                    printJob.addObject(thePrintView);
                                    break;
                                } else {
                                    thePrintView.showPage("middle");
                                    printJob.addObject(thePrintView);
                                }
                            }
                        }
                        removeElement(thePrintView);
                    }
                    printJob.send();
                }
            ]]>
        </fx:Script>
    </s:Application>
    FormPrintView.mxml
    ===============
    <?xml version="1.0"?>
    <mx:VBox
        xmlns:fx="http://ns.adobe.com/mxml/2009"
        xmlns:s="library://ns.adobe.com/flex/spark"
        xmlns:mx="library://ns.adobe.com/flex/mx"
        xmlns:MyComp="myComponents.*"
        backgroundColor="#FFFFFF"
        paddingTop="50" paddingBottom="50" paddingLeft="50"
        >
        <fx:Script>
            <![CDATA[
                import mx.core.*;
                public function showPage(pageType:String):void {
                    validateNow();
                }
            ]]>
        </fx:Script>
        <mx:PrintDataGrid
            id="printDataGrid"
            width="60%"
            height="100%"
            showHeaders="false"
            borderVisible="false"
            horizontalGridLines="false"
            variableRowHeight="true"
            >
            <mx:columns>
                <mx:DataGridColumn
                    itemRenderer="MyPrintRenderer"
                    />
            </mx:columns>
        </mx:PrintDataGrid>
    </mx:VBox>
    MyPrintRenderer.mxml
    =================
    <?xml version="1.0" encoding="utf-8"?>
    <s:MXDataGridItemRenderer
        xmlns:fx="http://ns.adobe.com/mxml/2009"
        xmlns:s="library://ns.adobe.com/flex/spark"
        xmlns:mx="library://ns.adobe.com/flex/mx"
        xmlns:bslns="com.knownomy.bsl.view.component.*"
        >
        <s:layout>
            <s:VerticalLayout
                paddingLeft="5"
                paddingRight="5"
                paddingTop="3"
                paddingBottom="3"
                gap="5"
                horizontalAlign="left"
                clipAndEnableScrolling="true"
                />
        </s:layout>
        <fx:Declarations>
        </fx:Declarations>
        <s:HGroup
            width="100%"
            gap="5"
            verticalAlign="middle"
            >
            <s:Label
                text="{data.srNo}"
                color="0x000000"
                fontFamily="Verdana"
                fontSize="10"
                />
            <s:RichText
                id="title"
                width="700"
                textFlow="{myTextFlow}"
                color="0x000000"
                fontFamily="Verdana"
                fontSize="10"
                />
        </s:HGroup>
        <fx:Metadata>
        </fx:Metadata>
        <s:states>
            <s:State name="normal" />
            <s:State name="hovered" />
            <s:State name="selected" />
        </s:states>
        <fx:Script>
            <![CDATA[
                import flashx.textLayout.elements.TextFlow;
                [Bindable]
                private var myTextFlow:TextFlow;
                override public function set data(value:Object) : void {
                    if (value != null) {
                        super.data = value;
                        myTextFlow = data.textFlow;
                    }
                }
            ]]>
        </fx:Script>
    </s:MXDataGridItemRenderer>

  • Why can't I use my lights in the render feature in PS CC

    I am having trouble with the 3D feature and the Lights in the render feature in PS CC.  I have contacted the technicians on a number of occasions and each time they say it's my graphics card.  My PC is only months old, and the firm that built it has checked out my graphics card and says there's nothing wrong with it.  When I first had PS CC things worked fine; it's only lately that this problem has developed. If the technicians can't help me, what chance do I have on my own? And now I'm renting something that I can't fully use.  Eve

    Great, now you've posted a completely blank post! 

  • Can I change which nic is used for a cluster network when more than one nic on the node is on same subnet?

    This cluster has been up and working for maybe a year and a half the way it is.  There are two nodes, running Server 2012.  In addition to a couple network interfaces devoted to VM traffic each node has:
    Management Interface: 192.168.1.0/24
    iSCSI Interface: 192.168.1.0/24
    Internal Cluster Interface: 192.168.99.0/24
    The iSCSI interfaces have to be on the same subnet as the management interfaces due to limitations in the shared storage.  Basically, if I segregate it I wouldn't be able to access the shared storage itself for any kind of management or maintenance tasks.
    I have restricted the iSCSI traffic to only use the one interface on each cluster node but I noticed that one of the cluster networks is connecting the management interface on one cluster node member with the iSCSI interface on the other cluster node member. 
    I would like for the cluster network to be using the management interface on both cluster node members so as not to interfere with iSCSI traffic.  Can I change this?
    Binding order of interfaces is the same on both boxes but maybe I did that after I created the cluster, not sure. 

    Hi MnM Show,
    Tim is correct: if you are using iSCSI storage and using the network to get to it, it is recommended that the iSCSI storage fabric have a dedicated and isolated network. This network should be disabled for cluster communications so that it is dedicated to storage-related traffic only.
    This prevents intra-cluster communication as well as CSV traffic from flowing over the same network. During the creation of the cluster, iSCSI traffic will be detected and the network will be disabled from cluster use. This network should be set to lowest in the binding order.
    The related article:
    Configuring Windows Failover Cluster Networks
    http://blogs.technet.com/b/askcore/archive/2014/02/20/configuring-windows-failover-cluster-networks.aspx

  • Trying to figure out whether I can use an ASA cluster in Transparent mode to facilitate VRF based network ??

    Hi Guys,
    I had to re-post this here because I did not get any comments earlier.. hopefully I'll get something here.. :)
    I'm investigating the ways that I can use 2 x ASA (5525x) to accommodate a multi-tenancy situation with overlapping addresses. Unfortunately, in this particular scenario we have to stick with 5525x firewalls.
    The ASAs are going to be placed in the north-south traffic path between 2 routers, and these routers need to be configured with multiple VRFs to segregate the traffic for each tenant with overlapping IP subnets (we are not looking at NAT as a workaround for the time being).
    As we know, this ASA model won't support VRFs, so we can't use the ASA as an intermediary routing hop, and therefore that is not an option. Using security contexts per VRF does not seem scalable enough (correct me if I'm wrong). So my thinking is that if we put the ASAs into transparent mode and just use them as a layer 2 interconnect (configured with different VLANs connecting the VRFs served by the top and bottom routers), I should be able to go up to a maximum of 50 VRFs (since the 5525x only supports 200 VLANs).
    I'm also planning to use the 2 ASAs in a cluster mode to aggregate the bandwidth of both ASAs for better throughput.
    So I need to clarify following with you guys.. 
    1) Can I actually do this or am I missing something.
    2) Are there any limitations that I might run in to with this setup
    3) Is there anyone out there who's doing the same thing or can you think of a better way to tackle this scenario (with same hardware and requirements)
    4) Instead of using clustering, can I use a simple Active/Standby pair and still configure transparent mode and use it that way?
    Appreciate your input.
    Thanks
    Shamal 

    There is a limitation on how many contexts you can have, which depends on the license you have.  This is quite possible with ASA multi-context routed mode and even with multi-context transparent mode.  You can have overlapping IPs in each context without the need for NAT, as long as you have a unique MAC address for each sub-interface.
    Thanks

  • Making requests to a cluster

    Hi,
    I'm a bit confused by the General tab in cluster configuration.
    It contains the following fields:
    - Name
    - Cluster Address
    - Default Load Algorithm
    - Service Age Threshold
    I understand that the hostname/IP(s) that map to one or more servers in the cluster go in Cluster Address. But, if that's the case, what party is responsible for scheduling requests to servers in the cluster, using the algorithm in Default Load Algorithm? And how does one connect to that party, and on what port?
    If the answer is that you have to use your own policy (software or hardware load balancing), then what is the purpose of the Default Load Algorithm field in WLS 6.1?
    You already configure which servers are in the cluster, so WLS knows this already. So why does one have to specify the IPs again in Cluster Address? It seems to me, and from other messages in this forum, that filling out this tab doesn't have much benefit at all.
    Thanks in advance,
    Gary
    FT.com

    The cluster address is the DNS round-robin address that clients use in their URL to establish their initial connection. The cluster address is currently only used by WL in two limited cases:
    EJB home handles -- These contain info that can be serialized and passed to a client which currently may not have a connection to the cluster. The client can use the handle to find its associated EJB.
    Entity Bean fail-over -- Allows client to automagically get back to the cluster if a connection to the cluster fails.
    Tom
