2012 NLB Best Practice (Single vs Multiple NICs)?

Our environment has used an NLB configuration with two NICs for years: one NIC for the host itself and one for the NLB. We have also been running the NLB in multicast mode. Starting with 2008, we began adding the cluster's MAC address as an ARP entry on our layer-three switch. Each server participating in the NLB is on VMware.
Can someone advise what the best procedure is for handling NLB these days? Although initial tests with one NIC seem to be working, I notice that we get a popup warning on the participant servers when launching NLB Manager - "Running NLB Manager on a system with all networks bound to NLB might not work as expected" - if they are set to run in unicast mode.
With that said, should we not be running multicast? Will that present problems down the road?

Hi enoobmot11,
You can refer to the following KB articles and the VMware requirement KBs:
Network Load Balancing Best practices
https://technet.microsoft.com/en-us/library/cc740265%28v=ws.10%29.aspx?f=255&MSPPError=-2147217396
Multiple network adapters
https://technet.microsoft.com/en-us/library/cc784848(v=ws.10).aspx
The VMware KB:
Microsoft Network Load Balancing Multicast and Unicast operation modes (1006580)
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006580
Sample Configuration - Network Load Balancing (NLB) Multicast Mode Configuration (1006558)
http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1006558
I’m glad to be of help to you!
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

Similar Messages

  • Best Practice for using multiple models

    Hi Buddies,
Can you tell me the best practices for using multiple models in a single WD application?
I am using 3 RFCs in a single application for my function. Each time I import that RFC model under WD -> Models, and I bind each model separately to the Component Controller. Is this the right way to implement multiple models in a single application?

It very much depends on your design, but one RFC per model is definitely a no-no.
Refer to this document to understand how you should use the model in the most efficient way.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/705f2b2e-e77d-2b10-de8a-95f37f4c7022?quicklink=events&overridelayout=true
    Thanks
    Prashant
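As a language-neutral illustration of the advice above (reuse one shared model rather than creating one model per RFC), here is a hedged Python sketch. `BackendModel`, `FakeConnection`, and the RFC names are hypothetical stand-ins, not Web Dynpro APIs:

```python
# Illustrative sketch: one model object wrapping a single backend
# connection and exposing several remote function calls, instead of
# one model (and one connection) per RFC. All names are made up.

class BackendModel:
    """A single model shared by all RFCs of one application."""

    def __init__(self, connection):
        self.connection = connection          # one shared connection

    def call(self, rfc_name, **params):
        # In a real Web Dynpro app this would execute the RFC; here we
        # just delegate to the connection to show the shared-model pattern.
        return self.connection.execute(rfc_name, params)

class FakeConnection:
    """Stand-in backend that records which RFCs were invoked."""
    def __init__(self):
        self.calls = []
    def execute(self, name, params):
        self.calls.append(name)
        return {"rfc": name, "params": params}

conn = FakeConnection()
model = BackendModel(conn)   # bound once, e.g. to the component controller
model.call("Z_GET_CUSTOMER", id=42)
model.call("Z_GET_ORDERS", id=42)
model.call("Z_GET_INVOICES", id=42)
# all three RFCs go through the same model and connection
```

The point of the sketch is only structural: three remote calls, one model binding, instead of three separate model objects bound separately.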

  • Best practice for running multiple sites on 1 CF install?

    Hi-
    I'm setting up a new hosting environment (Windows Server 2008 Standard 64 bit VPS  configuration, MySQL, IIS 7, CF 9)
    Has anyone seen any docs or can anyone suggest best practices for configuring multiple sites in this environment? At this point I'm thinking simple is best, one new site in IIS for each client (domain) and point it to CF.
    Given this environment, is anyone aware of any gotchas within the setup of CF 9 on IIS 7?
    Thank you in advance,
    Rich

    There's nothing wrong with that approach. You can run as many IIS sites as you like against a single CF install.
    As for installing CF on IIS 7, I recommend the following: install CF 9 without connecting it to IIS, then install the 9.0.1 upgrade and any hotfixes, then connect CF to IIS using the web server configuration utility. This will keep you from having to install the IIS 6 compatibility layer that's needed with CF 9 but not with CF 9.0.1.
    Dave Watts, CTO, Fig Leaf Software
    http://www.figleaf.com/
    http://training.figleaf.com/

  • Best Practice in maintaining multiple apps and user logins

    Hi,
    My company is just starting to use APEX, and none of us (the developers) have worked on this before either. It is greatly appreciated if we can get some help here.
    We have developed quite a few applications in the same workspace. Now, we are going to setup UAT and PRD environments and also trying to understand what the best practice is to maintain multiple apps and user logins.
    Many of you have already worked in an APEX environment for some time; can you please provide some input?
    Should we create multiple apps(projects) for one department or should we create one app for one department?
    Currently we have created multiple apps for one department, but, we are not sure if a user can login once and be able to access to all the authenticated apps.
    Thank you,
    LC

    LC,
    I am not sure how much of this applies to your situation - but I will share what I have done.
    I built a single 700+ page application for my department - other areas create separate smaller applications.
    The approach I chose is flexible enough to accommodate both.
    I built a separate access control application (Control) in its own schema.
    We use database authentication for this app - an Oracle account is required.
    We prefer to use LDAP for authentication for the user applications.
    For users for whom LDAP is not an option, an encrypted password is stored and reset via email.
    We use position-based security - privileges are based on job functions.
    We have applications, applications have roles, and roles have access to components (tabs, buttons, unmasked card numbers, etc.).
    We have positions that are granted application roles - they inherit access to the role components.
    Users have a name, a login, a position, and a site.
    We have users on both the East Coast and the West Coast; we use the site in a sys_context and views to emulate VPD. We also use the role components, sys_contexts, and views to mask/unmask card numbers without rewriting the dependent objects (queries, reports, views, etc.).
    The position-based security has worked well: when someone moves, we change the position they are assigned to and they immediately have the privileges they need.
    If you are interested I can provide more detail.
    Bill
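Bill's position-based model (applications have roles, roles have components, positions are granted roles, and users hold a position) can be sketched generically. Everything below - the role names, components, and positions - is illustrative, not his actual schema:

```python
# Illustrative position-based security sketch: roles own components,
# positions are granted roles, users inherit access via their position.

ROLE_COMPONENTS = {
    "ar_clerk":   {"tabs:invoices", "buttons:post_payment"},
    "ar_manager": {"tabs:invoices", "buttons:post_payment",
                   "cards:unmasked_numbers"},
}

POSITION_ROLES = {
    "clerk":   {"ar_clerk"},
    "manager": {"ar_clerk", "ar_manager"},
}

USERS = {
    # name -> (login, position, site)
    "Alice": ("alice01", "clerk", "east"),
    "Bob":   ("bob02", "manager", "west"),
}

def components_for(user: str) -> set:
    """Everything a user can reach via position -> roles -> components."""
    _login, position, _site = USERS[user]
    allowed = set()
    for role in POSITION_ROLES.get(position, ()):
        allowed |= ROLE_COMPONENTS.get(role, set())
    return allowed

def can_access(user: str, component: str) -> bool:
    return component in components_for(user)
```

Note how the "when someone moves" case falls out naturally: changing the position string in `USERS` is the entire privilege change.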

  • SQL Server 2012 Infrastructure Best Practice

    Hi,
    I would welcome some pointers (direct advice or pointers to good web sites) on setting up a hosted infrastructure for SQL Server 2012. I am limited to using VMs on a hosted site. I currently have a single 2012 instance with DB, SSIS, SSAS on the same server.
    I currently RDP onto another server which holds the BI Tools (VS2012, SSMS, TFS etc), and from here I can create projects and connect to SQL Server.
    Up to now, I have been heavily restricted by the (shared tenancy) host environment due to security issues, and have had to use various local accounts on each server. I need to put forward a preferred environment that we can strive towards, which is relatively
    scalable and allows me to separate Dev/Test/Live operations and utilise Windows Authentication throughout.
    Any help in creating a straw man would be appreciated.
    Some of the things I have been thinking through are:
    1. Separate server for Live Database, and another server for Dev/Test databases
    2. Separate server for SSIS (for all 3 environments)
    3. Separate server for SSAS (not currently using cubes, but this is a future requirement. Perhaps do not need dedicated server?)
    4. Separate server for Development (holding VS2012, TFS2012, SSMS etc). Is it worth having a local SQL Server DB on this machine? I was unsure where SQL Server Agent jobs are best run from, i.e. from the Live DB only, from another SQL Server instance, or utilising SQL Server Agent on all (Live, Test and Dev) SQL Server DB instances. Running from one place would allow me to have everything executable from one place, with centralised package reporting etc. I would also benefit from some license cost reductions (Kingsway tools).
    5. Separate server to hold SSRS, Tableau Server and SharePoint?
    6. Separate Terminal Server or integrated onto Development Server?
    7. I need server to hold file (import and extract) folders for use by SSIS packages which will be accessible by different users
    I know (and apologise that) I have given little info about the requirement. I have an opportunity to put forward my requirement for x months into the future, and there is a mass of info out there which is not distilled in a way I can utilise. It would be helpful to know what I should aim for in terms of separate servers for the different services and/or environments (Live/Test/Dev), and specifically best practice for where SQL Server Agent jobs should be run from, and perhaps a little info on how best to control deployment/change control. (Note my main interest is not in application development; it is in setting up packages to load/refresh data marts for reporting purposes.)
    Many thanks,
    Ken

    Hello,
    In all cases, consider that having a separate server may increase licensing or hosting costs. Please allow me to recommend Windows Azure for cloud services.
    Answers to your numbered points:
    1. A separate server for the Live database and another for Dev/Test is always a best practice.
    2. Having SSIS on a separate server allows you to isolate import/export packages, but it may increase network traffic between servers; I don't know whether your provider charges for incoming or outgoing traffic.
    3. SSAS on a separate server is certainly a best practice too. It contributes to better performance and scalability.
    4. SQL Server Developer Edition costs only about $50. Are you talking about centralizing job scheduling on an on-premises computer rather than having jobs enabled on a cloud service? Consider PowerShell to automate tasks.
    5. If you will use Reporting Services in SharePoint integrated mode, you should install Reporting Services on the same server where SharePoint is located.
    6. SQL Server can coexist with Terminal Services, with the exception of clustered environments.
    7. SSIS packages may compete with users for access to the files. Copying the files to a disk resource available to the SSIS server may be a better solution.
    A few more things to consider:
    Performance of the storage subsystem on the cloud service.
    How many cores? How much RAM?
    Creating a Domain Controller or using Active Directory services.
    These resources may be useful.
    http://www.iis.net/learn/web-hosting/configuring-servers-in-the-windows-web-platform/sql-2008-for-hosters
    http://azure.microsoft.com/blog/2013/02/14/choosing-between-sql-server-in-windows-azure-vm-windows-azure-sql-database/
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • SCOM 2012 Agent - Best Practices with Base Images

    I've read through the SCOM 2012 agent installation methods TechNet article, as well as how to install the SCOM 2012 agent via the command line, but I don't see any best practices regarding how to include the SCOM 2012 agent in a base workstation image. My understanding is that the SCOM agent's unique identifier is created at the time of client installation; is this correct? I need to ensure that this is a supported configuration before I can recommend it.
    If it is supported, and it does work the way I think it does, I'm trying to find a way to strip out the unique information so that a new client GUID will be created after the machine is sysprepped, similar to how the SCCM client should be stripped of unique data when preparing a base image.
    Has anyone successfully included a SCOM 2012 (or 2007, for that matter) agent in their base image?
    Has anyone successfully included a SCOM 2012 (or 2007 for that matter) agent in their base image?
    Thanks, 
    Joe

    Hi
    It is fine to build the agent into a base image but you then need to have a way to assign the agent to a management group. SCOM does this via AD Integration:
    http://technet.microsoft.com/en-us/library/cc950514.aspx
    http://blogs.msdn.com/b/steverac/archive/2008/03/20/opsmgr-ad-integration-how-it-works.aspx
    http://blogs.technet.com/b/jonathanalmquist/archive/2010/06/14/ad-integration-considerations.aspx
    http://thoughtsonopsmgr.blogspot.co.uk/2010/07/active-directory-ad-integration-when-to.html
    http://technet.microsoft.com/en-us/library/hh212922.aspx
    http://blogs.technet.com/b/momteam/archive/2008/01/02/understanding-how-active-directory-integration-feature-works-in-opsmgr-2007.aspx
    You have to be careful in environments with multiple forests if no trust exists.
    http://blogs.technet.com/b/smsandmom/archive/2008/05/21/opsmgr-2007-how-to-enable-ad-integration-for-an-untrusted-domain.aspx
    http://rburri.wordpress.com/2008/12/03/untrusted-ad-integration-suppress-misleading-runas-alerts/
    You might also want to consider group policy or SCCM as methods for installing agents.
    Cheers
    Graham

  • Best practice for deleting multiple rows from a table , using creator

    Hi
    Thank you for reading my post.
    what is the best practice for deleting multiple rows from a table using a rowSet?
    for example how i can execute something like
    delete from table1 where field1= ? and field2 =?
    Thank you

    Hi,
    Please go through the AppModel application which is available at: http://developers.sun.com/prodtech/javatools/jscreator/reference/codesamples/sampleapps.html
    The OnePage Table Based example shows exactly how to delete multiple rows from a data table.
    Hope this helps.
    Thanks,
    RK.
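The parameterized delete from the question (`delete from table1 where field1 = ? and field2 = ?`) can also be sketched outside Creator. The following uses Python's stdlib `sqlite3` purely for illustration; the rowSet API differs, but the pattern of running one parameterized DELETE per key pair is the same:

```python
import sqlite3

# Set up a throwaway table matching the question's shape.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (field1 TEXT, field2 INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)",
                 [("a", 1), ("a", 2), ("b", 1), ("b", 2)])

# executemany runs the same parameterized DELETE once per key pair,
# so multiple rows are removed with a single prepared statement.
keys_to_delete = [("a", 1), ("b", 2)]
conn.executemany("DELETE FROM table1 WHERE field1 = ? AND field2 = ?",
                 keys_to_delete)
conn.commit()

remaining = conn.execute(
    "SELECT field1, field2 FROM table1 ORDER BY field1").fetchall()
# remaining -> [('a', 2), ('b', 1)]
```

Using placeholders rather than string concatenation also avoids SQL injection, which applies equally to the rowSet case.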

  • Best Practices for FSCM Multiple systems scenario

    Hi guys,
    We have a scenario to implement FSCM credit, collections and dispute management solution for our landscape comprising the following:
    a 4.6c system
    a 4.7 system
    an ECC 5 system
    2 ECC6 systems
    I have documented my design, but would like to double check and rob minds with colleagues regarding the following areas/questions.
    Business partner replication and synchronization: what is the best practice for the initial replication of customers in each of the different systems to business partners in the FSCM system? (a) for the initial creation, and (b) for on-going synchronization of new customers and changes to existing customers?
    Credit Management: what is the best practice for update of exposures from SD and FI-AR from each of the different systems? Should this be real-time for each transaction from SD and AR  (synchronous) or periodic, say once a day? (assuming we can control this in the BADI)
    Is there any particular point to note in dispute management?
    Any other general note regarding this scenario?
    Thanks in advance. Comments appreciated.

    Hi,
    I guess when you have information that SAP can read and act on, the interface has to be asynchronous (from non-SAP to FSCM).
    But when the credit analysis is done by a non-SAP party such as Experian, SAP sends the information about invoices paid and not paid, and this non-SAP party returns a rating for the customer. All banks and big companies in the world do the same, and for this you have the synchronous interface. This interface will update FSCM-CR (Credit), blocking or not blocking the account and decreasing or increasing its limit amount to buy.
    So, for those 1,000 sales orders, you'll have to think with PI about how to create an interface for this volume. What parameters does SAP have to check? Is there a time interval to receive and send back? Will it be synchronous or asynchronous?
    Contact your PI team to help think through this information exchange.
    Have I understood your question correctly?
    JPA

  • PS - Best Practices - Single version of the truth

    Hi Gurus
    I'm very new to SAP PS and have previously logged a similar thread on this subject.
    There appears to be mixed locations/naming standards for the various Best Practice Guides which I'd like some clarification on.
    In some instance J* Type Best Practice Guides are used, in other case 10* Type Best Practice Guides are used.
    Yet they may contain very similar build instructions. Is anyone able to guide me on which versions I should be looking at? Is there a reason or rule to understand these naming conventions?
    I'm coming from a CRM background and our best practice guides are very clearly named.
    Many Thanks in advance
    Panduranga

    SAP PS functionality is probably not as straightforward as CRM.
    It spreads across various modules - and across different industries - and has to deal with a variety of combinations, e.g. country-specific ones. Just imagine that PS supports a project manager - whatever you define as a project can be managed using PS.
    There can be various scenarios -
    is it a cost project?
    are you capitalising your costs?
    is there any internal production involved?
    how are you procuring materials?
    who are the resources doing the work - internal or external?
    All of these impact the project business process, and hence for each such scenario the best practices have to be selected.
    So go through all the best practices available and then select the practice closest to your business process.

  • Best Practice: One Library, Multiple Macs ?

    Despite searching, I've yet to find a definitive answer to this one - so any advice would be appreciated....
    Having downloaded the trial, I very much like Aperture, but my decision to purchase and continue using it hinges very much on whether or not I can access the same library from multiple machines.
    Here is the scenario:
    My 'studio' currently has a Quad G5 (soon to be upgraded to a MacPro), with ample Firewire storage, including 2 new 500Gb Western Digital MyBook drives. I now also have a MacBook Pro to allow me to work away from the studio.
    So far, I'm using one of the 500Gb drives to hold everything I need - 'work in progress' documents, images etc. - with this being SuperDuper! smart-updated daily to the second drive. This allows me to take drive 1 away at night and work from home, or go out of the office with the MacBook Pro and have all my media to hand, and 'synchronise' upon return the next day or whenever.
    I think what I'd like to be able to do, is set up Aperture on the G5 and get all my images sorted and stored into a new library (out of iPhoto and various Finder folders etc) stored on FW Drive 1 along with the rest of my stuff. This would semi-automatically be backed up to FW Drive 2 as mentioned above.
    However, I want to be able to access this library with Aperture on the MacBook Pro when I'm not in the office - will this work? I appreciate that I'll need 2 licenses of Aperture (one for the desktop, and one for the laptop) but my concern is whether or not both copies of Aperture can use the same library....... I wonder if the Aperture prefs would prevent this from working or screw it up completely.
    If this ain't gonna work - what other options do I have !?
    MacBook Pro 15"   Mac OS X (10.4.8)  

    Not sure this will help but this is what I have decided to do.
    System: MacBook Pro and a G5. Storage: 2 Western Digital 160 GB USB 2.0 Passport Portable drives, 1 OWC 160 GB Firewire 800 (for backup of the internal drive on my MacBook Pro), and 5 500GB Western Digital MyBook Pro FW drives (attached to my G5). One Aperture library resides on my MacBook Pro internal drive, and a yearly (like 2006) Aperture library resides on one 500 GB WD MyBook Pro.
    In studio, I shoot wirelessly and transfer by FTP to the MBP, and using Aperture Hot Folder I move the images into Aperture to show the client during and at the end of the shoot. They are managed, but sadly the files are not named by studio convention. Once the client leaves, I delete the temporary project and reimport the pictures, renaming them using the studio format, and they are managed. At this point the pictures reside on the internal drive of my MBP.
    I do the majority of the clean up of the images at this point (contrast, exposure, saturation, sharpening, leveling, blemish corrections, etc, keywording). Once done, I export the project to my primary WD Passport drive and then using the old fashion network strategy of walking, take the WD Passport drive to the G5 and import the project in G5 library converting them to referenced files so I have Finder based architecture for all my RAW files.
    Once I know the files are on the G5, I go back to my MBP and convert the managed pictures to referenced pictures, in the process moving them off my internal drive and onto the WD 160 USB 2.0 drive in a Finder-based architecture. This keeps my library on my MBP pretty small and allows me to carry 160 GB worth of pictures that I can work on at any time or place. (Actually, I will probably pick up a 3rd 160GB Passport soon, like this weekend, so I will be able to carry 320 GB of pictures with me.)
    Now it gets nasty because I have two sets of pictures, one on my MacBook Pro and one on my G5. Again using old fashion strategies, I decided that the work flow could only go one way... from MBP to G5 and never (or very rarely) the opposite direction. The other nasty is how to deal with changes made after the first transfer of the project to the G5. The vast majority of those changes go through Photoshop and by default they become managed files. So for a particular project when I start to do additional editing, I bring those images into an album (let's say "Selected pictures") for that project and edit away. Once I am really done, I generate a new project (Original project is 070314 and the new project would be 070314 Update) and I drag the album to the new project. The nice thing is the primary pictures remain in the original project when I move the album. I then export Project Update consolidating images and import that into the library on the G5. Lastly, I relocated masters of the managed files to the WD Passport drive. I pay the penalty of having a few replicates of a couple of pictures and the architecture of my G5 library is not identical to my MBP library, but I have all of my primary images plus CS2 changes available to my MBP and my G5.
    OK..... my library on my MBP internal is backed up to the OWC drive using Superduper so I always have a current replicate of that library. The library and pictures on the 500 GB WD Mybook Pro are backed up 2-3 times per week to a second 500 GB WD MyBook Pro.
    In case my G5 fails, I can access the library by simply attaching my MBP to the FireWire chain and double-clicking on the library icon. This forces Aperture to open the library on the 500 GB WD MyBook Pro drive. In case my MBP fails, I can boot my MBP or G5 off the OWC drive.
    Principles: A one-library solution will never work, because eventually, even with referenced images, it will become too large for a single drive. So start now with a strategy of smaller library structures. I use a yearly system, but others are possible. My yearly system consists of my library plus referenced images on one 500 GB drive, plus a second drive to which I back up. If I run out of space, I will buy two more drives, perhaps moving to TB drives.
    Don't try to store very many images on your internal drive of any computer. Develop a methodology of keeping the images on external (FW or eSATA) drives where capacity can be readily expanded.
    Keep your MBP library pristine by regularly moving managed files over to referenced files on a bus powered portable drive.
    Make sure you regularly back up both systems.
    Hope this helps.
    steven

  • Best practice to serve multiple companys

    Hello,
    I have developed a web application, want to serve it to multiple companys. What I am thinking is:
    - Each company will have its own URL, e.g.
    http://www.xyz.com/companya/webapp
    http://www.xyz.com/companyb/webapp
    - Each company will have its own resource , like mysql db connection pool
    - Each company will connect to its own database.
    Can the above be implemented using Sun Applicaton Server? Is it just configuration issue or require some changes in my application design.
    Best regards,
    - Joe

    Hi Joe!
    You can easily set up multiple domains in your application server and use a different one for each company. This simplifies things a lot, as you can leave your application as it is and just change the resources in each application server instance. But I don't know how much overhead and load this will produce - give it a try.
    On the other hand, you can set up all the different resources in one instance and deploy a separate copy for each company - but then you have to provide a specific web.xml for each copy, and updating isn't as easy as with a single package.
    But maybe you could fetch configs from a database table to keep modifications small. Maybe you can use the URL to distinguish. Just a few thoughts...
    Cheers,
    Jan
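Jan's last idea - using the URL to pick per-company resources from a small config table - might be sketched like this. The company names, pool names, and the `resources_for` helper are all made up for illustration:

```python
# Hypothetical sketch: one deployed application serves every company by
# mapping the first URL path segment to that company's resources
# (in practice a JDBC connection pool name in the app server).

COMPANY_RESOURCES = {
    "companya": {"db_url": "jdbc:mysql://dbhost/companya", "pool": "pool_a"},
    "companyb": {"db_url": "jdbc:mysql://dbhost/companyb", "pool": "pool_b"},
}

def resources_for(request_path: str) -> dict:
    """Map e.g. '/companya/webapp/page' to company A's resources."""
    segment = request_path.strip("/").split("/")[0]
    try:
        return COMPANY_RESOURCES[segment]
    except KeyError:
        raise ValueError(f"unknown company in URL: {request_path!r}")

cfg = resources_for("/companya/webapp/orders")
# cfg["pool"] is now the pool to use for company A's database
```

The same lookup works whether the table lives in code, a properties file, or a database, which keeps the application itself unchanged as companies are added.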

  • Best Practice to load multiple Tables

    Hi,
    I am trying to load around 9-10 tables in SQL Server; all of these belong to a single project.
    The table loads are just SELECT * INTO the destination table, using some joins.
    Is it better to have one stored proc to load all these tables, or individual stored procs for each?
    All these tables are independent to each other.
    Appreciate your help !

    If you are using SQL Server 2008 or later, take a look at these links (minimal logging):
    http://blogs.msdn.com/sqlserverstorageengine/archive/2008/03/23/minimal-logging-changes-in-sql-server-2008-part-1.aspx
    http://blogs.msdn.com/sqlserverstorageengine/archive/2008/03/23/minimal-logging-changes-in-sql-server-2008-part-2.aspx
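On the one-proc-versus-many question: since the table loads are independent, one option is to run each load in its own transaction so a failure in one does not block the rest. A minimal sketch, using Python's stdlib `sqlite3` as a stand-in for SQL Server (the table names and queries are invented):

```python
import sqlite3

# Each "SELECT ... INTO"-style load runs in its own transaction, so the
# failing load rolls back alone while the others still commit.
LOADS = {
    "dst_good": "CREATE TABLE dst_good AS SELECT * FROM src",
    "dst_bad":  "CREATE TABLE dst_bad AS SELECT * FROM no_such_table",
}

def run_loads(conn, loads):
    results = {}
    for table, sql in loads.items():
        try:
            with conn:                 # commit on success, rollback on error
                conn.execute(sql)
            results[table] = "ok"
        except sqlite3.Error as exc:
            results[table] = f"failed: {exc}"
    return results

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (x INTEGER)")
conn.execute("INSERT INTO src VALUES (1), (2)")
status = run_loads(conn, LOADS)
# dst_good is loaded even though dst_bad fails
```

In T-SQL the equivalent would be one stored procedure (or Agent job step) per table, or a driver proc that wraps each load in its own TRY/CATCH, rather than one monolithic transaction.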
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Best practice for displaying multiple subpanels?

    I have about 20 different subpanels to display controls for different functional groups.   What is considered the best way to represent these on the main front panel?  For example:
    - tab control with a sub-panel on each tab?
    - Single subpanel using Insert/Remove methods to display the selected subpanel
      (select visible panel with buttons or ring menu?)
    - some other way?
    Considering from both code development/maintenance point of view and user experience point of view, what do you consider the best way to implement this?
    Thanks and Best Regards,
    -- J.
    Message Edited by JeffBuckles on 06-18-2009 01:45 PM

    Hello,
    It is hard to determine what is best for your specific type of information. In the end, they all do the same thing, but what is best for your users? What looks the best? Which provides the easiest accessibility? Which do you find to be most intuitive? There are instances where any number of these methods might work for a particular application and would not be as good in another one. This all depends on how you would like to present the information to the user and also the level of accessibility that you would like. If there was only one best way to do this, why even give options?
    -Zach
    Certified LabVIEW Developer

  • Hyper V 2012 Backup - best practice

    I'm setting up a new Hyper-V Server 2012 and I'm trying to understand how best to configure it for backups and restores using Windows Server Backup. I have 5 VMs running, and each has its own iSCSI drive.
    I would like to back up the VMs to an external USB disk. I have the choice of backing up the Hyper-V host using a child partition snapshot, or backing up the VM folder with the config and the VHD files (is it right that Windows overwrites the old backup?).
    What's the difference between the two methods? Is incremental backup possible with VMs?
    thanks

    Hi,
    There are two basic methods you can use to perform a backup. You can:
    Perform a backup from the server running Hyper-V.
    We recommend that you use this method to perform a full server backup because it captures more data than the other method. If the backup application is compatible with Hyper-V and the Hyper-V VSS writer, you can perform a full server backup that helps protect
    all of the data required to fully restore the server, except the virtual networks. The data included in such a backup includes the configuration of virtual machines, snapshots associated with the virtual machines, and virtual hard disks used by the virtual
    machines. As a result, using this method can make it easier to recover the server if you need to, because you do not have to recreate virtual machines or reinstall Hyper-V.
    Perform a backup from within the guest operating system of a virtual machine.
    Use this method when you need to back up data from storage that is not supported by the Hyper-V VSS writer. When you use this method, you run a backup application from the guest operating system of the virtual machine. If you need to use this method, you
    should use it in addition to a full server backup and not as an alternative to a full server backup. Perform a backup from within the guest operating system before you perform a full backup of the server running Hyper-V.
    iSCSI-based storage is supported for backup by the Hyper-V VSS writer when the storage is connected through the management operating system and the storage is used for virtual hard disks.
    For more information please refer to following MS articles:
    Planning for Backup
    http://technet.microsoft.com/en-us/library/dd252619(WS.10).aspx
    Hyper-V: How to Back Up Hyper-V VMs from the Host Using Windows Server Backup
    http://social.technet.microsoft.com/wiki/contents/articles/216.hyper-v-how-to-back-up-hyper-v-vms-from-the-host-using-windows-server-backup.aspx
    Backing up Hyper-V with Windows Server Backup
    http://blogs.msdn.com/b/virtual_pc_guy/archive/2009/03/11/backing-up-hyper-v-with-windows-server-backup.aspx
    Lawrence
    TechNet Community Support

  • Best practice for running multiple instances?

    Hi!
    I have an MMORPG interface that currently uses shared objects to store several pages of information per account. A lot of users have several accounts (some as many as 50 or more) and may access one or several different game servers. I am building a manager application in AIR to manage them and plan on putting all the information in several SQL DBs.
    The original authors obviously had no idea what the future held. Currently players have a separate folder for each account, with a copy of the same SWF application in EACH folder. So if a player has 20 accounts, he manually opens 20 instances of the same SWF (or projector EXE, based on personal preference). I have spent the last year or so tinkering with the interface, adding functionality, streamlining, etc., and have gathered a large following of supporters.
    Each account is currently a complete isolated copy of a given interface (there are several different ones out there; it could shape up to be quite a battle). In order to remedy this undesirable situation, I have replaced the login screen with a controller. The question now is how to handle instantiating each account. The original application simply replaced the login screen with the main application screen in the top application container at login.
    My main (first) question is: If I replace the login screen with a controller is it more economical to have the controller open a window for each account and load an instance of the required classes or  to compile the main application and load instances of the swf?
    Each account can have up to 10 instances of about 30 different actionscript classes that get switched in and out of the main display.  I need to be able to both send and receive events between each instance and the main controller.
    I tenatively plan on using air to open windows, and simply alter the storage system using shared objects to storing the same objects in an sql table.
    Or should that be 1 row per account? I am not all that worried about the player DB, since it is basically file storage, but the shared DB will be in constant use, possibly from several accounts (map and player data are updated constantly). I am not sure yet how I plan to handle updating that one.
    I am at the point now where all the basic groundwork is laid, and the controller (though still rough around the edges) stands ready to open some accounts... Had the first account up and running a couple of days ago, but ran into trouble when the next one tried to access what used to be static information... The next step is to build some databases, and I need to get it right the first time. Once I release the app and it writes a db to the user's machine, I do not want to have to change it.
    I am an avid listener and an eager student. (I have posted here before under the name eboda_kcuf but was notified a few weeks ago that it was not acceptable in the forums...) I got some great help from Alex and a few others, so I am sure you can help me out here.
    After all, you guys are the pros! Just point me in the right direction...
    Oh, almost forgot:  I use flashbuilder 4.5, sdk 4.5.1
    Message was edited by: the0bot  typo.
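On the "1 row per account" question: a common starting point is exactly that - one row per account, plus a separate shared table for the map/player data that all accounts touch. A hedged sqlite sketch, where every table and column name is hypothetical:

```python
import sqlite3

# Hypothetical schema for the manager app: accounts are rows, not
# folder copies, and the constantly updated map/player data lives in
# one shared table keyed by server.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE accounts (
    account_id   INTEGER PRIMARY KEY,
    login        TEXT UNIQUE NOT NULL,
    game_server  TEXT NOT NULL,
    settings     TEXT            -- serialized per-account state
);
CREATE TABLE shared_map (
    server   TEXT NOT NULL,
    map_key  TEXT NOT NULL,
    payload  TEXT,
    PRIMARY KEY (server, map_key)   -- shared by every account on a server
);
""")

# 20 accounts become 20 rows, not 20 copies of the application
db.executemany(
    "INSERT INTO accounts (login, game_server, settings) VALUES (?, ?, ?)",
    [(f"acct{i}", "server1", "{}") for i in range(20)],
)
count = db.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
```

AIR's embedded SQLite API differs from Python's, but the schema split (per-account rows vs one shared, frequently updated table) carries over directly, and it is the part that is hard to change after release.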

