Best practice to serve multiple companies

Hello,
I have developed a web application and want to serve it to multiple companies. What I am thinking is:
- Each company will have its own URL, e.g.
http://www.xyz.com/companya/webapp
http://www.xyz.com/companyb/webapp
- Each company will have its own resources, like a MySQL DB connection pool.
- Each company will connect to its own database.
Can the above be implemented using Sun Application Server? Is it just a configuration issue, or does it require changes in my application design?
Best regards,
- Joe

Hi Joe!
You can easily set up multiple domains in your application server and use a different one for each company. This simplifies things a lot, as you can leave your application as it is and only have to change the resources in each application server instance. But I don't know how much overhead and load this will produce - give it a try.
On the other hand, you can set up all the different resources in one instance and deploy a separate copy of the application for each company - but then you have to provide a specific web.xml for each copy, and updating isn't as easy as with a single package.
But maybe you could fetch the per-company configuration from a database table to keep the modifications small, and use the URL to distinguish the companies. Just a few thoughts...
Cheers,
Jan
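
A minimal sketch of Jan's URL-based idea: a servlet filter that reads the company from the first path segment and picks that company's connection pool. It assumes one JNDI DataSource per company is configured in the application server; the JNDI names and the request attribute key are hypothetical:

// Minimal sketch of URL-based tenant resolution, assuming one JNDI
// DataSource per company (names like "jdbc/companya" are hypothetical).
import java.io.IOException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.sql.DataSource;

public class TenantFilter implements Filter {

    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        // For a URL like /companya/webapp/..., the first path segment
        // identifies the company.
        String path = request.getRequestURI().substring(request.getContextPath().length());
        String[] parts = path.split("/");
        if (parts.length < 2) {
            throw new ServletException("No company in URL: " + path);
        }
        try {
            // Look up the pool configured for this company in the app server.
            DataSource ds = (DataSource) new InitialContext().lookup("jdbc/" + parts[1]);
            // Expose the company's DataSource to downstream code.
            request.setAttribute("tenantDataSource", ds);
        } catch (NamingException e) {
            throw new ServletException("Unknown company: " + parts[1], e);
        }
        chain.doFilter(req, res);
    }

    public void destroy() {}
}

With this in place the application stays a single deployment: adding a company means configuring one more pool in the server rather than changing code.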

Similar Messages

  • Best practice for running multiple sites on 1 CF install?

    Hi-
    I'm setting up a new hosting environment (Windows Server 2008 Standard 64-bit VPS configuration, MySQL, IIS 7, CF 9).
    Has anyone seen any docs, or can anyone suggest best practices for configuring multiple sites in this environment? At this point I'm thinking simple is best: one new site in IIS for each client (domain), pointed at CF.
    Given this environment, is anyone aware of any gotchas within the setup of CF 9 on IIS 7?
    Thank you in advance,
    Rich

    There's nothing wrong with that approach. You can run as many IIS sites as you like against a single CF install.
    As for installing CF on IIS 7, I recommend the following: install CF 9 without connecting it to IIS, then install the 9.0.1 upgrade and any hotfixes, then connect CF to IIS using the web server configuration utility. This will keep you from having to install the IIS 6 compatibility layer that's needed with CF 9 but not with CF 9.0.1.
    Dave Watts, CTO, Fig Leaf Software
    http://www.figleaf.com/
    http://training.figleaf.com/

  • Best Practice for using multiple models

    Hi Buddies,
    Can you tell me the best practice for using multiple models in a single WD application?
    What I mean is: I am using 3 RFCs in a single application for my function. Each time, I import that RFC model under WD -> Models and do the model binding separately to the Component Controller. Is this the right way to implement multiple models in a single application?

    It very much depends on your design, but one RFC per model is definitely a no-no.
    Refer to this document to understand how to use the model in the most efficient way:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/705f2b2e-e77d-2b10-de8a-95f37f4c7022?quicklink=events&overridelayout=true
    Thanks
    Prashant

  • Best practice for server configuration for iTunes U

    Hello all. I'm completely new to iTunes U - I'd never heard of it until now, and we have zero documentation on how to set it up. I was given the task of looking at best practices for setting up the server for iTunes U, and I need your help.
    *My first question*: Can anyone explain to me how iTunes U works in general? My brief understanding is that you design/set up a welcome page for your school with subcategories like programs/courses, and within those you have things like lecture audio/video files that students can download/view on iTunes. So where are these files hosted? Is it on your own server or on Apple's server? Where and how do you manage the content?
    *2nd question:* We have two Xserves sitting in our server room ready to roll. My question is: what is the best method to configure them so they meet our needs of "high availability in active/active mode, load balancing, and server scaling"? Originally I was thinking about using a third-party load-balancing device to meet these needs, but I was told there is no budget for it, so this is not going to happen. I know there is IP failover, but one server has to sit in standby mode, which is a waste. So the most likely scenario is to set up DNS round robin and put both Xserves in active/active. My question now is (this may be related to question 1): given that all the content data like audio/video files are stored by us (we are going to link a portion of our SAN space to the Xserves for storage), if we go with DNS round robin and put the two servers in active/active mode, can both servers access a common shared network space? Or is this not possible, so that each server must have its own storage space, and I must therefore use something like rsync to make sure the contents on both servers are identical? Should I use Xsan, or is rsync good enough?
    Since I have no experience with iTunes U whatsoever, I hope you understand my questions, any advice and suggestion are most welcome, thanks!

  • What is the Best Practice for Server Pool...

    Hi,
    What is the best practice for having server pools, i.e.
    1) having a single large server pool consisting of "n" guest VMs, or
    2) having multiple small server pools, each consisting of a smaller number of guest VMs?
    Please suggest...

    Raja Kondar wrote:
    What is the best practice for having server pools, i.e. 1) having a single large server pool consisting of "n" guest VMs, or 2) having multiple small server pools, each consisting of a smaller number of guest VMs?
    I prefer option 1, as this gives me the greatest amount of resources available. I don't have to worry about resources in smaller pools. It also means there are more resources across the pool for HA purposes. Not sure if this is official Best Practice, but it is a simpler configuration.
    Keep in mind that a server pool should probably have no more than 20 servers in it: OCFS2 starts to strain after that.

  • 2012 NLB Best Practice (Single vs Multiple NICs)?

    Our environment has used an NLB configuration with two NICs for years: one NIC for the host itself and one for the NLB. We have also been running the NLB in multicast mode. Starting with 2008, we began adding the cluster's MAC address as an ARP entry on our layer-three switch. Each server participating in the NLB is on VMware.
    Can someone advise what the best procedure is for handling NLB these days? Although initial tests with one NIC seem to be working, I notice that we get a pop-up warning on the participant servers when launching NLB Manager ("Running NLB Manager on a system with all networks bound to NLB might not work as expected") if they are set to run in unicast mode.
    With that said, should we not be running multicast? Will that present problems down the road?

    Hi enoobmot11,
    You can refer to the following KB articles and the VMware requirement KBs:
    Network Load Balancing Best practices
    https://technet.microsoft.com/en-us/library/cc740265%28v=ws.10%29.aspx?f=255&MSPPError=-2147217396
    Multiple network adapters
    https://technet.microsoft.com/en-us/library/cc784848(v=ws.10).aspx
    The VMware KB:
    Microsoft Network Load Balancing Multicast and Unicast operation modes (1006580)
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006580
    Sample Configuration - Network Load Balancing (NLB) Multicast Mode Configuration (1006558)
    http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1006558
    I’m glad to be of help to you!

  • Best Practice in maintaining multiple apps and user logins

    Hi,
    My company is just starting to use APEX, and none of us (the developers) have worked with it before, so any help here is greatly appreciated.
    We have developed quite a few applications in the same workspace. Now we are going to set up UAT and PRD environments, and we are also trying to understand the best practice for maintaining multiple apps and user logins.
    Many of you have already worked in an APEX environment for some time; can you please provide some input?
    Should we create multiple apps (projects) for one department, or should we create one app per department?
    Currently we have created multiple apps for one department, but we are not sure whether a user can log in once and then access all of the authenticated apps.
    Thank you,
    LC

    LC,
    I am not sure how much of this applies to your situation, but I will share what I have done.
    I built a single 700+ page application for my department; other areas create separate, smaller applications.
    The approach I chose is flexible enough to accommodate both.
    I built a separate access control application (Control) in its own schema.
    We use database authentication for this app - an Oracle account is required.
    We prefer to use LDAP authentication for the user applications.
    For users for whom LDAP is not an option, an encrypted password is stored and reset via email.
    We use position-based security - privileges are based on job functions.
    We have applications, applications have roles, and roles have access to components (tabs, buttons, unmasked card numbers, etc.).
    We have positions that are granted application roles - they inherit access to the role's components.
    Users have a name, a login, a position, and a site.
    We have users on both the East Coast and the West Coast; we use the site in a sys_context and views to emulate VPD. We also use the role components, sys_contexts, and views to mask/unmask card numbers without rewriting the dependent objects (queries, reports, views, etc.).
    The position-based security has worked well: when someone moves, we change the position they are assigned to and they immediately have the privileges they need.
    If you are interested, I can provide more detail.
    Bill
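
    Bill's chain (user -> position -> role -> component) boils down to a single join at privilege-check time. Below is a minimal JDBC sketch of that check; the table and column names (users, position_roles, role_components) are hypothetical, invented only to illustrate the model:

    // Minimal sketch of a position-based privilege check against a
    // hypothetical schema: users(login, position_id),
    // position_roles(position_id, role_id), role_components(role_id, component).
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class AccessControl {

        private final Connection conn;

        public AccessControl(Connection conn) {
            this.conn = conn;
        }

        /** True if the user's position holds a role that grants the component. */
        public boolean hasAccess(String login, String component) throws SQLException {
            String sql =
                "SELECT 1 FROM users u " +
                "JOIN position_roles pr ON pr.position_id = u.position_id " +
                "JOIN role_components rc ON rc.role_id = pr.role_id " +
                "WHERE u.login = ? AND rc.component = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, login);
                ps.setString(2, component);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next(); // any row means the privilege is granted
                }
            }
        }
    }

    Moving someone to a new position then changes every privilege in one update, which is exactly the benefit Bill describes.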

  • Best Practice for serving static files (gif, css, js) from front web server

    I am working on optimizing portal performance by moving static files (gif, css, js) to my front web server (Apache) for a WLP 10 portal application. I ended up moving the whole "framework" folder of the portal WebContent to a file system served by the Apache web server (the one which hosts the WLS plugin pointing to my WLP cluster). I use the following directives for that:
    # Serve the "framework" folder straight from disk
    Alias /portalapp/framework "/somewhere/servedbyapache/docs/framework"
    <Directory "/somewhere/servedbyapache/docs/framework">
    # Never expose portal source artifacts as static files
    <FilesMatch "\.(jsp|jspx|layout|shell|theme|xml)$">
    Order allow,deny
    Deny from all
    </FilesMatch>
    </Directory>
    # Everything outside /portalapp/framework goes to WebLogic
    <LocationMatch "/portalapp(?!/framework)">
         SetHandler weblogic-handler
         WLCookieName MYPORTAL
    </LocationMatch>
    So now the browser gets all static files from Apache instead of the app server. However, there are several files from the bighorn L&F which are located in the WLP shared lib: skins/bighorn/window.css, wsrp.css, menu.css, general.css, colors.css; skins/bighorn/borderless/window.css; skeletons/bighorn/js/util.js, buttons.js; skeleton/bighorn/css/layout.css.
    I had to merge these files into the project and physically move them into the Apache-served file system to make the above Apache configuration work.
    However, this approach exposes a bunch of framework resources which I do not intend to change and which should not be changed (only custom.css is the place to make custom changes to the bighorn skin), which is obviously not a very elegant solution. The other approach would be to create a more elaborate expression for LocationMatch (I am not sure that's entirely possible given the location of these shared resources). A more radical move - stop using bighorn and create a totally custom L&F (skin, skeleton) - would be quite a lot of work (plus, bighorn is working just fine for us).
    I am wondering what "Best Practice" approach is recommended by Oracle/BEA, given that I want to serve all static files from my front-end Apache server instead of the WLS app server.
    Thanks,
    Oleg.

    Oleg,
    you might want to have a look at the official WLP performance support pattern (Metalink DocID 761001.1), which contains a section about "Configuring a Fronting Web Server Serving WebLogic Portal 8.1 Static Artifacts".
    It was written for WLP 8.1, but most of the settings/recommendations should also apply to WLP 10.
    --Stefan

  • Best practice - which server OS should I use for Exchange 2010 install

    We currently have two Exchange 2007 boxes running Server 2003. The plan is to upgrade to Exchange 2010 (I'd prefer 2013, but the powers that be want 2010), and I have been asked to follow Microsoft best practice. The problem is I can't find anything to point me in the direction of the recommended server OS; I can find ones it will work on, but nothing that says "Microsoft recommends this"...
    We have licenses for Server 2008, 2008 R2, and 2012 available; which one should I advise is the Microsoft recommendation?

    Thanks Andy,
    So is there no actual best-practice recommendation for a server OS to run Exchange 2010 on? I agree that 2012 would be the one to go for, but the people making the decision on how we do this want reasons, and as they don't really have a lot of technical understanding, I need to be able to go to them with "Use Server 20xx because it's Microsoft best practice".
    If there isn't a best-practice recommendation, I will argue for 2012 on the grounds of its longer support life and more options for high availability.
    Well, you probably won't find a "best practice" so much as an "it's supported" stance from Microsoft.
    As with all these things, there may be other reasons a business chooses to use 2008 over 2012, etc...

  • Best practice for deleting multiple rows from a table, using Creator

    Hi
    Thank you for reading my post.
    What is the best practice for deleting multiple rows from a table using a rowSet?
    For example, how can I execute something like:
    delete from table1 where field1 = ? and field2 = ?
    Thank you

    Hi,
    Please go through the AppModel application, which is available at: http://developers.sun.com/prodtech/javatools/jscreator/reference/codesamples/sampleapps.html
    The OnePage Table Based example shows exactly how to delete multiple rows from a data table...
    Hope this helps.
    Thanks,
    RK.
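
    For the specific statement in the question, a plain JDBC batch delete is one way to do it outside the rowSet machinery. A minimal sketch, assuming an open java.sql.Connection and reusing the table and column names from the question (the key list is hypothetical):

    // Minimal sketch: delete several rows with one parameterized statement,
    // batched so the deletes travel to the database together.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class MultiRowDelete {

        /** Each long[] holds the {field1, field2} key of one row to delete. */
        public static int deleteRows(Connection conn, List<long[]> keys)
                throws SQLException {
            String sql = "DELETE FROM table1 WHERE field1 = ? AND field2 = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (long[] key : keys) {
                    ps.setLong(1, key[0]);
                    ps.setLong(2, key[1]);
                    ps.addBatch(); // queue this delete
                }
                int total = 0;
                for (int n : ps.executeBatch()) {
                    if (n > 0) total += n; // ignore SUCCESS_NO_INFO markers
                }
                return total;
            }
        }
    }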

  • Best Practices for FSCM Multiple systems scenario

    Hi guys,
    We have a scenario to implement FSCM credit, collections and dispute management solution for our landscape comprising the following:
    a 4.6c system
    a 4.7 system
    an ECC 5 system
    2 ECC6 systems
    I have documented my design, but would like to double-check and compare notes with colleagues regarding the following areas/questions.
    Business partner replication and synchronization: what is the best practice for the initial replication of customers in each of the different systems to business partners in the FSCM system, (a) for the initial creation, and (b) for ongoing synchronization of new customers and changes to existing customers?
    Credit management: what is the best practice for updating exposures from SD and FI-AR from each of the different systems? Should this be real-time for each transaction from SD and AR (synchronous), or periodic, say once a day? (Assuming we can control this in the BADI.)
    Is there any particular point to note in dispute management?
    Any other general note regarding this scenario?
    Thanks in advance. Comments appreciated.

    Hi,
    I guess that when you have information that SAP just has to read and act on, the interface can be asynchronous (from non-SAP into FSCM).
    But when the credit analysis is done by a non-SAP agency like Experian - SAP sends the information about invoices paid and not paid, and this non-SAP party returns a rating for that customer; all banks and big companies in the world do the same - you need the synchronous interface. This interface will update FSCM-CR (Credit), blocking or unblocking the customer and decreasing or increasing their limit amount to buy.
    So, for those 1,000 sales orders, you'll have to think with your PI team about how to create an interface for this volume. What parameters does SAP have to check? Is there a time interval to receive and send back? Will it be synchronous or asynchronous?
    Contact your PI team to help think through this information exchange.
    Does that answer your question?
    JPA

  • Best Practice - Leopard Server and a Windows Server/Application

    Our family owns an accounting practice in Northern California. We all use MacBook Pros running Leopard as our client machines.
    Our customers all use QuickBooks for Windows for their accounting software. We currently host and store our customer's QuickBooks company files on a Windows 2003 SBS Server. Our staff connects to the Windows Server to launch QuickBooks and perform our accounting services. We do not use any of the features of the Windows Server besides the ability to log in with multiple user sessions to launch QuickBooks from both internal and remote office locations.
    Since we all use MacBook Pros, we'd like to use Leopard Server and implement mail, calendaring, wikis, etc. My dilemma is figuring out whether this is technically possible, because we need to continue to log into a Windows PC or server with multiple staff sessions to perform our accounting services with QuickBooks for Windows.
    Can someone suggest a way we can do this? I'm not very technical since I am an accountant! We're hoping this can be done somehow!
    Thanks for any suggestion and assistance!
    John

    We also have Quickbooks as our legacy Windows app.
    Quickbooks files for several companies are stored on our Mac OS X Server, which is accessed by a Windows 2003 Server.
    I considered using Parallels or VMware, but the administrative overhead seemed excessive (one installation of Windows is too many). Windows Terminal Services seemed the best fit. Additional client access licenses were required from Microsoft, as Windows Server only comes with two administrative RDP licenses.
    Our staff, and staff from accounting and tax consulting firms, connect to QuickBooks on the Windows server via RDP. Internally, we have a couple of diskless thin clients and iMacs with RDP software installed. Externally, both Windows and Mac OS X clients connect to our 10.5 server's IPSec VPN, and then to the Windows server, again via RDP.
    Performance is acceptable, as here in Australia we now have ADSL service with 1-2 Mbps upstream. On my 23" display (Mac OS X 10.5), situated 1200 miles away, an RDP connection at 1280x1024 works seamlessly.
    On the Mac server side, we have only recently upgraded to Leopard Server and thus haven't implemented all the fabulous new features such as calendar sharing and wikis, but we are using network Spotlight and, of course, the mail server. Additionally, we have a Fujitsu SnapScan which scans documents that are then automatically OCR'd and made available via network Spotlight. All very cool.
    Further work to be undertaken;
    - Have the Windows server logins authenticate with our Mac OS X Open Directory. Currently account credentials are duplicated.
    - Implement public key cryptography for the VPN, so that we can revoke individual client VPN installations
    - Implement two factor authentication (probably CryptoCard)
    - Batten down the permissions on the Windows server. The default Quickbooks installation required an unacceptable level of user rights.

  • Best Practice to load multiple Tables

    Hi,
    I am trying to load around 9-10 tables in SQL Server; all of these belong to a single project.
    The table loads are just SELECT * INTO the destination table, using some joins.
    Is it better to have one stored procedure to load all these tables, or individual stored procedures?
    All these tables are independent of each other.
    Appreciate your help!

    If you are using SQL Server 2008 or later, take a look at these links (minimal logging):
    http://blogs.msdn.com/sqlserverstorageengine/archive/2008/03/23/minimal-logging-changes-in-sql-server-2008-part-1.aspx
    http://blogs.msdn.com/sqlserverstorageengine/archive/2008/03/23/minimal-logging-changes-in-sql-server-2008-part-2.aspx
    Best regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/
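
    One practical argument for individual procedures, since the tables are independent, is that the loads can then run in parallel from the calling side. A minimal JDBC sketch under that assumption; the procedure names passed in (e.g. load_table1) are hypothetical:

    // Minimal sketch: run independent per-table load procedures in parallel.
    // Each task opens its own connection, since connections must not be
    // shared across threads.
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import javax.sql.DataSource;

    public class ParallelLoader {

        public static void loadAll(DataSource ds, List<String> procNames)
                throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (String proc : procNames) {
                pool.submit(() -> {
                    try (Connection conn = ds.getConnection();
                         CallableStatement cs = conn.prepareCall("{call " + proc + "}")) {
                        cs.execute();
                    } catch (Exception e) {
                        System.err.println(proc + " failed: " + e.getMessage());
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }

    With a single procedure the loads run serially but there is only one object to deploy; with individual procedures you gain parallelism and independent re-runs at the cost of more objects to maintain.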

  • Best Practice: One Library, Multiple Macs?

    Despite searching, I've yet to find a definitive answer to this one - so any advice would be appreciated....
    Having downloaded the trial, I very much like Aperture, but my decision to purchase and continue using it hinges very much on whether or not I can access the same library from multiple machines.
    Here is the scenario:
    My 'studio' currently has a Quad G5 (soon to be upgraded to a MacPro), with ample Firewire storage, including 2 new 500Gb Western Digital MyBook drives. I now also have a MacBook Pro to allow me to work away from the studio.
    So far, I'm using one of the 500Gb drives to hold everything I need - 'work in progress' documents, images etc. etc. - with this being SuperDuper! smart updated daily to the second drive. This allows me to take drive 1 away at night and work from home, or go out of the office with the MacBook Pro and have all my media to hand, and 'synchronise' upon return the next day or whenever.
    I think what I'd like to be able to do, is set up Aperture on the G5 and get all my images sorted and stored into a new library (out of iPhoto and various Finder folders etc) stored on FW Drive 1 along with the rest of my stuff. This would semi-automatically be backed up to FW Drive 2 as mentioned above.
    However, I want to be able to access this library with Aperture on the MacBook Pro when I'm not in the office - will this work? I appreciate that I'll need 2 licenses of Aperture (one for the desktop, and one for the laptop) but my concern is whether or not both copies of Aperture can use the same library... I wonder if the Aperture prefs would prevent this from working or screw it up completely.
    If this ain't gonna work - what other options do I have !?
    MacBook Pro 15"   Mac OS X (10.4.8)  

    Not sure this will help but this is what I have decided to do.
    System: MacBook Pro and a G5. Storage: 2 Western Digital 160 GB USB 2.0 Passport Portable drives, 1 OWC 160 GB FireWire 800 drive (for backup of the internal drive on my MacBook Pro), and 5 500 GB Western Digital MyBook Pro FW drives (attached to my G5). One Aperture library resides on my MacBook Pro internal drive, and a yearly (like 2006) Aperture library resides on one 500 GB WD MyBook Pro.
    In studio, I shoot wirelessly and transfer by FTP to the MBP, and using Aperture Hot Folder I move the images into Aperture to show the client during and at the end of the shoot. They are managed, but sadly the files are not named by studio convention. Once the client leaves, I delete the temporary project and reimport the pictures, renaming them using the studio format, and they are managed. At this point the pictures reside on the internal drive of my MBP.
    I do the majority of the clean-up of the images at this point (contrast, exposure, saturation, sharpening, leveling, blemish corrections, etc., keywording). Once done, I export the project to my primary WD Passport drive and then, using the old-fashioned network strategy of walking, take the WD Passport drive to the G5 and import the project into the G5 library, converting the images to referenced files so I have a Finder-based architecture for all my RAW files.
    Once I know the files are on the G5, I go back to my MBP and convert the managed pictures to referenced pictures, in the process moving them off my internal drive and onto the WD 160 USB 2.0 drive in a Finder-based architecture. This keeps the library on my MBP pretty small and allows me to carry 160 GB worth of pictures that I can work on at any time or place. (Actually, I will probably pick up a 3rd 160 GB Passport soon, like this weekend, so I will be able to carry 320 GB of pictures with me.)
    Now it gets nasty, because I have two sets of pictures: one on my MacBook Pro and one on my G5. Again using old-fashioned strategies, I decided that the workflow could only go one way - from MBP to G5 and never (or very rarely) the opposite direction. The other nasty part is how to deal with changes made after the first transfer of the project to the G5. The vast majority of those changes go through Photoshop, and by default they become managed files. So for a particular project, when I start to do additional editing, I bring those images into an album (let's say "Selected pictures") for that project and edit away. Once I am really done, I generate a new project (if the original project is 070314, the new project would be 070314 Update) and I drag the album to the new project. The nice thing is that the primary pictures remain in the original project when I move the album. I then export the Update project, consolidating images, and import that into the library on the G5. Lastly, I relocate the masters of the managed files to the WD Passport drive. I pay the penalty of having a few replicates of a couple of pictures, and the architecture of my G5 library is not identical to my MBP library, but I have all of my primary images plus CS2 changes available on both my MBP and my G5.
    OK..... my library on my MBP internal is backed up to the OWC drive using Superduper so I always have a current replicate of that library. The library and pictures on the 500 GB WD Mybook Pro are backed up 2-3 times per week to a second 500 GB WD MyBook Pro.
    In case my G5 fails, I can access the library by simply attaching my MBP to the FireWire chain and double-clicking on the library icon. This forces Aperture to open the library on the 500 GB WD MyBook Pro drive. In case my MBP fails, I can boot my MBP or G5 off the OWC drive.
    Principles: A one-library solution will never work, because eventually, even with referenced images, it will become too large for a single drive. So start now with a strategy of smaller library structures. I use a yearly system, but others are possible. My yearly system consists of my library plus referenced images on one 500 GB drive, plus a second drive to which I back up. If I run out of space, I will buy two more drives, perhaps moving to TB drives.
    Don't try to store very many images on your internal drive of any computer. Develop a methodology of keeping the images on external (FW or eSATA) drives where capacity can be readily expanded.
    Keep your MBP library pristine by regularly moving managed files over to referenced files on a bus powered portable drive.
    Make sure you regularly back up both systems.
    Hope this helps.
    steven

  • Best practice for running multiple instances?

    Hi!
    I have an MMORPG interface that currently uses shared objects to store several pages of information per account. A lot of users have several accounts (some as many as 50 or more) and may access one or several different game servers. I am building a manager application in AIR to manage them and plan on putting all the information in several SQL DBs.
    The original authors obviously had no idea what the future held. Currently players have a separate folder for each account, with a copy of the same SWF application in EACH folder. So if a player has 20 accounts, he manually opens 20 instances of the same SWF (or projector EXE, based on personal preference). I have spent the last year or so tinkering with the interface, adding functionality, streamlining, etc., and have gathered a large following of supporters.
    Each account is currently a completely isolated copy of a given interface (there are several different ones out there; it could shape up to be quite a battle). In order to remedy this undesirable situation, I have replaced the login screen with a controller. The question now is how to handle instantiating each account. The original application simply replaced the login screen with the main application screen in the top application container at login.
    My main (first) question is: if I replace the login screen with a controller, is it more economical to have the controller open a window for each account and load an instance of the required classes, or to compile the main application and load instances of the SWF?
    Each account can have up to 10 instances of about 30 different ActionScript classes that get switched in and out of the main display. I need to be able to both send and receive events between each instance and the main controller.
    I tentatively plan on using AIR to open windows, and to simply alter the storage system from shared objects to storing the same objects in an SQL table.
    Or should that be one row per account? I am not all that worried about the player DB, since it is basically file storage, but the shared DB will be in constant use, possibly from several accounts (map and player data are updated constantly). I am not sure yet how I plan to handle updating that one.
    I am at the point now where all the basic groundwork is laid, and the controller (though still rough around the edges) stands ready to open some accounts... I had the first account up and running a couple of days ago, but ran into trouble when the next one tried to access what used to be static information... The next step is to build some databases, and I need to get it right the first time. Once I release the app and it writes a DB to the user's machine, I do not want to have to change it.
    I am an avid listener and an eager student. (I have posted here before under the name eboda_kcuf but was notified a few weeks ago that it was not acceptable in the forums...) I got some great help from Alex and a few others, so I am sure you can help me out here.
    After all, you guys are the pros! Just point me in the right direction...
    Oh, almost forgot: I use Flash Builder 4.5, SDK 4.5.1.

