Need suggestion on a Design Decision.

Hi All,
One of our customers, with more than 100,000 employees, needs a leave management solution. If every employee applies for around 26 leaves yearly, the Custom List would hold at least 2,600,000 items. I am wondering whether I should propose a SharePoint solution, since SharePoint might not be able to handle so many items in a Custom List.
What is your suggestion, guys?
Regards, Restless Spirit

Hi Restless Spirit,
You can indeed handle large amounts of data in SharePoint 2013 lists. Please have a look into external lists and BDC models for handling these kinds of scenarios.
The following links will be helpful in understanding how lists handle large data volumes:
http://technet.microsoft.com/en-us/library/cc262813(v=office.14).aspx
http://office.microsoft.com/en-us/sharepoint-server-help/manage-lists-and-libraries-with-many-items-HA102771361.aspx
http://www.layer2solutions.com/en/community/FAQs/BDLC/Pages/SharePoint-Large-Scale-External-Data-Integration.aspx
http://www.ericgregorich.com/blog/2013/7/10/working-with-list-view-thresholds-in-sharepoint-2013
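As a rough client-side illustration of working with such volumes, the sketch below pages through a large SharePoint 2013 list via the REST API so that no single request runs into the list view threshold (5,000 items by default). This is a minimal sketch, not taken from the articles above: the site URL, list title, page size, and the omitted authentication are placeholder assumptions.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: page through a large SharePoint 2013 list via REST,
// requesting a fixed page size so no single query exceeds the
// list view threshold. Site URL and list title are placeholders.
public class LeaveListPager {

    public static void main(String[] args) throws Exception {
        String next = "http://server/sites/hr/_api/web/lists"
                + "/getbytitle('Leaves')/items?$top=500";

        while (next != null) {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(next).openConnection();
            conn.setRequestProperty("Accept", "application/json;odata=verbose");
            // Authentication (NTLM, Basic, OAuth...) omitted for brevity.

            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
            }
            process(body.toString());

            // The verbose OData payload carries a "__next" link while more
            // pages remain; a real client would use a proper JSON parser.
            next = extractNextLink(body.toString());
        }
    }

    static void process(String json) {
        System.out.println("got a page of " + json.length() + " bytes");
    }

    // Naive extraction of the paging link from the JSON response.
    static String extractNextLink(String json) {
        int i = json.indexOf("\"__next\":\"");
        if (i < 0) return null;
        int start = i + "\"__next\":\"".length();
        return json.substring(start, json.indexOf('"', start));
    }
}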

Similar Messages

  • Need Suggestion on the Design of a New Workbench

    Hi All,
    I need a suggestion on the design of an agreement workbench.
    The requirement goes this way...
    We will have a workbench main screen, where header and line details will be entered manually (or sourced from a legacy system). On the main screen there will be a few buttons; clicking them will open subforms (around 6-8 screens) or supporting details (the data can be entered or interfaced).
    We have two approaches.
    1. Keeping everything in a single .fmb file
    2. Creating one .fmb file for the main screen and different .fmb files for each of the individual screens and calling them from the main screen.
    Please suggest the best approach, considering all factors like maintenance, user-friendliness, switching between the main and child forms, and any other factors which can make a difference.
    Thanks in advance!
    Thanks,
    Pavan

    Hello,
    All I can say is that small modules are faster to load and easier to maintain.
    Francois

  • Need suggestion on OSB  design..

    Hi
    I am facing the following issue:
    I have 4 interfaces that I need to implement using OSB.
    I have separate XSDs for each interface.
    Each interface may have its own message flow.
    Now what I want to do is:
    Use only one WSDL, one proxy service, and one business service.
    Business service: not an issue, as I will place the message in a queue.
    WSDL: while creating the WSDL I will make a separate operation for each interface, i.e. each operation will have the request/response XSD corresponding to that interface.
    Now the problem is: how do I deal with the proxy service?
    There will be a unique ID that identifies which interface the request is for.
    But even if I identify the interface, how will I have a separate message flow in the proxy? Will branching help?
    Does anyone know how I can do this?
    Thanks

    Hi,
    You can try using the "Operational Branch" feature of OSB with a single proxy service.
    Thanks,
    Prabu

  • Hi please need suggestion for best design for jdbc-rfc-file

    Hi Gurus,
    Actually, our scenario is JDBC -> XI (<-RFC->) -> File.
    Here our payload is around 5000 records.
    Is it advisable to use RFC synchronous communication?
    The scenario will be executed only at night, so can we schedule the adapter, as we are using SP9?
    If not, what would be a good design approach?
    Also, after scheduling for a particular period, if there is any downtime of the XI server, will the process start immediately after the server comes up, or will it again wait for that particular time?

    Hi,
    >>>Is it advisable to use RFC synchronous communication?
    - No, not if there is no business requirement for a real-time response.
    >>>Can we schedule the adapter, as we are using SP9?
    - Not necessary.
    >>>Here our payload is around 5000 records.
    - Is it a requirement to send all 5000 records at once? If not, distribute the load over the whole day if possible.
    If you provide more information then maybe we can assist you further.
    Regards,
    Gourav
    Reward points if it helps you.

  • Need suggestion on making career decision

    Dear Friends,
    Problem background: I want to make the most of my previous SAP technical experience and work in a marketing role.
    Experience/Qualification:
    Bachelor of Technology (Computer Science)
    4.5 years of experience in SAP ABAP HCM
    MBA in Marketing Management (Just Completed)
    Problem: I am not sure whether this combination (SAP + Marketing) makes sense. If it does, what kind of role should I be looking for, and in which firms?
    <<removed>>
    Thanks,
    Kunal.
    Edited by: kishan P on Feb 21, 2012 2:30 AM

    Hi Kunal,
    If you want to continue with your SAP ABAP HCM experience and want to stay in the technical field, then your MBA specialization as such doesn't help you, but of course it gives you an edge compared to others.
    If you are very good at the HCM domain, you can probably move to the functional side of HCM.
    If you are very good at marketing, you can probably get into the SAP presales team, where you can make very good use of your technical expertise.
    Regards,
    Leon

  • Need suggestion for designing a BEx report

    Hi,
    I need suggestions for designing a BEx report.
    I've a DSO with the below structure:
    1. Functional Location – Key
    2. Maintenance Plan – Key
    3. Maintenance Item – Key
    4. Call # – Key
    5. Cycle – Data Field
    6. Planned Date – Data Field
    7. Completion Date – Data Field
    This DSO contains data like:
    Functional Location | Plan | Item | Call# | Cycle | Planned Dt  | Completion Dt
    11177               | 134  | 20   | 1     | T1    | 02-Jan-2011 | 10-Jan-2011
    11177               | 134  | 20   | 2     | T2    | 15-Feb-2011 |
    11177               | 134  | 20   | 3     | T1    | 15-Mar-2011 |
    11177               | 134  | 20   | 4     | M1    | 30-Mar-2011 |
    25000               | 170  | 145  | 1     | T1    | 19-Jan-2011 | 19-Jan-2011
    25000               | 134  | 145  | 2     | T2    | 20-Feb-2011 | 25-Feb-2011
    25000               | 134  | 145  | 3     | T1    | 14-Mar-2011 |
    Now I have to create a report which will be executed at the end of every month and should display the list of Functional Locations whose Cycles were planned in that particular month, along with the last completed Cycle/Date.
    Thus, based on the above data, if I execute the report at the end of (say) March, the report must display:
    Functional Location | Curr. Cycle | Planned Date | Prev. Completed Cycle | Prev. Completed Date
    11177               | T1          | 15-Mar-2011  | T1                    | 10-Jan-2011
    11177               | M1          | 30-Mar-2011  | T1                    | 10-Jan-2011
    25000               | T1          | 14-Mar-2011  | T2                    | 25-Feb-2011
    Any idea how I can display the Previous Completed Cycle and Completion Date (i.e. the last two columns)?
    Regards,
    Vikrant.

    Hi Vikrant,
    You can add a Cube at the reporting layer which gets data from the DSO and which has these 2 extra characteristics, completion date and previous cycle, along with the other characteristics and key figures from the DSO.
    You can populate these based on your logic in the field routine.
    Hope it helps.
    Regards
    Dev
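    The field routine Dev mentions would be written in ABAP; purely to illustrate the lookup logic such a routine has to implement (for each record planned in the reporting month, find the latest completed cycle for the same Functional Location), here is a small sketch in Java with invented field names:

    import java.time.LocalDate;
    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    // Illustration only: the lookup a BW field routine would perform.
    // The record components mirror the DSO fields from the question.
    record CycleRecord(String functionalLocation, String cycle,
                       LocalDate plannedDate, LocalDate completionDate) {}

    public class PreviousCycleLookup {

        // For a record planned in the reporting month, find the latest
        // cycle for the same functional location that was completed
        // before the current record's planned date.
        static Optional<CycleRecord> previousCompleted(
                CycleRecord current, List<CycleRecord> dso) {
            return dso.stream()
                    .filter(r -> r.functionalLocation()
                            .equals(current.functionalLocation()))
                    .filter(r -> r.completionDate() != null)
                    .filter(r -> r.completionDate()
                            .isBefore(current.plannedDate()))
                    .max(Comparator.comparing(CycleRecord::completionDate));
        }

        public static void main(String[] args) {
            List<CycleRecord> dso = List.of(
                    new CycleRecord("11177", "T1",
                            LocalDate.of(2011, 1, 2), LocalDate.of(2011, 1, 10)),
                    new CycleRecord("11177", "T1",
                            LocalDate.of(2011, 3, 15), null));
            // Prints the completed January cycle as the previous one.
            System.out.println(previousCompleted(dso.get(1), dso));
        }
    }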

  • JPA Architectural Design Decision

    Hi,
    I'm building a 1 tier web shop, using mostly Ajax, Servlets and JPA and I need your advice on a design decision.
    When a user demands to see the products belonging to a particular category of products, my DAO object returns to the servlet a java.util.List<Product>, where Product is a JPA entity. In the servlet class I "manually" create the Ajax XML response, the user gets to see the products, everything is nice and great.
    I am not happy with the fact that the list of products remains detached in the servlet class, so to speak, and when another user demands to see the same products, another list gets created. These are objects that have method scope, but still they are on the heap, right? For 100 users who want to see 100 products each, the number of objects created could cause the application to have a slower response time.
    So my question is about the design of the application.
    I obtain the list of products in the servlet class and construct the XML response. Right before sending the response, should I pass the list of products back to the DAO and ask the EntityManager to merge the products? Will this reduce the number of objects my application creates? Or should I not do this, since I would be merging entities that have not been changed, and the merge operation is time-consuming?
    Or should I not pass the products back to the DAO at all, but instead set each product reference in the list to null and call System.gc()?
    Keeping in mind that my main concern is application response time, not reduced development time, are there any other suggestions you can make?

    First of all, a merge is only used to synchronize a changed entity that is not managed by an entity manager with the database. Why did you come to the conclusion that you might need it?
    No, you don't nullify the entities in the list. You let the entire list go when you are done with it. Manually nullifying can hinder the garbage collector; just don't do it unless you have a very good reason for doing so.
    Your main problem seems to be that you don't like the fact that you are fetching 100 objects for both users, putting duplicate objects in memory on the server. Are you sure this is a problem? You shouldn't be thinking about optimizations while you are still developing, you know. I would wait until you are done, then profile the application to see where the bottlenecks are; if fetching those 100 products turns out to take a lot of system resources, THEN optimize it.
    You may want to look into caching. If, for example, you use Hibernate under the hood as the persistence provider, search for "hibernate cache" using Google.
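    To make the first point concrete, here is a minimal sketch of a read-only fetch in JPA. No merge is involved: the entities simply become detached when the EntityManager closes, and the whole list becomes garbage once the response has been written. The persistence unit name and entity fields are invented for illustration.

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Id;
    import javax.persistence.Persistence;

    @Entity
    class Product {
        @Id Long id;
        String name;
        String category;
    }

    // Sketch: a read-only fetch needs no merge. The results are
    // detached once the EntityManager closes, which is fine for
    // rendering a response; they are collected along with the list.
    public class ProductDao {

        private final EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("shopPU"); // assumed name

        public List<Product> findByCategory(String category) {
            EntityManager em = emf.createEntityManager();
            try {
                return em.createQuery(
                        "SELECT p FROM Product p WHERE p.category = :cat",
                        Product.class)
                        .setParameter("cat", category)
                        .getResultList();
            } finally {
                em.close(); // results are now detached; no merge required
            }
        }
    }

    If profiling later shows the same product pages being fetched over and over, a provider-level second-level or query cache (for example Hibernate's) removes the duplicate loads without any merge calls.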

  • Low level Hex disk edit & search util needed- suggestions please?

    Maybe it's just late & I've had a bad day... but I haven't needed a low-level hex disk edit & search utility suitable for an Intel 10.4.x Mac until now, and I can't seem to locate one.
    There should be plenty of free/shareware options (because they're handy and not particularly hard to write ... and every tech head needs one some time)...
    Any suggestions please?
    [I haven't bothered with the commercial stuff - like Tech Tool/Norton/*insert name here* Recover/Repair, Something Genius, etc. - because they are all without exception either unnecessary (just pretty shells for the underlying UNIX/X utils), useless AND greedy $-gougers, just useless, or just money-gougers, so I couldn't even say whether any still have a 'feature' like the old Norton Disk Editor app had - but a quick look about suggests not...]
    grumble
    Any specific suggestions/links, please?
    TIA

    they are all without exception either unnecessary (just pretty shells for the underlying UNIX/X utils) useless AND greedy $-gougers, just useless, or just money gougers
    Such a high-esteem for fellow human beings - and programmers...
    You know, there are some good decent nice people behind those names?
    You'd be amazed at how much testing goes into a product.
    [SNIP]
    g'day Hatter...
    Yes, I know there are some good decent nice people behind those names..fellow human beings - and fellow programmers (so yes, I do know...) In previous incarnations I have 'thunk up' and developed, Marketed & supported considerably more complex Apps & systems myself - I even know some of the people you mention personally - and they are usually decent Blokes/women but normally, it isn't the programmers who make the decisions on pricing/features/support/performance/upgrade costs & cycles etc...
    My only error was, I agree, the phrase 'without exception' - but (mainly) only because I haven't bought/tested & used all of them very very recently. So I offer my apologies to those to whom my remarks should not apply (long, late night/early morning frustration...)
    However, I also offer a few simple pertinent examples:
    One 'top name' Utility company had a 'save your Mac HD' product on the market for some time that was almost guaranteed to TRASH it irretrievably but did NOT say so - nor did they help or compensate those they harmed.
    Several are selling what amount to simple, pretty, GUI shells for 'free' or OS-included command line tools - no more, no less but do NOT say so and are asking good money for the 'software'.
    Many are asking ridiculous prices for "regular upgrades" so you can keep their tool current with your Mac OS - one wants US$100/year for it, another, $15 per u/g, others, US$25 or $40 per u/g; one asks 'only' $10 - and these 'upgrades' are happening 3,4,5,6 times per year and are necessary for the Marketing company to keep their product Saleable to new purchasers and new Macs (as well as for important Bug Fixes - and only co-incidentally to keep it performing with your current Mac and OS - which is what you paid them for in the first place).
    I won't pay for a product 3, 6 or 9 times and I won't advise my clients to: It's not unreasonable for a person to expect a 'sensible lifetime/currency' of Product X for Computer Y - say 3 years (e.g. AppleCare). I wouldn't object to paying for an "upgrade" at that point - IF it's worth the money.
    Software is Waaay too expensive in many cases and is simply inviting 'piracy' - from people who have already PAID for the product: sadly, they are killing their own Gooses.
    Seriously, one product costs ca. US$100 to buy in the first place.
    To keep it actually working for you costs about the same again per year - a 3-year 'sensible lifetime' or 'currency' cost of US$300 or $400! [That'll buy a lot of 'bare drives' to put in a handy FireWire case for automatic backups in the background, differential backups, etc., and other simple practices which render this product type largely unnecessary.]
    For what? A relatively simple set of utilities which would actually cost the company involved less than $5 total each - over 3 years - to make available to existing ( or 'current') owners. [Applecare 'complete' Hardware and Software warranty & support on a US$2000 iMac - which includes Tech Tools Pro Deluxe or somesuch costs about US$165 for 3 years. Total.]
    Having designed, developed, Marketed, supported & maintained more complex Applications to/for a sizeable user-base (in US terms) over multiple complete Series of Product 'life-cycles' - regular Updates and all, I think I know where the pirates are.
    These practices have been rampant in the MSWindows™ market for a longtime. It's a real shame to see it in the Mac world.
    I have all the esteem in the world for those fellow human beings who deserve such - and programmers who are 'good decent nice people'.
    I have none to spare for monopolists, 'exploitationists' or any of those who take unfair/unreasonable advantage of their fellow human beings - AND of programmers who are 'good decent nice people' (like, say, ME...).
    In any event, as I said: they are "killing their Gooses": I know of at least 6 software companies which went this route a while back. All are dead or dying.
    Thank you for your help - and the opportunity to apologise for 'mis-speaking'.
    all the best,
    orig g4 733, many others thru (the luvly) Macintels     New & old Macs, Wintels, MacIntels, other systems...

  • I need your help with a decision to use iPhoto

    I need your help with a decision to use iPhoto. I have been a PC user since the mid-1980s and more recently have used ACDSee to manage my photo images and Photoshop to edit them. I have used ProShow Gold to create slideshows. I am comfortable with my own folder and file naming conventions. I currently have over 23,000 images, of which around 60% are scans going back 75 years. Since I keep a copy of the originals, the storage requirements for over 46,000 images are huge: 180GB plus.
    I now have a MacBook Pro and will add an iMac when the new models arrive. For my photos, I want to stay with Photoshop, which also gives me Bridge. The only obvious reason to use iPhoto is to take advantage of Faces and the link to iMovie to make slideshows. What am I missing, and is using iPhoto worth the effort?
    If I choose to use iPhoto, I am not certain whether I need to load the originals and the edited versions; I suspect that just the latter is sufficient. If I set Photoshop as my external editor, I presume that iPhoto will keep track of all changes moving forward. However, over 23,000 images in iPhoto makes me twitchy, and they appear hidden within iPhoto. In the past, I have experienced syncing problems with, and database errors in, large databases. If I break up the images into a number of projects, I lose the value of Faces reaching back over time.
    Some guidance and insight would be appreciated.  I have a number of Faces questions which I will save for later. 

    Bridge and Photoshop is a common file-based management system. (Not sure why you'd have used ACDSee as well as Bridge.) In any event, it's on the way out. You won't be using it in 5 years' time.
    Until now, the lack of processing power on your computer left no choice but to organise this way. But file-based organisation is as sensible as organising a shoe warehouse based on the colour of the boxes. It's also ultimately data-destructive.
    Modern systems are Database driven. Files are managed, Images imported, virtual versions, lossless processing and unlimited editing are the way forward.
    For a Photographer Photoshop is overkill. It's an enormously powerful app, a staple of the Graphic Designers' trade. A Photographer uses maybe 15% to 20% of its capability.
    Apps like iPhoto, Lightroom, Aperture are the way forward - for photographers. There's the 20% of Photoshop that shooters actually use, coupled with management and lossless processing. Pop over to the Aperture or Lightroom forums (on the Adobe site) and one comment shows up over and over again... "Since I started using Aperture/ Lightroom I hardly ever use Photoshop any more..." and if there is a job that these apps can do, then the (much) cheaper Elements will do it.
    The change is not easy though, especially if you have a long-standing and well thought out filing system of your own. The first thing I would strongly advise is that you experiment before making any decisions. So I would create a Library, import 300 or 400 shots and play. You might as well do this in iPhoto to begin with - though if you’re a serious hobbyist or a Pro then you'll find yourself looking further afield pretty soon. iPhoto is good for the family snapper, taking shots at birthdays and sharing them with friends and family.
    Next: If you're going to successfully use these apps you need to make a leap: Your files are not your Photos.
    The illustration I use is as follows: In my iTunes Library I have a file called 'Let_it_Be_The_Beatles.mp3'. So what is that, exactly? It's not the song. The Beatles never wrote an mp3. They wrote a tune and lyrics. They recorded it and a copy of that recording is stored in the mp3 file. So the file is just a container for the recording. That container is designed in a specific way attuned to the characteristics and requirements of the data. Hence, mp3.
    Similarly, that Jpeg is not your photo, it's a container designed to hold that kind of data. iPhoto is all about the data and not about the container. So, regardless of where you choose to store the file, iPhoto will manage the photo, edit the photo, add metadata to the Photo but never touch the file. If you choose to export - unless you specifically choose to export the original - iPhoto will export the Photo into a new container - a new file containing the photo.
    When you process an image in iPhoto the file is never touched, instead your decisions are recorded in the database. When you view the image then the Master is presented with these decisions applied to it. That's why it's lossless. You can also have multiple versions and waste no disk space because they are all just listings in the database.
    These apps replace the Finder (File Browser) for managing your Photos. They become the Go-To app for anything to do with your photos. They replace Bridge too as they become a front-end for Photoshop.
    So, want to use a photo for something - Export it. Choose the format, size and quality you want and there it is. If you're emailing, uploading to websites then these apps have a "good enough for most things" version called the Preview - this will be missing some metadata.
    So it's a big change from a file-based to Photo-based management, from editing files to processing Photos and it's worth thinking it through before you decide.

  • Need some Help in Design please...

    Hi All,
    I am creating dimensions for my sales cube, in which I am going to store day-level data with customer and material data. I have to store all the Customer and Material Groups (1-5) in the cube, as most of the reports are based on them. I have planned to create 2 separate dimensions for Customer and Material and include all the group fields as well in the respective dimensions. But, as the Customer and Material master data itself is going to be big, I am a bit confused whether it creates a performance issue if I include the groups in the same dimension. I remember that there is a rule that a dimension table should be < x% of the cube (I do not remember the exact figure)... Any suggestions on this design, please...
    I am in the process of creating a process chain for my FI-GL and FI-AR flows. I am loading both line item and transaction level data for both of them. I am confused whether I can include both flows in the same chain or need separate ones. What is the sequence for loading the data, i.e. first transaction or line item? First GL or AR? Are there any such dependencies?... Any help, please...
    Best Regards,
    James.

    Hi,
    Generally, if we need to take characteristics like Document Number as part of the cube, then we define them as line item dimensions, because we can easily predict that the number of entries in those dimensions is almost equal to the number of fact table entries. So ask your business process team about the number of new entries created for materials and customers. On that basis you can decide whether it should be a line item dimension or not. It is just a hypothetical assumption.
    For further info on how to decide which dimension is a line item dimension, search the forum.
    And coming to the chain:
    You can go with that design.
    With rgds,
    Anil Kumar Sharma .P

  • Need suggestion on new environment (server - database combi).

    Hello,
    We have two database instances (DB1 and DB2, both 10g), each 10TB in size. Currently, both databases are mounted on the same AIX server.
    There are several schemas in each database. There will be data transfer activity between two databases. Currently we are using DB links for the same.
    We are planning to build a new environment (server/servers) with data from both databases. We need your inputs and suggestions on building/designing the new environment.
    1. Two servers, with DB1 and DB2 mounted separately on each - pros and cons?
    2. One server, with DB1 and DB2 mounted on the same server - pros and cons?
    3. One server with one DB (merging the schemas from DB1 and DB2) - pros and cons?
    Please share your thoughts on the advantages and constraints of the different approaches.
    A few things which come to mind instantly are: modularity, data transfer, security, ease of maintenance, expandability of business, architectural recommendations, and backup and restore.
    Thanks,
    Anand.

    Hi Arjun.Singh,
    >>>Now can I create a new database with name JKL at Server B with backup of PQR?
    You need a target to use the duplicate command in RMAN.
    Is your backup of PQR an image copy?
    If so, you could use a classic clone procedure to create a new database.
    Regards,
    Tycho

  • Design decision / purpose / aim of audit trail

    Hi,
    since the audit trail doesn't contain that much data and BAM is great for real-time monitoring, my question is: what is the design decision behind, and the purpose/aim of, the audit trail?
    What was the main goal of implementing an audit trail? Is it primarily for debugging? To see the flow the process instance has taken?
    Obviously the audit trail isn't the right way to do real-time monitoring, right? So maybe you can tell me why there is an audit trail at all. What was the design decision behind it?
    Greetings
    Mike

    Hi Mike,
    While I am certainly not one of the people who designed it, I think I can answer your question.
    The audit trail is what the name implies - it keeps track of all the steps performed by the process instance. It lets you view the instance history, variable content, etc., and lets you see the current state of an in-flight instance - or, to be more exact, the last dehydration point. You can minimize the trail data, or even disable it.
    BAM, however, is real-time monitoring of business or operational data or KPIs. You send data to the BAM engine using sensors, and you only send the data you want to send, when you want to send it. If you don't need real-time monitoring with all the fantastic visual features, alerts, etc. of BAM, you can send the same data to a database or JMS server instead and build your own monitoring.
    hth,
    ~ronen
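    As a small illustration of the last suggestion above (sending the same data to a JMS server and building your own monitoring), here is a minimal JMS 1.1 sketch; the JNDI names and the message format are invented:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    // Sketch: publish a home-grown monitoring event to a JMS queue
    // instead of (or alongside) BAM. JNDI names are hypothetical.
    public class MonitoringPublisher {

        public void publish(String instanceId, String activity) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf =
                    (ConnectionFactory) ctx.lookup("jms/MonitoringCF");
            Queue queue = (Queue) ctx.lookup("jms/MonitoringQueue");

            Connection conn = cf.createConnection();
            try {
                Session session =
                        conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                TextMessage msg = session.createTextMessage(
                        "instance=" + instanceId + ";activity=" + activity);
                session.createProducer(queue).send(msg);
            } finally {
                conn.close();
            }
        }
    }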

  • Need help with a design

    Hi All,
    I have a peculiar requirement and would appreciate your suggestions/inputs in designing this.
    The client needs a custom UI from where they can select the different letters/documents they want to generate. But they want to preview the document and make any changes to it before they print it.
    For example, the user selects letter1, letter2, and letter4 to be generated. He clicks the 'Preview' button and the 3 documents (MS Word) are created. But he wants to change some data in one of the letters (for example, 'Need by Date'), and when he is satisfied, he clicks the 'Generate' button. The Generate button should merge all 3 documents (with the changes made by the user) into 1 PDF and email it to a particular email account.
    I can handle the dynamic template and merge/delivery part, but how can I take care of the case where a user makes changes to the generated report? We are on EBS R12.0.6.
    Any suggestions?
    Thanks,
    Ashish

    You need some kind of reverse engineering: the client changes the generated report, and the change should be reflected back into the template.
    Use an XSL template.
    Edited by: arjamit on Nov 4, 2009 1:47 AM

  • Questioning design decision in java.lang.Character

    Having a look at the source code for java.lang.Character, I see that the Character class explicitly extends Object.
    I am using JRE 1.6.0_07 on Windows XP.
    Now the question is: what is the reason for such a decision, when we all know that any class implicitly extends the Object class?
    public final class Character extends Object implements java.io.Serializable, Comparable<Character> {
    }
    Regards,
    Alan Mehio
    London, UK
    Edited by: alan_mehio on 24-Jul-2009 12:31

    I cannot answer with anything but personal intuition, and give non-conclusive details:
    First, this is not a design decision, merely a style decision, since, as you mention, any class implicitly extends java.lang.Object directly if it does not explicitly extend anything else (and at the bytecode level, the source-level difference is undetectable).
    As far as style is concerned, I would have assumed that the whole JDK team is required to strictly follow consistent rules, but different classes suggest otherwise.
    Sun's public [Code Conventions for the JavaTM Programming Language|http://java.sun.com/docs/codeconv/html/CodeConventions.doc5.html#2991] do not have an explicit rule about this; section 6.4 Class and Interface Declarations provides an example with an extends Object clause, but the rule is not explicit in the text, and the previous section 5.2 provides an example without this clause...
    I went on speculating that the developers of the Character class had a special intent in mind, as they override the Object methods equals() and hashCode(), but other classes in the same package do the same without the explicit extends Object clause (Void, System, Number). At that point I gave up trying to find a reason other than the developers' own style...
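    A quick way to convince yourself that the explicit clause changes nothing is a minimal sketch like this:

    // Two classes that differ only in whether the superclass is
    // spelled out. The compiled class files both simply record
    // java/lang/Object as the superclass.
    class A {                // implicit: extends Object
    }

    class B extends Object { // explicit, as java.lang.Character does
    }

    public class ExtendsObjectDemo {
        public static void main(String[] args) {
            // Both lines print "class java.lang.Object".
            System.out.println(A.class.getSuperclass());
            System.out.println(B.class.getSuperclass());
        }
    }

    Running javap over both compiled classes likewise shows no difference, since the class file only records which class the superclass is, not how the source spelled it.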

  • HELP !!!!!! Design decision...!!!!!

    Hello,
    I am in a dilemma over a design decision. We are developing a business-tier component. This is going to talk to web services on the backend. Right now it is going to integrate with 2 different backend systems through web services. In the future it might support more such backend systems.
    And there are clients (web app, xml app) who interface with the component.
    Most of the data elements passed over to the backend systems are similar for both systems, but some are different.
    Now, is it good design to make 2 different client interfaces for the 2 backend systems, so that clients decide upfront which interface to use? This is the cleaner and easier implementation.
    Or is it better to have a generic interface, where the component then figures out which data to use and which backend system to talk to?
    Please help,
    Thanks

    There are several patterns that could apply, but the most widely used is probably the MVC (Model View Controller) pattern.
    With the pattern the View layer is the front end (in your case this would be the web app / xml app).
    The Controller would be your middle tier, this layer is responsible for relaying requests of the View layer to the Model layer.
    The Model layer would be your backend webservices.
    As said, the controller is responsible for relaying the requests from the view layer to the correct web service. This means you need some way to know how to do this. You can employ several methods.
    You could have different methods for the different webservices, this is the most straight forward way.
    Or you could look at the provided parameters and decide where you need to go based on that. This is slightly more difficult, but when you have two or more webservices that do almost the same thing, this might be the better way to go.
    If you really wanted to make things fancy, you could employ the second method and have the checks based on rules you configure through a dynamically loaded file. This way, you could (theoretically) build your middle tier in such a way that you can add new front ends / back ends without having to redo the middle tier. This might eventually be the cleanest / best way to go, but it is also the most difficult and takes a lot of planning beforehand. A sketch of the parameter-based routing follows below.
    Mark
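    As a sketch of the second method described above (routing on the provided parameters), with invented backend and field names:

    import java.util.Map;

    // Sketch of a controller-layer dispatcher: clients see one generic
    // interface, and the middle tier picks the backend from the request
    // data. The backend classes and the routing key are invented.
    interface BackendGateway {
        String submit(Map<String, String> data);
    }

    class SystemAClient implements BackendGateway {
        public String submit(Map<String, String> data) {
            return "sent to system A: " + data; // real call: web service A
        }
    }

    class SystemBClient implements BackendGateway {
        public String submit(Map<String, String> data) {
            return "sent to system B: " + data; // real call: web service B
        }
    }

    public class BackendDispatcher {
        private final BackendGateway systemA = new SystemAClient();
        private final BackendGateway systemB = new SystemBClient();

        // Route on a field in the payload; a dynamically loaded rules
        // file could replace this check, as suggested above.
        public String handle(Map<String, String> data) {
            String target = data.getOrDefault("targetSystem", "A");
            BackendGateway gw = "B".equals(target) ? systemB : systemA;
            return gw.submit(data);
        }

        public static void main(String[] args) {
            System.out.println(new BackendDispatcher()
                    .handle(Map.of("targetSystem", "B", "amount", "100")));
        }
    }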
