Best practice for web servers behind a router (NAT, ACL, policy-map, VLAN)

Hi,
I'm a new Network admin, and I have some configuration questions about my installation (see attachment).
I have 3 web servers behind a router.
Public interface: 3 public IP addresses
Private interface: router-on-a-stick config (3 sub-interfaces, 3 different networks, 3 VLANs)
I would like to know the best way to redirect HTTP traffic to the right server.
My idea is to map each public address to a private address via static NAT, but I'm not sure about the configuration. I could also redirect via a policy-map and filter by URL content.
If you have any advice for this case, it would be really appreciated.
Thank you.
Chris.

Hello Christophe,
As I understand it, you first want the following:
if somebody goes to A.local.com from the internet, they should be redirected to 192.168.1.10 on your internal network.
That means you need a static mapping between your public IP address and your local IP address.
For this example, your local interface is Fa0/0.1. I don't know your public interface because it is not mentioned in your diagram, so I will assume S0/0.
Here is the config for Web Server 1; you can do the same for the remaining servers:
interface fa0/0.1
 ip nat inside
interface serial0/0
 ip nat outside
ip nat inside source static 192.168.1.10 172.1.2.3
The last command creates the static mapping from local to public.
I assume you have done the DNS mapping in your network and that the ISP has done the same in his.
ip route 172.1.2.3 255.255.255.255 serial0/0
or
ip route 0.0.0.0 0.0.0.0 serial0/0
After these steps for each web server, you will have the mappings in place.
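To check that a translation is installed, you can use the standard NAT show commands (a sketch; output varies with IOS version):

Router# show ip nat translations
Router# show ip nat statistics

You should see the static entry mapping 192.168.1.10 to 172.1.2.3 even when no traffic is flowing, since static translations are always present in the table.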
Now you can restrict access to these addresses to HTTP/HTTPS only, first at your ISP and then on your local network,
like
ip access-list extended ACL_WebServer1
 permit tcp any host 192.168.1.10 eq www
 permit tcp any host 192.168.1.10 eq 443
 deny ip any host 192.168.1.10
exit
interface fa0/0.1
 ip access-group ACL_WebServer1 out
 no shutdown
exit
Note the ACL is applied outbound on fa0/0.1, because traffic destined for the server enters on the public interface and exits on the sub-interface.
That is the first step. 
Second step: you want to filter traffic by URL, which means layer 5 to 7 filtering.
I am not sure whether that is possible on a Cisco router; it may be with ZBF plus regex HTTP inspection.
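If you do explore that route, a rough, untested sketch of ZBF HTTP deep packet inspection with a regex might look like the following (the class/policy names and the pattern are invented for illustration):

parameter-map type regex URI_BLOCK
 pattern ".*/admin.*"
class-map type inspect http match-any CM_HTTP_URI
 match request uri regex URI_BLOCK
policy-map type inspect http PM_HTTP
 class type inspect http CM_HTTP_URI
  reset
  log

That HTTP policy would then be attached with "service-policy http PM_HTTP" under the HTTP class of your zone-pair inspect policy-map. Note this only drops or resets matching requests; it does not redirect them to a different server.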
Check the first step and let us know ! 
Please rate and mark as correct if this answers your question.
Regards,

Similar Messages

  • IronPort ESA best practice for DNS servers?

    Hello!
    Is there a best practice for what servers should be used for the Cisco IronPort DNS servers?
    Currently when I check our configuration, we have set it to "Use these DNS servers" and the first two are our domain controllers and last two are Google DNS.
    Is there a best practice way of doing this? I'm thinking of selecting the "Use the Internet's Root DNS Servers" option as I can't really see an advantage of using internal DC's.
    Thoughts?

    Best practice is to use Internet Root DNS Servers and define specific dns servers for any domain that you need to give different answers for. Since internal mail delivery is controlled by smtproutes using internal dns servers is normally not required.
    If you must use internal dns servers I recommend servers dedicated to your Ironports and not just using servers that handle enterprise lookups as well. Ironports can place a very high load on dns servers because every outside connection results in multiple dns lookups. (forward, reverse, sbrs)
    If you don't have enough dns horsepower you are susceptible to a DOS attack either through accident or design. If the Ironports overload your internal dns servers it can impact your entire enterprise.

  • ACE best practice for proxy servers

    Dear,
    I would like to know which is the best practice scenario to load balance proxy servers:
    1- Best practice to have transparent proxy or proxy setting on the web browser?
    2- for transparent proxy: best practice to use ip wccp or route-map pointing to the ACE VIP?
    3- What are the advantages and disadvantages of transparent proxy V/S web browser proxy setting.
    Regards,
    Pierre

    Hi,
Sorry, that seems to be an internal link.
You can also check the post below, where a sample config for a transparent cache is posted:
    https://supportforums.cisco.com/thread/129106
Best practice:
The VIP would be a catch-all address.
To optimize caching, the predictor hash url is used.
You can also use mac-sticky on the interface so proper flow persistence is maintained within the ACE.
    The mode is transparent so we preserve the destination ip address.
    Regards,
    Siva

  • Best practice for Web Dynpro for Java to connect to SAP HR

What is the best way for a Web Dynpro for Java application deployed in SAP Portal to connect to SAP HR?
Is it good practice to connect to the underlying SAP database (e.g. Oracle) directly to get the data, or is there a better way?
The article below describes connecting to an external DB; however, is there any other way for SAP HR?
    http://wiki.sdn.sap.com/wiki/display/WDJava/WebDynproApplciationwithDatabaseMS+Access

    Hi,
There are 2 supported ways:
The first is to use JCo connections to call ABAP RFC-enabled function modules (BAPIs, for example).
The second is to call SOAP web services (HR enterprise services, for example).
You should never access database tables directly...
    Regards,
    Olivier

  • Best practice for web service call

    I can add a web service using the standard data connection wizard - works fine. I also can do it all in Javascript which give me a bit more flexibility. Is there some guideline or wisdom for which is best?

It all depends on your requirements.
For example, if you know your web service address at design time, then it is better to put it in the data connection tab.
But if your web service address changes at run time based on the environment your application is deployed in, then you can use JavaScript code to change the web service address dynamically.
    Thanks
    Srini

  • Looking for best practices for web delivery of Captivate tutorials

    I have a set of ~50 Adobe Captivate projects that are a mix of UI tour, business process overviews, and how-tos for configuring those processes. These "tutorials" as we call them, vary in length, ~1-3 minutes for the UI tours and overviews, and 4-10 minutes for the how-tos. We use narration, which we recorded in Audacity on Windows and imported as MP3s into Captivate. We are not using menus or interactivity, and are not integrated with an LMS.
The goal is to deliver these tutorials in-context in the UI of our web-based enterprise application. Our customers typically have several thousand users, and when they kick off a new cycle of a business process, hundreds of simultaneous users are logged in, many of whom may view our tutorials when they first log in. This means optimal network performance is critical.
We initially output the Captivate projects to SWF format and used the standard HTML page as a means for a simple, progressive download. Users said it took too long to download before the tutorials could start playing. We tried converting the SWFs to FLV and delivered them via a streaming server. Even after optimizing the data rate and the codec's compression settings, users reported that the volume of data being streamed to simultaneous users is too great and is adversely affecting their networks. As an example of the file sizes I'm talking about, a five-minute tutorial is about 6 MB as a SWF and 8.7 MB as an FLV.
    Can anyone suggest strategies for optimizing delivery of the SWF files, the FLVs, or another format all together? I would like to stay with Captivate because of our investment in training and the fact it is included with our licenses for the Technical Communications 2 Suite. We are using Captivate 4.0.1 build 1658 on Windows XP, and Sorenson Squeeze 6 for compressing the FLVs with the On2 VP6 codec.
    Any suggestions would be greatly appreciated.
    Thanks,
    Alan

    The tutorial I mention in my example below is five minutes in length.
    Alan

  • WSDL generator for web-service behind a router/proxy

    I create jax-ws service and deploy it on local network computer (LS:port1)
    and set up direct port forward from computer in DMZ ES:port2 to LS:port1.
When I browse to http://ES:port2/sevice/serviceport?wsdl it generates an invalid schemaLocation:
    <types>
    <xsd:schema>
    <xsd:import schemaLocation="http://LS:port1/sevice/serviceport?xsd=1"/>
    </xsd:schema>
    <service name="hello">
    <port name="hello" binding="tns:helloBinding">
    <soap:address location="http://LS:port1/sevice/serviceport"/>
    </port>
    </service>
</types>
I thought the links were generated from the request context.
    jdevstudio11123, wl 10.3.5.0
Can I force the service location?

Yes, you can set ENDPOINT_ADDRESS_PROPERTY on the BindingProvider's request context:
requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, "http://example.com/webservices/service1");
For complete code, please see the links below:
    http://docs.oracle.com/cd/E12839_01/web.1111/e13758/jws.htm#autoId11
    http://jax-ws.java.net/articles/MessageContext.html
    Regards,
    Sunil P

  • Best practice for consuming web services

    Hi
We are consuming a web service in an orchestration via "Add Generated Item". Using this option creates 1 orchestration, 1 XSD file, and some bindings.
We have different projects for schemas, maps, and orchestrations under our solution in Visual Studio.
Now I need to know the best practice for consuming a web service in an orchestration, i.e. in which project I should use "Add Generated Item" (the orchestration project or the schemas project), because it generates both an orchestration and a schema.
    thanks

From a service orientation perspective you should abstract the service artifacts from the other artifacts. Otherwise it will be very difficult to update the service interface without affecting the other artifacts. For example, you don't want to have to redeploy your entire application if only one field changes in the service you consume.
    So I typically generate the items, remove the unnecessary stuff, and put them in a separate project.
Depending on the control you have over the services you want to consume, it would even be better to create another layer of abstraction. By that I mean create your own interface (schema) and map it to the one the service exposes. This is basically only necessary if you consume external services that are beyond your control. By abstracting the interface a service exposes, you limit the impact of changes to that interface on the rest of your system. All changes are abstracted behind your own interface.
    If you consume internal services, you can probably control the way the interface is defined. In a service oriented world all internal services expose a well known interface, based on the domain objects you have within your organisation.
    Jean-Paul Smit | Didago IT Consultancy
    Blog |
    Twitter | LinkedIn
    MCTS BizTalk 2006/2010 + Certified SOA Architect
    Please indicate "Mark as Answer" if this post has answered the question.

  • Best practice for intervlan routing?

Are there some best practices for intervlan routing?
I've been reading a lot, and I have seen these scenarios:
router on a stick
intervlan routing at the core layer
intervlan routing at the distribution layer
Or is intervlan routing needed at all if the switches will do the routing?
I've done all of the above, but I just want to know what's current.

    The simple answer is it depends because there is no one right solution for everyone. 
    So there are no specific best practices. For example in a small setup where you may only need a couple of vlans you could use a L2 switch connected to a router or firewall using subinterfaces to route between the vlans.
    But that is not a scalable solution. The commonest approach in any network where there are multiple vlans is to use L3 switches to do this. This could be a pair of switches interconnected and using HSRP/GLBP/VRRP for the vlans or it could be stacked switches/VSS etc. You would then dual connect your access layer switches to them.
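As a minimal sketch of that L3-switch approach, the first switch of such a pair might look like this (VLAN numbers, addresses, and HSRP groups are examples, not taken from the thread):

ip routing
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
interface Vlan20
 ip address 10.1.20.2 255.255.255.0
 standby 20 ip 10.1.20.1

The second switch would get .3 addresses and default HSRP priority, so it takes over the virtual gateway addresses (10.1.10.1, 10.1.20.1) only if the first switch fails.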
In terms of core/distro/access layers, in general if you have separate switches performing each function, you would have the inter-vlan routing done on the distribution switches for all the vlans on the access layer switches. The core switches would be used to route between the distribution switches and other devices, e.g. WAN routers, firewalls, maybe other distribution switch pairs.
    Again, generally speaking, you may well not need vlans on the core switches at all ie. you can simply use routed links between the core switches and everything else. 
    The above is quite a common setup but there are variations eg. -
    1) a collapsed core design where the core and distribution switches are the same pair. For a single building with maybe a WAN connection plus internet this is quite a common design because having a completely separate core is usually quite hard to justify in terms of cost etc.
2) a routed access layer. Here the access layer switches are L3 and the vlans are routed at the access layer. In this instance you may not even need vlans on the distribution switches, although again, to save cost, servers are often deployed onto those switches, so you may.
    So a lot of it comes down to the size of the network and the budget involved as to which solution you go with.
    All of the above is really concerned with non DC environments.
    In the DC the traditional core/distro or aggregation/access layer was also used and still is widely deployed but in relatively recent times new designs and technologies are changing the environment which could have a big impact on vlans.
    It's mainly to do with network virtualisation, where the vlans are defined and where they are not only routed but where the network services such as firewalling, load balancing etc. are performed.
It's quite a big subject, so I didn't want to confuse the general answer by going into it, but feel free to ask if you want more details.
    Jon

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same oracle application server.
    Defining agent for each?
    Gateway for all?
    BPEL security extension?
    The top level service that is defined as business process is secure itself through OWSM and username and passwords, but what is the best practice for security establishment for each low level services?
    Regards
    Farbod

It doesn't matter whether the service is invoked as part of your larger process or not; if it performs any business-critical operation, it should be secured.
The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
Today you may have secured your parent services, and tomorrow you could come up with a new service that uses one of the existing lower-level services.
If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the Gateway.
A typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
You can enforce rules at your network layer to allow access to the app server only from the Gateway.
When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
The next BPEL developer in your project may not be aware of the security extensions.
Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram

  • Best practice for integrating oracle atg with external web service

    Hi All
    What is the best practice for integrating oracle atg with external web service? Is it using integration repository or calling the web service directly from the java class using a WS client?
    With Thanks & Regards
    Abhishek

Using the Integration Repository might cause performance overhead depending on the operation you are doing. I have never used the Integration Repository for 3rd-party integration, so I cannot comment on it.
Calling the service directly as a Java client is an easy approach, and you can use the ATG component framework to support that by making the endpoint, security credentials, etc. configurable properties.
    Cheers
    R
    Edited by: Rajeev_R on Apr 29, 2013 3:49 AM

  • Best Practice for External Libraries Shared Libraries and Web Dynrpo

Two blogs have been written on sharing libraries with Web Dynpro DCs, but I would like to know the best practice for doing this.
External libraries seem to work great at compile time, but when deploying there is often an error related to the external library not being a deployed component.
Is there a workaround for this besides creating a shared J2EE library, which I have been able to get working? I am not interested in something that merely works; I want to know the best practice. What is the best way to limit the number of JARs that need to be kept in a shared/external library? When is sharing a ref service etc. a valid approach vs. hunting down the JARs in the portal libraries and storing them in an external library?

Security is mainly about mitigation rather than being 100% secure; "we have unknown unknowns". The component needs to talk to SQL Server. You could continue to use HTTP to talk to SQL Server, perhaps even get SOAP transactions working, but personally I'd have more worries about using such a less-trodden path, since that is exactly the area where more security problems are discovered. I don't know about your specific design issues, so there might be even more ways to mitigate the risk, but in general you're using a DMZ as a decent way to mitigate risk. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com

  • Best Practice for the Service Distribution on multiple servers

    Hi,
Could you please suggest the best practice for the above?
Requirements: we will use all features in SharePoint (PowerPivot, Search, Reporting Services, BCS, Excel, Workflow Manager, App Management, etc.)
Capacity: we have 12 servers, excluding the SQL server.
Please do not just refer to a URL; suggest as per the requirements.
    Thanks 
    srabon

    How about a link to the MS guidance!
    http://go.microsoft.com/fwlink/p/?LinkId=286957

  • Best practice for loading config params for web services in BEA

    Hello all.
    I have deployed a web service using a java class as back end.
I want to read in config values (like init-params for servlets in web.xml). What is the best practice for doing this in the BEA framework? I am not sure how to use the web.xml file in the WAR file since I do not know the name of the underlying servlet.
    Any useful pointers will be very much appreciated.
    Thank you.


  • Best practice for auto update flex web applications

    Hi all
is there a best practice for auto-updating Flex web applications, much in the same way AIR applications have an auto-update mechanism?
    can you please point me to the right direction?
    cheers
    Yariv

Hey drkstr,
I'm talking about a more complex mechanism that can handle updates to modules being loaded into the application, etc.
I can always query the server for the version and prevent loading from cache when a module needs to be updated,
but I was hoping for something easy like the AIR auto-update feature.
