ESA setup - best practices

I have 2 ESA (C370) appliances running in an active/active setup. Currently each ESA is configured to use only a single port for both inbound and outbound email; the other three ports are not in use. What are the best practices for setting up the ESA?

Interfaces - Only one of the three available Ethernet interfaces on the Cisco appliance is required
for most network environments. However, you can configure two Ethernet interfaces and segregate
your internal network from your external Internet network connection.
Source: ESA_8.0_User_Guide.pdf
You could potentially have a low-security interface, a high-security interface, and a management interface. Some networks need a physical connection into a DMZ, but if your environment doesn't require that, there is no reason to use multiple interfaces.
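If you do split inbound and outbound across interfaces, a quick reachability check from each side is worth doing. Below is a minimal sketch, assuming hypothetical addresses (198.51.100.10 for a public inbound listener, 10.1.1.10 for a private relay listener); the point is simply that each listener should answer on port 25 only from the segment it serves:

    # run from the Internet-facing segment: the public listener should answer
    nc -vz 198.51.100.10 25
    # run from the internal segment: the relay listener should answer here,
    # and ideally not be reachable from outside at all
    nc -vz 10.1.1.10 25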

Similar Messages

  • Workflow setup: Best Practices

    Hi All,
    Could anyone please share knowledge related to Oracle Workflow setup best practices. What are the high-level steps?
    I am looking at an embedded workflow setup for R11 or R12.
    Thanks for your time!
    Regards,

    This is a very broad question - narrowing it to specifics might help folks respond better.
    There are a lot of documents on MOS that refer to best practices from a technology stack perspective.
    Oracle Workflow Best Practices Release 12 and Release 11i          (Doc ID 453137.1)
    As far as functional practices are concerned, these may vary from module to module, as functionality and workflow implementation vary from module to module.
    FAQ: Best Practices For Custom Order Entry Workflow Design          (Doc ID 402144.1)
    HTH
    Srini

  • OS X Server 3.0 new setup -- best practices?

    Alright, here's what I'm after.
    I'm setting up a completely new OS X Server 3.0 environment. It's on a fairly new (1.5-year-old) Mac Mini, plenty of RAM and disk space, etc. This server will ONLY be used internally. It will have a private IP address such as 192.168.1.205, which will be outside of my DHCP server's range (192.168.1.10 to .199) to prevent any IP conflicts.
    I am using Apple's Thunderbolt-to-Ethernet dongle for the primary network connection. The built-in NIC will be used strictly for a direct iSCSI connection to a brand new Drobo b800i storage device.
    This machine will provide the following services, roughly in order of importance:
    1. A Time Machine backup server for about 50 Macs running Mavericks.
    1a. Those networked Macs will authenticate individually to this computer for the Time Machine service.
    1b. This server will get its directory information from my primary server via LDAP/Open Directory.
    2.  Caching server for the same network of computers
    3.  Serve a NetInstall image which is used to set up new computers when a new employee arrives
    4.  Maybe calendaring and contacts service, still considering that as a possibility
    Can anyone tell me the recommended "best practices" for setting this up from scratch?  I've done it twice so far and have faced problems each time.  My most frequent problem, once it's set up and running, is with Time Machine Server.  With nearly 100 percent consistency, when I get Time Machine Server set up and running, I can't administer it.  After a few days, I'll try to look at it via the Server app.  About half the time, there'll be the expected green dot by "Time Machine" indicating it is running and other times it won't be there.  Regardless, when I click on Time Machine, I almost always get a blank screen simply saying "Loading."  On rare occasion I'll get this:
    Error Reading Settings
    Service functionality and administration may be affected.
    Click Continue to administer this service.
    Code: 0
    Either way, sometimes if I wait long enough, I'll be able to see the Time Machine server setup, but not every time.  When I am able to see it, I'll have usability for a few minutes and then it kicks back to "Loading."
    I do see this apparently relevant entry in the logs as seen by Console.app (happens every time I see the Loading screen):
    servermgrd:  [71811] error in getAndLockContext: flock(servermgr_timemachine) FATAL time out
    servermgrd:  [71811] process will force-quit to avoid deadlock
    com.apple.launchd: (com.apple.servermgrd[72081]) Exited with code: 1
    If I fire up Terminal and run "sudo serveradmin fullstatus timemachine" it'll take as long as a minute or more and finally come back with:
    timemachine:command = "getState"
    timemachine:state = "RUNNING"
    I've tried to do some digging on these issues and have been greeted with almost nothing to go on.  I've seen some rumblings about DNS settings, and here's what that looks like:
    sudo changeip -checkhostname
    Primary address = 192.168.1.205
    Current HostName = Time-Machine-Server.local
    The DNS hostname is not available, please repair DNS and re-run this tool.
    dirserv:success = "success"
    If DNS is a problem, I'm at a loss how to fix it.  I'm not going to have a hostname because this isn't on a public network.
    I have similar issues with Caching, NetInstall, etc.
    So clearly I'm doing something wrong. I'm not upgrading; again, this is an entirely clean install. I'm about ready to blow it away and start fresh again, but before I do, I'd greatly appreciate any insight from others on some "best practices" or an ordered list on the best way to get this thing up and running smoothly and reliably.

    Everything in OS X is dependent on proper DNS. You probably should start there. It is the first service you should configure and the most important to keep right. Don't configure any services until you have DNS straight. In OS X, DNS really stands for Do Not Skip.
    This may be your toughest decision.  Decide what name you want the machine to be.  You have two choices.
    1: Buy a valid domain name and use it on your LAN devices. You may not have a need for external use now, but in the future, when you use VPN, Profile Manager, or web services, at least you are prepared. This method is called split-horizon DNS. An example would be apple.com: internally you might name the server tm.apple.com, and then alias vpn.apple.com to it. Externally, users can access the service via vpn.apple.com, while tm.apple.com remains a private address only.
    2: Create an invalid private domain name. This will never route on the web, so if you decide to host content for internal/external use you may run into trouble, especially with services that require SSL certificates. Examples might be ringsmuth.int or andy.priv. These types of domains are non-routable and can result in issues of trust when communicating with other servers, but it is possible.
    Once you have the name sorted out, you need to configure DNS. If you are on a network with other servers, just have the DNS admin create an A and a PTR record for you. If this is your only server, then you need to configure and start the DNS service on Mavericks. The DNS service is the best Apple has ever created: a ton of power in a compact tool. For your needs, you likely just need to hit the + button and fill out the New Device record. Use a fully qualified host name in the first field and the IP address of your server (LAN address). You did use a fixed IP address and disable the wireless card, right?
    Once you have DNS working, you can start configuring your other services. Time Machine should be pretty simple; a share point will be created automatically for you. But before you get there, I would encourage starting Open Directory. Don't do that until DNS is right and you pass the sudo changeip -checkhostname test.
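    A quick way to script those checks before starting Open Directory; a minimal sketch, assuming a hypothetical FQDN of tm.example.priv for the server at 192.168.1.205 from the original post:
        # forward lookup: the name should resolve to the server's fixed LAN address
        host tm.example.priv
        # reverse lookup: the PTR record should return the same FQDN
        host 192.168.1.205
        # Apple's own test; hold off on other services until this stops complaining
        sudo changeip -checkhostname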
    R-
    Apple Consultants Network
    Apple Professional Services
    Author, "Mavericks Server – Foundation Services" :: Exclusively in the iBooks Store

  • Report server setup best practice info needed -SOLVED-

    Hello, I'm looking for some best-practice info on how to set up the report server to handle multiple reports from multiple apps (hopefully in separate directories).
    We are converting Forms 5 apps to 10g. Currently reports live in the same dir as the form files, each in their own application directory. Moving to 10g, the report.conf file specifies a reports dir. It does not seem that multiple directories can be listed in the sourceDir parameter to handle the multiple directories where reports can live. Is it possible to set it up so it can find reports in any of our 20 application directories? Do we have to have only one directory where all reports are served from (if so, we'll have an issue, as reports from different apps could be named the same)?
    How have you folks solved this situation?
    Thanks for any info,
    Gary

    Got it working! Thanks to all for your input! I found a reference on Metalink to a known issue with running on Sun Solaris, which was causing me problems.
    Bottom line, here's what I did to get it all working:
    1) Report server .conf file:
    - Comment out the sourceDir line in the engine entry.
    - Add an environment entry for each app before the closing </server> line at the end:
    <!-- one environment entry per application; the id is referenced from the calling form -->
    <environment id="prs">
    <envVariable name="REPORTS_PATH" value="(path to dir where reports live for this app)"/>
    </environment>
    - Bounce the server (not sure if this is necessary).
    2) $ORACLE_HOME/bin/reports.sh:
    - Comment out the line that sets REPORTS_PATH (see the check sketched after these steps).
    This was necessary on Sun Solaris (the bug mentioned on Metalink).
    3) The app .fmb that calls the report:
    - Set the report object property to specify the environment ID before calling run_report_object():
    set_report_object_property(rpt_id, REPORT_OTHER, 'ENVID="prs"');
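    One way to confirm step 2 took effect is to check that reports.sh no longer sets REPORTS_PATH; a minimal sketch, assuming a standard $ORACLE_HOME layout:
        # any non-commented match means reports.sh still overrides the
        # per-environment REPORTS_PATH values from the .conf file
        grep -n '^ *REPORTS_PATH' $ORACLE_HOME/bin/reports.sh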
    Blue Skies,
    Gary

  • BPC 7M SP6 - best practice for multi server setup

    Experts,
    We are considering purchasing new hardware for our BPC 7M implementation. My question is: what is the recommended or best-practice setup for SQL and Analysis Services? Should they be on the same server or each on a dedicated server?
    The hardware we're looking at would have 4 dual-core processors and 32 GB RAM on an x64 base. Would this adequately support both services?
    Our primary application cube is just under 2 GB and the appset database is about 12 GB. We have over 1400 users and a concurrency count of 250 users. We'll have 5 app/web servers to handle this concurrency.
    Please let me know if I am missing information to be able to answer this question.
    Thank you,
    Hitesh

    I don't think there's really a preference on that point. As long as it's 64-bit, the servers scale well (CPU, RAM), so SQL and SSAS can be on the same server. But it is important to look beyond CPU and RAM and make sure there are no other bottlenecks, such as storage (best practice is to split the database files across several disks, and of course to keep the logs on disks used only for the logs). The memory allocation in SQL and OLAP should also be adjusted so that each has enough memory at all times.
    Another point to consider is high availability. Clustering is quite common on that tier, and you could consider having the active node for SQL on one server and the active node for OLAP (SSAS) on the other. It costs more in SQL licensing, but you get to fully utilize both servers, at the cost of degraded performance in the event of a failover.
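    If SQL and SSAS do share a server, capping SQL Server's memory is the usual way to leave headroom for OLAP. A minimal sketch of doing that from the command line; the server name BPCSQL01 is hypothetical and the 24 GB cap is only an example for a 32 GB box (tune it for your SSAS and OS footprint):
        # 'max server memory' is an advanced option, so expose those first
        sqlcmd -S BPCSQL01 -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        # cap the SQL Server buffer pool at 24 GB, leaving the rest for SSAS and the OS
        sqlcmd -S BPCSQL01 -Q "EXEC sp_configure 'max server memory (MB)', 24576; RECONFIGURE;"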
    Bruno

  • What is the guideline and/or best practice for EMC setup on ASM?

    We are going to use EMC CX4-480 for ASM storage on RAC. What is the guideline and best practice for EMC setup on ASM?
    Thanks for the advice!

    Probably a poor choice of words. Sorry.
    So basically, I have gotten further, but I just noticed related problem.
    I'm using the WebServices(WS) 1.0. I insert an account, then, on a separate WS call, I insert my contacts for the account. I include the AccountID, and a user defined key from the Account when creating the Contact.
    When I look at my Contact on the CRMOD web page, it shows the appropriate links back to the Account. But when I look at my Account on the CRMOD web page, it shows no Contacts.
    So when I say workflow or Best Practice, I was hoping for guidance on how to properly write my code to accomplish all of the necessary steps; as in: this is how you insert an account with its contact(s) and update the appropriate IDs so that everything shows up properly on the CRMOD web pages.
    Based on the above, it looks like the next step is to take the ContactID and update the Account with it so that there is a bi-directional link.
    I'm thinking there is a better way of doing this.
    Here is my pseudocode:
    AccountInsert()             -- returns NewAcctRec (the new AccountID)
    ContactInsert(NewAcctRec)   -- returns NewContRec (the new ContactID)
    AccountUpdate(NewContRec)   -- write the ContactID back to the Account
    Thanks,

  • SAP Business One Best-Practice System Setup and Sizing

    Get recommendations from SAP and hardware specialists on system setup and sizing
    SAP Business One is a single, affordable, and easy-to-implement solution that integrates the entire business across financials, sales, customers, and operations. With SAP Business One, small businesses can streamline their operations, get instant and complete information, and accelerate profitable growth. SAP Business One is designed for companies with fewer than 100 employees, less than $75 million in annual revenue, and between 1 and 30 system users, referred to as the SAP Business One sweet spot. The sweet spot covers various industries and micro-verticals that have different requirements when it comes to the use of SAP Business One.
    One of the initial steps during the installation and implementation of SAP Business One is the definition of the system landscape and architecture. Numerous factors affect the system landscape that needs to be created to efficiently run SAP Business One.
    The SAP Business One Best-Practice System Setup and Sizing Wiki (http://wiki.sdn.sap.com/wiki/display/B1/BestPractiseSystemSetupand+Sizing) provides recommendations on how to size and configure the system landscape and architecture for SAP Business One based on best practices.

    For such high-volume licenses, you may contact the SAP Local Product Experts.
    You can get their contact info from this site:
    https://websmp209.sap-ag.de/~sapidb/011000358700001455542004#India

  • Best practice Forms 10g configuration setup and tuning

    Hi,
    We are currently deploying Forms 10g, coming from the 6i client/server version. Users are experiencing form hang-ups and hourglasses. This does not happen often, but it can happen at any time, anywhere in the app (users do inserts, updates, deletes, and queries).
    Is there a baseline best-practice configuration setup anywhere, either on the Forms side or the AppServer side of things?
    Here is our setup:
    Forms 10g (9.0.4)
    Reports 10g (9.0.4)
    Oracle AppServer 10g (9.0.4)
    OS = RedHat Linux
    Client Workstations run on Windows 2000 and XP w/ Internet Explorer 6 or higher
    Average No. of users = 250
    Thanks for all your help

    Shut down the applications within the guest.
    Either power off from Oracle VM Manager or 'xm shutdown xxx' from the command line.
    It is possible one or more files could be open when the shutdown is initiated.
    We have found at least one case of a misconfigured IP which would have resulted in disk access going via the 'Front End' interface rather than the back end.
    Thanks

  • Best practices for transport of setup of archivelink/contentserver

    Hi
    I'm using the ArchiveLink setup to store all kinds of documents/files and to archive outgoing documents and print lists (print and archive).
    But I don't know how we should transport the settings.
    We need different setups in dev/qa/prd systems because we don't want documents from our development system stored on the same server as the documents from our productive system.
    We have two setups used in different scenarios:
    1) We link the ObjectType/Doc.type to different content repositories in OAC3 (D1 for dev, Q1 for qa and P1 for prd)
    2) We point the content repository to different HTTP servers in OAC0/CSADMIN
    In both scenarios I see two options:
    1) Open for customizing in the QA and PRD systems and maintain the different setups directly in each system.
    2) Transport the PRD content repositories all the way, but delete the transport with the QA content repositories after import into the QA system, and don't transport the DEV content repositories at all.
    Both options are bad practices, but what are Best practices?
    Best regards
    Thomas Madsen Nielsen

    Hi David,
    The best mechanism is probably transporting the objects in the same order you created/changed them. The order would be: Application Components, InfoAreas, InfoObjects, Transfer Structure, Transfer Rules, Communication Structure, InfoCube, Update Rules, InfoPackages, and the frontend components.
    There are many topics on BW transports in the SDN forum; you can search for them.
    You can refer to this link for more details on transports in a BW system:
    http://help.sap.com/saphelp_nw04/helpdata/en/b5/1d733b73a8f706e10000000a11402f/frameset.htm
    Bye
    Dinesh

  • URL category best practices for ESA 8.5.6-074

    In the new version 8.5.6-074 of the ESA C170, what are the best practices for applying the new URL Category feature?
    Is it possible to create filters that quarantine mails based on URL filtering? If so, could you upload a sample script (for example, one that quarantines emails that have adult links in the body)?

    You should be able to do it with a content filter. AsyncOS 8.5 adds content filter conditions for URL Category and URL Reputation, so an incoming content filter whose condition matches the Adult category and whose action is Quarantine should cover your example; check the URL-filtering chapter of the AsyncOS 8.5 user guide for the exact condition names in your build.

  • Best Practices for Workshop IDE (Development Workstation Setup)

    Is there any Oracle documentation that describes best practices for setting up Workshop and developing on a workstation that includes Oracle's ODSI, OSB, Portal, and WLI? We are using all these products on a WebLogic server on each developer's machine and experiencing performance and reliability issues. What's the optimal way to use these products on a developer's workstation? Thanks.

    Hi,
    Currently there is no such best-practices site within Workshop,
    but you can resolve most issues from the docs:
    http://docs.oracle.com/cd/E13224_01/wlw/docs103/
    If you need any further assistance, let me know.
    Regards,
    Kal

  • Best Practice to Setup an application to work with both oracle and db2 db

    Hi,
    We have an application that currently supports both Oracle and DB2 databases. It currently uses JPA with EclipseLink as the backend mechanism, and we want to move to ADF BC as our backend. So what is the best practice for doing this?
    I came across an old post at https://groups.google.com/forum/#!topic/adf-methodology/UlJZSTu14Io that suggests creating two different model projects to support Oracle and DB2.
    Is this still the standard? Is there a way we could work around this rather than creating multiple projects?
    How do I get the view controller to work with multiple model projects if this is the case?
    Thank you.

    Thanks for the response.
    The problem is I would have the same schema on both databases (both in Oracle and DB2).
    I don't see a scenario where I want both application model projects at the same time. What I meant is, the application will be deployed with either Oracle DB or DB2, but not both, in a production environment.
    So is there a way where I just change the connection parameters alone, rather than having two different models?
    If I have to use two model projects, would it be possible, say, to build the view controller for Oracle and use it for DB2, if I make sure the BC object names are similar between the model projects and switch just the model jar based on the deployment environment?
    -Sam

  • Best practice on test region setup...

    We want to point our dev environment to a test database and our prod to our production database, the reason being that we transform our data and also build our star schema ourselves. For any changes we make to bring in new data, we want to be able to test using our test repository. Any best practices on how to do this? We have noticed issues when we change the physical-layer tables of the rpd from one db to the other; basically we have crashed the system doing this during testing. If we have one dedicated repository for test pointing to the test db, and one for prod pointing to our prod db, what is the easiest and most foolproof way to copy rpd changes from one environment to the other? If any of you have done this, please drop in a line on how you accomplished it.
    Thanks much!
    Arch

    Right now we are doing entire rpd copies, as we are pointing to the production database from both rpds.
    But the problem is that our dev environment's physical layer is schema A and the prod environment has db schema B. We want to make changes to the underlying table data in schema A, test using the rpd pointing to that schema, and then, once everything is OK, move the changes to the production db and the prod repository. So I just want to merge the business model and presentation layers. We will try the Oracle suggestion, but from what I have been reading, merging is error-prone, and we did not have much luck with it the one time we tried.

  • Best Practice for Production IDM setup

    Hi, what is the best practice for setting up production IDM:
    1. Connect IDM prod to ECC DEV, QA, and PRD, or
    2. Connect IDM prod to ECC PRD only, and connect IDM dev to ECC DEV and QA.
    Please also specify pros and cons for both options if possible.
    Thanks in advance,
    Farhan

    We run our IDM installation as per your option 2 (Prod and non-prod on separate instances)
    We use HCM as the source of truth in production and have a strict policy of not allowing non-HCM-based user accounts. HCM creates the SU01 record and the details are downloaded to IDM through the LDAP extract. Access is provisioned based on Roles attached to the HCM Position in IDM. In dev/test/UAT we create user logins in IDM and push the details out.
    Our thinking was that we definitely needed a testing environment for development and patch testing, and it needed to be separate from production. It was also ideal to use this second environment for dev/test/UAT, since we are in the middle of a major SAP project rollout and are creating hundreds of test and training users with various roles, and prefer to keep this out of a production instance.
    Lately we also created a sandpit environment, since I found I could not do destructive testing or development in the dev/test/UAT instance because we were becoming reliant on that environment being available. It is almost a second production instance, since we also set the policy that all changes are made through IDM and no direct SU01 changes are permitted.
    Have a close look at your usage requirements before deciding which structure works best for you.

  • IronPort ESA best practice for DNS servers?

    Hello!
    Is there a best practice for what servers should be used for the Cisco IronPort DNS servers?
    Currently when I check our configuration, we have it set to "Use these DNS servers"; the first two are our domain controllers and the last two are Google DNS.
    Is there a best-practice way of doing this? I'm thinking of selecting the "Use the Internet's Root DNS Servers" option, as I can't really see an advantage to using internal DCs.
    Thoughts?

    Best practice is to use the Internet root DNS servers and define specific DNS servers for any domain where you need to give different answers. Since internal mail delivery is controlled by SMTP routes, using internal DNS servers is normally not required.
    If you must use internal DNS servers, I recommend servers dedicated to your IronPorts rather than servers that also handle enterprise lookups. IronPorts can place a very high load on DNS servers, because every outside connection results in multiple DNS lookups (forward, reverse, SBRS).
    If you don't have enough DNS horsepower, you are susceptible to a DoS attack, whether by accident or by design. And if the IronPorts overload your internal DNS servers, that can impact your entire enterprise.
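    If you want to compare the root-server approach against a forwarder before switching, dig makes that easy. A minimal sketch, using example.com as a stand-in domain and the Google resolver mentioned in the question:
        # resolve iteratively from the root servers, the way the appliance would
        dig +trace example.com MX
        # the same query via the forwarder; compare the "Query time" lines
        dig @8.8.8.8 example.com MX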
