Best practices for NAT/PAT?

Greetings:
My setup is a Cisco 1811 serving as a router/firewall for several Windows 2003 servers at an ISP. I've configured NAT on the router to expose the HTTP, HTTPS, and SMTP ports of each server on a unique public IP address within my x.x.x.230/29 address space.
The WAN port on the 1811 is configured with x.x.x.230/29. For server A I NAT ports 25, 80, and 443 on that same x.x.x.230 address, and I also manage the 1811 itself over SSH on that address.
For server B (local IP 192.168.0.3) I NAT ports 25, 80, and 443 on x.x.x.231. For server C (local IP 192.168.0.4) I NAT the same ports on x.x.x.232.
Can anyone critique this configuration and suggest a best-practice topology for providing routing, VPN, and firewall functionality for these servers?
My question arises because I now have a site-to-site VPN with server B at the local end, and I am unable to connect to server B's SMTP port because of the NAT statements below. I can confirm this is the cause: when I remove the statement, I am able to connect.
Here is the NAT section of the show run:
ip nat inside source static tcp 192.168.0.2 443 interface FastEthernet0 443
ip nat inside source static tcp 192.168.0.2 80 interface FastEthernet0 80
ip nat inside source route-map SDM_RMAP_1 interface FastEthernet0 overload
ip nat inside source static tcp 192.168.0.3 25 x.x.x.231 25 extendable
ip nat inside source static tcp 192.168.0.3 80 x.x.x.231 80 extendable
ip nat inside source static tcp 192.168.0.3 443 x.x.x.231 443 extendable
ip nat inside source static tcp 192.168.0.4 25 x.x.x.232 25 extendable
ip nat inside source static tcp 192.168.0.4 80 x.x.x.232 80 extendable
ip nat inside source static tcp 192.168.0.4 443 x.x.x.232 443 extendable
I would appreciate any and all comments on the way it is currently configured, as well as:
- How I might change the config to follow a best-practice arrangement for the router/firewall and these servers.
TIA

Hi,
Static PAT is the same as static NAT, except it lets you specify the protocol (TCP or UDP) and port for the local and global addresses.
This lets you reuse the same global address across many different static statements, as long as the port is different in each statement (you CANNOT use the same global address in multiple plain static NAT statements).
For example, if you want to provide a single address for global users to access FTP, HTTP, and SMTP, but these are actually different servers on the local network, you can configure a static PAT statement for each server that uses the same global IP address but a different port.
And with PAT you cannot use the same pair of local and global addresses in multiple static statements between the same two interfaces.
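For illustration, here is a minimal sketch of that idea in IOS; the 10.0.0.x hosts and the shared x.x.x.230 global address below are placeholders for the example, not lines from your config:
! one global address shared by three different inside servers,
! distinguished only by the destination TCP port
ip nat inside source static tcp 10.0.0.10 21 x.x.x.230 21 extendable
ip nat inside source static tcp 10.0.0.11 80 x.x.x.230 80 extendable
ip nat inside source static tcp 10.0.0.12 25 x.x.x.230 25 extendable
Each statement forwards a single port of the shared global address to a different inside host, which is the same pattern your x.x.x.230 entries follow for server A.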
Regards
Bjornarsb

Similar Messages

  • Question on best practice for NAT/PAT and client access to firewall IP

    Imagine that I have this scenario:
    Client(IP=192.168.1.1/24)--[CiscoL2 switch]--Router--CiscoL2Switch----F5 Firewall IP=10.10.10.1/24 (only one NIC, there is not outbound and inbound NIC configuration on this F5 firewall)
    One of my users is complaining about the following:
    When clients receive traffic from the F5 firewall (apparently the firewall is doing PAT, not NAT), the clients see the IP address 10.10.10.1.
    Do you see this as a problem? Should I make another IP address range available and do NAT properly so that clients will not see the firewall IP address? I don't see this situation as a problem, but please let me know if I am wrong.


  • Best practice for Global Address?

    Good Morning,
    I am new to Cisco firewalls and would like to know the best practice for exposing an external IP address and port into my network and then redirecting that traffic to a specific machine. I am thinking of using a global IP address and then only allowing this type of traffic to talk to the specific destination on that specific port. Is this the correct course of action? Or is there a better or more efficient way of doing this using ASDM?
    Troy
    Message was edited by: Troy Currence

    Hi,
    Basically, when you want to allow traffic from the external public network to some of your servers/hosts, you will use either Static NAT or Static PAT.
    Static NAT is when you bind a single public IP address to be used by only one internal host. This is usually the preferred option if you can spare a public IP address for your server, meaning you probably have a small public subnet from your ISP.
    Static PAT is when you only allocate certain ports on your public IP address and map them to a local port on the host. This is usually the option when you only have a single public IP address, configured on your ASA's external interface, or perhaps when you just want to conserve your public IP addresses even though you have a few of them.
    In the Static NAT case, you configure the Static NAT and use the interface ACL to allow the services you require.
    With Static PAT you only create a translation for a specific port/service, so only connections to that port are possible. Naturally, you also have to allow those services/ports in the interface ACL, just as with Static NAT.
    Again, if you can spare the public IP addresses I would go with Static NAT; if you only have a single or a few IP addresses, you can also consider Static PAT (port forwarding).
    I don't personally use ASDM for configurations, but I can help you with the required CLI-format configurations. These can actually be entered through ASDM as well, from the Tools -> Command Line Interface menu at the top.
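    To give an idea of the CLI side, here is a rough sketch using the ASA 8.3+ object NAT syntax; the object names and the 192.168.1.x / 203.0.113.x addresses are invented for the example and would need to be replaced with your own:
    ! Static NAT: dedicate one spare public IP to one inside host
    object network WEB-SERVER
     host 192.168.1.10
     nat (inside,outside) static 203.0.113.10
    !
    ! Static PAT (port forward): map only TCP/80 of the outside interface IP to an inside host
    object network WEB-SERVER-HTTP
     host 192.168.1.11
     nat (inside,outside) static interface service tcp 80 80
    !
    ! In both cases the interface ACL must still permit the service
    ! (from 8.3 onwards the ACL references the real/inside addresses)
    access-list OUTSIDE_IN extended permit tcp any object WEB-SERVER eq 443
    access-list OUTSIDE_IN extended permit tcp any object WEB-SERVER-HTTP eq 80
    access-group OUTSIDE_IN in interface outside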
    Hope this helps
    - Jouni

  • Best Practices for zVM/SLES10/zDB2 environment for dialog instances.

    Hi,  I am a zSeries system programmer who has just completed an IBM led Proof of Concept which demonstrated the viability of running SAP instances on SUSE SLES10 Linux booted in zVM guests and accessing zDB2 data via hipersockets. Before we build a Linux infrastructure using the 62 IFLs we just procured, we are wondering if any best practices for this environment have been developed as an OSS note or something else by SAP.    Below you will find an email which was sent and responded to by IBM and Novell on these topics...
    "As you may know, Home Depot has embarked on an IBM led proof of concept using SUSE SLES10 running in zVM guests on IBM zSeries hardware to host SAP server instances.  The Home Depot IT organization is currently in the midst of a large scale push to modernize our merchandising and people systems on SAP platforms.  The zVM/SUSE/SAP POC is part of that effort, as is a parallel POC of an Intel Blade/Red Hat/SAP platform.  For our production financial systems we now use a pSeries/AIX/SAP platform.
          So far in the zVM/SUSE/SAP POC, we have been able to create four zVM LPARS on IBM z9 hardware, create twelve zVM guests on those LPARS, boot SLES10 in those guests, install and run SAP instances in those guests using hipersockets for access to our DB2 SAP databases running on zOS, and direct user workloads to the SAP instances with good results.  We have also successfully developed cloning scripts that have made it possible to create new SLES10 instances, configured and ready for SAP installs, in about 10 seconds using FLASHCOPY and IBM DASD.
          I am writing in the hope that you can direct us to technical resources at IBM/Novell/SAP who may be able to field a few questions that have arisen.  In our discussions about optimization of the zVM/SUSE/SAP platform, we wondered if any wisdom about the appropriateness of and support for using zVM capabilities to virtualize SAP has ever been developed or any best practices drafted.  Attached you will find an IBM Redbook and a PowerPoint presentation which describes the use of the zVM discontiguous shared segments and the zVM named saved system features for the sharing of reentrant code and other  elements of Linux and its applications, thereby conserving storage and disk resources allocated to guest machines.   The specific question of the hour is, can any SAP code be handled similarly?  Have specific SAP elements eligible for this treatment been identified? 
          I've searched the SUSE Knowledgebase for articles on this topic to no avail.  Any similar techniques that might help us reduce the total cost of ownership of a zVM/SUSE/SAP platform as we compare it to Intel Blade/Red Hat/SAP and pSeries/AIX/SAP platforms are of great interest as we approach the end of our POC.  Can you help?
          Greg McKelvey is a Client I/T Architect at IBM.  He found the attached IBM documents and could give a fuller account of our POC.  Pat Downs, IBM zSeries IT Architect, has also worked to guide our POC. Akshay Rao, IBM Systems IT Specialist - Linux | Virtualization | SOA, is acting as project manager for the POC.  Jim Hawkins is the Home Depot Architect directing the POC.  I've CC:ed their email addresses.  I am sure they would be pleased to hear from you if there are the likely questions about what the heck I am asking about here.  And while writing, I thought of yet another question that I hoping somebody at SAP might weigh in on; are there any performance or operational benefits to using Linux LVM to apportion disk to filesystems vs. using zVM to create appropriately sized minidisks for filesystems without LVM getting involved?"
    As you can see, implementation questions need to be resolved.  We have heard from Novell that the SLES10 Kernel and other SUSE artifacts can reside in memory and be shared by multiple operating system images.  Does SAP support this configuration?  Also, has SAP identified SAP components which are eligible for similar treatment?  We would like to make sure that any decisions we make about the SAP platforms we are building will be supportable.  Any help you can provide will be greatly appreciated.  I will supply the documents referenced above if they are not known to any answerer.  Thanks,  Al Brasher 770-433-8211 x11895 [email protected]

    Hello Al,
    First, let me welcome you on board; I am sure you won't be disappointed with your choice to run SAP on z/OS.
    As for your questions, it wasn't easy to find them in this long post, so I suggest you take the time to write a short summary that contains a very short list of questions.
    As for answers, here are a few useful sources of information:
    1. The SAP on DB2 for z/OS SDN page:
    SAP on DB2 for z/OS
    In it you can find two relevant docs:
    a. Best Practices for ...
    b. Database Administration for DB2 UDB for z/OS.
    This second publication is excellent; apart from DB2-specific info, it contains information on all the components of SAP on DB2 for z/OS, like zLinux, z/VM and so on.
    2. I can see that you are already familiar with the IBM Redbooks, but it seems that you haven't taken the time to get the most out of that resource. From your post it is clear that you have found one useful publication, but I know there are several.
    3. A few months ago I wrote a short post on a similar subject. I'm sure it's not exactly what you are looking for at this moment, but it's a good start, and with some patience you may be able to get some answers. Here's a link:
    http://blogs.ittoolbox.com/sap/db2/archives/index-of-free-documentation-on-sap-db2-administration-14245
    Good luck,
    Omer Brandis

  • Best practice for ASA Active/Standby failover

    Hi,
    I have configured a pair of Cisco ASAs in Active/Standby mode (see attached). What can be done to allow traffic to go from R1 to R2 via ASA2 when ASA1's inside or outside interface is down?
    Currently this happens only when ASA1 is down (shut down). Is there any recommended best practice for such network redundancy? Thanks in advance!

    Hi Vibhor,
    I tested pinging from R1 to R2, and the pings drop when I shut down either the inside (g1) or outside (g0) interface of the active ASA. Below is the ASA 'show failover' and 'show run' output:
    ASSA1# conf t
    ASSA1(config)# int g1
    ASSA1(config-if)# shut
    ASSA1(config-if)# show failover
    Failover On
    Failover unit Primary
    Failover LAN Interface: FAILOVER GigabitEthernet2 (up)
    Unit Poll frequency 1 seconds, holdtime 15 seconds
    Interface Poll frequency 5 seconds, holdtime 25 seconds
    Interface Policy 1
    Monitored Interfaces 3 of 60 maximum
    Version: Ours 8.4(2), Mate 8.4(2)
    Last Failover at: 14:20:00 SGT Nov 18 2014
            This host: Primary - Active
                    Active time: 7862 (sec)
                      Interface outside (100.100.100.1): Normal (Monitored)
                      Interface inside (192.168.1.1): Link Down (Monitored)
                      Interface mgmt (10.101.50.100): Normal (Waiting)
            Other host: Secondary - Standby Ready
                    Active time: 0 (sec)
                      Interface outside (100.100.100.2): Normal (Monitored)
                      Interface inside (192.168.1.2): Link Down (Monitored)
                      Interface mgmt (0.0.0.0): Normal (Waiting)
    Stateful Failover Logical Update Statistics
            Link : FAILOVER GigabitEthernet2 (up)
            Stateful Obj    xmit       xerr       rcv        rerr
            General         1053       0          1045       0
            sys cmd         1045       0          1045       0
            up time         0          0          0          0
            RPC services    0          0          0          0
            TCP conn        0          0          0          0
            UDP conn        0          0          0          0
            ARP tbl         2          0          0          0
            Xlate_Timeout   0          0          0          0
            IPv6 ND tbl     0          0          0          0
            VPN IKEv1 SA    0          0          0          0
            VPN IKEv1 P2    0          0          0          0
            VPN IKEv2 SA    0          0          0          0
            VPN IKEv2 P2    0          0          0          0
            VPN CTCP upd    0          0          0          0
            VPN SDI upd     0          0          0          0
            VPN DHCP upd    0          0          0          0
            SIP Session     0          0          0          0
            Route Session   5          0          0          0
            User-Identity   1          0          0          0
            Logical Update Queue Information
                            Cur     Max     Total
            Recv Q:         0       9       1045
            Xmit Q:         0       30      10226
    ASSA1(config-if)#
    ASSA1# sh run
    : Saved
    ASA Version 8.4(2)
    hostname ASSA1
    enable password 2KFQnbNIdI.2KYOU encrypted
    passwd 2KFQnbNIdI.2KYOU encrypted
    names
    interface GigabitEthernet0
     nameif outside
     security-level 0
     ip address 100.100.100.1 255.255.255.0 standby 100.100.100.2
     ospf message-digest-key 20 md5 *****
     ospf authentication message-digest
    interface GigabitEthernet1
     nameif inside
     security-level 100
     ip address 192.168.1.1 255.255.255.0 standby 192.168.1.2
     ospf message-digest-key 20 md5 *****
     ospf authentication message-digest
    interface GigabitEthernet2
     description LAN/STATE Failover Interface
    interface GigabitEthernet3
     shutdown
     no nameif
     no security-level
     no ip address
    interface GigabitEthernet4
     nameif mgmt
     security-level 0
     ip address 10.101.50.100 255.255.255.0
    interface GigabitEthernet5
     shutdown
     no nameif
     no security-level
     no ip address
    ftp mode passive
    clock timezone SGT 8
    access-list OUTSIDE_ACCESS_IN extended permit icmp any any
    pager lines 24
    logging timestamp
    logging console debugging
    logging monitor debugging
    mtu outside 1500
    mtu inside 1500
    mtu mgmt 1500
    failover
    failover lan unit primary
    failover lan interface FAILOVER GigabitEthernet2
    failover link FAILOVER GigabitEthernet2
    failover interface ip FAILOVER 192.168.99.1 255.255.255.0 standby 192.168.99.2
    icmp unreachable rate-limit 1 burst-size 1
    asdm image disk0:/asdm-715-100.bin
    no asdm history enable
    arp timeout 14400
    access-group OUTSIDE_ACCESS_IN in interface outside
    router ospf 10
     network 100.100.100.0 255.255.255.0 area 1
     network 192.168.1.0 255.255.255.0 area 0
     area 0 authentication message-digest
     area 1 authentication message-digest
     log-adj-changes
     default-information originate always
    route outside 0.0.0.0 0.0.0.0 100.100.100.254 1
    timeout xlate 3:00:00
    timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
    timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
    timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
    timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
    timeout tcp-proxy-reassembly 0:01:00
    timeout floating-conn 0:00:00
    dynamic-access-policy-record DfltAccessPolicy
    user-identity default-domain LOCAL
    aaa authentication ssh console LOCAL
    http server enable
    http 10.101.50.0 255.255.255.0 mgmt
    no snmp-server location
    no snmp-server contact
    snmp-server enable traps snmp authentication linkup linkdown coldstart warmstart
    telnet timeout 5
    ssh 10.101.50.0 255.255.255.0 mgmt
    ssh timeout 5
    console timeout 0
    tls-proxy maximum-session 10000
    threat-detection basic-threat
    threat-detection statistics access-list
    no threat-detection statistics tcp-intercept
    webvpn
    username cisco password 3USUcOPFUiMCO4Jk encrypted
    prompt hostname context
    no call-home reporting anonymous
    call-home
     profile CiscoTAC-1
      no active
      destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService
      destination address email [email protected]
      destination transport-method http
      subscribe-to-alert-group diagnostic
      subscribe-to-alert-group environment
      subscribe-to-alert-group inventory periodic monthly
      subscribe-to-alert-group configuration periodic monthly
      subscribe-to-alert-group telemetry periodic daily
    crashinfo save disable
    Cryptochecksum:fafd8a885033aeac12a2f682260f57e9
    : end
    ASSA1#

  • Best practices for setting up users on a small office network?

    Hello,
    I am setting up a small office and am wondering what the best practices/steps are to set up and manage the admin and user logins and sharing privileges for the setup below:
    Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
    Video Editing Suite: Want to connect a new iMac and a Mac Pro, on an open login (multiple users)
    All machines need to be able to connect to the network, peripherals, and the external hard drive. Also, I would like to set up drop boxes to easily share files between the computers (I was thinking of using the external hard drive for this).
    Thank you,

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more detailed information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang

  • Best-practice for Catalog Views ? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000), and 300 end-users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is how I can implement this.
    My first idea is:
    1. Create 110 procurement catalogs (one for every product category). Each catalog should contain only its product category.
    2. In my Org Model, assign at the user level all the "catalogs" that the user should access.
    Do you have any ideas on how to improve this?
    Saludos desde Mexico,
    Diego

    Hi,
    Your approach will work, but you'll run into maintenance issues (too many catalogs, and catalog links to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
    My advice:
    - Challenge your customer about views and try to limit their number, for example to strategic and non-strategic.
    - With CCM 1.0, stick to the procurement catalogs, or implement BAdIs to assign items to the views (I have done it; it works, but it is quite difficult), but keep the number of views limited.
    Good luck.
    Vadim

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same oracle application server.
    Defining agent for each?
    Gateway for all?
    BPEL security extension?
    The top-level service that is defined as the business process is itself secured through OWSM with usernames and passwords, but what is the best practice for establishing security for each of the low-level services?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not; if it performs any business-critical operation, it should be secured.
    The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service that uses one of the existing lower-level services.
    If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the Gateway.
    The typical problem with any gateway architecture is that the service is available without any security enforcement when it is accessed directly.
    You can enforce rules at your network layer to allow access to the app server only from the Gateway.
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    - The next BPEL developer in your project may not be aware of the security extensions.
    - Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram

  • Best practice for multi-language content in common areas

    I've got a site with some text in the header/footer/nav that needs to be translated between an English and a Spanish site, which use the same design. My intention was to set up all the text as content to facilitate this. However, if I use a standard dialog with the component's path set to a child of the current page node, I would need to re-enter the text on every page. If I use a design dialog, or a standard dialog with the component's path set absolutely, the English and Spanish sites will share the same text. If I use a standard dialog with the component's path set relatively (e.g. path="../../jcr:content/myPath"), the pages using the component would all need to be at the same level of the hierarchy.
    It appears that the Geometrixx demo doesn't address this situation, and leaves copy in English. Is there a best practice for this scenario?

    I'm finding that something to the effect of <cq:include path="<%= strCommonContentPath + "codeEntry" %>" resourceType ...
    works fine for most components, but not for parsys, or a component containing a parsys. When I attempt that, I get a JS error that says "design.path is null or not an object". Is there a way around this?

  • Best Practice for utility in Sol Man 4.0

    We have software component ST-ICO of release 150_700 with Patch level 5
    We want a Template Selection for the ‘Utility’ industry. I checked in the Service Marketplace and found that 'Baseline Package United Kingdom V1.50, Template: BP_BLKU150' is available in the above software component.
    But we are not getting any templates other than 'BP_UTUS147 - Best Practices for Water Utility' in the 'SOLAR_PROJECT_ADMIN' transaction.
    Kindly suggest whether any patch needs to be applied or some configuration needs to be done.
    Regards
    Mani

    Hi Mani,
       Could you please give me the link to where you found the template BP_BLKU150?
    It will be helpful for me.
    Thanks
    Senthil

  • Best Practices for SRM Installation !!

    Hi
    Can someone share the best practices for SRM installation?
    What is the typical timeframe to install SRM on the development server as well as on the production server?
    I appreciate the responses.
    Thanks,
    Arvind

    Hi
    I don't know whether this will help you.
    See these links as well.
    <b>http://help.sap.com/bp_epv170/EP_US/HTML/Portals_intro.htm
    http://help.sap.com/bp_scmv150/index.htm
    http://help.sap.com/bp_biv170/index.htm
    http://help.sap.com/bp_crmv250/CRM_DE/index.htm</b>
    Hope this will help.
    Please reward suitable points.
    Regards
    - Atul

  • Best practices for ARM - please help!!!

    Hi all,
    Can you please help with any pointers/links to documents describing best practices for who should be creating the GRC request in the below ARM workflow in GRC 10.0?
    Create GRC request -> role approver -> risk manager -> security team
    The options are: end user / manager / functional super users / security team.
    End user and manager are not possible - we cannot train that many people. The functional team is refusing since it is a lot of work. Please help me with pointers to any best-practices documents.
    Thanks!!!!

    In this case, I recommend proposing that the department managers create GRC Access Requests.  In order for the managers to comprehend the new process, you should create a separate "Role Catalog" that describes what abilities each role enables.  This Role Catalog needs to be taught to the department Managers, and they need to fully understand what tcodes and abilities are inside of each role.  From your workflow design, it looks like Role Owners should be brought into these workshops.
    You might consider a Role Catalog that the manager could filter on and make selections from.  For example, an AP manager could select "Accounts Payable" roles, and then choose from a smaller list of AP-related roles.  You could map business functions or tasks to specific technical roles.  The design flaw here, of course, is the way your technical roles have been designed.
    The point being, GRC AC 10 is not business-user friendly, so using an intuitive "Role Catalog" really helps the managers understand which technical roles they should be selecting in GRC ARs.  They can use this catalog to spit out a list of technical role names that they can then search for within the GRC Access Request.
    At all costs, avoid having end-users create ARs.  They usually select the wrong access, and the process then becomes very long and drawn out because the role owners or security stages need to mix and match the access after the fact.  You should choose a Requestor who has the highest chance of requesting the correct access.  This is usually the user's Manager, but you need to propose this solution in a way that won't scare off the manager - at the end of the day, they do NOT want to take on more work.
    If you are using SAP HR, then you can attempt HR Triggers for New User Access Requests, which automatically fill out and submit the GRC AR upon a specific HR action (New Hire, or Termination).  I do not recommend going down this path, however.  It is very confusing, time consuming, and difficult to integrate properly.
    Good luck!
    -Ken

  • Best Practices For Household IOS's/Apple IDs

    Greetings:
    I've been searching support for best practices for sharing primarily apps, music, and video among multiple iOS devices/Apple IDs. If there is a specific article, please point me to it.
    Here is my situation: 
    We currently have 3 iPads (2 kids, 1 dad) in the household and one iTunes account on a Windows computer. I previously had all iPads on a single Apple ID/credit card and controlled the kids' downloads through the Apple ID password, which I kept secret. As the kids have grown older, I found myself constantly entering my password as they increased their interest in music/apps/video. I liked this approach because all content was shared; I disliked it because I was constantly asked to input the password for all downloads.
    So I recently set up individual accounts for them with the allowance feature in iTunes, which allows them to download content on their own (I set restrictions on their iPads). Now I have 3 Apple IDs under one household.
    My questions:
    With the 3 Apple IDs, what is the best way to share apps,music, videos among myself and the kids?  Is it multiple accounts on the computer and some sort of sharing? 
    Thanks in advance...

    Hi Bonesaw1962,
    We've had our staff and students run iOS updates OTA via Settings -> Software Update. In the past, we put a DNS block on Apple's update servers to prevent users from updating iOS (like last fall when iOS 7 was first released). By blocking mesu.apple.com, the iPads weren't able to check for or install any iOS software updates. We waited until iOS 7.0.3 was released before we removed the block to mesu.apple.com, at which point we told users that if they wanted to update to iOS 7 they could do so OTA. We used our MDM to run reports periodically to see how many people updated to iOS 7 and how many stayed on iOS 6. As time went on, just about everyone updated on their own.
    If you go this route (depending on the number of devices you have), you may want to take a look at Caching Server 2 to help with the network load https://www.apple.com/osx/server/features/#caching-server . From Apple's website, "When a user on your network downloads new software from Apple, a copy is automatically stored on your server. So the next time other users on your network update or download that same software, they actually access it from inside the network."
    I wish there was a way for MDMs to manage iOS updates, but unfortunately Apple hasn't made this feature available to MDM providers. I've given this feedback to our Apple SE, but haven't heard if it is being considered or not. Keeping fingers crossed.
    Hope this helps. Let us know what you decide on and keep us posted on the progress. Good luck!!
    ~Joe

  • Basic Strategy / Best Practices for System Monitoring with Solution Manager

    I am very new to SAP and the Basis group at my company. I will be working on a project to identify the best practices of System and Service level monitoring using Solution Manager. I have read a good amount about SAP Solution Manager and the concept of monitoring but need to begin mapping out a monitoring strategy.
    We currently utilize the RZ20 transaction and basic CCMS monitors, such as watching for update errors, availability, short dumps, etc. What else should be monitored in order to proactively find possible issues? Are there any best practices you have found when implementing monitoring for new solutions added to the SAP landscape? What are common things we would want to monitor for, say, ERP, CRM, SRM, etc.?
    Thanks in advance for any comments or suggestions!

    Hi Mike,
    Did you try the following link ?
    If not, it may be useful to some extent:
    http://service.sap.com/bestpractices
    ---> Cross-Industry Packages ---> Best Practices for Solution Management
    You have quite a few documents there - those on BPM may also cover Solution Monitoring aspects.
    Best regards,
    Srini
    Edited by: Srinivasan Radhakrishnan on Jul 7, 2008 7:02 PM

  • Best practice for integrating a 3 point metro-e in to our network.

    Hello,
    We have just started to integrate a new 3-point metro-e WAN connection to our main school office. We are moving from point-to-point T-1s to 10 MB metro-e. At the main office we have a 50 MB circuit going out to 3 other sites at 10 MB each. For two of the remote sites we have purchased new routers, which should be straightforward configurations. We are having an issue connecting the main office with the 3rd site.
    At the main office we have a Catalyst 4006, and at the 3rd site we are trying to connect to a Catalyst 4503.
    I have attached configurations from both the main office and the 3rd remote site, as well as a basic diagram of how everything physically connects. These configurations are not working - we feel that it is a gateway-type problem - but we have reached no great solutions. We have tried posting to a different forum, but so far have been unable to find a solution that helps.
    The problem I am having is on the remote side. I can reach the remote catalyst from the main site, but I cannot reach the devices on the other side of the remote catalyst; however, the remote catalyst can see devices on its side as well as devices at the main site.
    We have also tried trunking the ports on both sides and using encapsulation dot1q, but when we do this the 3rd site is able to pick up a DHCP address from the main office, and we do not feel that is correct. But it works - is this not creating one large broadcast domain?
    If you have any questions or need further configuration data please let me know.
    The previous connection was a T1 through a 2620, but this is not compatible with metro-e, so we are trying to connect directly through the catalysts.
    The other two connection points will be connecting through Cisco routers that are compatible with metro-e, so I don't think I'll have problems with those sites.
    Any and all help is greatly welcome, as this is our 1st metro-e project and we want to make sure we are following best practices for this type of integration.
    Thank you in advance for your help.
    Jeff

    Jeff, from your config it seems your main site and remote site are not adjacent in EIGRP.
    Try adding a network statement for the 171.0 link so a neighbourship forms between the main and remote sites and the L3 routing works; a rough sketch follows below.
    Once the adjacency is up you should be able to reach the remote-site hosts.
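    Something along these lines on both catalysts, purely as a sketch (the EIGRP AS number 100 is a placeholder that must match your existing process, and 171.x.x.0 0.0.0.255 stands in for the real metro-e link subnet and wildcard):
    router eigrp 100
     ! advertise the metro-e link subnet so the main and remote catalysts become neighbours
     network 171.x.x.0 0.0.0.255
    With the adjacency formed, the LAN subnets each side already advertises are exchanged and the hosts behind the remote catalyst should become reachable.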
    HTH-Cheers,
    Swaroop
