MCS7825I5-K9-UCC2 redundant server

Hi, we have a client using:
a) an MCS7816I5-K9-CMD2 as the Call Manager server; under the "CCM Node licenses" feature they have one used license and one remaining license.
b) an MCS7825I5-K9-UCC2 as the Unity server.
Our client is asking for a backup proposal, and we have a couple of questions.
1) Where can we verify whether the current MCS7825I5-K9-UCC2 server supports a second Unity node?
2) The MCS7825I5-K9-UCC2 has an EOL/EOS report, and its recommended substitute is the MCS-7825-I5-IPC1, but this server does not come with pre-installed Unity software. So we need to verify which software components are required to implement a Cisco Unity backup server for this client.
Thanks a lot for your help.
Enrique Villasana

Install and Upgrade Guides
http://www.cisco.com/en/US/products/sw/voicesw/ps2237/prod_installation_guides_list.html
Review HW support via the above link.
You need the exact same release, along with any ESs you might have installed, on the failover server.
All Unity info is here:
http://www.cisco.com/en/US/products/sw/voicesw/ps2237/tsd_products_support_series_home.html
HTH
java
if this helps, please rate
www.cisco.com/go/pdihelpdesk

Similar Messages

  • Trying to set up a redundant server for Server 2012

    I have some questions regarding creating a backup for an all-in-one server. The server has Active Directory, DNS, the file server and everything on it. I'm trying to create a redundancy setup in case of failure. So I was reading about failover clusters,
    and this seems to be what I need to do. I'm just a little bit confused. What is the difference between using Hyper-V and using clusters?
    When I create a cluster and add servers to it, what exactly happens? For instance, in the simplest case I have the main server with all the data. Now I bring in another machine with nothing on it. I install Windows Server 2012 on it and add it and the other computer
    to a cluster. Does the data from the main server automatically get copied onto all the servers that are part of this cluster? At this point, if one server fails will the other one take over?
    Any help is greatly appreciated.
    -FlipFlop

    You can think of a Hyper-V cluster vs. a generic cluster like this: Hyper-V is a beehive of independent clusters :)
    You need to install Windows Server 2012 R2, enable the Hyper-V role, virtualize all workloads (keep all tasks inside separate sets of VMs, don't run anything inside the parent partition and don't use multiple roles either) and configure a guest VM cluster (make sure
    you use shared VHDX for that).
    Data is not copied; you use shared storage, and control gets passed to another VM running on another physical host. That would give you the smallest amount of downtime (if any). By contrast, plain VM HA *would* have some downtime (the VM reboots on another
    physical host).
    Some links for your interest (guest VM cluster, file server cluster, shared VHDX, shared storage for Hyper-V, etc.):
    Deploy an Active Directory-Detached Cluster
    http://technet.microsoft.com/en-us/library/dn265970.aspx
    Failover Cluster Step-by-Step Guide: Configuring Accounts in Active Directory
    http://technet.microsoft.com/en-us/library/cc731002(v=ws.10).aspx
    Cluster Nodes as a DCs
    http://support.microsoft.com/kb/281662
    How To Setup Redundant DNS Servers 2012
    http://www.techstaty.com/how-to-setup-redundant-dns-servers-2012/
    Working with File Shares in Windows Server 2008 (R2) Failover Clusters
    http://blogs.technet.com/b/askcore/archive/2010/08/19/working-with-file-shares-in-windows-server-2008-r2-failover-clusters.aspx
    Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
    http://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
    Using Guest Clustering for High Availability
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Deploy a Guest Cluster Using a Shared Virtual Hard Disk
    http://technet.microsoft.com/en-us/library/dn265980.aspx
    How to Configure Storage on a Hyper-V Host Cluster in VMM
    http://technet.microsoft.com/en-us/library/gg610692.aspx
    Hope this helped a bit :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • CVP Redundant server config

    Hello everyone,
    I have a project for UCCE.
    In the list I see there are 2 CVP-90-SERVER-SW, 40 CVP-9X-PTS and 40 CVP-9X-REDPT,
    so I know there is redundancy,
    but I don't know how to install the servers...
    Should I install three servers: one for the console, one primary Call Server and VXML Server, and one secondary Call Server and VXML Server?
    And then, on the console, configure the primary and secondary Call Servers,
    next apply the licenses, using the primary server's information for CVP-9X-PTS and the secondary server's information for CVP-9X-REDPT,
    and then upload the licenses from the console?

    EDIT:
    I think a good way for your setup, if I'm understanding your task correctly, is to have two CVP servers and one OAMP server. The CVP servers will run the Call Server, VXML Server, and Web Services Manager. Make sure you size the servers correctly, because you're going to have three Tomcat processes running and it will crush your server if you don't size it adequately (RAM).

  • BPEL server redundancy

    Hi!
    in our production environment we currently have one BPEL PM server 10.1.2, and we'd like to have one more, redundant, server for this instance, which should take over in case the first server crashes...
    Could you point me to the right documentation about this? Thanks a lot,
    Tomas

    Looks like the server is unable to connect to the dehydration database.
    A standard installation would have this data source setup as part of the installation.
    "jdbc/BPELServerDataSourceWorkflow"
    This again uses the connection pool "BPELPM_CONNECTION_POOL"
    And the name of the data source is "BPELServerDataSourceWorkflow"
    There are a few checks that you can perform.
    1. Check whether the dehydration database is up.
    If not, it needs to be brought up first (a quick reachability sketch follows at the end of this reply).
    2. Go to EM and use {test connection} for "BPELPM_CONNECTION_POOL".
    If it comes back with an error, check whether the dehydration database credentials were changed after installation.
    There are a few other data sources that share the same connection pool. Check whether these are working too.
    Every Little Helps
    Kalidass Mookkaiah
    http://oraclebpelindepth.blogspot.com/
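    Since check 1 above boils down to "is the dehydration database reachable at all?", here is a minimal sketch of that test in Python. The listener host is a hypothetical placeholder and the port is just the Oracle default (substitute your own values); it only verifies network reachability, not credentials or the connection pool.
    # Reachability check for the dehydration database listener (check 1 above).
    # Host and port below are placeholders, not values from this thread.
    import socket

    DB_HOST = "oradb.example.com"   # hypothetical dehydration DB host
    DB_PORT = 1521                  # default Oracle listener port

    try:
        with socket.create_connection((DB_HOST, DB_PORT), timeout=5):
            print("Listener reachable - move on to check 2 (credentials / connection pool).")
    except OSError as err:
        print(f"Cannot reach {DB_HOST}:{DB_PORT} - the database or listener may be down ({err})")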

  • Redundant Hub and CAS roles for exchange server

    Hello,
    Currently we have 2 mailbox servers, 1 DAG, and 1 Hub/CAS server (Hub and CAS roles on the same server).
    I am looking to have a backup server in case the Hub/CAS server goes down, or to have a redundant Hub/CAS server so that if one goes down the other takes over and users automatically use the second one.
    How can I achieve that?
    Your help is appreciated.

    Hello,
    If you use Exchange 2010, I recommend you deploy multiple CAS servers as members of a CAS array.
    For your requirement, you need to deploy NLB for the CAS servers; failover will then occur automatically.
    Here is an article for your reference.
    http://technet.microsoft.com/en-us/library/ff625247(v=exchg.141).aspx
    Cara Chen
    TechNet Community Support

  • Mac Mini Server on Yosemite showing x2 Servers with one being redundant

    Hi
    I have a Mac Mini OS X Server that worked fine with the setup/name of 'server' on Mavericks; however, since upgrading to Yosemite the server has become Server (2) (or server-2), leaving a redundant 'server' (server) in the shared tab. I can't access 'server' on the Mac Mini server or via the network (even with Connect As) and cannot rename 'server-2' to server. I have re-installed the OS with no change.
    Does anyone have any ideas what is going on here, please, and also how I could reset 'server-2' to show correctly as 'server' and remove the erroneous one?
    Thanks
    Glennyboy

    The real answer to this depends on two things:
    1) whether you're just trying to do this within your lab, or you want external systems to be able to get to your services (e.g. students be able to access these resources from outside the school network)
    2) whether each device on your school's network has a real-world IP address, or whether the network uses NAT.
    If the answer to 1 is that this is only within the lab then there isn't likely to be any issue - get an IP address for your server from the network admins (which is why #2 is relevant) and you're set. The network guys would only have other involvement if you wanted to get external access working, in which case you've got to talk to them about opening holes in firewalls, etc.
    If this is purely internal (or, at least, doesn't need access from outside the school network) then there isn't likely to be any problem. You could even get away without DNS, although it would be nice if you could get a sensible hostname from the DNS administrators (e.g. server.yourlab.yourschool.edu) but even without that there shouldn't be anything preventing you from running a service and having internal clients connect to it (other than policy issues, of course…). For the most part the admins wouldn't even know you're running a server.
    Getting a static, non-conflicting IP address that is valid for the school's network is going to be key, though.

  • Starting managed server using node manager

    When I try to start the managed server using the node manager I get the following error:
    ERROR: transport error 202: bind failed: Address already in use
    ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
    JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [../../../src/share/back/debugInit.c:690]
    FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
    Abort - core dumped
    <Sep 21, 2010 3:58:07 PM> <FINEST> <NodeManager> <Waiting for the process to die: 22708>
    <Sep 21, 2010 3:58:07 PM> <INFO> <NodeManager> <Server failed during startup so will not be restarted>
    <Sep 21, 2010 3:58:07 PM> <FINEST> <NodeManager> <runMonitor returned, setting finished=true and notifying waiters>
    The managed server's .lok file has been deleted, and the port is not in use. With similar node manager settings the managed server on the redundant server starts fine. (A quick port-check sketch is included at the end of this thread.)

    Hi,
    I am getting the same issue and am not able to start soa_server1. Can you please tell me the solution, if you were able to resolve the issue?
    starting weblogic with Java version:
    FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
    [ERROR] aborted
    JRockit aborted: Unknown error (50)
    ERROR: transport error 202: bind failed: Address already in use
    ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
    JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [../../../src/share/back/debugInit.c:690]
    Starting WLS with line:
    D:\Oracle\MIDDLE~1\JROCKI~1.0-6\bin\java -jrockit -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=8453,server=y,suspend=n -Djava.compiler=NONE -Xms512m -Xmx512m -Dweblogic.Name=soa_server1 -Djava.security.policy=D:\Oracle\MIDDLE~1\WLSERV~1.3\server\lib\weblogic.policy -Dweblogic.system.BootIdentityFile=D:\Oracle\Middleware\user_projects\domains\domain2\servers\soa_server1\data\nodemanager\boot.properties -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.ignoreHostnameVerification=false -Dweblogic.ReverseDNSAllowed=false -Xverify:none -da:org.apache.xmlbeans... -ea -da:com.bea... -da:javelin... -da:weblogic... -ea:com.bea.wli... -ea:com.bea.broker... -ea:com.bea.sbconsole... -Dplatform.home=D:\Oracle\MIDDLE~1\WLSERV~1.3 -Dwls.home=D:\Oracle\MIDDLE~1\WLSERV~1.3\server -Dweblogic.home=D:\Oracle\MIDDLE~1\WLSERV~1.3\server -Ddomain.home=D:\Oracle\MIDDLE~1\USER_P~1\domains\domain2 -Dcommon.components.home=D:\Oracle\MIDDLE~1\ORACLE~1 -Djrf.version=11.1.1 -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger -Djrockit.optfile=D:\Oracle\MIDDLE~1\ORACLE~1\modules\oracle.jrf_11.1.1\jrocket_optfile.txt -Doracle.domain.config.dir=D:\Oracle\MIDDLE~1\USER_P~1\domains\domain2\config\FMWCON~1 -Doracle.server.config.dir=D:\Oracle\MIDDLE~1\USER_P~1\domains\domain2\config\FMWCON~1\servers\soa_server1 -Doracle.security.jps.config=D:\Oracle\MIDDLE~1\USER_P~1\domains\domain2\config\fmwconfig\jps-config.xml -Djava.protocol.handler.pkgs=oracle.mds.net.protocol -Digf.arisidbeans.carmlloc=D:\Oracle\MIDDLE~1\USER_P~1\domains\domain2\config\FMWCON~1\carml -Digf.arisidstack.home=D:\Oracle\MIDDLE~1\USER_P~1\domains\domain2\config\FMWCON~1\arisidprovider -Dweblogic.alternateTypesDirectory=D:\Oracle\MIDDLE~1\ORACLE~1\modules\oracle.ossoiap_11.1.1,D:\Oracle\MIDDLE~1\ORACLE~1\modules\oracle.oamprovider_11.1.1 -Dweblogic.jdbc.remoteEnabled=false -Doracle.security.jps.policy.migration.validate.principal=false -da:org.apache.xmlbeans... -Dsoa.archives.dir=D:\Oracle\Middleware\Oracle_SOA1\soa -Dsoa.oracle.home=D:\Oracle\Middleware\Oracle_SOA1 -Dsoa.instance.home=D:\Oracle\MIDDLE~1\USER_P~1\domains\domain2 -Dtangosol.coherence.clusteraddress=227.7.7.9 -Dtangosol.coherence.clusterport=9778 -Dtangosol.coherence.log=jdk -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Djava.protocol.handler.pkgs="oracle.mds.net.protocol|oracle.fabric.common.classloaderurl.handler|oracle.fabric.common.uddiurl.handler|oracle.bpm.io.fs.protocol" -Dweblogic.transaction.blocking.commit=true -Dweblogic.transaction.blocking.rollback=true -Djavax.net.ssl.trustStore=D:\Oracle\MIDDLE~1\WLSERV~1.3\server\lib\DemoTrust.jks -Dem.oracle.home=D:\Oracle\Middleware\oracle_common -Djava.awt.headless=true -Dbam.oracle.home=D:\Oracle\Middleware\Oracle_SOA1 -Dums.oracle.home=D:\Oracle\Middleware\Oracle_SOA1 -Dweblogic.management.discover=false -Dweblogic.management.server=http://192.168.1.102:7001 -Dwlw.iterativeDev= -Dwlw.testConsole= -Dwlw.logErrorsToConsole= -Dweblogic.ext.dirs=D:\Oracle\MIDDLE~1\patch_wls1033\profiles\default\sysext_manifest_classpath;D:\Oracle\MIDDLE~1\patch_oepe1033\profiles\default\sysext_manifest_classpath;D:\Oracle\MIDDLE~1\patch_ocp353\profiles\default\sysext_manifest_classpath weblogic.Server
    [WARN ] Use of -Djrockit.optfile is deprecated and discouraged.
    FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
    [ERROR] aborted
    JRockit aborted: Unknown error (50)
    ERROR: transport error 202: bind failed: Address already in use
    ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
    JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [../../../src/share/back/debugInit.c:690]
    <Oct 4, 2010 6:25:39 AM> <FINEST> <NodeManager> <Waiting for the process to die: 700>
    <Oct 4, 2010 6:25:39 AM> <INFO> <NodeManager> <Server failed during startup so will not be restarted>
    <Oct 4, 2010 6:25:39 AM> <FINEST> <NodeManager> <runMonitor returned, setting finished=true and notifying waiters>
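    For what it's worth, the "transport error 202: bind failed: Address already in use" in both logs is the JDWP debug agent (the -Xrunjdwp:transport=dt_socket,address=8453 option in the startup line) failing to bind its socket, typically because another process was started with the same debug address. Below is a minimal Python sketch to check whether that port is already taken on the host before starting the server; the 8453 default is taken from the startup line above, everything else is an assumption.
    # Check whether the JDWP debug port the managed server wants is already bound.
    # 8453 comes from the -Xrunjdwp address in the startup line; pass another port as an argument.
    import socket
    import sys

    port = int(sys.argv[1]) if len(sys.argv) > 1 else 8453

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        try:
            sock.bind(("0.0.0.0", port))
        except OSError:
            print(f"Port {port} is already in use - another process holds the debug address.")
        else:
            print(f"Port {port} is currently free on this host.")
    If the port is in use, either stop the process that holds it or give each server its own debug address (or drop the -Xdebug/-Xrunjdwp options where debugging isn't needed).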

  • How can we configure a backup Data Services server?

    Hello,
    "Data services" Production Server :
    OS: Windows 2003 Server SP2
    Data services: 11.3
    Repository: oracle10gr2
    We want to prepare a backup server identical to that of production. The Backup Server will be used in case of problems on the production server.
    Can you help us with a procedure to build a backup server from the production server?
    thanks

    Hi,
    Short answer: there is no automatic way to do that.
    The amount of work to create this redundancy will depend heavily on your platform and how the Data Services components are deployed, such as the Job Server, repository databases, access servers, etc.
    If I understood your aim, the intention is not to create a load-balanced environment, but to 'clone' a deployment and leave it on standby in case of issues with the default server. Is that right?
    In this case, you can still take advantage of the server group functionality to create a redundant Job Server, for instance. This configuration helps you create the 'backup server' you intend to and, at the same time, leverage the load-balancing resources available in DS. You will need to replicate the access server, adapter, and ODBC driver configurations (if applicable) on the other server.
    Just as important as the Job Server, database redundancy for the Local and Central repositories in your environment should be created, so you can replicate the repositories on another machine using your RDBMS's native resources instead of backing them up constantly. I'm not sure, though, whether replicating the repository databases could affect job performance; it's recommended to double-check that.
    As for the licensing issues regarding this redundant server, I suggest you talk with your Account Representative.
    Hope I'm not flogging a dead horse here.
    Regards,
    Pedro
    PS. You should consider upgrading your software. You can ask your Account Representative for that as well.

  • Export User-Database between ACS-Server

    Hi everyone ,
    an ACS 2.3 is running under Unix with a database of about 3000 users. The job is to migrate the user database to a new ACS server under Windows.
    On the Unix version 2.3 there is no way to export the database externally.
    The only way, I hope, is to mirror the old and the new server as redundant servers; once the database is mirrored on both servers, it is ready for export.
    Is this correct?
    Is there another way?
    Thanks for your input.
    Ralf

    The migration should go to version 3.1 or 3.2 .
    Ralf

  • Upgrade and Redundancy

    Hi Expert,
    We have a 3-tier SAP system (DEV, QAS and PRD)
    running in a Windows 2003 and SQL 2005 environment.
    We are upgrading our DEV & QAS environments to Windows 2012 and SQL 2012,
    and we now plan to upgrade our production server to the same (Windows and SQL 2012).
    We also want to set up a redundant server for our production server (active/passive or active/active) in anticipation of any database corruption, because we faced this problem at the start of this year.
    What is the correct solution we can implement?
    What material is available to help me implement this solution?
    What type of replication can we do?
    How can we make two SAP systems work on the same DB at the same time?
    Will the other SAP instance on the new server take the same name or not?
    In summary, I want to find a solution with no downtime if the database is corrupted suddenly.
    BR
    Said Shepl

    Hi Said,
    Due to an electricity problem, our database on the SAN was corrupted.
    Because of this, we want a solution with a completely separate server and a separate DB, with minimum downtime.
    You can create a manual DR setup co-located with the production system.
    Here you manually sync up the DR system with data and logs. This can be used in case your primary goes down. The DR system will have the same hostname (a virtual hostname), so users need not change any entry in the SAP Logon pad while using the DR system.
    You may also look at an alternative solution based on SAN replication; check with the respective hardware vendor for that.
    Hope this helps
    Regards,
    Deepak Kori

  • How Many Servers Are Needed?

    Hi,
    Currently we have 2 servers with installed application as follows:
    Server 1:
    - Hyperion Essbase
    - APS (for redundancy)
    - EAS (for redundancy)
    Server 2:
    - OAS (Oracle Application Server)
    - Hyperion Shared Services
    - Hyperion Planning
    - Financial Report
    - Web Analysis
    - EAS
    - APS
    - Workspace
    We plan to upgrade from version 9.3.1 to version 11.1.1.x
    And will add HFM (Hyperion Financial Management) into the solution.
    We have around 100+ users for Hyperion Planning and 50+ for HFM.
    Questions:
    How many servers do we need for our solution, and what should we install on each server?
    Here is what we think:
    Server 1:
    - Hyperion Essbase
    - APS (for redundancy)
    - EAS (for redundancy)
    Server 2:
    - OAS (Oracle Application Server)
    - Hyperion Shared Services
    - Hyperion Planning
    - EAS
    - APS
    - Workspace
    Server 3:
    - Financial Report
    - Web Analysis
    Server 4:
    - HFM
    Any Feedback is appreciated.
    Thanks,
    Lian

    Tom,
    You could use the SAP site as a benchmark reference for SAPS numbers.
    It is here: http://www.sap.com/benchmark

  • License Issue on Back Up Ciscoworks Server

    We are trying to set up a backup server for our primary CiscoWorks server to get redundancy. We set up the new server and restored the backup onto the redundant server, and it is working fine. However, when we click on any of the installed modules, say RME, DFM, etc., we get a message that we have a 90-day trial license and should license the product. However, when we go to Licensing we see that all the modules are licensed and the status shows as Never Expired.
    We are running CiscoWorks LMS 2.6 with the CM, DFM, RME, IPM, etc. modules installed.
    I would like to know whether we need a separate license for the backup server, or whether something went wrong while restoring the database.
    Regards

    In continuation of the post, I checked for the license file on my backup server and could not find any license file; all modules were showing as the 90-day eval.
    When we were installing the backup server it prompted us for the license details, and we entered the same PAK and PIN details that we had used on the original server.
    I don't understand why the valid license file was not created.
    Regards

  • Multiplexing and Redundancy

    I have a Lookout 6.1 client/server application that multiplexes data for 150 wastewater lift stations on to a
    single panel from a DataTable object. I have a redundant server that starts an identical standby server process
    in the event the primary server fails. There are 10 duplicate remote client processes that also use multiplexed
    DDE enabled DataTable objects.
    In a failover scenario, the remote clients' TCP/IP connections all work properly with dynamic symbolic links;
    But, none of the clients' DataTable objects will connect to the standby computer because the client DataTables'
    DDE Service parameter does not resolve a symbolic link and is static to the primary computer, which is offline
    during failover.
    Although this client program structure worked in Lookout prior to 4.0 using NetDDE redundancy, the new
    methodology makes multiplexing DDE enabled DataTables in client processes impossible for a client/server
    architecture needing redundancy, because the client DataTables' DDE service parameter remains static in failover.
    Is there a way to redirect the clients' Datatable DDE service parameter to the standby computer in Lookout 6.1
    another way, so I can still use the DDE enabled DataTable multiplexing method on the existing clients in a
    failover?
    DataTable configuration example:
    Service: \\PRIMARY\NETDDE$  <**Static, will not resolve a symbolic link to \\STANDBY\NETDDE$
    Topic: SERVERPROCESS$
    Item: TestDataTable
    Terry Parks, Engineering Analyst
    Terrebonne Parish Consolidated Government(TPCG)
    Utilities - Pollution Control

    The following statement is taken directly from Chapter 14, 'Setting Up Redundancy', of the
    Lookout Developer's Manual (p. 14-1).
    "If your client process uses NetDDE in shifting access from one computer to another, it
    will work in the current version of Lookout."
    This statement, coupled with the notion of backward compatibility, brings me to the
    question; Could this be considered a bug or anomaly in Lookout eligible for a fix?
    Terry

  • SSL VPN on C2821 Radius auth issues

    I've been looking through the discussions and I can't seem to nail this one down. I'm implementing SSL VPN on a 2821 to do SMTP only. I need it to authenticate against the RADIUS server, but it only asks for the local router login passwords; it will not authenticate against RADIUS. I've created a separate aaa auth group to no avail and tried a few different tweaks. I'm throwing science at the wall and seeing what sticks at this point.
    I've made a new group server for RADIUS to test it, not working. I've tried variations in the domain, not working. Can't use SDM, nor do I want to.
    This is what the config looks like
    Building configuration...
    Current configuration : 24735 bytes
    ! Last configuration change at 08:19:39 Arizona Tue Aug 28 2012 by dci
    version 15.1
    service timestamps debug datetime msec
    service timestamps log datetime msec
    no service password-encryption
    hostname N****
    aaa new-model
    aaa group server radius IAS_AUTH
    server-private 10.12.1.7 auth-port 1645 acct-port 1646 key $*****
    aaa group server radius Global ***made for testing. Redundant
    server-private 10.12.1.7 auth-port 1645 acct-port 1646 key $*****
    aaa authentication login default local
    aaa authentication login sdm_vpn_xauth_ml_1 group IAS_AUTH
    aaa authentication login sdm_vpn_xauth_ml_2 local
    aaa authentication login SSL_Global group Global ** created for SSL VPN redundant, but did for testing
    aaa authorization network sdm_vpn_group_ml_1 local
    aaa authorization network sdm_vpn_group_ml_2 local
    aaa session-id common
    clock timezone Arizona -7
    dot11 syslog
    ip source-route
    ip cef
    password encryption aes
    crypto pki trustpoint TP-self-signed-2464190257
    enrollment selfsigned
    subject-name cn=IOS-Self-Signed-Certificate-2464190257
    revocation-check none
    rsakeypair TP-self-signed-2464190257
    crypto pki certificate chain TP-self-signed-2464190257
    certificate self-signed 01
    REMOVED
    interface GigabitEthernet0/0
    INTERFACES REMOVED
    ip local pool SDM_POOL_2 10.12.252.1 10.12.252.254
    ip forward-protocol nd
    ip http server
    ip http authentication local
    ip http secure-server
    ip http timeout-policy idle 600 life 86400 requests 10000
    ip flow-cache timeout inactive 10
    ip flow-cache timeout active 5
    ip flow-export source GigabitEthernet0/0
    ip flow-export version 5 peer-as
    ip flow-export destination 10.12.1.17 2048
    ROUTES REMOVED
    ACLS REMOVED SSL IS ALLOWED
    route-map STAT_NAT permit 10
    match ip address 109
    route-map DYN_NAT permit 10
    match ip address 108
    snmp-server community $DCI$ RO
    control-plane
    banner login ^C
    line con 0
    password 7 01100F175804
    login authentication local
    line aux 0
    line vty 0 4
    privilege level 15
    transport input telnet ssh
    line vty 5 15
    privilege level 15
    transport input telnet ssh
    scheduler allocate 20000 1000
    webvpn gateway gateway_1
    ip address **outside ip*** port 443
    http-redirect port 80
    ssl trustpoint TP-self-signed-2464190257
    no inservice
    webvpn context webvpn
    secondary-color white
    title-color #CCCC66
    text-color black
    ssl authenticate verify all
    port-forward "portforward_list_1"
       local-port 3000 remote-server "10.12.1.23" remote-port 25 description "Email"
    policy group policy_1
       port-forward "portforward_list_1"
    default-group-policy policy_1
    aaa authentication list SSL_Global
    aaa authentication domain @n****
    gateway gateway_1 domain N****
    max-users 10
    no inservice
    end
    Can't change "no inservice" to "inservice" and I can't figure out why. Any help with this?

    OK, upgraded IOS to the most current stable version and I'm now able to do "inservice" on the context and gateway. I'm trying to go through the SDM route, but Java crashes with ValidatorException errors. I'm going to try updating the SDM from the original version to the 2008 version, since all the little "fixes" for this do not work. Any ideas on that?

  • I have 2 POP accounts and 2 IMAP accounts in Mail.....

    I have 2 POP accounts and 2 IMAP accounts in Mail 6.2 (1499); how do I ensure they all leave messages on the server?
    I was trying to check with my ISP why they were not sending me notice of my monthly bill. They said they were, and gave me the dates when the missing ones were sent. They went to check my inbox so that they could re-send the offending messages and said there were no messages there! Hence my question: how do I make sure that messages are left on the server? (See the sketch at the end of this thread.) I have looked in Mail > Preferences, and certainly in the POP accounts "Remove copy from server" is unchecked; on the IMAP accounts there doesn't appear to be a means of leaving mail on the server, which, in essence, shouldn't worry me because they are different ISP addresses (one is iCloud and the other is BT), and only the POP account's address is the one the ISP is sending messages to.
    Can anyone point me in the right direction?
    Thanks
    altv

    Csound1
    I have, indeed, checked with my ISP and they do accept imap accounts. So I followed religiously their instructions, published in their setting e mail information for Mountain Lion website. The instructions were quite simple, and choosing imap was as easy as setting POP mail. however, the net result was that I could never connect with the servers to do anything with the mail. I know they are set up correctly, but when it tried to send mail, after a while an error message came up saying that it couldn't connect and gave me the option of dealing with a "connection doctor" or some such system. This I clicked on and noted that there were an awful lot of servers that it was trying to connect to some of which had been removed when I removed the setup addresses. The ones I wanted were, in fact, showing as connected!! It was obviously checking on anything that had been used in setting a server name from way back when and suffering with a severe bout of indigestion!
    I think what really needs to happen is to get the iMac/Mountain Lion to regurgitate all that rubbish which is no longer a requirement, since I am only using three addresses, one from iCloud and two from my ISP. I do not need the clogging up from redundant server names, but it seems that the Mac hangs on to everything just in case I may want to go back to them rather than set them up afresh!! I am not feeling too confident in trying to set the address that originally showed the problem, my BT one, and since this morning it is almost working 100%, I'll leave well alone and see if it settles down. (One step forward and nineteen backwards!!)
    Cheers
    altv
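    As a side note on the original question: with POP, messages are only removed from the server when the client sends a DELE command, so simply listing or downloading them leaves them in place. Here is a minimal sketch using Python's standard poplib that inspects a mailbox without deleting anything; the hostname, account and password are hypothetical placeholders, not values from this thread.
    # Inspect a POP3 mailbox without deleting anything (no DELE is ever sent).
    # Hostname, user and password below are placeholders - use your ISP's values.
    import poplib

    pop = poplib.POP3_SSL("mail.example.com")   # hypothetical POP server
    pop.user("you@example.com")                 # hypothetical account
    pop.pass_("app-password-here")              # hypothetical password

    count, total_bytes = pop.stat()
    print(f"{count} messages ({total_bytes} bytes) are currently left on the server")

    # top() only peeks at headers; like retr(), it never removes the message.
    for msg_num in range(1, count + 1):
        resp, header_lines, octets = pop.top(msg_num, 0)
        print(f"message {msg_num}: {octets} bytes")

    pop.quit()   # since dele() was never called, everything stays on the server
    IMAP accounts don't need an equivalent setting, because IMAP mail stays on the server until you explicitly delete it.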
