UCCX Best Practice - UCM Agent Line Configuration documentation

With UCCX, I have always abided by a few rules when it comes to configuring the agent's line in CUCM, at least to keep the solution TAC-supported. For example:
1. The agent extension cannot be a shared line
2. The agent extension cannot be part of any CUCM hunt group or call pickup group
3. Call waiting disabled (Maximum Number of Calls / Busy Trigger set to 2/1)
4. The agent extension should not take inbound calls directly
5. The agent extension should not be set to Call Forward No Answer (CFNA)
6. etc.
I had someone ask me to back this up with some sort of documentation. I reviewed the UCCX 7.x SRND and could not find anything that explicitly covers CUCM configuration.
Does anyone know if this type of information is documented?
Thanks in advance,
Shane

Shane,
Look at the release notes for your version of UCCX. They typically have a
section called "Unsupported Features in Unified CM". There is also a
section on "Unsupported and Supported Actions" and general "Unsupported
Configurations in Cisco Unified CCX".
Release notes URL:
http://www.cisco.com/en/US/products/sw/custcosw/ps1846/prod_release_notes_list.html
You won't find the data in the SRND for whatever reason.
HTH.
Regards,
Bill
Please remember to rate helpful posts.

Similar Messages

  • What are the best practices for CQ5.5 configuration?

    Hello,
    What are the best practices for CQ5.5 configuration that support high availability?
    Recently I had an issue on the server: after I uploaded 2 GB of DAM assets, the server was no longer able to start, and I kept getting errors regarding Tar persistence.
    So I kindly request you to let me know the best Apache Felix configuration.
    Thanks in advance...
    Regards,
    Satish

    Hi,
    A DAM upload, regardless of the size of the assets, should never result in TarPM problems, unless you run into an OOM, which leaves the repository in an unclean state. So if you regularly do DAM uploads of that size, you should check the garbage collection logs and adjust the heap size if necessary. You might also want to limit the number of concurrent running workflows to keep memory consumption a bit lower.
    To your question: HA in the traditional sense you cannot achieve with a single box, even with optimized settings. For an authoring use case you would need clustering.
    Jörg
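    As a rough illustration of Jörg's advice, here is a sketch of raising the heap and enabling GC logging in a CQ quickstart start script. The variable name, values, and jar name are assumptions for illustration, not CQ defaults:
    # illustrative JVM options: larger heap, plus GC logging for inspecting collections
    CQ_JVM_OPTS="-Xms2048m -Xmx4096m -verbose:gc -XX:+PrintGCDetails -Xloggc:logs/gc.log"
    java $CQ_JVM_OPTS -jar cq-quickstart.jar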

  • Best practice for repositories during configuration - one or several DBs?

    Establishing my 11.1.2 dev box; we are on 9.3.1 in production. Reading through the documentation, it states that one database is the repository for Shared Services, Business Rules, Essbase, etc.
    Since I came to this new job with 9.3.1 already installed, I am not sure if this verbiage has been the standard since version 9.3 or is something new for 11.1.x.
    So... what is the best practice? Is it better to lump all foundation-type activity into one DB (I realize Planning apps have their own DB), or is it better to have a DB for BI+, a DB for Shared Services, etc.?
    JTS

    Here is what Oracle have to say
    "For ease of deployment and simplicity, for a new installation, you can use one database for all products, which is the default when you configure all products at the same time. To use a different database for each product, perform the “Configure Database” task separately for each product. In some cases you might want to configure separate databases for products. Consider performance, roll-back procedures for a single application or product, and disaster recovery plans."
    I would say in a development environment there is no harm in using one DB/schema for all products; remember that some products require separate databases/schemas, e.g. each Planning application.
    In a production environment I tend to promote keeping them separate, as it helps with troubleshooting and recovery.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Best Practice - WAP connecting switchport configuration.

    Is there a best practice for deploying the WAPs in a WAP/WLC infrastructure?  Should the connecting switchport be an access port or a trunk port?  I've seen this implemented in both fashions and wasn't sure if one was a better choice than the other.  What is the difference?
    My other question is regarding applying additional switchport configurations.  Is there anything wrong with applying spanning-tree portfast, spanning-tree bpduguard, or switchport port-security?

    Hi Ken,
    Access port all the time, everywhere, UNLESS the AP is configured for H-REAP/FlexConnect, then trunk. Or if you deploy an AP in monitor mode, then trunk.
    QoS: if it's an access port, trust DSCP. If you trunk, trust CoS.
    No, you are fine. Portfast is highly recommended.
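    For reference, a minimal sketch of a local-mode AP access port on a Catalyst switch. The interface number, VLAN, and description are assumptions, and the "mls qos trust" form varies by platform:
    interface GigabitEthernet1/0/10
     description Local-mode AP
     switchport mode access
     switchport access vlan 10
     mls qos trust dscp
     spanning-tree portfast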
    "Satisfaction does not come from knowing the solution, it comes from knowing why." - Rosalind Franklin
    ‎"I'm in a serious relationship with my Wi-Fi. You could say we have a connection."

  • Best Practices for Accessing Configuration Data Modelled as an XML File in OSB

    Hi,
    I referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is described in:
    Re: OSB: What is best practice for reading configuration information
    Another could be uploading the XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML).
    I need expert answers for the following.
    1] I have an .xsd file representing the configuration data. The structure of the XSD is:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyValue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property value will change according to the environment.
    For Dev:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyValue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyValue_Stage</Config>
    </FrameworkConfig>
    3] Let's say I create the following folder structure to store the configuration file specific to each dev/stage/prod instance:
    OSB Project Folder
    |
    |---Dev
    |    |--Dev_Config_file.xml
    |
    |---Stage
    |    |--Stage_Config_file.xml
    |
    |---Prod
    |    |--Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model a value that specifies the current server type (Dev/Stage/Prod) on which the OSB message flow is running; say, some construct that acts as a global configuration and is accessible inside the OSB message flow. If the global variable's value is Dev, I will load the XML config file under the Dev directory at runtime, containing the key-value pairs for the Dev environment.
    6] This thread, Re: OSB: What is best practice for reading configuration information, suggests designing a web application that serves the XML file over HTTP and reading the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependencies? I read about the configuration-file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    I hope I am clear. I really appreciate your comments and suggestions.
    Sushil
    Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    e.g. http://osb_host/service/prod/service1 ==> prod and http://osb_host/service/stage/service1 ==> stage; then I think $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you will be able to extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822
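    As a rough illustration of that extraction in an XQuery expression, assuming the /service/<env>/<name> URI layout from the example above (the token index depends on that layout):
    (: $inbound/ctx:transport/ctx:uri returns e.g. /service/prod/service1;
       the third token after splitting on "/" is the environment name :)
    fn:tokenize($inbound/ctx:transport/ctx:uri/text(), '/')[3]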

  • Best Practices Pseudo-Time Clock Configuration

    Friends,
    I am looking for a manual of best practices for configuring the GET VPN pseudotime clock.

    Hi,
    GET VPN uses time-based anti-replay (TBAR), which is based on a pseudotime clock that is maintained on the key server (KS). An advantage of using pseudotime for TBAR is that there is no need to synchronize time on all the GET VPN devices using NTP.
    http://www.cisco.com/c/dam/en/us/products/collateral/security/group-encrypted-transport-vpn/GETVPN_DIG_version_1_0_External.pdf
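    For illustration, a minimal sketch of enabling TBAR on the key server; the group name, identity number, and window size here are assumptions, not recommended values:
    crypto gdoi group GETVPN-GROUP
     identity number 1234
     server local
      replay time window-size 5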
    Regards,
    Rahul chhabra
    Network Engineer
    Spooster IT Services

  • 7k vPC best practice with multiple line cards?

    I have a pair of 7Ks, each with a single line card, with a 2-port vPC linked to a pair of 5Ks, another 2-port vPC linked to the layer 3 VDC, and a 4-port vPC used for the peer link.  I recently added an additional line card to the 7Ks and want to add redundancy for the new line card.  Would it be best to simply remove one of the existing ports from each vPC on line card 1 and then include a port from the new line card?  For the peer link, my thought was to remove two ports from the existing port-channel and then add back 2 ports from the new line card.  This way line cards 1 and 2 will share the ports that make up the vPCs instead of all of the vPC ports being on line card 1.  Does that make sense, or is there a better way to implement line card redundancy after adding a new line card?
    Also, if I want to add some of the ports on line card 2 to the layer 3 VDC, can I run the allocate interface command without fear of it removing any existing allocated ports?  I understand that the ports being moved/allocated to a different VDC will lose their current configuration.  I had a bad experience years ago with "switchport trunk allowed vlan" vs. "switchport trunk allowed vlan add", so I just want to make sure allocate interface doesn't require the "add" option!  :)
    Thanks!      

    Hi,
    To add the new interface to a port-channel (see the sketch after this list):
    1. Shut down the new physical interface and apply the same configuration as the existing member interface.
    2. Add it as a port-channel member.
    3. Unshut the new physical interface (module 2).
    4. Shut down the physical interface (module 1) that you want to remove from the port-channel.
    Moving an interface from one VDC to another VDC will erase the configuration on that interface. You can use the allocate interface command to add new interfaces to the VDC, but all the interfaces must be in the same port group. Adding a new interface to a VDC will not affect the existing interface configuration.
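    A rough NX-OS sketch of the member swap and the allocation; the interface, port-channel, and VDC names are assumptions for illustration:
    interface Ethernet2/1
      description new line card member
      shutdown
      switchport
      switchport mode trunk
      channel-group 10 mode active
      no shutdown
    interface Ethernet1/2
      description old member being retired
      shutdown
      no channel-group
    vdc L3-VDC
      allocate interface Ethernet2/9-12
    Per the reply above, the allocation is additive, but the ports in a single allocate statement must belong to the same port group.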
    HTH
    Regards,
    VS.Suresh.
    *Plz rate the usefull posts *

  • What is the Best Practice for A1000 LUN Configuration

    I have a fully populated 12 x 18 GB A1000 array. What is the optimal LUN configuration for an A1000 array running RAID 5 in a read-intensive Oracle Financials environment?
    1. One LUN (10 x 18 GB + 2 x 18 GB hot spares), using format to split at the OS level - current setting
    2. One LUN (10 x 18 GB + 2 x 18 GB hot spares), using RM6 to split into 3 LUNs
    3. Three LUNs (3 x 18 GB each, + 3 x 18 GB hot spares)
    I would like to know if option 2 or 3 will buy me anything more than 3 queues.
    Thanks
    F.A

    Well, the natural combination of the dimensions connected to the fact would be a natural primary key, and it would be composite.
    Having an artificial PK might simplify things a bit.
    Having no PK leads to a major mess. A fact should represent a business transaction, or some general event. If you're loading data, you want to be able to identify the records that are processed. Also, without a PK, if you forget to create a unique key, access to this fact table will be slow. Plus, having no PK means that if you want to use different tools, like the Data Modeller in JBuilder or OWB insert/update functionality, it won't function, since there's no PK. Defining a PK for every table is good practice. Not defining a PK is asking for a load of problems, from performance to functionality and data quality.
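    To make the composite-natural-key versus artificial-key choice concrete, a hedged Oracle SQL sketch; the table and column names are invented for illustration:
    -- Option A: composite natural PK from the dimension references
    CREATE TABLE sales_fact (
        date_id    NUMBER NOT NULL,
        product_id NUMBER NOT NULL,
        store_id   NUMBER NOT NULL,
        amount     NUMBER,
        CONSTRAINT sales_fact_pk PRIMARY KEY (date_id, product_id, store_id)
    );
    -- Option B: artificial (surrogate) PK, with a unique key on the natural combination
    CREATE TABLE sales_fact_sk (
        sales_fact_id NUMBER PRIMARY KEY,
        date_id       NUMBER NOT NULL,
        product_id    NUMBER NOT NULL,
        store_id      NUMBER NOT NULL,
        amount        NUMBER,
        CONSTRAINT sales_fact_sk_uk UNIQUE (date_id, product_id, store_id)
    );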
    Edited by: Cortanamo on 16.12.2010 07:12

  • Best practice for .war?  Configure and deploy or deploy and configure?

    In Apache Tomcat for example, I can deploy an app, stop the server, reconfigure the app in situ, then start the server again...
    Is this recommended for deploying Java web apps to Oracle App Server 10g?
    We currently have a consulting firm that is recommending configuring the web app before deploying. Sounds reasonable, except that they want this done via JDeveloper so that the sys admin can right-click on the "deploy to OAS" button (i.e., have the tool generate the .war file after configuration and deploy automagically).

    Thanks for your feedback.
    Are you aware of any way to use the *.deploy configuration file created by JDeveloper in an ANT script to create the .war or .ear file?
    If not, I can picture the sys admin and developers groaning when they're told that their JDeveloper web-app configuration cannot be used for production, and that they must somehow duplicate that functionality in an ANT script!
    I do have the ANT scripts below from Debu to do the deployment etc., but they only help after the .ear is built.
    EAR file deployment:
    <target name="deploy" depends="core">
        <java jar="${j2ee.home}/admin.jar" fork="yes">
            <arg value="${oc4j.deploy.ormi}"/>
            <arg value="${oc4j.deploy.username}"/>
            <arg value="${oc4j.deploy.password}"/>
            <arg value="-deploy"/>
            <arg value="-file"/>
            <arg value="${this.build}/${this.ear}"/>
            <arg value="-deploymentName"/>
            <arg value="${this.application.name}"/>
        </java>
    </target>
    Web application binding:
    <target name="bind-web-app" depends="deploy">
        <java jar="${j2ee.home}/admin.jar" fork="yes">
            <arg value="${oc4j.deploy.ormi}"/>
            <arg value="${oc4j.deploy.username}"/>
            <arg value="${oc4j.deploy.password}"/>
            <arg value="-bindWebApp"/>
            <arg value="${this.application.name}"/>
            <arg value="${this.war}"/>
            <arg value="http-web-site"/>
            <arg value="/${this.uri}"/>
        </java>
    </target>
    Undeployment:
    <target name="undeploy" depends="init">
        <java jar="${j2ee.home}/admin.jar" fork="yes">
            <arg value="${oc4j.deploy.ormi}"/>
            <arg value="${oc4j.deploy.username}"/>
            <arg value="${oc4j.deploy.password}"/>
            <arg value="-undeploy"/>
            <arg value="${this.application.name}"/>
        </java>
    </target>
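    For what it's worth, once the .ear is built, these targets would typically be chained as something like the following (assuming the oc4j.* and this.* properties are defined elsewhere in the build file; how they are populated is left to the build):
    ant deploy bind-web-app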

  • LDAP configuration for HR Portal in dual stack EHP4 - Best Practice

    Hi Experts,
    We are trying to use the Java stack of the ECC server for the HR portal, i.e. a dual stack, and have applied the EHP4 package for ESS/MSS Appraisal. When we try to configure the LDAP ADS data source through the portal, we are not able to, since the ABAP data source file is available by default. We are doing this for the HR (ESS/MSS) portal, for access to the object data stored in Active Directory.
    We have already checked note 718383.
    Also, for the scenario LDAP <-> ABAP <-> J2EE, we have already checked the SAP help doc here:
    http://help.sap.com/erp2005_ehp_04/helpdata/EN/e6/0bfa3823e5d841e10000000a11402f/frameset.htm
    What should now be the best practice to follow for the configuration? Should we go for a separate portal server, or is it possible to use the Java stack of the ECC server?
    Also, for the LDAP <-> ABAP <-> J2EE scenario, please suggest whether it is a best practice we can follow. What are the limitations, risks and issues? Please let me know if this has been implemented and is running well in any live project.
    Are the suggestions applicable for load-balanced production servers as well?
    Thanks,
    Rakesh

    Hi,
    The UME data source must remain ABAP, but you can sync the users between ABAP and LDAP using the LDAP connector:
    http://help.sap.com/saphelp_nw70ehp2/helpdata/en/48/74040175bb501ae10000000a42189b/frameset.htm
    Regards,
    Jozsef

  • Best practice for Documenting SOA Composites

    Hi,
    I am looking for general guidelines or best practices for creating documentation for the composites developed as part of a project.
    Are there any plugins which help export to Visio or another tool?
    I don't see a "create JPEG" button on the composite editor similar to BPEL's, so any suggestions for documenting that?
    In general, I would like your opinions/suggestions so we can adopt a process for better documentation.
    Thanks.

    Hi,
    As such, there are no particular guidelines or best practices that are followed for documentation.
    You may plug your source control system into JDeveloper, but that only helps during the coding process.
    We have used OER in our project for maintaining documentation and the relationships among different files (be they XSDs, WSDLs, BPELs, mediators, etc.).
    Thanks

  • SAP Best Practices for SSO Configuration

    Hello There,
    Are there any SAP best practices available for SSO configuration? If so, kindly help me with those.
    Also, are there any third-party tools available in the market for SSO configuration?
    I appreciate your help on this. Thanks in advance.
    Regards,
    Pranay S
    Edited by: Pranay Subedari on Apr 29, 2011 9:12 AM

    Hello,
    The types of SSO are classified by the systems involved in the configuration, i.e. SSO between the ABAP stack and the Java stack, LDAP, or the OS.
    Refer to the link for more details: [Document Deleted]
    Regards,
    Anand
    Message was edited by: Jason Lax

  • Oracle Service Bus - Large Configuration Space Best Practices

    Does anyone have any best practices for handling large configurations in Oracle Service Bus (formerly ALSB)? We are going to have hundreds of HTTP services defined. Any best practices for proxy service granularity, cross-cutting concerns, and componentization to help us achieve a high level of quality and consistency?
    Thanks

    We are going to face a similar situation soon. Any real-world experience would be great.

  • Best Practices for Configuration Manager

    What links/documents are available that summarize the best practices for Configuration Manager in these areas?
    Applications and Packages
    Software Updates
    Operating System Deployment
    Hardware/Software Inventory

    Hi,
    I think these may help you:
    system center 2012 configuration manager best practices
    SCCM 2012 task-sequence best practices
    SCCM 2012 best practices for deploying application
    Configuration Manager 2012 Implementation and Administration
    Regards, Ibrahim Hamdy

  • Best Practice for Distributed TREX NFS vs cluster file systems

    Hi,
    We are planning to implement a distributed TREX, using RedHat on x64, but we are wondering what the best practice or approach is for configuring the "file server" used in the distributed TREX environment. The guides mention a file server, which seems to be another server connected to a SAN, exporting or sharing the file systems that must be mounted on all the TREX systems (master, backup and slaves). But we know that the BI Accelerator uses OCFS2 (a cluster file system) to access the storage; in the case of RedHat we have GFS or even OCFS.
    Basically, we would like to know the best practice and how other companies are doing it for a distributed TREX environment, using either network file systems or cluster file systems.
    Thanks in advance,
    Zareh

    I would like to add one more thing: in my previous comment I assumed that it is possible to use a cluster file system with TREX because the BI Accelerator does, but maybe that is not supported; it does not seem to be clear in the TREX guides.
    This should be the initial question:
    Are cluster file system solutions supported on a plain TREX implementation?
    Thanks again,
    Zareh
