UCCX 7 Agent Based Routing - Best Approach

Hello all,
Our agent phones have two lines.  The first line is the contact centre agent number and the second line is the agent's personal DDI.
I am trying to work out the best approach to deal with calls coming in on the DDI line, as I would like these to be part of the contact centre too and to be able to report on call activity there.  My initial thought is that a call will come in on the DDI and, if the agent is not available, it will be transferred to the CSQ for that agent.
Would it be better for the DDI line to have a "call forward all" set to the CSQ trigger, and then use some enterprise parameter to see the original called number, do a lookup, and send the call to the agent whose DDI it is?  I'm just wondering how this is achieved programmatically.  Or would a better approach be for all of the agent DDI numbers to be triggers for the UCCX application?  This leads to the question: how many triggers can an application have?
Thanks,

This is always a sticky topic. Both of your ideas are possible and there isn't really a "best" option IMO.
Option one - agents CFA their DDI lines to a single trigger:
- It is only a single trigger in CCX to configure.
- Agents can turn this off by disabling CFA. This could be good or bad.
- CCX would need to handle redirecting the call to voicemail if reaching the agent fails.
- You remove the ability for the agent to even see the call (call waiting), since the busy trigger is set to one on their ICD line. Their only indication that someone called would be if the caller leaves a voicemail.
Option two - every agent DDI number becomes a trigger on the application:
- There is no documented limit on triggers per application.
- Agents cannot turn this on or off.
- Everything else would be the same as option one.
You can do this either with a Call Consult Transfer step or by attempting agent-based routing with the Select Resource step. For reporting reasons you would want to use Select Resource, though. This also means the agent cannot answer these calls at all unless they are logged in to CAD.
Also, you could choose to route to a CSQ instead of the agent's voicemail in any of these scenarios if you wanted. Bear in mind that CCX cannot queue callers to a specific agent, and CSQs don't scale well enough to have one per agent. This somewhat depends on who is calling, though. A menu with choices works well: "Press one for voicemail or press two to speak with another representative."
Lastly, another scenario might be to create an IVR with a "dial by extension" concept, where customers know the extension of the person they are reaching rather than the direct line number. CCX could do agent-based routing there as discussed above. The advantage is that the agents' direct lines would remain untouched. This only works if they refrain from giving that number out, though.
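Coming back to the lookup idea in the original post: with the single-trigger option, the script reads the original called number (the DDI) off the call and maps it to an agent login before attempting agent-based routing. Purely as an illustration of that mapping logic (this is plain Java, not actual CCX script steps, and the numbers and agent IDs are invented; in a real script the data would come from a DB Read or XML lookup keyed on the Get Call Contact Info result):

    import java.util.HashMap;
    import java.util.Map;

    public class DdiLookupSketch {
        // Hypothetical DDI-to-agent table; a real script would load this from
        // a database or XML document rather than hard-coding it.
        private static final Map<String, String> DDI_TO_AGENT = new HashMap<>();
        static {
            DDI_TO_AGENT.put("02079460001", "jsmith");
            DDI_TO_AGENT.put("02079460002", "mjones");
        }

        // Given the original called number, return the agent login to hand to
        // Select Resource, or null to fall back to a CSQ or menu.
        public static String agentForDdi(String originalCalledNumber) {
            return DDI_TO_AGENT.get(originalCalledNumber);
        }
    }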

Similar Messages

  • UCCX-Agent based Outbound

I have done all the configuration for agent-based outbound. The call is hitting CAD, but the Accept button is not visible. Please give me your suggestions.

    Hi Kiran,
Navigate to Desktop Work Flow Administrator, Call Center 1 -> Work Flow Configuration -> Work Flow Groups -> Default -> CAD Agent -> User Interface. Under the Outbound Dialer, check the check box for Direct Preview.
    Regards

  • Best approach to add Z custom field to IC Agent Inbox search and results view

    Hi Experts,
We have a requirement to add a Z custom field to the IC Agent Inbox search and results view. I have found multiple forum threads and ideas, but I am looking for the best approach to handle this. I am sure you experts have already done this.
    Thanks in advance.
    Regards
    Siva

    Hi Sivakumar,
AET is by far the best way to create a custom field in this area. It is easy and simple.
Also, once a field is added to one business object it can be used in other objects as well.
There is also a demo available for AET on SDN.
    Please let me know if any more help is required.
    Thanks,
    Bhushan

Best approach - Tab-based ADF Tree left side navigation with Dynamic Regions without UI Shell

    Hi,
Can somebody help with the best approach to implement the following requirement?
Req: When the user selects an item in the ADF Tree left side navigation menu, each menu item should open as one of multiple tabs (dynamic tabs) in the right side content area, without the UI Shell template.
I have completed:
Step 1: From the Model project I am able to render the ADF tree using views and view links. The tree has 3 menu items, and each menu item has 2 sub-menus.
I modelled each menu item as one (1) task flow, and each task flow has two (2) fragments.
In total I have 3 task flows for the menu items and 6 fragments for the sub-menus.
Step 2: My question is: how do I implement tab-based ADF tree navigation (left side area to dynamic regions in the content area) through dynamic regions? Please provide the steps for the view layer.

Thanks for your response.
This works fine for ADF tree navigation with dynamic regions if the task flow has only one fragment. If the task flow has more than one fragment it does not work: the conditions below always resolve to one page fragment of either the "employees" or the "departments" task flow. If the "employees" task flow has 2 page fragments it does not work even if you pass parameters through routers.
    public TaskFlowId getDynamicTaskFlowId() {
        if (currentTaskFlowID == null ||
            currentTaskFlowID.equalsIgnoreCase("employees")) {
            return TaskFlowId.parse(employeetaskFlowId);
        }
        if (currentTaskFlowID != null &&
            currentTaskFlowID.equalsIgnoreCase("departments")) {
            return TaskFlowId.parse(departmetaskFlowId);
        }
        return TaskFlowId.parse(employeetaskFlowId);
    }
    My question is "Same use case with Dynamic Tabs" when the user click on any adf tree node.

  • Custom routing agent based on sender's security group and subject

I made a custom routing agent that routes mails that contain the word [encrypt] in the subject and are sent from the domain test.com.
    The part of the code is
    if (e.MailItem.FromAddress.DomainPart.Contains("test.com")
                    && e.MailItem.Message.Subject.Contains("[encrypt]"))
Now what I need is to route mails based on membership of a certain security group like "securemail", not the whole domain. I.e., if the sender is a member of the security group (securemail) and the subject contains the word [encrypt], route the mail.
    Thanks

    Thanks for your answer Glen
The following code is for Exchange 2010, but I need it to check for security group membership if possible.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.Exchange.Data.Transport;
    using Microsoft.Exchange.Data.Transport.Email;
    using Microsoft.Exchange.Data.Transport.Smtp;
    using Microsoft.Exchange.Data.Transport.Routing;
    using Microsoft.Exchange.Data.Common;

    namespace RoutingAgentOverride
    {
        public class SampleRoutingAgentFactory : RoutingAgentFactory
        {
            public override RoutingAgent CreateAgent(SmtpServer server)
            {
                RoutingAgent myAgent = new ownRoutingAgent();
                return myAgent;
            }
        }

        public class ownRoutingAgent : RoutingAgent
        {
            public ownRoutingAgent()
            {
                // subscribe to the OnResolvedMessage event
                base.OnResolvedMessage += new ResolvedMessageEventHandler(ownRoutingAgent_OnResolvedMessage);
            }

            void ownRoutingAgent_OnResolvedMessage(ResolvedMessageEventSource source, QueuedMessageEventArgs e)
            {
                try
                {
                    // For testing purposes we do not only check the sender address but the subject line as well.
                    // If the subject contains the substring "[encrypt]" then the default routing is overwritten.
                    // Instead of hard-coding the sender domain you could also perform an LDAP query (for example,
                    // to test membership of the "securemail" group), read the information from a text file, etc.
                    if (e.MailItem.FromAddress.DomainPart.Contains("contoso.com")
                        && e.MailItem.Message.Subject.Contains("[encrypt]"))
                    {
                        // Here we set the address space we want to use for the next hop. Note that this doesn't
                        // change the recipient address. Setting the routing domain to "nexthopdomain.com" only
                        // means that the routing engine chooses a suitable connector for nexthopdomain.com
                        // instead of using the recipient's domain.
                        RoutingDomain myRoutingOverride = new RoutingDomain("nexthopdomain.com");
                        foreach (EnvelopeRecipient recp in e.MailItem.Recipients)
                        {
                            recp.SetRoutingOverride(myRoutingOverride);
                        }
                    }
                }
                catch // (Exception except)
                {
                    // swallow exceptions so a coding error does not break mail flow
                }
            }
        }
    }

  • Best approach for building dialogs based on Java Beans

I have a large number of Java Beans with several properties each. These represent all the "data" in our system. We will now build a new GUI for the system and I intend to reuse the beans as far as possible. My idea is to automatically generate the configuration dialogs for each bean using the java.beans package.
What is the best approach for achieving this? Should I use PropertyEditors, should I write my own dialog generator using the Introspector class, or are there other suitable solutions?
    All suggestions and tips are very welcome.
    Thanks!
    Erik

    Definitely, it is better for you to use JTable. Why not try it?
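For anyone landing here later, a minimal sketch of the Introspector-based generation Erik asks about (the Connection bean is a made-up stand-in for one of the system's data beans):

    import java.beans.BeanInfo;
    import java.beans.IntrospectionException;
    import java.beans.Introspector;
    import java.beans.PropertyDescriptor;

    public class DialogGeneratorSketch {
        // Hypothetical bean standing in for one of the real data beans.
        public static class Connection {
            private String host = "localhost";
            private int port = 8080;
            public String getHost() { return host; }
            public void setHost(String host) { this.host = host; }
            public int getPort() { return port; }
            public void setPort(int port) { this.port = port; }
        }

        public static void main(String[] args) throws IntrospectionException {
            // Introspect the bean; each PropertyDescriptor becomes one dialog row.
            BeanInfo info = Introspector.getBeanInfo(Connection.class, Object.class);
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                // A real generator would pick a component per property type
                // (JTextField for String, JSpinner for int) or use a registered
                // PropertyEditor when one exists.
                System.out.println(pd.getName() + " : "
                    + pd.getPropertyType().getSimpleName()
                    + " (read " + pd.getReadMethod().getName()
                    + ", write " + pd.getWriteMethod().getName() + ")");
            }
        }
    }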

  • Can I use Lab view for agent based modelling?

    Dear All, 
I have a project to do: it is optimisation of cost for maintenance management in a special type of contract.
I am looking to take an agent-based modelling approach to the project. I've just started using LabVIEW and I'm not sure if I can do this type of modelling with it. Can someone please confirm that I can, and if it's a familiar topic, show me the first few steps.
    Thanks,
    Davood

SebSieros wrote:
Hi Davood,
Could you please expand on what you mean by agent-based modelling in detail and what exactly you are trying to accomplish?
I look forward to hearing from you.
Kind regards
Seb
Hi Seb,
What I am trying to accomplish is an optimised cost by trading off between 3 main areas (labour, spare parts and logistics). Each area has its own sections and subsections, and the important part is that there are plenty of interactions between the sections and subsections of the 3 main areas. In simple words, the project is: change the cost of training and see its impact on the total optimised cost; or select supplier A, which provides item X more cheaply than supplier B, taking into account that supplier A requires a different logistics plan or sometimes more training, and finally see what happens to the total optimised cost.
I drew a simple conceptual framework to present the project more clearly. Please see the attachment. I'll be happy to clarify any point if needed.
    Attachments:
Project.png 105 KB

Best approach - To create an RTF template having more than 50 tables

    Hi All,
Need your help. I am new to BI Publisher. Currently we are using BIP 11g.
I want to develop an .rtf template having lots of layouts and images.
Data is coming from different tables (for example, pulling from around 40 tables). When I tried to pull data from 5 tables by joining them, it took a long time using a data model in BI Publisher 11g, saved as XML and used in a Word doc.
Could you please suggest the best approach: should I develop the .rtf template via a data model or via a query to generate the report?
Also please suggest/guide me.
    Regards & Thanks in advance.

It's a very specific requirement.
First of all it relates to the logic behind the report.
For example: are the 50 tables related? Or are they 50 independent tables? Or maybe 5 related and the others independent?
Based on the relations between the tables you create your SQL statement(s).
How many SQL statements you'll have leads to identifying ways to get the data, for example by package or trigger etc.
Keep in mind the size of the resulting select statement(s): if the size is, say, 1 MB it should be fast to get the report, but 1000 MB can consume a lot of time.
Also keep in mind that the time goes not only into selecting the data but into merging the data with the template.
It looks like experimenting and knowing the full logic of the report are the only ways to get the needed output in terms of data and time.

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10 G (10.2.0.3.0). OWB is installed on Linux.
    We have number of mappings. We built process flows for mappings as well.
I would like to know the best approaches to incorporate re-start options in our process, i.e. handling a failure of a mapping in a process flow.
    How do we re-cycle failed rows?
    Are there any builtin features/best approaches in OWB to implement the above?
    Does runtime audit tables help us to build re-start process?
    If not, do we need to maintain our own tables (custom) to maintain such data?
    How did our forum members handled above situations?
    Any idea ?
    Thanks in advance.
    RI

    Hi RI,
How many mappings (range) do you have in a process flow?
Several hundred (100-300 mappings).
If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails?
Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the execution of mapping m2 from the Workflow monitor - open the diagram with the process flow, select mapping m2, click the Expedite button and choose the option Repeat.
On re-start, will it run m1 again and then m2 and so on, or will it re-start at row 1 of m2?
You can specify the restart point. "At row 1 of m2" - I don't understand what you mean. All mappings run in set-based mode, so in case of error all table updates are rolled back (there are several exceptions - for example multiple target tables in a mapping without correlated commit, or an error in post-mapping - so you must carefully analyze the results of the error).
What will happen if m3 fails?
The process is suspended and you can restart execution from m3.
By running without failover and with max. number of errors = 0, you reduce re-cycled failed rows to zero (0). These settings guarantee only two possible results of a mapping - SUCCESS or ERROR.
What is the impact if we have a large volume of data?
In my opinion, for large volumes set-based mode is the preferred processing mode.
With this mode you have the full range of enterprise features of the Oracle database - parallel query, parallel DML, nologging, etc.
    Oleg

  • What's the best approach to resetting Calendar data on Server?

    I have a database format error in a calendar that I only noticed after the migration to Server on Yosemite.  I'll paste a snippet from the Error Log in at the bottom that shows the error - I've highlighted the description of the problem in red.
I found a pretty cool writeup from Linc in a different thread, but it's aimed at fixing a similar problem for a local user on their own machine rather than an iCal server like the one we're running.  Here's the link to that thread: Re: Calendar crashes on open.  For example, does something like Calendar Cleaner work on our server database as well?
    In my case I think I'd basically like to gracefully remove all the Calendar databases from Server and start fresh (all the users' calendars are backed up on their local machines, so they can just import them into fresh/empty calendars once I've cleaned out the old stuff).  Any thoughts on "best approach" would be much appreciated.
    Here's the error log...
    File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twi sted/internet/defer.py", line 1099, in _inlineCallbacks
    2015-01-31 07:14:41-0600 [-] [caldav-0]         result = g.send(result)
2015-01-31 07:14:41-0600 [-] [caldav-0]       File "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/txdav/caldav/datastore/sql.py", line 3635, in component
    2015-01-31 07:14:41-0600 [-] [caldav-0]         e, self._resourceID
    2015-01-31 07:14:41-0600 [-] [caldav-0]     txdav.common.icommondatastore.InternalDataStoreError: Data corruption detected (Invalid property: GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     VERSION:2.0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CALSCALE:GREGORIAN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     PRODID:-//Apple Inc.//Mac OS X 10.8.2//EN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTART:20121114T215900Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTEND:20121114T232700Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CLASS:PUBLIC
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CREATED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DESCRIPTION:Flight leg 2 of 2 for trip from MSP to LAX\\nhttp://www.google.
    2015-01-31 07:14:41-0600 [-] [caldav-0]      com/search?q=US+29+flight+status\\nBooked on November 8\\, 2012\\n
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTAMP:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LAST-MODIFIED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LOCATION:Sky Harbor International Airport\\, Phoenix\\, AZ
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SEQUENCE:0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     STATUS:CONFIRMED
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SUMMARY:US 29 from PHX to LAX
    2015-01-31 07:14:41-0600 [-] [caldav-0]     URL:http://www.hipmunk.com/flights/MSP-to-LAX#!dates=Nov14,Nov17&group=1&s
    2015-01-31 07:14:41-0600 [-] [caldav-0]      elected_flights=96f6fbfd91,be8b5c748d;kind=flight&locations=MSP,LAX&dates=
    2015-01-31 07:14:41-0600 [-] [caldav-0]      Nov14,Nov16&group=1&selected_flights=96f6fbfd91,
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-CALENDARSERVER-PERUSER-UID:D0737009-CBEE-4251-A288-E6FCE5E00752
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRANSP:OPAQUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACKNOWLEDGED:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACTION:AUDIO
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ATTACH:Basso
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRIGGER:-PT2H
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-APPLE-DEFAULT-ALARM:TRUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-WR-ALARMUID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ) in id: 3405
    2015-01-31 07:14:41-0600 [-] [caldav-0]    
2015-01-31 07:16:39-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None
    2015-01-31 08:08:40-0600 [-] [caldav-1]  [AMP,client] [calendarserver.tools.purge#warn] Cleaning up future events for principal A95C9DB2-9757-46B2-ADF6-4DECE2728820 since they are no longer in directory
    2015-01-31 08:09:10-0600 [-] [caldav-1]  [-] [twext.enterprise.jobqueue#error] JobItem: 39, WorkItem: 762001 failed: ERROR:  canceling statement due to statement timeout
    2015-01-31 08:09:10-0600 [-] [caldav-1]    
2015-01-31 08:13:40-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None

    <facepalm>  Well, there you go.  It turns out I was over-thinking this.  The Calendar app on a Mac can manage this database just fine.  Sorry about that.  There may be an easier way to do this, but here's how I did it.
    Use the Calendar.app on a local computer to:
    - Export the corrupted calendar to an ICS file on the local computer (Calendar -> File -> Export -> Export)
    - Create a new local calendar (Calendar -> File -> New Calendar -> On My Mac)
    - Import the corrupted calendar into the new/empty local calendar (Calendar -> File -> Import...)
    - Delete years and years of old events, including the one that was triggering that error message
    - Export the (now much smaller) local calendar to another ICS file on my computer (Calendar -> File -> Export -> Export)
    - Create a new calendar on the server (Calendar -> File -> New Calendar -> in the offending server-based iCal account)
    - Import the edited/fixed/smaller/no-longer-corrupted calendar into the new/empty server calendar (Calendar -> File -> Import...)
    - Make the newly-created iCal calendar the primary calendar (drag it to the top of the list of calendars on the server)
    - Delete the old/corrupted calendar (right-clicking on the bad calendar in the calendar list - you can only delete it once it's NOT the primary calendar any more)

  • Best approach to do Range partitioning on Huge tables.

    Hi All,
I am working on an 11gR2 Oracle 3-node RAC database. Below are the DB details.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
In my environment we have 10 big transaction tables (10 billion rows) and they are growing bigger and bigger. Now management is planning to do a range partition based on the created_dt partition key column.
We tested this partitioning strategy with a few million records in another environment with the steps below.
    1. CREATE TABLE TRANSACTION_N
    PARTITION BY RANGE ("CREATED_DT")
    ( PARTITION DATA1 VALUES LESS THAN (TO_DATE(' 2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART1,
PARTITION DATA2 VALUES LESS THAN (TO_DATE(' 2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART2,
    PARTITION DATA3 VALUES LESS THAN (TO_DATE(' 2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART3
    as (select * from TRANSACTION where 1=2);
2. Exchange partition to move the data to the new partitioned table from the old one.
    ALTER TABLE TRANSACTION_N
    EXCHANGE PARTITION DATA1
    WITH TABLE TRANSACTION
    WITHOUT VALIDATION;
3. Create the required indexes (took almost 3.5 hrs with parallel 16).
4. Rename the tables and drop the old ones.
This took around 8 hrs for one table with 70 million records, so for billions of records it will take far more than 8 hrs. But the problem is we only get 2 to 3 hrs of downtime in production to implement these changes for all tables.
Can you please suggest the best approach to copy that much data from the existing tables to the newly created partitioned tables and create the required indexes?
    Thanks,
    Hari

>
> We tested this partitioning strategy with a few million records in another environment ... this took around 8 hrs for one table with 70 million records ... we only get 2 to 3 hrs of downtime in production ... Can you please suggest the best approach to copy that much data from the existing tables to the newly created partitioned tables and create the required indexes?
>
Sorry to tell you, but that test and partitioning strategy is essentially useless and won't work for your entire table anyway. One reason is that if you use the WITHOUT VALIDATION clause you must ensure that the data being exchanged actually belongs to the partition you are putting it in. If it doesn't, you won't be able to re-enable or rebuild any primary key or unique constraints that exist on the table.
    See Exchanging Partitions in the VLDB and Partitioning doc
    http://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin002.htm#i1107555
    >
    When you specify WITHOUT VALIDATION for the exchange partition operation, this is normally a fast operation because it involves only data dictionary updates. However, if the table or partitioned table involved in the exchange operation has a primary key or unique constraint enabled, then the exchange operation is performed as if WITH VALIDATION were specified to maintain the integrity of the constraints.
    If you specify WITHOUT VALIDATION, then you must ensure that the data to be exchanged belongs in the partition you exchange.
    >
    Comments below are limited to working with ONE table only.
ISSUE #1 - ALL data will have to be moved regardless of the approach used. This should be obvious, since your current data is all in one segment but each partition of a partitioned table requires its own segment. So the nub of partitioning is splitting the existing data into multiple segments, almost as if you were splitting it up and inserting it into multiple tables, one table for each partition.
ISSUE #2 - You likely cannot move that much data in the 2 to 3 hour window that you have available for downtime, even if all you had to do was copy the existing datafiles.
ISSUE #3 - Even if you can avoid issue #2, you likely cannot rebuild ALL of the required indexes in whatever remains of the outage window after moving the data itself.
ISSUE #4 - Unless you have conducted full-volume performance testing in another environment prior to doing this in production, you are taking on a tremendous amount of risk.
ISSUE #5 - Unless you have fully documented the current, actual execution plans for your most critical queries in your existing system, you will have great difficulty overcoming issue #4, since you won't have the requisite plan baseline to know whether the new partitioning and indexing strategies are giving you equivalent, or better, performance.
ISSUE #6 - Things can, and will, go wrong and cause delays no matter which approach you take.
    So assuming you plan to take care of issues #4 and #5 you will probably have three viable alternatives:
1. use DBMS_REDEFINITION to do the partitioning on-line (a sketch of the call sequence follows at the end of this reply). See the Oracle docs and this example from oracle-base for more info.
    Redefining Tables Online - http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables007.htm
    Partitioning an Existing Table using DBMS_REDEFINITION
    http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
    2. do the partitioning offline and hope that you don't exceed your outage window. Recover by continuing to use the existing table.
    3. do the partitioning offline but remove the oldest data to minimize the amount of data that has to be worked with.
You should review all of the tables to see if you can remove older data from the current system. If you can, you could use online redefinition that ignores the older data. Then afterwards you can extract this old data from the old table for archiving.
    If the amount of old data is substantial you can extract the new data to a new partitioned table in parallel and not deal with the old data at all.
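To make option 1 concrete, here is a sketch of the DBMS_REDEFINITION call sequence, wrapped in JDBC so it is runnable from a client program. The connection details, the APP schema name and the use of the primary key are assumptions, and the interim table TRANSACTION_N must already exist with the desired RANGE (CREATED_DT) partitioning:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class OnlineRedefSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; requires the Oracle JDBC driver.
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/orcl", "app", "secret");
                 Statement st = con.createStatement()) {

                // 1. Verify the table can be redefined using its primary key.
                st.execute("BEGIN DBMS_REDEFINITION.CAN_REDEF_TABLE("
                    + "uname => 'APP', tname => 'TRANSACTION', "
                    + "options_flag => DBMS_REDEFINITION.CONS_USE_PK); END;");

                // 2. Start online redefinition into the partitioned interim table.
                st.execute("BEGIN DBMS_REDEFINITION.START_REDEF_TABLE("
                    + "uname => 'APP', orig_table => 'TRANSACTION', "
                    + "int_table => 'TRANSACTION_N'); END;");

                // 3. Copy indexes, constraints, triggers and grants across.
                st.execute("DECLARE n PLS_INTEGER; BEGIN "
                    + "DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS("
                    + "uname => 'APP', orig_table => 'TRANSACTION', "
                    + "int_table => 'TRANSACTION_N', num_errors => n); END;");

                // 4. Swap the tables; only this final step takes a short lock,
                //    which is what keeps the outage window small.
                st.execute("BEGIN DBMS_REDEFINITION.FINISH_REDEF_TABLE("
                    + "uname => 'APP', orig_table => 'TRANSACTION', "
                    + "int_table => 'TRANSACTION_N'); END;");
            }
        }
    }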

  • Policy based routing on VRF interfaces to route traffic through TE Tunnel

    Hi All,
Is there a method to do policy-based routing on VRF interfaces and route data traffic through one TE tunnel and non-data traffic through another TE tunnel?
The tunnel is already built up with the config below:
    interface Tunnel25
    ip unnumbered Loopback0
    tunnel destination 10.250.16.250
    tunnel mode mpls traffic-eng
    tunnel mpls traffic-eng path-option 10 explicit name test
    ip explicit-path name test enable
    next-address x.x.x.x
    next-address y.y.y.y
    router ospf 1
    mpls traffic-eng router-id Loopback0
    mpls traffic-eng area 0
    mpls traffic-eng tunnels
interface GigabitEthernet5/2
    mpls traffic-eng tunnels
    mpls ip
Is there additional config needed for this to work? Also, at the destination end, for the return traffic we want to use the normal path - I mean a non-TE tunnel.
We tested the above scenario but couldn't reach the destination. Meanwhile we had a question: when the packet hits the policy map on ingress, it may not know the association with the VRF (is that right? If so, how do we make that happen?).
    Any help would be really appreciated
    Thanks
    Regards
    Anantha Subramanian Natarajan

hi Anantha!
I might not be the right person to comment on your first question. I have not configured MVPNs yet and am not very comfortable with the topic.
But I am sure that if you read through the CBTS doc thoroughly, you will be able to derive the answer yourself. One thing I notice is that a tunnel will be selected regularly according to the routing process (even if it is CBTS enabled). From the tunnels selected using the regular best-path selection, the traffic is mapped to a particular tunnel in the group if a specific class is mapped to that tunnel.
So a master tunnel can be the only tunnel between the 2 devices over which the routing (BGP next hops) is exchanged, and all other tunnels can be members of this tunnel. So your RPF might not fail.
You might have to explore this a bit more and read about the co-existence of multicast and TE. This will be the same as that.
For your second question, the answer is easy:
If you want a specific EoMPLS customer to take a particular tunnel/path, just create a separate pair of loopbacks on the PEs. Make the loopback learnt on the remote PE through the tunnel/path that you want the EoMPLS to take, then establish the xconnect with this loopback. I am assuming that your question is that a particular EoMPLS session should take a particular path.
If you meant that certain traffic from the same EoMPLS session should take a different path/tunnel, then CBTS will work.
    Regards,
    Niranjan

  • Best approach for multi-team/multi-projects.

    Hi,
I'm looking for the best approach to handle a multi-team/multi-project scenario. We have 20 development groups and over 300 products, each product on its own schedule.
Product X can be assigned to Group A, but at some point it can be reassigned to Group B.
    We are currently using TFS 2012, but will be upgrading to 2013 soon.
Based on much reading, we are thinking of creating only one Team Project to ease management.
In it, we will create a team for each development group, but we will not create an associated area path with the name of the team.
    - Group A
    - Group B
    - Group C
    Than, we will create an Area for each product.
    - Product X
    - Product Y
    - Product Z
and we will create multiple levels of iterations to match each schedule.
    - Product X
       - Release 1
          - Sprint 1
          - Sprint 2
    - Product Y
        - Release 1
           -Sprint 1
        - Release 2
           -Sprint 1
The main issue we have with this approach is that we can't use the backlog or the task board effectively, as there is no way to filter per area and iteration.
Reading "How do I change the underlying query for the task board (and backlog board) on TFS Preview", this doesn't seem to be possible in TFS 2012.
In TFS 2013, "Agile Portfolio Management: Using TFS to support backlogs across multiple teams" was introduced. Will this help solve the problem?
    We would create a management team for each development group.
    We would create an agile team with an associated area for each product.
The only thing that I couldn't find in the documentation is how to re-assign an agile team to another management team. Is this possible?
Also, can each agile team have its own specific iterations, and if so, will they roll up properly to the management team?
    Regards
    SYSOTI
PS: Sorry, I couldn't post the links for the quoted text as I get the message: "Body text cannot contain images or links until we are able to verify your account." ;-(

    Hi SYSOTI,
Based on your description, it seems the area path is not configured properly, hence you can't use the backlog or the task board effectively.
In "Agile Portfolio Management: Using TFS to support backlogs across multiple teams", the area path is set per agile team, which is a group of team members, not a product name. For your scenario, you can set the area path name to your product name to identify the associated products for work items. The groups you mentioned for products in the team project are then sub-groups of contributors.
There seems to be no need to create a management team for each development group, since a management team sits at a higher level to view the progress of all the work across the agile teams. Certainly, you can create multiple management teams, but each management team will be able to view work for all agile teams.
If you have multiple teams and products, you can create a team project for each product if the products don't have much relationship to each other. However, it's OK to manage the projects for multiple products in the same team project, and working within a single team project also has benefits; you can check this blog for more information.
    Best regards,

  • Best approach to create a security environment in Java

I need to create a desktop application that will run third-party code, and I need to prevent the third-party code from exporting information from the application by any means (web, clipboard, file I/O).
Something like:
    public class MyClass {
        private String protectedData;

        public void doThirdPartyTask() {
            String unprotectedData = unprotect(protectedData);
            ThirdPartyClass.doTask(unprotectedData);
        }

        private String unprotect(String data) {
            // decrypt the data for use by the task
            return data;
        }
    }

    class ThirdPartyClass {
        public static void doTask(String unprotectedData) {
            // Do task using unprotected data.
            // Malicious code may try to externalize the data.
        }
    }
    I'm reading about SecurityManager and AccessController, but I'm still not sure what's the best approach to handle this.
    What should I read about to do this implementation?

Whilst code without any permissions (as supplied through the ProtectionDomain by the class's ClassLoader) cannot access the network, files or the system clipboard, this does not mean it is entirely isolated.
    Even modern cryptographic systems are surprisingly vulnerable to side-channel attacks.
    Where an untrusted agent has access to sensitive data, it isn't very feasible to stop any escape of that data. Sure, you can block off overt posting of the data, but you cannot reasonably block off all covert channels.
    Steganographic techniques are a particularly obvious way to covertly send sensitive data out amongst intended publications.
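As a starting point for the reading, here is a minimal sketch of the SecurityManager/AccessController mechanics the original poster mentions: running the untrusted task inside an AccessControlContext whose ProtectionDomain carries no permissions. As noted above, this only blocks the overt channels; the file name and the task body are illustrative only:

    import java.io.FileInputStream;
    import java.security.AccessControlContext;
    import java.security.AccessController;
    import java.security.Permissions;
    import java.security.PrivilegedAction;
    import java.security.ProtectionDomain;

    public class SandboxSketch {
        public static void main(String[] args) {
            // A SecurityManager must be active for permission checks to run
            // (deprecated on recent JVMs; this reflects the Java 6/7-era APIs
            // this thread is about).
            System.setSecurityManager(new SecurityManager());

            // A context whose single ProtectionDomain has an empty permission set.
            AccessControlContext noPermissions = new AccessControlContext(
                new ProtectionDomain[] {
                    new ProtectionDomain(null, new Permissions()) });

            // Run the untrusted task inside the restricted context; overt
            // escape attempts (files, sockets, clipboard) are denied.
            AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
                try {
                    new FileInputStream("secret.txt"); // denied: FilePermission
                } catch (SecurityException expected) {
                    System.out.println("Blocked: " + expected.getMessage());
                } catch (java.io.IOException io) {
                    System.out.println("I/O: " + io.getMessage());
                }
                return null;
            }, noPermissions);
        }
    }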

  • What is best approach to report building in my case?

    Hi all,
I'm just getting started with Crystal Reports for our Swing-based desktop application.  We need the ability to generate PDF and XLS reports, perhaps later adding web-based dashboards and interactive reports.  I'm trying to determine the best approach to take with Crystal Reports to fit our application's data.
Our app stores results in a separate database (either Oracle, SQL Server or Apache Derby).  The result records contain lots of ID lookups against tables in another database.  This makes using straight SQL for reporting difficult, as I would like to avoid cross-database queries.  So I'm thinking of using the POJO reporting approach, where our app gathers the results, generates POJOs, and then passes them to the report.
My concern with this POJO approach is that it seems to require loading all results into memory and generating the report in one big step.  I've read other posts referring to heap issues.  Is there a way to avoid this?  Some way to page through report data?
I've also read that Crystal Reports can work with any data provider that implements ResultSet.  Is this true?  If so, could I create my own custom ResultSet implementation that would let me page through my results without loading everything into memory at once?  If possible, please point me to the documentation for this approach.  I haven't been able to find any examples.
    If there is a better approach that I haven't mentioned, please let me know. 
    Thanks in advance,
    Guy

The first option is the best one for performance.  The only time you should use result sets is when you need to do runtime manipulation of the data through your application and it is not achievable in a stored procedure.
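On the heap concern in the question: independent of any Crystal Reports API (no Crystal calls are shown here), the paging idea can be sketched as an iterator that materialises one block of POJOs at a time, so at most one page of rows is ever in memory. ResultPojo and fetchPage are hypothetical placeholders for the real row type and the cross-database lookup query:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Iterator;
    import java.util.List;
    import java.util.NoSuchElementException;

    public class PagedResults implements Iterator<PagedResults.ResultPojo> {
        /** Hypothetical report row. */
        public static class ResultPojo {
            public final String name;
            public ResultPojo(String name) { this.name = name; }
        }

        private final int pageSize;
        private int nextOffset = 0;
        private boolean exhausted = false;
        private Iterator<ResultPojo> page = Collections.emptyIterator();

        public PagedResults(int pageSize) { this.pageSize = pageSize; }

        @Override
        public boolean hasNext() {
            if (!page.hasNext() && !exhausted) {
                // Fetch the next block only when the current one runs out.
                List<ResultPojo> rows = fetchPage(nextOffset, pageSize);
                nextOffset += rows.size();
                exhausted = rows.isEmpty();
                page = rows.iterator();
            }
            return page.hasNext();
        }

        @Override
        public ResultPojo next() {
            if (!hasNext()) throw new NoSuchElementException();
            return page.next();
        }

        // Placeholder: would run the per-database queries and resolve the
        // cross-database ID lookups for one block of rows.
        private List<ResultPojo> fetchPage(int offset, int limit) {
            return new ArrayList<ResultPojo>();
        }
    }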
