Collections Management (Capacity Planning): FICA Event/s?

Hello colleagues,
I have a requirement to apply additional criteria for Capacity Planning during dunning.
I need to restrict the volume of Dunning Activities by Regional Structure Group (after retrieving it from the Connection Object).
If anyone has any suggestions as to what FICA event to use, or where to find the count of proposed dunning activities after the proposal step is complete, that would really help.
In Event 300, I do not see any structure that gives me the count of proposed Dunning Activities. However, the Capacity Balancing functionality is evidently retrieving the count and applying the limits based on configuration. I hope I do not have to loop through structure FKKMAGRP for the in-progress Dunning Run ID to get the counts.
Thanks in advance.
Ivor Martin

Ivor:
If you use the Dunning by Collection Strategy method, this is delivered as standard. To do it on your own, use event 1799. In the standard event you can see where this is executed for the collection steps.
regards,
bill.
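To make the idea concrete: the kind of check event 1799 performs, counting proposed activities per group and cutting off at a configured ceiling, can be sketched outside ABAP. Everything below is a hypothetical illustration in Python; in the real system the grouping key would be the Regional Structure Group derived from the Connection Object, the records would come from the dunning proposal structures, and the limits from the capacity configuration:

```python
from collections import defaultdict

# Hypothetical data: each proposed dunning activity carries the regional
# structure group already resolved from its connection object.
proposed = [
    {"account": "A1", "region_group": "NORTH"},
    {"account": "A2", "region_group": "NORTH"},
    {"account": "A3", "region_group": "SOUTH"},
]
limits = {"NORTH": 1, "SOUTH": 5}  # configured capacity per group

def apply_capacity_limits(activities, limits):
    """Keep activities only while their group is under its configured limit."""
    counts = defaultdict(int)
    accepted = []
    for act in activities:
        group = act["region_group"]
        if counts[group] < limits.get(group, 0):
            counts[group] += 1
            accepted.append(act)
    return accepted

result = apply_capacity_limits(proposed, limits)
```

Here NORTH is capped at 1, so the second NORTH activity is dropped and two of the three proposals survive.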

Similar Messages

  • Capacity Planning for Workflow Manager

    Some information is available regarding highly available Workflow Manager implementations but I cannot find any information regarding when or how you should perform capacity planning for the workflow manager.
    A few questions are jumping out at me.
    When would we want/need a dedicated Workflow Manager server? (I can understand that sharing it between multiple farms may be a benefit, but what about from a pure capacity perspective?)
    Are there any case studies or documentation that we can use as a baseline estimation for capacity for the workflow manager?
    Generally speaking for a small/medium farm deployment is it expected that this service also run on the application server?
    Certifications: MCITP, MCTS, MCPD | blog: http://corypeters.net | twitter: @cory_peters

    That really depends on what kind of workflows are being run and the traffic being generated by the users. If you have long and complex workflows, you will need multiple nodes in the farm to handle them.
    Our tests with very simple workflows on a 16-core, 16 GB machine, with SQL Server on a different machine, showed that it could handle 400 incoming messages per second and 65 workflows executed per second. If you have complex workflows, it may be slower still.
    If you have one message per user per second, that may mean 400 users handled per second. But I wouldn't draw that conclusion: you must test your farms against your scale needs and plan your farm capacity accordingly.
    Hope this helps
    Ravi Sekhar
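Ravi's figures can be turned into a rough, back-of-envelope sizing formula. This is only a sketch: the 400 messages/second per node and the 30% headroom derating are assumptions carried over from the simple-workflow test above, and complex workflows would lower the per-node number considerably:

```python
import math

def nodes_needed(target_msgs_per_sec, per_node_capacity=400, headroom=0.7):
    """Estimate Workflow Manager farm size from a measured per-node throughput.

    headroom derates each node so it runs below its benchmarked maximum.
    All figures are illustrative; measure your own workflows first.
    """
    effective = per_node_capacity * headroom  # usable throughput per node
    return math.ceil(target_msgs_per_sec / effective)

n = nodes_needed(1000)  # e.g. a target of 1000 incoming messages/sec
```

At 400 msgs/sec per node derated to 280, a 1000 msgs/sec target comes out to 4 nodes; the point of the exercise is the structure of the estimate, not the specific numbers.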

  • Capacity Planning for Azure Managed Cache Service Spreadsheet Missing

    I'm currently choosing between dedicated in-role caching and the Azure Managed Cache Service. It seems pretty clear that for in-role caching one must consider the cache access frequency when choosing the in-role cache size, as demonstrated by the capacity planning spreadsheet found here: http://msdn.microsoft.com/en-us/library/hh914129.aspx (although the documentation states: "Your application is the only consumer of the cache. There are no predefined quotas or throttling. Physical capacity (memory and other physical resources) is the only limiting factor.")
    It is less clear whether this is also the case for the Azure Managed Cache Service, since the documentation simply states: "Now, there are no predefined quotas on bandwidth and connections. Physical capacity is the only limiting factor and you only pay based upon the cache size. You can now focus solely on your application and its data needs."
    And the capacity planning guide spreadsheet found here:
    http://msdn.microsoft.com/en-us/library/dn386139.aspx
    does not link to the actual spreadsheet.
    Is there some way to get the capacity planning guide spreadsheet for the Azure Managed Cache Service? If not, can someone tell me whether we need to consider cache access frequency (and not just size) when choosing the Azure Managed Cache Service?
    Thanks!

    Just kidding, I found the planning spreadsheets here:
    http://www.microsoft.com/en-us/download/details.aspx?id=30000
    That said, I'm still unsure whether the data read/write frequency (bandwidth) is relevant in choosing capacity.
    In-role: "Your application is the only consumer of the cache. There are no predefined quotas or throttling. Physical capacity (memory and other physical resources) is the only limiting factor."
    Managed: "Now, there are no predefined quotas on bandwidth and connections. Physical capacity is the only limiting factor and you only pay based upon the cache size. You can now focus solely on your application and its data needs."
    I'm confused because when using the caching capacity planner spreadsheet, a greater cache size is recommended as the number of reads/second is increased. But why would I need a larger cache size if the same object is being read by multiple users and there is no limit on bandwidth?
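One plausible explanation for the spreadsheet's behavior is that each cache offering carries an effective throughput ceiling alongside its size, so a sizing tool picks the smallest tier that satisfies both the working-set size and the read bandwidth. The tiers and quotas below are entirely made up for illustration; check the actual offering limits before relying on this reasoning:

```python
# Hypothetical cache offerings: (size_mb, max_bandwidth_mb_per_sec).
# Real quotas differ per service; the point is only that a sizing tool
# may recommend a larger cache to satisfy throughput, not object volume.
tiers = [(128, 20), (256, 40), (512, 80), (1024, 160)]

def pick_tier(working_set_mb, reads_per_sec, avg_object_kb):
    """Smallest tier satisfying both the memory and the bandwidth need."""
    needed_bw = reads_per_sec * avg_object_kb / 1024.0  # MB/s of reads
    for size, bw in tiers:
        if size >= working_set_mb and bw >= needed_bw:
            return size
    return None  # no single tier fits

tier = pick_tier(working_set_mb=100, reads_per_sec=5000, avg_object_kb=10)
```

With a 100 MB working set but 5000 reads/sec of 10 KB objects (about 49 MB/s), the smallest tier that fits is driven by bandwidth, not size, which would explain why raising reads/second raises the recommended cache size.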

  • From Forms Product Management - 10g Capacity Planning Guide

    We have published a capacity planning guide for Oracle Forms; now available from otn.oracle.com/products/forms.
    Regards
    Grant Ronald
    Forms Product Management

    Nice - thanks!

  • Bandwidth Utilization Avg or Max for capacity Planning best practice

    Hello all - this is a conceptual, non-Cisco-product question. I hope you can help me find the best industry practice.
    I am doing capacity planning for WAN link bandwidth. Studying last month's bandwidth utilization in the MRTG graph, I see two values:
    Average
    Maximum
    To measure how much bandwidth my remote location is using, which value should I use: average or maximum?
    The average is always low, e.g. 20% to 30%.
    The maximum is a continuous 100% for 3 hours, in 3 different intervals in a day, and around 60% for the rest of the day.
    What is the best practice followed in the networking industry to derive the upgrade size of the bandwidth from the utilization graph?
    regards,
    SAIRAM

    Hello.
    It makes no sense to use the average over a whole day (or month), as you do capacity management to avoid business impact due to link utilization, and the average does not help you catch whether end users experience any performance issues.
    Typically your capacity-management algorithm/thresholds depend on traffic patterns; these are really different cases if you run SAP+VoIP vs. YouTube+Outlook. If you have any business-critical traffic, you need to deploy QoS (unless you are allowed to increase link bandwidth indefinitely).
    So I would recommend using the 95th percentile of the maximum values over a 5-15 minute interval (your algorithm/thresholds will be really sensitive to the polling interval, so choose it carefully). After collecting a baseline (for a month or so), go and ask users about their experience and try to correlate poor experience with traffic bursts. This will help you define thresholds for link-upgrade triggers.
    PS: proactive capacity management includes link planning for new sites and their impact on existing links (in HQ and other spokes).
    PS2: I would also recommend separately tracking utilization during business hours (business traffic) and non-business hours (service or backup traffic).
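The 95th-percentile rule above is easy to compute once the samples are exported from MRTG or a similar tool. A minimal nearest-rank sketch in Python (with a month of 5-minute samples, roughly 8640 values, the choice of interpolation scheme barely matters):

```python
def percentile_95(samples):
    """95th percentile of link-utilization samples (e.g. 5-minute maxima),
    using the simple nearest-rank method."""
    ordered = sorted(samples)
    rank = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[rank]

# Toy input: 100 utilization samples from 1% to 100%
samples = list(range(1, 101))
p95 = percentile_95(samples)
```

On this toy input the 95th percentile is 95: the handful of worst bursts are ignored, but sustained congestion still shows up, which is exactly why it beats the daily average for upgrade decisions.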

  • HR personnel availability as input to PM labor capacity planning

    Good Day SAP Gurus!
    I am looking at using HR's personnel availability as input to PM capacity planning.
    I am planning to take this approach:
    1.Reference HR in shift definition in IMG
    2.Use the shift grouping in Work Ctr capacity header
    3.Define Interval of Available Capacity using the shift defined
    I would appreciate your valuable input on the above steps to ensure this will work, as well as the transactions/tables to check personnel availability between HR and PM.
    Also, I have read that I can use a "Performing Work Center" in operations with capacity category "personnel" for the same purpose, but I don't know how this works.
    Anyone has better approach? Thanks a lot in advance!!

    Hi,
    without knowing the answer:
    I would first check the integration points of Personnel Time Management with the other SAP applications. As far as I remember, the CATS (Cross-Application Time Sheet) function collects data for cross-company topics, like capacity leveling tasks.
    regards,
    Andreas R

  • Operations are not getting despatched in capacity planning table

    Hi Experts,
    I am using the capacity planning table (graphical) to level capacities and sequence process orders. The start dates of my orders are well in the future, and sufficient capacity is also available at the resources. But when I go to CM25, select one order, and click on dispatch, the system does not dispatch it. This is the case for almost all the overall profiles.
    The surprising factor is that this function was working fine earlier without any issues. I haven't made any configuration changes that would affect capacity planning.
    What can be the reason for this? Any thoughts, please?
    One more thing: is there any option to avoid the capacity planning step if I am using R/3, and manage it some other way?
    Appreciate your early reply.
    Thanks & Regards
    Prathib

    Most likely you didn't define the rowsource key properly. Please look under the <install>/errors folder for any file there.
    Also, please read the documentation. There is information there about how to troubleshoot a loading problem. Please always read the documentation; we would appreciate feedback on it.
    http://download.oracle.com/docs/cd/E17236_01/epm.1112/iop_user_guide/frameset.htm?launch.html

  • WAAS Capacity Planning TFO Connection Reporting?

    For WAAS capacity planning I'm trying to work out a way to gather WAAS appliance and module TFO connection statistics over time for each device. I'm looking to gather the number of TFO connections at regular intervals, to get an accurate picture of current WAAS capacity utilisation. The Central Manager reports "The TFO accelerator is overloaded (connection limit)" when the maximum TFO connection limit for a device is exceeded, but it does not tell you the number of connections that exceeded the limit on the box.
    I've tried using the WAAS Monitoring XML API:
    http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v421/monitoring/guide/MG_XML_API.html
    But it does not allow reporting of this statistic as far as I can work out.
    If you run "show stat conn" on a WAAS device it gives you the "Current Active Optimized Flows", but this is the current number of flows, not the maximum number of TFO connections reached since the counters were last cleared.
    Current Active Optimized Flows:                                3258
       Current Active Optimized TCP Plus Flows:              812
       Current Active Optimized TCP Only Flows:              2453
       Current Active Optimized TCP Preposition Flows:    0
    Current Active Auto-Discovery Flows:                         411
    Current Reserved Flows:                                           99
    Current Active Pass-Through Flows:                          540
    Historical Flows:                                                      532
    Does anyone know of any way to either manually gather this statistic from a WAAS appliance or module, or poll it from a monitoring system like SolarWinds?

    Hello
    As of WAAS 4.4.1 there is a WAN optimization MIB, CISCO-WAN-OPTIMIZATION-MIB, if you want to use third-party software like Cacti to graph over time.
    http://www.cisco.com/en/US/customer/docs/app_ntwk_services/waas/waas/v441/configuration/guide/SNMP.html#wp1141055
    This MIB provides information about the status and statistics associated with the Application Optimizers. The following objects from this MIB are supported:
    •cwoAoStatsIsConfigured
    •cwoAoStatsIsLicensed
    •cwoAoStatsOperationalState
    •cwoAoStatsStartTime
    •cwoAoStatsTotalHandledConn
    •cwoAoStatsTotalOptConn
    •cwoAoStatsTotalHandedOffConn
    •cwoAoStatsTotalDroppedConn
    •cwoAoStatsActiveOptConn
    •cwoAoStatsPendingConn
    •cwoAoStatsMaxActiveOptConn
    This MIB also provides information about TFO statistics. The following objects are supported:
    •cwoTFOStatsTotalHandledConn
    •cwoTFOStatsActiveConn
    •cwoTFOStatsMaxActiveConn
    •cwoTFOStatsActiveOptTCPPlusConn
    •cwoTFOStatsActiveOptTCPOnlyConn
    •cwoTFOStatsActiveOptTCPPrepConn
    •cwoTFOStatsActiveADConn
    •cwoTFOStatsReservedConn
    •cwoTFOStatsPendingConn
    •cwoTFOStatsActivePTConn
    •cwoTFOStatsTotalNormalClosedConn
    •cwoTFOStatsResetConn
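If SNMP polling isn't an option, another workaround for the original problem (the device only reports *current* flows) is to run `show stat conn` on a schedule and keep the running maximum yourself. A minimal Python sketch that parses the output format shown earlier; the sample text is abridged, and fetching the output over SSH is left out:

```python
import re

# Abridged sample of "show stat conn" output, as quoted in the question.
SAMPLE = """Current Active Optimized Flows:                                3258
   Current Active Optimized TCP Plus Flows:              812
Current Active Pass-Through Flows:                          540"""

def active_optimized_flows(show_stat_conn_output):
    """Extract the 'Current Active Optimized Flows' counter from CLI output."""
    m = re.search(r"^Current Active Optimized Flows:\s+(\d+)",
                  show_stat_conn_output, re.MULTILINE)
    return int(m.group(1)) if m else None

# Track the peak across polls; in practice, collect output per device
# on a fixed interval (e.g. every 5 minutes) and persist the history.
peak = 0
for output in [SAMPLE]:
    flows = active_optimized_flows(output)
    if flows is not None:
        peak = max(peak, flows)
```

Comparing the recorded peak against the platform's TFO connection limit then gives the utilisation picture the Central Manager alarm alone doesn't provide.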

  • Seeking a capacity planning consultant

    Xpede is a financial services application service provider based in
    Oakland, CA, seeking a capacity planning consultant for a short-term
    (under one week) initial engagement, with possible ongoing consulting work.
    Please send resume, qualifications, hourly rate or a fixed price bid to
    [email protected]
    Capacity Requirements Plan (2-5 days)
    The Plan concentrates on the responsiveness, scalability, and throughput of
    our middleware infrastructure through investigation, analysis, and a formal
    report. This service is conducted as a pre-deployment evaluation. The
    consultant will work with our staff to identify or establish our capacity
    requirements in quantitative terms, which express the requirements for an
    application and architecture, and will encompass both current and future
    timeframes. Performance metrics may take the form of expected transaction
    rate (workload), response time, growth factor, user capacity, database size,
    and others. The time invested in this phase can be reduced when such
    information already exists and can be provided to BEA.
    - Performance Assessment Management Reports
    - Benchmarking
    - Performance bottleneck identification and resolution
    - Performance tools (such as workload generators) to be developed
    - How to interpret metrics
    - How to gather metrics
    - What metrics to gather
    - Reference architecture on Solaris platform (build-out)
    Qualifications
    3-4 years of experience on the WebLogic platform, with over 20 projects. In-depth
    understanding of BEA technology, able to plan and design the architecture of
    middleware projects. Able to design architectures using associated
    technologies (Java, Object-Oriented Design and Analysis, Solaris, Veritas,
    Netscape/iPlanet, Oracle). Experience with large-scale, full life-cycle
    projects. Able to share technical and business knowledge with others.

    Thanks for the reply, Alex.
    In the application tier, I would like to have 2 boxes:
    1) Service applications only, like MMS and UPSA, in one box.
    2) Can I include the Search service application in the second box, which is also a search query server? Or do I need to create/configure the Search service application in the first box itself?
    I have created a farm topology as shown above; I would like to know whether it is correct or not.

  • Seeking a Weblogic capacity planning consultant

    Xpede is a financial services application service provider based in Oakland, CA, seeking a capacity planning consultant for a short-term (under one week) initial engagement, with possible ongoing consulting work.
    Please send resume, qualifications, hourly rate or a fixed-price bid to
    [email protected]
    Capacity Requirements Plan (2-5 days)
    The Plan concentrates on the responsiveness, scalability, and throughput of our middleware infrastructure through investigation, analysis, and a formal report. This service is conducted as a pre-deployment evaluation. The consultant will work with our staff to identify or establish our capacity requirements in quantitative terms, which express the requirements for an application and architecture, and will encompass both current and future timeframes. Performance metrics may take the form of expected transaction rate (workload), response time, growth factor, user capacity, database size, and others. The time invested in this phase can be reduced when such information already exists and can be provided to BEA.
    - Performance Assessment Management Reports
    - Benchmarking
    - Performance bottleneck identification and resolution
    - Performance tools (such as workload generators) to be developed
    - How to interpret metrics
    - How to gather metrics
    - What metrics to gather
    - Reference architecture on Solaris platform (build-out)
    Qualifications
    3-4 years of experience on the WebLogic platform, with over 20 projects. In-depth understanding of BEA technology, able to plan and design the architecture of middleware projects. Able to design architectures using associated technologies (Java, Object-Oriented Design and Analysis, Solaris, Veritas, Netscape/iPlanet, Oracle). Experience with large-scale, full life-cycle projects. Able to share technical and business knowledge with others.

    Hi,
    What the heap size should be cannot be determined from the RAM size (32 GB or 64 GB). It depends on the application: its functionality, how many long-living objects it creates, how many JNI calls it makes, how many jars/classes it loads, how many users request your application at peak load, what kind of caching you are using, etc.
    The points above are what can be used to determine the required heap size and the JVM tuning options.
    There is no standard calculation or formula available. The only way to determine the required tuning parameters and JVM settings is load testing and performance testing.
    Thanks
    Jay SenSharma
    http://middlewaremagic.com/weblogic/?page_id=2261  (Middleware Magic Is Here)

  • Capacity planning and hardware sizing

    Hi,
    I would like to know in detail about capacity planning and hardware sizing when deploying Oracle database products. I need to know about properly planning the infrastructure and configuring and deploying the products. Moreover, when should we use RAC (and with how many nodes), Exadata, DR, etc.? How do we size hardware (RAM, disk) for this kind of implementation?
    I know it depends on budget and environment. Still, if anyone could kindly give some links/docs, I would be able to have a decent start in this domain.
    Regards,
    Saikat

    See, they all depend on your company policy and the database size you are going to have.
    1. Size: I have worked on databases sized from 80 GB to 2000 GB, all depending on the client or the industry you are catering to.
    2. Hardware and RAM: the server team should know this better. You can have a simple small server if the DB and application are small and usage would be low. If it is a full-fledged system with high load, then you need to get in touch with your server team and raise an Oracle SR for exact specifications for that database, and also plan for some future growth.
    3. RAC: depends on whether the project requires such a setup.
    4. DR: is for disaster management. Many companies implement this by adopting Data Guard (physical standby).
    5. For a good start, start reading the books and refer to the Oracle documentation. The more you read, the more knowledge you will gain.
    Start with the basics; make your basics strong, and then move ahead with complex setups.
    Regards
    kk

  • Capacity Plan with Oracle 10g

    Good Day to all,
    I have little experience working in environments with Oracle databases, and I have been asked to carry out a capacity plan with Oracle Database 10g for a data warehouse project being led by the company I work for. I have been asked to make a plan specifying, among other things: the size and number of tablespaces and datafiles, and growth projections taking into account the initial load and the weekly (incremental) load. The truth is this is a bit complicated given my inexperience with this kind of sizing requirement, so I ask for your valuable cooperation. Are there mathematical formulas that allow me to make those projections taking into account the data types and their lengths? Is there a standard for creating the tablespaces and datafiles?
    In advance thank you for your contributions.

    The first thing you need to get management to do is give you four things:
    1. The cost to the organization of downtime, rated in dollars/hour.
    2. The service level agreement for the system's customers.
    3. The amount of data to be loaded into the system and the retention time.
    4. The version of RAID or the ASM redundancy that is planned.
    With that you can start at the grossest level, which is planning for database + archived redo logs + online backup files.
    I generally figure the database itself at about 25% of required storage, because I like to have at least two full backups, a bunch of incremental backups, plus all of the archived redo logs that support them, and all on separate shelves.
    The number of tablespaces and data files is really just a question of maintenance. Ease of transport. Ease of movement. Ease of backing up.
    If you want to get down to the actual sizes of tables and indexes, the best place to go is the DBMS_SPACE built-in package. Play with these two procedures: CREATE_TABLE_COST and CREATE_INDEX_COST, which do the job far more efficiently and accurately than any formulas you will receive. You can find demos of this capability here: http://www.morganslibrary.org/reference/dbms_space.html.
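For the week-over-week growth projection part of the question, a simple linear model is the usual starting point before refining the per-table numbers with DBMS_SPACE. A sketch, where the 20% overhead factor is a made-up allowance for indexes, PCTFREE and fragmentation that you would calibrate against your real tables:

```python
def projected_size_gb(initial_load_gb, weekly_growth_gb, weeks,
                      overhead_factor=1.2):
    """Project a tablespace footprint from an initial load plus weekly
    increments. overhead_factor is a hypothetical allowance for indexes,
    PCTFREE and fragmentation; calibrate it from DBMS_SPACE estimates."""
    return (initial_load_gb + weekly_growth_gb * weeks) * overhead_factor

# e.g. 500 GB initial load, 10 GB/week incremental, projected one year out
one_year = projected_size_gb(initial_load_gb=500, weekly_growth_gb=10,
                             weeks=52)
```

A linear projection like this also gives you a defensible datafile AUTOEXTEND ceiling and a date by which storage must be expanded, which is usually what management actually wants from the plan.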

  • Capacity Planning Tool needed

    Hi,
    As a novice user, I have few questions. I would really appreciate any direction or help:
    Q#1: Is there a capacity planning tool in Oracle Database that can tell us when do we need more memory or CPU?
    Q#2: Is there a capacity planning tool in Oracle Database that can be used for sizing a new system?
    Q#3: How do we find out the status/usage of CPU, Disk Usage etc?
    This is the version in use: Oracle Database 11g Release 11.2.0.1.0 – 64bit Production
    With the Real Application Clusters and Automatic Storage Management options.

    >Q#1: Is there a capacity planning tool in Oracle Database that can tell us when do we need more memory or CPU?
    SQL SELECT
    >Q#2: Is there a capacity planning tool in Oracle Database that can be used for sizing a new system?
    SQL SELECT
    >Q#3: How do we find out the status/usage of CPU, Disk Usage etc?
    SQL SELECT

  • Capacity planning of a farm.

    One of our customers wants to try SharePoint as a content management and collaboration solution. The customer has 225,000 users, with 10,000 active users daily, spread across the globe. I am working on the farm design. Since the customer is not sure whether they will adopt SharePoint, I am thinking that I should propose the minimum infrastructure that can handle the load. Initially the farm wouldn't have more than 1 TB of documents. Enterprise Search will also be enabled.
    Should I propose a single farm that will be accessed across the globe? I suspect this will not work, since WAN latency between three continents might be an issue.
    Regards Restless Spirit

    Here are the TechNet articles for capacity planning for SharePoint and SQL:
    http://technet.microsoft.com/en-us/library/cc298801.aspx
    For search, check this wiki:
    http://social.technet.microsoft.com/wiki/contents/articles/16002.sharepoint-2013-capacity-planning-sizing-and-high-availability-for-search-in-spc172.aspx
    Again, it depends on how many documents are in that 1 TB.
    Capacity management and sizing for SharePoint Server 2013 - a one-stop blog which links all the helpful TechNet articles:
    http://sundarnarasiman.net/?p=112
    Thanks -WS MCITP (SharePoint 2010, 2013) Blog: http://wscheema.com/blog

  • Integration of Collections Management

    Hi all,
    Is integration of CRM with Collections Management possible? If yes, please tell me the steps.
    Lakshmi

    Hi Odaiah,
    this link is for downloading CR700 (about services):
    http://mysapbi.blogspot.com/2007/01/cr700.html
    The content is:
    Installed base management:
    - Installations
    - Individual objects
    Service contract processing:
    - Service agreements
    - Service contracts
    - Service plans
    - Usage-based billing
    Service order processing:
    - Resource planning
    - Service confirmation
    - Service billing
    - Product service letter
    - Warranty claims
    - Complaints
    I will let you know if I find new materials.
    best regards
    Indah puspita
