Assignment of WAVEs to Buckets WCCPv2

All,
Can anyone explain how WAVEs (caches in WCCP lexicon) are assigned to buckets please?
The context to my question is probably best illustrated by an example:
* You have two CEs (CE-A and CE-B), each with a provider circuit
* Asymmetric routing across the WAN providers
* Four WAVE units registered using WCCPv2
* An assignment mask of 0x0f00 on the source IP (four mask bits set, giving 16 buckets)
* Therefore, there are four caches and 16 buckets (4 buckets per cache)
How does WCCPv2 ensure buckets 1-4 get assigned to WAVE-A on both CEs?
According to the post below, the assignment is based on WCCP registration time, but I am not confident in that, else rebooting WAVEs or CEs would cause loss of optimisation (unless they were all rebooted together in sequence):
https://supportforums.cisco.com/thread/2225305
This seems like a fundamental part of the WCCP operation, but I can't find any documentation for it.
Many thanks for anyone who can assist.
Regards
James.

Hi James,
Please take a look here :
http://www.cisco.com/en/US/docs/ios/12_2/configfun/configuration/guide/fcf018_ps1835_TSD_Products_Configuration_Guide_Chapter.html#wp1000909
especially bullet point 3, where one of the WAVEs is elected/selected as the lead, and that WAVE controls the bucket assignments.
Know it's an (old) IOS 12.2 document, but that hasn't changed!
What you want to achieve is that you always redirect to the same WAVE from both routers, based on some common criteria.
So if you reboot one of the routers (CEs), the other one has to handle all the traffic, but should still redirect to the same WAVE.
If you reboot one of the WAVEs, this WAVE will deregister and the lead WAVE will inform everyone about the new bucket assignments. If it's the lead WAVE that is rebooted, another lead WAVE is selected.
Of course, TCP sessions being redirected to the rebooting WAVE will be disrupted; a new TCP session will be set up from the client and eventually be redirected to one of the remaining WAVEs.
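To make the mechanics concrete, here is a rough Python sketch of mask assignment. This is not Cisco's code: the round-robin bucket table and the WAVE names are assumptions for illustration. The grounded point is that the designated (lead) WAVE computes a single bucket-to-WAVE table and hands it to every router via WCCPv2, so CE-A and CE-B redirect identically regardless of registration or boot order:

import socket, struct

MASK = 0x0F00                      # 4 mask bits set -> 2**4 = 16 buckets
WAVES = ["WAVE-A", "WAVE-B", "WAVE-C", "WAVE-D"]

def bucket_index(src_ip):
    """Extract the masked bits of the source IP and compress them to 0..15."""
    ip, = struct.unpack("!I", socket.inet_aton(src_ip))
    masked, idx, bit = ip & MASK, 0, 0
    for pos in range(32):          # gather the set mask bits, low to high
        if MASK >> pos & 1:
            idx |= (masked >> pos & 1) << bit
            bit += 1
    return idx

# The table itself (here: 4 contiguous buckets per WAVE, matching James's
# example) is computed by the lead WAVE, not by each router independently.
TABLE = {b: WAVES[b // 4] for b in range(16)}

src = "10.1.9.1"
print(bucket_index(src), TABLE[bucket_index(src)])   # 9 WAVE-C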
Hope this answers your question.
Best regards
Finn Poulsen

Similar Messages

  • GATP: ATP bucket assignment for supply elements - how to use BucketLogic ?

    Hello experts,
    I have a little issue with applying correct bucket logic - hoping somebody has a hint for me on this.
    My case:
    I have a PurchaseOrder coming in on Date_X, Time 00:00 in R3 --> same day and time in Product view in APO
    I have a ProductionOrder coming in on Date_X, Time 24:00 in R3 --> same day and time = 23:59:59 in Product view in APO
    .. this means both elements should create ATP supply qty to confirm a SalesOrder for Date_X
    My general ATP settings in APO are very simple:
    "ShiftReceipts" = 00:00, "IssueLimit" = 00:00 (there is no offset defined etc., demand & supply bucket are in synch and should behave like R3 ATP check)
    Now if I apply the "progressive" Bucket Logic in the ATP group
    --> the PurchaseOrder becomes available a day before Date_X (problem, too early)
    --> the ProductionOrder becomes available on Date_X (correct)
    Now if I apply the "conservative" Bucket Logic in the ATP group
    --> the PurchaseOrder becomes available on Date_X (correct)
    --> the ProductionOrder becomes available on Date_X+1 (problem, too late)
    I believe it's somehow linked to how SAP handles the assignment of a supply element to an ATP bucket when it falls exactly onto a "bucket cut line" ... any help is very appreciated!
    Regards
    Thomas
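    A toy Python model of the edge behaviour described above. This is not SAP's gATP code; the daily bucket grid and the floor/ceil edge rule are assumptions chosen so that the output reproduces exactly the four results Thomas reports:

    from datetime import datetime, timedelta

    BUCKET = timedelta(days=1)
    EPOCH = datetime(2009, 1, 1)    # arbitrary origin of the bucket grid

    def availability(ts, logic):
        """Date from which a receipt counts as available, daily buckets."""
        offset = (ts - EPOCH) % BUCKET
        if logic == "progressive":
            # floor to a bucket start; an exact-edge timestamp falls back
            # into the previous bucket (hence "a day too early" at 00:00)
            return (ts - (offset if offset else BUCKET)).date()
        # conservative: ceil to the next bucket start; exact edges stay put
        return ts.date() if not offset else (ts + (BUCKET - offset)).date()

    date_x = datetime(2009, 3, 2)                    # Date_X at 00:00
    purchase = date_x                                # PO arrives at 00:00
    production = date_x + timedelta(hours=23, minutes=59, seconds=59)

    for logic in ("progressive", "conservative"):
        print(logic, availability(purchase, logic), availability(production, logic))
    # progressive 2009-03-01 2009-03-02   (PO a day early, PrdOrd correct)
    # conservative 2009-03-02 2009-03-03  (PO correct, PrdOrd a day late)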

    Hello Michael,
    tks for the hint. I'll give it a try but ...
    I want to avoid the "exact" logic since SAP does not recommend it for performance reasons (we are running massive BOPs multiple times a day for several plants). Basically we want to gain the BOP performance advantages over R3 by using the ATP time-series aggregation with gATP - therefore I'm playing around with "conservative" and "progressive" in the ATP bucket logic.
    I simply need to have all supply elements of a day in R3 assigned to the same day as available qty in APO. I wonder why such a simple case is not manageable in APO - meanwhile my guess is that we have a bug here (using SCM 5.0 SP13) - but I haven't found anything related in OSS.
    Regards
    Thomas

  • [EWM-PPF] Automatic Wave Assignment

    Hi,
    SPPFP > /SCDL/DELIVERY > Condition Configuration (Transportable Conditions) > Outbound Delivery Order with Automatic Wave Assign >
    (1) Assign Warehouse Request to Wave /SCWM/WHR_WAVE_ASSIGN_NEW     Assign to Wave At Delivery Creation
    (2) Assign Warehouse Request to Wave /SCWM/WHR_WAVE_ASSIGN_NEW     Assign to Wave After Change of Delivery
    The default configurations seem to be ok, for both:
    Schedule condition --> Schedule Automatically
    Start condition --> No condition (Seen as Fulfilled)
    But even so, I have to start the PPF action manually in transaction SPPFP before the wave assignment is processed.
    Which further setting or flag is missing, so that the manual step (in tx SPPFP) is no longer needed?
    Thanks in advance!
    Kind Regards!
    Laura

    Hi,
    Did you check your queue in SMQ2? It could be that it is still processing and hung up for some reason. That would be the first place to look.
    Thanks,
    Faical

  • EWM PPF Assign Warehouse Request to Wave

    We found that the PRDO is assigned to wave number 100000585 by the
    PPF action. But when checking from the monitor, there is no wave 100000585.
    We tried to create a new wave and include this ODO in the wave. It is not possible.

    Hi,
    Did you check your queue in SMQ2? It could be that it is still processing and hung up for some reason. That would be the first place to look.
    Thanks,
    Faical

  • Creating buckets by comparing Dimension member against Measure using MDX

    I have a Product dimension which has hierarchies - product_id, initial_price, product_name, category and Measures - units_sold, cost_per_unit, revenue_per_unit. For each product_id I need to compare initial_price against revenue_per_unit and then assign it
    to a bucket based on the comparison. 
    Consider the following example:
    Input:
    product_id   initial_price   revenue_per_unit
    1            10              12
    2            20              18
    3            30              35
    And if Products 1 and 2 are in Category Book and 3 is in Clothes, then the output should look like
    Output:
    Category   Revenue type   Amount
    Book       Profit         2
    Book       Loss           2
    Clothes    Profit         5
    Clothes    Loss           0
    How can I achieve this using MDX?

    Hi Vijay,
    In your case, I couldn't find the "revenue type" attribute in the "Product" dimension. If you need to get the expected result, I suggest you calculate the amount at the underlying data source, and then retrieve the result set via MDX.
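    As a minimal sketch in plain Python (not MDX) of what pre-computing the buckets at the source could look like; the column names follow the question, the aggregation itself is an assumption:

    products = [
        # (product_id, category, initial_price, revenue_per_unit)
        (1, "Book",    10, 12),
        (2, "Book",    20, 18),
        (3, "Clothes", 30, 35),
    ]

    buckets = {}
    for _pid, category, price, revenue in products:
        kind = "Profit" if revenue >= price else "Loss"
        buckets[(category, kind)] = buckets.get((category, kind), 0) + abs(revenue - price)

    for category in ("Book", "Clothes"):
        for kind in ("Profit", "Loss"):
            print(category, kind, buckets.get((category, kind), 0))
    # Book Profit 2 / Book Loss 2 / Clothes Profit 5 / Clothes Loss 0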
    Regards,
    Elvis Long
    TechNet Community Support

  • Time Buckets in DP

    Dear All,
    I have the following scenario in APO DP:
    I have two plants: Plant A and Plant B.
    For Plant A (Weekly off is wednesday):
    So Weekly buckets for Oct 08 should look like this
    W1 --> 2nd to 7th Oct
    W2 --> 9th to 14th Oct
    W3 --> 16th to 21st Oct
    W4 --> 23rd to 29th OCt
    And for Plant B (Weekly off is sunday)
    So weekly buckets should look like this
    W1 --> 1st to 4th Oct
    W2 --> 6th to 11th Oct...
    and it continues like this.
    How do I address this in ONE PLANNING AREA?
    Or else, should I create two fiscal year variants and attach them to TWO DIFFERENT planning areas?

    Hi.
    If I understand your requirement correctly, you are going to need two storage bucket profiles. This being the case, you will need two planning areas.
    Not only will you require two FYVs but also two Time streams to assign to the storage bucket profiles. In the timestreams you can define the order of your workdays in a week.
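    As a generic illustration (plain Python, nothing APO-specific; the date ranges come from the question), here is why the two off-days produce incompatible weekly grids - the buckets are simply the runs of days between each plant's off-day:

    from datetime import date, timedelta

    def weekly_buckets(start, end, off_weekday):
        """Split [start, end] into buckets separated by the weekly off-day."""
        buckets, current = [], []
        d = start
        while d <= end:
            if d.weekday() == off_weekday:      # the off-day closes a bucket
                if current:
                    buckets.append((current[0], current[-1]))
                current = []
            else:
                current.append(d)
            d += timedelta(days=1)
        if current:
            buckets.append((current[0], current[-1]))
        return buckets

    start, end = date(2008, 10, 1), date(2008, 10, 29)
    print(weekly_buckets(start, end, 2))   # Plant A, Wednesday off: W1 = Oct 2-7, ...
    print(weekly_buckets(start, end, 6))   # Plant B, Sunday off:    W1 = Oct 1-4, ...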
    Hope this helps.
    M

  • Future dated orders in waves

    HI All
    I have orders that come into the warehouse as future dated, meaning the order is placed today and delivery is in the next 2 days, and the delivery needs to be picked a day before the delivery date. When the system creates the waves for these types of orders, it assigns the last wave template option. For instance, if it picked up wave template 50, it only allocates/assigns the wave to the last wave option in this template 50, which is the last wave of the day. I want the system to assign the next available wave option instead of just assigning it to the last one. I hope I am making sense and that there is a hero out there.
    In this pic the order is created today for delivery tomorrow, and as seen below the template is 102 and option 10, which is the last option. This delivery was created at 13:26, so it should have picked up wave template option 8 at least.
    In the second pic below, you will see that it has assigned it to the last option or wave in this template. I have tried maintaining the time intervals to try and accommodate the FDOs, but with no luck; it still assigns it to the last one. I need the system to check all templates available and assign the order to the next wave run, not only the last one.
    Regards
    Leonard :-(

    Hi Leonard,
    This is the SAP Sourcing forum. You will get a quicker response if you post this in the correct forum (SRM or MM).
    Regards
    Prasad

  • Error : Timeout while delivering a packet

    Hi,
    When we handle objects in the cache, we get the following error (after several hundreds operations) :
    2005-12-15 07:55:37.551 Tangosol Coherence 3.1/321 <Warning> (thread=PacketPublisher, member=1): Timeout while delivering a packet; requesting the departure confirmation for Member(Id=2, Timestamp=Thu Dec 15 07:53:47 CET 2005, Address=192.168.1.61, Port=8088, MachineId=26941)
    by MemberSet(Size=1, BitSetCount=1
    Member(Id=3, Timestamp=Thu Dec 15 07:53:47 CET 2005, Address=192.168.1.63, Port=8088, MachineId=26943)
    2005-12-15 07:55:37.554 Tangosol Coherence 3.1/321 <Info> (thread=Cluster, member=1): Member departure confirmed; removing Member(Id=2, Timestamp=Thu Dec 15 07:53:47 CET 2005, Address=192.168.1.61, Port=8088, MachineId=26941)
    2005-12-15 07:55:37.592 Tangosol Coherence 3.1/321 <Warning> (thread=DistributedCache, member=1): Assigned 85 orphaned primary buckets
    Our configuration is a partitioned (distributed) cache between 3 nodes, each on a specific server :
    - 192.168.1.61;
    - 192.168.1.62; (we get the error message on this node)
    - 192.168.1.63;
    Here are our two XML files config.
    Note that we use specific WKA, because multicast is blocked by the switch between the 3 servers.
    Coherence is launched by a web app, in JOnAS 4.5.3 (JVM 1.5.0). The aim is not to cache sessions, but some data like in a classic application.
    I attach the trace, from node 2, where you can see that some operations take more than 50 sec to be performed! It is just before the node problem.
    We use transactions to avoid incoherency between concurrent accesses.
    Thanks in advance,
    Vincent
    Attachments: coherence-cache-config.xml (rename 206.bin to coherence-cache-config.xml after the download is complete), tangosol-coherence.xml (rename 207.bin accordingly), trace_s2_end.txt (rename 208.bin accordingly)

    Hi Vincent,
    When this happens, please email all 3 nodes' logs (with -Dtangosol.coherence.log.level=5 log level) and thread dumps (preferably several, taken at some time intervals) to support at tangosol.com.
    Regards,
    Dimitri

  • Authorization on initiative and item creation

    Hello!
    Dear Experts,
    Could you please comment on back-end RPM role customizing and front-end object authorization to do the following:
    1. the portfolio structure of buckets contains two sections on the first level: the first of them is to assign initiatives and the second is to assign portfolio items; items are assigned to initiatives.
    2. the user should:
    - read all of the initiatives in the first bucket
    - create new initiatives, and change just the initiatives created by him or that he is allowed to change via front-end object authorization
    - for items of initiatives with read authorization: at the Item Dashboard of the second bucket, see (read) just the items of the initiatives of the first bucket
    - first create an initiative and then the items of the initiative; initiatives should be assigned to the first bucket and items to the second bucket.
    3. the back-end role should allow reading, creating, and writing of items and initiatives for 2 of the 5 Category values. And when the user creates or changes initiative/item data, he should see just those 2 of the 5 Category values.
    We implemented notes 1382703 and 1235897 but it didn't solve the problem.
    We are implementing RPM 4.5.
    Best regards,
    Valerie

    Hi Rohan,
    Have you tried:
    Administration -> System Initialisation -> Authorizations -> General Authorizations:
    Authorisation Window: General -> Close Document = No Authorisation.
    This will block the user from closing documents in the open Item list but also from other menu points.
    Hope it helps.
    Jesper

  • Unable to Initialize the Plannig Area

    Hello,
    While initializing the planning area for DP, we are getting the below error.
    Job started
    Step 001 started (program /SAPAPO/TS_PAREA_INITIALIZE, variant &0000000000017
    ABAP/4 processor: DBIF_DSQL2_SQL_ERROR
    Job cancelled
    When checking the consistency of the planning area through /SAPAPO/TSCONS - Consistency Check for Time Series, it gives the below messages.
    I checked that the storage buckets profile already exists.
    Error 1
    Storage buckets profile check
    Message no. /SAPAPO/TSM422
    Diagnosis
    There are errors with the storage buckets profile
    Procedure
    This error usually only occurs if changes are made to the fiscal year variant being used. If this is the case, to remove this error, undo the changes you made to the fiscal year variant. Alternatively, you can deactivate and then reactivate the planning area.
    The error cannot be removed with this report.
    Error 2
    No valid storage buckets profile exists
    Message no. /SAPAPO/TSM235
    Diagnosis
    No valid storage buckets profile exists. The storage buckets profile must also exist in LiveCache.
    Procedure
    Before creating a planning area you have to create a storage buckets profile.
    Check that the storage buckets profile that has been assigned still exists, it may have been deleted. If it exists, it probably no longer exists in LiveCache. To correct this you must deinitialize the version in which this error occurs and then reinitialize it.
    I also checked the consistency between the DB and liveCache for the DP time series through /SAPAPO/OM17 and found no inconsistencies.

    Hi,
    Sometimes the planning period might run up to 2012 while the fiscal year variant is maintained only up to 2011. Then it is a problem to assign the period from the fiscal year variant which is assigned in the storage buckets profile (/SAPAPO/TR32).
    Please check the fiscal year variant assignment in the storage buckets profile, and check the fiscal year periods maintained in transaction OB29.
    Regards,
    Saravanan V

  • .pk File not considered for Web path

    Hi,
    Please, help for this:
    I have one Windows file server "server", with a shared path "folder". In this directory there are two files: "file.wav" and "file.pk".
    Also I have a web page with a simple link: <a href=file:///\\server\folder\file.wav> (any combination of file URI [back]slashes gives the same results), on a Windows 7 client workstation, with IE 10.
    The web server is Apache (but IIS gives the same results), on a separate Windows server.
    The three computers are on the same network.
    Ok ... I want to open (and modify and save later) on the client workstation the "file.wav" from the shared folder \\server\folder with Adobe Audition 3.0 (it is the Windows-assigned application for .wav files), through the web page hosted by the web server.
    When I access this link, Adobe Audition 3.0 opens fine, but it behaves like a first-time processing, as if the .pk did not exist, although I have preprocessed the .wav. So the process is very slow, and I need speed.
    However, if I try to save the modified .wav from the just-opened Adobe Audition, the dialog box indicates \\server\folder, so I can directly rewrite the .wav file from Adobe Audition.
    But if I call Adobe Audition 3.0 from the command line, like "audition.exe \\server\folder\file.wav", everything is fine; Adobe Audition reads the .pk file and the opening process is very fast.
    Can anybody tell me what the mechanism is? Can I open "file.wav" from shared resources, via a web page, with Adobe Audition 3.0, taking the .pk file into account?
    Thank You,
    TB

    I'm on Linux, with many compression protocols available, and none will recognize the format.
    As well, because of a partial and very strange translation into my language (French), it is awkward to use without correction, which I can do very quickly once I can open the jar files.
    Until then I stick with the pre-omni version.

  • Calculating HASH values with WCCP

    Ok, I'm just not getting the HASH calculations.  Can somebody please explain how the HASH values translate into subnets?
    Thanks,
    Patrick

    Patrick,
    I'm not 100% sure of the algorithm used to determine which subnet is assigned to which WCCP bucket. However, I do know it involves an XOR of various L3 and L4 header fields in the packet.
    To view how the calculation is performed, you can run the hidden IOS command
    show ip wccp hash <dst-ip> <src-ip> <dst-port> <src-port>
    Router# show ip wccp 61 hash 0.0.0.0 10.88.81.10 0 0
    WCCP hash information for:
        Primary Hash:   Src IP: 10.88.81.10
      Bucket:   9
        WCCP Client: 10.88.81.12
    Router#
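    As a rough illustration only - the real hash is Cisco-internal, and the only grounded parts here are that L3/L4 header fields are XORed together and that WCCP reduces the result to one of 256 hash buckets; the toy mixing function itself is an assumption:

    import socket, struct

    def toy_wccp_hash(src_ip, dst_ip, src_port, dst_port):
        """Toy model: XOR header fields, fold to an 8-bit bucket (0-255)."""
        src, = struct.unpack("!I", socket.inet_aton(src_ip))
        dst, = struct.unpack("!I", socket.inet_aton(dst_ip))
        mixed = src ^ dst ^ (src_port << 16) ^ dst_port
        mixed ^= mixed >> 16
        mixed ^= mixed >> 8
        return mixed & 0xFF

    # The same 5-tuple always lands in the same bucket, and each bucket is
    # owned by exactly one WCCP client (e.g. bucket 9 -> 10.88.81.12 above).
    print(toy_wccp_hash("10.88.81.10", "0.0.0.0", 0, 0))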
    Hope this helps,
    Mike Korenbaum
    Cisco WAAS PDI Help Desk
    http://www.cisco.com/go/pdihelpdesk

  • Changing Fiscal Calendars in APO 4.0

    All,
    We need to change our Fiscal Period in APO.  When we make this change, do I have to copy the data to a backup cube, change the fiscal calendar, and then reload the data back into DP?
    I am on SCM 4.0.  We will be going to SCM 5.0 in February.  Will it be easier to make this change once we are on 5.0?  Or will the same process have to happen in 5.0 that is necessary in 4.0?

    Hi Monty,
    Changing the Fiscal Period (Fiscal Year Variant) in APO is only possible when the Fiscal Period is not assigned to any Time Bucket or Storage Bucket Profile. This means you need to extract the data in the Planning Area to a cube, deactivate the time series, and unassign the Fiscal Year Variant before you can change it.
    I am not sure if SCM 5.0 has made this process simpler. You need to check the Release Notes.
    Also please note there is a separate forum for SAP Advanced Planning & Optimization (SAP APO) now.
    Thanks,
    Somnath

  • SCHEMA: Flat vs. hierarchical

    All,
    I'd like your feedback on another topic relating to the schema, this
    one at a higher level. Basically, the question is whether you think we
    should adopt a flat schema, or a hierarchical schema. To explain:
    - With a flat schema, we assign some number of "buckets" to hold event
    data, give a name to each bucket, and then for any event that comes in
    we extract the relevant data and drop it into that bucket
    - With a hierarchical schema, we would have a set of "objects" with
    attributes that we could attach to events, and then for any given event
    that we receive we extract the relevant data and set the various
    attributes of the various objects.
    You may have already noted that to some extent we already have a
    hierarchical schema, in that we define four top-level containers - the
    Initiator, Action, Target, and Observer - and then we have some
    pseudo-objects under those like User, Host, etc. Right now we sort of
    "fake" an object-based schema by using CamelCase, e.g. InitUserName,
    InitUserDomain, etc. With a true hierarchical schema, we'd actually have
    an object called Initiator, which might have a child object called
    User, which might have attributes Name, Domain, ID.
    Please again ignore details about how this would actually be
    implemented; you could always convert from one to the other for internal
    storage if need be. Instead focus on whether it would be easier to
    access the data you want to see by using a hierarchical model or a flat
    model.
    These are the pros and cons as I see it:
    - The flat schema is a little easier to display in a table and easier
    to read in a single-line format, but on the other hand the object schema
    yields much more interesting, interactive displays (the SLM event
    display, again, sort of "fakes" an object schema by putting the Init*
    fields at left and the Target* fields at right).
    - If we go to an object schema, we could actually reference almost any
    type of object we wanted by re-using something like DMTF's Common
    Information Model, which describes virtually any manageable IT resource.
    Right now if we want to include, say, a MAC address and we hadn't
    thought about that before, we have to completely revise our flat schema
    and define a new field. The potential downside is that not every event
    would then have even a standard set of fields if the values for those
    fields were null.
    - The other downside of course is how we migrate from one schema to
    another if we fundamentally change how this works. I think this is
    do-able, however, if we come up with some migration plans and perhaps
    support both models in some way for a while (the flat schema, for
    example, is just a representation of some subset of object attributes).
    So let's give some examples. Let's say that we have a user opening a
    file - simple enough. In a flat schema, this might look like:
    { "InitUserName": "user2", "InitUserID": "104", "InitHostName": "dc01",
      "TargetDataName": "syslog-ng.conf", "TargetDataContainer": "/etc/syslog",
      "TargetHostName": "dc01" }
    Note that it's a little ambiguous that this particular username
    represents an account on host 'dc01'.
    But in an object schema, this might look like:
    { "Initiator": { "Account": { "Name": "user2", "UserID": "104", "Host": "dc01" } },
      "Target": { "File": { "Name": "syslog-ng.conf", "Container": "/etc/syslog",
                            "Host": "dc01" } } }
    OK, so what do you all think?
    DCorlette
    DCorlette's Profile: http://forums.novell.com/member.php?userid=4437
    View this thread: http://forums.novell.com/showthread.php?t=419793
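    As a quick sanity check of the migration point in the post above - that the flat schema is just a projection of the object schema - here is a small Python sketch that flattens the nested form into CamelCase keys. The name mapping is an assumed convention for illustration and doesn't reproduce every historical field name exactly:

    PREFIX = {"Initiator": "Init", "Account": "User", "File": "Data", "UserID": "ID"}

    def flatten(obj, prefix=""):
        flat = {}
        for key, value in obj.items():
            name = prefix + PREFIX.get(key, key)
            if isinstance(value, dict):
                flat.update(flatten(value, name))   # recurse into child objects
            else:
                flat[name] = value
        return flat

    event = {"Initiator": {"Account": {"Name": "user2", "UserID": "104", "Host": "dc01"}},
             "Target": {"File": {"Name": "syslog-ng.conf", "Container": "/etc/syslog",
                                 "Host": "dc01"}}}
    print(flatten(event))
    # {'InitUserName': 'user2', 'InitUserID': '104', 'InitUserHost': 'dc01',
    #  'TargetDataName': 'syslog-ng.conf', 'TargetDataContainer': '/etc/syslog',
    #  'TargetDataHost': 'dc01'}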

    Hi Rokie,
    What's the difference between a Flat and a Qualified Flat table?
    Generally, to avoid redundancy of data, both Lookup [Flat] and Qualified Flat tables are used.
    A Lookup [Flat] table contains a set of legal values which can be shared by the main table. But in some cases the data in some fields changes frequently based on other fields in that table.
    Suppose the price of a particular product changes based on region. What we need to understand here is that there are multiple values for each main-table product. Obviously this increases data redundancy.
    A Qualified table contains qualifier and non-qualifier fields.
    Data which changes frequently based on other fields is considered qualifier fields.
    Non-qualifier fields contain the data which is responsible for the changes in the qualifier fields.
    Here are the differences between Lookup [Flat] and Qualified lookup:
    1) Lookup [Flat] contains a predefined set of records (e.g. the list of manufacturers for a particular product), whereas Qualified lookup contains conditional sets of records (e.g. pricing based on manufacturer and region).
    2) Lookup within lookup is possible, whereas Qualified lookup is possible only on the main table.
    3) A lookup table record needs to be maintained before the main-table record, whereas a qualified record is maintained after the main table record is created.
    4) Lookup [Flat] contains a relatively small number of records compared to the main table, whereas Qualified lookup contains a large number of records compared to the main table.
        For better understanding, go through the above blogs given by others.
       Hope it helps,
    Reward points if found useful.
       Thanks,
       Narendra.M

  • Load balancing - WAAS

    Hi all,
    We have 2 x 674s in the data center and use the hash method for load balancing. Due to our IP address scheme, Cisco's hash method sends most of the connections to one WAE only. I know we can increase the weight (currently 0) to get a nearly 50-50 or 60-40 load balance, but I have no idea how to calculate the weight value. Currently it is 90-10 sharing! Any suggestions or documentation is much appreciated.
    Regards
    Srini

    Hello Srini,
    If you're sticking to hash (and cannot use mask for some reason), then you're correct, you can use weights.
    A couple of suggestions are here - https://supportforums.cisco.com/docs/DOC-21593#WCCP_best_practices_for_WAAS_deployment
    Make sure that the weight factors for individual devices are greater than 100 - that will ensure complete "bucket" coverage in case one of the devices is down (that is, the remaining device will get 100% of the load then).
    When the sum of all weight factors is greater than 100, the specific percentage of buckets assigned to a specific WAAS device is the weight assigned to that WAAS device divided by the total weight and rounded up. Rounding up guarantees that each WAAS device will be assigned at least one bucket.
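    A minimal sketch of that rule in Python (the device names and weights are made up for illustration):

    import math

    def bucket_percentages(weights):
        """Percent of buckets per device: weight / total weight, rounded up."""
        total = sum(weights.values())
        return {dev: math.ceil(100 * w / total) for dev, w in weights.items()}

    print(bucket_percentages({"WAE-1": 100, "WAE-2": 100}))  # {'WAE-1': 50, 'WAE-2': 50}
    print(bucket_percentages({"WAE-1": 150, "WAE-2": 100}))  # {'WAE-1': 60, 'WAE-2': 40}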
    p.s. Still mask assignment gives you a bit more flexibility of load-balancing between devices in WCCP farm - see http://www.cisco.com/en/US/prod/collateral/contnetw/ps5680/ps6870/white_paper_c11-608042.html for recommended methods, depending on your HW (WCCP router/switch).
    HTH,
    Amir

Maybe you are looking for

  • My ipad is not showing all my music or videos

    Looking for help with my iPad 2.  I have iTunes 11.1.5.5 on my Sony Vaio VPCSA laptop running Windows 7.  When I connect my iPad 2 and open iTunes, I can select my iPad from the menu column on the left and see 8 movies, all my music, and play lists. 

  • How can I get rid of a windows update deployment package?

    We are using SCCM 2007 with a primary site server and four child sites. I created a windows update list with a source directory and package.  Distributed the package to my primary site server and then from a child site distributed to a long list of d

  • ITunes 8 lost library and all applications for the iPhone 3g

    I am on WinXP Prof, using iTunes v8. This is second time I lost my entire library with ratings, applications, etc. for my iPhone 3g. The first tie it happened to iTunes v7, now with v8. Music folder (it is on another HDD) is still there, but iTunes s

  • Open a discoverer worksheet with parameter

    Hi I'm trying to open discoverer worksheet If the worksheet have no parameter we can open it with http://server/discoverer/viewer?cn=connid&event=openWorksheet&wsk=worksheet1 but if i want to open a worksheet with parameter i want to pass the paramet

  • Merging photos from several libraries

    I have an iPhoto library for myself and my wife as well. I also have libraries from work on a different computer. I also moved one to a backup hard disk. I now would like to combine all my photos into one library. I'm looking at 5 or 6 libraries in th