Threshold calculation

Hello gurus,
I was wondering if there are any functions available in Oracle PL/SQL that would allow me to do a threshold calculation? I was trying to find threshold values for a price range ...
I know this is a very broad question ... I honestly don't know what my business user is looking for ... but I was trying to get some ideas first, and will later push my business user to come up with a more specific question ... Initially I just wanted to get some ideas from you guys.
Thank you so much!!!

user642297 wrote:
Hello gurus,
I was wondering if there are any functions available in Oracle PL/SQL that would allow me to do a threshold calculation? I was trying to find threshold values for a price range ...
I know this is a very broad question ... I honestly don't know what my business user is looking for ... but I was trying to get some ideas first, and will later push my business user to come up with a more specific question ... Initially I just wanted to get some ideas from you guys.
Thank you so much!!!

I'm personally lost.
Is *"the threshold"* some sort of standard calculation, like a standard deviation or square root, that I should know of?
If so, I apologize for my ignorance; can you please post a link so I can learn about it?
If not ... maybe you could explain it for those of us who don't work where you do.

Similar Messages

  • WLC Load Balancing Threshold

I am trying to understand how the load-balancing threshold is calculated, but I am finding conflicting information, even within Cisco's own documentation. I would be grateful if anyone could help.
    Cisco's latest Wireless LAN Controller Configuration Guide for software release 7.0.116.0 (April 2011) contains the following information for configuring Wireless > Advanced > Load Balancing Page (emphasis mine):
    In the Client Window Size text box, enter a value between 1 and 20. The window size becomes part of the algorithm that determines whether an access point is too heavily loaded to accept more client associations:
    load-balancing window + client associations on AP with highest load = load-balancing threshold
In the group of access points accessible to a client device, each access point has a different number of client associations. The access point with the lowest number of clients has the lightest load. The client window size plus the number of clients on the access point with the lightest load forms the threshold. Access points with more client associations than this threshold are considered busy, and clients can associate only to access points with client counts lower than the threshold.
    Option 1
The formula shown is correct (load-balancing window + client associations on AP with highest load = load-balancing threshold). If so, this would mean that with a window size of 3, if the AP with the highest load at the time of calculation had 15 clients, the threshold would be 18. However, as no AP has 18 associations, this threshold would never be reached. Even if an AP did reach 18 associations, the next client trying to associate would trigger another threshold calculation, giving 21 (3 + 18), so again the threshold could never be hit.
    Option 2
The description in the paragraph below is correct (The access point with the lowest number of clients has the lightest load. The client window size plus the number of clients on the access point with the lightest load forms the threshold). This sounds much more sensible to me. In this case, if the window size is 3 and the AP with the lowest number of associations already has 7 clients associated, the load-balancing threshold would be 10, i.e. no load balancing would occur until a client tried to associate with an AP which already had at least 10 clients associated.
    Option 3
I have seen many descriptions on forums etc. of the load-balancing threshold being essentially the client window size, i.e. if the client window size is 3 then load balancing will kick in when a client tries to associate to an AP with at least 3 clients already associated. This doesn't match the above documentation unless the AP with the least number of clients associated doesn't have any associated clients, i.e. 0 clients.
    Questions
    I think Option 2 is the correct description of load balancing and the formula given stating use of the AP with the highest load is a typo (albeit still not corrected in the latest documentation). Am I correct?
The problem with using the Option 2 method of calculating the load threshold is that you will be unnecessarily performing load balancing in an environment where some of your APs actually do have zero clients associated, unless you set the window size to something close to 10.
I read here http://www.perihel.at/wlan/wlan-wlc.html#aggressive-load-balancing that when calculating the load threshold, the controller only accounts for the 8 'best' APs for a given client. In other words, if you have 60 APs on your campus but only 20 are visible to the client, the controller will only perform its load threshold calculations based on the 8 APs which have the best signal to the client. This would make sense, as there is no point setting a load threshold based on the lightest-loaded AP which is not even within 'reach' of the client. Is this correct? I cannot find any other documentation which supports this.
    Thanks in advance for your help with this.

Interesting, the config guide contradicts itself in the same paragraph.....    I thought maybe we had two different documents with different explanations.  I don't see any open documentation bugs asking to correct this, but I swear I've heard discussion of this in the past.......
First off:  Option #3 was the "old way". I think it changed in 6.0.    If you had a threshold of 5, then as soon as you had 5 clients on an AP it would reject the association (3 times, and then let them on at the 4th attempt).  Now it's a sliding window/scale.
Option #1 I think is completely wrong. As you described, how in the world would you ever surpass the threshold if the highest AP + the window is what you have to beat to load-balance?    Right, that just doesn't make any sense to me.....
    Option #2, the way you explain it is correct to my understanding...
    Your question #3 is also correct (not sure if it is Top 8 or based on an RSSI threshold though.)
The idea is that you don't want some AP in a remote office with 0 clients being your starting point.   So I believe that it is based on the top X candidates for your client.    If your client has 4 viable candidates (let's just say -70 or better), and one of those APs has 5 clients and the rest have 15, I'd expect load balancing to try to get you to the 5-client AP if your window size was ~10......  something like that anyhow...
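A minimal sketch of the Option 2 interpretation discussed above (threshold = client window size + client count on the least-loaded candidate AP). The function names and the idea of filtering to a candidate list are illustrative assumptions, not Cisco's actual algorithm:

```python
# Sketch of the Option 2 load-balancing threshold. Assumption: the
# threshold is computed over the client's candidate APs only, with the
# least-loaded candidate as the baseline.

def load_balancing_threshold(window_size, candidate_ap_loads):
    """window_size: configured Client Window Size (1-20).
    candidate_ap_loads: client counts on the APs visible to the client."""
    lightest_load = min(candidate_ap_loads)
    return window_size + lightest_load

def ap_is_busy(ap_load, threshold):
    # APs with MORE associations than the threshold are considered busy.
    return ap_load > threshold

# Example from the thread: window 3, least-loaded candidate AP has 7 clients.
threshold = load_balancing_threshold(3, [7, 12, 15, 9])
print(threshold)                   # 10
print(ap_is_busy(12, threshold))   # True: clients are steered elsewhere
```

With a window of 3 and a lightest load of 7, the threshold is 10, matching the Option 2 example above.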

  • Why does query do a full table scan?

    I have a simple select query that filters on the last 10 or 11 days of data from a table. In the first case it executes in 1 second. In the second case it is taking 15+ minutes and still not done.
    I can tell that the second query (11 days) is doing a full table scan.
    - Why is this happening? ... I guess some sort of threshold calculation???
    - Is there a way to prevent this? ... or encourage Oracle to play nice.
    I find it confusing from a front end/query perspective to get vastly different performance.
    Jason
    Oracle 10g
    Quest Toad 10.6
    CREATE TABLE delme10 AS
    SELECT *
    FROM ed_visits
    WHERE first_contact_dt >= TRUNC(SYSDATE-10,'D');
    Plan hash value: 915912709
    | Id  | Operation                    | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | CREATE TABLE STATEMENT       |                   |  4799 |  5534K|  4951   (1)| 00:01:00 |
    |   1 |  LOAD AS SELECT              | DELME10           |       |       |            |          |
    |   2 |   TABLE ACCESS BY INDEX ROWID| ED_VISITS         |  4799 |  5534K|  4796   (1)| 00:00:58 |
    |*  3 |    INDEX RANGE SCAN          | NDX_ED_VISITS_020 |  4799 |       |    15   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - access("FIRST_CONTACT_DT">=TRUNC(SYSDATE@!-10,'fmd'))
    CREATE TABLE delme11 AS
    SELECT *
    FROM ed_visits
    WHERE first_contact_dt >= TRUNC(SYSDATE-11,'D');
    Plan hash value: 1113251513
    | Id  | Operation              | Name      | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | CREATE TABLE STATEMENT |           | 25157 |    28M| 14580   (1)| 00:02:55 |        |      |            |
    |   1 |  LOAD AS SELECT        | DELME11   |       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR       |           |       |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM) | :TQ10000  | 25157 |    28M| 14530   (1)| 00:02:55 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR  |           | 25157 |    28M| 14530   (1)| 00:02:55 |  Q1,00 | PCWC |            |
    |*  5 |      TABLE ACCESS FULL | ED_VISITS | 25157 |    28M| 14530   (1)| 00:02:55 |  Q1,00 | PCWP |            |
    Predicate Information (identified by operation id):
       5 - filter("FIRST_CONTACT_DT">=TRUNC(SYSDATE@!-11,'fmd'))

    Hi Jason,
I think you're right about there being a kind of "threshold". You can verify the CBO costing with event 10053 enabled. There are many possible ways to change this behaviour. The most straightforward would probably be an INDEX hint, but you can also change some index-cost-related parameters, check histograms, decrease the degree of parallelism on the table, create a stored outline, etc.
    Lukasz

  • Realtime A/D conversion

    Hello,
I'm working on a new LabVIEW project. I did some VI programming before, but this is the first time I've worked with the DAQmx software.
I want to sample an analog channel, check whether the sampled value is above a threshold, and write the result to a digital channel. The problem is doing this at a rate of 10 kHz with 20 channels at the same time. I've already done some realtime programming, and I have the possibility to use a realtime PC system.
My questions are: how to program this (parallel loops, timed loops?), how to check whether the computer is really keeping up at 10 kHz, and how to do the calculations at the same time.
I have already read the 'Learn 10 Functions in NI-DAQmx...' tutorial, but that doesn't answer my questions. I'm using LabVIEW 8.0 with the NI PCI-6259 boards.
    Best regards,

    Possibly the point you have missed is that LabVIEW operates on the basis of data flow.
When the desired number of samples is available from a data acquisition (DAQ) task, the data becomes available to the processing routine by 'flowing' from the DAQ task into your process task. It's this flow that controls the execution when you connect the data wire from the DAQ task to your processing task.
Thus, if you use the DAQ Assistant and create a task to read your channels, and there is insufficient time to read in or process the data, an error (e.g. -200279, buffer overwritten) will be generated and made available at the error node, whilst if there is time available the system will wait until the timeout or until the data becomes available.
For your task I suspect you will need to use reasonably large data blocks; for example, at 10 kHz, transferring data as, say, 1000-sample blocks might suit your computer's processing configuration. You would then have 20 channels at 1000 samples, giving a transferred data set of 20,000 samples (if they are two-byte samples, that's about 40 KB of data). You then need to finish your data processing calculations (the threshold calculation) in less than 0.1 of a second to be ready for the next set of data, since at these rates and sample sizes there will be 10 sample sets per second. From that you can determine whether your processing task can be performed in the available time period.
Unless you need some kind of guaranteed realtime response behaviour, the realtime operating system may not be necessary. Don't forget that realtime only implies that a task will start at a certain time based on a given set of criteria. It does not guarantee that the data processing of a block will complete if the processing overhead is too large for the total system iteration rate.
    "May the flow be with you"
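The timing budget in the reply (20 channels × 1000 samples per block at 10 kHz leaves 0.1 s of processing time per block) can be sketched roughly like this; the threshold value and the fake data block are purely illustrative, and this stands in for the DAQmx read, not the LabVIEW wiring itself:

```python
import time

SAMPLE_RATE_HZ = 10_000
BLOCK_SAMPLES = 1_000          # samples per channel per block
N_CHANNELS = 20
THRESHOLD = 2.5                # volts; illustrative value

# Each block spans BLOCK_SAMPLES / SAMPLE_RATE_HZ seconds of acquisition,
# which is the deadline for processing it.
block_period_s = BLOCK_SAMPLES / SAMPLE_RATE_HZ   # 0.1 s

def process_block(block):
    """block: list of N_CHANNELS lists of BLOCK_SAMPLES floats.
    Returns one boolean per channel: did any sample exceed the threshold?"""
    return [any(sample > THRESHOLD for sample in channel) for channel in block]

# Fake data standing in for a DAQmx read: only channel 0 crosses the threshold.
block = [[0.0] * BLOCK_SAMPLES for _ in range(N_CHANNELS)]
block[0][500] = 3.0

start = time.perf_counter()
flags = process_block(block)
elapsed = time.perf_counter() - start

print(flags[0], flags[1])        # True False
print(elapsed < block_period_s)  # processing must finish within 0.1 s
```

If `elapsed` approaches the block period, the buffer will eventually overflow (the -200279 error mentioned above), which is exactly the check the reply suggests making.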

  • Threshold quantity calculation for one previous periods

    Hi,
There is a requirement to calculate the threshold quantity: given a calendar month (e.g. April 2007), it has to pick up the previous 12 months' shipped quantities (month-wise), i.e. from March 2007 back to April 2006, find the active months (months whose shipped quantity is > 0), and then calculate the threshold quantity.
For example:
jan  feb  mar  apr  may  jun  jul  aug  sep  oct  nov  dec  total
100  200  100  100   50  100  200   50  100   60  200  100   1360
threshold qty: (1360/12)*8 ≈ 907
Here 8 is the threshold factor and all 12 months are active.
jan  feb  mar  apr  may  jun  jul  aug  sep  oct  nov  dec  total
100  200    0  100   50  100  200    0  100   60  200  100   1210
threshold qty: (1210/10)*8 = 968 (only 10 active months)
The selection screen input is the calendar month, e.g. 04/2007 (April 2007).
Please give the logic for calculating the above, with steps.
Please specify any FMs for this.
    Thanks in advance
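Reading the two examples above, the rule appears to be: sum the shipped quantities over the 12 prior months, divide by the number of active months (quantity > 0), and multiply by the threshold factor. A sketch of that logic (the function name and the rounding are assumptions; the poster's 907 matches rounding 1360/12*8 = 906.67):

```python
def threshold_qty(monthly_shipped, factor=8):
    """monthly_shipped: shipped quantities for the 12 months preceding
    the selection month. Active months are those with quantity > 0."""
    active = [q for q in monthly_shipped if q > 0]
    if not active:
        return 0
    return round(sum(monthly_shipped) / len(active) * factor)

# First example: all 12 months active, total 1360 -> (1360/12)*8 = 906.67 ~ 907
print(threshold_qty([100, 200, 100, 100, 50, 100, 200, 50, 100, 60, 200, 100]))  # 907

# Second example: two zero months, total 1210 -> (1210/10)*8 = 968
print(threshold_qty([100, 200, 0, 100, 50, 100, 200, 0, 100, 60, 200, 100]))     # 968
```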

    Hi Viswa,
    Thank you for your prompt reply. I tried with your answer, when I ran AFAB with posting period 1, I am getting below error.
    "You want to carry out a repeat depreciation posting run in period 001. However, the last posting run was for period 006.
    Start the repeat run again using period 006".
    Kind Regards
    Shanid

  • CCMS threshold Value baseline calculation

    Hi Guys,
I am looking for the CCMS threshold value baseline calculation for ECC, BW, XI, SRM, CRM, APO & Solman systems. Many portals provide a description of each MTE class, but not the baseline calculation.
Please let us have some expert advice.
    Thanks.
    Manjunath

    Dear. Marc P. Gilomen.
I totally understand what you have written above.
    The threshold value shouldn't be changed until I change the rule related to the product.
    However, after I changed the rule, I have expected the threshold value should be changed according to the rule.
    But, it's not.
    This is the point.
After I change the rule, for example from 35% to 45%, the threshold value of the GTS preference determination transaction '/SAPSLL/PRECA01' is changed. But the replicated sales order's threshold value is not changed, and delivery and billing are the same as the sales order.
It is not changed whatever I do.
So I want to solve this problem, but so far I haven't found the answer.
    Thank you for reading.
    Best regards,
    Jong Hwan

  • Threshold Value in Sales Order Processing

    Dear All,
    I have a question for which I need inputs and suggestions from you.
We have a requirement in returns processing to do a check on the value of goods being returned: if this value goes beyond a certain threshold (a value pre-agreed with the customer), then the system should raise a message and not allow further processing of the returned goods.
    This is the approach that we think of following:
1) We store the initial threshold amount as a header condition in the Return Authorization (RA) document.
2) When creating Return Orders (RO) with reference to the RA, we check the value of all the open ROs against the RA.
3) We make reference to the RA mandatory at the time of RO creation.
4) If the total value exceeds the threshold, we raise an error message.
Basically we plan to do this check in real time: whenever an RO document is being entered, we do this calculation in real time.
Please advise whether you think this is a feasible approach, or in case you have any other suggestions for storing threshold amounts in SAP and working with them.
    Thanks a lot in advance.
    BR,
    Sahadj

dear friend,
your approach looks okay from my point of view...
you could also append the VBAK table, creating a new custom field for your purpose, and adjust the relevant code(s)
good luck!
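The check described in the numbered steps above (sum the values of all open ROs against the RA and block the new RO if the pre-agreed threshold would be exceeded) could be sketched like this; the function name, data shapes, and message text are purely illustrative, not actual SAP structures:

```python
def check_return_order(ra_threshold, open_ro_values, new_ro_value):
    """ra_threshold: pre-agreed value stored on the Return Authorization.
    open_ro_values: values of all open ROs already created against the RA.
    new_ro_value: value of the RO being entered.
    Returns (allowed, message)."""
    total = sum(open_ro_values) + new_ro_value
    if total > ra_threshold:
        return False, (f"Total RO value {total} exceeds RA threshold "
                       f"{ra_threshold}; order blocked.")
    return True, "OK"

allowed, msg = check_return_order(1000, [400, 300], 250)   # 950 <= 1000
print(allowed)          # True
allowed, msg = check_return_order(1000, [400, 300], 400)   # 1100 > 1000
print(allowed)          # False
```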

  • How do i put the results from a calculation as the offset value

hi .. I am trying to set the value of the mask offset from the results I get from a calculation.
This is what I am trying to do:
I want to get a ROI from an image, so I used thresholding to separate it from the background.
Then I find the centroid of the image. I then create a mask a little larger than the ROI to get rid of the background in an unthresholded image, but the position of the mask is at the origin (0,0).
I need the centroid of the mask to be the same as the ROI's, so I subtract the x and y values of the mask centroid from the ROI centroid to get the offset, which is done manually. But I want to do this for ROIs in different positions, so I want to get the value of the subtraction straight into the VI. How can I do that? I tried wiring the value of the subtraction into the input of the "set offset" node, but it does not work. Can anyone help me? Thanks very much.
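The offset computation described above reduces to a component-wise vector subtraction (ROI centroid minus mask centroid). This is a hedged sketch of the arithmetic only, not of the IMAQ/LabVIEW wiring; the coordinates are made-up examples:

```python
def mask_offset(roi_centroid, mask_centroid):
    """Return the (x, y) offset that moves the mask (initially at the
    origin) so its centroid coincides with the ROI centroid."""
    rx, ry = roi_centroid
    mx, my = mask_centroid
    return (rx - mx, ry - my)

# ROI centroid at (120, 85); mask created at the origin with centroid (40, 30).
print(mask_offset((120, 85), (40, 30)))   # (80, 55)
```

In LabVIEW terms, the two subtraction results would be bundled (or unbundled from the centroid cluster, as the follow-up post notes) and wired into the set-offset input.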

I managed to get it done... I did not know about the cluster-to-element function. Could you comment on my program?
I'm trying to construct a sign language translator. For the signing, I am using yellow colored gloves with colored finger tips. The yellow region is to help me detect the position of the gloves during thresholding.
First of all, I do thresholding to separate the ROI from the background, then I put a bounding box around the region of interest. Is there any better and easier way to do this?
Could anyone point me in the right direction?
I am very new to LV and image processing. I always have problems using some functions, especially when they require lots of settings. Sometimes the help file does not help much.
My next problem, after creating the bounding box, is how do I extract only the colors from the image? I tried color thresholding but the result is grayscale. How do I extract only the yellow region of the gloves and the finger tips to be further processed? In MATLAB, color segmentation is used, but I can't figure out how it works in LV. Please help me. Thanks.
    Attachments:
color threshold_andreas_centroid_mask.vi 263 KB

  • How to apply a 'calculated' radius in Unsharp Mask en mass?

    In about a week's time, when I've finished applying Curves to about 1200 B&W images, I'll be ready to apply an Unsharp Mask to each of them. Exactly how I'll do that I'm not certain yet, but I've been reading my Photoshop CS Bible (Deke McClelland) in preparation. On page 503 it states:
    If you're looking for a simple formula, I recommend 0.1 of Radius for every 15 ppi of final image resolution... If you have a calculator, just divide the intended resolution by 150 to get the ideal Radius value.
    For example, at 300 ppi use Radius = 2. Such a recommendation is just that: a recommendation (and not 'ideal' as he also states), but I've taken it as a good starting point; and while I've been adding Curves I have also been experimenting with Unsharp Mask and various Radii (but not yet saving the sharpened images).
    McClelland's statement is not clear regarding what Radius should be used if you are NOT working on the image at final resolution. I think I know what the answer is, but I want to confirm.
    Some background: I'll be printing on a Xerox iGen running a colour line screen at 175 lpi. The printer wants the images to be 300 ppi. The images at the moment range from an effective ppi of about 100 to 2000, depending on how much they have been scaled in InDesign. So I've got resolutions all over the place which at some point I will have to tidy up. Those images below about 250 ppi, I will be upsampling to 300. Those above about 450 ppi (this may change), will be downsampled to 300 when I convert to PDF.
    I have some questions:
    QUES 1
    Say I have an image at 1200 ppi, that will be downsampled to 300 ppi. Should I do the sharpening on the 1200 ppi image, or on the downsampled 300 ppi image? i.e Will it make any difference to the final printed result if sharpening is done before or after downsampling?
    QUES 2
    I think the statement:
    I recommend 0.1 of Radius for every 15 ppi of final image resolution
    should read
    I recommend 0.1 of Radius for every 15 ppi of image resolution
    Surely the Radius (assuming you accept the figures McClelland gives) depends only on the resolution of the image you are working on. i.e if you have a 1500 ppi image, use a Radius of 10 (1500/150), NOT a Radius of 2 (300/150) -- the final image resolution that is sent to the printer. Is that a correct reading of McClelland's recommendation?
    QUES 3
    My experiments indicate that for the type of images I am using, McClelland's recommendation oversharpens, at least for my liking. I will probably use half the figure he suggests: Radius = 1 for 300 ppi.
    Given that I have about 1200 images, if I was going to apply an Unsharp Mask manually this is what I would do:
    1. Put all the layers in each image into a Group.
    2. Calculate the Radius to be applied to each image from the formula: Radius = (IMAGE PPI) /300.
3. Apply an Unsharp Mask to the Group using the calculated Radius (plus Amount = 50, Threshold = 0)
    Of course I'm not going to do this manually -- well, I hope I'm not -- but what is the best way? Could an Action handle all of this? A Script? A combination of both? Something else entirely? Not possible in PS?
    In summary: I want to automatically apply a variable Radius that depends on the image's resolution. Possible?
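The radius rule being debated (McClelland's radius = ppi/150, halved here to ppi/300) and the resampling cleanup can be sketched as follows; the clamp bounds mirror the "below about 250 / above about 450" figures in the post, and the function names are assumptions, not Photoshop API:

```python
def unsharp_radius(image_ppi, divisor=300):
    """Author's halved version of McClelland's rule (which uses ppi / 150)."""
    return image_ppi / divisor

def target_ppi(effective_ppi, low=250, high=450, target=300):
    """Resampling policy from the post: upsample below ~250 ppi,
    downsample above ~450 ppi (at PDF export), otherwise leave alone."""
    if effective_ppi < low or effective_ppi > high:
        return target
    return effective_ppi

print(unsharp_radius(300))                 # 1.0  (the author's starting point)
print(unsharp_radius(1500, divisor=150))   # 10.0 (McClelland's original rule)
print(target_ppi(100))                     # 300 (upsampled)
print(target_ppi(1200))                    # 300 (downsampled)
print(target_ppi(320))                     # 320 (left as is)
```

As for automating it: a Photoshop Script can read the document resolution and compute the radius per image, which a plain Action cannot, so the per-image arithmetic above would live in the script.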

    The PDF says, for one of the steps:
    Command-click the duplicate channel to make the edge mask a selection.
    When I do that, a selection outline appears, but I can't work out, or find out, what conditions are applied to make the selection. Any ideas? I may have to start a separate thread for this question. I don't like to blindly follow recipes. I like to know what's going on.
    Guy,
I'm not sure I understand your question. With command-click, the channel is the selection. White = fully selected, black = not selected. It's all the steps taken to create the channel that affect the selection. To me the selection edges visually interfere with a soft mask, so I usually hide the edges.
In post 9, steps 2-6 all contribute to the final selection. Find Edges identifies the edges. Median has a smoothing effect. Maximum expands the selection a little, then the Gaussian Blur softens it.
This process (post 9) works for me when I sharpen images, but you may not like it. Sometimes no mask at all is a viable option - some folks like a grainy/noisy appearance in non-edge areas. Some people may even purposefully add noise to create that kind of look.
With USM, the Radius thickens the light and dark areas of edges, and Amount intensifies them (lightening the light, darkening the dark). Threshold is just what its name implies: it sets a parameter for Ps to apply the sharpening or not (but it can result in a pockmarked look in smooth areas, so I like 0).
    I always look at images at actual pixels in Ps when sharpening. With sharpening there is room for opinion. Some people like thicker edges (higher radius). Some people go easy on the amount, some people are more aggressive.
It's the scaling in ID that makes things difficult, because Actual Pixels won't give you a true print appearance. To correct this, the image either needs to be scaled to size in Ps and placed at 100% in ID - OR left as is in ID, with the resolution changed in Ps (300 x scaling %, i.e. 300 x 25% = 75 PPI).
    I like to start with radius 1 and 100 amount but these values are always subject to change. I wish it was an exact science but it's not. I like sharp images, but I don't want icicles in eyebrows either. I believe you said 175 LPI screening, so if it was me I would err on the sharp side...
Something else I just thought of - the print size is something to think about, too. Consider the two extremes (which may or may not apply to your project) - thumbnails and posters. If it's a thumbnail, by the time you've got the scaling right in Ps, there really isn't much pixel information left! In this instance a lot of sharpening probably wouldn't hurt; it would create contrast (which is important in very small images). The other end of the spectrum - posters. Actual pixels may be a little misleading in this case, because when the average person looks at a poster, they don't get right up to it at a normal reading distance; they view it from several feet away. For this reason poster images are sometimes only half-resolution, to keep the file size manageable and give a more realistic viewing perspective.
    Hope this helps. I am not a forum expert. The others may have better advice, or may see errors in my information.

  • How is GC cpu utilization % calculated?

I am trying to find a way to capture the problem where a user process eats up a ton of CPU for over 30 minutes. I thought that I could possibly use the CPU utilization metric.
In testing it, I set the warning and critical thresholds to 0 and 5% respectively to see what would happen. I was emailed the expected critical alert. Looking at the top sessions on the server, I noticed that one session was eating up 100% of one CPU (which I want to know about), but the alert email said that 45% was utilized, which I noticed was about the overall user load on the server.
Initially, I had the warning and critical thresholds set to 85 and 95% respectively, but did not get any warning about a user process that was taking up 100% of a CPU for > 5 minutes (with the 5-minute collection count set to one).
    My questions are
    1) How do I capture a single process eating up > 90% cpu in a multi-cpu environment
    2) Is the cpu utilization % capturing the Average Load of the cpus on the server and alerting when it's above the critical threshold?
    3) How is the cpu utilization % calculated?
    Help would be appreciated.

The CPU load is determined by running the Grid program nmupm, found in $AGENT_HOME/bin.
    The output is like:
    dbc12ykf: PDBGRP> nmupm osCpuUsage
    em_result=1|206331806.000000|201540370.000000|102524043.000000|958620752.000000
    em_result=2|212731589.000000|201203614.000000|67775500.000000|948506267.000000
    em_result=3|212402913.000000|230642957.000000|76833167.000000|917779040.000000
    em_result=4|242543014.000000|367865067.000000|81554006.000000|791426607.000000
    You will get a line for each cpu.
You'll need a bit of reverse engineering to figure out the columns - I think they are user, system, idle, 5-min avg - multiplied by 10000?
Now, to capture a big one - I think you will need a user-defined metric. Just write a simple shell script to run the above command and evaluate the numbers coming through.
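A sketch of the script idea above, parsing the `em_result` lines from `nmupm osCpuUsage`. Note the column meanings (user, system, idle, ...) are the reply's own guess, so treat the layout here as an assumption rather than documented behaviour:

```python
def parse_em_result(line):
    """Split one 'em_result=' line from `nmupm osCpuUsage` into
    (cpu_id, counters)."""
    payload = line.split("em_result=", 1)[1]
    fields = payload.split("|")
    return int(fields[0]), [float(f) for f in fields[1:]]

def busy_percent(counters):
    # Assumed column layout per the reply: counters[0]=user,
    # counters[1]=system, counters[2]=idle.
    user, system, idle = counters[0], counters[1], counters[2]
    return 100.0 * (user + system) / (user + system + idle)

line = "em_result=1|30.0|20.0|50.0|100.0"   # synthetic numbers, not real output
cpu, counters = parse_em_result(line)
print(cpu, busy_percent(counters))          # 1 50.0
```

A user-defined metric would run this per CPU and alert when any single CPU stays above the chosen percentage, which is the "single process pegging one CPU" case the original question asks about.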

  • SCE loses Quota Below Threshold events in cutoff redundancy mode

    Hi all,
We are implementing an inline cascade topology with 2 SCE 2020 devices; the action on SCE failure is cutoff, and we are experiencing a delay in "Quota Below Threshold" events. The events subsequently generated upon quota depletion (i.e. "Quota Below Threshold" and "Quota Depleted") do appear, but meanwhile the subscriber temporarily experiences loss of service. The most important implication is that the subscriber loses all open downloading sessions. We didn't experience such an issue while we were working in the "bypass" redundancy mode.
Here follows an excerpt from our logs which indicates the problem while a subscriber was downloading a file. The logs are generated by the API quota listeners at the lowest level. The quota that gets replenished each time is 50 MB. The quota threshold was set to 20 MB (from the default 10 MB) in an effort to mask the problem, but unfortunately it persisted. The downloading rate wasn't high enough to justify a rapid replenishment of quotas.
    2011-05-06 17:40:26,499 DEBUG [handlers.SceQuotaListener] Handling Quota event for subscriber xxx with quota 16426 and type QUOTA_EVENT_TYPE_STATUS
    2011-05-06 17:40:41,491 DEBUG [handlers.SceQuotaListener] Handling Quota event for subscriber xxx with quota 10161 and type QUOTA_EVENT_TYPE_BELOW_THRESHOLD
    2011-05-06 17:41:41,759 DEBUG [handlers.SceQuotaListener] Handling Quota event for subscriber xxx with quota 7668 and type QUOTA_EVENT_TYPE_BELOW_THRESHOLD
    [... QBT event  ???]
    2011-05-06 17:42:41,819 DEBUG [handlers.SceQuotaListener] Handling Quota event for subscriber xxx with quota -910 and type QUOTA_EVENT_TYPE_BELOW_THRESHOLD
    2011-05-06 17:42:41,823 DEBUG [handlers.SceQuotaListener] Handling Quota event for subscriber xxx with quota -910 and type QUOTA_EVENT_TYPE_DEPLETED ...
    We are of the impression that the quota below threshold or depletion conditions are detected after a small delay of 2-3 seconds. Is it possible that this detection gets delayed somehow under certain conditions?
    Thank you in advance

Rapid use of quota does explain this behavior, but it seems like you are already aware of this situation.
Just to be sure: the threshold should be set by evaluating the volume that the fastest subscriber in the system can consume in a 30-second period. In your policy this would be both upstream + downstream for the fastest subscriber (since quota is calculated in both directions). The volume which the subscriber can consume at such a rate over a 30-second period should be a guideline for setting the threshold value (the threshold should be set significantly higher than this value).
    The QM uses a Sliding Window Model for measuring subscriber consumption -
    http://www.cisco.com/en/US/docs/cable/serv_exch/serv_control/broadband_app/rel36x/qm_sol/01_overview.html#wp1053153
    The Remaining Quota interval is configured in the SCA-BB GUI (at the RDRs section). What have you configured it as?
Can you increase the quota replenishment limit to 100 MB from 50 MB and leave the below-threshold limit at 20 MB?
    Also not sure if you already have this but this link and the paragraph below may help:
    http://www.cisco.com/en/US/docs/cable/serv_exch/serv_control/broadband_app/rel355/qm_sol/02_scenarios.html#wp1053156
    Maximizing Quota Accuracy
    One of the most important aspects of the quota manager is accuracy of the quota levels for any subscriber. When you provision quota using an external server, a trade-off exists between quota accuracy and the number of network messages.
To maximize accuracy, configure the rate of the periodic remaining quota indication to a high value, and configure the size of the quota dosage to a small value. Such a configuration, however, causes performance degradation due to the high number of messages being generated in the network.
    Quota inaccuracies may occur during the changeover from one aggregation period to the next, or due to SCE fail-over. The level of inaccuracy depends on the configuration of the following parameters:
    •Rate of the periodic remaining quota indications
    •Quota dosage value
    During an aggregation period changeover, the following occurs until the first quota indication is received in the new aggregation period:
    •Any quota consumed by the subscriber is subtracted from the previous aggregation period.
    •The quota dosage value limits the size of any quota error.
    •The interval between the remaining quota indications limits the length of time during which consumed quota is subtracted from the previous aggregation period.
    In cases of SCE fail-over, the following occurs between the last quota indication in the failed SCE and the first quota indication in the new, active SCE:
    •Any quota consumed by the subscriber is not removed from the subscriber buckets.
    •The quota dosage value limits the size of any quota error.
    •The length of time during which quota is consumed is limited by the interval between the remaining quota indications.
    In all cases of inaccuracy, the quota remaining is calculated in favor of the subscriber. The only exception is if the aggregation period changeover occurs when the subscriber quota is already breached.
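    The trade-off described above can be made concrete with a back-of-the-envelope calculation. The sketch below is not from the Cisco documentation; it simply assumes a constant consumption rate and bounds the worst-case quota error by the dosage size plus whatever is consumed during one indication interval:

    ```java
    // Rough sketch (not from the Cisco documentation): worst-case quota error
    // as a function of the two tunables discussed above, assuming a constant
    // consumption rate purely for illustration.
    public class QuotaErrorSketch {

        // dosageMB: quota dosage size (MB)
        // intervalMin: minutes between periodic remaining-quota indications
        // rateMBPerMin: assumed constant consumption rate (MB/min)
        static double worstCaseErrorMB(double dosageMB, double intervalMin,
                                       double rateMBPerMin) {
            // the dosage bounds the error from any in-flight dosage grant;
            // the indication interval bounds how long consumption can be
            // attributed to the wrong aggregation period (or lost in fail-over)
            return dosageMB + intervalMin * rateMBPerMin;
        }

        public static void main(String[] args) {
            // small dosage + frequent indications: accurate, but many messages
            System.out.println(worstCaseErrorMB(5, 1, 0.5));   // 5.5
            // large dosage + rare indications: few messages, larger error
            System.out.println(worstCaseErrorMB(50, 30, 0.5)); // 65.0
        }
    }
    ```

    This is only meant to show why shrinking the dosage or the indication interval tightens accuracy at the cost of more network messages.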
    Shelley.

  • Weird "Trend" metric calculation in Tabular KPI

    Hi Experts ,
    We have a tabular model in which we designed a KPI to show an Actual metric value against its Target, with a status indicator showing how well the metric has performed. In the KPI calculation window of the tabular model there are placeholders to calculate Value, Status and Target, but none for "Trend". Even though we didn't code anything specific for the "Trend" calculation, under the newly created KPI we see "Trend" along with Value, Goal and Status. The "Trend" is behaving oddly in the tabular KPI: the trend indicator is shown for every dimension attribute that is sliced with the KPI, regardless of whether it has a metric value or not. I searched many websites to understand how this "Trend" is calculated in a KPI, but none of them shed any light on the "Trend" calculation. In this scenario, please suggest a way around this issue:
    How to hide the "Trend" indicator from the newly created KPI, since in tabular we apparently cannot define a "Trend" calculation as in Multidimensional cubes
    Understand the reason why "Trend" is displayed in tabular models
    Below is a snapshot of our KPI when viewed through Excel.
    Can you please help us hide the "Trend" expression from tabular models, so that our users aren't confused by an unwanted metric in the KPI.
    Rajesh Nedunuri.

    Hi NedunuriRajesh,
    According to your description, since you haven't specified any expression for Trend calculation, you want to hide the Trend option. Right?
    In Analysis Services Tabular, the Value, Goal, Status and Trend of a KPI are based on the Base Value, Target Value and Status Threshold. Whether or not you specify a Trend Expression, the Trend box is always displayed in the KPI pane, and the calculation is done automatically. This is by design; there is no way to edit or remove it, so your requirement can't be achieved currently.
    I recommend you submit a feature request at https://connect.microsoft.com/SQLServer so that we can try to modify and expand the product features based on your needs.
    Best Regards,
    Simon Hou
    TechNet Community Support

  • Threshold value problem.

    Hi, experts.
    Now, I'm testing the preference calculation.
    After I save a sales order document, the sales order is replicated to the GTS server,
    and the threshold value is calculated by the system.
    In my understanding, the threshold value should come from the rule that is assigned to the relevant product.
    When I executed the preference calculation in the GTS system, the threshold value was calculated correctly.
    But after the GTS system receives the sales order document from the ERP system, the value is different from the value of the preference calculation executed in the GTS system.
    And even when I change the rule's condition (%), the threshold value of the sales document in the GTS system does not change.
    But the threshold value of GTS's preference calculation transaction changes whenever I change the rule.
    I want to know how to solve this issue.
    The threshold values shouldn't differ from each other.
    Thank you for reading.
    Best regards,
    Jong Hwan.
    Edited by: Jong-Hwan Park on May 26, 2011 5:11 AM

    Dear. Marc P. Gilomen.
    I totally understand what you have written above.
    The threshold value shouldn't change until I change the rule related to the product.
    However, after I changed the rule, I expected the threshold value to change according to the rule.
    But it doesn't.
    This is the point.
    After I change the rule, for example from 35% to 45%, the threshold value in the GTS preference determination transaction '/SAPSLL/PRECA01' changes, but the replicated sales order's threshold value does not. The delivery and billing documents behave the same as the sales order.
    It doesn't change whatever I do.
    So, I want to solve this problem. But until now, I didn't find the answer.
    Thank you for reading.
    Best regards,
    Jong Hwan

  • Tables for threshold value in GTS 11

    Good morning,
    to get away from our old ERP preference calculation we bought a license for GTS. At the moment we are discussing implementing version 10.1 or even 11.
    I am worried because I am not sure how to get data out of GTS into BI (BW) with version 11. We need the price and the threshold value (formerly table MMPREF_PRO_....), and those for the vendor declaration from the supplier and the compression (LFEI/MAPE).
    Does anybody have experience with getting this into BI? And the "new" table names in GTS?
    I hope my questions are not too dumb, but we are just starting.
    Thanks for any help and have a nice day.
    Rgds Alex Linck
    Kostal Germany

    Hi ,
    Customize as per the paths given below for the GTS to BI (BW) data replication:
    IMG: Integration with Other mySAP.com Components > Data Transfer to the SAP Business Information Warehouse > General Settings > Maintain Control Parameters for Data Transfer.
    GTS > General Settings > Document Structure > Define Document Types > turn on the "Transfer to SAP NetWeaver BI Active" flag.
    GTS > General Settings > Document Structure > Define Item Categories > turn on the "Transfer to SAP NetWeaver BI Active" flag.
    GTS > General Settings > Organizational Structure > Control at Foreign Trade Organization (FTO) for SAP NW BI > in the "BI Active" column, turn on this flag for active FTOs.
    Ashish

  • Trying to set a calculated attribute in an entity implementation java file

    Hi, I'm working in JDeveloper 9.0.3.2 on a web application, and the problem is as follows:
    I have one table. This table has an attribute whose value is calculated from another attribute in the same table. Note that it is not a transient column; it is a real entity column in the entity object.
    First I tried this using custom code in the validateEntity method, but it failed with the error: "JBO-28200: Validation threshold limit
    reached. Invalid Entities still in cache".
    Next, I tried the same thing using custom code in the setter and getter methods via the populateAttribute method. The transaction seems successful, but the result is only reflected in the entity cache, so when I query the database directly this attribute is empty.
    I have no idea what to do; I'm tired of trying to solve this problem.
    Please help me!
    Thank u
    Orlando Acosta
    Infogroup Team
    Colombia South America

    Thank you very much for the answer.
    I tried to do what you suggested, but I get an error message when I tried to put session data into the user session object on my JSP page.
    Here is part of my codes in the JSP page.
    <%
        // Retrieve all request parameters using our routine to handle multipart encoding type
        RequestParameters params = HtmlServices.getRequestParameters(pageContext);
        String dsParam = params.getParameter("datasource");
        String formName = dsParam + "_form";
        String rowAction = "Current";
        String event = "Update";
        String userName = (String) session.getAttribute("userName");
        if (!(getDBTransaction().getSession().getUserData().containsKey("user")))
            getDBTransaction().getSession().getUserData().put("user", userName);
    %>
    And here is my error message.
    Error(16,16): method getDBTransaction not found in class _DecalDataEditComponent
    I got the other half working where I retrieved the session data in a setter method of an Entity Object Class as below.
    public void setDestroy(Date value)
    {
        if (value != null)
            setDestroyedBy((String) getDBTransaction().getSession().getUserData().get("user"));
        setAttributeInternal(DESTROY, value);
    }
    Your help is very appreciated.
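    On the earlier point that populateAttribute only updated the entity cache: the following self-contained sketch (a stub model, not the real oracle.jbo API) illustrates why a cache-only write is never posted to the database, while a setAttributeInternal-style write marks the row dirty so it is posted on commit:

    ```java
    // Minimal stub (NOT the real oracle.jbo classes) contrasting a cache-only
    // write with one that marks the row dirty for posting.
    import java.util.HashMap;
    import java.util.Map;

    class EntityRow {
        private final Map<String, Object> cache = new HashMap<>();
        private boolean dirty = false; // only dirty rows are posted on commit

        // analogous to populateAttribute: updates the entity cache only,
        // so the value never reaches the database
        void populateAttribute(String name, Object value) {
            cache.put(name, value);
        }

        // analogous to setAttributeInternal: updates the cache AND marks
        // the row dirty, so the value is posted to the database on commit
        void setAttributeInternal(String name, Object value) {
            cache.put(name, value);
            dirty = true;
        }

        boolean willBePosted() { return dirty; }
        Object get(String name) { return cache.get(name); }
    }

    public class DerivedAttributeDemo {
        public static void main(String[] args) {
            EntityRow row = new EntityRow();

            // cache-only derivation: visible in the cache, never posted
            row.populateAttribute("DISCOUNT_PRICE", 90.0);
            System.out.println("posted? " + row.willBePosted());

            // derivation inside the setter via setAttributeInternal: posted
            row.setAttributeInternal("BASE_PRICE", 100.0);
            row.setAttributeInternal("DISCOUNT_PRICE", 90.0);
            System.out.println("posted? " + row.willBePosted());
        }
    }
    ```

    The attribute names here are hypothetical; the point is only the cache-versus-dirty distinction behind the behaviour described in the question.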
