Impact of Scavenger Policing on CPU?

Hi Guys,
With so many policers in use in the scavenger-class model, does this impact the switch's CPU? For example, if I configure my access-layer 6500 with 500 users to have a policer on every ingress interface, that will be a lot of policers (500!). Could the Sup720 CPU get hit?
Likewise with the 4500s and 3750s?
I understand the Scavenger-class system. However, I have this CPU worry. If it's all handled in hardware, then it's OK?

Could you paste a capture of "show proc cpu" from before and after applying the policy?

Similar Messages

  • Screen Sharing and WindowServer processes using 35-45% of CPU!!

    I have recently relocated and reconfigured my Apple network and computers in the house: 1) a MacBook Air driving dual displays with a 20" Cinema Display in the office, and 2) a Mac mini connected to a large LCD TV in the living room.
    The plan now is to use the MBA as the primary computer and connect via wireless screen sharing to the Mac mini so that I can monitor that system, run more intensive apps, watch EyeTV, handle file-server tasks, etc.
    But this has increased CPU and fan activity on the MBA beyond what I would expect. Activity Monitor indicates that the Screen Sharing and WindowServer processes are using a combined 35-45% of the CPU (the Screen Sharing process alone is a constant 25%!).
    This is not good, and I wonder if it will be fixed in a future release. Any ideas or suggestions on how to limit the impact of screen sharing on CPU?

    It seems to be running better after a system update and a move to a faster computer (now using the higher-end MacBook Air).

  • 80% CPU?

    What's wrong with Safari? It takes 80% of my CPU (2 GHz G5) even if there is not a single window open!
    Thanks for a hint
    Peter

    I can't think of a reason why that may be occurring; however, I am having exactly the same issue as you on my Intel iMac, with Safari often running beyond 112% CPU usage with just one window open. If a page has to use Flash or a similarly CPU-taxing plugin, Safari becomes virtually unusable. Quitting, then re-opening, often reduces Safari's impact on the CPU; however, ten, or even just a few, minutes into use it's once again wildly out of control...
    Peter: it's probably not the answer you're looking for, and is most likely something you have already considered, but I'd suggest that for the time being you use Firefox (which seems infinitely more stable) or try to roll back Safari to an older, more stable version. An update to fix the current problem may be some time away.
    Paul.

  • Burst value for Policers in a 3750

    Hi,
    I am trying to create a policy-map to use at the ingress of a FastEthernet interface, to enforce bandwidth utilization and marking for incoming packets.
    One of my queues is used for VOICE. My objective for this queue is to guarantee around 6Mbps. I started with the configuration below, but after using 3rd-party testing software (WAN Killer and Qcheck) I realized that the enforcement wasn't working, because I was reaching speeds of 90Mbps for that particular class.
    The first thing I thought about was queueing at the egress.
    As you can see, at the queueing level I am using shaping for the PQ with a 10% shape (10 0 0 0), so I assume all traffic should be dropped beyond 10Mbps (assuming a 100Mbps port).
    After not being able to explain why my shape wasn't working the way I thought, I focused on the policer.
    Looking at the Burst value, I tried to modify it, but the results of my testing didn't make any sense *.
    I started testing with pairs of Bandwidth/Burst and the average speed reached just didn't make any sense.
    I would like to be able to predict the max. speed based on my Policers (Bandwidth/burst). Is there any way to do it?
    Thanks in advance for the help.
    mls qos
    mls qos map policed-dscp 24 26 to 0
    mls qos map cos-dscp 0 8 24 26 34 46 48 56
    mls qos srr-queue output dscp-map queue 1 threshold 2 34
    mls qos srr-queue output dscp-map queue 1 threshold 3 46
    mls qos srr-queue output dscp-map queue 2 threshold 2 24
    mls qos srr-queue output dscp-map queue 2 threshold 3 26
    mls qos srr-queue output dscp-map queue 3 threshold 3 0
    mls qos srr-queue output dscp-map queue 4 threshold 1 8
    mls qos queue-set output 1 buffers 5 10 84 1
    ip access-list extended MAPI
    deny ip any any
    ip access-list extended SCAVENGER
    deny ip any any
    ip access-list extended VOICE-SIGNALING
    permit tcp any any range 2000 2002
    permit tcp any range 2000 2002 any
    ip access-list extended VIDEO
    deny ip any any
    ip access-list extended VOICE
    permit udp any any range 16384 32767
    class-map match-all MAPI
    match access-group name MAPI
    class-map match-all VOICE-SIGNALING
    match access-group name VOICE-SIGNALING
    class-map match-all VIDEO
    match access-group name VIDEO
    class-map match-all VOICE
    match access-group name VOICE
    class-map match-all SCAVENGER
    match access-group name SCAVENGER
    policy-map QOS
    class VOICE
    police 6000000 450000 exceed-action drop
    set dscp 46
    class VIDEO
    police 4000000 300000 exceed-action drop
    set dscp 34
    class VOICE-SIGNALING
    police 1000000 75000 exceed-action policed-dscp-transmit
    set dscp 26
    class MAPI
    police 3000000 225000 exceed-action policed-dscp-transmit
    set dscp 24
    class SCAVENGER
    police 1000000 75000 exceed-action drop
    set dscp 8
    class class-default
    police 85000000 1000000 exceed-action drop
    set dscp 0
    ! Access Interfaces:
    interface range FastEthernet0/1 - 48
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport voice vlan 800
    spanning-tree portfast
    srr-queue bandwidth share 1 5 94 1
    srr-queue bandwidth shape 10 0 0 0
    priority-queue out
    service-policy input QOS
    exit
    * I tried testing different sets of values, and the bandwidth I was able to reach wasn't consistent, at least with the way I understand it.
    Here are some sets of values I used and the average reached speed:
    Bandwidth/Burst -> Reached-Bandwidth
    10000/8000 -> About 1Mbps
    20000/8000 -> About 2Mbps
    20000/20000 -> About 2Mbps
    30000/8000 -> About 0.5Mbps
    30000/15000 -> About 0.5Mbps
    30000/500000 -> About 0.5Mbps
    320000/8000 -> About 320Kbps -> This is an actual example from the SRND.
    4000000/8000 -> About 3Mbps
    5000000/8000 -> About 5Mbps
    6000000/8000 -> More than 15Mbps
    5500000/8000 -> Around 6Mbps (I had a few instances where the max. speed reached 25Mbps)

    None of this is making sense to me. Please provide more details after considering the following:
    1. The priority-queue on the c3750 is output only, so setting a 'shape' to 10% would only affect outgoing traffic from the port, not inbound traffic. You can configure one ingress queue as the priority queue by using the 'mls qos srr-queue input priority-queue bandwidth ' global configuration command (see the docs).
    2. The 'shaping' that you are describing for the priority queue is queue shaping, not a traffic shaper as if you had a policy map with a 'shape average ' command. This 'queue shaping' is outbound only and will not impact inbound traffic to the port. This is different if this is a metro c3750 on the ES ports.
    3. Why are all 48 ports configured as trunks?
    4. Why are you configuring portfast on each port instead of globally 'spanning-tree portfast default'?
    5. What is the egress port for this switch? A Gig port, or just other copper ports? Where does this traffic go from these copper ports? What is the config on the uplink or downlink ports? If this is the only switch involved in your test, then you are definitely creating an interesting test environment in which your policer is policing inbound traffic and each port is 'queue shaping' outbound traffic. I would suggest one or the other: configure what is appropriate for what you need, rather than configuring what looks like everything and trying to find out what works. I've been down this road; you will never get it exactly the way you want it.
    6. Why on earth are the burst parameters so high in your config example? In most cases you should just configure the average rate and have the switch determine the appropriate burst values (see the token-bucket sketch after this list).
    7. Why are you using access-lists for matching voice traffic? Your ACL matching udp ports from 16384 to 32767 has two huge problems: first, there is no guarantee that the udp packet is voice (many worms use ports in that range); second, RTP traffic only uses even port numbers, and the odd port numbers carry the RTCP control packets. I'd love to be on your network: if I configure my laptop to trunk and send my edonkey traffic on udp port 16999, I will have high priority! At least include the destination ip of the call manager and voice gateways in the ACL to be more restrictive.
    8. Why aren't you trusting the end device, like an IP phone, rather than trying to re-write the IP DSCP value using an ACL? The best practice is for the switch ports to be configured (using auto qos or not) to use CDP to allow access to the voice vlan and NOT to use a trunk (the IP phone will tag the voice traffic with dot1q for the voice vlan and the port never actually 'trunks').
    9. There are significant restrictions on how you can apply QoS policies to the switch ports on the ASIC based Catalyst platforms, including policing granularity, number of TCAM entries required, number of match statements per class, number of classes, etc.
    10. Last, can you provide the IOS version and switch model that you are using?
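    To illustrate point 6: the 'police <rate> <burst>' parameters describe a single-rate token bucket that holds at most <burst> bytes and refills at <rate>/8 bytes per second (at 6000000 bps that is 750000 bytes per second). Over any long test, conforming traffic should therefore converge on roughly the configured rate no matter which burst you pick; burst only controls how large a momentary spike gets through. Below is a toy Java simulation of that textbook model; the offered load, packet size, and burst are made up for illustration, and the 3750's hardware policer has ASIC granularity and platform behavior that this deliberately ignores:
    // TokenBucketSim.java - toy single-rate policer model, NOT the 3750 ASIC.
    public class TokenBucketSim {
        public static void main(String[] args) {
            final long rateBps = 6000000L;    // 'police <rate>': average rate, bits/sec
            final long burstBytes = 8000L;    // 'police ... <burst>': bucket depth, bytes
            final int pktBytes = 1500;        // constant offered packet size
            final long stepNs = 100000L;      // offer one packet every 100 us (~120 Mbps)
            final long runNs = 10000000000L;  // simulate 10 seconds
            double tokens = burstBytes;       // bucket starts full
            long sentBytes = 0;
            for (long t = 0; t < runNs; t += stepNs) {
                // refill at rate/8 bytes per second, capped at the bucket depth
                tokens = Math.min(burstBytes, tokens + stepNs * (rateBps / 8.0) / 1e9);
                if (tokens >= pktBytes) {     // conform: transmit and debit the bucket
                    tokens -= pktBytes;
                    sentBytes += pktBytes;
                }                             // else: 'exceed-action drop'
            }
            System.out.printf("policed throughput ~ %.2f Mbps%n",
                    sentBytes * 8.0 / (runNs / 1e9) / 1e6);
        }
    }
    If your measured numbers diverge wildly from this model, as your table clearly does, then the policer is probably not the thing shaping your results, which is exactly why the questions above about egress queue shaping and the uplink matter.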
    I will provide some more advice once I understand the above information.
    /Rick

  • How to use loop in RuleBase

    Hi,
    I have a scenario where I have a set of values (say 10), and I need to verify whether any 2 of them are the same.
    How can I write a rule around this scenario?
    Please help me!!
    Thanks.

    Not sure I am following your scenario. The best approach is almost always to:
    1) Clearly identify the determination/decision(s) you are trying to make (i.e. is something valid, eligible, etc.). Note it is best to express the meaning of the conclusion, not the technical detail (good: "product X's incompatible options include ..." vs. cryptic: "product X can't be sold with options whose IDs begin with 'Y'").
    2) Write out your policies that define the conditions under which each conclusion can be reached. Use complete sentences, and structure them to deal with more complex logic. Focus on meaning, not implementation-specific expressions.
    With this top-down approach, the business-level policies should be clearly expressed and easier to understand. At that point, the focus can shift to any specific phrasings or syntax necessary for the business policies to be understood or deduced from implementation-specific details such as codes, IDs, etc.
    Applying this approach to your scenario may require answering some of the following questions:
    - What does a base represent?
    - What does a SKAC represent?
    - What does the relationship between base and SKAC mean in business terminology? (i.e. is one a subaccount of a main account)
    - What decision(s) about base accounts or SKAC accounts are you trying to make?
    - What, if any, information is preexisting vs. being selected or provided as part of a transaction? (this isn't usually necessary for expressing business policies, but will eventually impact how business policies ground out in the data being provided in order to make a determination)
    Hope this helps get you moving forward!
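    As a footnote: you didn't name the rule product you're using, so here is the raw check itself (do any two of the N values match) in plain Java, just to pin down the logic the rule ultimately has to express. Treat it as an illustration, not as rulebase syntax:
    import java.util.HashSet;
    import java.util.Set;
    public class DuplicateCheck {
        // True if any two values in the array are equal.
        static boolean hasDuplicate(int[] values) {
            Set<Integer> seen = new HashSet<Integer>();
            for (int v : values) {
                if (!seen.add(v)) {     // add() returns false if v was already present
                    return true;
                }
            }
            return false;
        }
        public static void main(String[] args) {
            System.out.println(hasDuplicate(new int[] {3, 7, 1, 9, 3}));  // prints true
        }
    }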

  • Portal UWL load flooding ECC

    We are intermittently encountering an issue with the load generated by the UWL service. The Portal is flooding the backend (ECC) with RFCs calling SAP_WAPI_GET_HEADER and SAP_WAPI_GET_OBJECTS. We go from a couple of million RFCs a day up to 15 million when these calls come in.
    The impact was very high CPU utilization on the app servers and long wait times for work processes, so we reduced the logon group to only two app servers, which then breached the conversation ID limit (500) and basically brought down those two instances.
    We've now opened up the logon group to the four app servers and restarted the Portal to allow balancing across them. All's well for the moment. However, we're at a loss as to what to do next. SAP has recommended an upgrade to the service, but this will take a couple of weeks of deployment and testing. We're approaching quarter end, so we are in a critical period.
    We've reviewed KBA 1577547 and are configured as follows:
    Default Cache Validity Period, in Minutes = 5
    Page refresh rate = 150
    Delta Pull Channel Refresh Period (in Seconds): 60
    Job in ECC (UWL_DELTA_PULL_2) = 3 mins (changed from 1 minute last week).
    Any ideas?

    Hi Tony
    As I said earlier, I am not very conversant with UWL; however, looking at the RFC calls, it seems that for every work item the system calls SAP_WAPI_GET_HEADER to get the header information of a single work item at a time, and SAP_WAPI_GET_OBJECTS to get the business/class objects associated with a single work item at a time. Both of these RFC function modules take a single work item ID as input.
    So, logically speaking, the number of work items will have a direct impact on the number of RFC calls.
    If this is a recent spike, please check which new workflows, or changes to existing workflows, have happened lately. It is also quite possible that some IDoc functionality has been activated, because some IDocs generate work items (especially in failure cases), and if there are repeated failures, the work item count can go up.
    SWI2_FREQ is a good transaction for getting an overview of workflows, at the top level as well as the work item level. Just select the appropriate date range and check which task (work item) has the maximum entries, or which workflow (header level/sub-WF) has the maximum entries. Compare with the period before this spike; if there is a huge difference, this might be the cause.
    If not, then let's wait for experts in the UWL area to provide some guidance :-)
    Regards,
    Modak

  • Patch Assessment Report/Template

    Does anyone have an Oracle patch assessment report/template that can be used to provide to management/owners of Oracle systems?
    I know Oracle does a good job of providing the quarterly Critical Patch Update (CPU) with comprehensive information, which may be up to 10 pages.
    But is there a short, generic template (1 or 2 pages) that you use for your managers, which highlights the risks, business impacts, contingencies, etc. of a CPU and leaves the real technical details out? Or does anyone have other experiences, stories, or information they provide to managers to convince them and get approval for deploying Oracle patches?

    Yes, but I was wondering if anyone has used any other tricks/templates with their managers to get approval, i.e. with those managers who don't really understand an Oracle CPU.

  • Overclocking a Core i7 920 with X58 Platinum: Advice needed!

    Hello everyone,
    Just got my Core i7 920 and I would like to see how far I can take it with the hardware that I have. Although I've read countless overclocking articles over the years (mainly on Tom's Hardware), this is actually going to be my first time doing it. So I guess this makes me a n00b.... 
    So, here is my hardware setup:
    CM Storm Scout Case
    Intel Core i7 920
    MSI X58 Platinum
    Coolit Systems Domino CPU Water Cooling System
    G.SKILL 6GB DDR3-1600 CL 9-9-9-24
    ASUS GeForce 9500GT Top
    Seagate Barracuda 7200.12 1TB
    PC Power & Cooling Silencer 750 Quad 750W
    I've already tried fiddling around in the BIOS and also tried the DIP switches on the MOBO (tried the 166 settings). At this point I'm guessing I'll have to turn off some options in the BIOS or something, because CPU-Z is reporting a different Core Speed, multiplier, and QPI Link every 2 seconds.
    For the record, I'm not necessarily trying to break 4Ghz with this overclock... just wanna take advantage of this 920 and the CoolIt Domino; anything stable between 3.2 and 3.8GHz and I'm happy

    Depends on the type of blue screen you get.
    The way I understand it, if the error code ends with 124 (i.e. 0x0000124), it is most likely QPI related.
    This is taken from the evga forums and I am only posting it as a reference. The credit belongs to Freemortal at evga. Not everything applies to the MSI boards, but the important ones do.
    This guide is an attempt to assemble as much info about the various i7 voltages you can change in the BIOS.  I can't do it alone though.  This community has a wealth of information on the subject.  Please post what you know and I'll add it to the first post here.  Please make it as concise as possible.
    If you believe any of this information to be inaccurate, let me know.
    NOTE: This guide applies specifically to the "EVGA X58 SLI" motherboard released in 2008. Most of this info also applies to other X58 motherboards, but some labels and tolerances may be different depending on the make.  This guide assumes you use a high-quality, after-market heatsink and fan.  Do not raise voltages with the stock cooler!
    THE VOLTAGES
    Caution: The voltage you set is not necessarily the voltage you get.  Furthermore the voltage you read is not necessarily accurate either.  The Eleet utility (along with any other software monitoring utility) will simply report what the motherboard tells it to report.  When measured independently, these readings are close, but not entirely accurate.
    VCore (default: 1.28125v, Intel's max 1.375v, VCore over 1.50v on air cooling is risky)
      What it does:
        Sets max voltage to the CPU cores. (if Vdroop is disabled, it will set the min voltage instead)  The i7 doesn't need much voltage at speeds under 3.8ghz.   (For example, I can get 3.8ghz on 1.275 vcore)  Beyond that the voltage requirements climb sharply.
      When to raise VCore:
        * BSOD 101 "Clock Interrupt not received from Secondary Processor"
        * LinX produces errors that are very close to the correct results (see the LinX notes below)
        * LinX errors happen within 1 min of starting LinX
        * LinX produces BSOD within the first minute
      You know VCore is too high when:
        * CPU cores approach a peak of 85c on full load
        * It is unknown how higher voltages may impact the life of the CPU
    CPU VTT Voltage (default: 1.1V (+0mV in BIOS), Intel's max 1.35V (+250mV))
    What it does:
        VTT connects the cores with the memory.  Raising VTT helps keep a system stable at a higher QPI rate.  Since QPI is calculated from bclk, the higher the bclk, the more VTT voltage you will need.  VTT is also called "QPI/DRAM Core" on other motherboards.
        Prevent CPU damage: VTT voltage must be within 0.5V of VDimm. Vdimm can fluctuate by as much as 0.05V from settings so you may want VTT within 0.45V of VDimm for that extra margin of safety.  Example: if Vdimm is 1.65V, then VTT must be at least 1.20V.
      When to raise CPU VTT Voltage:
        * BSOD 124 "general hardware failure"
        * LinX errors happen only after 10 min or more
        * LinX hangs but does not BSOD
        * LinX reboots without BSOD
      You know CPU VTT Voltage is too high when:
        * Most users try to stay below 1.45V (+350mV) for 24/7 use without additional direct cooling.
        * The motherboard doesn't read the temp so you may need an IR thermometer to be sure you are not pushing VTT too far. 
    CPU PLL VCore (default: 1.800V, spec range: 1.65V-1.89V)
    What it does:
        Keeps CPU clock in-sync with bclk.
      When to raise CPU PLL VCore:
        * May help with stability while increasing the bclk or CPU multiplier (or may make it worse).
        * May help with stability past 210 bclk if you observe that during runtime the QPI Link (found in E-Leet) bounces too much.
        * Not commonly raised.  May actually cause instability.  Test this variable alone.
      You know CPU PLL VCore is too high when:
        * It's possible you could actually gain stability by lowering this.
    DIMM Voltage (default: 1.5V, Intel's max 1.65)
      What it does:
        Voltage to the RAM. Despite Intel's warnings, you can raise voltage beyond 1.65 as long as it is always within 0.5V of VTT (as described above).
      When to raise DIMM Voltage :
        * High performance/gaming RAM usually requires at least 1.65v to run at spec.  Some manage to get it slightly lower.
        * Stable bclks over 180 often require VDIMM beyond 1.65V.  Remember to keep VTT voltage within 0.5V of VDIMM.
      You know DIMM Voltage is too high when:
        * Memory is too hot.  [more info on this is needed]
    DIMM DQ Vref (default: +0mV)
      What it does:
        It is the reference voltage for a pseudo-differential transmission line. The DQ signals sent by the memory controller on the i7 should swing between logic-hi and logic-lo voltages centered around VREF. VREF is typically half way between the drain and source voltages on the RAM.  Most VREF generator circuits are designed to center between the VDD and VSS voltages on the RAM. There is usually temperature compensation built into the circuitry as well.
     When to raise DIMM DQ Vref:
        * Vref might be adjusted if (after measurement) it was determined not to be properly centered between VDD and VSS of the DIMM. Without a good oscilloscope it's difficult to imagine that most users could set VREF correctly. They may be able to set VREF empirically by moving it up or down and checking for POST or BSOD problems.
     Further reading:
        http://download.micron.com/pdf/technotes/ddr2/TN4723.pdf  The document is for DDR2 but differential signaling is a topic that transcends memory models. It has been done for decades in high-end systems and the advantages/drawbacks are well understood.
    QPI PLL VCore (default: 1.1v, <1.4v is pretty safe)
      What it does:
        Keeps on-chip memory controller in-sync with bclk.
     When to raise QPI PLL VCore:
        * Try raising this along with Vcore and VTT, but in smaller increments.
        * Helps stabilize higher CPU Uncore frequencies and QPI frequencies (in CPU feature)
        * Try raising this when you increase memory clock speed via multiplier.
        * Try raising when LinX produces errors after a few minutes without BSOD
    IOH Vcore (default: 1.1V)
      What it does:
        Sets voltage for on-chip north bridge which connects PCIE2.0, GPU, Memory, and CPU.
      When to raise IOH VCore:
        * Possibly needed if you overclock your north bridge (via bclk and CPU Uncore freq.)
     You know IOH VCore is too high when:
        * Memory errors? (just a guess)
        * GPU intensive apps like 3dmark vantage crash. (another guess)
    IOH/ICH I/O Voltage (default: 1.5V)
      What it does:
        Some sort of on-chip bus voltage; otherwise unknown.
    ICH Vcore (default: 1.05V)
      What it does:
        South Bridge chip on the motherboard.  Connects all motherboard features, cards (not PCIE2.0), and drives to CPU/memory on IOH
      When to raise ICH Vcore:
        * I don't know if raising this can help in overclocking at all.  Possibly necessary in order to keep up with an overclocked northbridge?
      You know ICH Vcore is too high when:
        * unknown.  I wouldn't overvolt it too much though.
    PWM Frequency (default: 800)
      What it does:
        unknown
      When to raise PWM Frequency:
        * Overclocking beyond 4.2ghz
      You know PWM Frequency is too high when:
        * VREG approaches 85c
    VDroop (default: enabled)
      What it does:
        Safety feature designed by Intel to protect the chip from excessive wear from voltage spikes.  Enabling VDroop keeps actual voltage running below the VCore setting in BIOS
      What does disabling VDroop do?
        * Makes VCore setting the minimum value for actual voltage; CPU will run at higher voltages than what you set in BIOS.
        * Disabling VDroop is the same as enabling Load Line Calibration on other x58 boards.
      Why would I want to disable VDroop?
        * Some overclockers use it because it allows them to reach a high overclock while setting a lower VCore in BIOS: with VDroop disabled the running voltage is actually higher than what was set, whereas with VDroop enabled the actual voltage stays lower than the VCore setting.
        * It might help if you are pushing the bleeding edge.
    Diagnosing errors. What to do when...
    BSODs
        * BSOD "IRQL_NOT_LESS_THAN_OR_EQUAL" (I forget)
        * BSOD 101 "Clock Interrupt not received from Secondary Processor" Try raising VCore
        * BSOD 124 "general hardware failure" Try raising VTT
    LinX Errors
    If you get an error, you would have x identical (correct) results and 1 different one (the error):
        * If the incorrect result differs only slightly from the rest (numbers very close, same powers in Residual & Residual (norm), e.g. both around e-11), it is most likely that there's not enough vcore. In this case only a small vcore bump is usually needed to stabilize the system (alternatively, Vtt & GTL tweaking can sometimes fix this too).
        * If the wrong result differs a lot from the others (a different power, or even a positive power, in Residual or Residual (norm)), it might be 1) insufficient vcore (the error would then happen in the very first runs) or 2) some memory/NB instability (when it worked fine for, say, 10 minutes and then produced a different result).
      More serious LinX errors:
        * BSOD during testing (at the very first runs) is often caused by too low vcore
        * System hangs and remains hung: it is almost 100% not a CPU issue, but a memory or possibly NB issue
        * System reboots (with no hang prior to reboot and no BSOD) - a CPU issue, but not vcore related (insufficient PLL or Vtt I guess)
        * System hangs for a short while and then BSODs - once again NB or memory problem (but might be wrong Vtt / GTL setting as well)
        * System hangs and then just reboots - wrong Vtt (too low or too high) or GTL settings

  • JE HA: auto add/remove machines?

    Cloud servers such as Amazon EC2 have the ability to start/stop machines easily.
    Can I add/remove machines in HA groups automatically? And how do I decide when to do that?

    I haven't asked this yet, so: do you already have an application built on top of JE HA? How many nodes are there in the replication group? Are you observing good throughput in the application (for read and write operations) and, implicitly, reduced latency in response times?
    There's no guide or documentation material covering what exact information you should be looking for when deciding whether there's a need to add or remove a replication node.
    JE HA is a single master/writer - multiple clients/readers HA system, so adding a node to a replication group increases the ability of the group to do reads, and makes additional copies of the data, which makes it more secure. Though, it does nothing for writes.
    The decision to add a node to a replication group (generically, not necessarily for a JE HA rep group), aside from the infrastructure, network-partition, bandwidth, hardware-availability, replication-factor, etc. criteria, should be governed by the expected response time for a read operation (the time it takes a node, replica or master, to service a query). This is something the application should monitor, and when the read request's response time increases above an allowed limit, you could add a new (replica) node. Of course, all of this revolves around load balancing the read requests, which is a good use case for a Monitor node in the rep group that routes requests (both write and read requests) and balances the read requests between the replicas.
    Another criterion for deciding to add a new node can be the availability of enough replicas to satisfy the durability and consistency policies.
    CPU/mem/disk info are outside of JE's ability to obtain -- there are system tools that you can use to collect such info. Replication related information can be obtained in JE through the ReplicatedEnvironmentStats and through the monitoring and diagnostic JMX MBeans RepJEMonitor and RepJEDiagnostics (or the RepJEJConsole JConsole Plugin).
    Though, what you really need to do at this stage is to see what the latency and throughput is, at the application level. Plus there's the question of load balancing, so you need to be aware that you have to spread the reads among different nodes.
    You should also check your system stats. Are the replicas loaded to the max or not? If they're very loaded in terms of CPU and I/O, maybe you need more rep nodes.
    As for removing: you could remove a node from a rep group if, for example, the master spends too much time achieving a durability ack policy (ReplicatedEnvironmentStats.getAckWaitMs() might be useful in this case) or if the replica node is lagging the master too much compared to the other replica nodes (ReplicatedEnvironmentStats.getTrackerLagConsistencyWaitMs() is relevant).
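    To make that concrete, here is a rough sketch of the kind of monitoring hook this implies. The two stats getters are the ones named above; the class name, thresholds, and everything else around them are placeholders you would tune for your own SLA:
    import com.sleepycat.je.StatsConfig;
    import com.sleepycat.je.rep.ReplicatedEnvironment;
    import com.sleepycat.je.rep.ReplicatedEnvironmentStats;
    public class RepGroupSizingCheck {
        // Placeholder thresholds: tune these against your own latency targets.
        static final long ACK_WAIT_MS_LIMIT = 500;
        static final long LAG_WAIT_MS_LIMIT = 1000;
        static void check(ReplicatedEnvironment repEnv) {
            ReplicatedEnvironmentStats stats = repEnv.getRepStats(StatsConfig.DEFAULT);
            if (stats.getAckWaitMs() > ACK_WAIT_MS_LIMIT) {
                // The master is waiting a long time for durability acks; a lagging
                // replica may be a candidate for removal (or for investigation).
                System.out.println("High ack waits: " + stats.getAckWaitMs() + " ms");
            }
            if (stats.getTrackerLagConsistencyWaitMs() > LAG_WAIT_MS_LIMIT) {
                // Replicas are waiting a long time to satisfy read consistency;
                // adding a replica (or rebalancing reads) may help.
                System.out.println("High consistency waits: "
                        + stats.getTrackerLagConsistencyWaitMs() + " ms");
            }
        }
    }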
    Regards,
    Andrei

  • Is there any impact that apply Oracle CPU on standby and primary DB

    We plan to apply the Oracle April CPU on the standby DB this weekend, then apply the same patch on the primary DB next weekend. If I apply the patch on the standby and then enable redo apply between primary and standby, is there any impact on the primary DB? Theoretically, I don't think so. However, both DBs are production, and I want to make sure there is no impact from applying the CPU at different times. Otherwise, I would have to patch both DBs at the same time. Please advise. Thanks in advance.

    853153 wrote:
    I hope you read other people's posts carefully. Don't pretend to be an expert so that you can say whatever you want. This is the tech forum for Oracle technical people to ask questions, seek help, and so on. How do you know I didn't review document 278641.1 before I asked my question? Document 278641.1 only provides info on how to apply the patch on primary and standby; it doesn't say anything about applying the same patch to both DBs in a Data Guard configuration at different times. I just want to make sure that if I patch both DBs one week apart, it won't affect my primary database. Is there anything wrong with that? If you play the expert in this forum, please help people, don't look down on them.
    I suggest you review [url http://www.catb.org/~esr/faqs/smart-questions.html]How To Ask Questions The Smart Way.
    When you ask for help, state the platform, operating system, Oracle version and patch level, no matter whether it seems important to the question or not. It most often is important. You may not know the difference between Itanium and PA-RISC, but that is important for Oracle and important for someone doing administrative work on an Oracle database to know. You should also learn to search the internet for answers to common questions. Everyone here wants to help, but few want to spoon-feed.
    When you have any kind of standby, that constrains what you can do to a primary, if you want the standby to work properly. You need to understand how things get from one to the other, and how to decide when to simply rebuild the standby from scratch - and that decision is based on your configuration and requirements. You don't want to be in some undefined configuration for any period of time.

  • FWSM with contexts - Broadcast storm impact CPU

    Hi,
    we have a FWSM (4.1(5)) configured with several contexts.
    The other day we had a broadcast storm in one VLAN connected to one FWSM context, and all contexts were impacted with loss of service.
    We could see that CPU in the impacted context went to 50-60%, but services allocated in the other contexts were impacted as well.
    We have Resource Classes implemented, but there is nothing in them about CPU usage (only connections, xlates, etc.).
    Any idea how to protect contexts against a broadcast storm or high CPU usage in one context?
    Thanks a lot
    Felipe

    Hi Felipe,
    Unfortunately, the FWSM's CPU is not virtualized across contexts like the conn tables, xlate tables, etc are. High CPU caused by traffic in one context will indeed affect traffic on other contexts on the same physical firewall, which is a limitation of the architecture.
    -Mike

  • Impact of BIA 7 on BI.7 sizing in terms of CPU requirements

    Hello,
    We are finalizing the sizing of our future BI.7 environment.
    As designed, the BI.7 server capacity will be determined by the SAPS/memory Quick Sizer results.
    But in addition, to be on the safe side, we also have to keep in mind that our BI.7 database server must be SAPS-scalable.
    In concrete terms, this means buying a much more expensive server than expected.
    Besides, we are also planning to deploy BIA next year. So we figured that since the Accelerator helps reduce BI workload (data reads, rollup, change run, etc.), we won't need that much CPU in the future.
    Based on your BI Accelerator experience, what do you think of those assumptions?
    Thank you for your attention and your help.
    Best Regards.

    Hi Raoul
    Your assumptions are legitimate in regard to BWA removing some of the workload from BW. However, before you make those assumptions you need to be committed to purchasing BWA next year. If for some reason your company does not purchase BWA next year, and you made sizing decisions for the BW environment based on a BWA purchase, you might be grossly undersized.
    If you are going to factor those assumptions into sizing BW, you will also have to understand how you plan to use BWA when you get it. For instance, if you only plan to use BWA for a handful of cubes with poorly performing queries, you're not going to reduce as much of the BW workload as you would had you put every InfoCube in BWA and run with zero aggregates. There's a lot of thought which needs to be put into this up front. Considering you are just beginning with BW, I would personally recommend sizing your BW without any thought of purchasing a BWA. This is certainly the safest.
    Regards,
    Josh
    SAP NetWeaver RIG

  • High cpu impact and many Java exceptions

    Hi,
    Our development portal is behaving strangely. It's running at normal speed, but with much higher CPU usage than normal. A quick trace showed that the log files are filling up fast with the following error message:
    System.err sap.com/irj Guest 19 #SAPEngine_Application_Thread[impl:3]_7##0#0#Error##Plain###com.sap.engine.services.httpserver.exceptions.HttpIOException: Read timeout. The client has disconnected or a synchronization error has occurred. Read 15205812 bytes. Expected 45455962.
    I have no idea what causes this error, or for how long it has been a problem.
    Any clues on what component may be failing, and how to debug and solve this so the development portal can return to a normal state?
    Thanks in advance guys,
    J. Daugaard

    ...further research and learning. 
    Here is a good tutorial on Java.  Evidently, what I called "jobs" is referred to as processes and threads in this tutorial.  The tutorial states that multithreaded execution is an essential feature of the Java platform, which explains the avalanche of threads I observe.  However, I am still stumped as to what causes the avalanche of threads and the excessive CPU consumption for such a mediocre task.  Also, the system log says that IcedTea has an error stating "Application title was not found in manifest. Check with application vendor".  I interpret this to mean that IcedTea does not recognize some application. What application? ... the applet?

  • Java threads and WinLogon processes taking CPU!

    We are seeing a strange problem. We have a multi-threaded Java application
    that is deployed on a Microsoft Windows 2000 server (SP4) with Citrix
    MetaFrame XP (Feature Release 2). When this application starts, we see
    multiple WINLOGON.EXE processes (20 plus), each taking up 1-2% of CPU -
    this raises the CPU usage on the server significantly, impacting performance.
    We have tested with JDK 1.3, 1.4, and 1.5 and see the same issues. If we
    run the Java application in a single thread, we don't see the same issue.
    Has anyone seen this problem before? Any suggestion on how this can be resolved?
    Thanks in advance!

    Thanks for your replies. This is a Citrix environment where there are 50-plus
    users logged in, and there are 50-plus instances of Winlogon.exe always
    running. Most of the time the CPU usage of these processes is 0.
    We tried a multi-threaded program that lists files in a directory every few
    seconds. There are 40 plus servers in the farm that we are deploying this
    application on. We are seeing this problem on only some of the servers.
    If we run a single thread from main(), we don't see the issue. But if we spawn
    10 threads doing the scans periodically, we notice that as soon as all the threads
    start, the WinLogons appear to start taking up CPU. When we stop the Java
    program, the WinLogon processes drop in CPU usage.
    Is it possible that Java and WinLogon share some DLLs that could be setting
    the WinLogon processes off?
    We have tried running the single-threaded and multi-threaded programs around
    the same time, and the correlation with the WinLogons is clearly visible. If we add
    a sleep() soon after we start a thread, the WinLogons seem to kick in later.
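    For reference, the multi-threaded test program described above is roughly the following. The scan path, thread count, and interval are placeholders, and the anonymous Runnable style matches the JDK 1.3-1.5 era we tested on:
    import java.io.File;
    public class DirScanRepro {
        public static void main(String[] args) {
            final File dir = new File("C:\\temp");  // placeholder scan target
            for (int i = 0; i < 10; i++) {          // 10 scanner threads, as in our test
                new Thread(new Runnable() {
                    public void run() {
                        while (true) {
                            dir.listFiles();        // the periodic directory listing
                            try {
                                Thread.sleep(5000); // rescan every few seconds
                            } catch (InterruptedException e) {
                                return;
                            }
                        }
                    }
                }).start();
            }
        }
    }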

  • Memory Speed loss with CPU upgrade 2.6 - 3.0

    I have an 875P Neo board with 2 banks of 512MB DDR400.
    I have just upgraded my CPU from 2.6GHz (800 FSB) to 3.0GHz (800 FSB).
    My BIOS is ver 1.8 (8/9/2003).
    I have found that my memory speed timings have dropped by 30% since the upgrade. Will I get better results if I update my BIOS to ver 2.0 (the latest BIOS)?
    SiSoft Sandra's memory benchmark shows the memory timings to be about 25% lower than expected.
    Can anyone help??
    Thanks
    M Miller

    The 875P Neo and 865PE Neo2 boards are kissing cousins; they have a lot in common.
    It would help if you created a signature and put all your hardware information in it. Things like the type of video card and the type of RAM all have an impact on what performance modes you can operate under. See the Rules at the top for what to include and how to do it.
    You should also read the P4 Neo Unofficial Guide and the Moan Guide.
