Weighted fair queue question

When I configure WFQ on an interface, I see a parameter called "available bandwidth" in the "show interface" output which is less than the actual interface bandwidth. And if the traffic exceeds the available bandwidth, it will result in output queue drops.
I also found that changing the interface bandwidth to any arbitrary value changes these parameters as well (e.g. if I have a 1 Mbps interface and I configure the bandwidth statement as 10 Mbps, the available bandwidth shows around 8 Mbps). Does that mean it will not show drops until the traffic exceeds 8 Mbps?
Please help me understand how this is calculated and how it affects interface parameters like queueing, etc.

Hello,
If you use WFQ configured on an interface, the configured "bandwidth" has no influence on the queueing behaviour. Software-based queueing (like WFQ in a router) only comes into play when the hardware queue is full, i.e. when there is more traffic than the interface can send at that moment. You can never send more traffic than the layer 1 clock rate allows you to send - no matter what you configure with the interface command "bandwidth". The "available bandwidth" shown by "show interface" is simply the portion of the configured bandwidth that queueing tools may reserve (75 percent by default), which is why a configured 10 Mbps shows roughly 8 Mbps available; it is not a limit on what the interface can actually transmit.
If you configure CBWFQ through a policy-map, then the configured interface "bandwidth" is used to calculate the per-class bandwidth for percent-based CBWFQ. Again, this only makes sense if you set the interface "bandwidth" to match the actual clock rate.
Hope this helps! Please rate all posts.
Regards, Martin

Similar Messages

  • QOS fair queue on fast ethernet

    I started with a company and they have fair queue 64 256 0 configured on Fast Ethernet. I believe that fair queueing was only meant for WAN links of 2 Mbps or less. I have noticed many output drops and thought they were related to fair queueing tail drops. Was I correct to remove fair queueing and just default to FIFO?

    Totally agree that simple is best. Simplifying the configuration and removing unused commands may also improve performance, since less CPU processing is required.
    Moreover, if there are not many CPU-intensive processes (tunnels, IPsec, IP accounting, etc.), the 3660 should be good enough.
    I can't comment on why management purchased a new model (but not the high end) to replace the 3660; there should be some reason behind it. Don't worry about the new hardware, just try to explore the new features and the benefits the new box can provide to your company.
    Did you check the CPU load on the 3660? If it is not high (not above 50%), then it shouldn't be a hardware problem.
    Hope this helps.

  • JMS queue question

    Hi all,
    I am using Weblogic Application Server and say I have 1000 messages in a JMS Queue say "Q1".
    Q1. How are these messages picked up from the queues? i.e. by a thread etc.
    Q2. If they are picked up by threads then who configures these threads? Are they default threads created by the Weblogic Application Server or are user-created threads?
    Q3. Also, if 20 threads are involved in picking messages from the queue and one of them snaps while picking up a message, then what will happen? And what will be the solution?
    Thanks.

    Q1/Q2*:
    The answer to your question is "it's provider-specific".
    You will need someone who explicitly works on WebLogic to provide specifics, so you may want to use a WebLogic-specific forum instead of a generic JMS forum.
    I'll try to answer this in very generic terms that should apply to all providers:
    If you are receiving messages with an MDB: there is a thread pool associated with the resource adapter which plugs the JMS provider into the server. This is generally configured as part of the application server configuration. That pool dictates the maximum number of threads which can call the same MDBs at the same time.
    If you are receiving messages with a Servlet: in this case, you are probably calling receive within the servlet. That will use the thread provided by the web stack that is executing the servlet.
    If you are receiving messages in an AppClient or standalone client: you are using your own thread if you call receive(), or a thread provided by the provider if you are calling onMessage().
    In all cases, the provider may be allocating additional threads "beneath the covers" to process those messages.
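    As a concrete illustration of the MDB case, here is a minimal sketch using standard EJB 3.x annotations. It is deliberately provider-neutral: the binding to the actual queue and the size of the instance pool live in provider-specific configuration (which is exactly the point made above), and the class name is made up.

        import javax.ejb.ActivationConfigProperty;
        import javax.ejb.MessageDriven;
        import javax.jms.JMSException;
        import javax.jms.Message;
        import javax.jms.MessageListener;
        import javax.jms.TextMessage;

        // The container (not your code) pulls messages off the queue and calls
        // onMessage() on a pool of MDB instances. The pool size is set in the
        // app server / resource adapter configuration, not in this class.
        @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationType",
                                      propertyValue = "javax.jms.Queue")
            // The binding to the real destination (e.g. "Q1") is provider-specific:
            // a deployment descriptor, a provider activation property, or the
            // standard destinationLookup property in newer EJB/JMS versions.
        })
        public class Q1Listener implements MessageListener {
            @Override
            public void onMessage(Message message) {
                try {
                    if (message instanceof TextMessage) {
                        System.out.println("got: " + ((TextMessage) message).getText());
                    }
                } catch (JMSException e) {
                    throw new RuntimeException(e); // see the Q3 discussion below
                }
            }
        }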
    Q3*:
    I'm not quite sure what snaps means. Threads run until they exit. I'm going to guess that the question is "what if the MDB or code throws a RuntimeException". If you mean something different, please supply a more specific description of the error.
    This one depends on whether or not you are using transactions and how you are receiving those messages.
    If you are using transactions in an MDB: the transaction will rollback and the message will be redelivered to another consumer.
    In all other cases, it depends on the specifics (I was going to list them but it seems like too much work for a Friday afternoon).
    If you are using a non-transacted or non-MDB method to retrieve the messages, let me know and I can tell you how, per the JMS and J2EE specs, it should work. I'll need to know acknowledge mode/transaction, type of client and if you are calling receive() or onMessage().
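    To make the rollback/redelivery behaviour concrete outside of an MDB, here is a minimal sketch using a transacted JMS session; an MDB with container-managed transactions behaves analogously when onMessage() throws a RuntimeException. The connection factory and queue are assumed to come from JNDI or other provider-specific setup.

        import javax.jms.*;

        public class TransactedReceive {
            static void receiveOne(ConnectionFactory cf, Queue queue) throws JMSException {
                Connection con = cf.createConnection();
                try {
                    con.start();
                    Session session = con.createSession(true, Session.SESSION_TRANSACTED);
                    MessageConsumer consumer = session.createConsumer(queue);
                    Message m = consumer.receive(5000);
                    if (m == null) {
                        return;                 // nothing arrived within 5 seconds
                    }
                    try {
                        handle(m);
                        session.commit();       // only now is the message really consumed
                    } catch (RuntimeException e) {
                        session.rollback();     // message goes back on the queue and is
                                                // redelivered (getJMSRedelivered() == true)
                    }
                } finally {
                    con.close();
                }
            }

            static void handle(Message m) {
                // business logic; throwing a RuntimeException here simulates a failure
                System.out.println("processing " + m);
            }
        }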
    -- Linda

  • What are fair interview questions

    I had 2 recently I didn't think were fair.
    1. How would you recover a table to a point in time from a truncate without disturbing existing prod?
    I said you had to RMAN restore elsewhere, roll forward with arc logs to the last point before the trunc and then export or CTAS back to prod.
    2. How do you deal with index contention on concurrent inserts?
    Without any further info I said evaluate the index in use to see if it's the correct index type.
    For 1, the interviewer said, well, really it would be faster if you used your Data Guard database and flashed back. (I could tell he wasn't impressed.)
    For 2, the interviewer said, well, the answer I was looking for was: if it's for a sequence-based table in RAC, you could up the cache to a few thousand.
    I could have got into arguments here with the interviewer but thought better of it, but next time I will argue.
    Where did he get the replication features and RAC sequences from? If they're going to randomly throw features into the equation after the question, then I'm bringing Sybrand, Tom Kyte and Jonny Lewis with me on a conference call (think of it like phone-a-friend; if I win, you split the pot). "Where did you get those guys from?" the interviewer will ask. "The same place you got your Data Guard and RAC from," I'll answer.
    What's fair and not fair in an interview?

    >
    I had 2 recently I didn't think were fair.
    1. How would you recover a table to a point in time from a truncate without disturbing existing prod?
    I said you had to RMAN restore elsewhere, roll forward with arc logs to the last point before the trunc and then export or CTAS back to prod.
    2. How do you deal with index contention on concurrent inserts?
    Without any further info I said evaluate the index in use to see if it's the correct index type.
    What's fair and not fair in an interview?
    >
    All's fair in love and interviews. What position were you interviewing for?
    I don't see anything wrong with those questions - they are both fair questions. What is it about them that you think is unfair?
    What other questions were asked before you were asked those two?
    I would offer several suggestions:
    You should NOT offer an answer if you don't have enough facts to base the answer on; that's called 'guessing'. You either need to get the missing facts that you need from the person asking the question or make reasonable assumptions and base your answer on them. Always include your assumptions in the answer you provide.
    Question #1 - you could have modified your answer to include assumptions and said something like
    >
    One way to do that if you use RMAN and archive logs are available is to restore the table elsewhere, roll forward with arc logs to last point before the trunc and then export or CTAS back to prod.
    >
    That answer provides information about the assumptions you made. They may not use RMAN and they may not be in archive log mode. You either need to find out those facts or include the assumption about them in your answer.
    It also answers how you 'could' do it instead of how you 'would' do it; since how you 'would' do it might depend on whether dataguard is available. And you are free to include in your answer a phrase like - 'if dataguard is not available'.
    Question #2 you said
    >
    Without any further info I said evaluate the index in use to see if its the correct index type
    >
    As I said above you either need to get further info or tell the interviewer what info you are basing your answer on.
    Your answer suggests a good solution but doesn't include enough info to tell if you really know WHY the index type might matter.
    My answer would probably begin with 'It depends on what is causing the index contention.' Then I would either ask for more information (e.g. type of index) or I would make a reasonable assumption and base my answer on that.
    For example your answer is absolutely correct and easily defended (if they decide to argue) by adding some more information to what you said.
    >
    One cause of index contention with concurrent inserts is the use of a BITMAP index in an OLTP system. A bitmap index will serialize DML operations and can cause significant performance issues.
    >
    That is an absolutely correct answer even if it isn't the answer the interviewer wanted; you can defend it.
    That puts the onus back on the interviewer to provide more information if they want a different answer.
    Or you can ask questions to elicit more information if you want (though I don't recommend that once you have given a correct answer) to cover other causes of contention:
    1. what type of index is it?
    2. contention can be caused by a value for MAXTRANS that is too low.
    3. contention can be caused by serialization processes used by the INSERT operation (complex trigger activity, sequence number generation with NOCACHE).
    >
    If theyre going to randomly throw features into the equation after the question,
    >
    That is EXACTLY why you have to make it clear to the interviewer what assumptions you are basing your answer on. That beats them to the punch. Give a correct answer using your own assumptions. Then if they throw in new information (replication, RAC) it becomes a new question rather than destroying the question you answered.
    If you have read enough of these forum threads you should know by now that when it comes to Oracle most answers begin with
    >
    It depends . . .
    >
    If an interviewer doesn't provide enough information for you to provide a good answer then you are free to use your own reasonable information to base your answer on. Just be sure to tell them what those assumptions are as part of your answer.
    Remember, even seemingly simple and obvious questions 'depend' on the assumptions you make.
    Quick - how many bytes does the string 'CAT' take to store in a VARCHAR2(20) column?
    Depends on the character set, doesn't it? If you aren't told the character set, use any set you like and base your answer on it. Using ASCII or UTF8 it will take 3 bytes.
    Wait - is that right? Or was the interviewer asking about the actual BINARY representation which would then include a length byte?
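    To make the character-set point concrete, here is a small Java check of the byte count of 'CAT' under different encodings (the class name is made up); this is essentially what Oracle's LENGTHB function reports for the stored column data, leaving aside any length prefix in the physical row format:

        import java.nio.charset.Charset;
        import java.nio.charset.StandardCharsets;

        public class CatBytes {
            public static void main(String[] args) {
                print(StandardCharsets.US_ASCII);   // 3 bytes
                print(StandardCharsets.UTF_8);      // 3 bytes (ASCII characters stay 1 byte each)
                print(StandardCharsets.UTF_16BE);   // 6 bytes (2 bytes per character)
            }

            static void print(Charset cs) {
                System.out.println(cs + ": " + "CAT".getBytes(cs).length + " bytes");
            }
        }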

  • MDB and MessageConsumer Queue Questions

    (1) Does a MessageConsumer object, when called on its onMessage() method, provide the same JMS queue processing as a Message Driven Bean (MDB) called on its onMessage() method? That is, the message remains on the queue until the onMessage() method completes in both cases.
    (2) Does the same apply to a MessageConsumer's receive() method? That is, the message remains on the queue until the next receive is called?
    Thanks
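    For reference, here is a minimal JMS sketch of the two consumption styles being compared. With AUTO_ACKNOWLEDGE the provider treats a message as successfully consumed once onMessage() returns normally, or once receive() hands it to the caller, which matches the behaviour described above; the class and method names are illustrative, and the connection factory and queue are assumed to exist.

        import javax.jms.*;

        public class ConsumeTwoWays {

            // Style 1: asynchronous, like an MDB. With AUTO_ACKNOWLEDGE the message is
            // acknowledged only after onMessage() returns without throwing.
            static void consumeAsync(ConnectionFactory cf, Queue queue) throws JMSException {
                Connection con = cf.createConnection();
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer = session.createConsumer(queue);
                consumer.setMessageListener(msg -> System.out.println("async got " + msg));
                con.start();   // delivery begins; a provider thread calls onMessage()
            }

            // Style 2: synchronous. With AUTO_ACKNOWLEDGE the message is acknowledged
            // when receive() returns it to the caller.
            static void consumeSync(ConnectionFactory cf, Queue queue) throws JMSException {
                Connection con = cf.createConnection();
                con.start();
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer = session.createConsumer(queue);
                Message m = consumer.receive(1000);   // null if nothing arrives within 1 second
                System.out.println("sync got " + m);
            }
        }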

    Unfortunately, MDBs do not currently support running on a separate thread queue. They run on the default queue.
    -- Rob
    Nicole wrote:
    Hi Folks,
    As we got problems with thread deadlocks using JMS, we decided to define our own thread queue to be used by our application. The documentation describes that you need to generate all your EJBs with "java weblogic.ejbc -dispatchPolicy xyz"; this connects your EJB to the named thread queue, which you will need to add to your config.xml.
    What I could find out is that this works fine for stateful or stateless session beans, but it looks like message-driven beans ignore this option.
    So here are my questions:
    Which thread queue is used by message-driven beans?
    How can I change the thread queue?
    Many thanks,
    Nicole

  • Priority Queue Question

    If compareTo returns negative, does that mean that the object it was invoked on is higher or lower priority?
    Also in a priority queue, is it invoked on the inserted object or on the object already in the queue?

    > If compareTo returns negative, does that mean that the object it was invoked on is higher or lower priority?
    If a.compareTo(b) returns a negative value, that means a is "less than" b by whatever semantics apply for the objects in question.
    Whether it means "higher" or "lower" priority depends on how that class is implemented. If you're implementing it, you can use it whichever way you want. Just make sure it's at least somewhat logical and that you document your decision.
    If it's a third-party or core API class, then look at the API docs and/or source code, or just write a little test program to find out how that particular class interprets negative/positive vis-a-vis "higher"/"lower" priority.
    > Also in a priority queue, is it invoked on the inserted object or on the object already in the queue?
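    For the core-API case, java.util.PriorityQueue is an easy test of both points: its head is always the least element according to compareTo (or a supplied Comparator), so a negative compareTo result means "comes out of the queue first". A tiny check (class name is made up):

        import java.util.PriorityQueue;

        public class PqOrder {
            public static void main(String[] args) {
                PriorityQueue<Integer> pq = new PriorityQueue<>();
                pq.add(5);
                pq.add(1);
                pq.add(3);
                // Integer.compareTo: a smaller value returns a negative result, i.e. is
                // "less than", and PriorityQueue always dequeues the least element first.
                while (!pq.isEmpty()) {
                    System.out.print(pq.poll() + " ");   // prints: 1 3 5
                }
                // As for which object compareTo is invoked on during an insert: that is an
                // internal detail of the heap (the new element is compared against elements
                // already stored), so compareTo should be written to give consistent results
                // no matter which side it is called on.
            }
        }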

  • Loads of Queues question

    The current system I am working on has over 100 queues in a cluster.
    I'm just trying to accomplish the same steps that I have in my pre-JMS MQ code in the JMS way. I want to be able to connect just to our main cluster server box and send messages to one or more of these queues.
    With JMS do I need to have a JNDI name entry for each of the over 100 queues, or can I just specify the queue manager and connect that way?
    If so, how would I do this? The application(s) could grow to include upwards of 3000 queues on different machines and I don't relish the notion of having an entry for each. Due to issues, publish/subscribe is not an option. Thanks [email protected]

    Hi Ben,
    Somewhere, somehow you will need to configure a separate JNDI name for each queue. Otherwise, how will applications find them?
    With WebLogic JMS there are many methods for creating queues, which should help alleviate the drudgery. With MQ, I have no clue. If your question is specific to MQ I suggest you post to an IBM forum.
    With WebLogic JMS you can create queues programmatically via direct calls to the public WebLogic management JMX MBean APIs, similar to any other configurable in WebLogic. Alternatively, you can call helper methods on a helper class we supply that will in turn call JMX for you - weblogic.jms.extensions.JMSHelper. Alternatively, you can use script commands that call the weblogic.Admin tool (which ultimately calls JMX). Similarly, you can use a popular scripting tool called "wlshell" (which ultimately calls JMX). This tool is available on dev2dev.bea.com. Finally, you can simply edit the config.xml.
    In WebLogic, when you have many queues, it is helpful to configure a "Template" destination that all of the queues inherit from. This puts common configuration for the queues in a single place. Changes to the template are automatically reflected in the destinations - even if they occur at runtime.
    Tom, BEA
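    For illustration, here is a minimal JMS 1.1 sketch, not WebLogic-specific, of one way to avoid a JNDI lookup per destination: look up a single connection factory and then address queues by their provider-specific names with Session.createQueue(). Whether and how that string form works is provider-specific, and the destinations must already exist on the server; the JNDI name and queue names below are made up.

        import javax.jms.*;
        import javax.naming.InitialContext;

        public class ManyQueueSender {
            public static void main(String[] args) throws Exception {
                InitialContext jndi = new InitialContext();
                // One JNDI lookup for the factory; the queue names can come from anywhere
                // (a config file, a database, ...), not from JNDI.
                ConnectionFactory cf = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory"); // hypothetical JNDI name
                String[] queueNames = { "APP.QUEUE.001", "APP.QUEUE.002" };   // hypothetical provider-side names

                Connection con = cf.createConnection();
                try {
                    Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    for (String name : queueNames) {
                        // createQueue() does not create anything on the server; it only builds
                        // a client-side reference from a provider-specific queue name.
                        Queue q = session.createQueue(name);
                        MessageProducer producer = session.createProducer(q);
                        producer.send(session.createTextMessage("hello " + name));
                        producer.close();
                    }
                } finally {
                    con.close();
                }
            }
        }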

  • Ironport ESA queue question?

    Greetings Experts,
    Can anyone tell me how big the ESA queue is? Let's say my Exchange server is down and I'm still receiving emails from the outside; IronPort will intercept these messages, but since Exchange is down, will the messages stay in the queue to be delivered? How does the ESA manage these messages?
    Thanks

    See the following eKB article --->
    https://ironport.custhelp.com/app/answers/detail/a_id/695
    By default, mail is queued for 72 hours (259200 seconds) OR 100 retry attempts before it bounces to the original sender. 
    This setting is configurable from the command line (CLI): type "bounceconfig" and edit the "default" settings.  Also, you can modify this from the GUI interface by going to "Network > Bounce Profiles" and click on the Default profile.
    Also, the queue could fill up if there is too much mail. However, if the system reaches its storage limit, it will soft bounce further attempts by other mail servers to deliver more messages. This ensures that no messages will get lost, as these mail servers will reattempt message delivery as well until the ESA accepts messages again.
    Note: If you plan to shut down your internal mail server for maintenance for a longer period (more than a couple of hours), best practice is to suspend the incoming listeners on your Email Security Appliances as well (CLI: suspendlisteners). As mentioned before, in this case any connection attempts will be soft bounced and retried later. This way, you leave the task of storing the messages to the sending mail servers, which prevents the mail queue on your email appliances from filling up quickly. No messages will be lost. Once you have your internal mail server back in service, also resume the listeners on your Email Security Appliances (CLI: resume) to allow delivery from remote hosts again.
    I hope this helps!
    -Robert
    (*If you have received the answer to your original question, and found this helpful/correct - please mark the question as answered, and be sure to leave a rating to reflect!)

  • N2000 weighted load balance question

    Hi --
    I have a question about how to use a weighted load balancing configuration to support a failover condition.
    My goal is to have an active and a standby configuration. This is not a web application, so it doesn't follow the same type of rules. In particular, I have a situation where there are multiple clients establishing long-term connections to a server. If the server goes down, I would like the LB to close the connections and when the clients try to reconnect it will route to the standby system.
    My question is: If I configure the 'weighting' for the active server to be 1 and for the standby server to be 0, will it always result in incoming connections being routed to the active server (and never the standby server)? If the active server is down, will it still route to the standby even though the weight is zero?
    Any thoughts/ideas/suggestions are greatly appreciated.


  • Render Queue Question for After Effects CS5.5

    What are the best render settings for my laptop, an AMD-300 APU with Radeon HD graphics at 1.30 GHz, with 2.00 GB RAM (1.60 GB usable?)? It's also 64-bit.
    I am rendering some special effects with CC Mr. Mercury and CC Vector Blur. I tried using just the standard settings and it goes fine until it gets up to the CC Mr. Mercury and Vector Blur, then it slows the heck down and takes forever to finish.
    In the Render Queue what type of settings should I use to make it go faster and still make the effects look good?
    Thanks Adrian

    FYI, I have a zillion year old MacBook (plastic computer - remember those) that meets the minimum system requirements of an Intel processor and 2GB ram. It's old and runs CS6 just fine as long as I just do the basics and keep the preview resolution down to about 1/4. I would not try and render production work on that system because it's not up to the task and I'd never expect it to be a production machine. I do use it often when I don't want to drag an expensive machine around with me on my travels.
    If you want a production machine you have to pay for it. I have a fully decked out new MBPro R that I use as my primary AE design machine but I still go to a desktop system for large projects that have to be produced on a deadline. If I were a hobbyist I'd live with what I could afford. Because I am a professional and I charge for my services I adjust my rates to pay for the gear that I need to do the job. It's simple economics. Most film makers I know have no idea of how to run a business. Most are starving most of the time. Only a few understand that doing what we talk about on this forum is either an expensive hobby or a business. If it's a business you have to learn how to run a business before you learn how to make a movie. If it's a hobby then you have to have the means to support it. It's no different than skiing, biking, or building model airplanes. If you can't afford new gear you have to make do with what you've got.

  • Mail Queue question

    I can't seem to get the mail queue to show the email that the server received.
    I changed the mail store location to another hard drive on the same server, and I moved the database location to the same hard drive as the mail store.
    Does that have any effect on the Mail Queue?
    When is the mail queue supposed to show mail (incoming, outgoing, or both)?
    The server sends and receives mail fine, so I am sure it is something small, but I don't know what.
    Thanks in advance to anyone that can help me.

    Unless you have a very busy mail server, you probably will not see anything. That queue window is not 'live' but merely refreshes itself every 15 seconds (IIRC) - keep hitting 'refresh' and you should pick up something. If everything is working normally then mail actually spends very little time in a 'queue' as it passes from one process to another. Normally, you might just see deferred delivery mail there.
    -david
    Link: http://developer.apple.com/documentation/Darwin/Reference/ManPages/man8/qmgr.8.html

  • Subvi with queues question

    Hi,
    If you have a subvi that uses queues in a producer/consumer pattern, for example, and you do not release the queues at the end of the subvi, then the next time you use the subvi, will it see the data that was left in the queue from the previous usage of the subvi? In other words, I basically have a subvi that uses queues inside of it. This subvi is placed in a for loop. On the first iteration of the for loop, the subvi creates a queue and then ends. I do not release the queue. The next time the subvi executes, which is in the second iteration of the for loop, will the queues now have data from the previous usage of the subvi in the first iteration of the for loop?
    It seems that this is the case, but I am not sure. I just wanted to ask others in the LabVIEW community if this is the case. However, I noticed that if I just run the subvi as a regular vi, meaning running it once, waiting for the vi to terminate, and then running it again, the release queue does not seem to be necessary.
    -Tim

    Just me pondered:
    Has anybody else noticed when dealing with huge queues that when the vi goes idle without releasing the queue, the next time the vi is started it runs slower each time, as if it didn't fully release the queue's memory? Not until LabVIEW is shut down and restarted does it seem to release the memory.
    Yep!
    Since allocating memory and other resources takes a LONG time, LV will defer that work until there is no chance the resource will be used again. That happens at LV exit.
    For the most part, if we just close the resource, LV can handle things better.
    I seem to remember a (deallocate as soon as possible) VI or a property node (am I just dreaming again?) to handle the situation where a dynamically loaded VI uses a big memory footprint but is unloaded after returning.
    No, not dreaming, just remembering. See here:
    http://forums.ni.com/ni/board/message?board.id=170&message.id=143070&query.id=85322#M143070
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Advanced Queues question

    Hi.
    Can anyone tell me how soon after I place an item into an advanced queue the notification procedure (which was registered via dbms_aq.register to process the items in the queue) is called, once that item is committed?
    I.e., what delay does Oracle have? Is it configurable? I am having trouble finding information on the timing of the operation.

    I have read through as much of the documentation as I can find, now including your link, rp0428 (thank you, by the way); however, information on timing is somewhat elusive.
    I'll be creating items in the queue in an after-statement trigger (this way, any rollback will also roll back the enqueued item). The plan is to kick off processing once the commit has occurred (to look at and take action on the latest information updated or inserted).
    My concern is if the record I'm interested in is updated (I don't think I need to worry about inserts) several times, with commits each time, in quick succession. That would create several items in the queue (which is correct, I want it to fire for each commit).
    The notification procedure I register may not know (by the time it is called) that several updates have occurred. I know I could possibly include a copy of the data in the queue, but the design I'm trying to follow can't use that information (in order to meet some design goals).
    If the notification (this is all via PL/SQL, by the way, not email or Java) takes a few seconds, I'll have a problem. If it is almost instant, that's good; information on the speed of that is what may make or break this design.

  • Quick UCM/Unity 7.x Hold Queue Question

    I was wondering if it was possible to create an automated "hold queue" in either UCM or Unity?  Specifically, we would like to have a call automatically placed in a queue if no one picks it up in a given amount of time.  We would prefer to have some kind of greeting also played to notify the callers they are being placed in the queue.  The other challenge would be notifying the individuals fielding the calls that there were calls in the queue.
    Can this be done natively in UCM or Unity or is this advanced functionality I would only get in say Contact Center?
    Will rate posts.
    Thanks!

    Hi
    Realistically this is the kind of thing you need Contact Center for I'm afraid.
    You can do *some* of this with Call Handlers in Unity / Unity Connection - but this basically goes as far as playing a message to callers, and then sending the call somewhere else (for example back around a hunt group).
    A relatively common implementation that I've done for customers is to:
    1. Route the call into a hunt group.
    2. If the call isn't picked up by the hunt group it diverts to a Unity Connection Call Handler.
    3. The caller is then played a message, and given the choice of continuing to hold (call is sent back to the hunt pilot) or leave a VM (call is transferred to a VM box).
    Queuing it isn't, but it does some of what you are after.
    It's nowhere near as sophisticated as what you get with Contact Centre. If you want real queuing, with real stats and real agent availability, then UCCX is the way to go.
    HTH. Barry

  • A fairly simple question with a quick answer :)

    Hi there,
    So I have just purchased a Mac Pro to edit on.
    I currently run Final Cut Studio on a MacBook Pro. I purchased Final Cut Studio in Feb 06.
    I need to uninstall FCS from my MacBook Pro. What's the easiest way to do this?
    However, the main question...
    Can I then install the same version of Final Cut Studio on my new Mac Pro, using the installation discs I used to install it on my MBP? (Bear in mind I will uninstall FCS on my MacBook Pro.)
    Also, if this is possible, can I upgrade to Final Cut Studio 2 on the new Mac Pro once the previous version of FCS is installed?
    Many thanks,
    Jamie Mc

    It is possible.
    Make sure you don't happen to have the lappy hooked up to the network at the time you install on the Pro.
    Also, there was something I thought had changed in the EULA that said you "could" have it on a desktop AND a laptop... but don't quote me. If it isn't in the EULA then you can't have it on both...
    But yes... you can install on the desktop for sure and then take it off the lappy.
    Good luck,
    CaptM
