Undo os 10.4.11 update???

i recently updated to os 10.4.11 to get safari 3.....waste of time.....but i have a processor upgrade that runs off CPU Director and the masterminds at PowerLogix (the company who made the processor) can't seem to keep up with updating their software so now i've lost all speed on my computer. IS THERE A WAY TO UNDO AN OS UPGRADE AND GO BACK TO USING 10.4.9??? thanks

I'd say b6 is worth a try...
2.3b6 (4 May 2007) - Added support for OS X 10.4.9
2.3b5 (3 Oct 2006) - Added support for OS X 10.4.7 and 10.4.8
And from this site...
http://www.xlr8yourmac.com/OSX/10.4.8_reports.html
(added 10/5/2006)
" Just installed the new CPU Director 2.3b5 update (see below) on both (B&W G3) machines mentioned earlier. Both are now testing almost 10% faster at the same speed settings as before the new update. This update has improved integer performance somewhat. Super.
Just using a very simple utility, SpeedX, to determine CPU speed by comparison. Noticed the speed increase with the upgrade. Did not expect it. Not sophisticated, but repeatable results.
-Garth M. "

Similar Messages

  • How can I undo the latest iOS 7 software update for my iPhone 4? It completely screwed up my phone!

    I updated my iPhone 4 a few days ago to the latest operating system, iOS 7. I've always proceeded with the operating system updates because I assumed they included security fixes as well as other fixes that were supposed to make the system run better. I haven't had any significant problems with updates in the past, except for some minor differences in appearance that didn't bother me. This time, though, the update appears to have completely screwed up my phone.
    My first complaint is, where the **** did all of my apps go?? I paid for them, and now they're gone. Are they hiding in the iCloud? I had over 3 screens of apps, and now I've only got 1 screen.
    My second complaint is, what was wrong with the appearance of the interface and icons in the previous version? The new system has gone with opaque pastel-colored icons that, frankly, I don't like as much as the icons in the previous system.  And why does there have to be a gray band on the bottom row of icons?
    My third complaint is that everything else on the phone is screwed up. Nothing works like it did before. Most of the photos I had on my phone have disappeared (iCloud??). Granted, my complaints about the appearance may seem trivial, but this is really annoying.
    Can I reverse the iOS7 "downgrade" and go back to the good old days with the previous version when everything worked just fine and I was able to access all of my apps?

    You might want to check out these two threads:
    https://discussions.apple.com/thread/5325838?tstart=0
    https://discussions.apple.com/thread/5369545?tstart=200
    This topic has been discussed ad nauseam; those threads might give you some ideas on how to deal with the changes.  Be sure to leave Apple your feedback on the matter: http://www.apple.com/feedback/  Good luck!

  • How do I undo the most recent iTunes update?

    This iTunes update is HORRIBLE. I haven't updated my iPod in the longest time because I can't even stand to look at the thing. I REALLY hope someone changes this. It's so complicated for no reason.

    Got a backup?
    One thing that might help, is to return the sidebar, if you haven't done this already.
    You can do that by clicking View >> Show Sidebar.

  • Can I undo the iOS 8.3 update?


    No, Apple does not support downgrading iOS. If you are having issues, post what they are and someone may be able to assist you.

  • How do I undo an update to Pages 2.1 on an iPad 2? I can no longer open a document previously saved, even when I try to open it in iCloud and on my Mac.


    With iOS Pages v2.0 and v2.1, Apple introduced a new document format that is incompatible with prior Pages versions on iOS and with Pages ’09 on OS X. Opening older documents with these recent versions of Pages will permanently prevent them from opening in earlier versions of Pages on iOS or OS X.
    Nice of Apple to warn people.
    There is no undo of an iTunes application update, short of forcing iTunes to restore iOS 6 back onto that iPad 2. Even if you did that, you won't find iOS Pages v1.7.2 in the iTunes Store now.
    iOS Pages v2+ will allow you to export content in Word .docx format. You do this by first tapping the box with an up-arrow in it (next to the left-most +). A menu will slide up; choose Send a Copy. Select a document icon. Pick Mail, and then Word. Your Pages v2.1 document will be converted to Word .docx and attached to an email body. Fill in the to address and send.

  • Is there a way to undo a crashed firmware update?

    Well...what now? I ran the latest firmware update and voilà, it hung in a closed loop; after an hour I finally cancelled, and now my great-running EA4500 is an expensive plastic paperweight. Is there any way to undo the "damaged" EPROM/firmware update?
    I'm electronics capable and willing to rip off the cover and replace the EPROM, if possible, like we used to do in the old, old days replacing the BIOS!

    Was the update being done via wired or wireless connection? 
    Always do a FW update over a wired LAN connection.
    You might do a search on the support site for 30-30-30 reset to see if this can get you recovered or to a page where you can reload the FW load. 
    Future use:
    To safely update FW, I recommend the following. Download the FW file from the support site first.
    1. Save the router config to a file first, using IE or FF with all security add-ons disabled.
    2. Factory reset the router with all other devices disconnected or turned OFF except for 1 wired PC.
    3. Reload or update the FW using IE or FF, from the FW file you downloaded to your local wired LAN PC.
    4. Factory reset the router again, then set it up from scratch and test without loading the saved config from file. Check whether any problems are fixed before loading the saved config. Sometimes you need to set up from scratch without loading the saved config file; it's just safer that way.
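    The steps above boil down to "get the firmware file onto a local wired PC and make sure it's intact before flashing". A minimal sketch of that habit, assuming the vendor's download page publishes a checksum for the file (the function and file names here are hypothetical, not part of any Linksys tool):

```python
import hashlib

def firmware_ok(path: str, expected_sha256: str) -> bool:
    """Return True if the downloaded firmware file matches the
    checksum published on the vendor's download page. Flash only
    a verified file, and only over a wired LAN connection."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # read in chunks so large firmware images don't need to fit in RAM
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

    A truncated or corrupted download that fails this check is exactly the kind of file that bricks a router mid-flash.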

  • Why do constant CC updates matter? Adobe is not putting out many essential updates anymore.

    Reading these posts on CC I have continually come across people who claim that to be a serious professional in a creative field you need to have your Adobe software up to date. How does one compete if they are using old Adobe software? As someone who has been working in the field for a long time, I don't see how this could be any further from the truth. Yes, I do believe that you need software that is relatively up to date, but I don't believe you need the new software that is coming from Adobe. The software that I really need has been coming from third-party plug-ins. This is probably the main reason I have a problem with the rental software; it keeps pushing new updates on you even when they are the wrong updates. Below are many examples of how, over the past several years, third parties have constantly delivered and Adobe has faltered.
    When I first tried out InDesign’s automatic text flow feature I couldn’t get it to work even after asking around on several different online forums. I was told about some plug-ins that could do auto text flow and after I tried it everything worked.
    I later found that when I would add or delete a page that had facing pages my layout would get completely messed up. How could I fix this problem? By upgrading to a new version of ID? Nope, new versions of ID didn’t fix that. I fixed the problem by buying a plug-in.
    I once had a client who sent me a Word file that was loaded with grammar errors. In a perfect world the content editor should be the one mostly in charge of fixing grammar, but as we all know we don't live in a perfect world. Unfortunately, ID does not provide grammar checking at all. This feature has been available in Word since, like, what, 1988(?), but it won't work in the absolute latest version of ID. It will work with a plug-in.
    The history panel is a feature that has been available since about PS 5. It provides some of the most basic functionality of a graphics program: the ability to go back a few steps. Since this is such a basic feature, it's made its way to other apps like Fireworks and Lightroom. It's even available in consumer apps like PS Elements. So how can you run this basic feature in a professional program like ID, FIFTEEN YEARS after it was released in PS 5? Get a plug-in.
    Hey look! The new features page says that CC finally comes with built-in barcode support! Hallelujah!! Praise the Lord! Now there is a feature that I can use! True, I needed the feature three years ago when I was making ID badges and, yes, I already spent hundreds of dollars buying a plug-in that can more than adequately handle that functionality already, but hey, better late than never, right? Good things surely do come to those who wait!
    Huh? What is that I am now reading? The barcode feature only supports the pattern type of barcodes and not the standard line type found on just about every kind of merchandise known to mankind? My client needed the standard kind of barcode. When is that going to be released? Do I have to wait another three years?
    The fact of the matter is that Adobe has come out with THREE major updates since I last upgraded and I can still barely find enough important features that would make the update worthwhile. New features this time around include things like font management (even though every serious designer already has font management software) and a different color interface.
    This shortage of new worthwhile features is certainly not due to there being a shortage of features that need improvement. For example, doing a very simple data merge with a basic layout is still immensely complicated. I would be overjoyed if they made that process as simple and powerful as it should be. Instead, what we will get is more pointless updates that will only break the important plug-ins that I rely on to get my work done on tight deadlines. It's like how web browsers on bi-monthly update cycles (Firefox, Chrome) keep breaking your browser's plug-ins, except this time, instead of breaking goofy trivial plug-ins that let you do things like check the weather, they will break important ones that you need to rely on to get work done.
    The plug-in developers will have to dedicate an enormous amount of time to checking whether their software works with every little ID update that comes out. This means they will have less time to work on features, even though they are the ones making the best advancements.

    Yes, I realize that Firefox is free, but that doesn't really negate the validity of anyone's frustrations. I just wish they would warn people better about the problems that the updates can cause before asking people to update. Most people rely very heavily on their computers for work, etc. and not everyone has the time or ability to easily undo the problems that the updates cause. Again, if it was just a one-time thing that I've had to deal with, I wouldn't mind as much.

  • SELECT query sometimes runs extremely slowly - UNDO question

    Hi,
    The Background
    We have a subpartitioned table:
    CREATE TABLE TAB_A
    ( RUN_ID           NUMBER                       NOT NULL,
      COB_DATE         DATE                         NOT NULL,
      PARTITION_KEY    NUMBER                       NOT NULL,
      DATA_TYPE        VARCHAR2(10),
      START_DATE       DATE,
      END_DATE         DATE,
      VALUE            NUMBER,
      HOLDING_DATE     DATE,
      VALUE_CURRENCY   VARCHAR2(3),
      NAME             VARCHAR2(60)
    )
    PARTITION BY RANGE (COB_DATE)
    SUBPARTITION BY LIST (PARTITION_KEY)
    SUBPARTITION TEMPLATE
      (SUBPARTITION GROUP1 VALUES (1) TABLESPACE BROIL_LARGE_DATA,
       SUBPARTITION GROUP2 VALUES (2) TABLESPACE BROIL_LARGE_DATA,
       SUBPARTITION GROUP3 VALUES (3) TABLESPACE BROIL_LARGE_DATA,
       SUBPARTITION GROUP4 VALUES (4) TABLESPACE BROIL_LARGE_DATA,
       SUBPARTITION GROUP5 VALUES (DEFAULT) TABLESPACE BROIL_LARGE_DATA)
    ( PARTITION PARTNO_03 VALUES LESS THAN
      (TO_DATE(' 2008-07-22 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
      ( SUBPARTITION PARTNO_03_GROUP1 VALUES (1),
        SUBPARTITION PARTNO_03_GROUP2 VALUES (2),
        SUBPARTITION PARTNO_03_GROUP3 VALUES (3),
        SUBPARTITION PARTNO_03_GROUP4 VALUES (4),
        SUBPARTITION PARTNO_03_GROUP5 VALUES (DEFAULT) ),
      PARTITION PARTNO_01 VALUES LESS THAN
      (TO_DATE(' 2008-07-23 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
      ( SUBPARTITION PARTNO_01_GROUP1 VALUES (1),
        SUBPARTITION PARTNO_01_GROUP2 VALUES (2),
        SUBPARTITION PARTNO_01_GROUP3 VALUES (3),
        SUBPARTITION PARTNO_01_GROUP4 VALUES (4),
        SUBPARTITION PARTNO_01_GROUP5 VALUES (DEFAULT) ),
      PARTITION PARTNO_02 VALUES LESS THAN
      (TO_DATE(' 2008-07-24 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
      ( SUBPARTITION PARTNO_02_GROUP1 VALUES (1),
        SUBPARTITION PARTNO_02_GROUP2 VALUES (2),
        SUBPARTITION PARTNO_02_GROUP3 VALUES (3),
        SUBPARTITION PARTNO_02_GROUP4 VALUES (4),
        SUBPARTITION PARTNO_02_GROUP5 VALUES (DEFAULT) ),
      PARTITION PARTNO_OTHER VALUES LESS THAN (MAXVALUE)
      ( SUBPARTITION PARTNO_OTHER_GROUP1 VALUES (1),
        SUBPARTITION PARTNO_OTHER_GROUP2 VALUES (2),
        SUBPARTITION PARTNO_OTHER_GROUP3 VALUES (3),
        SUBPARTITION PARTNO_OTHER_GROUP4 VALUES (4),
        SUBPARTITION PARTNO_OTHER_GROUP5 VALUES (DEFAULT) )
    );
    CREATE INDEX TAB_A_IDX ON TAB_A
    (RUN_ID, COB_DATE, PARTITION_KEY, DATA_TYPE, VALUE_CURRENCY)
      LOCAL;
    The table is subpartitioned as each partition typically has 135 million rows in it.
    Overnight, several runs occur that load data into this table. The partitions are rolled over daily, such that the oldest one is dropped and a new one created. Stats are exported from the oldest partition prior to it being dropped and imported into the newly created partition once it has been created, which then has its stats analyzed.
    Data loads can load anything from 200 rows to 20million rows into the table, with most of the rows ending up in the Default subpartition. Most of the runs that load a larger set of rows have been set up to add into one of the other 4 subpartitions.
    We then run a process to extract data from the table that gets put into a file. This is a two step process (due to Oracle completely picking the wrong execution plan and us not being able to rewrite the query in such a way that it'll pick the right path up by itself!):
    1. Identify all the unique currencies
    2. Update the (dynamic) sql query to add a CASE clause into the select clause based on the currencies identified in step 1, and run the query.
    Step 1 uses this query:
    SELECT DISTINCT value_currency
    FROM            tab_a
    WHERE           run_id = :b3 AND cob_date = :b2 AND partition_key = :b1;
    and usually finishes within 20 minutes.
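    The step-2 dynamic SQL isn't shown in the post; as a rough sketch of the pattern described (one CASE column generated per currency found by step 1), with column and table names taken from the posted schema but the aggregate shape assumed, not quoted from the actual procedure:

```python
def build_extract_sql(currencies):
    """Build the step-2 query: one CASE expression per currency found
    by the SELECT DISTINCT in step 1, pivoting value_currency into
    columns so a single pass over tab_a produces the extract."""
    case_cols = ",\n  ".join(
        "SUM(CASE WHEN value_currency = '{0}' THEN value ELSE 0 END) AS value_{0}".format(c)
        for c in sorted(currencies)
    )
    return (
        "SELECT run_id, cob_date,\n  " + case_cols + "\n"
        "FROM tab_a\n"
        "WHERE run_id = :b3 AND cob_date = :b2 AND partition_key = :b1\n"
        "GROUP BY run_id, cob_date"
    )
```

    Rebuilding the statement per run like this also means each distinct currency set gets its own SQL text, which matters later in the thread when comparing plans by SQL_ID.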
    The problem
    Occasionally, this simple query runs over 20 minutes (I don't think we've ever seen it run to completion on these occurrences, and I've certainly seen it take over 3 hours before we killed it, for a run where it would normally complete in 2 or 3 minutes), which we've now come to recognise as it "being stuck". The execution path it takes is the same as when it runs normally, there are no unusual wait events, and no unusual wait times. All in all, it looks "normal" except for the fact that it's taking forever (tongue-in-cheek!) to run. When we kill and rerun, the execution time returns to normal. (We've sent system state dumps to Oracle to be analyzed, and they came back with "The database is doing stuff, can't see anything wrong")
    We've never been able to come up with any explanation before, but the same run has failed consistently for the last three days, so I managed to wangle a DBA to help me investigate it further.
    After looking through the ASH reports, he proposed a theory that the problem was it was having to go to the UNDO to retrieve results, and that this could explain the massive run time of the query.
    I looked at the runs and agreed that UNDO might have been used in that particular instance of the query, as another run had loaded data into the table at the same time it was being read.
    However, another one of the problematic runs had not had any inserts (or updates/deletes - they don't happen in our process) during the reading of the data, and yet it had taken a long time too. The ASH report showed that it too had read from UNDO.
    My question
    I understand from this link: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:44798632736844 about how Selects may generate REDO, but I don't see why UNDO would possibly be generated by a select. Does anyone know of a situation where a select would end up looking through UNDO, even though no inserts/updates/deletes had taken place on the table/index it was looking at?
    Also, does the theory that having to look through the UNDO (currently UNDO ts is 50000MB, in case that's relevant) causing queries to take an extremely long time hold water? We're on 10.2.0.3
    Message was edited by:
    Boneist
    Ok, having carried on searching t'internet, I can see that it's maybe Delayed Block Cleanout that's causing the UNDO to be referenced. Even taking that into account, I can't see why going back to the UNDO to be told to commit the change to disk could slow the query down that much? What waits would this show up as, if any?

    Since you're on 10.2, and I understand that you use the statistics of the "previous" content for the partition that you are now loading (am I right?), you have to be very careful with the 10g optimizer. If the statistics tell the optimizer that the values you're querying for are sufficiently out-of-range, this might change the execution plan, because it estimates that only a few or no rows will be returned. So if the old statistics do not fit the newly loaded data in terms of column min/max values, then this could be a valid reason for different execution plans for some executions (depending on the statistics of the old partition and the current values used). Your RUN_ID is a good candidate, I guess, as it could be ever-increasing... If the max value of the old partition is sufficiently different from the current value, this might be the cause.
    Do you actually use bind variables for that particular statement? Then we additionally have bind variable peeking and potentially statement re-use to consider. I would prefer literals instead of bind variables; or do you encounter parse issues?
    Yes, that query runs as part of a procedure and uses bind variables (well, PL/SQL variables!). We are aware that, because of the histograms that get produced, the stats are not as good as we'd like. I'm wondering if analyzing the partition would be the best way to go, only that means analyzing the entire partition, not just the subpartition, I guess? But if other inserts are taking place at the same time, won't having several analyzes running hurt the speed of inserting, or won't it matter?
    Do you have the "default" 10g statistics gathering job active? This could also explain why you get different execution plans at different execution times. If the job determines that the statistics of some of your partitions are stale, then it will attempt to gather statistics even if you already have statistics imported/generated.
    No, we turned that off. The stats do not change when we rerun the query - we guess there is some sort of contention taking place, possibly when reading from the UNDO, although I would expect that to show up in the waits - it doesn't appear to, though.
    Data loads can load anything from 200 rows to 20 million rows into the table, with most of the rows ending up in the Default subpartition. Most of the runs that load a larger set of rows have been set up to add into one of the other 4 subpartitions.
    I'm not sure about the above description. Do most of the rows end up in the default subpartition (most rows in the default partition), or do the larger sets load into the other 4 (most rows in the non-default partitions)?
    Sorry, I mean that the loads that load, say, 20 million+ rows at a time have a specified subpartition to go into (defined via some config. We had to make up a "partition key" in order to do this as there is nothing in the data that lends itself to the subpartition list, unfortunately - the process determines which subpartition to load to/extract from via a config table), but this applies to only a few of the runs. So the majority of the runs (with fewer rows) go into the default subpartition.
    The query itself scans the index, not the table, doing partition pruning, etc.
    The same SQL_ID doesn't mean it's the same plan. So are you 100% sure that the plans were the same? I could imagine (see above) that the execution plans might be different.
    The DBA looking at it said that the plans were the same, and I have no reason to doubt him. Also, the session browser in Toad shows the same explain plan in the "Current Statement" tab for normal and abnormal runs, as follows:
    Step  Time  IO Cost  CPU Cost  Cardinality  Bytes  Cost  Plan
                                                       6     SELECT STATEMENT  ALL_ROWS
    4     1     4        82,406    1            20     6       HASH UNIQUE
    3     1     4        28,686    1            20     5         PARTITION RANGE SINGLE  Partition #: 2
    2     1     4        28,686    1            20     5           PARTITION LIST SINGLE  Partition #: 3
    1     1     4        28,686    1            20     5             INDEX RANGE SCAN TAB_A_IDX  Access Predicates: "RUN_ID"=:B3 AND "COB_DATE"=:B2 AND "PARTITION_KEY"=:B1  Partition #: 3
    How do you perform your INSERTs? Are overlapping loads and queries actually working on the same (sub-)partition of the table or in different ones? Do you use direct-path inserts or parallel DML? Direct-path inserts, as far as I know, create "clean" blocks that do not need a delayed block cleanout.
    We insert using a select from an external table - there's a parallel hint in there, but I think that is often ignored (at least, I've never seen any hint of sessions running in parallel when looking at the session browser, and I've seen it happen in one of our dev databases, so...). As mentioned above, rows could get inserted into different partitions, although the majority of runs load into the default subpartition. In practice, I don't think more than 3 or 4 loads take place at the same time.
    If you are loading and querying different partitions, then your queries shouldn't have to check the UNDO except for the delayed block cleanout case.
    You should check at least two important things:
    - Are the execution plans different for the slow and normal executions?
    - Get the session statistics (logical I/Os, redo generated) for the normal and slow ones in order to see and compare the amount of work that they generate, and to find out how much redo your query potentially generated due to delayed block cleanout.
    It's difficult to do an exact direct comparison, due to other work going on in the database and the abnormal query taking far longer than normal, but here is the ASH comparison between a normal run (1st) and our abnormal run (2nd) (both taken over 90 mins, and the first run may well include other runs that use the same query in the results):
    SQL Id: gpgaxqgnssnvt   Multiple Plans: No   SQL Text: SELECT DISTINCT VALUE_CURRENCY...

                                1st         2nd
    Exec Time of DB Time        3.54        15.76       (Diff 12.23)
    Exec Time (ms) / Exec       223,751     1,720,297
    #Exec/sec (DB Time)         0.00        0.00
    CPU Time (ms) / Exec        11,127      49,095
    Physical Reads / Exec       42,333.00   176,565.00
    #Rows Processed / Exec      2.67        4.00
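    Reading that comparison as ratios makes the symptom concrete: elapsed time per execution grows far more than the work done. A quick back-of-the-envelope from the figures above:

```python
# ASH figures for the same statement: normal (1st) vs "stuck" (2nd) run
normal = {"exec_ms": 223_751,   "cpu_ms": 11_127, "phys_reads": 42_333}
stuck  = {"exec_ms": 1_720_297, "cpu_ms": 49_095, "phys_reads": 176_565}

ratios = {k: stuck[k] / normal[k] for k in normal}
# Elapsed time per exec blows up ~7.7x while CPU time and physical
# reads only grow ~4.4x and ~4.2x. The extra elapsed time is not
# matched by extra work, which fits the "looks normal, just takes
# forever, no unusual waits" symptom described earlier in the thread.
```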

  • Windows 8.1 Update always sets Bing as Search Provider (IEAK doesn't tell it to)

    Hello,
    I'm building a Windows 8.1 (with Update 1) image for my company and want to customize IE11. Everything works fine, besides two things:
    1. In my IEAK package I add Google as a search provider and set it to default. The IE branding package is applied during the building of the image. But when I log in the first time, IE comes up with Bing as default and Google as an additional search provider. First I thought this had something to do with SCCM installing the branding pack as LocalSystem during the task sequence, but with some testing I found out that Win 8.1 Update 1 (KB2919355) was my problem! If I install my branding package on Windows 8.1 (without Update 1), Google is set as the default search provider for accounts that already exist and for new accounts. When Update 1 is applied and my branding pack runs after that, Google is set as the default search provider for existing accounts only, but not for new accounts; they get Bing! It doesn't matter whether I do that as LocalSystem within a task sequence or as local admin: with Update 1 - Bing, without - Google! I also tried to set the HKCU\Software\Microsoft\Internet Explorer\SearchScopes\DefaultScope value in the "Default User" and imported the {xxxx...xxxxx} search-provider value for Google, but this seems to be overwritten by some first-logon command. Is there any Win 8.1 or IEAK hotfix to correct this behaviour?
    2. This one has AFAIK been in every IEAK since IE8, but it's annoying anyway: in the "Browsing Options" of IEAK you select that no MS defaults should be added, and that only items created by the administrator should be deleted, but (in the case of IE11) this one "Bing" favorite resides there and says "Hello!" to every new user.
    Problem 1 is the bigger one, but I'm glad about every kind of help and every answer!
    Kind Regards, Chris

    So it did.  The settings are being customized by "ie4uinit.exe -UserConfig" during Active Setup for each new user.  I could not find where it was picking up its orders, otherwise I would have edited the ieuinit.inf, registry, or XML file rather than attempt to undo the damage using the method below.  Documentation on ie4uinit.exe was nonexistent or hidden, and I didn't have the patience to keep filtering and trawling through 7000 registry entries to find the source of the problem.
    As a test, disabling/renaming/removing the "StubPath" fixed the problem, but also made the Metro IE icon vanish.  I didn't pursue this further to see whether that was a simple fix, choosing instead to make things a little more complicated:
    During the deployment task sequence, I create another registry key under "Installed Components" in Active Setup.  I don't know if there's a way to control the order of execution, but I think I got lucky enough that my stuff executes after the other commands per user.
    Task script (during deployment):
    REM Prep for active setup to undo 8.1 Update 1 Bing evilness
    MD c:\temp\bingremove
    XCOPY hklm_scope.reg c:\temp\bingremove
    XCOPY hkcu_scope_current.reg c:\temp\bingremove
    XCOPY RedoSearchScopes.cmd c:\temp\bingremove
    REG.EXE import RedoSearchScopes.reg
    RedoSearchScopes.reg
    Windows Registry Editor Version 5.00
    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Active Setup\Installed Components\RedoSearchScopes]
    @="Redo IE Search"
    "IsInstalled"=dword:00000001
    "ComponentID"="REDOBING"
    "StubPath"="C:\\temp\\bingremove\\RedoSearchScopes.cmd"
    RedoSearchScopes.cmd
    REM Clear & Add search links
    REG.EXE delete "HKCU\Software\Microsoft\Internet Explorer\SearchScopes" /f
    REG.EXE delete "HKLM\Software\Microsoft\Internet Explorer\SearchScopes" /f
    REG.EXE import c:\temp\bingremove\hklm_scope.reg
    REG.EXE import c:\temp\bingremove\hkcu_scope_current.reg
    del /f /q %USERPROFILE%\Favorites\Bing.url
    del /f /q %USERPROFILE%\..\Default\Favorites\Bing.url
    hklm_scope and hkcu_scope_current are exports of SearchScopes that I'm reasonably happy with.  Except that after KB2988414, my users are prompted (only once) to approve my change away from the default of Bing.  So MS wins again, unless I go uninstall that update and block it - except it was part of a cumulative rollup!
    Anyway, I don't even care that much about which search engine goes in, but search suggestions were ON by default, which is a big no-no when dealing with sensitive work.  Search suggestions mean that all text typed in the browser address bar is logged and stored by the search company.
    A creative workaround I hadn't thought of was to keep the Bing GUID and change all the values underneath to your search engine of choice:
    http://www.laurierhodes.info/?q=node/81
    I was considering this, but it still feels a bit hacky, and I'm sure it would only be a matter of time before something breaks it.
    Oh, well.  Not the ideal solution, and I resent beyond words setting something from an administrator's task only to have the OS second-guess me and undo my changes via an update meant to give more control to users who could not care less about these "enable" or "approve" prompts.
    Thanks for the tip, R.A.

  • Compositing text modifications with font size modifications for undo

    Hey TLF gurus,
    So here's my situation.  I have a text flow.  The user starts typing "The lazy brown dog..." and after each character is typed I'm modifying the font size of all the text to make it fit inside the container.  Here's how I'm doing that: http://aaronhardy.com/flex/size-text-to-container/ (you can right-click the app to see the source).
    That all works well, but undo isn't working so well.  When I hit undo, I would like it to remove the text.  But when I call undo on the undo manager, it gets to EditManager.performUndo() where it checks to see if operation.endGeneration is different than textFlow.generation.  In my case, they aren't the same because I updated the font size after the last keystroke.  Because they're different, the undo fails.  Because I updated the font size after the last text insert, TextFlow.processModelChanged() was called and increased the text flow's generation while the insert text operation's end generation stayed the same.
    Ultimately I think I want to create a composite operation that contains any text modifications with their following text auto-sizing.  Either that or I want to prevent my font auto-sizing operation from affecting the text flow generation.  Either way, I'm not sure how to go about pulling it off.  I'm sure I can make some sweet hacks but I'd really appreciate some guidance.  Thanks.
    Aaron

    I'm able to successfully merge the sizing operations with the text operations.  I did it in a roundabout way, but afterward I found a more appropriate way: executing the ApplyFormatOperation by calling back into EditManager.doOperation when the FlowOperationEvent.FLOW_OPERATION_END event is dispatched from TextFlow (essentially making it recursive).  This way the EditManager will composite the operations.  This is described pretty well in the code and documentation of EditManager.doInternal().  I didn't test it, however.
    Instead, for a couple reasons I decided not to go with that plan at all but instead went with my Plan B which was to prevent my font auto-sizing operation from affecting the text flow generation.  I essentially want the auto-sizing to occur without it going into the undo history, modifying the text flow generation, etc., etc.  I want it to go incognito.  The way I did this was to store the text flow's generation, make the font size modifications, then set the text flow's generation back to what it was before the font size was applied.  This may have some negative repercussions but for now I think it's the best approach for my needs.
    Thanks for letting me talk to myself!  Feel free to add ideas.
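    A minimal sketch of the "Plan B" described above. This assumes TLF exposes the generation setter through the tlf_internal namespace; the helper name, the applySizing callback, and the setGeneration call are my assumptions, not confirmed TLF API:

    ```actionscript
    import flashx.textLayout.elements.TextFlow;
    import flashx.textLayout.tlf_internal;

    use namespace tlf_internal; // assumption: the generation setter lives in tlf_internal

    // Run a font auto-sizing pass "incognito": apply the sizing, then restore the
    // text flow's generation so the undo machinery never sees the change.
    function autoSizeIncognito(textFlow:TextFlow, applySizing:Function):void
    {
        var savedGeneration:uint = textFlow.generation; // generation before sizing
        applySizing(textFlow);                          // e.g. push a larger fontSize format
        textFlow.setGeneration(savedGeneration);        // hypothetical rollback of the counter
    }
    ```

    As the poster notes, rolling the generation counter back may have repercussions elsewhere (anything that keys off generation to detect changes will miss the sizing pass), so this trades correctness guarantees for a clean undo history.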

  • 11g RMAN UNDO backup optimization

    Hi all,
    I have tested the 11g RMAN undo backup optimization:
    1st, I filled the undo tablespace with SQL manipulations without committing.
    2nd, I backed undo_ts up with RMAN (size 24M).
    3rd, I made a commit.
    Then I backed up the undo tablespace again, but the backup size didn't change (24M).
    Then I made some more manipulations and backed up undo_ts again. This time the backup size was reduced (11M).
    Then I restarted the db and backed up undo_ts again. This time the backup size became what I expected (600K).
    The question is: why didn't the 11g RMAN undo tablespace backup size shrink after the commit?
    According to 11g undo optimization, it should have.
    SQL> select sum(bytes) from dba_free_space where tablespace_name = 'UNDOTBS2';
    SUM(BYTES)
    13172736
    SQL> begin
    for i in 1..100000 loop
    insert into testundo values(i);
    end loop;
    end;
    2 3 4 5 6
    PL/SQL procedure successfully completed.
    SQL> update testundo set
    id=2 where id>0;
    2
    update testundo set
    ERROR at line 1:
    ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS2'
    SQL> select sum(bytes) from dba_free_space where tablespace_name = 'UNDOTBS2';
    SUM(BYTES)
    RMAN> backup datafile 6;
    RMAN> list backup of datafile 6;
    List of Backup Sets
    ===================
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    10 Full *24.54M* DISK 00:00:04 10-JUN-10
    BP Key: 10 Status: AVAILABLE Compressed: NO Tag: TAG20100610T142437
    Piece Name: /home/oracle/flash_recovery_area/11GR1/backupset/2010_06_10/o1_mf_nnndf_TAG20100610T142437_611ctr1f_.bkp
    List of Datafiles in backup set 10
    File LV Type Ckp SCN Ckp Time Name
    6 Full 577669 10-JUN-10 /home/oracle/oradata/11GR1/datafile/undotbs2.dbf
    SQL> commit;
    Commit complete.
    RMAN> backup datafile 6 format 'after commit.backup';
    RMAN> list backup of datafile 6;
    List of Backup Sets
    ===================
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    11 Full *24.54M* DISK 00:00:02 10-JUN-10
    BP Key: 11 Status: AVAILABLE Compressed: NO Tag: TAG20100610T142541
    Piece Name: /home/oracle/product/11/Db_1/dbs/after commit.backup
    List of Datafiles in backup set 11
    File LV Type Ckp SCN Ckp Time Name
    6 Full 577705 10-JUN-10 /home/oracle/oradata/11GR1/datafile/undotbs2.dbf
    SQL> alter system archive log current;
    System altered.
    SQL> commit;
    Commit complete.
    SQL> select count(*) from testundo;
    COUNT(*)
    100000
    SQL> delete from testundo;
    100000 rows deleted.
    SQL> commit;
    Commit complete.
    SQL> insert into testundo values(1);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> alter system flush buffer_cache;
    System altered.
    RMAN> backup datafile 6;
    RMAN> list backup of datafile 6;
    List of Backup Sets
    ===================
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    13 Full *11.03M* DISK 00:00:01 10-JUN-10
    BP Key: 13 Status: AVAILABLE Compressed: NO Tag: TAG20100610T143359
    Piece Name: /home/oracle/flash_recovery_area/11GR1/backupset/2010_06_10/o1_mf_nnndf_TAG20100610T143359_611dd8sz_.bkp
    List of Datafiles in backup set 13
    File LV Type Ckp SCN Ckp Time Name
    6 Full 578410 10-JUN-10 /home/oracle/oradata/11GR1/datafile/undotbs2.dbf
    RMAN>
    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 393375744 bytes
    Fixed Size 1300156 bytes
    Variable Size 352323908 bytes
    Database Buffers 33554432 bytes
    Redo Buffers 6197248 bytes
    Database mounted.
    Database opened.
    SQL>
    RMAN> backup datafile 6;
    RMAN> list backup of datafile 6;
    List of Backup Sets
    ===================
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    14 Full *600.00K* DISK 00:00:02 10-JUN-10
    BP Key: 14 Status: AVAILABLE Compressed: NO Tag: TAG20100610T152843
    Piece Name: /home/oracle/flash_recovery_area/11GR1/backupset/2010_06_10/o1_mf_nnndf_TAG20100610T152843_611hlwmv_.bkp
    List of Datafiles in backup set 14
    File LV Type Ckp SCN Ckp Time Name
    6 Full 580347 10-JUN-10 /home/oracle/oradata/11GR1/datafile/undotbs2.dbf
    Thanks in advance
    Turkel

    Hi Turkel,
    The space used for undo is also related to the undo retention setting.
    As it seems you do a test update and proceed with backups on:
    - 14:24:37 (-> 25M)
    - 14:25:41 (-> 25M)
    - 14:33:59 (-> 11M)
    - 15:28:43 (-> 600K)
    The first two backups probably still fall within the undo retention period for the update.
    The third backup shows a partly empty undo (is your setting 900?).
    The last one falls outside the retention period for the update, resulting in the small backup size.
    Regards,
    Tycho
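    To check whether the retention window explains the sizes above, the configured and tuned retention can be inspected; a sketch using standard Oracle dynamic views (the 900-second figure mentioned is only the default, not necessarily what the instance is honoring):

    ```sql
    -- Configured undo retention target, in seconds (default 900)
    SHOW PARAMETER undo_retention;

    -- Retention Oracle is actually honoring per interval;
    -- automatic undo tuning may raise it well above the parameter value
    SELECT BEGIN_TIME, TUNED_UNDORETENTION
    FROM   V$UNDOSTAT
    ORDER  BY BEGIN_TIME DESC;
    ```

    If TUNED_UNDORETENTION is large at the time of a backup, committed undo is still considered needed and RMAN cannot skip those blocks yet.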

  • How do I reverse an auto update? mainly a java update

    On 6/13/12 there were a few "automatic" updates - one of them was
    a Java for Mac OS X 10.6 Update 9
       (updating Java SE 6 to 1.6.0_33)
    Ever since this was installed - I can no longer access a site that
    has worked for some time now.  It is a USCG Auxiliary site containing
    a database application that is extremely important for me.
    Can this be "fixed" or can I UN-INSTALL the update?
    Thanks
    DJ1

    In general, you should not skip installing (or 'undo') this or any other update that improves the security of your Mac. If you do, you leave it vulnerable to publicized security weaknesses that might be exploited by some malware author; in this case, several weaknesses in the security of the Java runtime environment. (See http://support.apple.com/kb/HT5319 for more about that.)
    As X423424X mentioned, the update has turned Java off in web browsers. Re-enable it as he suggests by checking the box in Java Preferences.
    Note that you have to quit and relaunch the browser for the change to take effect.

  • DBWn and Undo

    Hello,
    I have an Oracle concept's question.
    I've read "Concepts" doc and some forum messages, but one thing is still not clear to me.
    I change data in a table (insert or update). Undo data are generated, LGWR writes redo, and suppose DBWn has already written some blocks to the datafiles. So there are uncommitted data in the datafiles. In the end I roll back the transaction, so those new data should be erased from the datafiles, right?
    How does that work? How does Oracle know what to erase?
    Thx in advance
    A.

    Hi,
    Things are not as simple as they look. When you change something, Oracle first finds the block you want to modify. Once it has the block, either already in the cache or via a physical read from the datafile into the cache, the block's transaction header is checked: your transaction occupies something called an ITL (interested transaction list) slot and maintains its entry (ITE, interested transaction entry) there. Oracle likewise maintains an entry in the undo block, where it records the information for your old image; the same information is kept in V$TRANSACTION and the X$KTUXE table. Oracle then updates the log buffer with the change entry. It keeps something called the RBA (redo byte address) in your block header, from which it knows where your transaction's change entry sits in the redo, and in the same transaction header it keeps the UBA (undo byte address), which tells it where your undo image is maintained. From there Oracle knows that, using the information stored at that location, it has to roll back the transaction in case we discard it.
    Now, if your block is only in the cache, there isn't much of an issue. Oracle will discard your snapshot (CR) blocks, release the ITL entries, and record that the transaction is over; the undo information marking the block as part of a transaction is released as well. If your block was flushed to disk before the rollback happened, then something called delayed block cleanout kicks in. When you issue the rollback, Oracle updates the undo transaction table to say the transaction is over, but it does not immediately go out to the datafile to update the block header too, because that would cost too much physical I/O. So the next time you pull the block, SCNs are compared, and if Oracle finds an active transaction entry still maintained in the block header, it checks with the undo whether some old consistent copy of the block has to be used. The undo will say the transaction is over on its side, so Oracle clears the block's status.
    Well, I am sure there is a lot more happening than what I have told you. Let's wait to hear from others too.
    Aman....
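    One way to watch the bookkeeping described above is to query V$TRANSACTION from a second session while the first holds a transaction open. A sketch, where t is any placeholder table; the column names are standard V$TRANSACTION columns:

    ```sql
    -- Session 1: open a transaction (t is a placeholder table)
    UPDATE t SET val = val + 1 WHERE id = 1;

    -- Session 2: inspect the undo bookkeeping for the open transaction.
    -- XIDUSN/XIDSLOT/XIDSQN identify the undo segment, slot, and sequence;
    -- UBAFIL/UBABLK are the undo byte address (file and block);
    -- USED_UBLK/USED_UREC are the undo blocks and records the transaction holds.
    SELECT XIDUSN, XIDSLOT, XIDSQN, UBAFIL, UBABLK, USED_UBLK, USED_UREC, STATUS
    FROM   V$TRANSACTION;

    -- After session 1 commits or rolls back, its row disappears from V$TRANSACTION.
    ```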

  • Updates have destroyed my setup.

    Ok. What is going on? What planet did I wake up on where Apple is now putting out bush-league, shoddy updates that end up making their software not function properly? Did Apple get bought out by Microsoft or something? This is absolutely infuriating. Since upgrading to *iTunes 8.1*, my *Apple TV* dropped its streaming connection every time I played something. So, after restarting my computer, *Apple TV* no longer appears in the devices list and it refuses to be found wirelessly or wired. I also have the 7.4.1 "upgrade" for my *Airport Extreme*. I read about people's issues in the forums and rolled back to 7.3.2 on the Airport. Still nothing. And now I can't roll back to *iTunes 8.0* because people have found that it's not backward compatible with the 8.1 format? This is not like Apple at all. I really hope someone at your company scrambles to undo all the damage these updates are doing to your customers' confidence in your products and to your reputation. Apple has some of the most loyal supporters out there and I do not believe this is the way to repay them. I just want my technology to do what it is supposed to do. If I wanted to tweak every little setting and update just to have things marginally work, I would have stuck with Windows. Please fix these problems immediately and then address why they were allowed to happen in the first place. Thank you.

    The only people reading this page are other Apple users like you. If you want Apple to hear you, go through their feedback website http://www.apple.com/contact/

  • Remote client copy performance

    Hello experts,
    I need an opinion on following possibilities that could speed up remote client copy:
    - Increasing the maximum number of processes for parallel processing in dialog mode?
    - Increasing the number of update processes?
    - Decreasing the undo retention from the default 900 seconds to 600 seconds (Oracle)?
    - Increasing the number of redo log files?
    An opinion or any other insights would be highly appreciated.
    Thank you,
    Rohit

    Well, the answer is: maybe.
    You did not supply any information about your run or your system, so how can we give you good advice? Any of your proposals might speed up the copy process, but we cannot tell (although decreasing the undo retention and changing the update work processes will not change much). If your hardware (CPU/disk/network) is not exhausted, then increase the number of parallel processes.
    Besides that these notes contain good hints regarding copy speedup:
    [489690 - CC INFO: Copying large production clients|https://service.sap.com/sap/support/notes/489690]
    [446485 - CC-ADMIN: Special copying options|https://service.sap.com/sap/support/notes/446485]
    Cheers Michael
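    For the two Oracle-side points from the list (undo retention and redo logs), the changes would look roughly like this. The values, group number, and logfile path are illustrative only, not recommendations for any particular system:

    ```sql
    -- Lower the undo retention target from the 900-second default (illustrative value)
    ALTER SYSTEM SET undo_retention = 600;

    -- Add another redo log group so log switches wait less often
    -- (group number, path, and size are examples only)
    ALTER DATABASE ADD LOGFILE GROUP 5
      ('/oracle/SID/origlogA/log_g5m1.dbf') SIZE 200M;
    ```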
