Defragmentation and Optimization (OS X Tiger)

Hi, which of the following utilities does the best job of
1) hard drive defragmentation, and
2) hard drive optimization?
A) iDefrag, or
B) TechTool Pro 4

There is no good reason to do what you want to do, because OS X does not need to be defragged the way a Windows OS does. Those "utilities" are unnecessary and depend on newcomers for a customer base. TechTool can be useful for disk repair and salvage, but only if something is seriously wrong with the hardware.
I don't know what you mean by optimization. Possibly you mean using a separate partition for virtual memory files; if so, you would be better off investing the money in more RAM. You can keep track of memory swapfiles by watching the /private/var/vm folder over the course of several days. A good way to optimize is to run your OS in one partition and users in another, which is what I do.
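For example, a quick way to check (assuming a standard Tiger install, where the dynamic swapfiles live in that folder) is to run ls -lh /private/var/vm in Terminal; if several large swapfiles keep showing up day after day, more RAM is the better investment.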

Similar Messages

  • I GIVE UP!  Is it possible to uninstall Leopard and go back to Tiger?

    I am ready to give up for now. Is it possible to uninstall and go back to Tiger?

    If you were wise, you would consider other options besides just getting rid of Leopard and throwing it away. First, you could wait until Apple releases OS 10.5.1, then install and update to it. This might help take care of some problems. You could dual boot OS 10.4.10 and 10.5 as long as you have the OS 10.4 full install disk and the OS 10.5 full install disk. I personally will wait until at least Christmas time before I install Leopard. New OSes are always full of bugs. On the Microsoft side, I've read that Windows Vista is still having major problems and is taking a long time to become widely used. These problems take time to fix. When Windows XP came out, nobody would upgrade to it because of all its bugs; now Windows XP is considered the standard on PCs. Mac OS 10.4 was also full of bugs at first, but now we talk about how rock solid it is. Simply give Apple time to fix these problems.

  • G4 and G5 upgraded to Tiger

    I am working in a prepress environment and will be upgrading our G4s and a G5 to Tiger from Panther 10.3.9. We will be running Adobe Creative Suite 3, Quark 7 and other programs. Does anyone know of any issues that I should be aware of (i.e. slowdowns, crashes, software problems, etc.)? Please let me know. Thank you.

    Hi Tom in London;
    Since I plan to upgrade my PowerMac Quad to a Mac Pro later this year, I have been following the threads on Migration Assistant. From what I have read, the general opinion is that the best way to use MA is to install all of your applications first and then use it to move over the data and settings.
    If you don't, performance on the new Mac Pro could be slower than on the old computer. It appears that the Mac Pro is not able to use the applications as they were installed on the PowerPC Mac.
    Allan

  • Differentiate between mapping and optimization.

    Hi,
    tell me something about this:
    Differentiate between mapping and optimization.
    Please, it's urgent. Imran

    user571615 wrote:
    Hi,
    tell me something about this:
    Differentiate between mapping and optimization.
    Please, it's urgent. Imran
    This is a forum of volunteers. There is no "urgent" here. For urgent, buy yourself a support contract and open an SR on MetaLink.

  • Code Generation and Optimization

    Hi,
    we have been looking into a problem with our app, which runs in a JRockit JVM, pausing, sometimes for up to a second or so at a time.
    We enabled the verbose logging of codegen and opt.
    And we see entries like:
    [Fri Sep  3 09:51:38 2004][28810][opt    ] #114 0x200 o com/abco/util/Checker.execute(Lcom/abco/util/;)V
    [Fri Sep  3 09:51:39 2004][28810][opt    ] #114 0x200 o @0x243c4000-0x243c7740 1186.23 ms (14784.88 ms)
    So the optimization took 1186 ms.
    Does this optimization happen in the main thread?
    I.e., is the above message an indication that the app had to stop for 1186ms while the optimization occurred?
    Any help on this would be greatly appreciated!
    Also, does anyone have any more pointers to info on the code generation and optimization in JRockit?
    I have only managed to find the following:
    http://edocs.beasys.com/wljrockit/docs142/intro/understa.html#1015273
    thanks,
    JN

    Hi,
    The optimization is done in its own thread and should not cause pauses in your application.
    The probable cause of long pause times is garbage collection. Enable verbose output for GC (-Xverbose:memory), or use the JRockit Management Console to monitor your application. Try using the generational concurrent GC, -Xgc:gencon, or the GC strategy -Xgcprio:pausetime. Read more at: http://edocs.beasys.com/wljrockit/docs142/userguide/memman.html
    If you allocate a lot of small, short-lived objects, you might want to configure a nursery where small objects are allocated. When the nursery is full, only that small part of the heap is garbage collected, at a smaller cost than scanning the whole heap.
    If you need help tuning your application, you could make a JRA recording and send it to us: http://e-docs.bea.com/wljrockit/docs142/userguide/jra.html.
    Good luck,
    Cecilia
    BEA WebLogic JRockit
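    For example (just a sketch, not a definitive recommendation), starting the application with something like "java -Xverbose:memory -Xgc:gencon com.abco.Main" turns on the verbose GC output and the generational concurrent collector mentioned above; com.abco.Main is only a placeholder for your own startup class, and any heap or nursery sizes should be chosen per the memory management guide linked above.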

  • Relaunch and optimize taking forever - can I get some realtime help?

    I have a large, fast system running Vista 64 with 8 GB of RAM. Suddenly Outlook started running really slowly, so I launched relaunch and optimize. Optimization has been running for more than half an hour using 95-100% of memory, yet only 3 to 5% of the CPU is ever used.
    Is this normal?
    What will happen to my catalog if I reset the system?
    As soon as I started the optimization I realized that I had not backed up the catalog before starting it.
    Jim Groan

    The optimization finished. It took almost an hour and used virtually all memory during that time. Since there was nothing else going on in the system, I still wonder whether this should be considered normal behavior.
    Jim

  • SQL Tuning and OPTIMIZER - Execution Time with  " AND col .."

    Hi all,
    I have a question about SQL tuning and the OPTIMIZER.
    There are three samples with EXPLAIN PLAN and execution time.
    This "tw_pkg.getMaxAktion" is a PLSQL Package.
    1.) Execution Time : 0.25 Second
    2.) Execution Time : 0.59 Second
    3.) Execution Time : 1.11 Second
    The only difference is some additional "AND col <> .." conditions.
    Why does the execution time grow so much?
    Many Thanks,
    Thomas
    ----[First example]---
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    Connected as dbadmin2
    SQL>
    SQL> EXPLAIN PLAN FOR
      2  SELECT * FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
      3                    FROM studie
      4                 ) max_aktion
      5  WHERE max_aktion.max_aktion_id < 900 ;
    Explained
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3201460684
    | Id  | Operation            | Name        | Rows  | Bytes | Cost (%CPU)| Time
    |   0 | SELECT STATEMENT     |             |   220 |   880 |     5  (40)| 00:00:
    |*  1 |  INDEX FAST FULL SCAN| SYS_C005393 |   220 |   880 |     5  (40)| 00:00:
    Predicate Information (identified by operation id):
       1 - filter("TW_PKG"."GETMAXAKTION"("STUDIE_ID")<900)
    13 rows selected
    SQL>
    Execution time (PL/SQL Developer says): 0.25 seconds
    ----[/First]---
    ----[Second example]---
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    Connected as dbadmin2
    SQL>
    SQL> EXPLAIN PLAN FOR
      2  SELECT * FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
      3                    FROM studie
      4                 ) max_aktion
      5  WHERE max_aktion.max_aktion_id < 900
      6    AND max_aktion.max_aktion_id <> 692;
    Explained
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3201460684
    | Id  | Operation            | Name        | Rows  | Bytes | Cost (%CPU)| Time
    |   0 | SELECT STATEMENT     |             |    11 |    44 |     6  (50)| 00:00:
    |*  1 |  INDEX FAST FULL SCAN| SYS_C005393 |    11 |    44 |     6  (50)| 00:00:
    Predicate Information (identified by operation id):
       1 - filter("TW_PKG"."GETMAXAKTION"("STUDIE_ID")<900 AND
                  "TW_PKG"."GETMAXAKTION"("STUDIE_ID")<>692)
    14 rows selected
    SQL>
    Execution time (PL/SQL Developer says): 0.59 seconds
    ----[/Second]---
    ----[Third example]---
    SQL> EXPLAIN PLAN FOR
      2  SELECT * FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
      3                    FROM studie
      4                 ) max_aktion
      5  WHERE max_aktion.max_aktion_id < 900
      6    AND max_aktion.max_aktion_id <> 692
      7    AND max_aktion.max_aktion_id <> 392;
    Explained
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3201460684
    | Id  | Operation            | Name        | Rows  | Bytes | Cost (%CPU)| Time
    |   0 | SELECT STATEMENT     |             |     1 |     4 |     6  (50)| 00:00:
    |*  1 |  INDEX FAST FULL SCAN| SYS_C005393 |     1 |     4 |     6  (50)| 00:00:
    Predicate Information (identified by operation id):
       1 - filter("TW_PKG"."GETMAXAKTION"("STUDIE_ID")<900 AND
                  "TW_PKG"."GETMAXAKTION"("STUDIE_ID")<>692 AND
                  "TW_PKG"."GETMAXAKTION"("STUDIE_ID")<>392)
    15 rows selected
    SQL>
    Execution time (PL/SQL Developer says): 1.11 seconds
    ----[/Third]---
    Edited by: thomas_w on Jul 9, 2010 11:35 AM
    Edited by: thomas_w on Jul 12, 2010 8:29 AM

    Hi,
    this is likely because SQL Developer fetches and displays only a limited number of rows from query results.
    This number is a parameter called 'SQL array fetch size'; you can find it in SQL Developer preferences under the Tools/Preferences/Database/Advanced tab, and its default value is 50 rows.
    The query scans the table from the beginning and continues scanning until the first 50 rows are selected.
    If the query conditions are more selective, then more table rows (or index entries) must be scanned to fetch the first 50 results, and execution time grows.
    This effect is usually unnoticeable when a query uses simple and fast built-in comparison operators (like =, <>, etc.) or Oracle built-in functions, but your query uses a PL/SQL function that is much slower than built-in functions/operators.
    Try changing this parameter to 1000 and most likely you will see that the execution times of all 3 queries become similar.
    Look at this simple test to figure out how it works:
    CREATE TABLE studie AS
    SELECT row_number() OVER (ORDER BY object_id) studie_id,  o.*
    FROM (
      SELECT * FROM all_objects
      CROSS JOIN
      (SELECT 1 FROM dual CONNECT BY LEVEL <= 100)
    ) o;
    CREATE INDEX studie_ix ON studie(object_name, studie_id);
    ANALYZE TABLE studie COMPUTE STATISTICS;
    CREATE OR REPLACE FUNCTION very_slow_function(action IN NUMBER)
    RETURN NUMBER
    IS
    BEGIN
      RETURN action;
    END;
    /
    The 'SQL array fetch size' parameter in SQL Developer has been set to 50 (default). We will run 3 different queries on the test table.
    Query 1:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      1.22       1.29          0       1310          0          50
    total        3      1.22       1.29          0       1310          0          50
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
         50  INDEX FAST FULL SCAN STUDIE_IX (cr=1310 pr=0 pw=0 time=355838 us cost=5536 size=827075 card=165415)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
         50   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 2:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
          AND max_aktion.max_aktion_id > 800
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.01          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      8.40       8.62          0       9351          0          50
    total        3      8.40       8.64          0       9351          0          50
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
         50  INDEX FAST FULL SCAN STUDIE_IX (cr=9351 pr=0 pw=0 time=16988202 us cost=5552 size=41355 card=8271)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
         50   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 3:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id = 600
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.72      19.16          0      19315          0           1
    total        3     18.73      19.16          0      19315          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
          1  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=0 us cost=5536 size=165415 card=33083)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)Query 1 - 1,29 sec, 50 rows fetched, 1310 index entries scanned to find these 50 rows.
    Query 2 - 8,64 sec, 50 rows fetched, 9351 index entries scanned to find these 50 rows.
    Query 3 - 19,16 sec, only 1 row fetched, 19315 index entries scanned (full index).
    Now 'SQL array fetch size' parameter in SQLDeveloper has been set to 1000.
    Query 1:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.35      18.46          0      19315          0         899
    total        3     18.35      18.46          0      19315          0         899
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
        899  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=20571272 us cost=5536 size=827075 card=165415)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
        899   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 2:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
          AND max_aktion.max_aktion_id > 800
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.79      18.86          0      19315          0          99
    total        3     18.79      18.86          0      19315          0          99
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
         99  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=32805696 us cost=5552 size=41355 card=8271)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
         99   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 3:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id = 600
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.69      18.84          0      19315          0           1
    total        3     18.69      18.84          0      19315          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
          1  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=0 us cost=5536 size=165415 card=33083)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    And now:
    Query 1 - 18.46 sec, 899 rows fetched, 19315 index entries scanned.
    Query 2 - 18.86 sec, 99 rows fetched, 19315 index entries scanned.
    Query 3 - 18.84 sec, 1 row fetched, 19315 index entries scanned.

  • Analyze and optimization

    Hello everyone,
    I have been on this subject for three weeks now and I need help.
    I am a trainee in a company where I have to analyze and optimize their GPOs, as simple as that, so I have learned in detail how this tool works and other useful things about Active Directory.
    I went through their 60 GPOs (some with up to 600 settings...) and their thousands of parameters, which is essential for me, and during my searches I found many, many tools to detect parameter conflicts or duplicated settings, but after all my tries I am still not satisfied with what I have found.
    I used a trial version of GPOAdmin, the GPO Reporting Pack from SDM, probably all the Microsoft tools, ActiveAdministrator, etc. I mean, all these tools are very powerful and offer many features, but I just need something that will find and tell me where all the conflicts on my domain are, so I can correct those settings, end up with a fully optimized domain, and users won't complain anymore because they'll have a faster logon, etc.
    Maybe I don't use the products as I should, or maybe such a tool doesn't even exist, but it seems very long to analyze everything by myself, write down every parameter on each object it will be applied to, and check whether there is a conflict or another GPO for that setting.
    Maybe PowerShell can help me with this, but I don't know how to use it for that.
    So here I am; if you have any idea about the best practice, or if someone has had to do the same job as I have, tell me and I will be very happy to receive your information.
    Thanks and sorry for my English.

    > I mean that if there are 10 gpo for the domain and 10 others on children
    > UO, some parameters will be overwritten (Conflict) or the same
    > parameters will be set 5 times (Duplication).
    Yes, that's true. But setting a single registry key takes so little time that Windows cannot even resolve it in the gpsvc.log timestamps. This is from a VM running on a desktop system concurrently with 4 other VMs:
    GPSVC(478.d68) 11:48:19:813 SetRegistryValue: 1 =>
    Microsoft.CredentialManager  [OK]
    GPSVC(478.d68) 11:48:19:813 SetRegistryValue: 2 => Microsoft.GetPrograms
     [OK]
    GPSVC(478.d68) 11:48:19:813 SetRegistryValue: 3 => Microsoft.HomeGroup  [OK]
    GPSVC(478.d68) 11:48:19:813 SetRegistryValue: 4 =>
    Microsoft.iSCSIInitiator  [OK]
    GPSVC(478.d68) 11:48:19:813 SetRegistryValue: 5 =>
    Microsoft.ParentalControls  [OK]
    GPSVC(478.d68) 11:48:19:813 SetRegistryValue: 6 =>
    Microsoft.PeopleNearMe  [OK]
    GPSVC(478.d68) 11:48:19:813 SetRegistryValue: 7 =>
    Microsoft.UserAccounts  [OK]
    GPSVC(478.d68) 11:48:19:829 SetRegistryValue: 8 =>
    Microsoft.WindowsAnytimeUpgrade  [OK]
    And even here it takes only about 1 ms on average; on a real system, this is about 50 times faster.
    Martin

  • Hi, I have a MacBook (OS X 10.4.11, Core Duo). I think Tiger is already installed but it needs updating; what do I do? Thanks

    Hi, I have a MacBook running OS X 10.4.11 with a Core Duo processor. I think Tiger is already installed, but it needs updating. What do I do? Thanks.

    Hi Paul
    The current version of OS X is Snow Leopard (OS X 10.6) and you should be able to order an installation disk for your MacBook from the online Apple Store.
    Soon, a newer version of OS X (called Lion) is being released, but I don't think that it will run on a Core Duo because that particular processor is 32 bit only.
    Bob

  • Problem Modify and optimize an application

    We have a problem using modify application and optimize application on a specific application.
    The application name is GPFormat.
    When we do a modify application and choose "Reassign SQL Index" we get the following error message:
    Error message: Cannot drop the index 'dbo.tblFACTGPFormat.IX_tblFACTGPFormat', because it does not exist or you do not have permission.
    And if we do an optimize application with "Full Optimize" and "Compress database" we get the following error message:
    Error message: There is already an object named 'CONSTTBLFACTGPFORMAT' in the database.
    Could not create constraint. See previous errors.
    We are using BPC 5.1 SP5 and SQL 2005.

    It seems a previous run of optimize with compress failed.
    So you have to rename the CONSTTBLFACTGPFORMAT table and make sure that the installation user of SAP BPC has the correct access to the table tblFACTGPFormat.
    Are you using custom indexes for this table?
    I suggest dropping the existing clustered index for table tblFACTGPFormat and then running another optimize with compress. This should fix all your problems.
    Regards
    Sorin Radulescu

  • [svn:osmf:] 12659: A few code cleanup and optimization tasks for the Manifest.

    Revision: 12659
    Author:   [email protected]
    Date:     2009-12-08 10:56:56 -0800 (Tue, 08 Dec 2009)
    Log Message:
    A few code cleanup and optimization tasks for the Manifest.
    Modified Paths:
        osmf/trunk/framework/MediaFramework/org/osmf/net/F4MLoader.as
        osmf/trunk/framework/MediaFramework/org/osmf/net/ManifestParser.as

    Many thanks for the fast reply.
    I've got a follow-up question.
    What will happen if I modify the reconnect code in the OSMF NetLoader class as recommended and then load multiple third-party OSMF plugins, which may include the original OSMF version of the NetLoader class?
    Which one will be used at runtime?
    Thanks in advance!

  • New site about J2ME game programming and optimization

    Hello!
    I just wanted to tell you about a new J2ME development site
    SupremeJ2ME found at
    http://supremej2me.bambalam.se
    It has a lot of useful guides and tips about mobile J2ME game development and optimization, a forum and information about the best J2ME tools.
    Check it out!
    cheers,
    Cranky

    J2ME Polish has licensing costs associated with it:
    http://www.j2mepolish.org/licenses.html
    As for Canvas vs. GameCanvas:
    GameCanvas is MIDP 2.0 and up, so depending on which devices your app supports you might want to stick with Canvas. GameCanvas eases graphics flushing, but double buffering can be implemented on Canvas like so:
    Image offscreen = isDoubleBuffered() ? null :
               Image.createImage(getWidth(), getHeight());
    See http://www.developer.com/java/j2me/article.php/10934_1561591_8, section "Avoiding Flickering".
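    Below is a minimal sketch of how that offscreen image is typically used inside a Canvas subclass; the class name GameScreen and the drawing done into the buffer are only illustrative, not code from this thread:
        import javax.microedition.lcdui.Canvas;
        import javax.microedition.lcdui.Graphics;
        import javax.microedition.lcdui.Image;

        public class GameScreen extends Canvas {
            // Only allocate an offscreen buffer when the device does not double buffer for us.
            private final Image offscreen = isDoubleBuffered() ? null :
                    Image.createImage(getWidth(), getHeight());

            protected void paint(Graphics g) {
                // Draw into the buffer when we manage it ourselves, otherwise straight to the screen.
                Graphics target = (offscreen != null) ? offscreen.getGraphics() : g;
                target.setColor(0xFFFFFF);
                target.fillRect(0, 0, getWidth(), getHeight());
                // ... frame drawing goes here, always through 'target' ...
                if (offscreen != null) {
                    // Blit the finished frame to the screen in one call to avoid flicker.
                    g.drawImage(offscreen, 0, 0, Graphics.TOP | Graphics.LEFT);
                }
            }
        }
    On many MIDP 2.0 devices isDoubleBuffered() returns true, so no buffer is allocated and painting goes straight to the screen Graphics; in that case the manual buffer adds nothing, which is why the check is there.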

  • Can I "test" Leopard w/my bootable FW drive and still return to Tiger?

    With talk of some software not yet working well with Leopard (and my income dependent on one PowerBook G4), I'd like to live with a copy of Leopard I've been able to boot from a FireWire drive to see how things go.
    My question is, if I find some major problem I can't get around (unable to use a critical piece of software, for example) can I simply again boot off of Tiger on my internal HD and go back to normal, or will just using my documents, etc. with Leopard "upgrade" any files/structures/settings that won't allow me to then go back and use them in Tiger again?
    Any other advice on doing a test like this would be appreciated. Thanks.

    Hi Bob Mcinnis;
    You should not have any problems with your plan. It sounds like a wise method to try Leopard.
    Allan

  • Archive and Install back to Tiger

    Hello all!
    After about a month of frustration, I think I am ready to take my PowerBook back to Tiger. I see that I can do an Archive and Install from my Tiger retail disc. The only problem I see is that the "Preserve Users" setting is grayed out.
    So, if I go through this Archive and Install, will I still have access to my old apps (iLife '08, and some third party stuff like Finale), photos, documents, and music?
    Thanks for all of your help!
    Josh

    Here is the latest update:
    I am happy (albeit slightly puzzled) to report that I have been running KERNEL PANIC FREE for most of yesterday and today. Here is why I am puzzled: I booted up yesterday and the system seemed to hang. I noticed that the keyboard worked, but not the trackpad. I went and got an old Apple USB mouse. When I plugged it in, it responded. I was able to work all of yesterday afternoon and this morning without incident.
    Just for giggles, I unplugged the mouse this morning. I got a KP after about 30 seconds. Here is the report from that KP:
    Mon Sep 1 07:30:31 2008
    Unresolved kernel trap(cpu 0): 0x300 - Data access DAR=0x0000000000000578 PC=0x000000000055A6A0
    Latest crash info for cpu 0:
    Exception state (sv=0x46143280)
    PC=0x0055A6A0; MSR=0x00009030; DAR=0x00000578; DSISR=0x40000000; LR=0x00D2F7C0; R1=0x2A2C3BA0; XCP=0x0000000C (0x300 - Data access)
    Backtrace:
    0x00D74924 0x00D2F7C0 0x00D40E54 0x00D424D8 0x00D429B8 0x00D9F4AC
    0x00D35CE4 0x0035A994 0x0003F94C 0x000B05D4
    Kernel loadable modules in backtrace (with dependencies):
    com.apple.driver.AirPortBrcm43xx(314.46.9)@0xd2e000->0xe5afff
    dependency: com.apple.iokit.IO80211Family(211.1)@0xd0a000
    dependency: com.apple.iokit.IOPCIFamily(2.4.1)@0x552000
    dependency: com.apple.iokit.IONetworkingFamily(1.6.0)@0xc8e000
    com.apple.iokit.IOPCIFamily(2.4.1)@0x552000->0x565fff
    Proceeding back via exception chain:
    Exception state (sv=0x46143280)
    previously dumped as "Latest" state. skipping...
    Exception state (sv=0x34f51500)
    PC=0x00000000; MSR=0x0000D030; DAR=0x00000000; DSISR=0x00000000; LR=0x00000000; R1=0x00000000; XCP=0x00000000 (Unknown)
    BSD process name corresponding to current thread: kernel_task
    Mac OS version:
    9E17
    Kernel version:
    Darwin Kernel Version 9.4.0: Mon Jun 9 19:36:17 PDT 2008; root:xnu-1228.5.20~1/RELEASE_PPC
    System model name: PowerBook6,2
    panic(cpu 0 caller 0xFFFF0003): 0x300 - Data access
    Latest stack backtrace for cpu 0:
    Backtrace:
    0x0009B498 0x0009BE3C 0x00029DD8 0x000AF210 0x000B2A78
    Proceeding back via exception chain:
    Exception state (sv=0x46143280)
    PC=0x0055A6A0; MSR=0x00009030; DAR=0x00000578; DSISR=0x40000000; LR=0x00D2F7C0; R1=0x2A2C3BA0; XCP=0x0000000C (0x300 - Data access)
    Backtrace:
    0x00D74924 0x00D2F7C0 0x00D40E54 0x00D424D8 0x00D429B8 0x00D9F4AC
    0x00D35CE4 0x0035A994 0x0003F94C 0x000B05D4
    Kernel loadable modules in backtrace (with dependencies):
    com.apple.driver.AirPortBrcm43xx(314.46.9)@0xd2e000->0xe5afff
    dependency: com.apple.iokit.IO80211Family(211.1)@0xd0a000
    dependency: com.apple.iokit.IOPCIFamily(2.4.1)@0x552000
    dependency: com.apple.iokit.IONetworkingFamily(1.6.0)@0xc8e000
    com.apple.iokit.IOPCIFamily(2.4.1)@0x552000->0x565fff
    Exception state (sv=0x34f51500) P
    Not sure why it is working, but I am not complaining either.
    Thanks again for your help and insight!
    Josh

  • [svn:fx-trunk] 5028: IGraphicElement interface clean-up and optimizations.

    Revision: 5028
    Author: [email protected]
    Date: 2009-02-20 16:02:17 -0800 (Fri, 20 Feb 2009)
    Log Message:
    IGraphicElement interface clean-up and optimizations.
    Animating a GraphicElement that doesn't share the Group's DO should now be faster and smoother, since redrawing it won't redraw the Group anymore.
    1. Group doesn't always clear the first sequence of display objects now
    2. Moved the shared DO logic almost entirely into Group
    3. More granular invalidation for GraphicElements
    QE Notes: Make sure we have tests that count the number of display objects for a given set of graphic elements and a group
    Doc Notes:
    Bugs: None
    Reviewer: Glenn, Ryan, Jason
    tests: None
    Modified Paths:
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/components/Group.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/components/baseClasses/GroupBase.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/core/InvalidatingSprite.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/graphics/BitmapGraphic.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/graphics/IGraphicElement.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/graphics/StrokedElement.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/graphics/baseClasses/GraphicElement.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/graphics/baseClasses/TextGraphicElement.as
    flex/sdk/trunk/frameworks/projects/framework/src/mx/core/IVisualElement.as
    Added Paths:
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/graphics/ISharedDisplayObject.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/graphics/baseClasses/ISharedGraphicsDisplayObject.as

Maybe you are looking for

  • 'connect without context' profile takes away data/view data rights in model

    Demystify ODI Security -- documentation is very weak on this topic. Non-generic profiles: this concept works perfectly well for projects/folders/packages. But as soon as I bring in the new profile CONNECT_WITHOUT_CONTEXT, the user loses the ability t

  • Asking for Suggestions

    I am here to ask for suggestions on how to approach Verizon Wireless about an error they made that they want me to pay for. You see, I send my payment on the 1st business day of the month, every month, even though the bill is not due until the 17th

  • DPS6: Unable to retrieve a backend SEARCH connection to process the search

    I have installed and configured DPS 6 but I cannot get it to proxy through to our back-end Sun DS 5.2 servers. The error message I get is: Error while reading entry  [LDAP: error code 1 - Unable to retrieve a backend SEARCH connection to process the

  • File Saving with Linked Images

    Why do file saves take longer when images are linked (not embedded)? It's just linking to the file; it should be nice and quick to save, as it is in InDesign.

  • When I try to submit my podcast to the iTunes Store I get an 11111 error about cookies

    Every time I try to submit my podcast to the iTunes Store I get an error that says: we could not complete your iTunes Store request. An unknown error occurred (11111). There was an error in the iTunes Store, please try again later. So I try again a