Conclusion so far:  WebX performance beats Jive by a ratio of 30 to 1

After four days of using the new Jive forums, I've come to the conclusion that I have dedicated about three times the usual amount of time to the Adobe forums and have accomplished less than one tenth of what I used to do in a similar amount of time. 
That works out to a 30-to-1 ratio in favor of the old forum format over what we've ended up with here so far.
I'm basing this calculation on the number of hours spent here since last Sunday versus the few satisfying threads I have been able to read as well as the remarkably little assistance I've been able to give.
These results are naturally somewhat skewed by such things as the proportionally inordinate amount of time spent in the Forum Comments forum, as well as reading and commenting on forum complaints in the other forums I've visited, to wit:  Photoshop Macintosh, Bridge Macintosh, Camera Raw, DNG, Color Management, Photography, Typography, Type, Illustrator, InDesign and this Forum Comments forum.
While this is a conclusion based on my subjective use of the forums and will necessarily differ from what other users experience, it does give me a somewhat quantifiable measure of the magnitude of the problem as it affects me personally.
Now I don't feel as bad as I did this morning when I feared I was not putting enough effort into adapting to the change as I increasingly came to the realization that this just isn't worth it anymore.
There will be dozens of critics who will be very happy to see me participating less and less every day, so not everyone loses. 
As I've said before these last few days, it just doesn't seem worth it anymore. 
This will not be a surprise to Adobe, as we were told during the preview this past month that they had fully weighed and accepted the probability that some of us would just fade away and would eventually be replaced by new blood.
At this point I just want to make sure that I express my gratitude to the many contributors from whom I've learned so much over the years.  At the risk of inevitably forgetting to mention many, I'll start by bowing to the memory of the late, lamented Bruce Fraser and the also departed John Slate, and then thanking living contributors such as Thomas Knoll, creator of Photoshop and Camera Raw, Chris Cox, Eric Chan, Jeff Schewe, Ian Lyons, Neil Keller, Wade Zimmermann, Buko, Ann Shelbourne, Phos±, Marco Ugolini, Lou Dina, Andrew Rodney, Professor Gernot Hoffmann, Thomas Phinney, Dov Isaacs, Doug Katz, Welles Goodrich, Ed Hannigan, Conrad Chavez, and others whose names escape me now.

As I've said before these last few days, it just doesn't seem worth it anymore.
That is the conclusion I have regretfully reached too.
I have found the Adobe Forums to be interesting, stimulating and FUN for more than ten years, and have met some wonderful people through them.
Unfortunately, attempting to post in the new Format has become an exercise in frustration.
This is directly due to the terrible performance of the Jive software;
the incompetence of the engineers, who seem incapable of providing the most basic levels of Usability (such as effective navigational tools);
the dreadful design and functionality (or the total lack of same!) of the Reply box; and the abysmal performance of the Servers in regards to speed (even refreshing a page is painfully slow).
Adobe certainly bought a Pig-in-a-Poke when they awarded the contract to Jive — and no amount of Lipstick can hide its ugliness.

Similar Messages

  • Conclusions so far on the display problems?

    After reading thread after thread about the display problems this is what I understand:
Most of the 20-inch Alu-iMacs seem to have problems; not being able to show solid colours without a gradient from top to bottom is the biggest issue.
A lot of the 24-inch Alu-iMacs have problems with a left-to-right gradient, often white to yellowish white.
Apple is not admitting that the Alu-iMacs have general display problems (maybe due to too-cheap display technology?).
We don't know for sure whether it is hardware or software related.
Is wait-and-see the best practice for someone who wants to buy a new 20-inch Alu-iMac?
    /Krister

    What I'd like to know is why? Could the bottom lamp be getting more power than the top one from the inverter board?
I received a lot of these for our labs and most of them (I haven't opened them all yet) have lower lamps that appear almost twice as bright as the top lamps... It's in the hardware, and I don't know what to do about it.
    So, where is the problem? Traditionally the lamps (usually two below and two above) are identical models, so could this be an inverter issue? Has anyone tried to get a voltage reading from the lamps to see how much output the inverter is giving them? (Not at all suggesting, just asking. I know what Apple would say if I asked people to go cracking their iMacs open.)
    Could the layer of glass in the LCD be refracting more light toward us in the bottom 2-3 inches and away from us for the top 2-3 inches?
    It's frustrating to experience, but worse to not even know what causes it. Because, to Apple engineering... What problem? There must be one bad LCD out there... and we all have it.
    Anyway, anyone who has any light to shed on the subject I'd love to hear it... (just add it to the top few inches of my display.)
    -John

  • Performance issue in Report (getting time out error)

Hi experts,
I am doing performance tuning for a report (getting a time out error).
Please see the code below.
While looping at internal table IVBAP, after 25 minutes it shows a time out error at this point ->
SELECT MAX( ERDAT ).
Please suggest alternate code for this.
Thanks in advance,
Nagendra
* Get Sales Order Details
    CLEAR IVBAP.
    REFRESH IVBAP.
    SELECT VBELN POSNR MATNR NETWR KWMENG WERKS FROM VBAP
       INTO CORRESPONDING FIELDS OF TABLE IVBAP
         FOR ALL ENTRIES IN IVBAK
           WHERE VBELN =  IVBAK-VBELN
           AND   MATNR IN Z_MATNR
           AND   WERKS IN Z_WERKS
           AND   ABGRU = ' '.
* Check for Obsolete Materials - Get Product Hierarchy/Mat'l Description
      SORT IVBAP BY MATNR WERKS.
      CLEAR: WK_MATNR, WK_WERKS, WK_PRDHA, WK_MAKTX,
             WK_BLOCK, WK_MMSTA, WK_MSTAE.
      LOOP AT IVBAP.
          CLEAR WK_INVDATE.                                   "I6677.sn
          SELECT MAX( ERDAT ) FROM VBRP INTO WK_INVDATE WHERE
          AUBEL EQ IVBAP-VBELN AND
          AUPOS EQ IVBAP-POSNR.
          IF SY-SUBRC = 0.
              MOVE WK_INVDATE TO IVBAP-INVDT.
              MODIFY IVBAP.
        ENDIF.                                               "I6677.en
          SELECT SINGLE * FROM MBEW WHERE             "I6759.sn
          MATNR EQ IVBAP-MATNR AND
          BWKEY EQ IVBAP-WERKS AND
          BWTAR EQ SPACE.
          IF SY-SUBRC = 0.
             MOVE MBEW-STPRS TO IVBAP-STPRS.
             IVBAP-TOT = MBEW-STPRS * IVBAP-KWMENG.
             MODIFY IVBAP.
          ENDIF.                                      "I6759.en
        IF IVBAP-MATNR NE WK_MATNR OR IVBAP-WERKS NE WK_WERKS.
          CLEAR: WK_BLOCK, WK_MMSTA, WK_MSTAE, WK_PRDHA, WK_MAKTX.
          MOVE IVBAP-MATNR TO WK_MATNR.
          MOVE IVBAP-WERKS TO WK_WERKS.
          SELECT SINGLE MMSTA FROM MARC INTO MARC-MMSTA
            WHERE MATNR = WK_MATNR
            AND   WERKS = WK_WERKS.
          IF NOT MARC-MMSTA IS INITIAL.
            MOVE '*' TO WK_MMSTA.
          ENDIF.
          SELECT SINGLE LVORM PRDHA MSTAE MSTAV FROM MARA
            INTO (MARA-LVORM, MARA-PRDHA, MARA-MSTAE, MARA-MSTAV)
            WHERE MATNR = WK_MATNR.
          IF ( NOT MARA-MSTAE IS INITIAL ) OR
             ( NOT MARA-MSTAV IS INITIAL ) OR
             ( NOT MARA-LVORM IS INITIAL ).
             MOVE '*' TO WK_MSTAE.
          ENDIF.
          MOVE MARA-PRDHA TO WK_PRDHA.
          SELECT SINGLE MAKTX FROM MAKT INTO WK_MAKTX
            WHERE MATNR = WK_MATNR
              AND SPRAS = SY-LANGU.
        ENDIF.
        IF Z_BLOCK EQ 'B'.
          IF WK_MMSTA EQ ' ' AND WK_MSTAE EQ ' '.
            DELETE IVBAP.
            CONTINUE.
          ENDIF.
        ELSEIF Z_BLOCK EQ 'U'.
          IF WK_MMSTA EQ '' OR WK_MSTAE EQ ''.
            DELETE IVBAP.
            CONTINUE.
          ENDIF.
        ELSE.
          IF WK_MMSTA EQ '' OR WK_MSTAE EQ ''.
            MOVE '*' TO WK_BLOCK.
          ENDIF.
        ENDIF.
        IF WK_PRDHA IN Z_PRDHA.                                    "I4792
          MOVE WK_BLOCK TO IVBAP-BLOCK.
          MOVE WK_PRDHA TO IVBAP-PRDHA.
          MOVE WK_MAKTX TO IVBAP-MAKTX.
          MODIFY IVBAP.
        ELSE.                                                     "I4792
          DELETE IVBAP.                                           "I4792
        ENDIF.                                                    "I4792
        IF NOT Z_ALNUM[] IS INITIAL.                              "I9076
          SELECT SINGLE * FROM MAEX                               "I9076
            WHERE MATNR = IVBAP-MATNR                             "I9076
              AND ALNUM IN Z_ALNUM.                               "I9076
          IF SY-SUBRC <> 0.                                       "I9076
            DELETE IVBAP.                                         "I9076
          ENDIF.                                                  "I9076
        ENDIF.                                                    "I9076
      ENDLOOP.

Hi Nagendra!
You have used many SELECT queries within LOOP ... ENDLOOP, which is a big hindrance as far as performance is concerned; avoid that practice. Do the selections outside the loop into internal tables, and then use READ TABLE inside LOOP AT IVBAP if required, instead of the per-row SELECT MAX( ERDAT ) from VBRP and the SELECT SINGLE statements on MBEW, MARC, MARA, MAKT and MAEX.
Also check that IVBAK is not initial before the SELECT ... FOR ALL ENTRIES IN IVBAK from VBAP; if the driver table is empty, FOR ALL ENTRIES drops that restriction and selects far more than you intend.
Thanks
Deepika

  • SQL 2014 performance

    We have a fresh install of SQL 2014 running on Windows 2012 R2 running in Production. 
    We tested this extensively first and most of our code ran as fast as SQL 2012 or slightly better. 
    Pure engine processes such as Backup jobs run about 30% faster. 
    However, now that we’re in Production for the first day we’re seeing some very sporadic performance. 
    For example, 90% of our jobs are running just fine.  10% of the jobs are running significantly longer. 
    Jobs that used to take 7-8 minutes are now running in 40 minutes plus. 
    Not only is this slowing down our server in general but it’s causing a lot of blocking issues.
    We’re convinced this is a SQL 2014 issue and not hardware. 
    Using iometer we can tell that this server outperforms our old server. 
SQL doesn't always reflect the performance numbers we're seeing from iometer when compared to our old server.
    We’re running on RTM, not Cumulative Update 1.
    Our hardware is a beast of a server; DL580 G8, 384 GB ram, only 2 physical hard drives (running the OS/SQL) and the rest are Fusion iO cards. 
    We have 9 total Fusion cards for the database, log and tempdb. 
    Our old server is also a very robust server: DL380 G8, 384 GB ram, 48 physical drives and 2 Fusion iO cards for tempdb.
    Has anyone else deployed SQL 2014?  Has anyone else encountered these sporadic performance issues? 
    If so, did cumulative update 1 fix the issues?
    Separate question: If I put my database in 2012 compatibility mode and do a backup, can SQL 2012 read the SQL 2014 backup?
    Thanks in advance,
    André

We applied CU2 last night.  Things seem to be better this morning but we're still seeing some differences from SQL 2012.  Here are the biggest issues we're seeing.
1 - Views are slow.  We have queries that use views that are taking forever.  In some cases the view might be a simple "select 10 fields from table" type of view.  If we change our query to use the base table instead of the view, the query finishes very quickly.  As an example, a query using the view takes 3 minutes; if we change the query to use the base table instead of the view it finishes in 5 seconds.
2 - Inner joins are slow compared to "not exists" or "exists".  Back when we migrated from SQL 2000 to 2008 we were told that joins were faster, so we migrated our code to use joins and our code flew.  Now we're seeing performance issues on some, but not all, queries that use joins.
3 - Inner joins are not able to see the outside table.  For example, this query worked just fine in SQL 2012, but it throws an error in 2014:
select top 100
from customers c (nolock)
inner join (select corpnumber, fdate, code
            from dbo.FN_Customers_ParseCodes(c.rcode, c.corpnumber, c.fdate)) r
  on r.corpnumber = c.corpnumber
where c.importtimestamp = '2014-06-30 15:37:17.347'
  and c.customerid = '123'
The error it throws is:
Msg 4104, Level 16, State 1, Line 2
The multi-part identifier "c.codes" could not be bound.
Msg 4104, Level 16, State 1, Line 2
The multi-part identifier "c.corpnumber" could not be bound.
Msg 4104, Level 16, State 1, Line 2
The multi-part identifier "c.fdate" could not be bound.
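For what it's worth, those "could not be bound" errors are the classic sign that a derived table cannot reference columns of the other tables in the FROM clause; passing outer columns into a table-valued function is what CROSS APPLY is for. A sketch of that rewrite (the select list here is illustrative, since the original one did not survive the formatting):
select top 100 c.customerid, r.corpnumber, r.fdate, r.code
from customers c (nolock)
-- CROSS APPLY evaluates the function once per row of c, so c's columns are in scope
cross apply dbo.FN_Customers_ParseCodes(c.rcode, c.corpnumber, c.fdate) r
where c.importtimestamp = '2014-06-30 15:37:17.347'
  and c.customerid = '123'
  and r.corpnumber = c.corpnumber  -- the old join predicate, kept as a filter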
    André

  • Java Plug in 1.4 Performance

    I'm getting really poor performance on my applet using the Java plug in that I just installed from JDK 1.4 running under Win 2000.
    What's the deal?
    Hate to say it, but the MS VM blows this away as far as performance goes.
    Also, some of the applet resources (gif files) did not load from the JAR properly. I assume this is a bug, as it worked fine under JDK 1.3 / Java plug-in 1.3
    Drew

    Hey,
I too have noticed really poor performance with the 1.4 plugin. I have spent the past couple of days banging my head against a wall trying to get to the bottom of the problem, but here is all that I have found (this is for 1.4.2_02):
    1. When you run the plugin to a web site that runs over https, the applet jar caching mechanism doesn't seem to work. Or at least, when I access my app/applet on my local box that runs over http, everything works fine. But when I try the app/applet combination against one of our staging environments that operates over https/SSL, I see the applet .jar files loaded multiple times. This definitely results in a slow down.
2. We use RSA signed .jar files to deliver our applets (these are signed with the Netscape signtool). There is something about the signature confirmation that the plugin class loader is doing that is slowing the applet WAY down (3-4 times as slow). For example, if I run our applet code as a 'stand alone' application (i.e. run from a 'main' method using the same VM as the plugin) it takes about 10-12 seconds to initialize. But if I run it as an applet from a signed .jar file in the plugin it takes about 60 seconds to initialize (ouch). Finally, if I use the plugin to run the applet but use UNSIGNED .jar files to deploy it, and add the correct policy entries into my java.policy file for the plugin so that the applet runs with full permissions, I'm back to running at 10-12 seconds. So these three things lead me to believe that the problem is not with the SSL implementation (in the third case I'm still using SSL to talk between the applet/server) but rather has to do with the signature confirmation performed by the plugin class loader.
3. If I run my applet in the plugin but place the .jar files on my hard drive and then set them to be in the boot class path of the plugin (using the -Xbootclasspath/a flag) so that they are loaded using the default class loader, my applet also initializes in 10-12 seconds. Once again, this seems to point to the plugin class loader being the culprit for the slowdown.
I hope this helps somebody, and that somebody can shed some light on my problems.
    Thanks
    Andy Peterson

  • Terrible performance after moving VM's to new hardware

We recently decided to purchase a new server to run our VM's on, since the old one did not have redundant drives. Unfortunately our new and improved server has resulted in far worse performance now that I've moved the VM's over to it! I am not really sure why, but can only assume it has to do with the drive configuration.
Basically, when the VM's are running on the new virtual server host, it takes a long time just to log in to the VM's (with or without a roaming profile). General use once you are logged in seems OK, but it is much slower than before. We have a DC as one of the VM's, and when it was on the new server it was taking a very long time to provide AD group membership information, resulting in some applications that request this information performing very badly. As a result, I had to move that VM back to the old server.
    The new server has the following specs:
    Dell PowerEdge R620
    2x Xeon E5-2630v2 2.6GHz processors
    64GB RAM
    PERC H710 integrated RAID controller, 512MB NV CACHE
    6x 1TB 7.2K RPM 6Gps SAS 2.5" HDD
    O/S (C drive) configured on RAID 1 using 2x 1TB drives, GPT partition
    Hyper-V storage (D drive) configured on RAID 10 using 4x 1TB drives on 2 spans, GPT partition on simple volume in Windows formatted with 64KB cluster size
    Running Windows Server 2012 Datacenter R2 with Hyper-V
    The old server has the following specs:
    Dell PowerEdge 2950 (2007 model)
    2x Xeon E5335 2.0GHz
    24GB RAM
    Unknown RAID controller
    4x 300GB 15K SAS drives running with NO RAID. Each drive holds 2-3 separate VM's.
    Running Windows Server 2012 Datacenter with Hyper-V
    The new server is currently only running 6 VM's and already displaying the performance problems. We have over 10 VM's that need to be hosted ideally. The current 6 VM's which are running include our production Exchange 2010 server (only 10 staff with an
    80GB VHD), Windows Update Server (100GB VHD), SQL 2000 server with limited use (100GB VHD), FTP server with minor use, build server for .NET development with infrequent use and a basic server which is used for licensing/connection services for our own applications
    we develop (low use).
    The only thing I can think of is the difference in the disk/RAID configuration. But it seems this configuration I am using is quite popular and therefore I would expect it to easily handle 6 VM's given the specs of the server?
    Any help much appreciated.

Hi, your hardware setup looks good and should perform nicely with 6 VM's. It could be network related, but make sure you have write cache allocated to your VM drive.
I presume you installed all the latest drivers and firmware updates for 2012 R2 on your new server, for the network and storage controller etc.
I've seen slow VM's when the network isn't performing or is configured wrong. You could test a non-domained server with no network connectivity, or boot an existing one into safe mode without networking, just to see how it performs.
Ruling out network, make sure, if you have any AV, that you have excluded these directories:
• Default virtual machine configuration directory (C:\ProgramData\Microsoft\Windows\Hyper-V)
• Custom virtual machine configuration directories
• Default virtual hard disk directory (C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks)
• Custom virtual hard disk directories
• Snapshot directories
• Vmms.exe
• Vmwp.exe
Also make sure you have disabled C-States in your system BIOS, and that you have set any power plans to maximum performance.
Mixed VHD and VHDX will have no effect on your underlying disk performance.
You can check the disk performance on your host server using Resource Monitor; queue length needs to be less than 1 for your RAID 10 disk.
Hope this helps. Let us know how you get on.
Regards
Mark

  • Improving G5 performance

    Hello,
I've got a G5 dual-core 2.0GHz (PCI-e model) and I'd like to boost its performance. I'm considering two options, but can only afford one. I'd like some help deciding which would have the greatest effect for my needs.
    Option 1: upgrade from 2.5GB to 4.5GB of RAM
    Option 2: upgrade from 6600 to x1900 graphics card
    I'm a fine art photographer, and I use Aperture and Photoshop daily. I do most of my editing in Photoshop, and basically use Aperture as an image management system. If I had to choose between improving Aperture performance, or improving Photoshop, I'd probably choose Photoshop. (Of course I'd prefer both!)
    I understand that the x1900 will improve Aperture a lot, but will it affect Photoshop at all? Will it have any effect on the general GUI "responsiveness" and applications like Safari, iCal, etc.?
    I'm guessing the 2 more gigs of RAM will help Photoshop, but how much? Will it be significantly noticeable? Will more RAM help Aperture noticeably?
    Thanks for your help understanding the finer points of this.
    Best,
    Chris

Thank you for your very thorough reply. I have a couple of questions and comments (below).
    Option 1: upgrade from 2.5GB to 4.5GB of RAM
    I did go ahead and upgrade to 4.5GB of RAM. So far, any performance improvements that might have happened are barely noticeable. Actually, the app I notice the most change in is Aperture, which is nice.
The more you fill a boot drive (or any drive) over 50%, the slower it will perform, since you are caching/swapping memory to the boot drive constantly due to insufficient RAM; this makes everything slow.
    I'm using a "sandbox" set-up with a backup/cloning program called SuperDuper. I have a 74GB Raptor set as the sandbox, which means it has a copy of the applications and system installed on it but nothing else. Of 70GB available, 15GB are used - leaving roughly 80% of free space.
    The internal HD (400GB, 7200 RPM) has my system and user files on it. That drive has 150GB used, so about 60% remains. It would seem disk space is not the problem, right?
    You probably know how a sandbox works, but for those who don't... it enables me to install updates to the system and apps and test them for a while. If there aren't any problems, I use SuperDuper to "smart update" the HD with new changes. If there are problems, I simply boot back to the HD and then roll back the sandbox to the state it was in prior to the updates.
    Finally, I back up that internal 400GB HD to an external 400GB FW drive.
    I don't know what effect, if any, the sandbox configuration has on speed.
If you go through the steps of a backup/c boot from the Tiger installer / disk utility erase with zero / fresh install of Tiger / update / install apps and update, and all that for the boot drive, your machine will perform better as well, as this will optimize the drive as well as eliminate any glitches that have occurred which may be affecting your performance in other programs.
    I understand everything you've said here, except for "backup/c boot". What is that?
If you just run one major heavy-duty program at a time, it will have all the RAM for itself, instead of sharing it. Supposedly 3.5 GB is what Photoshop CS2 could optimally use, and then you need 1 GB for Mac OS X. If you have Dashboard widgets up the kazoo, that eats memory (reboot to clear).
    I have five widgets active, and maybe 20 in the toolbar (do those use memory too?)
Use X-Bench to test your boot drive speed. You're looking for the fastest 4K uncached write speed you can get.
    I've never done this. Is it self-explanatory if I download x-bench?
    Are you saying you have a RAID 0 with two 10,000 RPM drives? (Sorry, I know very little about storage tech). If so, with only two internal drives on the G5, where do your files go?
    Thanks,
    Chris

  • Performance question - Streaming content in a Page Viewer

When pointing to streaming video, for example using a SharePoint Page Viewer: does the video stream go from the content provider straight to the user's desktop? I.e., there's no issue with the SP server as far as performance is concerned?

    Hi USP, correct, the two are treated separately, just like you have two browsers open at the same time. The SP server won't be affected. Here's a link that gives you an idea on how it works:
    http://technet.microsoft.com/en-us/library/cc767496.aspx
    cameron rautmann

  • Very disappointed with my Mac so far

    I just took receipt of my first ever Mac yesterday after being convinced by several friends as to how cool Macs are and particularly impressed by promises that "Macs just work" (as opposed to PCs, which just give you lots of bother). After reading the (very short) manuals that came with it cover to cover, I started up and tried to use the thing. My experience was that a Mac is every bit as cantankerous as any PC.
    First off, it didn't correctly understand my keyboard. I'm using a bog standard UK 105 key PC keyboard (which Apple claim will work on a Mac), but it did exactly the same as many PCs - it thought that the @ key was a " and vice versa. On a PC I would fix this by going to Regional Settings and changing the keyboard locale to UK, but on the Mac I had already done this! Furthermore, that keyboard viewer app doesn't show the shifted key values (not that it would have helped if it had - it would only have confirmed that the Mac is confused).
    Next, I couldn't get my ADSL modem to work. It's a BT Voyager 105 modem - one of the most common such modems in use in the UK, and at the end of the installation process the Mac was of the opinion that it wasn't attached. I will concede, however, that this could be because the driver on the modem's drivers disk was only labelled as being for up to OSX 10.3, whereas I'm using Tiger, so I've downloaded the latest and will try that tonight when I get home.
    Then there's the bug in the online help, and I have a feeling I saw this one many years ago when I first tried playing with a Mac. If you fire up Mac help and start navigating though it, you will encounter a few pages that immediately go into an infinite refresh loop, which means that the page flickers in a very nasty way and you can't scroll it. Also some of the links don't actually go anywhere.
The online help and documentation is poor anyway - I couldn't find the list of standard keyboard shortcuts anywhere (I did find a list of specifically configured shortcuts in the Preferences | Keyboards section, but that's not what I'm after here). For example, it was only by experimentation that I discovered that the way to jump between applications is Command-Tab. I never did discover how to launch something from the Finder via the keyboard. In Windows I would just select it and hit Return or Enter, but that did nothing in Tiger.
    Also, why doesn't the maximise button work on all applications? I came across one during my exploits (can't remember which now) which when I clicked the maximise button did enlarge the window, but not so as to fill the desktop - instead I had to manually resize it.
    So, all in all I am not impressed. The graphics are fancy (but how they compare with Vista when it releases remains to be seen - of course the Mac does manage it with a lot less horsepower than Vista needs, and that is to be commended), but that's just appearance - I'll happily take a green screen dumb terminal if it gets the job done quickly and well. If I don't start enjoying using this machine and quick it's going straight back to Apple for a refund.

Well, it's been four days now with my new machine and I'm pushing through the learning curve reasonably. Thanks to all those who have contributed to this thread. The keyboard thing is indeed because Apple's UK keyboard layout is not in line with UK PC keyboards. Since Apple tout the mini as BYODKM and clearly want switchers, the failure to provide keyboard mappings/resource files for PC keyboards from the world's various locales is a disappointing omission. I now know to use DeRez and Rez to configure my own keyboard mapping, but of course there was nothing in the help to tell me that. Indeed, I never found anything telling me that the software development stuff needed to be explicitly installed by me - a friend had to tell me this.
    I've been playing with the iLife suite a bit, and was initially impressed but then less so. I made my first ever DVD (with videos and slideshows - not a DVD-ROM) using iDVD. Boy was that an experience - it was just so easy and the results looked so good, at first. Amazing menu buttons that contained clips of the bits of video that they themselves linked to, backing track, beautifully done templates - absolutely marvellous. Until I came to actually use the DVD in a DVD player. The iPhoto slideshow (now that was impressive - Ken Burns effect, 3 seconds per picture, fade to the next, backing track) that I had exported to iDVD had lost a lot of quality in the export (I did have the option to select a quality level and I had picked DVD or whatever seemed appropriate at the time). One of the menu buttons didn't actually link to its video clip, but just lost focus and the navigation buttons would not regain focus - the only way to get control was to back to the top menu and start again. While watching one of my video clips it suddenly jumped into another one for a few seconds, then back to the original, then away to the second, and so on.
    The Mac still doesn't recognise that my modem isn't plugged in and I've accepted that it isn't going to. The bug in the help system is reproducible, although when I found it again I phoned up a friend who has a Mac and asked him to go to the same page, but his worked fine. This might be a difference between 10.4.5 and his 10.4.7 - I'd upgrade but I can't because I have no modem connection.
    My conclusions so far:
    a) The physical machine is gorgeous, but one can build PCs along the same lines
    b) OSX is reputedly (and I have no reason to doubt this) incredibly stable, being Unix based
    c) The rest is extremely pretty window dressing hiding disappointing functionality
    d) The documentation is so crap that I get a sense of arrogance - it's almost as though Apple actually believe their products are so intuitive that no one could possibly need any additional help.
    e) The people who suggested that I'm just too set in my ways to be bothered learning a new way of doing things are correct to an extent. If I had evidence that I would be getting something significant in return for expending this effort, I would be prepared to soldier on, but that isn't happening. I am coming more and more to the conclusion that I am going to send this back and get a silent mini-itx or nano-itx PC instead.
    Sorry folks - one falls by the wayside.

  • Help me tuning performance ( Plan pasted inline)

I am a newbie as far as performance tuning is concerned.
Please check the plan. I don't understand much of it. Can you figure out where exactly it's chewing time or looping? I can post the actual query on request.
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=112 Card=2 Bytes=296)
    1 2 RECURSIVE EXECUTION OF 'SYS_LE_2_0'
    2 0 TEMP TABLE TRANSFORMATION
    3 2 SORT (UNIQUE) (Cost=112 Card=2 Bytes=296)
    4 3 UNION-ALL
    5 4 VIEW (Cost=2 Card=1 Bytes=231)
    6 5 TABLE ACCESS (FULL) OF 'SYS_TEMP_0FD9D663D_F61AD09C' (Cost=2 Card=1 Bytes=175)
    7 4 NESTED LOOPS (Cost=104 Card=1 Bytes=65)
    8 7 MERGE JOIN (CARTESIAN) (Cost=103 Card=1 Bytes=34)
    9 8 TABLE ACCESS (BY INDEX ROWID) OF 'MYTABLE1' (Cost=1 Card=1 Bytes=22)
    10 9 INDEX (RANGE SCAN) OF 'MYTABLE3' (NON-UNIQUE) (Cost=1 Card=1)
    11 10 TABLE ACCESS (FULL) OF 'TABLE4' (Cost=2 Card=1 Bytes=28)
    12 8 BUFFER (SORT) (Cost=102 Card=8 Bytes=96)
    13 12 TABLE ACCESS (FULL) OF 'MYTABLE2' (Cost=102 Card=8 Bytes=96)
    14 7 TABLE ACCESS (BY INDEX ROWID) OF 'MYTABLE4' (Cost=1 Card=1 Bytes=31)
    15 14 INDEX (UNIQUE SCAN) OF 'MYTABLE5' (UNIQUE)
    16 15 VIEW (Cost=2 Card=1 Bytes=26)
    17 16 TABLE ACCESS (FULL) OF 'SYS_TEMP_0FD9D663D_F61AD09C' (Cost=2 Card=1 Bytes=175)
Thanks in advance.
Rachit

Hi Rachit,
You need to do away with the full table scans and the MERGE JOIN (CARTESIAN), which I could find in the explain plan provided.
A merge join Cartesian is usually because you are missing a join condition in the WHERE clause.
Full table scans are also a problem if you have huge data in the tables; you need to eliminate them as well.
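For example (hypothetical tables, just sketching the point above), leaving one table unjoined produces a Cartesian product, which shows up in the plan as MERGE JOIN (CARTESIAN):
-- missing join predicate: every row of mytable1 is paired with every row of mytable2
SELECT t1.id, t2.val
FROM mytable1 t1, mytable2 t2
WHERE t1.status = 'OPEN';
-- adding the join condition removes the Cartesian:
SELECT t1.id, t2.val
FROM mytable1 t1, mytable2 t2
WHERE t1.status = 'OPEN'
  AND t2.t1_id = t1.id;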
    Vig

  • EA6400 Poor Performance in Townhome Community

    Though I am highly technical with 30+ years experience in IT, never have I joined a forum until today -- driven by frustration.
    I've been using a TP-link WR1043 as a WAP for several years with moderate success, and certainly the device was a good value, but the number of wireless devices in the home has grown to 9, the overlap from neighbors has increased to seeing 8 WAPs from my bedroom, and the family's tolerance of network problems has plummeted.  So I thought to bury the problems by purchasing this EA6400 WAP, courted by its reviews, the "high power amplifier", the "beam-forming technology", the number of devices supported ("class D"), and the higher price ("you get what you pay for").
So far, its performance is verifiably worse on wireless than the device it replaced.  Media played on tablets and phones (less than 1 year old) from many sites is jerky with unsynchronized sound, and on two phones even the static browser displays are broken up.  Using the "WiFi Analyzer" android app (with its beautiful displays), I noted that the signal strength from this gadget is less than the previous WAP's nearly everywhere in the house, and that 2 neighbors' WAP signals are now stronger than mine in most of my house.  I get a "green" signal strength reading only when less than 10 feet from the device.  However, even physically close to the device, the performance is spotty.
The former WAP had several parameters to play with, plus there were 3 external antennas that could be adjusted, so I could often tune a better experience. This gadget provides very little in the way of user-accessible parameters to adjust, and the antennas are internal.
    For example, I originally set the channel to "auto" and it has invariably picked the most congested channels overlapping the neighbors, so I set it to 11 which was quieter (for at least 15 minutes before I made the change).  It doesn't seem to have helped performance however.
    Does this experience match with your own?  Is this particular unit a lemon that should be replaced?
    Will interference from at least 8 other WAPs make the problems perpetual?  Does the available repeater provide noticeable improvement?  Plus any other advice ...

Hi, try to check on the following:
• Upgrade the router's firmware
• Reset the router after the firmware upgrade
• Optimize the router's wireless settings
• Use a wifi analyzer like http://www.metageek.net/products/inssider/ to select a non-overlapping channel for the router
• Relocate the router to a more central location for better wireless coverage
• Avoid placing the router on a glass or metallic surface

  • Multi table inheritance and performance

I really like the idea of multi-table inheritance, since I have a main
class and three subclasses which each just add one integer to the main class.
It would be a waste to spend 4 tables on this, so I decided to put them
all into one.
My problem now is that when I query for a specific class, Kodo will build
SQL like:
select ... from table where
JDOCLASSX='de.mycompany.myprojectname.mysubpack.classname'
This is pretty slow when the table grows, because string comparisons are
awful - and even worse: the database has to compare nearly the whole
string because it differs only in the last letters.
Indexing would help a bit, but wouldn't outperform integer comparisons.
Is it possible to get Kodo to do one more step of normalization?
Having an extra table containing all class names and IDs for them (and
references in the original table) would improve performance of
multi-tables quite a lot!
Even with standard classes it would save a lot of memory to not have the full
class name in each row.

    Stefan-
    Thanks for the feedback. Note that 3.0 does make this simpler: we have
    extensions that allow you to define the mechanism for subclass
    identification purely in the metadata file(s). See:
    http://solarmetric.com/Software/Documentation/3.0.0RC1/docs/manual.html#ref_guide_mapping_classind
    The idea for having a separate table mapping numbers to class names is
    good, but we prefer to have as few Kodo-managed tables as possible. It
    is just as easy to do this in the metadata file.
In article <[email protected]>, Stefan wrote:
First of all: thanks for the fast help; this one (IntegerProvider) helped and
solves my problem.
Kodo is really amazing with all its places where customization can be
done!
Anyway, as a wish for future releases: exactly this technique - using
integers as class identifiers rather than the full class names - is what I
meant by "normalization".
The only thing missing is a table containing information on how classIDs
are mapped to class names (which is now contained as an explicit statement
in the .jdo file). This table is not mapped to the primary key of the main
table (as you suggested), but to the classID integer which acts as a
foreign key.
    A query for a specific class would be solved with a query like:
    select * from classValues, classMapping where
    classValues.JDOCLASSX=classmapping.IDX and
    classmapping.CLASSNAMEX='de.company.whatever'
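A sketch of the proposed mapping table (table and column names taken from the query above; the DDL itself is hypothetical):
-- one row per persistent class; IDX is the small integer class identifier
CREATE TABLE classMapping (
    IDX        INTEGER PRIMARY KEY,
    CLASSNAMEX VARCHAR(255) NOT NULL UNIQUE
);
-- classValues.JDOCLASSX then stores the integer key instead of the class name,
-- and an index turns the class filter into a cheap integer comparison:
CREATE INDEX classValues_class_idx ON classValues (JDOCLASSX);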
This table should be managed by Kodo, of course!
Imagine a table with 300,000 rows containing only 3 different derived
classes. You would have an extra table with 4 rows (base class + 3 derived types).
Searching for the classID is done in that 4-row table, while searching the
actual class instances would then be done over an indexed integer classID
field. This is much faster than having the database do 300,000 string
comparisons (even when indexed).
(By the way, it would save a lot of memory as well, even on classes which
are not derived.)
If this technique were done by Kodo transparently, maybe turned on with an
extra option ... that would be great, since you wouldn't need to take care
of different "subclass-indicator-values", could carry on as always, and have
far better performance ...
Stephen Kim wrote:
You could push off fields to separate tables (as long as the pk column
is the same); however, I doubt that would add much performance benefit
in this case, since we'd simply add a join (e.g. select data.name,
info.jdoclassx, info.jdoidx where data.jdoidx = info.jdoidx and
info.jdoclassx = 'foo'). One could turn off the default fetch group for
fields stored in data, but now you're adding a second select to load one
"row" of data.
However, we DO provide an integer subclass provider which can speed
these sorts of queries up a lot if you need to constrain your queries by
class, esp. with indexing, at the expense of simple legibility:
http://solarmetric.com/Software/Documentation/2.5.3/docs/ref_guide_meta_class.html#meta-class-subclass-provider
    Steve Kim
    [email protected]
    SolarMetric Inc.
    http://www.solarmetric.com
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • 2.6.11.7 performance baseline

    Looks like arch has found just the right combination of patches for the 2.6.11.7 kernel.  As far as performance goes, the arch kernel stomps all over suse's 9.3 kernel.  I don't have any benchmarks, but the difference is so obvious, I won't need any to be convinced.
I modified the PKGBUILD to use 2.6.11.10, and the patches still apply with some offset and fuzz.
    It would be nice to keep maintaining the 2.6.11 kernel and patchset as a performance baseline, since performance does not always improve with new releases of the kernel; sometimes it gets worse.

Gullible Jones wrote: via82xx driver for the stock kernel is very annoying.
Annoying - good description.
If you wanted, you could get the broken-out mm patches and apply the via82xx driver from mm onto a CK kernel; that's what I did when the i915 graphics driver was only in mm.
http://kernel.org/pub/linux/kernel/peop … 2-rc4-mm2/
The broken-out patches are there.

  • ONS 15305 VC-3 performance monitoring

    We're using ONS 15305 with Cisco Edgecraft 1.2. Inside this ONS we have STM-1 module and T3 module. A VC-3 inside STM-1 is mapped to a T3 port.
    What do "far end" and "near end" performance statistics in VC-3 under PDH T3 port mean? Are these statistics coming from actual T3 port, or from VC-3 inside SDH to which this port is mapped?
    Is it possible to see statistics for actual VC-3 inside STM-1?

When monitoring near-end performance for a VC-3 circuit, the error monitoring is local to the 454 node in which the VC-3 card resides, and you have to check towards the VC-3 patch panel to the customer's equipment for problems. Far-end performance monitoring means the monitoring is being done from the far-end 454 node, through the cross-connect cards and through the fiber optic spans.

  • InfoCube Performance.

    Hi Experts,
What are the parameters to measure the performance of an InfoCube, and how is InfoCube performance measured?

Hi,
We prepared a document in our company on performance, so from that I am posting some points which may be helpful to you pertaining to InfoCube performance.
Have a look at the InfoCube design; change the cube design if any dimension has more than 10% of the fact table data.
1) Our main goal should be keeping the DIM table records as few as possible compared to the fact table data. SAP recommends the ratio stay below 10%.
-     For example, if your fact table has 1,000 records then your DIM tables should have fewer than 100 records.
-     The InfoCube performs best when this data ratio is 0.01%.
-     From SAP_INFOCUBE_DESIGNS you can check this ratio (see the sketch after this list).
-     Create dimensions as small as possible.
-     Use navigational attributes very carefully; unnecessary navigational attributes slow down queries. If possible, use the InfoObject as a characteristic.
-     Try to put related characteristics into one dimension.
-     Ideally one should check the relationship between two characteristics when designing the dimension. Don't put characteristics with an M:N relationship into a single dimension.
-     When creating a dimension, check the relationship between the participating characteristics and add the most atomic characteristic as the first characteristic in the dimension.
-     The sequence of added dimensions should be: most distinctive -> less distinctive -> least distinctive.
-     If an InfoObject has almost as many distinct values as there are entries in the fact table, then the dimension this InfoObject belongs to should be defined as a line item dimension with the High Cardinality checkbox.
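If you want to check the ratio directly on the database, here is a sketch (cube and table names are illustrative; by the BW naming convention a custom cube ZSALES has fact table /BIC/FZSALES and dimension tables /BIC/DZSALES1, /BIC/DZSALES2, ...):
-- percentage of dimension rows relative to fact rows (Oracle syntax)
SELECT (SELECT COUNT(*) FROM "/BIC/DZSALES1") * 100.0 /
       (SELECT COUNT(*) FROM "/BIC/FZSALES") AS dim_pct_of_fact
FROM dual;
-- SAP_INFOCUBE_DESIGNS reports this same ratio per dimension without any SQL.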
Compulsorily add index and DB statistics generation steps for each and every InfoCube in the process chain.
2) Delete entries not used in the dimensions of an InfoCube.
3) Replace InfoSets with a MultiProvider.
4) Calculate %DB Time and the aggregate ratio to check for possible aggregates to be created.
-     From ST03N you can check %DB Time, and Aggregate Ratio = Data selected / Data transferred.
-     From RSRT -> Execute + Debug -> Display Statistics Data you can check these parameters.
-     SAP recommends that if your %DB Time is more than 30% and the aggregate ratio is more than 10, you should go for an aggregate.
-     From RSRT -> Execute + Debug -> Display Aggregates you can find possible aggregates to be built, with all the conditions (filter, all value etc.).
5) Partitioning:
   Physical (DB) partitioning.
-     Check whether your query runs with 0CALMONTH or 0FISCPER restrictions.
-     If yes, and if your cube has a huge amount of data, and the distinct values of 0CALMONTH/0FISCPER are many or huge historical data is present in your cube, then go for physical partitioning.
-     Always compress the cube before partitioning.
   Logical partitioning.
-     Or you can go with logical partitioning.
-     Create different cubes based on calendar year or fiscal year.
-     Combine them in a MultiProvider.
   Choosing between these two depends upon a lot of parameters.
   My personal suggestion:
If your report is based on historical data, or say you are comparing the last 4 years' data with the current year's data, then go for logical partitioning.
Create 5 cubes based on the different years and use a MultiProvider on them. Your main query will be distributed into 5 different queries, and all those 5 queries will run in parallel.
And if your query runs on 2 years or less of data, and basically month or period based restrictions are used, then go for physical partitioning.
Oracle, DB2 (latest), and SQL Server from 2005 onward support range partitioning; in SAP BW itself we can't partition at the object level on other InfoObjects, but at the database level, with the support of a DBA, you can partition the cube based on another InfoObject.
Hope it helps.
Regards,
AL
