iTunes performance problems: CPU specific?

A question for anyone who is still experiencing performance and playback problems with version 7.0.2. I discussed the situation with a few friends and found that some of them still have all these performance and playback-quality problems while others don't. So I had the following idea: could it be that everyone who runs iTunes for Windows on machines with AMD CPUs has these problems, while those who run it on Intel CPUs don't?
Please give some feedback to me and, most importantly, to Apple.
If so: Apple, please fix it soon.

I'm using a Sony desktop PC with an Intel processor, and have had numerous problems with the performance of the latest version(s) of iTunes (7.0 and 7.0.2).

Similar Messages

  • Performance Problems - CPU

    Hi all,
    I'm having some performance problems, so I generated an AWR report covering one day and saw the following:
    Top 5 Timed Events                                       Avg %Total
    ~~~~~~~~~~~~~~~~~~                                      wait   Call
    Event                        Waits   Time (s)   (ms)    Time Wait Class
    CPU time                               50,318           41.7
    db file sequential read  6,688,472     32,711      5    27.1 User I/O
    Backup: sbtwrite2        1,068,309      7,903      7     6.6 Administra
    db file scattered read   1,012,065      6,999      7     5.8 User I/O
    PX Deq Credit: send blkd   231,401      4,989     22     4.1 Other
    Operating System Statistics DB/Inst: CAPDB14P/capdb14p1 Snaps: 15710-15778
    Statistic Total
    AVG_BUSY_TIME 3,221,704
    AVG_IDLE_TIME 4,923,831
    AVG_IOWAIT_TIME 2,302,776
    AVG_SYS_TIME 537,429
    AVG_USER_TIME 2,682,900
    BUSY_TIME 6,446,121
    IDLE_TIME 9,850,381
    IOWAIT_TIME 4,608,322
    SYS_TIME 1,077,598
    USER_TIME 5,368,523
    LOAD 0
    OS_CPU_WAIT_TIME 1,999,898,469,700
    RSRC_MGR_CPU_WAIT_TIME 0
    VM_IN_BYTES 12,201,893,888
    VM_OUT_BYTES 476,655,616
    PHYSICAL_MEMORY_BYTES 8,568,512,512
    NUM_CPUS 2
    NUM_CPU_SOCKETS 2
    I think we are having CPU problems here!
    All my memory caches look good (99% hit).
    Does anybody agree with me?
    Tks,
    Paulo
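    A quick arithmetic cross-check on the OS statistics above: the host's busy fraction is BUSY_TIME / (BUSY_TIME + IDLE_TIME) = 6,446,121 / (6,446,121 + 9,850,381) ≈ 39.6%, so the host itself was only about 40% busy over the snapshot window; the 41.7% in the Top 5 list is CPU time as a share of total database call time, which is a different ratio.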

    I have problems with some queries that show another wait event related to RAC.
    "gc cs multi block request" is taking a lot of time on some queries. These queries run very fast on another database that isn't a RAC database.
    Example:
    1. The tables have the same number of rows!
    2. Both tables and indexes are analyzed using the same tool (DBMS_STATS).
    ####RAC DATABASE####
    SELECT 1 from dual
    WHERE NOT EXISTS (SELECT 1
    FROM mensalidade a
    WHERE data_vencimento >= CHAR_TO_DATE('20070201'));
    ----Explain
    SELECT STATEMENT, GOAL = ALL_ROWS               4     1     
    FILTER                         
    FAST DUAL               2     1     
    PX COORDINATOR FORCED SERIAL                         
    PX SEND QC (RANDOM)     SYS     :TQ10000     2     1     7
    PX BLOCK ITERATOR               2     1     7
    INDEX FAST FULL SCAN     BRCAPDB2     IMENSALIDADE1     2     1     7
    ----It takes more than 500 seconds to run
    ####STANDALONE DATABASE####
    SELECT 1 from dual
    WHERE NOT EXISTS (SELECT 1
    FROM mensalidade a
    WHERE data_vencimento >= CHAR_TO_DATE('20070201'));
    ----Explain
    SELECT STATEMENT, GOAL = ALL_ROWS               4     1     
    FILTER                         
    FAST DUAL               2     1     
    PX COORDINATOR FORCED SERIAL                         
    PX SEND QC (RANDOM)     SYS     :TQ10000     2     2     16
    PX BLOCK ITERATOR               2     2     16
    TABLE ACCESS FULL     BRCAPDB2     MENSALIDADE     2     2     16
    ----It takes 0.1 seconds to run
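    One way to see how much instance time that wait actually accounts for (a minimal sketch against the standard v$system_event view; the LIKE pattern just matches the event quoted above):
    SELECT event, total_waits, time_waited
    FROM   v$system_event
    WHERE  event LIKE 'gc%multi block request'
    ORDER  BY time_waited DESC;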

  • Performance problem while CPU is 80% idle?

    Hi,
    My end users are complaining about performance problems during execution of a batch process.
    As you can see, there are 1,745 statements executing each second.
    The AWR report shows 98.1% of DB time spent on CPU.
    The AWR report also shows that the host CPU is 79.9% idle.
    The second wait event shows only 212 seconds of waits on db file sequential read.
    Yet 4 minutes in a 1-hour period does not seem like an issue.
    Please advise
    DB Name         DB Id    Instance     Inst Num Startup Time    Release     RAC
    QERP          xxx        erp                 1 21-Jan-13 15:40 11.2.0.2.0   NO
    Host Name        Platform                         CPUs Cores Sockets Memory(GB)
    erptst           HP-UX IA (64-bit)                  16    16       4     127.83
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:     40066 22-Jan-13 20:00:52       207       9.6
      End Snap:     40067 22-Jan-13 21:00:05       210       9.6
       Elapsed:               59.21 (mins)
       DB Time:              189.24 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     8,800M     8,800M  Std Block Size:         8K
               Shared Pool Size:     1,056M     1,056M      Log Buffer:    49,344K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                3.2                0.1        0.00       0.05
           DB CPU(s):                3.1                0.1        0.00       0.05
           Redo size:          604,285.1           27,271.3
       Logical reads:          364,792.3           16,463.0
       Block changes:            3,629.5              163.8
      Physical reads:               21.5                1.0
    Physical writes:               95.3                4.3
          User calls:               68.7                3.1
              Parses:              212.9                9.6
         Hard parses:                0.3                0.0
    W/A MB processed:                1.2                0.1
              Logons:                0.3                0.0
            Executes:            1,745.2               78.8
           Rollbacks:                1.2                0.1
        Transactions:               22.2
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:  100.00       Redo NoWait %:  100.00
                Buffer  Hit   %:   99.99    In-memory Sort %:  100.00
                Library Hit   %:   99.95        Soft Parse %:   99.85
             Execute to Parse %:   87.80         Latch Hit %:   99.99
    Parse CPU to Parse Elapsd %:   74.76     % Non-Parse CPU:   99.89
    Shared Pool Statistics        Begin    End
                 Memory Usage %:   75.37   76.85
        % SQL with executions>1:   95.31   85.98
      % Memory for SQL w/exec>1:   90.33   82.84
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    DB CPU                                           11,144          98.1
    db file sequential read              52,714         214      4    1.9 User I/O
    SQL*Net break/reset to client        29,050           6      0     .1 Applicatio
    log file sync                         2,536           6      2     .0 Commit
    buffer busy waits                     4,338           2      1     .0 Concurrenc
    Host CPU (CPUs:   16 Cores:   16 Sockets:    4)
    ~~~~~~~~         Load Average
                   Begin       End     %User   %System      %WIO     %Idle
                    0.34       0.33       19.7        0.4        1.8      79.9
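    As a cross-check of those numbers: a DB CPU of 3.1 seconds per second on a 16-CPU host is 3.1 / 16 ≈ 19.4% utilization, which matches the 19.7 %User figure in the Host CPU line; in other words, the instance is keeping roughly three of the sixteen CPUs busy.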

    Nikolay Savvinov wrote:
    if the users are complaining about performance of the batch process, then that's what you should be looking at, not the entire system.
    I find it strange to see "end users" and "the batch process" in the same sentence (as it was in the first post). "End users" gives me the feeling of a significant number of concurrent sessions with people waiting for results in real time at the far end, while "batch process" carries the image of a small number of large-scale processes running overnight to prepare the data for the following morning.
    I mention this because my first view of the AWR output was: you've got 16 CPUs, only three in use, virtually no users, and very little work being done, so how can the users complain? (One answer, of course, is that the other 13 CPUs could be locked out of use as far as Oracle is concerned.) On the second read I decided that the "users" had gone home, and the complaint was simply that the batch process wasn't completing in time.
    In this case I think "the entire system" IS "the batch process"
    Determine which stored procedures and/or SQL statements took longer than usual and then find out why. Most likely you'll be able to find
    everything you need in AWR views (DBA_HIST_SQL%) and ASH archive (DBA_HIST_ACTIVE_SESS_HISTORY).
    If the batch process has changed dramatically and recently, then a simple first step might be to look at the current AWR report, find the few most time-consuming SQL statements, and use the awrsqrpt.sql script to find their history of execution plans.
    But I'd also just look at the expensive SQL - bearing in mind, particularly, that there are very few user calls per second yet many hundreds of executions per second: it strikes me that there could be quite a lot of PL/SQL going on, doing something a little bit expensive many times, or some PL/SQL function calling some SQL that used to be called rarely from an SQL statement but is now (due, perhaps, to a change in plan) being called much more frequently - so check SQL Ordered by Executions.
    Regards
    Jonathan Lewis
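    As a minimal sketch of the kind of ASH query that advice points to (the snapshot window is taken from the report above; the view and its columns are the standard 11g ones):
    SELECT sql_id, COUNT(*) AS samples
    FROM   dba_hist_active_sess_history
    WHERE  sample_time BETWEEN TIMESTAMP '2013-01-22 20:00:00'
                           AND TIMESTAMP '2013-01-22 21:00:00'
    AND    session_type = 'FOREGROUND'
    GROUP  BY sql_id
    ORDER  BY samples DESC;
    The sql_ids at the top of that list can then be fed to awrsqrpt.sql to get their plan history, as suggested.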

  • Performance Problem : High CPU on a simple Insert

    I have a simple INSERT ( INSERT into TABLE values ( :1, :2 ) etc. )
    But this statement is exhibiting the following characteristics:
    Optimizer Cost : 1
    Buffer Gets : 26 million !!!
    Rows : 750,000
    Gets/Execution : 35,000
    This is causing a performance problem on our production system.
    Any ideas will be appreciated.
    thanks

    897208 wrote:
    I have a simple INSERT ( INSERT into TABLE values ( :1, :2 ) etc. )
    But this statement is exhibiting the following characteristics:
    Optimizer Cost : 1
    Buffer Gets : 26 million !!!
    Rows : 750,000
    Gets/Execution : 35,000
    This is causing a performance problem on our production system.
    Any ideas will be appreciated.
    thanks
    See the thread: HOW TO: Post a SQL statement tuning request - template posting
    Post the EXPLAIN PLAN and the results from SQL_TRACE for this statement.
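    For reference, a minimal way to produce both artifacts from SQL*Plus (the INSERT below is a hypothetical stand-in for the poster's actual statement):
    EXPLAIN PLAN FOR
      INSERT INTO some_table VALUES (:1, :2);  -- hypothetical table name
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
    -- run the INSERT workload here, then:
    ALTER SESSION SET EVENTS '10046 trace name context off';
    The resulting trace file (in user_dump_dest) can be formatted with tkprof.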

  • Performance problems on non-Intel CPUs

    Hello,
    Recently I have been looking into tracking down a performance problem with Adobe Flash on a couple of different Linux platforms.  I understand that there is limited hardware acceleration, and that is actually important for my question.  On multiple netbooks I tested similar Flash-based content and found that content my EeePC 701 with a 600MHz Intel Celeron could play, a netbook with a VIA processor running at 1GHz couldn't even come close to playing.  Running sysprof, I see that 80+% of the time is spent within the libflashplayer library.
    To my point, which you may or may not want to answer: is it possible that Adobe uses Intel's compiler to produce their binaries?  I only ask because it has been documented that Intel's compiler explicitly disables certain optimizations for non-GenuineIntel processors: http://www.theinquirer.net/inquirer/news/1567108/intel-compiler-cripples-code-amd-via-chips
    I have also examined the code paths and see that libflashplayer.so does check /proc/cpuinfo for the vendor_id string in its initialization logic.
    I don't want to start conspiracy theories, but if any of this is valid at all, could it please be fixed? Thanks.

    http://www.apple.com/support/bootcamp/
    Only Intel when talking Macs, no PowerMacs or G4 etc.

  • Performance problems when running PostgreSQL on ZFS and tomcat

    Hi all,
    I need help with some analysis and problem solution related to the below case.
    The long story:
    I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 severs with 14 GB ram and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
    The configuration between the two is pretty much the same and the problem therefore seems generic for the setup.
    Within a non-global zone I’m running a tomcat application (an institutional repository) connecting via localhost to a Postgresql database (the OS provided version). The processor load is typically not very high as seen below:
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        49 postgres  749M  669M   4,7%   7:14:38  13%
         1 jboss    2519M 2536M    18%  50:36:40 5,9%
    We are not 100% sure why we run into performance problems, but when it happens we experience that the application slows down and swaps out (as shown below). When it settles, everything seems to return to normal. When the problem is acute the application is totally unresponsive.
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        1 jboss    3104M  913M   6,4%   0:22:48 0,1%
    #sar -g 5 5
    SunOS vbn-back 5.10 Generic_142901-03 i86pc    05/28/2010
    07:49:08  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
    07:49:13    27.67   316.01   318.58 14854.15     0.00
    07:49:18    61.58   664.75   668.51 43377.43     0.00
    07:49:23   122.02  1214.09  1222.22 32618.65     0.00
    07:49:28   121.19  1052.28  1065.94  5000.59     0.00
    07:49:33    54.37   572.82   583.33  2553.77     0.00
    Average     77.34   763.71   771.43 19680.67     0.00
    Making more memory available to tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
    My suspicion currently focuses on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
    An unofficial performance evaluation on the database with “vacuum analyze” took 19 minutes on the server and only 1 minute on a desktop pc. This is horrific when taking the hardware into consideration.
    The short story:
    I’m trying different steps but running out of ideas. We’ve read that the database block size and the file system block size should match: PostgreSQL uses 8 KB while ZFS defaults to 128 KB. I didn’t find much information on the matter, so if anyone can help, please recommend how to make this change…
    Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
    Any help appreciated and I will try to provide additional information on request if needed…
    Thanks in advance,
    Kasper

    raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
    Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
    You can change the record size with "zfs set recordsize=8k <dataset>".
    It will only take effect for newly written data, not existing data.
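    For completeness, the PostgreSQL side of that block-size comparison can be checked from psql using standard server parameters (8192 bytes is the stock build default):
    SHOW block_size;  -- database block size in bytes
    SHOW fsync;       -- the setting the poster experimented with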

  • LR3 "Extra Processing in Develop" Performance Problem

    I have been investigating a specific LR3 performance problem.  It may explain a small subset of the problems people have reported in the "Why is LR3 So Slow?" thread.   I'm starting this thread to focus on this particular problem.  I hope others will confirm/refute/refine my findings.
    The Problem
    In Develop, when I make an adjustment, normally the following happens: The CPU usage (as shown in Activity Monitor's bar graph) jumps to between 50 and 75% for all four cores, the updated image appears, and the CPU usage settles back down.  This all happens in less than half a second.  Note: this is with the image at the Fit size.  However, sometimes I instead get the following after an adjustment: the CPU usage jumps to 50 to 75% for all four cores and the updated image appears as usual, however, instead of settling back down, the CPU usage jumps up to 90 to 100% for all cores and stays there for 3 to 5 seconds before settling down. Thus it appears that LR is doing some kind of "extra processing" since a lot of computation is happening AFTER the updated image has already appeared.  I will refer to this problem as "EP".  Obviously, when you are getting EP, editing in Develop becomes very balky.
    Dependency on ratio between image size and displayed size
    It appears that EP only happens when the displayed size of the image (in Fit zoom level and perhaps also Fill zoom level) is above a certain percentage of the actual image size (as currently cropped).  Evidence: When editing full 21MP 5D2 images, I don't experience EP.  If I crop the 5D2 image fairly significantly, then I can get EP.  When editing 10MP images from my Canon S90, I usually get EP for landscape orientation pictures but not for portrait orientation pictures (since in Fit mode, landscape images display at a higher zoom level than portrait images).  If I am getting EP, I can eliminate it by sufficiently reducing the size that LR is displaying the image by resizing the LR window smaller, opening additional panels (I normally edit with only the right panel open), displaying the toolbar, etc.  It appears that EP is enabled when the displayed image is about 50% or larger w.r.t. the actual image (as currently cropped).  For example, EP becomes enabled when a 3648 pixel wide S90 image is displayed at least 17 and 7/8 inches wide on my 100 ppi monitor (i.e. about 1787 pixels).
    Dependency on HOW an adjustment is invoked
    Even when the displayed image size is large enough w.r.t. the actual image size to enable EP, whether you get it on a given adjustment depends on how you invoke it:
    - If you CLICK (i.e. press the mouse button down and quickly release it) on the track of one of the sliders (a technique I use often to make big jumps), EP will happen.
    - If you press the mouse button down on a slider handle, drag it to a new position, and quickly release the mouse button, EP will happen
    - If you press the mouse button down on a slider handle, drag it to a new position, but continue to hold the mouse button down until the displayed image is updated, EP does NOT happen (either before or after you then release the mouse button).
    - If you highlight the numeric field at the end of a slider and use the arrow keys (possibly along with Shift) to increment or decrement the value, EP does NOT happen.
    - EP will happen if you resize the LR window such that the displayed image size is above the threshold.  (In fact, I determined the threshold by making a series of window width increases until I saw EP indicated by the CPU bar graphs.)
    - EP can happen with local adjustment brush applications, but as with the sliders, it depends on HOW you perform the brush stroke.  Single click and drags with immediate mouse release cause EP, drags with delayed mouse button release don't.
    - Clicking an earlier History state causes EP
    - More exploration could be done.  For example, I haven't looked at Graduated Filter and Spot Removal adjustments.
    My theory of what's happening
    With LR2, my understanding is that in Develop mode when the displayed image is below 1:1 zoom level, after an adjustment is invoked, LR calculates the new version of the image to display using a fast, simplified algorithm that doesn't include the more computationally intensive adjustments like Sharpening and Noise Reduction (and perhaps works on a lower rez version of the image with multiple sensels binned together?).  It appears that in conditions described above, LR3 calculates the initial, fast image update and then goes on to do the full update of the image, including the computationally intensive adjustments.  Evidence:   setting Sharpening Amount and Luminance and Color Noise Reduction to zero eliminates EP (or reduces the amount of time it takes to be barely noticeable).  I'm not sure whether the displayed image is updated with the results of the extra processing.  I think the answer is Yes since when I tried an adjustment of changing Sharpening Amount from 0 to 90, the initial update of the displayed image showed sharpening but after the EP, the displayed image was updated again to show somewhat different sharpening. Perhaps Adobe felt that it would be useful to see the more accurate version of the image when it is at or above 50% zoom.  Maybe the UI is supposed to cancel the EP if you start to make another adjustment before it has completed but the canceling doesn't happen unless you invoke the adjustment in one of the ways described above that doesn't cause EP.  
    Misc
    - EP doesn't seem to happen for Process 2003
    - As others have mentioned, I'm surprised that LR (both version 2 and 3) in 64bit mode doesn't use more available RAM.  I don't think I've seen LR go above 4GB of virtual memory or above 3GB "Real Memory" (as reported by Activity Monitor) even though I have several GB free.
    - It should be obvious from the above that if you experience EP, there are workarounds: reduce the size of the displayed image (e.g. by window resizing), invoke adjustments in ways that don't cause EP, turn off Sharpening and Noise Reduction until the end of editing an image.
    System specs
    First generation Intel Mac Pro with two dual-core CPUs at 2.66 Ghz
    OS 10.5.8
    21GB RAM
    ACR cache on volume striped across 3 internal SATA drives
    LR catalog and RAWs on an internal SATA drive
    30" HP LP3065 monitor (2560 pixels wide)
    NVIDIA GeForce 7300 GT

    I'm impressed by your thorough analysis.
    Clearly, the programmers haven't figured out the best way to do intelligent caching and/or parallel rendering at a reduced size yet.
    In my experience reducing the settings in the "Details" panel doesn't help.
    What really bugs me is that the lag (or increasing lack of interactiveness) depends on the number of adjustments one has made.
    This shouldn't be the case. If a cache is produced, then every further adjustment should only cost the effort for that latest adjustment, not include the adjustments before it. There are two things that stand in the way of straightforward edit application:
    1. If you work below 1:1 preview, adjustments have to be shown in a reduced form. If you don't have a way to faithfully mimic the adjustments on the reduced size, you have to do them on the original image and then scale down. That's expensive.
    2. To the best of my knowledge LR uses a fixed image pipeline. Hence, independently of the order in which you apply edits, they are always performed in the same fixed order. Say all spot removal operations are done first. If you have a lot of adjustment brush edits and then add a spot removal operation, it means that all the adjustment brush operations have to be replayed each time you make a little adjustment to your spot removal edit.
    I believe what you are seeing is mostly related to 1.
    I also believe that the way LR currently handles a moderate number of edits is unacceptable and incompatible with the notion that it is usable in a commercial setting for more than trivial edits. I suspect there is something else going on: if everyone saw the deterioration in performance after a number of edits that I see, I don't think LR would be as accepted as it is. Having said that, I've read that the problem of repeated applications of the adjustment brush slowing LR down has existed for a long time. I truly hope this doesn't mean we'll have to live with it for the foreseeable future.
    There are two ways I can see to address point 2:
    - Combine the effects of a set of operations into one bitmap operation. Instead of replaying all adjustment brush strokes one after the other (speed-wise it feels like this is happening), compute a single bitmap operation that combines all their effects.
    - Give up the idea that there is an image pipeline with a fixed execution order.
    Some might argue that the second point is at odds with the whole idea of parametric editing, but I dispute that. Either edit operations are commutative, in which case the order is immaterial, or they are not. If they are not, the user applies the edits in a way he/she sees fit and will thus compensate for any effect of a changed ordering.
    N.B., currently the doctrine of "fixed ordering of edit applications" results in the effect that even if you convert an image into B&W all your adjustment brush edits that applied colour tints will still show through. Reasoning: The user should be able to locally tint a B&W image. I agree with the latter but this could be achieved by only applying those tinting brush strokes that were created after the B&W conversion. All the ones that happened before should only be used to obtain the correct luminance values for the B&W conversion but obviously they shouldn't cause tinted areas.
    The above example demonstrates to me that users naturally expect operations to occur in the order they were introduced, not in a fixed predefined order. If that principle were followed, I see no reason why the speed of a single edit should depend on the number of edits made to the image before.
    I hope the programmers can (and management wants to) address the performance issues. While I find LR usable for pretty modest edits, in no way does the performance on my system approach what I would expect from an industrial-strength application.
    P.S.: Your message reminded me of the following: When I experience serious lag with LR showing the strokes I make with an adjustment brush, it helps to pause a moment after the first click before one starts moving. This allows LR to catch up and then one can see the effect of the application pretty much interactively. Otherwise, there is terrible lag and the feedback where you have brushed an effect comes way too late.

  • iTunes 100 percent CPU

    I am running iTunes and my CPU is running at 100%. Also, when trying to go onto the iTunes Store it just says "accessing iTunes Store" and hangs there. There was an update available, so I tried to update, which resulted in an error during installation saying that MSVCR80.dll was missing. I then uninstalled iTunes following all the steps, removing:
    iTunes
    Apple Software Update
    Apple Mobile Device Support (if this won't uninstall, press on)
    Bonjour
    Apple Application Support
    I downloaded the latest version of iTunes and installed it. I still have the same problem. I then tried using the command prompt and netsh winsock reset, then restarting the computer, but this hasn't helped either. I then tried Autoruns from the Microsoft website to see if any other programs were using Winsock, but the only one showing is Bonjour.
    Please can anyone help? I'm running out of ideas
    I am running iTunes on Windows XP SP3. I've only just started to get this problem and I haven't installed anything new for a while (not counting iTunes, which I only updated to try and fix the problem).

    Close iTunes.
    Go to the Command Prompt:
    (Win 7/Vista) - START/ALL PROGRAMS/ACCESSORIES, right mouse click "Command Prompt", choose "Run as Administrator".
    (Win XP SP2 & above) - START/ALL PROGRAMS/ACCESSORIES/Command Prompt
    In the "Command Prompt" screen, type in
    netsh winsock reset
    Hit "ENTER" key
    Restart your computer.
    If you get a prompt after the restart asking Windows to remap LSP, just click NO.
    Now launch iTunes and see if the CPU problem is fixed.
    If you are still having this type of problem after trying the winsock reset, refer to this article to identify which software on your system is inserting LSPs:
    iTunes 10.5 for Windows: May see performance issues and blank iTunes Store
    http://support.apple.com/kb/TS4123?viewlocale=en_US

  • 3D performance problems after upgrading memory

    I recently purchased an additional 2GB of memory to try to extend the life of my aging computer.  I installed the memory yesterday and Windows seems to recognize it (now reporting 3.3GB), but when I dropped into WoW (pretty much the only game I have) the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best), and CPU utilization went to about 80% on both cores (with ~20% kernel usage).  Basically, WoW was being software 3D rendered!
    I went through the usual reinstall drivers, reboot, etc... and couldn't find a fix.  I powered down, pulled out 2 of the memory sticks, booted up, and dropped into WoW - it ran at the full 60FPS and CPU utilization was very low (i.e. back to GPU Hardware 3D rendering).  I powered down again, swapped the 2 sticks for the other 2 sticks, booted up, and dropped into WoW - again it ran 100% fine.  So I powered down, put all four sticks in, booted back up, and when I dropped into WoW it was running in the software 3D rendering mode (20FPS at best and High CPU/Kernel usage).
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    All info in signature is up to date.
    Thanks in advance for any help!

    Quote
    Well his last post was a little over 6 hours ago so he was up pretty late.
    Looks like nothing one does in here goes completely unnoticed.   
    Anyway, I am done sleeping now.
    Quote
    his 2 Pfennig's worth.  I know, I know it's Euro's now.
    Yeah, and what used to be "Pfennige" is now also called "Cents" and here are mine:
    Quote
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    PAE, or Physical Address Extension, will not do anything, as Microsoft has castrated this feature to such an extent that it has nothing to do with memory addressing anymore when it comes to Windows XP:
    http://en.wikipedia.org/wiki/Physical_Address_Extension#Microsoft_Windows
    Quote
    Windows XP Service Pack 2 and later, by default, on processors with the no-execute (NX) or execute-disable (XD) feature, runs in PAE mode in order to allow NX. The NX (or XD) bit resides in bit 63 of the page table entry and, without PAE, page table entries only have 32 bits; therefore PAE mode is required if the NX feature is to be exploited. However, desktop versions of Windows (Windows XP, Windows Vista) limit physical address space to 4 GiB for driver compatibility reasons.
    The feature is already automatically enabled.  But since its original function (address extension) effectively no longer exists in the desktop versions of Windows XP, it won't really do anything you would ever notice.
    About the /MAXMEM switch:  it simply caps the amount of physical memory Windows will use; the switch that affects per-process limits is /3GB.  In 32-bit Windows, every process is limited to 2GB of user address space, and the point of the /3GB switch is to allow certain applications (or their run-time processes) to occupy up to 3GB.  However, only applications that have been programmed (or compiled) accordingly can take advantage of it: a special flag (large address aware) has to be set.  Otherwise, applications remain restricted to 2GB even though the switch has been set.  Most 32-bit applications come without the "large address aware" flag, and that is why setting the switch usually changes nothing.
    In any case, it is unlikely that /PAE (even if it were not castrated) or /MAXMEM would have an impact on your actual issue, because I doubt it has much to do with either memory addressing or the memory limit of an individual Windows process.
    Quote
    the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usage).
    There are a couple of hardware based explanations to consider here.  Let's start with the most obvious one:
    1. 975X Memory Controller
    The main reason the system chooses to automatically set the memory speed to DDR2-667 even though DDR2-800 modules are installed is that, by design, the memory controller of the Intel 975X chipset does not natively support DDR2-800 modules; see
    >>Intel® 975X Express Chipset Datasheet - For the Intel® 82975X Memory Controller Hub (MCH)<< [Page 20]
    This means that, from the point of view of the memory controller, operating the memory @DDR2-800 actually means overclocking it (with all potential side effects).
    Basically, if your initial problem disappears as soon as you reduce the memory speed to DDR2-667, the design limitation of the memory controller may explain your findings.
    2. Different memory modules
    If I read your signature correctly, you are actually mixing two different kits/models of RAM (CM2X1024-6400C4DHX and CM2X1024-6400C4).  This can work, of course, but in practice it does not necessarily do so under all circumstances.
    This list (-> http://ramlist.i4memory.com/ddr2/) indicates that there are at least 14 different module types/revisions of Corsair DDR2-800 / CL4 modules that use a wide range of different memory chips (Elpida, ProMOS, Micron, Infineon, Powerchip, Qimonda, Samsung, etc.).  Even though the superficial specifications of these chips appear pretty similar (DDR2-800 / CL5 / CL4), this does not necessarily mean that the modules will respond to the same operating conditions in the same way.  There may be small differences in sub-timings/sub-latencies and/or the general responsiveness of the ICs, which may affect the operating behaviour of the memory controller (which, by the way, also includes the PCI-Express interface your video card is hooked up to).
    And again: if running the system @DDR2-667 solves your issue, the possible explanation is that higher clock speeds may amplify (or trigger) potential performance problems that could have to do with the use of non-identical memory modules.
    Furthermore, it is also possible that the memory controller's design limitations and the potential compatibility problems attributable to mixing different module types may reinforce each other in terms of reduced system performance.
    3. The BIOS may have an impact as well
    There has been known issue with the use of certain video cards in conjunction with 4GB of system memory on this mainboard:
    https://forum-en.msi.com/index.php?topic=107301.0
    https://forum-en.msi.com/index.php?topic=105955.0
    https://forum-en.msi.com/index.php?topic=99818.msg798951#msg798951
    What came out as graphics/display corruption in earlier BIOS releases may come out as reduced system performance when using the latest BIOS release.  Of course, this is hard to prove, but I thought I'd mention it anyway.  May I ask how much video memory your card has onboard?
    Fortunately, there is a BIOS version you could consider trying in this matter.  It is not only the last BIOS release that can be used to avoid the corruption issue, but it is (in my opinion) the best BIOS version ever released for the 975X Platinum PUE mainboard:  W7246IMS.716 [v7.1b6].  I have been using this mainboard for almost two years, have tested almost every BIOS release that ever came out, and I always went back to v7.1b6 as "ground zero".
    It will properly support your E6600 (so you don't have to worry about that) and as far as I remember, there are no known compatibility issues with other components.  So maybe, you want to give this a shot.
    The bottom line is that in a worst case scenario, the problem you describe could be caused by all of the above things at the same time.  You cannot really do anything about the 975X Chipset Specifications and the only way to rule out explanation #2 is to test modules that are actually identical (same model number, revision and memory chips).  A test of the 7.1b6 BIOS Release is something you should consider.  It may be the only way to test the BIOS Hypothesis.
    This post turned out to be longer than I intended, but then again, I am well-rested after a good sleep and the wake-up coffee is kicking in pretty good.

  • Performance problem

    Hi, I'm having a performance problem I cannot solve, so I'm hoping anyone here has some advice.
    Our system consists of a central database server, and many applications (on various servers) that connect to this database.
    Most of these applications are older, legacy applications.
    They use a special mechanism for concurrency that was designed, also a long time ago.
    This mechanism requires the applications to lock a specific table with a TABLOCKX before making any changes to the database (in any table).
    This is of course quite nasty as it essentially turns the database into single user mode, but we are planning to phase this mechanism out.
    However, there is too much legacy code to simply migrate everything right away, so for the time being we're stuck with it.
    Secondly, this central database must not become too big because the legacy applications cannot cope with large amounts of data.
    So, a 'cleaning' mechanism was implemented to move older data from the central database to a 'history' database.
    This cleaning mechanism was created in newer technology; C# .NET.
    It is actually a CLR stored procedure that is called from a SQL job that runs every night on the history server.
    But, even though it is new technology, the CLR proc *must* use the same lock as the legacy applications because they all depend on the correct use of this lock.
    The cleaning functionality is not a primary process, so it does not have high priority.
    This means that any other application should be able to interrupt the cleaning and get handled by SQL first.
    Therefore, we designed the cleaning so that it cleans as little data as possible per transaction, so that any other process can get the lock right after each cleaning transaction commits.
    On the other hand, this means that we do have a LOT of small transactions, and because each transaction is distributed, they are expensive and so it takes a long time for the cleaning to complete.
    Now, the problem: when we start the cleaning process, we notice that the CPU shoots up to 100%, and other applications are slowed down excessively.
    It even caused one of our customer's factories to shut down!
    So, I'm wondering why the legacy applications cannot interrupt the cleaning.
    Every cleaning transaction takes about 0.6 seconds, which may seem long for a single transaction, but it also means any other application should be able to interrupt within 0.6 seconds.
    This is obviously not the case; it seems the cleaning session 'steals' all of the CPU, or for some reason other SQL sessions are no longer handled, or at least only with serious delays.
    I was thinking about one of the following approaches to solve this problem:
    1) Add waits within the CLR procedure after each transaction. This should lower the CPU, but I have no idea whether it will allow the other applications to interrupt sooner. (And I really don't want the factory to shut down twice!)
    2) Look within SQL Server (using DMVs or something similar) to see whether I can determine why the cleaning session gets more CPU. But I don't know whether this information is available, and I have no experience with DMVs.
    So, to round up:
    - The total cleaning process may take a long time, as long as it can be interrupted ASAP.
    - The cleaning statements themselves are already as simple as possible so I don't think more query tuning is possible.
    - I don't think it's possible to tell SQL Server that specific SQL statements have low priority.
    Can anyone offer some advice on how to handle this?
    Regards,
    Stefan
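    Since Stefan mentions having no experience with DMVs, here is a minimal sketch of the kind of query that shows where the CPU is going while the cleaning runs (sys.dm_exec_requests and sys.dm_exec_sql_text are standard SQL Server 2005+ DMVs; filtering on session_id > 50 is just a rough convention to skip most system sessions):
    SELECT r.session_id,
           r.status,
           r.cpu_time,   -- ms of CPU consumed by the request
           r.wait_type,
           r.wait_time,
           t.text AS sql_text
    FROM   sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE  r.session_id > 50
    ORDER  BY r.cpu_time DESC;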

    Hi Stefan,
    Have you solved this issue after following up on Bob's suggestion? I’m writing to follow up with you on this post; please let us know how things go.
    If you have any feedback on our support, please click
    here.
    Elvis Long
    TechNet Community Support

  • Erratic performance problems in Oracle 8.0.x

    Hi all,
    We are having a performance problem that appeared somewhere between 8.0.6 and 8.1.5 when using embedded SQL and the Pro*C compiler under Linux and Solaris.
    The moment we use client libraries newer than 8.0.x, things seem to grind to a halt. We are currently using the 8.0.6 client against 8, 9, and 10g databases. Using 8.1.5 or 10g clients against Oracle 9 or 10 databases triggers the problem.
    The problem also isn't tied to any specific query. On the latest run we tried, the timings for one problem query were as follows: Oracle 10 server with Oracle 8.0.6 client, 40 seconds; Oracle 10 server (same database) with Oracle 10 client, 14 hours.
    Explain plan doesn't show anything funny with the query. On occasion, the query does get through quickly. Subsequent runs are then also quick.
    The query also runs fine in SQLPlus.
    What we have noticed is that the server process flatlines at 100% CPU usage for the entire duration. The client, on the other hand, is just sleeping, waiting for data from the server. Stopping the client in the debugger shows that it is waiting in the sqlcxt() call when opening the cursor, not actually fetching data.
    We are at our wits end as to where look next, and we can't stay on 8.0.6 client libraries for ever as this is starting to cause us other hassles now.
    Did something significant perhaps change between 8.0.x and 8.1.x that we need to cater for in our apps?
    Any help/ideas would be greatly appreciated.
    Regards,
    Gerhard

    Check Metalink:
    Client / Server / Interoperability Support Between Different Oracle Versions
    Doc ID: Note 207303.1
    It looks like client version 8.1.5 has some problems; it was never designed to support Oracle versions higher than 8.1.7.
    On the other hand, 8.0.6 was supported up to 9.2.
    I would stay with 8.0.6 if I had to use an Oracle 8 client. Client version 8.1.7 seems much better.

  • 1.1 performance problems related to system configuration?

    It seems like a lot of people are having serious performance problems with Aperture 1.1 in areas where they didn't have any (or at least not as many) problems in the previous 1.0.1 release.
    Most often these problems show up as slow behaviour of the application when switching views (especially into and out of full view), loading images into the viewer, or doing image adjustments. In most cases Aperture works normally for some time and then starts to slow down gradually, up to a point where images are no longer refreshed correctly or the whole application crashes. Most of the time simply restarting Aperture doesn't help; one has to restart the OS.
    Most of the time the problems occur in conjunction with CPU usage rates which are much higher than in 1.0.1.
    For some people even other applications seem to be affected to a point where the whole system has to be restarted to get everything working up at full speed again. Also shutdown times seem to increase dramatically after such an Aperture slowdown.
    My intention in this thread is to collect information from users who are experiencing such problems about their system configuration. At the moment it does not look like these problems are related to special configurations only, but maybe we can find a common point when we collect as much information as possible about system where Aperture 1.1 shows this behaviour.
    Before I continue with my configuration, I would like to point out that this thread is not about general speed issues with Aperture. If you're not able to work smoothly with 16MPix RAW files on a G5 system with a Radeon 9650 video card, or Aperture is generally slow on your 14" iBook where you installed it with a hack, then this is not the right thread. I fully understand if you want to complain about these general speed issues, but please refrain from doing so in this thread.
    Here I only want to collect information from people who either know that some things works considerably faster in the previous release or who notice that Aperture 1.1 really slows down after some time of use.
    Enough said, here is my information:
    - Powermac G5 Dualcore 2.0
    - 2.5 GB RAM
    - Nvidia 7800GT (flashed PC version)
    - System disk: Software RAID0 (2 WD 10000rpm 74GB Raptor drives)
    - Aperture library on a hardware RAID0 (2 Maxtor 160GB drives) connected to Highpoint RocketRAID 2320 PCIe adapter
    - Displays: 17" and 15" TFT
    I do not think that we need more information; things like external drives (apart from the ones used for the actual library), superdrive types, and connected USB devices like printers, scanners, etc. shouldn't make any difference, so no need to report those. Also, it is self-evident that Mac OS X 10.4.6 is being used.
    Of interest might be any internal cards (PCIe/PCI/PCI-X...) build into your system like my RAID adapter, Decklink cards (wasn't there a report about problems with them?), any other special video or audio cards or additional graphic cards.
    Again, please only post here if you're experiencing any of the mentioned problems and please try to keep your information as condensed as possible. This thread is about collecting data, there are already enough other threads where the specific problems (or other general speed issues) are discussed.
    Bye,
    Carsten
    BTW: Within the next week I will perform some tests which will include replacing my 7800GT with the original 6600 and removing as much extra stuff from my system as possible to see if that helps.

    Yesterday I had my first decent run in 1.1 and was pleased I had avoided a lot of the performance issues that seemed to affect others.
    After I posted, I got hit by a big slow-down in system performance. I tried to quit Aperture but couldn't; it had no tasks in its activity window. However, Activity Monitor showed Aperture as a 30-thread, 1.4GB-virtual-memory hairball soaking up 80-90% of my 4 CPUs. Given the high CPU activity, I suspected the reason was not my 2GB of RAM, although it's obviously better with more. So what caused the sudden decrease in system performance after 6 hours of relatively trouble-free editing/sorting with 1.1?
    This morning I re-created the issue. Before I go further: when I ran 1.1 for the first time I did not migrate my whole library to the new RAW algorithm (it's not called the bleeding edge for nothing). So this morning I selected one project to migrate all its RAW images to 1.1, and after the progress bar completed its work, the CPUs ramped up and the system got bogged down again.
    So Aperture is doing a background task that consumes large amounts of CPU power, shows nothing in its activity window, and takes a very long time to complete. My project had 89 RAW images migrated to the 1.1 algorithm and it took 4 minutes to complete those 'background processes' (more reconstituting of images?). I'm not sure what it's doing, but it takes a long time and gives no obvious sign that this is normal. If you leave it to complete its work, the system returns to normal. More of an issue is that the system lets you continue working while the background processes crank, compounding the heavy workload.
    Bit of a guess this, but is this what is causing people's system problems? As I said, if I left my quad alone for 4 minutes all returned to normal. It's just that nothing tells you it will ever end, so you do more and compound the slow-down.
    In the interests of research I did another project, migrating 245 8MB RAWs to the 1.1 algorithm, and it took 8 minutes. The first 5 minutes consumed 1GB of virtual memory over 20 threads at an average of 250% CPU usage for Aperture alone. The last three minutes saw the CPUs ramp higher to 350% and virtual memory increase to 1.2GB. After the 8 minutes everything returned to normal and the fans slowed down (excellent fan/noise behaviour on these quads).
    Is this what others are seeing?
    When you force quit Aperture during these system slow-downs, what effect does this have on your images? Do the uncompleted background processes restart when you go to view them again?
    If I get time I'll try to compare with my MBP.

  • iTunes 9.1 - CPU goes to 100% when playing protected AAC - clicks and gaps

    Since downloading 9.1, there are clicks and gaps when I play protected AAC songs on iTunes. Songs play fine with QuickTime player. When a protected song is played in 9.1, iTunes absorbs all available CPU.
    I have followed all remediation steps in the apple.com support site including turning off sound enhancer, checking for driver updates, etc. I have also set buffering to maximum.
    If I play a protected AAC song for more than a few seconds, it becomes impossible to pause it and all other running applications freeze. This happens even when there are no other applications running, so it is not a conflict with other applications; iTunes absorbs whatever CPU there is left to take, consistently using 60-90% of CPU with total CPU at a constant 100% (when a protected AAC song is played).
    iTunes works fine as long as the song is not protected.
    I cannot find anything on the Apple site to allow me to go back to 9.0. Why can't Apple let one step back to the previous version? I am seriously considering getting off iTunes permanently. Each new release is harder to use and performs more poorly.

    Unfortunately I have had the same problem since I installed iTunes 9.1, and this problem shows up in a lot of threads.
    I am seriously disappointed because I BOUGHT this music, and it's unbelievable that iTunes doesn't play its own music (sorry for my English... I am writing from Italy...).
    I am afraid it's an Intel Atom processor conflict.
    I have already updated everything that could be updated (drivers, codecs, audio configuration settings, energy saving...) but I haven't found anything that solves this issue...

  • Performance problem in Z stock report...

    Hi Experts,
    I am facing a performance problem in a custom stock report in Materials Management.
    In this report I fetch all materials with their batches to get the desired output; in a single run the report processes 36,000-plus unique combinations of material and batch.
    The report takes around 30 minutes to execute, and it has to be viewed regularly, every 2 hours.
    To read the batch characteristic values I am using FM '/SAPMP/CE1_BATCH_GET_DETAIL'.
    Is there any way to increase the performance of this report? The output of the report is in ALV.
    Could I have a refresh button in the report so that the data is refreshed automatically without executing it again, or is there some cache memory concept?
    Note: I have declared all the itabs as sorted tables, and all the SELECT queries fetch by key and index.
    Thanks
    Rohit Gharwar

    Hello,
    SE30 is old. Switch on a trace in ST12 while running this program and identify exactly where most of the time is being spent. If you see high CPU time, the problem is in the ABAP code; you can pinpoint the exact program/function module from the ST12 trace. If you see high database time in ST12, the problem is a database-related issue, so you basically have to analyze the SQL statements from the performance traces in ST12. That should resolve your issue.
    Yours Sincerely
    Dileep

  • Performance problem with WPF Viewer CRVS2010

    Hi,
    We are using Crystal Reports 2010 and the new WPF Viewer. Last week, when we set up a test machine to run our integration tests (several hundred), all report tests (about 30) failed with a timeout exception.
    The test machine setup:
    HP DL 580 G5
    VMware ESXi 4.0
    Guest OS: Windows 7 Enterprise 64-bit
    Memory (guest OS): 3GB
    CPU: 1
    Visual Studio 2010
    Crystal Reports for Visual Studio 2010 with 64 bit runtime installed
    Visual Studio 2008 installed
    Microsoft Office 2010 installed
    McAfee antivirus
    There are about 10 other virtual machines on the same HW.
    I think the performance problem is related to text objects on a report document viewed in the WPF Viewer. I made a simple WPF GUI with 2 buttons; the first button executes a very simple report that has only a text object with a few words in it, and the other button executes an equally simple report with only 1 text object of approx. 100 words (about 800 characters).
    The first report executes and displays almost instantly; the second report executes instantly but displays only after approx. 1 min 30 sec.
    And "execute" in this context means that all the VB.Net code runs without any exception or performance problem. The performance problem seems to come after viewer.Show() (in the code below) has executed.
    I did another test on the second report, replacing the text object with a formula field containing the same text, and this test executed and displayed the report instantly.
    So the performance problem seems to have something to do with the rendering of text objects in the WPF Viewer on a virtual machine with the above setup.
    I've run several tests on local machines with Windows XP (32-bit) or Windows 7 (64-bit) installed, and none of them show this performance problem. It's not a critical issue for us, because our users will run this application on their local PCs with Windows 7 64-bit, but it is a bit problematic for our project not to be able to run all of our integration tests; I will probably solve this by using a local PC instead.
    Here is the VB.Net code I'm using to view the reports:
    Private Sub LightWeight_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
        Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
        lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeight.rpt")
        ' Initialize the viewer and hand it the loaded report
        Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
        viewer.Owner = Me
        viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
        viewer.Show()
    End Sub

    Private Sub LightWeightSlow_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
        Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
        lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeightSlow.rpt")
        ' Initialize the viewer and hand it the loaded report
        Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
        viewer.Owner = Me
        viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
        viewer.Show()
    End Sub
    The reports are 2 empty default reports with only 1 textobject on the details section.
    // Thomas

    See if the KB [1448013 - Connecting to Oracle database. Error: Failed to load database information|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433343338333033313333%7D.do] helps.
    Also the following may not hurt to have a look at (if only for ideas):
    [1217021 - Err Msg: "Unable to connect invalid log on parameters" using Oracle in VS .NET|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333233313337333033323331%7D.do]
    [1471508 - Logon error when connecting to Oracle database in a VS .NET application|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433373331333533303338%7D.do]
    [1196712 - Error: "Failed to load the oci.dll" in ASP.NET application against an Oracle database|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333133393336333733313332%7D.do]
    Ludek
    Follow us on Twitter http://twitter.com/SAPCRNetSup

    Hello Forum! I am learning how to create an iWeb site. I'm curious to do the following: If I want something like a sub-page, let's call it "My Portal", to be under the "About Me" link...how do I do that? Another words, people that will come to visit