Performance problems that come and go

My four-year-old Intel iMac running 10.6.7 has had performance issues since about the time I installed Aperture 3. I'm not saying they're related, but things did tend to go south from about that time. In Aperture 3, it takes about five seconds for the "Loading" message to disappear from the image. In other software, performance was no longer snappy either.
Recently, the performance really went in the dumper: my Parallels VM would barely function, it was so slow, and adding stacks in a Rapidweaver site became a five-second operation instead of being almost instantaneous.
After a restart, performance was almost perfect for about half an hour, with Aperture images loading in about a second, Rapidweaver going back to normal, etc. But that didn't last long, and I'm back to the slow loading, etc.
Permissions have been repaired, Verify Disk says the disk is OK, there are no nasties in the login items, Activity Monitor shows nothing going on that would suck up resources, there's plenty of free RAM (4 GB installed), the CPU is almost idling, and there's nothing to speak of in network traffic.
Anywhere else I can look for problems?

Thanks, forgot about that. No change in Aperture (maybe a small gain), but Rapidweaver is nice and snappy again. I'm typing this with the VM running, so obviously no issues there.
BTW, I'm testing Aperture performance by always using the same images and with other apps running. Very curious to know why I suddenly received that performance boost that lasted only half an hour. (Turbo switches - remember them?)
The VM seems to be working very nicely, so improvement there.

Similar Messages

  • Performance problems when filtering and excluding hierarchy node

    Dear all,
    I set up a query with a hierarchy linked to it, where I use a "hard-filtered" exclusion on this hierarchy for a specific node AND a variable which the users can use for flexible filtering purposes.
    When I execute the query, the runtime of this query is unimaginably high, and I really doubt whether the underlying SQL statement is executed (or generated) correctly. Before I start tracing everything, I want to know whether you have encountered similar problems with this kind of selection and de-selection in a query.
    Thanks,
    Andreas

    Hi Andreas,
    I read in one of the performance docs that
    "Inclusion gives better performance than exclusion."
    I've never tried it, but you can check whether there is any performance improvement by including the nodes you need rather than excluding the ones you don't need.
    Let me know..
    Ashish
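    To see why, it helps to look at what the database has to do in each case. Here is a minimal plain-SQL sketch (the flattened fact table and column names are hypothetical, not BW-generated code): an inclusion list can be answered with an index probe on exactly the wanted nodes, while an exclusion forces the engine to read everything else and test each row against the filter.
    -- Inclusion: the optimizer can range-scan an index on node_id
    -- for just the listed nodes.
    SELECT SUM(f.amount)
    FROM   fact_table f
    WHERE  f.node_id IN ('NODE_A', 'NODE_B');
    -- Exclusion: every row must be read and checked against the NOT IN list.
    SELECT SUM(f.amount)
    FROM   fact_table f
    WHERE  f.node_id NOT IN ('NODE_C');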

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 16 times longer to process one-tenth the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBs: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL, both with XMLFOREST and with XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs rather than processing all records at once within the query; a sketch of this workaround appears after this list. The performance is one record/second, which is minimally acceptable but interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
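    For concreteness, here is a minimal sketch of the explicit-cursor workaround mentioned above. The legacy table and column names (LEGACY_EMP, EMP_ID, SVC_YEAR, HIRE_DATE) are hypothetical stand-ins, not our real schema:
    -- Convert tabular source records to XML one employee at a time;
    -- processing all records in a single query hangs above ~800 records.
    DECLARE
      doc xmltype;
    BEGIN
      FOR e IN (SELECT DISTINCT emp_id FROM legacy_emp) LOOP
        SELECT xmlelement("Employee",
                 xmlelement("Id", e.emp_id),
                 (SELECT xmlagg(xmlforest(l.svc_year AS "Year",
                                          l.hire_date AS "HireDate"))
                    FROM legacy_emp l
                   WHERE l.emp_id = e.emp_id))
          INTO doc
          FROM dual;
        INSERT INTO records (ssn, xmlrec) VALUES (e.emp_id, doc);
      END LOOP;
      COMMIT;
    END;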
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN varchar2(20),
      XMLREC sys.xmltype
    ) xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
      CODE varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
      for $r in Root
      return
        <Root>
          <Id>123456789</Id>
          {for $e in $r/Element
           return
             <Element>
               <Subelement1>
                 {$e/Subelement1/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement1>
               <Subelement2>
                 {$e/Subelement2/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement2>
               <Subelement3>
                 {$e/Subelement3/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement3>
             </Element>}
        </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!), should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
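    In case it helps others, here is what that experiment might look like: a sketch under assumptions (it returns rows rather than a rebuilt XML document) that shreds the record with XMLTABLE and joins CODES relationally, giving the optimizer an ordinary join to work with:
    -- Shred the one record into rows, then join CODES like any other table,
    -- so codes_code (or a hash join) can be used instead of probing
    -- ora:view() once per XPath predicate.
    select x.seq,
           x.code1, c1.description desc1,
           x.code2, c2.description desc2,
           x.code3, c3.description desc3
    from   records r,
           xmltable('/Root/Element' passing r.xmlrec
                    columns seq   for ordinality,
                            code1 varchar2(4) path 'Subelement1/Code',
                            code2 varchar2(4) path 'Subelement2/Code',
                            code3 varchar2(4) path 'Subelement3/Code') x,
           codes c1, codes c2, codes c3
    where  r.ssn = '10000'
    and    c1.code = x.code1
    and    c2.code = x.code2
    and    c3.code = x.code3;
    Rebuilding the XML from those rows (e.g., with XMLELEMENT/XMLAGG) would then be a separate, cheap step.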

  • Mac Pro 8-core Power Supply Making Humming Noise That Comes And Goes

    This is in reference to a Mac Pro 3,1 8-core purchased around Feb 2008.
    Lately, I've noticed a strange noise coming from my power supply that sounds like the noise a refrigerator radiator makes. It's a humming noise that rapidly comes and goes, constantly.
    The power supply fan is running at 599 rpm.
    I'm not sure if the problem could've been caused by adding additional RAM (12GB, OWC) or an additional HDD (3rd HDD, WD), but I didn't really notice the noise until after adding these components.
    What could be the cause of this problem? What are my repair options if necessary (without Apple Care)?
    Thanks in advance.

    I know for a fact it is not any of my hard drives, because the noise is becoming more frequent no matter which hard drive I use. I actually have a 1.5TB, a 1TB, two 500GB, and two 320GB drives. I have set an appointment to take my system in and have it tested; it is starting to crash 5-6 times a day. I can not get a lot of work done with this annoying spinning noise and the shutting down. Thank God for my AppleCare. I have visited various forums that lead me to believe it's the power supply going out, given the symptoms it is having. This article and others like it explain what to look for when a power supply is going bad.
    http://www.smartcomputing.com/editorial/article.asp?article=articles/archive/r0704/24r04/24r04.asp

  • [URGENT] Performance problem with BC4J and partioned data

    Hi all,
    I have a big performance problem with BC4J and partitioned data. Since a partitioned table shouldn't have a primary key like a sequence (or something else), my partitioned table doesn't have any primary key.
    When I debug my BC4J application, I can see a message from EntityCache showing "ignoring row with no primary key". It takes a long time to retrieve my data even if I use the partition keys. A quick & dirty Forms application was multiple times faster!
    Is this a bug in BC4J, or is BC4J not suitable for partitioned data? Can anyone give me a hint on what to do to make the BC4J application fast even with partitioned data? In a non-partitioned environment the application works quite well, so it seems the error must be somewhere in this part.
    Thanks,
    Axel

    Here's a SQL statement that creates the table.
    CREATE TABLE SEARCH
    (SEAR_PARTKEY_DAY              NUMBER(4)        NOT NULL
    ,SEAR_PARTKEY_EMP            VARCHAR2(2)      NOT NULL
    ,SEAR_ID                     NUMBER(20)       NOT NULL
    ,SEAR_ENTRY_DATE             TIMESTAMP        NOT NULL
    ,SEAR_LAST_MODIFIED            TIMESTAMP             NOT NULL
    ,SEAR_STATUS                 VARCHAR2(100)    DEFAULT '0'
    ,SEAR_ITC_DATE               TIMESTAMP        NOT NULL
    ,SEAR_MESSAGE_CLASS          VARCHAR2(15)     NOT NULL
    ,SEAR_CHIPHERING_TYPE        VARCHAR2(256)   
    ,SEAR_GMAT                   VARCHAR2(1)      DEFAULT 'U'
    ,SEAR_NATIONALITY            VARCHAR2(3)      DEFAULT 'XXX'
    ,SEAR_MESSAGE_ID             VARCHAR2(32)     NOT NULL
    ,SEAR_COMMENT                VARCHAR2(256)    NOT NULL
    ,SEAR_NUMBER_OF              NUMBER(3)        NOT NULL
    ,SEAR_INTERCEPTION_SYSTEM    VARCHAR2(40)    
    ,SEAR_COMM_PRIOD_H           NUMBER(5)        DEFAULT -1
    ,SEAR_PRIOD_R                  NUMBER(5)        DEFAULT -1
    ,SEAR_INMARSAT_CES           VARCHAR2(40)    
    ,SEAR_BEAM                   VARCHAR2(10)    
    ,SEAR_DIALED_NUMBER          VARCHAR2(70)    
    ,SEAR_TRANSMIT_NUMBER        VARCHAR2(70)    
    ,SEAR_CALLED_NUMBER          VARCHAR2(40)    
    ,SEAR_CALLER_NUMBER          VARCHAR2(40)    
    ,SEAR_MATERIAL_TYPE          VARCHAR2(3)      NOT NULL
    ,SEAR_SOURCE                 VARCHAR2(10)    
    ,SEAR_MAPPING                VARCHAR2(100)    DEFAULT '__REST'
    ,SEAR_DETAIL_MAPPING         VARCHAR2(100)
    ,SEAR_PRIORITY               NUMBER(3)        DEFAULT 255
    ,SEAR_LANGUAGE               VARCHAR2(5)      DEFAULT 'XXX'
    ,SEAR_TRANSMISSION_TYPE      VARCHAR2(40)    
    ,SEAR_INMARSAT_STD           VARCHAR2(1)     
    ,SEAR_FILE_NAME              VARCHAR2(100)    NOT NULL
    )
    PARTITION BY RANGE (SEAR_PARTKEY_DAY, SEAR_PARTKEY_EMP)
      (PARTITION SEARCH_MAX VALUES LESS THAN (MAXVALUE, MAXVALUE) TABLESPACE MIRA4_SEARCH_EVEN
    );
    Of course, SEAR_ID is filled by a sequence, but the field is not the primary key, as that would decrease the performance of partitioned data.
    We moved to native JDBC with our application, and the performance is like we never expected it to be!
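    For what it's worth, one middle ground to try (my suggestion, not something tested in this thread): BC4J's EntityCache needs a primary key, and you can give SEARCH one without a global index by including the partition keys in the key and backing the constraint with a LOCAL index, so partition pruning still applies:
    -- Composite key containing the partitioning columns, so the backing
    -- unique index can be LOCAL (equipartitioned with the table).
    ALTER TABLE search ADD CONSTRAINT search_pk
      PRIMARY KEY (sear_partkey_day, sear_partkey_emp, sear_id)
      USING INDEX (CREATE UNIQUE INDEX search_pk_ix
                   ON search (sear_partkey_day, sear_partkey_emp, sear_id) LOCAL);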

  • When I run Activity Monitor with no programs running, my Dock shows fluctuating usage... between 4% and 20%. Random figures that come and go, with corresponding CPU usage. A little help?

    My Activity Monitor shows random, fluctuating CPU usage for the Dock with absolutely no programs running. The percentages fluctuate between 3 and 20, and they pretty much come and go. What might be causing this?

    There are a lot of background processes running that aren't documented as apps as such: Time Machine backups, Spotlight searches, all kinds of small bits of cache maintenance; there's a lot going on. So it's not completely out of the question that this is 'normal' for a Mac. If it were 50% or more I would be concerned, but if it spikes up to 20% for a second or two, you are probably not in any trouble of any kind.

7.2 has Save as Performance (What's that mean and why?)

    The topic says it all.

    Probably it's for people who don't know how to create that effect and don't have deep knowledge of audio engineering, so they made it easy for them.
    It's useful for those who want to create podcasts etc.
    The Reference says that, for technical reasons, the Ducker can be inserted only on a Bus or Master output.
    http://www.harmony-central.com/Effects/Articles/Compression/
    The page at the link above says a little bit about ducking, for those interested.
    A

  • Performance problems with JDBC and mysql

    Hi,
    I have here a one-table database with 1,000,000 data rows. The problem is that deleting a specific entry (i.e. delete from mytable where da_value='foo1') takes between 10 and 60 seconds.
    Can anyone help me out of this performance trap?
    LoCal

    Is the table indexed, and are the keys used?
    I've always preferred Postgres! :)
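    If there is no index on da_value, that is almost certainly the trap. A minimal sketch, reusing the mytable/da_value names from the post:
    -- See whether the filter can use an index; a full scan over
    -- 1,000,000 rows would explain the 10-60 second DELETE.
    EXPLAIN SELECT * FROM mytable WHERE da_value = 'foo1';
    -- An index on the filter column lets the DELETE locate matching rows directly.
    CREATE INDEX idx_mytable_da_value ON mytable (da_value);
    DELETE FROM mytable WHERE da_value = 'foo1';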

  • About the wifi disconnection problem that I and many of you have...

    Before making a hasty post on this new chic discussions forum, I went through the recent activity and found that many of you randomly lose connectivity to your wifi routers. I recently bought a Belkin Surf 802.11n router and have been having similar problems. The AirPort icon would show connectivity, but Safari would fail to load pages, including the router's GUI page.
    Upon reconnection things would work fine.
    I found something common among all the times this happened to me: my iPhone 3G was placed just next to my MacBook Pro, about 1 foot away, on the desk whenever this happened. Also, it sits in the path between the MacBook Pro and the router.
    I'm wondering if this is a problem relating to signal interference. Interestingly, the old router was a/b/g and the new one is a/b/g/n, and the iPhone 3G only goes up to g. So it could possibly be a signal mix-up between the two devices, the MacBook on n and the iPhone 3G on g.
    Has anyone of you also had a wifi device nearby when this happened, particularly an iPhone 3G?
    Also, if this problem is caused by something else, please tell me what it is and what the solution is, if you know.
    thanks.
    Neerav
    Message was edited by: AceNeerav

    Hi Macybean,
    Please continue your other thread here:
    https://discussions.apple.com/message/15051193
    You obviously haven't found a solution yet, and didn't add that detail there. It is better to finish one thread and get it answered completely than to interrupt another person's with a "me too", especially if there are numerous causes that could possibly produce the same issue. It is harder to isolate issues if they are melded in the same thread. Now, if you have a solution you can offer, that's fine.
    Thank you.

  • Line of "dead pixels" on the screen that come and go.

    I just did a clean erase and install; I'm using Leopard with all the current updates. I noticed the system was leaving lines on the screen before the fresh install, so I figured it was a software problem. Well, after doing a clean install, they are back. It's like the video isn't processing correctly or something, because I can put another window on top of the "dead pixels" and it covers them up, like in this picture: http://farm4.static.flickr.com/3269/2442796153fb66961090o.png
    The Finder window is covering an inch or so of the dead pixels. And to clarify, the lines of dead pixels, or whatever they are, appear to be associated with a certain window or app and show up at different parts of the screen. For instance, if I see the lines in Firefox, I can minimize the window and the line is gone.
    This is on a 24-inch white iMac with the 2.16 C2D.
    Any ideas what would make this happen?
    Message was edited by: DagYo!

    There are quite a few posts about this sort of thing.
    Many people find that some minutes to hours after the lines of pixels appear, the Mac freezes - the mouse still moves but you can't do anything.
    It's not clear if it's software or hardware.
    My suspicion is that it's a software problem but I may be wrong. Either graphics memory or RAM used for buffering is faulty, or something is inadvertently writing garbage to graphics memory.
    Most people attribute the issue to Leopard, but I'm still running Tiger.
    Lots of posts about it being an nVidia card driver issue, but I'm running ATI.
    Maybe coincidence but my 20" late 2006 iMac started playing up after a security update earlier this year.
    Some people have had graphics cards replaced and it's cured it, pointing at hardware.
    Another thing that seems to help is upping default fan speed using say smcFanControl.
    Two things that have made this less frequent for me are:
    - turning off the Genie effect in System Preferences for minimizing/maximizing windows (I often got freezes when changing windows)
    - running smcFanControl
    There is some debate as to whether the fans are failing to speed up properly (software/EFI or SMC problem?) in response to higher temps, and hence components are becoming unstable.
    Try some searches on 'glitches' or 'freezing', or browse through the last couple of weeks' threads.
    If your machine is under warranty, I'd be phoning Apple. Mine, alas, is not.
    AC

  • Performance problem with sproc and out parameter ref cursor

    Hi
    I have a sproc with a Ref Cursor as an OUT parameter.
    It is extremely slow looping over the ResultSet (it fetches record by record),
    so I have added setPrefetchRowCount(100) and setPrefetchMemorySize(6000).
    pseudo code below:
    string sqlStmt = "BEGIN get_tick_data( :v1 , :v2); END;";
    Statement* s = connection->createStatement(sqlStmt);
    s->setString(1, i1);
    // cursor ( f1 , f2, f3 , f4 , i1 ): f for float type and i for integer value.
    // 5 columns as part of the cursor, 4 of them float and 1 int,
    // assuming 40 bytes for one record.
    s->setPrefetchRowCount(100);
    s->setPrefetchMemorySize(6000);
    s->registerOutParam(2, OCCICURSOR);
    s->execute();
    ResultSet* rs = s->getCursor(2);
    while (rs->next()) {
    // do, and do v slowly!
    }

    Hi,
    I have the same problem. It seems, when retrieving a cursor, that "setPrefetchRowCount" is not taken into account by OCCI. If you have a SQL statement like "SELECT STR1, STR2, STR3 FROM TABLE1" that works fine, but if your SQL statement is a call to a stored procedure returning a cursor, each row fetch needs a roundtrip.
    To avoid this problem you need to use the method "setDataBuffer" of the "ResultSet" object for each column of your cursor. It's easy to use with the INT and STRING types, a little bit more complex with the DATE type. But until now, I have not been able to do the same thing with the REF type.
    Below is a sample with the STRING type (it assumes the cursor returns only one column of STRING type):
    try
    {
      l_Statement = m_Connection->createStatement("BEGIN :1 := PACKAGE1.GetCursor1(:2); END;");
      l_Statement->registerOutParam(1, oracle::occi::OCCINUMBER, sizeof(l_CodeErreur));
      l_Statement->registerOutParam(2, oracle::occi::OCCICURSOR);
      l_Statement->executeQuery();
      l_CodeErreur = l_Statement->getNumber(1);
      if ((int) l_CodeErreur == 0)
      {
        char l_ArrayName[5][256];
        ub2  l_ArrayNameSize[5];
        l_ResultSet = l_Statement->getCursor(2);
        // Bind a 5-row buffer to column 1 so each next(5) fetches up to
        // 5 rows per roundtrip instead of one.
        l_ResultSet->setDataBuffer(1, l_ArrayName, OCCI_SQLT_STR, sizeof(l_ArrayName[0]), l_ArrayNameSize, NULL, NULL);
        while (l_ResultSet->next(5))
        {
          for (int i = 0; i < l_ResultSet->getNumArrayRows(); i++)
            l_Name = CString(l_ArrayName[i]);
        }
        l_Statement->closeResultSet(l_ResultSet);
      }
      m_Connection->terminateStatement(l_Statement);
    }
    catch (SQLException &p_SQLException)
    {
      // handle or log the error here
    }
    I hope that sample helps you.
    Regards

  • Performance problems with combobox and datasource

    We have a performance problem when connecting a DataTable object (or something like it) to the DataSource property of a ComboBox. Below you find the source code. The SQL statement reads about 40,000 rows, and the whole result (all 40,000 rows) should be listed in the combobox. It takes about 30 seconds for this process to finish. Any suggestions?
    Dim ds As New DataSet
    Dim strSQL As String = "Select * from am.city"
    Dim conn As New Oracle.DataAccess.Client.OracleConnection(Configuration.ConfigurationSettings.AppSettings("conORA"))
    Dim da As New Oracle.DataAccess.Client.OracleDataAdapter(strSQL, conn)
    conn.Open()
    da.Fill(ds)
    conn.Close()
    Dim dt As DataTable = ds.Tables(0)
    ComboBox1.DataSource = dt
    ComboBox1.ValueMember = dt.Columns("id").ColumnName
    ComboBox1.DisplayMember = dt.Columns("city").ColumnName

    But how long does it take to fill the DataTable?
    I can fill a 40,000-row DataTable in under 4 seconds.
    DataBinding a combo box to that many rows is pretty expensive, and not normally recommended.
    David
    Dim strConnection As String = "Data Source=oracle;User ID=scott;Password=tiger;"
    Dim conn As OracleConnection = New OracleConnection(strConnection)
    conn.Open()
    Dim cmd As New OracleCommand("select * from (select * from all_objects union all select * from all_objects) where rownum <= 40000", conn)
    Dim ds As New DataSet()
    Dim da As New OracleDataAdapter(cmd)
    Dim begin As Date = Now
    da.Fill(ds)
    Console.WriteLine(ds.Tables(0).Rows.Count & " rows loaded in " & Now.Subtract(begin).TotalSeconds & " seconds")
    outputs
    40000 rows loaded in 3.734375 seconds
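    Since the expensive part is binding all 40,000 rows, another angle (a sketch reusing the am.city table from the original post; the 200-row cap and :prefix bind are assumptions) is to narrow the result in SQL and bind only what a user can plausibly scroll through:
    -- Bind a few hundred rows at most; re-query as the user types.
    select id, city
    from   (select id, city
            from   am.city
            where  upper(city) like upper(:prefix) || '%'
            order  by city)
    where  rownum <= 200;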

  • Performance problems between dev and prod

    I run the same query with identical data and indexes, but one system takes 0.01 seconds to run while the production system takes 1.0 seconds. TKPROF for dev is:
    Rows Row Source Operation
    1 TABLE ACCESS BY INDEX ROWID VAP_BANDVALUE
    3 NESTED LOOPS
    1 NESTED LOOPS
    41 NESTED LOOPS
    41 NESTED LOOPS
    1 TABLE ACCESS BY INDEX ROWID VAP_PACKAGE
    1 INDEX UNIQUE SCAN SYS_C0032600 (object id 51356)
    41 TABLE ACCESS BY INDEX ROWID VAP_BANDELEMENT
    41 AND-EQUAL
    82 INDEX RANGE SCAN IDX_BE2 (object id 53559)
    41 INDEX RANGE SCAN IDX_BE1 (object id 53558)
    41 TABLE ACCESS BY INDEX ROWID VAP_BAND
    41 INDEX UNIQUE SCAN SYS_C0034599 (object id 53556)
    1 INDEX UNIQUE SCAN SYS_C0032549 (object id 51335)
    1 INDEX RANGE SCAN IDX_BV1 (object id 53557)
    TKPROF for prod is:
    Rows Execution Plan
    0 SELECT STATEMENT MODE: ALL_ROWS
    1 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_BANDVALUE' (TABLE)
    52001 NESTED LOOPS
    26000 NESTED LOOPS
    26000 NESTED LOOPS
    26000 NESTED LOOPS
    1 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_PACKAGE' (TABLE)
    1 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'SYS_C0018725' (INDEX (UNIQUE))
    26000 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_BANDELEMENT' (TABLE)
    26000 INDEX MODE: ANALYZED (RANGE SCAN) OF 'IDX_BE2' (INDEX)
    26000 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_BAND' (TABLE)
    26000 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'SYS_C0030648' (INDEX (UNIQUE))
    26000 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'SYS_C0018674' (INDEX (UNIQUE))
    26000 INDEX MODE: ANALYZED (RANGE SCAN) OF 'IDX_BV1' (INDEX)
    The row count varies greatly. But it shouldn't, as the data is the same.
    Any ideas?

    From DEV you show the Row Source Operations for the query. The column named "Rows" signifies the actual number of rows processed with each step.
    From PROD you show the Execution Plan for the query; that is, tkprof was executed with the EXPLAIN option which generates the execution plan as of the time when tkprof was run. The "Rows" column in the Explain Plan output comes from the PLAN_TABLE.CARDINALITY, which represents an estimate by the CBO for the number of rows [expected to be] processed with each step.
    So, if by <quote>The row count varies greatly</quote> you meant these "Rows" column outputs, then you are comparing actuals from one database with estimates from another. Get the Row Source Operations from both.
    "Identical data and indexes":
    1. data may be the same, but it is not necessarily stored physically the same way.
    2. Indexes being the same means their definitions are the same; again, physically they are not necessarily identical
    In other words, data in PROD (the way it is stored on disk) may have evolved as a result of discrete deletes/updates/inserts ... in DEV it could be, for example, stored more compactly if you took a copy of PROD and moved it into DEV. So the number of blocks for your segments will likely differ between PROD and DEV, the clustering factor for your indexes is likely different, etc. ... things which could [and do] influence the CBO. The statistics may be different.
    I guess what I'm saying is ... it is quite hard, if not outright impossible, to get two identical databases/instances/load ... hence, don't expect the executions to be 100% identical, even if you have "identical data and indexes". By all means compare between DEV and PROD (make sure you compare the same thing though) and use the observed differences as an indicator for further investigation ... don't chase the goal of 100% identical behavior.
    Now, by all means look at that query taking 1 second in PROD ... I have only addressed <quote>The row count varies greatly. But it shouldn't as the data is the same.</quote>
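    As a concrete first step (my suggestion, reusing the object names from the posted plans), compare the optimizer statistics the CBO is working from on each side:
    -- Run in both DEV and PROD and diff the results.
    select table_name, num_rows, blocks, last_analyzed
    from   user_tables
    where  table_name in ('VAP_PACKAGE', 'VAP_BAND', 'VAP_BANDELEMENT', 'VAP_BANDVALUE');
    select index_name, blevel, leaf_blocks, clustering_factor, last_analyzed
    from   user_indexes
    where  table_name in ('VAP_PACKAGE', 'VAP_BAND', 'VAP_BANDELEMENT', 'VAP_BANDVALUE');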

  • When running diagnostics, one of the problems that comes up is that iTunes is not running in safe mode. How do I fix this?

    Cannot connect to the iTunes Store - the diagnostic report comes up as "secure link to iTunes Store failed" - how do I fix this?

    Scotty2015 said:
    "First time this situation occurred, folders started to disappear as I read messages. TB 31.5.0, Win 7 Pro 64-bit.
    Thunderbird user forever. I got into safe mode and disabled all add-ons - now all folders are visible and all content is accessible back to 2009. I used MozBackup, so I have a backup. Restarted TB and I'm back to 3 folders. An Inbox gets created with the new emails that arrive. Filters send email to one of the visible folders, 'Travel'. When you open 'Travel', all 23 unread emails are accessible. Go back into safe mode and all folders and emails are present.
    Thanks"
    No extensions, no add-ons. I uninstalled Thunderbird, deleted all TB folders, then rebooted Windows in safe mode and ran Malwarebytes - no malware. I rebooted into normal Windows and downloaded a fresh version of TB, then reloaded the profile using MozBackup. Still the same: TB loads showing 3 folders, no Inbox, and there should be 2 accounts. Change to safe mode in TB and all files are present, plus 2 inboxes.
    I am not losing data; I just can only see emails when in TB safe mode. What TB profile files could be contaminated?

  • 3D performance problems after upgrading memory

    I recently purchased an additional 2GB of memory to try to extend the life of my aging computer. I installed the memory yesterday and Windows seems to recognize it (now reporting 3.3GB), but when I dropped into WoW (pretty much the only game I have), the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best), and CPU utilization went to about 80% on both cores (with ~20% kernel usage). Basically, WoW was being software 3D rendered!!!
    I went through the usual reinstall drivers, reboot, etc... and couldn't find a fix.  I powered down, pulled out 2 of the memory sticks, booted up, and dropped into WoW - it ran at the full 60FPS and CPU utilization was very low (i.e. back to GPU Hardware 3D rendering).  I powered down again, swapped the 2 sticks for the other 2 sticks, booted up, and dropped into WoW - again it ran 100% fine.  So I powered down, put all four sticks in, booted back up, and when I dropped into WoW it was running in the software 3D rendering mode (20FPS at best and High CPU/Kernel usage).
    I've tried the /PAE option in boot.ini - no joy. I've tried setting /MAXMEM to 3300, 3072, 3000, and even 2048 - no joy in any of those cases. Has anyone seen anything like this before? Or have suggestions for a fix (other than going to Win7-64)?
    All info in signature is up to date.
    Thanks in advance for any help!

    Quote
    Well his last post was a little over 6 hours ago so he was up pretty late.
    Looks like nothing one does in here goes completely unnoticed.   
    Anyway, I am done sleeping now.
    Quote
    his 2 Pfennigs' worth.  I know, I know, it's Euros now.
    Yeah, and what used to be "Pfennige" is now also called "Cents", and here are mine:
    Quote
    I've tried the /PAE option in boot.ini - no joy. I've tried setting /MAXMEM to 3300, 3072, 3000, and even 2048 - no joy in any of those cases. Has anyone seen anything like this before? Or have suggestions for a fix (other than going to Win7-64)?
    PAE, or Physical Address Extension, will not do anything, as Microsoft has castrated this feature to such an extent that it has nothing to do with memory addressing anymore when it comes to Windows XP:
    http://en.wikipedia.org/wiki/Physical_Address_Extension#Microsoft_Windows
    Quote
    Windows XP Service Pack 2 and later, by default, on processors with the no-execute (NX) or execute-disable (XD) feature, runs in PAE mode in order to allow NX. The NX (or XD) bit resides in bit 63 of the page table entry and, without PAE, page table entries only have 32 bits; therefore PAE mode is required if the NX feature is to be exploited. However, desktop versions of Windows (Windows XP, Windows Vista) limit physical address space to 4 GiB for driver compatibility reasons.
    The feature is already automatically enabled. But since its original function (address extension) no longer exists in the desktop versions of Windows XP, it won't really do anything you would ever notice.
    About the /MAXMEM switch: it actually caps how much physical memory Windows will use, which is not what you want here. The switch that relaxes the per-process limit is /3GB: in 32-bit Windows, every process is limited to 2GB of address space, and /3GB allows certain applications (or their run-time processes) to occupy up to 3GB. The culprit, however, is that only applications that have been programmed (or compiled) accordingly can use this ability; a special flag (large address aware) has to be set. Most 32-bit applications come without that flag, which is why setting the switch usually won't change anything.
    In any case, it is unlikely that /PAE (even if it were not castrated) or /MAXMEM would have an impact on your actual issue, because I doubt it has much to do with either memory addressing or the memory limit of an individual Windows process.
    Quote
    the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best), and CPU utilization went to about 80% on both cores (with ~20% kernel usage).
    There are a couple of hardware based explanations to consider here.  Let's start with the most obvious one:
    1. 975X Memory Controller
    The main reason the system automatically sets the memory speed to DDR2-667 even though DDR2-800 modules are installed is that, by design, the memory controller of the Intel 975X chipset does not natively support DDR2-800 modules; see:
    >>Intel® 975X Express Chipset Datasheet - For the Intel® 82975X Memory Controller Hub (MCH)<< [Page 20]
    This means that, from the point of view of the memory controller, operating the memory @DDR2-800 actually means overclocking it (with all potential side effects).
    Basically, if your initial problem disappears as soon as you reduce the memory speed to DDR2-667, the design limitation of the memory controller may explain your findings.
    2. Different memory modules
    If I read your signature correctly, you are actually mixing two different kits/models of RAM (CM2X1024-6400C4DHX and CM2X1024-6400C4). This can work, of course, but in practice it does not necessarily work under all circumstances.
    This list (-> http://ramlist.i4memory.com/ddr2/) indicates that there are at least 14 different module types/revisions of Corsair DDR2-800 / CL4 modules that utilize a wide range of different memory chips (Elpida, ProMOS, Micron, Infineon, Powerchip, Qimonda, Samsung, etc.). Even though the superficial specifications for these chips appear pretty similar (DDR2-800 / CL5 / CL4), this does not necessarily mean the modules will respond to the same operating conditions in the same way. There may be small differences in sub-timings/sub-latencies and/or the general responsiveness of the ICs, which may affect the operating behaviour of the memory controller (which, by the way, also includes the PCI-Express interface your video card is hooked up to).
    And again: if running the system @DDR2-667 solves your issue, the possible explanation is that higher clock speeds may amplify (or trigger) potential performance problems that have to do with the use of non-identical memory modules.
    Furthermore, it is also possible that the memory controller's design limitations and the potential compatibility problems attributable to mixing different module types reinforce each other in terms of reduced system performance.
    3. The BIOS may have an impact as well
    There have been known issues with the use of certain video cards in conjunction with 4GB of system memory on this mainboard:
    https://forum-en.msi.com/index.php?topic=107301.0
    https://forum-en.msi.com/index.php?topic=105955.0
    https://forum-en.msi.com/index.php?topic=99818.msg798951#msg798951
    What may have come out as graphics/display corruption in earlier BIOS releases may come out as reduced system performance when using the latest BIOS release. Of course, this is hard to prove, but I thought I'd mention it anyway. May I ask how much video memory your card has onboard?
    Fortunately, there is a BIOS version you could try in this matter. It is not only the last BIOS release that can be used to avoid the corruption issue, but it is (in my opinion) the best BIOS version ever released for the 975X Platinum PUE mainboard: W7246IMS.716 [v7.1b6]. I have been using this mainboard for almost two years and have tested almost every BIOS release that ever came out, and I always went back to v7.1b6 as "ground zero".
    It will properly support your E6600 (so you don't have to worry about that) and, as far as I remember, there are no known compatibility issues with other components. So maybe you want to give this a shot.
    The bottom line is that, in a worst-case scenario, the problem you describe could be caused by all of the above at the same time. You cannot really do anything about the 975X chipset specifications, and the only way to rule out explanation #2 is to test modules that are actually identical (same model number, revision, and memory chips). A test of the 7.1b6 BIOS release is something you should consider; it may be the only way to test the BIOS hypothesis.
    This post turned out to be longer than I intended, but then again, I am well-rested after a good sleep and the wake-up coffee is kicking in pretty good.
