Does the number of columns matter for performance?

Dear Members,
We have a Vehicle master table with nearly 50 attributes to define for each vehicle. Which would be better: defining all 50 columns in a single table, or having multiple tables like the example below?
1) veh_master
veh_id PRIMARY KEY
veh_registration_no
2) veh_registration_details
veh_id REFERENCES veh_master(veh_id)
veh_registered_town
veh_registered_name
3) veh_physical_details
veh_id REFERENCES veh_master(veh_id)
veh_no_of_seats
veh_no_of_sleepers
In fact, all fields in all the tables are mandatory, and the child tables are not multi-record tables; each holds a single record per vehicle.
Will having one table with so many columns affect performance? Which gives better performance, multiple tables or a single table?
Thanks in advance

Having many columns in the table will have a small slow-down effect. Having to do joins to do queries for the other data will have a larger slow-down effect. Having to do a join as part of the scan for the data you want (where the select criteria are spread across different tables) has a massive slow-down effect.
If the data always exists and is 1-1 with the master, I would have one table.
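For illustration, a minimal sketch of the single-table approach described above, using the column names from the example (the datatypes and lengths are assumptions):

    CREATE TABLE veh_master (
      veh_id              NUMBER        PRIMARY KEY,
      veh_registration_no VARCHAR2(20)  NOT NULL,
      veh_registered_town VARCHAR2(60)  NOT NULL,
      veh_registered_name VARCHAR2(60)  NOT NULL,
      veh_no_of_seats     NUMBER(3)     NOT NULL,
      veh_no_of_sleepers  NUMBER(3)     NOT NULL
      -- ...the remaining mandatory attributes as further columns
    );

    -- Every attribute of a vehicle comes back in one row, with no joins:
    SELECT * FROM veh_master WHERE veh_id = :veh_id;

Since every attribute is mandatory and 1-1 with the vehicle, the wide table costs little, and every lookup or scan stays join-free.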

Similar Messages

  • Premiere Pro CS5 never uses more than 15% of the CPU, so it performs much slower than Premiere 6.5!

    Hello.
    I've been doing wedding videos for quite a while. My hardware includes a DV500DVD card and Premiere 6.5; the CPU is a Core 2 Duo E8400 with 2GB of RAM, running Windows XP SP1 (the DV500 can't work with SP2 or SP3).
    The above mentioned system is quite old, and absolutely not usable for HD videos. So, I've decided to build another machine. The configuration is as follows:
    OS: Windows 7 Professional 64 bit.
    M/B: Asus LGA 1155, H67 chipset.
    CPU: Intel Core i7-2600K
    RAM: 4GB DDR3
    HDD: 2x 64GB SSD, 4 x 2TB SATA (RAID) (All HDDs connected to separate PCI-E controller, so intel chipset flaw is not an issue here)
    I recently downloaded and installed the trial version of Premiere Pro CS5 to see how it suits my needs, and I'm very disappointed. I started Premiere, selected the DV-widescreen preset, inserted two static pictures on the timeline, and did an alpha blend between them. Then I exported the movie. Not only did it take far too long to render, it used only a single core, and CPU usage was only 15%! Just out of curiosity, I installed Premiere 6.5 on the same system, and it did the same task about 10 times faster!
    Is this a limitation of the trial version, or can Premiere Pro CS5 simply not be used with this modern CPU?
    Thanks in advance,
    Alex

    First off, 4GB of RAM is really too low for CS5. CS5 is 64-bit for a reason and requires far more RAM. Second, what video card are you using?
    As for the Opteron system the other poster listed, you could get the lower-end Intel platform and easily outperform that Opteron system. A GeForce GTS 450 video card would outperform that Quadro by 2 or 3 times, and that card is barely over $100.
    No offence, but you can't run CS5 on minimal or very old specifications and expect the same results as everyone else or as older software. If you can't upgrade the hardware, then I suggest staying with the previous version until you can. The MPE engine requires a lot of resources to function, especially RAM, and a lower-spec or older system will just bottleneck the pipeline rather than speed things up.
    Eric
    ADK

  • Will more RAM increase my performance?

    I'm running Logic Pro 7.1.1 on a PowerBook G4 1.67 with 1 GB of RAM installed, running a fair amount of sample libraries and effects. When I check Activity Monitor under System Memory, I get the following:
    Wired: 108 MB
    Active: 600 MB
    Inactive: 300 MB
    Free: 13 MB
    Since I still have a decent amount of inactive memory, does that mean that more RAM would not affect my performance? Or will more RAM help with system overloads?

    I'm always careful about wholeheartedly saying 'yes' when people ask if more RAM will give more performance. While of course it will help things along in almost every case, sometimes quite dramatically, I think it's important to be realistic about what it actually does.
    It's easy to have the misconception that more RAM = more plug-ins on some kind of linear scale -- I had 2GB, so now with 4GB I should get something like double the performance, right? As long as you're clear why this isn't the case, that's the main thing.
    I think ever since the days of CPU cards like Pro Tools TDM, the confusion has been out there that adding RAM is like adding more DSP. So you just have to be careful that you understand what it's doing and why it can free your system up to do a little more. You can indeed see decent increases if your machine was struggling along before, given what you were trying to get it to do. Or it may just be a subtle thing.
    IMO the best way to think of it is this: your machine has the capacity to perform up to a certain level. When you start getting it to do a lot of things, you create a hindrance to it performing as fast as it can whenever it doesn't have enough RAM for the task. That amount will be different for everyone, dependent on your own use. So the best thing you can do is remove the bottlenecks, and you'll let your machine go as fast as it will go.

  • Performance Tuning in IR

    Hello All,
    We have created some reports using Interactive Reporting Studio. The volume of data in the Oracle database is huge; some tables of the relational database have over 3-4 crore rows each. We created the .oce connection file using the 'Oracle Net' option; the Oracle client version is 10g. We had earlier created pivots, charts and reports in those .bqy files, but had to delete them wherever possible to decrease the processing time for generating those reports.
    But deleting those from the file and retaining just the results section (the bare minimum part of the file) has still not fully solved the performance issue. Even now, for some reports the system gives an 'Out of Memory' error at processing time. The client PCs from which the reports are generated have 1 - 1.5 GB of memory. Some reports even take 1-2 hours to save the results after processing, and in some cases the PCs hang during processing. When we extract the query from those reports into SQL and run it in TOAD/SQL*Plus, it does not take nearly as much time as in IR.
    Would you please help us with this issue as soon as possible? Please share your views/tips/suggestions on performance tuning for IR. All replies will be highly appreciated.
    Regards,
    Raj

    SQL*Plus and TOAD are tools that send SQL and spool results; IR is a tool that asks the database to run SQL and then fiddles with the results before the user is even told data has been received. You need to minimize the time IR spends manipulating results into objects the user isn't even asking for.
    When a request is made to the database, Hyperion will wait until all of the results have been received. Once ALL of the results have been received, IR makes multiple passes to apply the sorts, filters and computed items existing in the results section. For some unknown reason, those three steps are performed less efficiently there than they would be in a table section. Only after all of the computed items have been calculated, all filters applied and all sorts sorted does IR start to calculate any reports, charts and pivots. After all that is done, the report stops processing and the data has been "returned".
    To increase performance, you need to fine-tune your IR services and your BQY docs. Replicate your DAS on your server - it can only transfer 2GB before it dies and restarts, and your requested document hangs. You can replicate the DAS multiple times, and should do so, to make sure there are enough resources available for concurrent users to make requests and have data delivered to them.
    To tune your bqy documents...
    1) Your Results section MUST be free of any sorts, filters, or computed items. Create a staging table and put any sorts or local filters there. Move as many of your computed items as possible to your database request line and ask the database to make the calculation (either directly or through stored procedures) so you are not at the mercy of the client machine. Any computed items that cannot be moved to the request line need to be put on your new staging table (see the sketch after this list).
    2) Ask the users to choose filters. Programmatically build dynamic filters based on what the user is looking for. The goal is to cast a net only as big as the user needs so you are not bringing back unnecessary data. Otherwise, you will bring your server and client machines to a grinding halt.
    3) Halt any report pagination. Build your reports from their own tables and put a dummy filter on each table that forces 0 rows until the report is invoked. Hyperion paginates every report BEFORE it even tells the user it has results, so this prevents the user from waiting an hour while thousands of pages are paginated across multiple reports.
    4) Halt any object rendering until requested. Same as above - create a mechanism for the user to tell the bqy what they want, so they are not waiting forever for a pivot and 2 reports to compile and paginate when all they want is a chart.
    5) Save compressed documents.
    6) Unless the document can be run as a job, NO results should be stored with the document. If you do save results with the document, store the calculations too, so you at least don't have to wait for those passes again.
    7) Remove all duplicate images and keep the image file size small.
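    To make points 1-3 concrete, here is the sketch referenced above, in plain SQL. The table and column names are hypothetical; in IR the equivalents live on the request line and in table-section filters.
        -- Point 1: push the computed item to the database request line
        -- instead of computing it on the client machine.
        SELECT order_id,
               qty * unit_price AS line_total
        FROM   sales
        WHERE  sale_date >= :from_date;  -- point 2: a user-chosen filter
        -- Point 3: a dummy filter that forces 0 rows, so no pagination
        -- work happens until the report is actually invoked:
        SELECT * FROM report_staging WHERE 1 = 0;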
    Hope this helps!
    PS: I forgot to mention - aside from results sections, in documents where the results are NOT saved, additional table sections add very, very little file size and, as long as there are no excessively large images, the same is true for reports, pivots and charts. Additionally, file size only matters when the user is requesting the document. It is never an issue while the user is processing the report, because by then the document has already been delivered and cached (in Workspace and in the web client).
    Edited by: user10899957 on Feb 10, 2009 6:07 AM

  • Message properties and performance

    Hi,
    Our application uses BytesMessage, but we add some user-defined properties in the message header area - all string properties. The number of properties we add is around 5-6, and I am trying my best to reduce that somehow. I am wondering whether the number of properties matters, or is it black or white? Also, if I end up with a convoluted design just to reduce 1 or 2 properties out of 5/6 - is it really worth it? And is there any way for an MDB instance to know which Queue a message came from if the message itself does not contain a user-defined property like "queuename" put there by the producer?
    thanks
    Anamitra

    One more thing to consider outside of performance: message header and property fields do not get paged out. This becomes a factor when there are a large number of messages on the server and, at the same time, the message properties are fairly large in comparison to the message header information.
    Tom Barnes wrote:
    > Anamitra wrote:
    >
    >> Our application uses BytesMessage - but we add some user-defined
    >> properties in the message header area - all string properties.
    >> The number of properties that we add is around 5-6 and am trying
    >> my best to reduce that somehow. Wondering whether the number of
    >> properties would matter or is it like black or white? Also if I
    >> end up with a screwed up design just to reduce 1 or 2 properties
    >> out of 5/6 - is it really worth it?
    >
    > Likely not worth it.
    >
    > It's not the number of Strings so much as the size of
    > the Strings that matters.
    >
    > The perf gain is likely not measurable except
    > for high-throughput non-persistent messaging (rates
    > of 1000 msgs/sec or higher) with "small" (few-hundred-byte)
    > message bodies and Strings greater than 25
    > characters in length. Of course, these are
    > very rough estimates - say plus/minus 75%, with
    > measured perf gains at 5% or more.
    >
    >> Also wondering if there is any way that an MDB instance knows
    >> which Queue the message came from if the message itself does not
    >> contain any user-defined property like "queuename" put by the producer.
    >
    > javax.jms.Destination dest = ((javax.jms.Message)msg).getJMSDestination();
    >
    > // get the name of the destination
    > String name = ((javax.jms.Queue)dest).getQueueName();

  • In SQL Trace, how do I see which statement takes more time?

    Hi Experts,
    In SQL Trace (transaction ST05) I am running a standard transaction. How can I see which statements
    take more time and which take less? And if one statement is taking a long time, how do I resolve the
    performance problem?
    Please reply.
    Regards
    Razz

    > The ones in 'RED' color are the statements which are taking a lot of time and you need to
    > optimise them.
    No, that is incorrect; the red ones only show statements which needed several hundred milliseconds in a single execution. That can even be acceptable for heavy tasks. And there are lots of problems which you will not see that way.
    I have said everything here:
    SQL trace:
    /people/siegfried.boes/blog/2007/09/05/the-sql-trace-st05-150-quick-and-easy
    Go to 'Trace list' -> 'Summarize by SQL statements'; this is the view you want to see!
    It summarizes all executions of the same statement.
    The checks are explained there as well; the slow statements are the ones which need a lot of time per record.
    See MinTime/Rec > 10,000 microseconds.
    Check the number of records, executions, buffer status, and identical selects.
    The SE30 Tips and Tricks will not help much.
    Siegfried

  • Performance degradation in pl/sql parsing

    We are trying to use the XML PL/SQL parser and noticed performance degradation as we run it multiple times. We zeroed in on the following clause:
    doc := xmlparser.getDocument(p);
    The first time the procedure is run, the elapsed time in SQL*Plus is something like 0.45 sec, but as we run it repeatedly in the same session, the elapsed time keeps increasing by about 0.02 seconds per run. If we log out and start fresh, we start again from 0.45 sec.
    We noticed similar degradation with
    p := xmlparser.newParser;
    but we got around it by making 'p' a package variable, initializing it once, and using the same parser for all invocations.
    Any suggestions?
    Thank you.

    Can I enhance the PL/SQL code for better performance? Probably you can enhance it.
    Or is it OK for it to take so long to process this many rows? It should take a few minutes, not several hours.
    But please provide some more details like your database version etc.
    I suggest tracing the session that executes the PL/SQL code, with WAIT events, so you'll see where and on what the time is spent; you'll identify your problem statements very quickly (after you or your DBA have TKPROF'ed the trace file).
    SQL> alter session set events '10046 trace name context forever, level 12';
    SQL> execute your PL/SQL code here
    SQL> exit
    This will give you a .trc file in your udump directory on the server.
    http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php
    Also this informative thread can give you more ideas:
    HOW TO: Post a SQL statement tuning request - template posting
    as well as doing a search on 10046 at AskTom, http://asktom.oracle.com will give you more examples.
    and reading Oracle's Performance Tuning Guide: http://www.oracle.com/pls/db102/to_toc?pathname=server.102%2Fb14211%2Ftoc.htm&remark=portal+%28Getting+Started%29
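    Separately, given the original symptom (elapsed time growing with every call in the same session), one plausible cause is that each parsed document stays in session memory until it is explicitly freed. A minimal sketch, assuming the standard xmlparser/xmldom PL/SQL API (verify the exact procedure names in your release):
        DECLARE
          p   xmlparser.Parser;
          doc xmldom.DOMDocument;
        BEGIN
          p := xmlparser.newParser;
          xmlparser.parseClob(p, :xml_clob);  -- hypothetical CLOB input
          doc := xmlparser.getDocument(p);
          -- ... work with doc ...
          xmldom.freeDocument(doc);  -- release the parsed document
          xmlparser.freeParser(p);   -- release the parser itself
        END;
        /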

  • Performance with MySQL and Database connectivity toolbox

    Hi!
    I'm having quite some problems with the performance of MySQL and Database connectivity toolbox. However, I'm very happy with the ease of using database connectivity toolbox. The background is:
    I have 61 variables (ints and floats) which I would like to save to the MySQL database. This is no problem; however, the loop time increases from 8 ms to 50 ms when using the database. I have concluded that it has to do with the DB Tools Insert Data.vi and I think that I have some kind of performance issue with this VI. The CPU never reaches more than 15% of its maximum. I use a default setup and connect through ODBC.
    My questions are:
    1. I would like to save 61 variables every 8-10 ms; is this impossible using this solution?
    2. Is there any way of increasing the performance of the DB Tools Insert Data.vi or use any other VI?
    3. Is there any way of adjusting the MySQL setup to achieve better performance?
    Thank you very much for your time.
    Regards,
    Mattias

    First of all, thank you very much for your time. All of you have been really good support to me.
    >> Is your database on a different computer?  Does your loop execute 61 times? 
    Database is on the same computer as the MySQL server.
    The loop saves 61 values at once to the database, in one SQL-statement.
    I have now attached the front panel and block diagram for my test VI. I have implemented the queue system with separate producer and consumer loops. However, since the queue builds up faster than the consumer loop drains it, the queue grows quite fast and the disk starts working.
    The test database table that I add data to is created by a simple:
    create table test(aa int, bb char(15));
    ...I'm sure that this can be improved in some way.
    I always open and close the connection to the database outside the loop. However, it still takes some 40-50 ms to save the data to the database table - so, unfortunately, no progress so far. I currently just want to save the data.
    Any more advice will be gratefully accepted.
    Regards,
    Mattias
    Message Edited by mattias@hv on 10-23-2007 07:50 AM
    Attachments:
    front panel 2.JPG ‏101 KB
    block diagram.JPG ‏135 KB
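    One approach that often helps here is batching: instead of one INSERT per loop iteration, let the consumer loop drain several queued readings and write them with a single multi-row INSERT, amortizing the ODBC round trip and statement overhead. A minimal sketch against the test table from the post above (the values are placeholders):
        -- one statement, many rows:
        INSERT INTO test (aa, bb) VALUES
          (1, 'sample-001'),
          (2, 'sample-002'),
          (3, 'sample-003');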

  • Need Help to see why the performance is not good

    Hi,
    We have an application in which all processes are developed in PL/SQL, on an Oracle 9i database:
    Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    PL/SQL Release 9.2.0.6.0 - Production
    Why I created this package: the application is a production-management system for the chemical industry. I sometimes need to trace Manufacturing Order execution to explain incoherent data. Analyzing the data directly in the tables does not always give the answer, because the origin of the problem can be an execution that performs some calculation.
    In each procedure or function I use my package with PAC_LOG_ERROR.PUT_LINE(xxxxxx) to save the information. This call first buffers the information in memory. At the end of the procedure or function I perform the insert with the COMMIT by calling PAC_LOG_ERROR.LOGS, or PAC_LOG_ERROR.ERRORS in the exception handler.
    This package is called everywhere; I execute it in every routine. In the database trace log we have seen a problem with the procedure GET_PROC_NAME in the package; we have identified that it is called more than 800 times and degrades performance. The cost comes from this SELECT command:
        SELECT * INTO SOURCE_TEXT
        FROM (SELECT TEXT FROM all_source
            WHERE OWNER = SOURCE_OWNER AND
                  NAME=SOURCE_NAME AND
                  TYPE IN ('PROCEDURE','FUNCTION','PACKAGE BODY') AND
                  LINE <= SOURCE_LINE AND SUBSTR(TRIM(TEXT),1,9) IN ('PROCEDURE','FUNCTION ')
            ORDER BY LINE DESC)
        WHERE ROWNUM = 1;
    I use it to get the name of the procedure or function from which my log procedure is called. I know that I could pass the name in as a parameter, but I wanted an automatic method, so that there is no risk of another developer copy/pasting a call and forgetting to update the parameter. The log info is read by the Help Desk, and if the information is wrong it is no help at all.
    Could you please help me optimize this, or suggest a better way to do it?
    Here my package :
    create or replace
    PACKAGE PAC_LOG_ERROR AS
    -- Name         : pac_log_error.sql
    -- Author       : Calà Salvatore - 02 July 2010
    -- Description  : Basic Error and Log management.
    -- Usage notes  : To active the Log management execute this statement
    --                UPDATE PARAM_TECHNIC SET PRM_VALUE = 'Y' WHERE PRM_TYPE = 'TRC_LOG';
    --                COMMIT;
    --                To set the period in day before to delete tracability
    --                UPDATE PARAM_TECHNIC SET PRM_VALUE = 60 WHERE PRM_TYPE = 'DEL_TRC_LOG';
    --                COMMIT;
    --                To set the number in day where the ERROR is save before deleted
    --                UPDATE PARAM_TECHNIC SET PRM_VALUE = 60 WHERE PRM_TYPE = 'DEL_TRC_LOG';
    --                COMMIT;
    -- Requirements : Packages PAC_PUBLIC and OWA_UTIL
    -- Revision History
    -- --------+---------------+-------------+--------------------------------------
    -- Version |    Author     |  Date       | Comment
    -- --------+---------------+-------------+--------------------------------------
    -- 1.0.0   | S. Calà       | 02-Jul-2010 | Initial Version
    -- --------+---------------+-------------+--------------------------------------
    --         |               |             |
    -- --------+---------------+-------------+--------------------------------------
      PROCEDURE INITIALIZE;
      PROCEDURE CLEAN;
      PROCEDURE RESETS(IN_SOURCE IN VARCHAR2 DEFAULT NULL);
      PROCEDURE PUT_LINE(TXT IN VARCHAR2);
      PROCEDURE ERRORS(REF_TYPE IN VARCHAR2 DEFAULT 'SITE', REF_VALUE IN VARCHAR2 DEFAULT '99', ERR_CODE IN NUMBER DEFAULT SQLCODE, ERR_MSG IN VARCHAR2 DEFAULT SQLERRM);
      PROCEDURE LOGS(REF_TYPE IN VARCHAR2 DEFAULT 'SITE', REF_VALUE IN VARCHAR2 DEFAULT '99');
    END PAC_LOG_ERROR;
    create or replace
    PACKAGE BODY PAC_LOG_ERROR
    AS
      /* Private Constant */
      CR    CONSTANT CHAR(1)  := CHR(13);  -- Retour chariot
      LF    CONSTANT CHAR(1)  := CHR(10);  -- Saut de ligne
      CR_LF CONSTANT CHAR(2)  := LF || CR; --Saut de ligne et retour chariot
      TAB   CONSTANT PLS_INTEGER := 50;
      sDelay   CONSTANT PLS_INTEGER := 30;
      /* Private Record */
      TYPE REC_LOG IS RECORD(
        ERR_DATE TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
        ERR_TXT  VARCHAR2(4000)
      );
      /* Private Type Table */
      TYPE TAB_VALUE IS TABLE OF REC_LOG INDEX BY PLS_INTEGER;
      TYPE TAB_POINTER IS TABLE OF TAB_VALUE INDEX BY VARCHAR2(80);
      /* Private Variables Structures */
      LOG_TRC PARAM_TECHNIC.PRM_VALUE%TYPE;
      LIST_PARAM TAB_POINTER;
      /* Private Programs */
      FUNCTION GET_PROC_NAME( SOURCE_OWNER IN all_source.OWNER%TYPE
                             ,SOURCE_NAME  IN all_source.NAME%TYPE
                             ,SOURCE_LINE  IN all_source.LINE%TYPE) RETURN VARCHAR2
      AS
        SOURCE_TEXT  all_source.TEXT%TYPE;
        TYPE RECORD_TEXT IS TABLE OF all_source.TEXT%TYPE;
        RETURN_TEXT     RECORD_TEXT;
      BEGIN
        SELECT * INTO SOURCE_TEXT
        FROM (SELECT TEXT FROM all_source
            WHERE OWNER = SOURCE_OWNER AND
                  NAME=SOURCE_NAME AND
                  TYPE IN ('PROCEDURE','FUNCTION','PACKAGE BODY') AND
                  LINE <= SOURCE_LINE AND SUBSTR(TRIM(TEXT),1,9) IN ('PROCEDURE','FUNCTION ')
            ORDER BY LINE DESC)
        WHERE ROWNUM = 1;
        IF SOURCE_TEXT IS NOT NULL OR  SOURCE_TEXT != '' THEN
          SOURCE_TEXT := TRIM(SUBSTR(SOURCE_TEXT,1,INSTR(SOURCE_TEXT,'(')-1));     
          SOURCE_TEXT := LTRIM(LTRIM(TRIM(SOURCE_TEXT),'PROCEDURE'),'FUNCTION');
          SOURCE_TEXT := SOURCE_NAME||'.'|| TRIM(SOURCE_TEXT);
        ELSE
          SOURCE_TEXT := 'ANONYMOUS BLOCK';
        END IF;
        RETURN SOURCE_TEXT;
      END GET_PROC_NAME;
      PROCEDURE SELECT_MASTER(REF_TYPE IN VARCHAR2, PARAM_VALUE IN VARCHAR2, SITE OUT VARCHAR2, REF_MASTER OUT VARCHAR2)
      AS
      BEGIN
          REF_MASTER := '';
          SITE := '99';
          CASE UPPER(REF_TYPE)
            WHEN 'PO' THEN -- Process Order
              SELECT SITE_CODE INTO SITE FROM PO_PROCESS_ORDER WHERE PO_NUMBER = PARAM_VALUE;
            WHEN 'SO' THEN -- Shop Order
              SELECT P.SITE_CODE,P.PO_NUMBER INTO SITE,REF_MASTER FROM SO_SHOP_ORDER S
              INNER JOIN PO_PROCESS_ORDER P ON P.PO_NUMBER = S.PO_NUMBER
              WHERE S.NUMOF = PARAM_VALUE;
            WHEN 'SM' THEN -- Submixing
              SELECT SITE_CODE,NUMOF INTO SITE,REF_MASTER FROM SO_SUBMIXING WHERE IDSM = PARAM_VALUE;
            WHEN 'IDSM' THEN -- Submixing
              SELECT SITE_CODE,NUMOF INTO SITE,REF_MASTER FROM SO_SUBMIXING WHERE IDSM = PARAM_VALUE;
            WHEN 'PR' THEN -- Pourring
              SELECT B.SITE_CODE,P.NUMOF INTO SITE,REF_MASTER FROM SO_POURING P
              INNER JOIN SO_SUBMIXING B ON B.IDSM=P.IDSM
              WHERE P.IDSM = PARAM_VALUE;
            WHEN 'NUMSMP' THEN -- Pourring
              SELECT SITE_CODE,NUMOF INTO SITE,REF_MASTER FROM SAMPLE WHERE NUMSMP = TO_NUMBER(PARAM_VALUE);
    --        WHEN 'MSG' THEN -- Messages
    --          SELECT SITE_CODE,PO_NUMBER INTO SITE,REF_MASTER FROM CMS_INTERFACE.MAP_ITF_PO WHERE MSG_ID = PARAM_VALUE;
            ELSE
              SITE := sys_context('usr_context', 'site_attribute');
          END CASE;
      EXCEPTION
        WHEN OTHERS THEN
          REF_MASTER := '';
          SITE := sys_context('usr_context', 'site_attribute');
      END SELECT_MASTER;
      PROCEDURE ADD_LIST_PROC
      AS
      PRAGMA AUTONOMOUS_TRANSACTION;
      BEGIN
        MERGE INTO LOG_PARAM A
        USING (SELECT OWNER, TYPE
                     ,NAME PROC
                     , CASE NAME WHEN SUBNAME THEN NULL
                                 ELSE SUBNAME
                       END SUBPROC
               FROM (
                  SELECT owner,TYPE,UPPER(NAME) NAME,UPPER(trim(substr(substr(trim(text),1,instr(trim(text),'(')-1),instr(substr(trim(text),1,instr(trim(text),'(')-1),' ')))) SUBNAME
                         FROM ALL_SOURCE where owner in ('CMS_ADM','CMS_INTERFACE')
                                             and type in ('FUNCTION','PROCEDURE','PACKAGE BODY')
                                             and (instr(substr(trim(text),1,instr(trim(upper(text)),'(')-1),'FUNCTION') = 1 or instr(substr(trim(text),1,instr(trim(upper(text)),'(')-1),'PROCEDURE')=1)
               )-- ORDER BY OWNER,PROC,SUBPROC NULLS FIRST
        ) B
        ON (A.OWNER = B.OWNER AND A.TYPE = B.TYPE AND A.PROC=B.PROC AND NVL(A.SUBPROC,' ') = NVL(B.SUBPROC,' '))
        WHEN NOT MATCHED THEN
          INSERT (OWNER,TYPE,PROC,SUBPROC) VALUES (B.OWNER,B.TYPE,B.PROC,B.SUBPROC)
        WHEN MATCHED THEN
          UPDATE SET ACTIVE = ACTIVE;
        DELETE LOG_PARAM A
        WHERE NOT EXISTS (SELECT OWNER, TYPE
                     ,NAME PROC
                     , CASE NAME WHEN SUBNAME THEN NULL
                                 ELSE SUBNAME
                       END SUBPROC
               FROM (
                  SELECT owner,TYPE,NAME,upper(trim(substr(substr(trim(text),1,instr(trim(text),'(')-1),instr(substr(trim(text),1,instr(trim(text),'(')-1),' ')))) SUBNAME
                         FROM ALL_SOURCE where owner in ('CMS_ADM','CMS_INTERFACE')
                                             and type in ('FUNCTION','PROCEDURE','PACKAGE BODY')
                                             and (instr(substr(trim(text),1,instr(trim(text),'(')-1),'FUNCTION') = 1 or instr(substr(trim(text),1,instr(trim(text),'(')-1),'PROCEDURE')=1)
               ) WHERE A.OWNER = OWNER AND A.TYPE = TYPE AND A.PROC=PROC AND NVL(A.SUBPROC,' ') = NVL(SUBPROC,' '));
        COMMIT;
      EXCEPTION
        WHEN OTHERS THEN
          NULL;
      END ADD_LIST_PROC;
      PROCEDURE INITIALIZE
      AS
      BEGIN
        LIST_PARAM.DELETE;
        CLEAN;
    --    ADD_LIST_PROC;
      EXCEPTION
        WHEN OTHERS THEN
          NULL;
      END INITIALIZE;
      PROCEDURE CLEAN
      AS
        PRAGMA AUTONOMOUS_TRANSACTION;
        dtTrcLog DATE;
        dtTrcErr DATE;
      BEGIN
        BEGIN
          SELECT dbdate-NUMTODSINTERVAL(to_number(PRM_VALUE),'DAY') INTO dtTrcLog
          FROM PARAM_TECHNIC WHERE PRM_TYPE = 'DEL_TRC_LOG';
        EXCEPTION
          WHEN OTHERS THEN
            dtTrcLog := dbdate -NUMTODSINTERVAL(sDelay,'DAY');
        END;
        BEGIN
          SELECT dbdate-NUMTODSINTERVAL(to_number(PRM_VALUE),'DAY') INTO dtTrcErr
          FROM PARAM_TECHNIC WHERE PRM_TYPE = 'DEL_TRC_ERR';
        EXCEPTION
          WHEN OTHERS THEN
            dtTrcErr := dbdate -NUMTODSINTERVAL(sDelay,'DAY');
          END;
        DELETE FROM ERROR_LOG WHERE ERR_TYPE ='LOG' AND ERR_DATE < dtTrcLog;
        DELETE FROM ERROR_LOG WHERE ERR_TYPE ='ERROR' AND ERR_DATE < dtTrcErr;
        COMMIT;
      EXCEPTION
        WHEN OTHERS THEN
          NULL; -- Do nothing if error occurs and catch exception
      END CLEAN;
      PROCEDURE RESETS(IN_SOURCE IN VARCHAR2 DEFAULT NULL)
      AS
        SOURCE_OWNER all_source.OWNER%TYPE;
        SOURCE_NAME      all_source.NAME%TYPE;
        SOURCE_LINE      all_source.LINE%TYPE;
        SOURCE_TEXT  all_source.TEXT%TYPE;
        SOURCE_PROC  VARCHAR2(32727);
      BEGIN
        OWA_UTIL.WHO_CALLED_ME(owner    => SOURCE_OWNER,
                               name     => SOURCE_NAME,
                               lineno   => SOURCE_LINE,
                               caller_t => SOURCE_TEXT);
        IF IN_SOURCE IS NULL THEN
          SOURCE_PROC := SUBSTR(GET_PROC_NAME(SOURCE_OWNER,SOURCE_NAME,SOURCE_LINE),1,125);
        ELSE
          SOURCE_PROC := IN_SOURCE;
        END IF;
        LIST_PARAM.DELETE(SOURCE_PROC);
      EXCEPTION
        WHEN OTHERS THEN
          NULL;
      END RESETS;
      PROCEDURE PUT_LINE(TXT IN VARCHAR2)
      AS
        PRAGMA AUTONOMOUS_TRANSACTION;
        SOURCE_OWNER     all_source.OWNER%TYPE;
        SOURCE_NAME     all_source.NAME%TYPE;
        SOURCE_LINE     all_source.LINE%TYPE;
        SOURCE_TEXT all_source.TEXT%TYPE;
        SOURCE_PROC VARCHAR2(128); 
      BEGIN
        IF TXT IS NULL OR TXT = '' THEN
          RETURN;
        END IF;
        OWA_UTIL.WHO_CALLED_ME(owner    => SOURCE_OWNER,
                               name     => SOURCE_NAME,
                               lineno   => SOURCE_LINE,
                               caller_t => SOURCE_TEXT);
        SOURCE_PROC := GET_PROC_NAME(SOURCE_OWNER,SOURCE_NAME,SOURCE_LINE);
        IF LIST_PARAM.EXISTS(SOURCE_PROC) THEN
          LIST_PARAM(SOURCE_PROC)(LIST_PARAM(SOURCE_PROC).COUNT+1).ERR_TXT := TXT;
        ELSE 
          LIST_PARAM(SOURCE_PROC)(1).ERR_TXT := TXT;
        END IF;
      EXCEPTION
        WHEN OTHERS THEN
          NULL;   
      END PUT_LINE;
      PROCEDURE LOGS(REF_TYPE IN VARCHAR2 DEFAULT 'SITE', REF_VALUE IN VARCHAR2 DEFAULT '99')
      AS
        PRAGMA AUTONOMOUS_TRANSACTION;
        MASTER_VALUE ERROR_LOG.ERR_MASTER%TYPE;
        SITE PARAMTAB.SITE_CODE%TYPE;
        SOURCE_OWNER     all_source.OWNER%TYPE;
        SOURCE_NAME     all_source.NAME%TYPE;
        SOURCE_LINE     all_source.LINE%TYPE;
        SOURCE_TEXT all_source.TEXT%TYPE;
        SOURCE_PROC VARCHAR2(128);
        ERR_KEY NUMBER;
      BEGIN
    --    NULL;
        OWA_UTIL.WHO_CALLED_ME(owner    => SOURCE_OWNER,
                               name     => SOURCE_NAME,
                               lineno   => SOURCE_LINE,
                               caller_t => SOURCE_TEXT);
        SOURCE_PROC := SUBSTR(GET_PROC_NAME(SOURCE_OWNER,SOURCE_NAME,SOURCE_LINE),1,128);
        LIST_PARAM.DELETE(SOURCE_PROC);
    --    SELECT NVL(MAX(ACTIVE),'N') INTO LOG_TRC FROM LOG_PARAM WHERE TRIM(UPPER((PROC||'.'||SUBPROC))) = TRIM(UPPER(SOURCE_PROC))
    --                                      AND OWNER =SOURCE_OWNER AND TYPE = SOURCE_TEXT ;
    --    IF LOG_TRC = 'N' THEN
    --      LIST_PARAM.DELETE(SOURCE_PROC);
    --      RETURN;
    --    END IF;   
    --    SELECT_MASTER(REF_TYPE => UPPER(REF_TYPE), PARAM_VALUE => REF_VALUE, SITE => SITE, REF_MASTER => MASTER_VALUE);
    --    ERR_KEY := TO_CHAR(LOCALTIMESTAMP,'YYYYMMDDHH24MISSFF6');
    --    FOR AIX IN 1..LIST_PARAM(SOURCE_PROC).COUNT LOOP
    --      INSERT INTO ERROR_LOG (ERR_KEY, ERR_SITE,ERR_SLAVE  ,ERR_MASTER  ,ERR_TYPE ,ERR_PROC,ERR_DATE,ERR_TXT)
    --      VALUES (ERR_KEY,SITE,REF_VALUE,MASTER_VALUE,'LOG',SOURCE_PROC,LIST_PARAM(SOURCE_PROC)(AIX).ERR_DATE ,LIST_PARAM(SOURCE_PROC)(AIX).ERR_TXT);
    --    END LOOP; 
    --    UPDATE SESSION_CONTEXT SET SCX_ERR_KEY = ERR_KEY WHERE SCX_ID = SYS_CONTEXT('USERENV','SESSIONID');
    --    LIST_PARAM.DELETE(SOURCE_PROC);
    --    COMMIT;
      EXCEPTION
        WHEN OTHERS THEN
          LIST_PARAM.DELETE(SOURCE_PROC);
      END LOGS;
      PROCEDURE ERRORS(REF_TYPE IN VARCHAR2 DEFAULT 'SITE', REF_VALUE IN VARCHAR2 DEFAULT '99', ERR_CODE IN NUMBER DEFAULT SQLCODE, ERR_MSG IN VARCHAR2 DEFAULT SQLERRM)
      AS
        PRAGMA AUTONOMOUS_TRANSACTION;
        MASTER_VALUE ERROR_LOG.ERR_MASTER%TYPE;
        SITE         PARAMTAB.SITE_CODE%TYPE;
        SOURCE_OWNER all_source.OWNER%TYPE;
        SOURCE_NAME      all_source.NAME%TYPE;
        SOURCE_LINE      all_source.LINE%TYPE;
        SOURCE_TEXT  all_source.TEXT%TYPE;
        SOURCE_PROC  VARCHAR2(4000);
        ERR_KEY NUMBER := TO_CHAR(LOCALTIMESTAMP,'YYYYMMDDHH24MISSFF6');
      BEGIN
        OWA_UTIL.WHO_CALLED_ME(owner    => SOURCE_OWNER,
                               name     => SOURCE_NAME,
                               lineno   => SOURCE_LINE,
                               caller_t => SOURCE_TEXT);
        SOURCE_PROC := SUBSTR(GET_PROC_NAME(SOURCE_OWNER,SOURCE_NAME,SOURCE_LINE),1,125);
        SELECT_MASTER(REF_TYPE => UPPER(REF_TYPE), PARAM_VALUE => REF_VALUE, SITE => SITE, REF_MASTER => MASTER_VALUE);
       IF LIST_PARAM.EXISTS(SOURCE_PROC) THEN
          FOR AIX IN 1..LIST_PARAM(SOURCE_PROC).COUNT LOOP
            INSERT INTO ERROR_LOG (ERR_KEY,ERR_SITE,ERR_SLAVE,ERR_MASTER,ERR_PROC,ERR_DATE,ERR_TXT,ERR_CODE,ERR_MSG)
            VALUES (ERR_KEY,SITE,REF_VALUE,MASTER_VALUE,SOURCE_PROC,LIST_PARAM(SOURCE_PROC)(AIX).ERR_DATE, LIST_PARAM(SOURCE_PROC)(AIX).ERR_TXT,ERR_CODE,ERR_MSG);
          END LOOP; 
         LIST_PARAM.DELETE(SOURCE_PROC);
        ELSE
          INSERT INTO ERROR_LOG (ERR_KEY,ERR_SITE,ERR_SLAVE,ERR_MASTER,ERR_PROC,ERR_DATE,ERR_TXT,ERR_CODE,ERR_MSG)
          VALUES (ERR_KEY,SITE,REF_VALUE,MASTER_VALUE,SOURCE_PROC,CURRENT_TIMESTAMP,'Error info',ERR_CODE,ERR_MSG);
        END IF;
        UPDATE SESSION_CONTEXT SET SCX_ERR_KEY = ERR_KEY WHERE SCX_ID = sys_context('usr_context', 'session_id');
        COMMIT;
      EXCEPTION
        WHEN OTHERS THEN
          LIST_PARAM.DELETE(SOURCE_PROC);
      END ERRORS;
    END PAC_LOG_ERROR;

    This package is called everywhere; I execute it in every routine. In the database trace log we have seen a problem with the procedure GET_PROC_NAME in the package; we have identified that it is called more than 800 times and degrades performance. The cost comes from this SELECT command:
        SELECT * INTO SOURCE_TEXT
        FROM (SELECT TEXT FROM all_source
            WHERE OWNER = SOURCE_OWNER AND
                  NAME=SOURCE_NAME AND
                  TYPE IN ('PROCEDURE','FUNCTION','PACKAGE BODY') AND
                  LINE <= SOURCE_LINE AND SUBSTR(TRIM(TEXT),1,9) IN ('PROCEDURE','FUNCTION ')
            ORDER BY LINE DESC)
        WHERE ROWNUM = 1;
    Complex SQL like inline views and views of views can overwhelm the cost-based optimizer, resulting in bad execution plans. Start by getting an execution plan of your problem query to see if it is inefficient - look for full table scans in particular. You might get better performance by eliminating the IN and merging the results of 3 queries with a UNION.
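    As a starting point, both suggestions can be sketched in plain SQL. EXPLAIN PLAN and DBMS_XPLAN are standard Oracle features; the UNION ALL rewrite is only an untested illustration of splitting the IN list, not a drop-in replacement:
        EXPLAIN PLAN FOR
        SELECT * FROM (
            SELECT TEXT FROM all_source
            WHERE OWNER = :src_owner AND NAME = :src_name
              AND TYPE = 'PACKAGE BODY'  -- one branch per former IN value
              AND LINE <= :src_line
              AND SUBSTR(TRIM(TEXT),1,9) IN ('PROCEDURE','FUNCTION ')
            UNION ALL
            SELECT TEXT FROM all_source
            WHERE OWNER = :src_owner AND NAME = :src_name
              AND TYPE = 'PROCEDURE'
              AND LINE <= :src_line
              AND SUBSTR(TRIM(TEXT),1,9) IN ('PROCEDURE','FUNCTION ')
            UNION ALL
            SELECT TEXT FROM all_source
            WHERE OWNER = :src_owner AND NAME = :src_name
              AND TYPE = 'FUNCTION'
              AND LINE <= :src_line
              AND SUBSTR(TRIM(TEXT),1,9) IN ('PROCEDURE','FUNCTION ')
            ORDER BY LINE DESC
        )
        WHERE ROWNUM = 1;

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);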

  • Need Help with site performance

    Looking for help...
    In particular, we would like help from experts in SSL, browser experts (how browsers handle encryption and decryption), iPlanet experts, Sun crypto card experts, and web design-for-performance experts.
    Our website is hosted on a Sun Enterprise 450 server running Solaris 7. The machine is hosted at Exodus. These are the software servers that perform the core functions of the website:
    iPlanet Web Server v. 4.1 (Java server is enabled)
    IBM DB2 v. 7.1
    SAA uses SmartSite, a proprietary system developed by Adaptations (www.adaptations.com). At the level of individual HTML pages, SmartSite uses proprietary markup tags and Tcl code embedded in HTML comments to publish content stored in a database. SmartSite allows for control over when, how and to whom content appears. It is implemented as a Java servlet which stores its data on the DB2 server and uses a Tcl-like scripting language (Jacl, originally developed by Sun).
    CHALLENGE:
    In late June this year we launched a redesigned website with SSL enabled on all pages (a departure from the previous practice of maintaining most of the site on a non-secure server and only some pages on an SSL server). We also introduced a new website design with greater use of images, nested tables and JavaScript.
    We have found that the introduction of the "secure everywhere" policy has had a detrimental effect on the website user experience, due to decreased web server and web browser performance. In other words, the site got slower. Specifically, we have identified the following problems:
    1. Web server performance degradation. Due to unidentified increases in web server resource demand caused (probably) by the global usage of SSL, the web server experienced instability. This was resolved by increasing the amount of operating system (OS) resources available to the server.
    2. Web browser performance degradation. Several categories are noted:
    2.1. Page load and rendering. Page load and rendering time have increased dramatically on the new site, particularly in the case of Netscape Navigator. Some of this may be attributed to the usage of SSL. In particular, the rendering time of complex tables and images may be markedly slower on slower client machines.
    2.2. Non-caching of content. Web browsers should not cache any content derived from https on the local hard disk. The amount of RAM caching ability varies from browser to browser, and machine to machine, but is generally much less than for disk caching. In addition, some browsers may not cache content in RAM cache at all. The overall effect of reduced caching is increased accesses to the web server to retrieve content. This degrades server performance, as it serves more content, and also web browser performance, as it spends more time waiting for page content before and while rendering it.
    Things that have been attempted to improve performance:
    1) Reducing JavaScript redundancy (less compiling time required)
    2) Optimizing HTML code (taking out nested tables, hard-coding specs where possible to reduce compiling time)
    3) Optimizing page content assembly (reducing routine redundancy, enabling things to be compiled ahead of time)
    4) Installing an encryption card (to speed the page encryption rate) - it was removed as it did not improve performance, and seemed to have degraded it

    Fred Martinez wrote:
    > Looking for help... In particular we would like help from experts in SSL, browser experts (how browsers handle encryption and decryption), iPlanet experts, Sun crypto card experts, and web design-for-performance experts. Our website is hosted on a Sun Enterprise 450 server running Solaris 7. [...] SAA uses SmartSite, a proprietary system developed by Adaptations (www.adaptations.com).
    Since I don't see iPlanet's application server in the mix here, this (a newsgroup for performance questions for iAS) is not the newsgroup to ask in.
    Kent

  • QC View vs. QC Renderer performance?

    I'm very new at this - I've made a small Cocoa app with a Quartz Composition to display a live feed from a FireWire-connected DV camera. I used a QCView to add the composition to a window and have a QCPatchController to allow the user to change a few published settings.
    With standard DV input the performance is fine, but the app does use about 30-40% of the processors on a Dual G5 2.0. However, I am also trying to make the app work with DVCPRO HD input, which is a more processor-intensive codec, and the performance there is unusable.
    Reading on a bit further, it seems that what I should do now is use QCRenderer to play the composition. Does this allow greater performance with the overlay, as it uses an OpenGL context?
    What's not entirely clear to me is what advantage I will get by using the QCRenderer - obviously it involves a lot more actual code writing, but the Apple docs have gotten me this far!
    Alternatively, perhaps I should be using QTKit instead of a Quartz Composition - the main features must be: live preview of video input, real-time 180-degree rotation, and real-time desaturation.
    Any insights much appreciated.
    Mark
    P.S - Here is the current app:
    http://homepage.mac.com/mark.burton/app/FlipFlop.zip


  • Slow performance on a iMac 27" (maybe originated by the HDD)

    Hello,
    Well, I've had my iMac for 10 months now, but have never really paid attention to its performance since it was my first Mac and I was upgrading from a quite old computer, so it obviously proved to be a far superior machine.
    However, I noticed the problem when I started to be more critical with my machine's performance (this triggered by the fact that I bought, some months after, a MacBook Air which flies beyond time and space). Curious about this, I requested a friend to do a benchmark test with Xbench resulting in his machine (another iMac 27” with comparable specs) clearly outperforming mine. Even the Air, with its slower processor and smaller RAM, has a relatively close performance.
    Additionally, I have started to notice how long the computer takes to start up. After the desktop is laid out, it takes a lot of time before the machine becomes responsive (I usually try to start Mail and Google Chrome as soon as the pointer appears!), with the final result that I have to wait in the company of the beach ball for quite some seconds (half a minute), and even after the applications have started I can still hear the HDD working as if there were no tomorrow! And then it is slooooow. Funnily enough, “tougher” applications like Photoshop or Illustrator don't seem to have as many problems, relatively speaking.
    A little background info: I have two partitions, one of them for Boot Camp (since I was migrating from Windows, I had some software that I wanted to use there: Matlab and Office, basically, and then some smaller Windows-only programs).
    Derived from the benchmark test, I think I might have isolated the problem to the HDD (not sure though!).
    SO THE QUESTIONS ARE:
    1. How can I verify that the slow performance is due to the HDD being defective/badly configured/etc.?
    2. Has the Windows Bootcamp Partition any effect on my machine’s performance under MacOS? (I would assume not, but you never know!). I have more than 1.3 TB free space left
    3. What kind of corrective measures should I take?
    4. Any other type of advice will be warmly welcome: I will try some PRAM and SMC resets, but I am not counting on them. I have also run the disk utilities and there were no permission errors (not so sure what this means, but I've read that it is supposed to be good news).
    5. Any extra info you might need, lemme know!
    Greetz,
    Víctor.
    PS: Sorry for the loooong post. I tried to be thorough!

    1. How can I verify that the slow performance is due to the HDD being defective/badly configured/etc.?
    Run Apple Hardware Test: http://support.apple.com/kb/HT1509
    2. Has the Windows Bootcamp Partition any effect on my machine’s performance under MacOS? (I would assume not, but you never know!). I have more than 1.3 TB free space left
    Probably not, but note that OS X needs a minimum of 10-15% of its volume free. If you mean 1.3TB of free space on the Windows partition, I'd recommend posting that question in the Boot Camp forum.
    3. What kind of corrective measures should I take?
    If the drive is failing per the AHT, then it's covered by warranty and will need to be replaced. Remember to back up!
    4. Any other type of advice will be warmly welcome: I will try to do some PRAM and SMC resets but I am not counting on it. I had also ran the disk utilities and there were no permission errors (not so sure what this means, but I’ve read that it is supposed to be good news.)
    It can't hurt; make sure you read the instructions carefully and execute the tests exactly as described for Intel-based iMacs.
    5. Any extra info you might need, lemme know!
    If you no longer have a need for Windows on your iMac I'd recommend removing that partition using Boot Camp Assistant. That may correct your problems.
    I would also recommend checking System Preferences - Startup Disk to see if your internal HD is highlighted as the Startup Disk.
    In addition I would recommend checking your Login Items (System Preferences - Accounts - Login Items) and delete any applications you don't need launching at Login.

  • Performance Degradation on PowerBook G4-133 (1.2GB RAM, Tiger)

    It just runs slower and slower until I finally have to reboot - maybe memory leaks?
    I have recently tried to breathe some new life into my somewhat neglected PB-G4 133 by giving it an additional 1GB of RAM and a new 120GB Hitachi HD. I'm using Tiger since it came pre-installed on the Hitachi when I had that drive installed, but previously the G4 had Panther. I also have Ubuntu Linux 8.05 installed on an external 250GB FireLite. Ubuntu seems to run faster than Tiger, but it runs hot and the fan goes full blast, so I am a little worried about overheating and limit the uptime. This is a nice compact combination to take on the road. The performance is OK when I first boot up, but the more applications I fire up (iPhoto, Lightroom, Camino, Adium), the slower things get. Shutting down applications brings a marginal improvement in the overall speed and responsiveness of the desktop. I have come into the Apple fold from the Linux world, so I'm using tools like vm_stat, top, and iostat, whose cousins exist in Linux, to track things down.
    I also have an iBook G4 (1GB RAM) which runs openSUSE 10. This machine can run for 6 months or more at a time with no performance degradation. My first-generation MacBook (1GB RAM and Tiger) runs faster than the PB, but it's just enough heavier that it usually gets left at home in favor of the smaller PowerBook.
    I'm wondering if my PowerBook apps have some memory leaks that eat up the free memory, never to be released - or maybe there is something that I'm overlooking?
    Thanks in advance for reading this long post and for any ideas you folks may have.

    Welcome back to Apple Discussions!
    If this is a 15" or 17" Powerbook, it might need its clock battery replaced.
    If this is a 12" Powerbook it might need its power manager reset.
    If the clock battery has been replaced in the last 4 years, then its PRAM might need resetting.
    Note, the overheating you might be experiencing while running Linux might be damaging some chips, and/or causing the Energy Saver settings to force the processor to go at a lower speed to keep the processor cool. In Mac OS X that's Apple menu -> System Preferences -> Energy Saver.
    In Linux, I don't know what that would be, but pmset is the command used for it in Mac OS X's command line.
    Camino and Adium depend on internet speeds. Try using OpenDNS for your DNS numbers; I find that's usually faster than the DNS servers supplied by the internet service provider. Changing the MTU in your network settings can also speed up your internet, and toggling the IPv6 setting on/off may affect it as well.
    If you are using numerous Dashboard widgets, some of them may not be compatible with your version of Mac OS X and those can slow internet speeds down too. iStat Pro has been known to have trouble with that too. If none of those appears to be at issue, are you seeing issues with these titles in WiFi hotspots, or just at home?
    iPhoto has improved performance with each new release, and an oversized photo library can slow down earlier versions of iPhoto. I don't know how Lightroom works, but it may be the same issue.
    Make sure your hard disk is not in excess of 85% full. This arbitrary number has been found to slow Mac OS X down.
    If you still can't explain the slowdown, tell us once you've backed up your data at least twice, and tell us what you've tried so far.

  • Performance impact related to workbook

    Hi All,
    Please let me know the performance impact on a workbook if I insert 13 worksheets.
    How much performance impact will there be?
    And if I instead split all the reports across 2 worksheets,
    how much performance impact will there be?
    Please let me know which way is better.
    Urgent!...
    Thanks,
    Sarau

    Hello Sarau,
    The performance is purely based on the query selection, since that decides the number of records to be fetched.
    The more you filter the query, the better the performance will be.
    Since you have many queries in one workbook, make sure that they all have a common variable screen.
    Thanks
    Chandran

  • Adding more than 2GB RAM to MacBook Pro

    Do you guys know of any way to put more than 2GB of RAM in an Intel 17" MacBook Pro?
    Model Name: MacBook Pro 17"
    Model Identifier: MacBookPro1,2
    Processor Name: Intel Core Duo
    Processor Speed: 2.16 GHz
    Number Of Processors: 1
    Total Number Of Cores: 2
    L2 Cache (per processor): 2 MB
    Memory: 2 GB
    Bus Speed: 667 MHz
    Boot ROM Version: MBP12.0061.B03
    SMC Version: 1.5f10
    I bought this when the 17" models first came out, and have been disappointed in it ever since (I broke my rule of NOT buying the first generation of something). As time has passed, the main problem is the 2GB limit for RAM. That's NOWHERE near enough, and shame on Apple for that limitation. If there were some way, even a non-standard way, of adding more RAM, I could get reasonable performance from this computer. As it is, this has been a wasted $3600 investment. Whew!
    Any help or ideas would be appreciated. Thanks! Feel free to email bruce at dbsdesigngroup.com

    That's possible... actually likely. I'd love for that to be the case, because then I could fix it. ::Grin:: It's been this way since I first got it. When it arrived with just 1GB of RAM, it was virtually unusable. I have a few apps as startup items, but they're just Firefox, Entourage, iNotepad, Linotype Font Explorer (with very few fonts activated), and Stickies. That shouldn't affect things much.
    My needs are to use Photoshop, Illustrator and GoLive at the same time, with MS Word, Firefox and Entourage open as well. After the first bit of usage, I end up with 4-6 pageouts and a VERY slow computer. While it's doing the pageouts (which take quite a while), or switching between apps, it's very slow. And it's also extremely slow when Spotlight is indexing, so I disabled that.
    I'm just not sure what it might be...nothing in Activity Monitor seems like it would be a culprit. Even in Safe Mode the problem is very evident.
    I'll keep exploring and hopefully find something that will make this machine more viable for my use. I really appreciate the input greatly.
