How can I improve Adobe CS5 performance?

I've got a trial of Adobe Creative Suite 5 on a PC at work. So far it's OK, but it's really lagging. The PC currently has an Intel Core 2 Quad Q9400 @ 2.66 GHz, 4 GB of DDR2 RAM, a 160 GB hard drive and integrated Intel graphics.
Will adding a dedicated graphics card improve CS5's performance? I have read on forums that Mac users use a second hard drive as a 'scratch disk' for the likes of Photoshop, which improves performance, but I'm not sure how to implement that.
Any ideas would be great.

The integrated graphics may well be eating into your system RAM. Additionally, you didn't state your operating system, but I recommend Win 7 64-bit with a boatload of RAM.
Fast hard drives with a dedicated scratch disk will also help. (In Photoshop, scratch disks are assigned under Edit > Preferences > Performance, so a second drive can be pointed to there.)
Bob

Similar Messages

  • How can I improve my imac performance?

    Hi guys,
    When I'm using complex files in Adobe Illustrator CS4, it crashes...
    How can I improve my imac performance?
    Here's my system's specs:
      Model Name:    iMac
      Model Identifier:    iMac11,1
      Processor Name:    Intel Core i5
      Processor Speed:    2.66 GHz
      Number Of Processors:    1
      Total Number Of Cores:    4
      L2 Cache (per core):    256 KB
      L3 Cache:    8 MB
      Memory:    4 GB
      Processor Interconnect Speed:    4.8 GT/s
      Boot ROM Version:    IM111.0034.B02
      SMC Version (system):    1.54f36
    Thanks for your help!

    I figured you'd ask for those. Since I can't remember the topic/subject lines, I have no idea if I have enough time in this life to try to find them - I had subscriptions for all of them, but of course, those went poof into cyberspace.
    Edit: The search function does not work well (I could not specify 'iMac' - I would get "unauthorized to search" every time), so I had to go through quite a few, but I found two of the major threads:
    https://discussions.apple.com/message/12417693#12417693
    https://discussions.apple.com/message/12807961#12807961
    Enjoy reading....
    Barbara

  • How Can we improve the report performance..?

    Hi experts,
    I am learning Business Objects XI R2. Please let me know how we can improve report performance.
    Please give the answer in a detailed way.

    First find out why your report is performing slowly. Then fix it.
    That sounds silly, but there's really no single-path process for improving report performance. You might find issues with the report. With the network. With the universe. With the database. With the database design. With the query definition. With report variables. With the ETL. Once you figure out where the problem is, then you start fixing it. Fixing one problem may very well reveal another. I spent two years working on a project where we touched every single aspect of reporting (from data collection through ETL and all the way to report delivery) at some point or another.
    I feel like your question is a bit broad (meaning too generic) to address as you have phrased it. Even some of the suggestions already given...
    Array fetch size - this determines the number of rows fetched in a single pass. You really don't need to modify this unless your network is giving you issues. I have seen folks suggest setting this to one (which results in a lot of network requests) or 500 (which results in fewer requests, but they're much MUCH larger). Does either improve performance? They might, or they might make it worse. Without understanding how your network traffic is managed it's hard to say.
    Shortcut joins? Sure, they can help, as long as they are appropriate. [Many times they are not.|http://www.dagira.com/2010/05/27/everything-about-shortcut-joins/]
    And I could go on and on. The bottom line is that performance tuning doesn't typically fall into a "cookie cutter" approach. It would be better to have a specific question.

  • How can I improve below SQL performance.

    Hi,
    How can I improve the SQL below? It consumes CPU and causes wait events. It runs every 10 seconds. When I look at the session information in Enterprise Manager I can see "Histogram for Wait Event: PX Deq Credit: send blkd".
    I created some indexes. I heard that indexes are not used when there is a NULL, but when I checked the execution plan it uses an index.
    SELECT i.ID
    FROM EXPRESS.invoices i
    WHERE i.nbr IS NOT NULL
    AND i.EXTRACT_BATCH IS NULL
    AND i.SUB_TYPE='COD'
    Explain Plan from Toad
    SELECT STATEMENT CHOOSE Cost: 77 Bytes: 6,980 Cardinality: 349
         4 PX COORDINATOR
              3 PX SEND QC (RANDOM) SYS.:TQ10000 Cost: 77 Bytes: 6,980 Cardinality: 349
                   2 PX BLOCK ITERATOR Cost: 77 Bytes: 6,980 Cardinality: 349
                        1 INDEX FAST FULL SCAN INDEX EXPRESS.INVC_TRANS_INDX Cost: 77 Bytes: 6,980 Cardinality: 349
    Execution Plan from Sqlplus
    | Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
    | 0 | SELECT STATEMENT | | 349 | 6980 | 77 | | | |
    | 1 | PX COORDINATOR | | | | | | | |
    | 2 | PX SEND QC (RANDOM) | :TQ10000 | 349 | 6980 | 77 | Q1,00 | P->S | QC (RAND) |
    | 3 | PX BLOCK ITERATOR | | 349 | 6980 | 77 | Q1,00 | PCWC | |
    |* 4 | INDEX FAST FULL SCAN| INVC_TRANS_INDX | 349 | 6980 | 77 | Q1,00 | PCWP | |
    Predicate Information (identified by operation id):
    4 - filter("I"."NBR" IS NOT NULL AND "I"."EXTRACT_BATCH" IS NULL AND "I"."SUB_TYPE"='COD')
    Note
    - 'PLAN_TABLE' is old version
    - cpu costing is off (consider enabling it)
    Statistics
    141 recursive calls
    0 db block gets
    5568 consistent gets
    0 physical reads
    0 redo size
    319 bytes sent via SQL*Net to client
    458 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    0 rows processed
    Instance Efficiency Percentages (Target 100%)
    Buffer Nowait %: 100.00
    Redo NoWait %: 100.00
    Buffer Hit %: 99.70
    In-memory Sort %: 100.00
    Library Hit %: 99.81
    Soft Parse %: 99.77
    Execute to Parse %: 63.56
    Latch Hit %: 90.07
    Parse CPU to Parse Elapsd %: 0.81
    % Non-Parse CPU: 98.88
    Top 5 Timed Events
    Event                     | Waits     | Time(s) | Avg Wait(ms) | % Total Call Time | Wait Class
    latch: library cache      | 12,626    | 16,757  | 1,327        | 62.6              | Concurrency
    CPU time                  |           | 5,712   |              | 21.3              |
    latch: session allocation | 1,848,987 | 1,99    | 1            | 7.4               | Other
    PX Deq Credit: send blkd  | 1,242,265 | 981     | 1            | 3.7               | Other
    PX qref latch             | 1,405,819 | 726     | 1            | 2.7               | Other
    The database version is 10.2.0.1, but we haven't installed the 10.2.0.5 patch yet.
    I am waiting for your comments.
    Thanks in advance

    Welcome to the forum.
    Regarding "I created some indexes. I heard that the indexes are not used when there is a NULL but when I checked the execution plan it uses an index" - what columns are indexed?
    And what do:
    select i.sub_type
    ,      count(*)
    from   express.invoices i
    where  i.nbr is not null
    and    i.extract_batch is null
    group by i.sub_type;
    and
    select i.sub_type
    ,      count(*)
    from   express.invoices i
    group by i.sub_type;
    return?
    Also, try using the {noformat}{noformat} tags when posting examples/execution plans etc.
    See: HOW TO: Post a SQL statement tuning request - template posting for more tuning instructions.
    It'll make a big difference.
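    On the NULL point specifically: a single-column index does skip rows where that column is NULL, but a composite index whose leading column is populated for the rows you want still stores them. A minimal sketch, assuming SUB_TYPE is never NULL for these rows (the index name is only illustrative, not a recommendation for this table):
    CREATE INDEX express.invc_cod_extract_ix
        ON express.invoices (sub_type, extract_batch, nbr, id);
    -- Because sub_type is populated, entries with extract_batch IS NULL are still
    -- present in the index, so the query's filter (and the ID it selects) can be
    -- satisfied from the index without visiting the table.
    Whether that actually helps depends on the data distribution, which is why the two counts above matter.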

  • How can I improve my FMS performance?

    Hi...
    I have two FMS 3.5 streaming servers... Both have Windows Server 2003 x64 with 16 GB of RAM...
    I'm streaming from a common storage and my Application.xml file is like this:
    <Application>
      <StreamManager>
        <VirtualDirectory>
          <!-- Specifies application specific virtual directory mapping for recorded streams.   -->
          <Streams>/;L:\media</Streams>
        </VirtualDirectory>
      </StreamManager>
      <DisallowedProtocols>rtmp,rtmps,rtmpt</DisallowedProtocols>
      <!-- Settings specific to runtime script engine memory -->
      <JSEngine>
        <!-- This specifies the max size (Kb.) the runtime can grow to before -->
        <!-- garbage collection is performed.                                 -->
        <RuntimeSize>20480</RuntimeSize>
      </JSEngine>
      <Client>
        <Bandwidth>
          <!-- Specified in bytes/sec -->
          <ServerToClient>2500000</ServerToClient>
          <!-- Specified in bytes/sec -->
          <ClientToServer>2500000</ClientToServer>
        </Bandwidth>
        <MsgQueue>
          <Live>
            <!-- Drop live audio if audio q exceeds time specified. time in milliseconds -->
            <MaxAudioLatency>2000</MaxAudioLatency>
            <!-- Default buffer length in millisecond for live audio and video queue. -->
            <MinBufferTime>2000</MinBufferTime>
          </Live>
          <Recorded>
            <!-- Default buffer length in millisecond for live audio and video, value cannot be set below this by Flash player. -->
            <MinBufferTime>2000</MinBufferTime>
          </Recorded>
          <Server>
            <!-- Ratio of the buffer length used by server side stream -->
            <!-- to live buffer.  The value is between 0 and 1.  To    -->
            <!-- avoid break up of audio, the ratio should not be more -->
            <!-- than 0.5 of the live buffer.                          -->
            <BufferRatio>0.5</BufferRatio>
          </Server>
        </MsgQueue>
         <!--OVERRIDE APPLICATION LEVEL-->
         <!-- Specifies the RTMP chunk size to use in all streams for this     -->
         <!-- application.  Stream content breaks into chunks of this size     -->
         <!-- in bytes.  Larger values reduce CPU usage, but also commit to     -->
         <!-- larger writes that can delay other content on lower bandwidth     -->
         <!-- connections.  This can have a minimum value of 128 (bytes) and     -->
         <!-- a maximum value of 65536 (bytes) with a default of 4096 bytes     -->
     <!-- Note that older clients may not support chunk sizes larger than     -->
         <!-- 1024 bytes. If the chunk setting is larger than these clients can     -->
         <!-- support, the chunk setting will be capped at 1024 bytes.          -->
         <OutChunkSize>3072</OutChunkSize>
         <!--OVERRIDE APPLICATION LEVEL-->
         <!-- An application can be configured to deliver aggregate messages to       -->
         <!-- clients that support them by setting the "enabled" attribute to "true". -->
         <!-- The server will attempt to send aggregate messages to these supported   -->
         <!-- clients based whenever possible.                                        -->
         <!-- When this setting is disabled, aggregate messages will always be broken -->
         <!-- up into individual messages before being delivered to clients.          -->
         <!--  The default is "true".                                         -->
         <AggregateMessages enabled="false"></AggregateMessages>
         <!--OVERRIDE VHOST LEVEL-->
         <AutoCloseIdleClients enable="true">     
              <CheckInterval>60</CheckInterval>
              <MaxIdleTime>1200</MaxIdleTime>
         </AutoCloseIdleClients>
      </Client>
    </Application>
    The Server.xml and fms.ini have default settings.
    My RAM usage is always low... 1.1 GB... and my CPU usage is also low. The FMSCore.exe process reaches 630 MB and then stays at that level.
    How can I make use of my server's RAM? Do you have any tips or suggestions to improve FMS performance with some special settings?
    I tried to change this in fms.ini:
    SERVER.FLVCACHE_MAXSIZE=500
    to 1000, but the application crashes after it reaches 2 GB of RAM.
    Thanks in advance
    best,
    Marco

    Hi,
    Thanks for trying the different settings...
    When we talk of tuning for the best performance, different settings are required for different scenarios; for example, a live broadcast would require aggregation of messages (if latency is OK), but a live conferencing solution might need to disable it. So unless the use case is briefly described, it is tough to settle on any generic settings to improve performance.
    I would also like to comment on the FLVCache size. I recommend it be set to 1/4 the size of your RAM. You can safely tweak only the SERVER.MAXFLVCACHESIZE variable without worrying about the other variable (which is a percentage). This is supposed to take priority over all other settings.
    If your use case is VOD, set the cache size to 1/4 of RAM, and also tweak the video buffer settings for MP4 (if you have MP4 content as well). OutChunkSize is another variable that you may want to tweak (at the cost of CPU and latency). You can aggregate the messages.
    Also, make sure the client-side buffer settings match your FMS settings as well as your use case requirements.
    Hope it all helps.
    Thank you !
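    For what it's worth, here is a minimal fms.ini sketch of that FLVCache recommendation, assuming the value is expressed in MB (as the default of 500 suggests) and using the parameter name from the original post; on a 16 GB server, 1/4 of RAM would be about 4096 MB:
    # cache up to roughly 1/4 of the 16 GB of RAM for recorded stream data
    SERVER.FLVCACHE_MAXSIZE=4096
    Given that FMSCore crashed once it passed 2 GB earlier, any increase like this should be tested in small steps.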

  • How can i improve my system performance

    Hello Experts,
    Today I faced a problem: I got mails from users saying that my server is very slow.
    I checked:
    response time -- 985 ms
    There are no long-running jobs, and the buffers are also fine.
    In my system everything looks fine.
    Now I want to know how to check how much network utilization my SAP server requires.
    My doubt is about network utilization... I need help in this area only.
    Regards

    Is the DB on a separate server?  I ask because even a fairly slow public network (i.e. for SAPGUI) is unlikely to be a cause of major performance problems.  Note 'unlikely', not 'impossible'. If the DB is separated from CI and/or DI and that link is 100Mb/s, things can get nasty. 
    Slow performance is really tricky to solve blind. You say your system is fine and you mention SAP-level metrics. How about other stuff?
    Paging rate
    CPU load averages
    Runaway processes at O/S level
    If Windows server, check for an antivirus program monitoring \usr\sap\.....
    Filesystem/volume access speeds
    System log messages (OS not SAP)
    Regards,
    Alan.

  • How can I reinstall Adobe CS5.5 Standard?

    I purchased Adobe CS5.5 Standard for Mac. I need to reinstall it on my computer but cannot find out how to do so. It does not appear in "My Order History," but I do have the serial number.

    Hi CCLOUG1,
    Please download the required version of your Adobe product from the link mentioned below. Please make sure that you go through the very important information before downloading the product.
    Adobe CS5.5  : http://prodesigntools.com/adobe-cs5-5-direct-download-links.html
    Cheers,
    Kartikay Sharma

  • How can I download Adobe CS5 upgrade on a new computer.

    I just bought a new computer and am trying to download the Adobe CS5 upgrade. When installing, it says I do not have a qualifying Adobe product. I have a student version of Adobe CS2 Premium, but the installer does not recognize its serial number either... I pay all this money and can't even use it...

    You are allowed two activated copies.  If you have already reached this limit you will need to deactivate one and then try again.  If your old computer crashed, it is still considered activated.  If so, you need to call customer service tomorrow, explain, and they can reactivate it.
    Hope this helps get you up and running.

  • How can I improve NFSv3 file performance with MythTV thru a NAS Drive?

    Basically, I have both my MythTV combined frontend/backend machine and a NAS Drive connected via gigabit ethernet connections.
    The NAS drive stores my recordings, shared via NFSv3 (because that's all the NAS drive came with, unfortunately).
    Now, my problem is that I can record or watch multiple *separate* HD files simultaneously, travelling back and forth from the NAS without problem.
    However, when I try to watch the same file *while it's still being recorded*, after around 20 minutes the network connection between the MythTV server and the NAS drive is full/overloaded and playback grinds to a halt (it doesn't stop the recording, just the playback).
    It used to happen much more quickly when I was only using a 100 Mb connection.
    As I've said, this doesn't happen at ALL with separate files, only when I watch the *same* file that's being recorded at the same time.
    Is there any way (bar just keeping the recordings on a hard drive within my backend) that I can stop this problem? Is it because I use NFSv3 on the NAS drive? Are there any options I can change in the /etc/exports file on the NAS drive that would help? Thanks
    If you want me to post what my NAS drive (Readynas Duo V2) has in its /etc/exports file, I will

    Post the file, but I dunno how many Archers are using NFSv3... ;/

  • How can i improve sharepoint server farm scalability

    Hi,
    How can I improve SharePoint server farm scalability? Currently 1,000 users can access my SharePoint.
    Now I am looking to improve my SharePoint user scalability (from 1,000 to 2,000 users). How can I
    do this, and how can I improve my SharePoint performance?
    Prakash

    Scale up or scale out.
    Either provide your SharePoint servers with more CPU and RAM (scaling up) or add more servers to the farm (scaling out) to allow the load to be shared between the servers.
    The exact breakdown depends on the details of your implementation.

  • How can I improve performance for BC4J/JSP-application

    Hi,
    I have developed a JSP application with master-detail views. This application has been implemented
    as an SSO-enabled web portlet in Portal. When I click on a row retrieved from the master view (Master.jsp), it takes 15 seconds
    until the associated data in the detail view is displayed in the other window (Detail.jsp). The master table has about 2,500 records and the detail table about 7,000. (In another case it takes 2 seconds for 162 master rows and 228 detail rows.)
    In Master.jsp I set rangesize = "10" to reduce data loading time.
    ======================== master.jsp =================================
    <jbo:DataSource id="dsMaster" appid="testAppId" viewobject="MaterView" rangesize="10">
    <a href="detail.jsp?RowKeyValue=<jbo:ShowValue datasource="dsMaster" dataitem="RowKey"/>">
    Here Click
    </a>
    ==================================================================
    Because all records from the master view first have to be retrieved to locate the right row, I set rangesize="-1" in detail.jsp. Consequently this leads to lower performance.
    When rangesize="20" is set instead of rangesize="-1", the performance is good, but the wanted data from the detail view is not displayed if the master row clicked on is not within this range.
    ======================== detail.jsp ======================================
    <jbo:DataSource id="dsMaster" appid="testAppId" viewobject="MaterView" rangesize="-1">
    <jbo:RefreshDataSource datasource="dsMaster">
    <jbo:Row id="msRow" datasource="dsMaster" action="Find" rowkeyparam="RowkeyValue">
    <jbo:DataSource id="dsDetail" appid="testAppId" viewobject="DetailView">
    ======================================================================
    Is my programming logic not suited to high performance?
    How can I improve the performance, if it is so?
    Many thanks for your help.
    regards,
    Yoo

    http://forums.adobe.com/thread/1369260?tstart=0

  • How can I improve performance when scopes are open?

    How can I improve performance when scopes are open?
    When Color correcting, performance severely lags, stalls, freezes!
    Nothing too complex... a simple 3-way color corrector, occasional curves filters.
    I am constantly waiting for the timeline to update as I move around... sometimes for as long as a minute or so.
    I'm on a brand new 2.7 GHz 12-core Mac Pro with 64 GB of RAM.
    Thanks in advance.
    Jay

    http://forums.adobe.com/thread/1369260?tstart=0

  • HT1651 how can i improve my macbook's performance without installing memory

    how can i improve my macbook's performance without installing memory

    More RAM and a bigger, faster hard drive will help, maybe a better graphics card also, since 10.5 uses the video hardware much harder.
    Go to the Apple icon at top left > About This Mac.
    Then click on More Info > Hardware and report this up to *but not including the Serial #*...
    Hardware Overview:
    Machine Name: Power Mac G5 Quad
    Machine Model: PowerMac11,2
    CPU Type: PowerPC G5 (1.1)
    Number Of CPUs: 4
    CPU Speed: 2.5 GHz
    L2 Cache (per CPU): 1 MB
    Memory: 10 GB
    Bus Speed: 1.25 GHz
    Boot ROM Version: 5.2.7f1
    Then click on More Info>Hardware>Graphics/Displays and report like this...
    NVIDIA GeForce 7800GT:
      Chipset Model:          GeForce 7800GT
      Type:          Display
      Bus:          PCI
      Slot:          SLOT-1
      VRAM (Total):          256 MB
      Vendor:          nVIDIA (0x10de)
      Device ID:          0x0092
      Revision ID:          0x00a1
      ROM Revision:          2152.2
      Displays:
    VGA Display:
      Resolution:          1920 x 1080 @ 60 Hz
      Depth:          32-bit Color
      Core Image:          Supported
      Main Display:          Yes
      Mirror:          Off
      Online:          Yes
      Quartz Extreme:          Supported
    Display:
      Status:          No display connected

  • How can we improve performance while selection production orders from resb

    Dear all,
    There is a performance issue in a report which compares sales orders and production orders.
    Below is the code; the problem is while reading production order data from RESB with the select statement below.
    Can anybody tell me how we can improve the performance? Should we use indexing, and if yes, how?
    *read sales order data
      SELECT vbeln posnr arktx zz_cl zz_qty
      INTO (itab-vbeln, itab-sposnr, itab-arktx, itab-zz_cl, itab-zz_qty)
      FROM vbap
      WHERE vbeln  = p_vbeln
      AND   uepos  = p_posnr.
        itab-so_qty = itab-zz_cl * itab-zz_qty / 1000.
        CONCATENATE itab-vbeln itab-sposnr
           INTO itab-document SEPARATED BY '/'.
        CLEAR total_pro.
    *read production order data
        SELECT aufnr posnr roms1 roanz
        INTO (itab-aufnr, itab-pposnr, itab-roms1, itab-roanz)
        FROM resb
        WHERE kdauf  = p_vbeln
        AND   ablad  = itab-sposnr+2.

    Himanshu,
    Put a breakpoint before these two select statements and execute in production. This way you will come to know which select statement is taking the most time to execute.
    In both select statements the where clause does not use the primary key fields.
    Coming to the point of selecting the data from VBAP, do check SAP Note 185530 and modify the select statement accordingly.
    As far as the table RESB is concerned, here also the where clause doesn't use the primary key fields. Do check SAP Note 187906.
    I guess not using primary keys is marring the performance.
    K.Kiran.

  • How can I Improve the Performance using Global Temp Tables ??

    Hi,
    Can anyone tell me how I can make use of Global Temporary Tables to improve performance?
    I have a few sample scripts.
    Say I have a view based on a complex query like:
    CREATE OR REPLACE VIEW Profile_values_view AS
    SELECT d.Profile_option_name, d.Profile_option_id, Profile_option_value,
    u.User_name, Level_id, Level_code
    FROM Profile_definitions d, Profile_values v, Profile_users u
    WHERE d.Profile_option_id = v.Profile_option_id
    AND ((Level_code = 'USER' AND Level_id = U.User_id) OR
    (Level_code = 'DEPARTMENT' AND Level_id = U.Department_id) OR
    (Level_code = 'SITE'))
    AND NOT EXISTS (SELECT 1 FROM PROFILE_VALUES P
    WHERE P.PROFILE_OPTION_ID = V.PROFILE_OPTION_ID
    AND ((Level_code = 'USER' AND
    level_id = u.User_id) OR
    (Level_code = 'DEPARTMENT' AND
    level_id = u.Department_id) OR
    (Level_code = 'SITE'))
    AND INSTR('USERDEPARTMENTSITE', v.Level_code) >
    INSTR('USERDEPARTMENTSITE', p.Level_code));
    Now I have created the Global Temporary Table as:
    CREATE GLOBAL TEMPORARY TABLE Profile_values_temp (
    Profile_option_name VARCHAR(60) NOT NULL,
    Profile_option_id NUMBER(4) NOT NULL,
    Profile_option_value VARCHAR2(20) NOT NULL,
    Level_code VARCHAR2(10) ,
    Level_id NUMBER(4) ,
    CONSTRAINT Profile_values_temp_pk
    PRIMARY KEY (Profile_option_id)
    ) ON COMMIT PRESERVE ROWS ORGANIZATION INDEX;
    Now I am inserting the records into the temp table as:
    INSERT INTO Profile_values_temp
    (Profile_option_name, Profile_option_id, Profile_option_value,
    Level_code, Level_id)
    SELECT Profile_option_name, Profile_option_id, Profile_option_value,
    Level_code, Level_id
    FROM Profile_values_view;
    COMMIT;
    Now my doubt is: when do I need to execute the insert statement?
    Say the view returns a few million records; then loading such data into the Global Temporary table takes a lot of time.
    Then what is the use of Global Temporary tables, and how can I improve performance using them?
    Raj

    Thanks for the response.
    There are 2 or 3 complex views in our database, and there will always be more than 5,000 users working on the application, which is an OLTP application. Those complex views are killing the application performance.
    What I felt was: if I create Global Temporary tables for those views, I will be able to load the roughly one third of a million records returned by the views into them and improve the application performance.
    I have created the Global Temporary tables for 2 views with the option ON COMMIT PRESERVE ROWS, but after I insert the records into the temp table and issue the commit statement, the temp table is getting cleared.
    I am really surprised by this behaviour, as I understand that with the option ON COMMIT PRESERVE ROWS the rows should be retained in the temp table; instead, it's getting cleared.
    Please suggest what to do.
    Raj
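    For reference, the behaviour described (rows vanishing at COMMIT) is what ON COMMIT DELETE ROWS does, not PRESERVE ROWS. A minimal sketch with made-up table names showing the difference:
    CREATE GLOBAL TEMPORARY TABLE gtt_preserve (id NUMBER)
        ON COMMIT PRESERVE ROWS;   -- rows survive COMMIT, disappear when the session ends
    CREATE GLOBAL TEMPORARY TABLE gtt_delete (id NUMBER)
        ON COMMIT DELETE ROWS;     -- rows are removed at every COMMIT (the default)
    INSERT INTO gtt_preserve VALUES (1);
    INSERT INTO gtt_delete VALUES (1);
    COMMIT;
    SELECT COUNT(*) FROM gtt_preserve;   -- returns 1
    SELECT COUNT(*) FROM gtt_delete;     -- returns 0
    So if rows are disappearing at COMMIT, it is worth checking how the table was actually created (USER_TABLES.DURATION shows SYS$SESSION for PRESERVE ROWS and SYS$TRANSACTION for DELETE ROWS).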
