How to improve performance

Hi,
I have an INSERT ... SELECT query, shown below, that takes a long time to execute because of the huge data volume.
Insert into compaign_pipe (x1, x2, x3, .............. etc)
select rownum, a.* from (
    select * from lead_opty      where ....
    union all
    select * from gom_fcrm       where ....
    union all
    select * from compaign_lead  where ....
    union all
    select * from claims_lead    where ....
) a;
Can you please suggest how I can improve the performance?
Or
If I instead run it as four separate statements, as shown below, will that improve the performance?
Insert into compaign_pipe(x1,x2,x3,..............etc)
select * from  lead_opty   where ....;
Insert into compaign_pipe(x1,x2,x3,..............etc)
select * from  gom_fcrm  where ....;
Insert into compaign_pipe(x1,x2,x3,..............etc)
select * from  compaign_lead where ....;
Insert into compaign_pipe(x1,x2,x3,..............etc)
select * from  claims_lead where ....;
Thanks and regards,
Ibrahim Sayyed.

> I have an INSERT ... SELECT query that takes a long time to execute because of the huge data volume.
I suggest that you don't do anything until you determine that you actually have a performance issue.
Then don't do anything until you determine what the cause of that issue is.
You are inserting into ONE table and get data from FOUR tables using FOUR queries.
The INSERT could be the issue - there might be a lot of indexes, referential constraints or triggers.
Any, or all, of the four queries could be the issue - any of them might be using poor execution plans due to lack of statistics, lack of indexes, etc.
You need a LOT more information before you start trying to 'solve' the problem.
IMHO the simplest way to start is to forget the INSERT for now and work with each of the SELECT queries one at a time.
1. Create an execution plan for one of the queries and see if you spot any possible issues.
2. Execute the query and see how long it takes to execute.
3. Tune the query as indicated by the execution plan or the SQL Tuning Advisor.
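As a sketch of steps 1 and 2, in Oracle you can generate and display a plan for one branch with EXPLAIN PLAN and DBMS_XPLAN (the table name is taken from the question; the WHERE clauses are elided there, so substitute your real predicates):

```sql
-- Generate the execution plan for one of the four SELECTs
EXPLAIN PLAN FOR
SELECT * FROM lead_opty;   -- add the real WHERE clause from your query here

-- Display the plan that was just generated
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Repeat this for each of the four queries; a full table scan or a missing index usually shows up immediately in the plan output, and timing each SELECT on its own tells you which branch dominates.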

Similar Messages

  • Performance problem in a SELECT statement: how to improve the performance

    First SELECT statement:
    SELECT    a~extno
              a~guid_lclic       " for next select
              e~ctsim
              e~ctsex
    *revised spec 3rd
              f~guid_pobj
              f~amnt_flt
              f~amcur
              f~guid_mobj
              e~srvll     "pk
              e~ctsty     "PK
              e~lgreg  "PK
      INTO TABLE gt_sagmeld
      FROM /SAPSLL/LCLIC  as a
      INNER JOIN /sapsll/tlegsv as e on e~lgreg = a~lgreg
    * revised spec 3rd
      inner join /sapsll/legcon as f on f~guid_lclic = a~guid_lclic   " for ccngn1 selection
      inner join /sapsll/corcts as g on g~guid_pobj = f~guid_pobj
                               where   a~extno in s_extno.
      sort gt_sagmeld by guid_lclic guid_pobj lgreg ctsty srvll.
      delete adjacent duplicates from gt_sagmeld comparing guid_lclic guid_pobj.
    It selects about 20 lakh (2 million) records.
    The SELECT statement below is the one taking time, since it is driven by the entries of gt_sagmeld:
    select /sapsll/corpar~guid_mobj
                /sapsll/corpar~PAFCT
                but000~bpext
                but000~partner
                /sapsll/corpar~parno
                into table gt_but001
        from    /sapsll/corpar
        INNER join but000  on  but000~partner = /sapsll/corpar~parno
        for all entries in gt_sagmeld
        where  /sapsll/corpar~guid_mobj = gt_sagmeld-guid_mobj
        and    /sapsll/corpar~PAFCT = 'SH'.
       SELECT /sapsll/cuit~guid_cuit         " PK
              /sapsll/cuit~QUANT_FLT         " to be displayed
              /sapsll/cuit~QUAUM             " to be displayed
              /sapsll/cuit~RPTDT             " to be displayed
             /sapsll/cuhd~guid_cuhd         " next select
              /sapsll/cuit~guid_pr           " next select
      INTO table gt_sapsllcuit
      FROM  /sapsll/cuit
    inner join /sapsll/cuhd on /sapsll/cuit~guid_cuhd = /sapsll/cuhd~guid_cuhd
      FOR all entries in gt_sagmeld
      WHERE /sapsll/cuit~guid_cuit = gt_sagmeld-guid_pobj.
      Delete adjacent duplicates from gt_sapsllcuit[].
           if not gt_sapsllcuit[] is initial.

    Hi navenet,
    That didn't work.
    We need to try your range options, but I'm not sure what you meant in the last mail, as the range approach is not clear to me. Can you please elaborate more? I am pasting the full code here:
    SELECT     a~extno
               a~guid_lclic       " for next select but000
               e~ctsim
               e~ctsex
               e~srvll
               e~ctsty
               e~lgreg
      INTO TABLE gt_sagmeld
      FROM /SAPSLL/LCLIC  as a
      INNER JOIN /sapsll/tlegsv as e on e~lgreg = a~lgreg
                               where   a~extno in s_extno.
    sort gt_sagmeld by guid_lclic.
    delete adjacent duplicates from gt_sagmeld comparing all fields.
      IF not gt_sagmeld[] is initial.
      SELECT  /sapsll/legcon~guid_lclic
              /sapsll/legcon~guid_pobj
              /sapsll/legcon~amnt_flt
              /sapsll/legcon~amcur
               but000~bpext
               *revised spec
               /sapsll/corpar~PAFCT
              /sapsll/legcon~guid_mobj
             /sapsll/cuit~guid_cuit
      INTO TABLE gt_but000
      FROM /SAPSLL/LEGCON
      for all entries in gt_sagmeld
      where /SAPSLL/legcon~guid_lclic = gt_sagmeld-guid_lclic.
            IF NOT GT_BUT000[] IS INITIAL.
           sort gt_but000 by guid_mobj.
           delete adjacent duplicates from gt_but000 comparing guid_mobj.
         select /sapsll/corpar~guid_mobj
                /sapsll/corpar~PAFCT
                /sapsll/corpar~parno
                into table gt_but001
        from    /sapsll/corpar
        for all entries in gt_but000
        where  /sapsll/corpar~guid_mobj = gt_but000-guid_mobj.
       and    /sapsll/corpar~PAFCT = 'SH'.
    DELETE gt_but001 where PAFCT <> 'SH'.
    *sort gt_corpar by parno.
    *delete adjacent duplicates from gt_corpar comparing parno.
    *select gd000~partner
    *      gd000~bpext
    *     from gd000 into table gt_but001
    *for all entries in gt_corpar
    *where  gd000~partner = gt_corpar-parno.
    My ultimate aim is to select bpext from gd000.
    Can you please explain how to use ranges here, what their significance is, and how I will read the data from the final table if we use ranges?
    Regards,
    Nishant

  • Is there a way of placing images in the timeline without having them snap to frames??

    I'm basically trying to make a short movie for a friend that involves stills that flick back and forth, but I need them to sync to the music I have to go with it. Clearly, the music's beat won't sit frame-perfect, so in order to get the effect I want, I assume there must be a function that would allow me to freely place images wherever I wish... no?
    Sadly, my logic kind of dictates that this won't be possible, but if it's not, how do people get images to change on beat with music etc.? Do they adjust the music to make it fit in with the frames, or what?
    Any help is golden right now

    I didn't see anyone mention the Sequence > Snap toggle. With that on, it will snap to things you may not want it to, like the CTI, other clips, and markers. Of course, it will still be snapping to the frame rate for a given sequence, but Steven already made a fine suggestion for how to improve accuracy there. He's also right that you don't need to be mathematically correct in your alignment, just intuitively correct.
    If it helps, here's an old trick for beat-matching. Play the audio, and with your best drummer's skillz, hit M on the keyboard each time you hear (feel?) the beat you want to sync to. M is the default shortcut for 'Marker', so when you're done you should have a marker near each beat which you can tweak by hand as needed. It may take a few tries to get it right, but it's a lot of fun.
    You can then turn the snap toggle on (if it's not already) and drop your images in to align with the corresponding markers. Make sure you have the still-image duration set to the right length (probably err on the long side to avoid gaps) before you import your images, or it will get confusing.
    In your case, you're flipping back and forth between just two images, so after setting up the markers where you want, I might put one image on V1 and one on V2, span them out to cover the whole audio you're matching to, and then use the razor tool. Or, better yet, use Shift+M and Cmd/Ctrl+K to make edits that correspond exactly to your markers. It will razor all the way through, but you can easily trim the other tracks back out from the head afterwards because nothing will have moved. However you do it, the point is to razor out the portions of V2 where V1 needs to show through and then delete them (don't ripple).

  • As a DVD player

    My DVD player just crapped out. I am going to buy a new player; should I consider a Mini to play that role? I have wireless in the house; can I then hook up to my network also? Primary use is DVD player; secondary would be network. It will be hooked to a 52-inch TV and an Onkyo sound system. Comments on Mac mini vs. DVD player?

    I use a mini as a DVD player currently. What I observe:
    - I have the mini set to 1080p output into a 1080p compatible TV. The DVD upscaling is truly stunning. Better than any dedicated DVD player I have owned. Many DVDs look as good as 720p cable channels I receive.
    - I miss having some functions from a dedicated DVD player, like a dedicated display on the front of the player with readout of chapter and time information.
    - It hasn't happened very often, but once or twice the mini had trouble reading a DVD and it beachballed forever to the point that I had to hit the power switch on the back of the mini to restart it.
    - It would be good if your Onkyo had a spare optical audio input because by far that is the best sound option out of the mini and the only way to get surround sound out of the mini.
    Overall I am happy with this setup. My family slightly less so, but they tolerate it. The DVD player in the upcoming Leopard release has major updates over the player in Tiger, so it will be interesting to see how that improves the overall experience. You might want to wait if you can and ask your question again in a month.
    Concerning the network part, were you thinking of being able to play a DVD on the mini and somehow network the DVD's output to other TVs in the house? If that's what you meant, then I don't think that can work. Otherwise, I don't understand your network question.

  • Insert select statement is taking ages

    Sybase version: Adaptive Server Enterprise/12.5.4/EBF 15432 ESD#8/P/Sun_svr4/OS 5.8/ase1254/2105/64-bit/FBO/Sat Mar 22 14:38:37 2008
    Hi guyz,
    I have a question about the performance of a statement that is very slow, and I'd like to have your input.
    I have the SQL statement below that is taking ages to execute, and I can't find out how to improve it:
    insert SST_TMP_M_TYPE
    select M_TYPE
    from   MKT_OP_DBF M
    join   TRN_HDR_DBF T on M.M_ORIGIN_NB = T.M_NB
    where  T.M_LTI_NB = @Nson_lti
    @Nson_lti is the same datatype as T.M_LTI_NB
    M.M_ORIGIN_NB and T.M_NB have the same datatype
    TRN_HDR_DBF has 1424951 rows and indexes on M_LTI_NB and M_NB
    table MKT_OP_DBF has 870305 rows
    table MKT_OP_DBF has an index on M_ORIGIN_NB column
    Statistics for index:                   "MKT_OP_ND7" (nonclustered)
    Index column list:                      "M_ORIGIN_NB"
         Leaf count:                        3087
         Empty leaf page count:             0
         Data page CR count:                410256.0000000000000000
         Index page CR count:               566.0000000000000000
         Data row CR count:                 467979.0000000000000000
         First extent leaf pages:           0
         Leaf row size:                     12.1161512343373872
         Index height:                      2
    The representation of M_ORIGIN_NB is:
    Statistics for column:                  "M_ORIGIN_NB"
    Last update of column statistics:       Mar  9 2015 10:48:57:420AM
         Range cell density:                0.0000034460903826
         Total density:                     0.0053334921767125
         Range selectivity:                 default used (0.33)
         In between selectivity:            default used (0.25)
    Histogram for column:                   "M_ORIGIN_NB"
    Column datatype:                        numeric(10,0)
    Requested step count:                   20
    Actual step count:                      20
         Step     Weight                    Value
            1     0.00000000        <       0
            2     0.07300889        =       0
            3     0.05263098       <=       5025190
            4     0.05263098       <=       9202496
            5     0.05263098       <=       12664456
            6     0.05263098       <=       13129478
            7     0.05263098       <=       13698564
            8     0.05263098       <=       14735554
            9     0.05263098       <=       15168461
           10     0.05263098       <=       15562067
           11     0.05263098       <=       16452862
           12     0.05263098       <=       16909265
           13     0.05263212       <=       17251573
           14     0.05263098       <=       18009609
           15     0.05263098       <=       18207523
           16     0.05263098       <=       18404113
           17     0.05263098       <=       18588398
           18     0.05263098       <=       18793585
           19     0.05263098       <=       18998992
           20     0.03226340       <=       19574408
    If I look at the showplan, I can see the indexes on TRN_HDR_DBF are used, but not the one on MKT_OP_DBF:
    QUERY PLAN FOR STATEMENT 16 (at line 35).
        STEP 1
            The type of query is INSERT.
            The update mode is direct.
            FROM TABLE
                MKT_OP_DBF
                M
            Nested iteration.
            Table Scan.
            Forward scan.
            Positioning at start of table.
            Using I/O Size 32 Kbytes for data pages.
            With LRU Buffer Replacement Strategy for data pages.
            FROM TABLE
                TRN_HDR_DBF
                T
            Nested iteration.
            Index : TRN_HDR_NDX_NB
            Forward scan.
            Positioning by key.
            Keys are:
                M_NB  ASC
            Using I/O Size 4 Kbytes for index leaf pages.
            With LRU Buffer Replacement Strategy for index leaf pages.
            Using I/O Size 4 Kbytes for data pages.
            With LRU Buffer Replacement Strategy for data pages.
            TO TABLE
                SST_TMP_M_TYPE
            Using I/O Size 4 Kbytes for data pages.
    I was expecting the query to use the index on MKT_OP_DBF as well.
    Thanks for your advice,
    Simon

    The total density number for the MKT_OP_DBF.M_ORIGIN_NB column doesn't look very good:
         Range cell density:           0.0000034460903826
         Total density:                0.0053334921767125
    Notice the total density value is about three orders of magnitude larger than the range cell density, which can indicate a largish number of duplicates. (NOTE: this wide difference between range cell and total density can be referred to as 'skew' - more on this later.)
    Do some M_ORIGIN_NB values have a large number of duplicates?  What does the following query return:
    =====================
    select top 30 M_ORIGIN_NB, count(*)
    from   MKT_OP_DBF
    group by M_ORIGIN_NB
    order by 2 desc, 1
    =====================
    The total density can be used to estimate the number of rows expected for a join (eg, TRN_HDR_DBF --> MKT_OP_DBF).  The largish total density number, when thrown into the optimizer's calculations, may be causing the optimizer to think that the volume of *possible* joins will be more expensive than a join in the opposite direction (MKT_OP_DBF --> TRN_HDR_DBF) which in turn means (as Jeff's pointed out) that you end up table scanning MKT_OP_DBF (as the outer table) because of no SARGs.
    From your description it sounds like you've got the necessary indexes to support a TRN_HDR_DBF --> MKT_OP_DBF join order. (Though it wouldn't hurt to see the complete output from sp_helpindex for both tables just to make sure we're on the same sheet of music.)
    Without more details (eg, complete stats for both tables, sp_help for both tables - if you decide to post these I'd recommend posting them as a *.txt attachment).
    I'm assuming you *know* that a join from TRN_HDR_DBF --> MKT_OP_DBF should be much quicker than what you're currently seeing.  If this is the case, I'd probably want to start with:
    =====================
    exec sp_modifystats MKT_OP_DBF, M_ORIGIN_NB, REMOVE_SKEW_FROM_DENSITY
    go
    exec sp_recompile MKT_OP_DBF
    go
    -- run your query again
    =====================
    By removing the skew from the total density (ie, set total density = range cell density = 0.00000344...) you're telling the optimizer that it can expect a much smaller number of joins for the join order of TRN_HDR_DBF --> MKT_OP_DBF ... and that may be enough for the optimizer to use TRN_HDR_DBF to drive the query.
    NOTE: If sp_modifystats/REMOVE_SKEW_FROM_DENSITY provides the desired join order, keep in mind that you'll need to re-issue this command after each update stats command that modifies the stats on the M_ORIGIN_NB column.  For example, modify your update stats maintenance job to issue sp_modifystats/REMOVE_SKEW_FROM_DENSITY for those special cases where you know it helps query performance.
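    The re-apply pattern described in the NOTE above could be sketched as a maintenance script (object names are from this thread; whether plain `update statistics` or `update index statistics` is appropriate depends on the existing maintenance job, so treat this as an assumption to adapt):

    ```sql
    -- Refresh statistics (this recalculates total density, re-introducing the skew)
    update statistics MKT_OP_DBF
    go
    -- Re-remove the skew from the column's total density
    exec sp_modifystats MKT_OP_DBF, M_ORIGIN_NB, REMOVE_SKEW_FROM_DENSITY
    go
    -- Force stored plans to be recompiled so the adjusted stats take effect
    exec sp_recompile MKT_OP_DBF
    go
    ```

    The key point is ordering: sp_modifystats must run after every stats refresh that touches M_ORIGIN_NB, or the optimizer goes back to the skewed density.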

  • Idocs Performance

    Hi,
    I have IDocs with many segments ("large size"), and the performance is affected. Could you tell me which parameters I must change, or what I can do to improve the performance?
    Regards
                           Adam

    Hi Michael,
    The following notes, 595479, 761332, and 760993,
    discuss how to improve the performance of the IDoc adapter.
    Maybe they will help you,
    Regards,
    Bhavesh

  • Notifications Search(/oracle/apps/fnd/wf/worklist/webui/NotifSearchPG)

    Notifications Search Page (/oracle/apps/fnd/wf/worklist/webui/NotifSearchPG)
    When searching notifications without any conditions, it is very slow, about 4 minutes.
    After clicking a notification title to reach the detail page (/oracle/apps/fnd/wf/worklist/webui/NotifDetailsPG), approving or performing other actions, and then returning to NotifSearchPG via the 'Return to Worklist' link, it is again very slow, about 4 minutes, because the worklists are searched again.
    How can I improve the performance of the worklist search?
    There are about 4 million records in table WF_NOTIFICATIONS, and the hardware platform is strong enough to support the applications.
    AMs, VOs, and COs in NotifSearchPG:
    NotificationSearchAM
    WorklistStatusVO
    AdminItemTypesVO
    SearchWorklistVO
    SentDateVO
    WorklistPriorityVO
    DueDateVO
    UserLOVVO
    DigSigSearchVO
    and following on webui:
    NtfSearchCO.class
    NtfUtil.class

    Hi,
    Include some diagnostic messages and see whether your controller class is getting invoked or not.
    Note: while attaching your controller class through personalization, give the fully qualified path of the controller, like xxx.oracle.apps.inv.icx.selfservice.xxmyControllerCO, then tab out and click the "Apply" button.
    Keerthi K

  • Select or Select....for update nowait (data write concurrency issue)

    Hello everyone,
    I am working on a JSP/servlet project now, and have questions about the better way to deal with concurrent writes.
    The whole scenario is as follows:
    Each user views his own list of several records, and each record has a hyperlink through which the user can modify it. After the user clicks that link, a popup window appears, pre-populated with the values of that record, where the user can make modifications. When done, he can either click "Save" to save the change or "Cancel" to cancel it.
    Method 1 (the method I am using right now):
    I did not take any special synchronization measures, so if user1 and user2 click the link for the same record, they will modify the record at the same time, and whose update takes effect depends on who submits the request later. If user1 submits first, then user2, user1 will not see his updates. I know that with this method we have the "lost updates" problem, but it is the simplest and most efficient way to handle the issue.
    Method 2 (the method I am hesitating about):
    I am considering using "SELECT ... FOR UPDATE NOWAIT" to lock a record when user1 has selected it and intends to modify it. If user2 then wants to modify the same record, he is not allowed to (by catching the SQL exception). The issue I am concerned about is that the SELECT ... FOR UPDATE and the UPDATE are not as consecutive as in many transaction examples; there could be a big interval between the SELECT and the UPDATE. You cannot predict the user's behavior: maybe after he opens the popup window it takes him a while to make up his mind, or worse, he is interrupted by other things and goes away for the whole morning. Then the lock is held until he releases it.
    Another issue: if he clicks "Cancel", with method 1 I don't need to interact with the server side at all, but with method 2 I still need to interact with the server to release the lock.
    Can someone give me some advice? What do you do to deal with similar situations? If I have not made the question clear, please let me know.
    Thanks in advance !
    Rachel

    Hi Rachel,
    Congratulations, you have found a way to work through your business logic.
    Have you considered the CachedRowSet concept (due to be included in J2SE 1.5 "Tiger" next year)? It may prove workable in this scenario: you can disconnect from the database after you have executed your query and reconnect later when you have transactional work to do, so the loading overhead and connection-pooling activity are balanced.
    Although RowSet is still not an official API, its potential is, to me, worth considering.
    I have written a simple but crude JSP program, posted on this forum under the heading "Interesting CachedRowset JSP code to share", to demonstrate the CachedRowSet concept, hoping that a Java guru or developer could provide feedback on how to improve the programming logic or methodology.
    Thanks!!
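    A third option, not raised in the thread above, is optimistic locking with a version column: no database lock is held while the popup is open, and a lost update is detected at save time instead of being silently overwritten. A hedged SQL sketch (the table, columns, and values here are invented for illustration):

    ```sql
    -- Hypothetical table: each record carries a version counter.
    -- Read phase (popup opens): remember the version along with the data.
    SELECT id, data, version FROM records WHERE id = 42;

    -- Save phase: the UPDATE succeeds only if nobody changed the row meanwhile.
    -- Suppose the read above returned version = 7.
    UPDATE records
    SET    data = 'new value', version = version + 1
    WHERE  id = 42
    AND    version = 7;   -- the version read earlier

    -- If the UPDATE affected 0 rows, another user saved first:
    -- report a conflict and let the user re-fetch and redo the change.
    ```

    This keeps the "Cancel" path free of any server interaction, like method 1, while still preventing lost updates, like method 2.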

  • IMessage takes ages to update on my IOS devices

    I'm having a problem where my iMessages are not updating between my iPhone and iPad (and vice versa) quickly enough. It's taking an hour, even over the same Wi-Fi connection (when I did some tests).
    It's getting very frustrating when switching between the devices and half my conversation with someone is not there. I thought this was kind of the point of iMessage and iCloud?
    Does anyone have any suggestions on how to improve this, or has Apple got things a bit wrong?
    Thanks

    http://osxdaily.com/2013/09/19/ios-7-battery-life-fix/

  • How can I unlock my questions and improve them?

    Hi, maybe this question has already been asked a hundred times, but I didn't find it. How can I unlock my questions to improve them?

    Well, you can start by not asking questions to which you already have the answer. One of your locked threads points you to:
    [Rules of engagement|http://wiki.sdn.sap.com/wiki/display/HOME/RulesofEngagement]
    [Asking Good Questions in the Forums to get Good Answers|Asking Good Questions in the SCN Discussion Spaces will help you get Good Answers]
    Rob

  • How to create suitable aggregates for queries on multiprovider ?

    Hi all,
    Goal: reduce the DB time of the query and improve performance.
    I have queries on a MultiCube; there are 5 cubes under the MultiProvider, and I am having a performance issue with one of the cubes. It has a high selection/transfer ratio, and the same cube has a high DB time of 94%. All the BW and DB indexes and stats are green. I chose the path of aggregates. When I try "suggest from proposal", it asks me for a query time and date range; I gave the last 3 days and a query time of 150 seconds. It suggests a huge number of aggregates, about 150 of them, and the number does not get reduced much when I try the "optimize" functionality.
    The problem cube has nearly 9 million records and 4 years of data.
    1. Generally, how many aggregates do we need to create on a cube?
    2. How do I use "propose from last navigation"? It is not creating any aggregates.
    3. Is there a way for the system to propose a smaller number of aggregates?
    4. If nothing works, I want to cut the volume of the aggregates based on years or quarters. How do I do that?
    I created an aggregate with the time characteristic 0CALQUARTER and dragged in 0CALDAY and 0CALMONTH, then activated and filled it, but the query does not hit it when I do a monthly selection. I tried bringing in all the other dimensions, except line-item dimensions; no use, it does not hit the manual aggregates in RSRT. The selection on 0CALQUARTER is *.
    5. Should I change it to a fixed value, bring in the line items too, and create it that way?
    6. I wanted to try the "propose aggregate from query" option, but my query is on a MultiProvider and I am not able to copy it to the cube. Please help me find the suitable aggregates for a query on a MultiProvider.
    7. Should I create any new indexes on the cube using the characteristics in the WHERE condition of the SELECT statement? But in that case the SELECT statement changes with each drill-down; how do I handle that?
    8. How will I make sure the aggregates improve run time for all queries?
    9. Please suggest other approaches, if any, with procedures.
    This is an urgent problem; please help.
    <b>Thanks in advance;
    points will be assigned for inputs.</b>

    1. Generally, how many aggregates do we need to create on a cube?
    It depends on your specific needs; you may need none or several.
    2. How do I use "propose from last navigation"? It is not creating any aggregates.
    Can you elaborate?
    3. Is there a way for the system to propose a smaller number of aggregates?
    In one of the menus of the aggregate-creation screen there is an option for the system to propose aggregates for one specific query; I am not sure whether it works with MultiCubes.
    4. If nothing works, I want to cut the volume of the aggregates based on years or quarters. How do I do that?
    You should delete 0CALDAY from the aggregates in order to accumulate data at any larger time unit. Another option for the time dimension is to look at partitioning the cube.
    5. Should I change it to a fixed value and bring in the line items too and create it?
    Can you elaborate?
    6. I wanted to try the "propose aggregate from query" option, but my query is on a MultiProvider and I am not able to copy it to the cube.
    Answered above; maybe you can create a query using only the data from that cube that appears in the MultiCube query, in order to propose aggregates for that cube.
    7. Should I create any new indexes on the cube using the characteristics in the WHERE condition of the SELECT statement? But in that case the SELECT statement changes with each drill-down; how do I handle that?
    It is not recommended to create new indexes on multidimensional structures. Try to avoid selections on navigational attributes; if necessary, add the navigational attribute as a dimension attribute, and put filters in the filter section in BEx.
    8. How will I make sure the aggregates improve run time for all queries?
    Try transaction ST03.
    9. Please suggest other approaches, if any, with procedures.
    Some other approaches are covered in the answers above.
    Good luck.

  • How do I make my DVD playable in DVD Studio Pro?

    I never used to have problems with DVD Studio Pro until I got my new MacBook Pro.
    I'm making a wedding DVD with a couple of menus and projects, and it all seems to work just fine.
    I only encounter a problem when I go to check the DVD after it is done burning.
    When I pop it into both a DVD player and the computer, nothing happens. It says the disk is unreadable, or nothing at all happens.
    I'm not sure what to do because I don't even know what is wrong.
    DVD Studio Pro isn't telling me anything is wrong, and says that it is burning just fine.
    I have also tried multiple different exported versions of my projects, and nothing has improved or even changed.
    Any tips on how to make my DVD playable?

    Did you do a build and format? If so, check out the contents of the TS folder.
    Often people report similar problems and it turns out they inadvertently made an HD DVD…though I would think it should play in your Mac if that were the case. But check your settings nonetheless.
    Are you doing the encode in Compressor?
    Russ

  • How much RAM for ONE App & does OS X limit allocation?

    Hello all,
    This question concerns how much RAM I realistically need to run ONE application to its fullest. I mainly use ProToolsLE 7.3, but sometimes use LogicPro 8 and Final Cut Express 4; again, only one is running at a time. I've read on the ProTools forum that Windows (yuck) limits the amount of RAM that can be allocated to one application. Does Mac OS 10.4 or 10.5 do this?
    Right now I have the stock 1GB (2 x 512MB). I plan on pulling it and installing a matched quad of either 1GB sticks (4GB total) -OR- 2GB sticks (8GB total). I understand that matched quads in slots 1/2 of both risers is optimum for the early Mac Pros.
    The ProTools forum isn't helping much. A ProTools support person said that starting with 4GB will be good, and that they test their systems with 4GB installed. What did they mean by "good to start"? Will performance increase if I jump to 8GB, or is 4GB all ProTools can really use?
    Another guy on there said, "keep in mind that to use 4GB of RAM you'd need a 64-bit operating system (not supported by Pro Tools), or on a Mac, PAE support enabled, which depending on your hardware may not be a great move." I don't know what that means! Mac OS 10.5 IS a 64-bit OS, right? And ProTools 7.4 NEEDS OS 10.5 to run... even if it is only a 32-bit app. (Though right now I am running PT 7.3 on OS 10.4... not sure if 10.4 is 64-bit.)
    A quad set of 1GB is under $170 and a quad set of 2GB is under $270. Funds are tight right now, but I think I can swing the extra $100 to double the RAM -IF- it will make a difference.
    Any thoughts as to whether the extra $100 will be worth it, or am I wasting my money? Thanks!

    Hatter,
    Thank you so much for your advice! If I may bother you to receive one final suggestion....
    Based on the best prices I have found from all the well-known Mac Pro memory dealers, I have narrowed it down to 3 configurations that are balanced. Even though I'm still not 100% confident that 8GB of RAM will give me any noticeable improvement over 6GB, the price is only $42 more to get 8GB. Am I correct in saying that the early Mac Pro does better with one matched quad set of RAM, versus the new Mac Pro's increased performance from having all eight slots filled? Here are the 3 configurations and prices.
    4 x 1GB + 4 x 512MB = 6GB Quad+Quad $220 (using my stock 2x512MB sticks)
    4 x 2GB = 8GB Quad $262 (pulling my stock 2x512MB sticks)
    8 x 1GB = 8GB Octal $278 (pulling my stock 2x512MB sticks)
    Which do you think would be the best route?
    I already purchased a WD Caviar for my dedicated audio drive. I installed a Seagate 7200.9 in my PowerMac G4 for its audio drive, but after reading various reviews, concluded the WD Caviar is a better performer. I LOVE my WD My Book Studio external. I hadn't considered replacing the boot drive, but after reading articles suggesting that it improves overall system performance, and now that you've recommended it as well, I will probably buy a second Caviar for the boot/system drive.
    This computer is a DEDICATED audio computer, for ProTools & Logic only....other than a bit of fiddling in Final Cut Express. It's not even connected to the internet and all unnecessary apps are not installed or uninstalled. I also never migrate apps, but always do a fresh install of everything from the ground up.
    Thanks again for your advice! It's been a big help!

  • How to recover a file saved in 5.0 to be used again in 4.x?

    After having upgraded to Mavericks, I innocently upgraded Pages, excited about the improvements... alas, what a mistake! Most of my docs are now useless because I cannot select images, for example. Thankfully my old 4.1 is still around, but any doc I saved in 5.0 cannot be opened. Is there a way to recover them?
    Incidentally, how can I uninstall Pages 5.0? I trashed the icon from the Applications folder, but it comes back. Or at least, how can I make 4.1 the default?

    File > Export > Pages '09

  • How to get job

    Hi all,
    I am at an intermediate level in the J2EE domain, with more than 1 year of experience in this technology. I know JSP, servlets, beans, and Oracle. But as far as my experience is concerned, I want to improve my skills in J2EE (EJB, JMS, ... and knowledge of application servers like WebLogic, JBoss, etc.).
    How can I improve my skills?
    By just reading books, or by doing some practical work at home?
    Please advise me.

    It's also good if you start reading a book, and then try to implement your earlier projects in J2EE, which will help you.
    Regards
    Chintan
