PRE 9 Performance question

Hi all,
Finally made the big move from PRE4 to PRE 9.0.1.  As usual with PRE upgrades, there are removed functions that make me scratch my head, such as no longer being able to designate favorite transitions, moving the zoom in/out slider over to the edge of the display, and dumbing down the Marker icons (so it now takes more mouse clicks just to add a marker).  But those are just annoyances.
My actual problem is that some functions seem to be much slower than in my previous version (on same PC).  For example, a simple rendering of two 5 second title clips with a transition joining them takes longer than real-time to render (takes longer than 10 seconds to render 10 seconds worth of video).  Nothing special - two tracks, one A/V and one just the title.  Rendering other places such as fade-in/out is slow also.  This is SD AVI video from the same DV camera I've used for years (Panasonic PV-GS320).
Scrubbing works fine - I can't move the CTI faster than it can display/play audio.  DVD burn time seems reasonable (and successful) although I have not done anything lengthy yet.  No crashes, and on the plus side so far all the problems I had with the titler in PRE4 are fixed now.  I do notice the same problem Neale mentioned where displaying the thumbnails on the timeline is very slow.
Update: I did notice one weird thing: when you mouse over a clip on the timeline and get the little window with the start/end/duration information, the little window actually blinks off and on, even though I'm not moving the mouse.  On for 1/2 second or so, off for a second.  It's on just briefly enough that it takes a few tries to read the information.
Specs on the PC:
HP PC with AMD Athlon II x4 630 CPU @ 2.8 GHz
6GB RAM
Windows 7 64 bit - current on patches
Three internal SATA HD's (one for software, one for video, one for rendering)
Current video driver (NVIDIA GEFORCE 9100 version 8.17.12.6099)
Current Quicktime  7.6.9
Can anyone think of anything I'm not checking?    Or is PRE9 just slower?
Thanks!
Bob

... and to add to Steve's comment, you will have the same response with more demanding video (Full HD TOD, AVCHD, etc.).
So, to keep it seamless, they made it slow for SD as well.

Similar Messages

  • Oracle EBS R12 Pre - Implementations phase question air

    Oracle EBS R12 Pre - Implementations phase question air
    Posted: Jun 30, 2009 10:22 AM
    Dear all Gurus,
    We are going to implement Oracle EBS R12 for an industrial concern, and we have the following queries if any peer can advise:
    1) We heard that Oracle has built a new R12 release on the 11g DB; is it practical to choose it for corporate use?
    2) Which Red Hat Linux version is more stable? Is release 5 compatible with the new R12 release?
    3) We are also considering 64-bit architecture rather than 32-bit; could anyone lay out the practical pros and cons of this?
    4) We are also wondering about the server machine brand and its configuration, e.g. HP DL380 G6, Dell 2850; could anyone share their experience with these?
    5) What sort of server configuration (processor, 2-way/4-way, RAM, HD and other accessories) is suitable for an R12 multi-node setup for 150 clients (DB server, Apps server, Test, Prod)?
    6) What should our backup strategy be (e.g. tape backup), and how much space must we provision to retain roughly 2-3 months of backups?
    7) Application implementation methodologies?
    8)
    I know that addressing our queries will be time-consuming for you busy people, but I would be much obliged if you shared your journey; it would definitely be a pathway for others, like mentoring.
    Looking forward to your valuable instructions ASAP.
    Thanks & Best Regards
    Muhammad Waseem
    Manager IT
    Neee-Has Textiles Divisions
    31-Q Gulberg II Lahore
    Pakistan
    92-0333-4240949

    Duplicate post.
    Oracle EBS R12 Pre - Implementations phase question air

  • Simple performance question

    Simple performance question, put the simplest way possible. Assume
    I have an int[][][][][] matrix and a boolean add; the array has several dimensions.
    When add is true, I must add a constant value to each element in the array.
    When add is false, I must subtract a constant value from each element in the array.
    Assume this is very hot code, i.e. it is called very often. How expensive is the condition checking? I present the two scenarios.
    private void process() {
        for (int i = 0; i < dimension1; i++)
            for (int ii = 0; ii < dimension1; ii++)
                for (int iii = 0; iii < dimension1; iii++)
                    for (int iiii = 0; iiii < dimension1; iiii++)
                        if (add)
                            matrix[i][ii][iii][...] += constant;
                        else
                            matrix[i][ii][iii][...] -= constant;
    }

    private void process() {
        if (add)
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][...] += constant;
        else
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][...] -= constant;
    }
    Is the second scenario worth a significant performance boost? Without understanding how the compiler generates executable code, it seems that in the first case, n^d conditions are checked, whereas in the second, only one. It is, however, less elegant, but I am willing to do it for a significant improvement.

    erjoalgo wrote:
    I guess my real question is, will the compiler optimize the condition check out when it realizes the boolean value will not change through these iterations, and if it does not, is it worth doing that micro optimization?
    Almost certainly not; the main reason being that
    matrix[i][ii][iii][...] +/-= constant
    is liable to take many times longer than the condition check, and you can't avoid it. That said, Mel's suggestion is probably the best.
    but I will follow amickr advice and not worry about it.
    Good idea. Saves you getting flamed with all the quotes about premature optimization.
    Winston
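    One option the thread doesn't spell out: since add is loop-invariant, you can fold it into a signed delta computed once before the loops, keeping a single loop nest with no per-iteration branch. A minimal sketch (2-D instead of 5-D for brevity; the class and variable names are illustrative, not from the thread):

```java
public class MatrixDelta {
    // Decide the sign once, outside the hot loops, instead of
    // branching on 'add' at every element.
    static void process(int[][] matrix, boolean add, int constant) {
        int delta = add ? constant : -constant;
        for (int i = 0; i < matrix.length; i++)
            for (int ii = 0; ii < matrix[i].length; ii++)
                matrix[i][ii] += delta;
    }

    public static void main(String[] args) {
        int[][] m = {{1, 2}, {3, 4}};
        process(m, false, 1);   // subtract 1 from every element
        System.out.println(m[0][0] + " " + m[1][1]);
    }
}
```

    Whether this measurably beats the original is another matter: as Winston notes the array update dominates, and a JIT will often hoist such loop-invariant conditions on its own (loop unswitching), so the main gain here is avoiding the duplicated loop nest.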

  • Rating scale does not appear in Pre-defined Performance Mgmt Wizard

    Hi Experts,
    I am implementing SAP Pre-defined Performance template.
    When configuring the IMG entry "Define Templates for Performance Management" I run the wizard for Performance, and when I get to the rating, the only options presented are SAP standard delivered values, e.g. Standard Quality Scale 1-5, Standard Quality Scale 0-10, Standard Quality Scale 1-3, Standard Quality Scale 1-3.
    Now I have previously configured rating scale Team Performance with rating values
    1-below expectations
    2-meets expectations
    3-exceeds expectations
    However, this rating scale does not appear in the dropdown list to select from.
    Please can someone explain what could be missing or if they had this issue before and were able to solve this.
    Many thanks
    Oliver

    Hi joker_of_the_deck
    thank you for your quick response. Much appreciated.
    I checked transaction OOHAP_BASIC and my scale is already valid within the Value List.
    There are many other scales in our system within this value list but only the entries I mentioned previously are available in the Pre-defined performance wizard.
    Do you have any other suggestions on why the system would not allow my rating scale to be selected in the pre-defined performance template as part of the wizard setup?
    Regards
    Oliver

  • BPM performance question

    Guys,
    I do understand that ccPBM is very resource hungry but what I was wondering is this:
    Once you use BPM, does an extra step decrease the performance significantly? Or does it just need slightly more resources?
    More specifically, we have quite complex mappings in 2 BPM steps. Combining them would make the mapping less clear, but would it be worth doing from a performance point of view?
    Your opinion is appreciated.
    Thanks a lot,
    Viktor Varga

    Hi,
    In SXMB_ADM you can set the time out higher for the sync processing.
    Go to Integration Processing in SXMB_ADM and set the parameter SA_COMM CHECK_FOR_ASYNC_RESPONSE_TIMEOUT to 120 (seconds). You can also increase the number of parallel processes if you have more waiting now: SA_COMM CHECK_FOR_MAX_SYNC_CALLS from 20 to XX. It all depends on your hardware, but this helped me go from the standard 60 seconds to maybe 70 in some cases.
    Make sure that your calling system does not have a timeout below the one you set in XI; otherwise yours will go on and finish, and your partner may end up sending it twice.
    When you go for BPM, the whole workflow has to come into action. So, for example, where your mapping lasts < 1 sec without BPM, in a BPM the transformation step can last 2 seconds + one second of mapping (that's just an example). The workflow gives you many design possibilities (bridge, error handling), but it can slow down the process, and if you have thousands of messages the performance can be much worse than the same scenario without BPM.
    See the links below:
    http://help.sap.com/bp_bpmv130/Documentation/Operation/TuningGuide.pdf
    http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/3.0/sap%20exchange%20infrastructure%20tuning%20guide%20xi%203.0.pdf
    BPM Performance tuning
    BPM Performance issue
    BPM performance question
    BPM performance- data aggregation persistance
    Regards
    Chilla..

  • Pre-defined performance - MSS - Cascading team Goals

    Hi Experts,
    we are using the SAP pre-defined performance EHP4 system.
    We have created a template in our DEV system and are able to create the appraisal document from the Manager and update by the employee.
    Within the team goals screen (iView) of the pre-defined template, we can select the view "Direct Reports" which allows us to cascade team goals.
    Our issue is that if we select the view "Org Unit structure" or the "Employee structure" then we get a Portal crash.
    The other related issue is that the goal end date is being defaulted to 31.12.9999 and we are getting the error "Entered dates must be in the current year" where the plan start date is 01.08.2010 and the end date is 31.07.2011.
    Does the start and end date of the goals have to be a calendar year?
    Please can someone tell me if they did any additional config or found a solution to these issues?
    Many thanks
    Oliver

    Hi,
    In the standard, the team goals period is limited to a maximum of one year. But SAP provides two enhancements (enhancement spot HRHAP00_GOAL_PERIOD) that can be used to allow different periods. These are:
    - HRHAP00GOAL_PERIOD_GEN for the generic UI
    - HRHAP00GOAL_PERIOD_PMP for the predefined process
    So you may use these to fulfill your requirement. Just take into account that the goal assessment for the goals (if they are valid for more than one year) would be the same for the whole validity period of the goal.
    Regards,
    Ana

  • Pre-defined performance PMP_PROC_REJ "Reject Overall Appraisal"

    Hi Experts,
    we are using the SAP pre-defined performance EHP4 system.
    We have created a template in our DEV system and are able to create the appraisal document from the Manager and update by the employee.
    At the end of the overall appraisal process, if the employee selects the pushbutton "Reject overall appraisal", this seems to stop the appraisal from being edited by either the manager or the employee.
    I would have expected a workflow to be generated, or some means for the manager and the employee to have an offline conversation, after which the manager could re-open the team member's document to make the final change and complete it.
    The pushbutton PMP_PROC_REJ "Reject Overall Appraisal" only has the subsequent status "Closed rejected".
    Please can someone tell me if they did any additional config or found a solution to this issue?
    Many thanks
    Oliver

    Hi Swapnil,
    we decided to stop the process when the employee hits "Reject overall appraisal" and added a new workflow to notify HR and the manager of the rejection.
    It may be possible to write a custom function to automate phap_admin, but I would think it extremely difficult, as it would have to pass the correct credentials from the Portal to the backend phap_admin and change the status to "In process".
    It may be better to have a client process by which people are informed of the rejection, to understand why the employee rejected, and for HR to control re-setting the status using phap_admin.
    Hope this makes sense
    Thanks
    Oliver

  • Swing performance question: CPU-bound

    Hi,
    I've posted a Swing performance question to the java.net performance forum. Since it is a Swing performance question, I thought readers of this forum might also be interested.
    Swing CPU-bound in sun.awt.windows.WToolkit.eventLoop
    http://forums.java.net/jive/thread.jspa?threadID=1636&tstart=0
    Thanks,
    Curt

    You obviously don't understand the results, and the first reply to your posting on java.net clearly explains what you missed.
    The event queue is using Thread.wait to sleep until it gets some more events to dispatch. You have incorrectly diagnosed the sleep waiting as your performance bottleneck.

  • Xcontrol: performance question (again)

    Hello,
    I've got a little performance question regarding XControls. I observed rather high CPU load when using XControls. To investigate further, I built a minimal XControl (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this XControl in a test VI and write to it at a rate of 1000 booleans/second, I get a CPU load of about 10%. When I write directly to a boolean display element instead of the XControl, I have a load of 0 to 1%. The funny thing is, when I emulate the XControl functionality with a subVI, a subpanel and a queue (see example), I also have only 0 to 1% CPU load.
    Is there a way to reduce the cpu-load when using xcontrols? 
    If there isn't and if this is not a problem with my installation but a known issue, I think this would be a potential point for NI to fix in a future update of LV.
    Regards,
    soranito
    Message Edited by soranito on 04-04-2010 08:16 PM
    Message Edited by soranito on 04-04-2010 08:18 PM
    Attachments:
    XControl_performance_test.zip ‏60 KB

    soranito wrote:
    Hello,
    I've got a little performance question regarding XControls. I observed rather high CPU load when using XControls. To investigate further, I built a minimal XControl (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this XControl in a test VI and write to it at a rate of 1000 booleans/second, I get a CPU load of about 10%. When I write directly to a boolean display element instead of the XControl, I have a load of 0 to 1%. The funny thing is, when I emulate the XControl functionality with a subVI, a subpanel and a queue (see example), I also have only 0 to 1% CPU load.
    Okay, I think I understand the question now.  You want to know why an equivalent XControl boolean consumes 10x more CPU resource than the LV base package boolean?
    Okay, try opening the project I posted in my reply yesterday.  I don't have access to LV at my desk, so let's try this. Open up your XControl's facade.vi.  Notice how I separated your data event into two events?  Go to the data change VI event; when looping back the action, set isDataChanged (part of the data change cluster) to FALSE, while for the data input (the one displayed on your facade.vi front panel), set isDataChanged to TRUE.  This will limit the number of times the facade loops.  It will not drop your CPU from 10% to 0%, but it should drop a little, just enough to give you a short-term solution.  If that doesn't work, just play around with the loopback statement; I can't remember the exact method.
    Yeah, I agree the XControl shouldn't be over-consuming system resources.  I think XControl is still in its primitive form, and I'm not sure if NI is planning on investing more time in bug fixes or enhancements.  IMO, I don't think XControl is quite ready for primetime yet; just too many issues that need improvement.
    Message Edited by lavalava on 04-06-2010 03:34 PM

  • Performance Questions

    Hi,
    i've following questions about report performance:
    I have the following questions about report performance:
    1) If in my RPD I have a table or a view which degrades performance, are all the other objects impacted? In other words, if I have a table or view X which takes a long time to retrieve results, will even queries that do not reference X perform badly? Is it all linked?
    2) When I log in to Oracle BI, what objects does the BI Server pre-load, in addition to session or repository variables and users/groups?
    3) Can indexes on my DB improve performance?
    4) I had a report based on dimension A and a measure B, in which I had defined a filter on the Year field (Year=2011). Opening NQSQuery, I correctly see the filter applied in the physical query.
    Now I have created a new report based on dimension A and a measure C calculated using the Ago and ToDate functions, with the same filter. Opening NQSQuery, I no longer see the filter in the physical query; it is defined only in the logical query. Is it executed anyway or not? Can executing a filter at the logical level worsen performance?
    Thanks

    832596 wrote:
    Hi,
    i've following questions about report performance:
    1) If in my RPD I have a table or a view which degrades performance, are all the other objects impacted?
    As long as table X / view X is not being used in the report (in other words, not used in the SQL generated against the database), you should be fine performance-wise.
    2) When I log in to Oracle BI, what objects does the BI Server pre-load, in addition to session or repository variables and users/groups?
    I think that is about it. The BI Server activates all the session variables and repository variables, and picks up the groups/roles of the user that is logging in.
    3) Can indexes on my DB improve performance?
    Yes, indeed. Create indexes to speed up access times and improve join operations. Create indexes on dimension table columns that have a large number of distinct values; this will speed up access time. Indexes should be designed sensibly and ideally should not contain more than 5 columns in a table.
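    To illustrate point 3, the "large number of distinct values" advice is essentially a cardinality check. A toy sketch, not Oracle-specific (the column names and threshold are made up; a real system would consult the optimizer's own statistics instead of scanning values in application code):

```java
import java.util.*;

public class IndexCandidates {
    // Flag columns whose distinct-value count meets a threshold,
    // per the advice to index high-cardinality dimension columns.
    static List<String> candidates(Map<String, List<String>> columns, int minDistinct) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : columns.entrySet()) {
            if (new HashSet<>(e.getValue()).size() >= minDistinct) out.add(e.getKey());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<String>> cols = new LinkedHashMap<>();
        cols.put("CUSTOMER_NAME", Arrays.asList("A", "B", "C", "D")); // 4 distinct values
        cols.put("GENDER", Arrays.asList("M", "F", "M", "F"));        // 2 distinct values
        System.out.println(candidates(cols, 3));
    }
}
```

    Here only the high-cardinality column is flagged; a low-cardinality column like a gender flag would gain little from a standard B-tree index.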

  • MBP with 27" Display performance question

    I'm looking for advice on improving the performance, if possible, of my MacBook Pro and new 27'' Apple display combination.  I'm using a 13" MacBook Pro 2.53GHz with 4GB RAM and an NVIDIA GeForce 9400M graphics card, and I have 114GB of the 250GB of HD space available.  What I'm really wondering is: is this enough spec to run the 27" display easily?  Apple says it is… and it does work, but I suspect that I'm working at the limit of what my MBP is capable of.  My main applications are Photoshop CS5 with Camera Raw and Bridge.  Everything works, but I sometimes get lock-ups and things are basically a bit jerky.  Is the bottleneck my 2.53GHz processor or the graphics card?  I have experimented with the OpenGL settings in Photoshop and tried closing all unused applications.  Does anyone have any suggestions for tuning things, and is there a feasible upgrade for the graphics card if such a thing would make a difference?  I have recently started working with 21MB Raw files, which I realise isn't helping.  Any thoughts would be appreciated.
    Matt.

    I just added a gorgeous 24" LCD to my MBP setup (the G5 is not happy). The answer to your question is yes. Just go into Display Preferences and drag the menu bar over to the 24"; this will make the 24" the primary display and the MBP the secondary when connected.

  • Performance question about 11.1.2 forms at runtime

    hi all,
    Currently we are investigating a Forms/Reports migration from 10 to 11.
    Initially we were using v11.1.1.4 as the baseline for the migration. Now we are looking at 11.1.2.
    We have the impression that performance has decreased significantly between these two releases.
    To give an example:
    A wizard screen contains an image alongside a number of items for entering details. In 11.1.1.4 this screen shows up immediately. In 11.1.2 you see the image rolling out on the canvas while the properties of the items seem to be set during this event.
    I saw that a number of features were added to allow tuning performance, which ... need processing too.
    I get the impression that a large number of events are communicated over the network during the build of the client-side view of the screen. If I recall correctly, during the migration from 6 to 9, events were bundled for transmission over the network so that delays couldn't come from network round trips. I have the impression that this has been reversed, and things are communicated between client and server as they arrive rather than being bundled.
    My questions are:
    - is anyone out there experiencing the same kind of behaviour?
    - if so, are there properties that control this behaviour and improve performance?
    - are there properties for performance monitoring that are set but which cause the slowness as a kind of side effect, and can perhaps be unset?
    Your feedback will be dearly appreciated.
    Greetings,
    Jan.

    The profile can't be changed although I suspect if there was an issue then banding the line would be something they could utilise if you were happy to do so.
    It's all theoretical right now until you get the service installed. Don't forget there's over 600000 customers now on FTTC and only a very small percentage of them have faults. It might seem like lots looking on this forum but that's only because forums are where people tend to come to complain.

  • Controlfile on ASM performance question

    We are seeing controlfile enqueue performance spikes. The options under consideration are to move the control file to a separate diskgroup (which needs an outage), or to add some disks (from different LUNs; I prefer this approach) to the same disk group. It seems like a slow disk is causing this issue.
    2nd question: can the snapshot controlfile be placed on ASM storage?

    The following points may help:
    - Separating the control file to another diskgroup may make things even worse if the total number of disks in the new disk group is insufficient.
    - Control file contention issues usually have nothing to do with the storage throughput you have, but rather with the number of operations requiring different levels of exclusion on the control files.
    - Since multiple copies of the controlfile are updated concurrently, a possible (and occasional) problem is that the secondary copy of the controlfile is slower than the other. Please check that this is not the issue (different tiers of storage may cause such problems).
    Regards,
    Husnu Sensoy

  • Editing stills with motion effects, performance questions.

    I am editing a video in FCE that consists solely of still photos.
    I am creating motion effects (pans and pullbacks, etc.) and dissolve transitions, and overlaying titles. It will be played back on DVD on a 16:9 monitor (standard DVD, not Blu-ray hi-def). Some questions:
    What is the best FCE setup to use for image quality: DV-NTSC? DV-NTSC Anamorphic? Or HDV-1080i or 720p30, even though it won't be played back as hi-def?
    How do I best avoid the squiggly-line problem with pan moves, etc.?
    On my G5 (2GB RAM, single processor) I seem to be having performance problems with playback: slow to render, dropping frames, etc.
    Thanks for any help!

    Excellent summary MacDLS, thanks for the contribution.
    A lot of the photos I've taken on my camera are 3072 X 2304 (resolution 314) .jpegs.
    I've heard it said that jpegs aren't the best format for Motion, since they're a compressed format.
    If you're happy with the jpegs, Motion will be, too.
    My typical project could be either 1280 X 720 or SD. I like the photo to be a lot bigger than the canvas size, so I have room to do crops and grows, and the like. Is there a maximum dimension that I should be working with?
    Yes and no. Your originals are 7,000,000 pixels. Your video working space only displays about 950,000 pixels at any single instant.
    At that project size, your stills are almost 700% larger than the frame. This will tax any system as you add more stills. 150% is more realistic in terms of processing overhead, and I try to import HUGE images only when I know they are going to be tightly cropped by zooming in. You need to understand that a 1300x800 section of your original is as far as you can zoom in; at that point the pixels are at 100% size, and if you zoom in further, all you get are bigger pixels. The trade-off you make is that if you zoom way out on your source image, you've thrown away 75% of its content to scale it to fit the video format; you lose much, much more if you go to SD.
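    The arithmetic above is easy to check. Using the figures from this thread (a 3072x2304 still against a 1280x720 frame), the pixel-count ratio works out to 768%, in the same ballpark as the "almost 700% larger" estimate:

```java
public class ZoomHeadroom {
    public static void main(String[] args) {
        int srcW = 3072, srcH = 2304;          // camera JPEG dimensions from the thread
        int frameW = 1280, frameH = 720;       // 720p project frame
        long srcPx = (long) srcW * srcH;       // 7,077,888 pixels
        long framePx = (long) frameW * frameH; // 921,600 pixels
        // Size of the still relative to the frame, by total pixel count
        double ratioPct = 100.0 * srcPx / framePx;
        System.out.printf("%.0f%%%n", ratioPct);
    }
}
```

    By linear dimension the still is only 240% of the frame width (3072/1280), which is why a modest zoom-in quickly reaches the 100%-pixel limit described above.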
    Finally, the manual says that d.p.i doesn't matter in Motion, so does this mean that it's worth
    actually exporting my 300 dpi photos to 72 dpi before working with them in Motion?
    Don't confuse DPI with resolution. Your video screen will only show about 900,000 pixels in HD and about 350,000 pixels in SD, regardless of how many pixels there are in your original.
    bogiesan

  • 9 shared objects performance question

    I have 9 shared objects, 8 of which contain dynamic data which I cannot really consolidate into a single shared object, because the shared object often has to be cleared. My question is: what performance issues will I experience with this number of shared objects? I may be wrong in thinking that 9 shared objects is a lot. Anybody with any experience using multiple shared objects, please respond.

    I've used many more than 9 SO's in an application without issue. I suppose what it really comes down to is how many clients are connected to those SO's and how often each one is being updated.
