Multicore Processor Dividing - Best Practices or Advice?

I am building a dedicated cluster with QAdministrator that uses different models of Macs.
01 - Xserve with 4 cores (dual dual cores). (2i)
02 - Octocore MacPro 01 (4i)
03 - Octocore MacPro 02 (4i)
04 - Intel Core Duo MacBook Pro (1i)
05 - G5 Dual Core (1i)
06 - Intel Core Duo Mac Mini (1i)
In the Apple QMaster Preferences, you can click on the Compressor Service and choose how many "Instances" to create for each computer.
I've been setting the octo-core Mac Pros to 4 instances each, the quad-core Xserve to 2 instances, and everything else to 1 instance.
It is my understanding that Compressor looks at how many available instances are present and then divides the footage into 2 times that many Segments.
So counting above, I am getting 13 individual processing instances, so Compressor is breaking my footage up into 26 segments.
If every computer was 1 instance, I would have 6 instances and 12 larger pieces.
Is it better for the MacPros to use all 8 processors on 1 bigger chunk of video, or not? Is it better to break up the 8 cores so they are more balanced with the other processors?
Any thoughts and solid experience would be useful for me to hear.
Thank you. Jesus.


Similar Messages

  • Multiple jdk versions on solaris--best practices and advice

    I am a newcomer to solaris system administration (not by choice--I am normally just a Java programmer, but am now responsible for testing code on a new solaris box), so apologies for the newbie questions below.
    #1: is it typical for a brand new solaris install to have multiple versions of Java on it?
    After installation, which left me with this version of solaris:
         SunOS asm03 5.10 Generic_120011-14 sun4v sparc SUNW,SPARC-Enterprise-T5220
    I find from pkginfo that there are 2 old versions of Java installed:
         SUNWj3dev     J2SDK 1.4 development tools
    SUNWj3dmo     J2SDK 1.4 demo programs
    SUNWj3dvx     J2SDK 1.4 development tools (64-bit)
    SUNWj3irt     JDK 1.4 I18N run time environment
    SUNWj3jmp     J2SDK 1.4 Japanese man pages
    SUNWj3man     J2SDK 1.4 man pages
    SUNWj3rt      J2SDK 1.4 runtime environment
    SUNWj3rtx     J2SDK 1.4 runtime environment (64-bit)
    SUNWj5cfg     JDK 5.0 Host Config. (1.5.0_12)
    SUNWj5dev     JDK 5.0 Dev. Tools (1.5.0_12)
    SUNWj5dmo     JDK 5.0 Demo Programs (1.5.0_12)
    SUNWj5dmx     JDK 5.0 64-bit Demo Programs (1.5.0_12)
    SUNWj5dvx     JDK 5.0 64-bit Dev. Tools (1.5.0_12)
    SUNWj5jmp     JDK 5.0 Man Pages: Japan (1.5.0_12)
    SUNWj5man     JDK 5.0 Man Pages (1.5.0_12)
    SUNWj5rt      JDK 5.0 Runtime Env. (1.5.0_12)
    SUNWj5rtx     JDK 5.0 64-bit Runtime Env. (1.5.0_12)
    Both of these versions are years old; I am surprised that there is not just a single version of JDK 1.6 installed; it only came out, what, going on 2 years ago? I definitely need JDK 1.6 for all of my software to run.
    On my Windows and Linux boxes, I don't usually have multiple JDKs; I always deinstall the current one before installing a new one. So, I first went to try and deinstall JDK 1.4 by executing
         pkgrm SUNWj3dev SUNWj3dmo SUNWj3dvx SUNWj3irt SUNWj3jmp SUNWj3man SUNWj3rt SUNWj3rtx
    The package manager detected dependencies like
    WARNING:
         The <SUNWmccom> package depends on the package currently being removed.
    WARNING:
         The <SUNWmcc> package depends on the package currently being removed.
    [+ 8 more]
    and I decided to abort the deinstallation because I have no idea what all these other programs are, and I do not want to cripple my system.
    If anyone has any idea what programs Sun is shipping that still depend on JDK 1.4, please enlighten me.
    #2: Is there any easy way to not only deinstall, say, JDK 1.4 but also deinstall all packages which depend on it?
    Maybe this is too dangerous.
    #3: Is there at least a way that I can find all the programs which depend on an entire group of packages like
         SUNWj3dev SUNWj3dmo SUNWj3dvx SUNWj3irt SUNWj3jmp SUNWj3man SUNWj3rt SUNWj3rtx?
    The above functionality would have come in real handy if I could have done it before doing what I describe next.
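    For question #3, one possible approach (a sketch, not an official tool; it assumes the standard Solaris package database layout under /var/sadm/pkg, where each package's install/depend file lists what that package requires) is to grep every installed package's depend file and print the ones that mention a target package:

    ```shell
    # revdeps PATTERN [PKGROOT] - print installed packages whose depend file
    # mentions PATTERN, i.e. an approximate reverse-dependency lookup.
    # Assumes the /var/sadm/pkg/<PKG>/install/depend layout.
    revdeps() {
      pattern=$1
      root=${2:-/var/sadm/pkg}
      for pkg in "$root"/*/; do
        dep="${pkg}install/depend"
        # A depend file lists the packages this one requires ("P" lines),
        # so scanning all of them inverts the relation.
        [ -f "$dep" ] && grep -q "$pattern" "$dep" && basename "$pkg"
      done
    }
    ```

    Something like `revdeps SUNWj3` would then show what depends on the 1.4 packages before you commit to a pkgrm.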
    I next decided to try removing JDK 1.5, so I executed
         pkgrm SUNWj5cfg SUNWj5dev SUNWj5dmo SUNWj5dmx SUNWj5dvx SUNWj5jmp SUNWj5man SUNWj5rt SUNWj5rtx
    I thought that this command would let me know of any dependencies of ANY of the packages listed. It doesn't. Instead, it merely checks the first one and, if no dependencies are found, removes it before marching down the list. In the case above, it happily removed SUNWj5cfg because there were no dependencies on it. Then it stalled on SUNWj5dev because it found dependencies like:
    WARNING:
         The <SUNWmctag> package depends on the package currently being removed.
    WARNING:
         The <SUNWmcon> package depends on the package currently being removed.
    [+ 3 more]
    #4: Have I left my JDK 1.5 crippled by removing SUNWj5cfg? Or was this pretty harmless?
    #5: Was I fairly stupid to attempt the deinstallations above in the first place? Do solaris people normally leave old JDKs in place?
    #6: Or is it the case that those dependency warnings are harmless: I can go ahead and remove all old JDKs, because java programs will always find the new JDK and should run just fine with it?
    #7: What's the deal with Solaris having multiple packages for something like the JDK? With Windows, for instance, the entire JDK has a single installer and deinstaller program. It's much easier to work with than the corresponding Solaris stuff. Do Solaris people simply need that much finer-grained control over what gets installed and what doesn't? (Actually, with the Windows JDK, the GUI installer can let you install selected components should you wish; I am just not sure how scriptable this is versus the Solaris stuff, which may be more sysadmin-friendly if you have to administer many machines.)

    The easiest thing to do is to just install the latest version into a clean directory. I believe different versions of the JDK install into their own separate directories by default. All one needs to do is recreate the symbolic links that point to the version they want to use. The Java install documentation has the details.

  • Oracle PL/SQL best practice

    Hello experts,
    Is there any place I could find Oracle PL/SQL best-practice programming advice? I got a requirement to write a small paper to help new people who come to work for us, so I wanted to put some best practices (or a set of standard tips) in it along with coding standards, etc...
    Best regards,
    Igor

    Hello,
    my first links would be
    Re: 10 database commandments
    On Code Reviews and Standards
    Beware: any discussion about coding standards tends to get lengthy, with flame wars about upper/lower/camel-case keywords, indent style, etc. :-)
    As stated in the linked document: keep them simple.
    Regards
    Marcus
    Best Practices
    Doing SQL from PL/SQL: Best and Worst Practices
    Naming and Coding Standards for SQL and PL/SQL
    PL/SQL Coding Standards
    Also related:
    Performance question
    Re: How to re-construct my cursor ?
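    As a concrete example of the kind of tip such a paper might include (a minimal sketch; the table and column names are made up for illustration): anchor variable declarations to table columns with %TYPE, so code survives column redefinitions.

    ```sql
    -- Illustrative only: anchor variables to column types with %TYPE
    -- instead of hard-coding VARCHAR2 lengths.
    DECLARE
      l_name employees.last_name%TYPE;  -- tracks the column definition
    BEGIN
      SELECT last_name
        INTO l_name
        FROM employees
       WHERE employee_id = 100;
    END;
    /
    ```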

  • Best practices for realtime communication between background tasks and main app

    I am developing (in fact, porting to a WinRT Universal App) an application connecting to Bluetooth medical devices. In order to support background connectivity, it seems best to use background tasks triggered by a device connection. However, some of these devices provide a stream of data which has to be passed to the main app in real time when it is active, i.e. to show an ECG on the screen. So my task ideally should receive and store data all the time (both background and foreground) and additionally let the main app receive it live when it is in the foreground.
    My question is: how do I make the background task pass real-time data to the app when it is active? The documentation talks about using storage, but that does not seem optimal for real-time messaging. Looking for best practices and advice. The platform is Windows 8.1 and Windows Phone 8.1.

    Hi Michael,
    Windows Phone apps have resource quotas. To prevent these quotas from interfering with real-time communication functionality, background tasks using the ControlChannelTrigger and PushNotificationTrigger receive guaranteed resource quotas for every running task. You can find more information at
    https://msdn.microsoft.com/en-us/library/windows/apps/xaml/Hh977056(v=win.10).aspx (see the "Background task resource guarantees for real-time communication" section). ControlChannelTrigger is not supported on Windows Phone, so you can have a look at the PushNotificationTrigger class:
    https://msdn.microsoft.com/en-us/library/windows/apps/xaml/windows.applicationmodel.background.pushnotificationtrigger.aspx.
    Regards,

  • JDeveloper ADF development & CVS best practices?

    Greetings all,
    My team has chosen to use CVS for our ADF source control. Are there any best practices or advice on the source control lifecycle using CVS & JDev?

    Shay Shmeltzer wrote:
    We would recommend that if you are starting a new development project you'll use Subversion instead of CVS.
    I'll echo that - if you're familiar with CVS, you'll find most SVN commands are similar (if not identical!), and you'll find that branching/merging operations and atomic commits will make the problem areas of CVS a little easier.
    Some good discussion here:
    http://stackoverflow.com/questions/245290/subversion-vs-cvs

  • Best Practice Advice - Using ARD for Inventorying System Resources Info

    Hello All,
    I hope this is the place I can post a question like this. If not please direct me if there is another location for a topic of this nature.
    We are in the process of utilizing ARD reporting for all the Macs in our district (3500 +/- a few here and there). I am looking for advice and would like some best practices ideas for a project like this. ANY and ALL advice is welcome. Scheduling reports, utilizing a task server as opposed to the Admin workstation, etc. I figured I could always learn from those with experience rather than trying to reinvent the wheel. Thanks for your time.

    Hey, I am also interested in any tips. We are gearing up to use ARD for all of our Macs, current and future.
    I am having a hard time entering the user/pass for each machine; is there an easier way to do so? We don't have nearly as many Macs running as you do, but it's still a pain to do each one over and over. Any hints? Or am I doing it wrong?
    Thanks
    -wilt

  • Seeking advice on Best Practices for XML Storage Options - XMLTYPE

    Sparc64
    11.2.0.2
    During OOW12 I tried to attend every XML session I could. There was one where a Mr. Drake was explaining something about not using a CLOB as the storage for the XML, and that "it will break your application."
    We're moving forward with storing the industry-standard invoice in an XMLType column, but I'm now concerned that our table definition is not what was advised:
    --i've dummied this down to protect company assets
      CREATE TABLE "INVOICE_DOC"
       (     "INVOICE_ID" NUMBER NOT NULL ENABLE,
         "DOC" "SYS"."XMLTYPE"  NOT NULL ENABLE,
         "VERSION" VARCHAR2(256) NOT NULL ENABLE,
         "STATUS" VARCHAR2(256),
         "STATE" VARCHAR2(256),
         "USER_ID" VARCHAR2(256),
         "APP_ID" VARCHAR2(256),
         "INSERT_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
         "UPDATE_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
          CONSTRAINT "FK_####_DOC_INV_ID" FOREIGN KEY ("INVOICE_ID")
                 REFERENCES "INVOICE_LO" ("INVOICE_ID") ENABLE
       ) SEGMENT CREATION IMMEDIATE
    INITRANS 20  
    TABLESPACE "####_####_DATA"
    XMLTYPE COLUMN "DOC" STORE AS BASICFILE CLOB (
      TABLESPACE "####_####_DATA" ENABLE STORAGE IN ROW CHUNK 16384 RETENTION
      NOCACHE LOGGING
      STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
    XMLSCHEMA "http://mycompanynamehere.com/xdb/Invoice###.xsd" ELEMENT "Invoice" ID #####
    What is a best practice for this type of table?  Yes, we intend on registering the schema against an xsd.
    Any help/advice would be appreciated.
    -abe

    Hi,
    I suggest you read this paper : Oracle XML DB : Choosing the Best XMLType Storage Option for Your Use Case
    It is available on the XML DB home page along with other documents you may be interested in.
    To sum up, the storage method you need depends on the requirement, i.e. how XML data is accessed.
    There was one where a Mr. Drake was explaining something about not using clob as an attribute to storing the xml and that "it will break your application."
    I think the message Mark Drake wanted to convey is that CLOB storage is now deprecated and shouldn't be used anymore (though it is still supported for backward compatibility).
    The default XMLType storage starting with version 11.2.0.2 is Binary XML, a post-parse binary format that optimizes both storage size and data access (via XQuery), so you should at least use it instead of BASICFILE CLOB.
    Schema-based Binary XML is also available, it adds another layer of "awareness" for Oracle to manage instance documents.
    To use this feature, the XML schema must be registered with "options => dbms_xmlschema.REGISTER_BINARYXML".
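    A minimal sketch of what that might look like (the table definition and schema URL are simplified placeholders based on the original post, not the poster's exact DDL):

    ```sql
    -- Hypothetical sketch: Binary XML storage instead of BASICFILE CLOB.
    CREATE TABLE invoice_doc (
      invoice_id NUMBER NOT NULL,
      doc        XMLTYPE NOT NULL
    )
    XMLTYPE COLUMN doc STORE AS SECUREFILE BINARY XML;

    -- For schema-based Binary XML, register the XSD for binary encoding:
    DECLARE
      l_xsd CLOB;  -- placeholder: load the XSD source here
    BEGIN
      DBMS_XMLSCHEMA.registerSchema(
        schemaURL => 'http://mycompanynamehere.com/xdb/Invoice.xsd',
        schemaDoc => l_xsd,
        options   => DBMS_XMLSCHEMA.REGISTER_BINARYXML
      );
    END;
    /
    ```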
    The other common approach for schema-based XML is Object-Relational storage.
    BTW... you may want to post here next time, in the dedicated forum : {forum:id=34}
    Mark Drake is one of the regulars here, along with Marco Gralike, whom you've probably seen at OOW too.
    Edited by: odie_63 on 18 oct. 2012 21:55

  • SubFlow best practice advice please

    I am new to IPCCX scripting and would like some advice on whether multiple SubFlows are a good idea.
    We have 16 small Call Centers that all have very basic scripts. I plan on adding a Holiday/Emergency Closure SubFlow to all of them and I would like to add a few additional features as well.
    I plan on adding:
    ·         Position in Queue
    ·         Expected Wait Time
    ·         If more than X number of callers in queue, inform the caller that they cannot be helped at this time and to call back later.
    ·         If Expected Wait Time exceeds closing time, inform the caller that they cannot be helped at this time and to call back later.
    (I know the last two sound pretty harsh, but it’s government, and there is no budget to hire more operators. I think it is better to let the callers know early that they need to call back, than to have them wait for two hours just to be disconnected. And no, this is NOT the 911 call center!!!   LOL)
    My questions are:
    Would it be OK to add each feature as a SubFlow? Or could there possibly be performance or other issues from having so many SubFlows in one script?
    My other option is to add each item internal to each script, but that would be a lot to tackle 16 times…
    Lastly, is there a best practice on how short a script should be for performance? I know you can’t have one that is longer than 1000 steps, but should I try to keep the step count below a certain number?
    Any advice or insights would be greatly appreciated…
    Thanks,
    Doug.

    Doug,
    Most of the items on your list are included natively in UCCX. Use the Get Reporting Statistic step to obtain the expected wait time, position in queue, and total queue time information. The current time of day can be had using a Time variable. You'll need to do some work to convert the values into something you can play to the calling party as a prompt; a subflow is great for that, but you shouldn't need to reinvent the wheel.
    Take the time to draw out the call flow for each of the 16 contact centers on paper or in MS Visio. If your 16 call flows are very similar in the way they operate, consider a master script that just changes the prompts/menus based on the number dialed.  Leverage XML or a database (if you have premium licensing) to pull in the relevant information you need for each DNIS.  You may find you can streamline the entire system, or at least a good portion of it, without it becoming 16 unwieldy applications.
    Steven

  • Advice for Soon-to-be MacPro Owner. Need Recs for Best Practices...

    I'll be getting a Quad Core 3 Ghz with 1GB of RAM, a 250Gig HD, the ATI X1900 card. It will be my first mac after five years (replacing a well-used G4 Tibook 1Ghz).
    First the pressing questions: Thanks to the advice of many on this board, I'll be buying 4GB of RAM from Crucial (and upgrading the HD down the road when needs warrant).
    1) Am I able to add the new RAM with the 1G that the system comes with? Or will they be incompatible, requiring me to uninstall the shipped RAM?
    Another HUGE issue I've been struggling with is whether or not to batch migrate the entire MacPro with everything that's on my TiBook. I have so many legacy apps, fonts that I probably don't use any more and probably have contributed to intermittent crashes and performance issues. I'm leaning towards fresh installs of my most crucial apps: photoshop w/ plugins, lightroom, firefox with extensions and just slowly and systematically re-installing software as the need arises.
    Apart from that...I'd like to get a consensus as to new system best practices. What should I be doing/buying to ensure and establish a clean, maintenance-lite, high-performance running machine?

    I believe you will end up with 2x512MB RAM from the Apple Store. If you want to add 4GB more, you'll want to get 4x1GB RAM sticks. 5GB is never an "optimal" amount, but people talk like it's bad or something, when it's simply that the last gig of RAM isn't accessed quite as fast. You'll want to change the placement so the 4x1GB sticks are "first" and all paired up nicely, so your other two 512MB sticks only get accessed when needed. A little searching here will turn up explanations for how best to populate the RAM for your situation. It's still better to have 5 gigs, where the 5th gig isn't quite as fast, than 4. They will not be incompatible, but you WILL want to remove the original RAM, put the 4GB sticks into the optimal slots, then add the other two 512MB chips.
    Do fresh installs. Absolutely. Then only add those fonts that you really need. If you use a ton of fonts I'd get some font checking app that will verify them.
    I don't use RAID for my home machine. I use 4 internal 500gig drives. One is my boot, the other is my data (although it is now full and I'll be adding a pair of external FW). Each HD has a mirror backup drive. I use SuperDuper to create a clone of my Boot drive only after a period of a week or two of rock solid performance following any system update. Then I don't touch it till another update or installation of an app followed by a few weeks of solid performance with all of my critical apps. That allows me to update quicktime or a security update without concern...because some of those updates really cause havoc with people. If I have a problem (and it has happened) I just boot from my other drive and clone that known-good drive back to the other. I also backup my data drive "manually" with Superduper.
    You will get higher performance with Raid of course, but doing that requires three drives (two for performance and one for backup) just for data-scratch, as well as two more for boot and backup of boot. Some folks can fit all their boot and data on one drive but photoshop and many other apps (FCP) really prefer data to be on a separate disk. My setup isn't the absolute fastest, but for me it's a very solid, low maintenance,good performing setup.

  • Advice on Best practice for inter-countries Active Directory

    We want to merge three Active Directories, with one as the parent in Dubai and children in Dubai, Bahrain, and Kuwait. The time zones are different and the sites are connected using VPN/leased lines. In my studies I have explored two options. One way is to have the parent domain/forest in Dubai and a child domain in each respective country/office; the second way is to have the parent and all child domains in the Dubai data center, as it is bigger, while the respective countries have DCs connected to their respective child domains in Dubai. (Personally, I find the second option safer.)
    Kindly advise which approach comes under best practice.
    Thanks in advance.

    Hi Richard Mueller,
    You perfectly got my point. We have three different forests/domains in three different countries. I asked this question because I am worried about problems with replication.
    And yes, there are political reasons why we want to have multiple domains under one single forest. I have these following points:
    1. With multiple domains you introduce complications with trusts.
    (Yes, we will face complications; that is why I will have a VM with three child domains for the 3 countries in HQ, sitting right next to my main AD server which holds the forest/domain, which I hope will help in fixing replication problems.)
    2. And accessing resources in remote domains. (To address this issue I will implement two additional DCs in the respective countries to make the resources available; these RODCs will be pointed toward their respective main domains in HQ.)
    As an example:- 
    HQ data center=============
    Company.com (forest/domain)
    3 child domain to company.com
    example uae.company.com
    =======================
    UAE regional office=====================
    2 RODCs pointed towards uae.company.com in HQ
    ==================================
    Please tell me if i make sense here.

  • Rules Best Practice Advice Required

    I find that I'm fighting with the Business Rules in my BPM project, so I'd thought I throw the scenario out here and see what best practices anyone might propose.
    The Example*:
    Assume I have people, and each of them is assigned a list/array of "aspects" from an enumerated set: TALL; SPORTY; TALKATIVE; TRAVELER; STUDIOUS; GREGARIOUS; CLAUSTROPHOBIC.
    Also assume I have several Marketing campaigns, and as part of one or more processes, I need to occasionally determine whether a person fits the criteria for a particular campaign, which is based on the presence of a one or more aspects. The definitions of the campaigns may change, so the thought is to define them as business rules; if they change, the rule changes, without impacting the processes themselves (assume the set of campaigns doesn't change, just the rules for matching aspects to a particular campaign).
    My initial take is to define each campaign as a bucketset containing aspects, the presence of which indicates inclusion in the campaign. If a person has ANY of the aspects, they are considered a member.
    Campaigns (each perhaps defined as a LOV bucketset):
    DEODORANT: SPORTY, TRAVELER, GREGARIOUS
    E_READER:STUDIOUS,TRAVELER
    BREATH_MINT:TALKATIVE, GREGARIOUS
    HELMET:TALL, CLAUSTROPHOBIC
    So we want to create a service to check: Does a person belong to the BREATH_MINT campaign? We extract their aspects and check to see if ANY of them are in the BREATH_MINT campaign. If so, we return true. Basically: return ( intersection( BREATH_MINT.elements(), person.aspects() ) ).size() > 0
    The problem is: what's the best way to implement this using Business Rules? Functions? Decision Functions? Decision Tables? Straight IF/THEN? Some combination of the above? I find I'm fighting the tool, which means that, although this is a fairly simple problem, I don't understand the purpose of the various parts of the tool well.
    Things to consider:
    Purpose: test a person for inclusion in a specific campaign
    Input - the person's aspects, either directly, or extracted from the person
    Output - a boolean
    There can be a separate service for each campaign, or the campaign could be specified by an enumerated value as a parameter.
    Many thanks in advance!
    ~*Completely Fabricated~
    Edited by: 842765 on Mar 8, 2011 12:07 PM - typos
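    The intersection test described above can be pinned down outside the rules engine first. Here is a hedged Java sketch (the enum values come from the post; the class and method names are made up) of the intended semantics, which whichever Business Rules construct you pick would then need to reproduce:

    ```java
    import java.util.EnumSet;
    import java.util.Set;

    // The aspects from the (fabricated) example.
    enum Aspect { TALL, SPORTY, TALKATIVE, TRAVELER, STUDIOUS, GREGARIOUS, CLAUSTROPHOBIC }

    class CampaignMatcher {
        // One campaign bucketset from the post.
        static final Set<Aspect> BREATH_MINT = EnumSet.of(Aspect.TALKATIVE, Aspect.GREGARIOUS);

        // True when the person has ANY aspect in the campaign, i.e.
        // intersection(campaign, personAspects) is non-empty.
        static boolean matches(Set<Aspect> personAspects, Set<Aspect> campaign) {
            Set<Aspect> overlap = EnumSet.copyOf(campaign);  // copy so the campaign set is not mutated
            overlap.retainAll(personAspects);
            return !overlap.isEmpty();
        }
    }
    ```

    Since every campaign reduces to this one-line set intersection, that argues for a single parameterized service (campaign passed as an enumerated value) rather than a separate service per campaign.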


  • Switch with 2 wireless routers (configuration for best practice/advice?)

    HI folks,
    I have a gigabit switch, and 2 wireless G routers.  I'll leave the model numbers out as it's fairly irrelevant - all linksys.
    Router 1 is used as a router only (due to location in basement)
    Router 2 is used for wireless only
    My current network setup:
    DSL MODEM (accessed on 192.168.2.1 - can not be changed) > Router 1(192.168.1.1)
    Router 1 > Switch (I believe it can't be changed; it's at 192.168.2.12, and there's no web GUI)
    Switch > everything else, including Router 2
    Everything works except Router 2: I can't connect to it wired or wirelessly unless it is connected directly to a PC.
    Is my setup wrong, and/or is there a best practice?
    Many thanks!!!

    What is the model number of the switch?
    Normally a switch that cannot be changed does not have an IP address. So if your switch has an address (you said it was 192.168.2.12), I would assume that it can be changed and that it must support either a GUI or some other way to set or reset the switch.
    Since Router1 is using the 192.168.1.x  subnet , then the switch would need to have a 192.168.1.x  address (assuming that it even has an IP address), otherwise Router1 will not be able to access the switch.
    I would suggest that initially, you setup your two routers without the switch, and make sure they are working properly, then add the switch.  Normally you should not need to change any settings in your routers when you add the switch.
    To setup your two routers, see my post at this URL:
    http://forums.linksys.com/linksys/board/message?board.id=Wireless_Routers&message.id=108928
    Message Edited by toomanydonuts on 04-07-2009 02:39 AM

  • Best practice for smooth workflow in PrE?

    Hi all.  I'm an FCP user for many many years, but I'm helping an artist friend of mine with a Kickstarter video...and he's insistent that he's going to do it himself on his Dell laptop running Win7 and PrE (I believe v11, from the CS3 package)...so I'm turning to the forum here for some help.
    In Apple Land (that is, those of us still using FCP 7), we take all our elements in whatever format they're delivered to us and transcode them to ProRes, DVCPro HD or XDCAM...it just makes it easier not to deal with mixed formats on the timeline (please, no snarky comments about that, OK, I turn out broadcast work every week doing this so this method's got something going for it...).  However, when I fired up PrE I see that you can edit in all sorts of formats, including long-GOP formats like .mts and mp4 files that I wouldn't dream of working with natively in FCP...I don't enjoy staring at spinning beachballs that much. 
    Now, remembering that he's working with a severely underpowered laptop with 2GB of RAM and a USB2 connection to his 7200rpm "video" drive, and also considering that most of the video he'll be using will come in two flavors (AVCHD from a Canon Vixia 100, and HDV from a Canon EX-something or other), what would be the best way to proceed to maximize the ease with which he can edit? I'm thinking that transcoding to something like Motion-JPEG or some other intra-frame compressed AVI format would be the way to go. It's a short video and he won't have that much material, so file-size inflation isn't an issue; speed and ease of processing the video files on the timeline (or do they call it a "Sceneline") is.
    Any advice (besides "buy another computer") would be appreciated...

    Steve, thanks, this is helping me now.
    I mention MJPEG because, as an intra-frame compression method, it's less processor-intensive than GOP-style MPEG compressions.  Again, my point of reference is the Mac/FCP7 world (so open my eyes as to how an Intel processor running Win7 would work differently), but over there best practice says NOT to edit in a GOP-based codec (XDCAM being the exception that proves the rule, e.g. render times), but to transcode everything from, say, h264 or AVCwhatever into ProRes.  YES, I know PrE (and PPro) doesn't use ProRes...not asking for that.  But, at least at this juncture, any sort of hardware upgrade is out of the question...this is what he's gonna be using to edit.  Now if I was going to be using an underpowered Mac laptop to try and edit, I most certainly would not try to push native AVCHD .mts files or native h264 files through it...those don't even work well with the biggest Mac Pro towers.  What is it about PrE that allows it to efficiently work with these processor-intensive formats?  This is the crux of the issue, as I'm going to advise my friend to "work this way," and I don't want to send him down the garden path of render hell...
    And finally, your advice to run tests is well-given...since I have no experience with PrE and his computer, I guess that's where we'll start...
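    If he does go the transcode-first route, a command-line tool like ffmpeg can do the AVCHD-to-Motion-JPEG conversion before the footage ever touches PrE. A minimal sketch of assembling that command (ffmpeg is an assumption here, not something either forum post mentions, and the filenames and quality value are placeholders):

```python
# Sketch: build an ffmpeg command that transcodes an AVCHD clip (.mts)
# to intra-frame Motion-JPEG in an AVI container, which is far easier
# on an underpowered CPU than editing long-GOP media natively.
# File names and the -q:v value are illustrative assumptions.
import shlex

def mjpeg_transcode_cmd(src: str, dst: str, quality: int = 3) -> list[str]:
    """Build the ffmpeg argv list; lower -q:v means higher quality (2-31)."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "mjpeg", "-q:v", str(quality),  # intra-frame video codec
        "-c:a", "pcm_s16le",                    # uncompressed PCM audio
        dst,
    ]

cmd = mjpeg_transcode_cmd("clip.mts", "clip.avi")
print(shlex.join(cmd))
# To actually run it (requires ffmpeg on the PATH):
#   subprocess.run(cmd, check=True)
```

    The trade-off is exactly the one described above: every frame is stored whole, so files balloon in size, but the CPU never has to reconstruct frames from a GOP while scrubbing the timeline.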

  • Best practice on Oracle VM for Sparc System

    Dear All,
    I want to test Oracle VM for SPARC, but I don't have a newer-model server to test it on. What is the best practice for trying out Oracle VM for SPARC?
    I have a Dell laptop with the following specs:
    - Intel® Core™ i7-2640M (2.8 GHz, 4 MB cache)
    - RAM: 8 GB DDR3
    - HDD: 750 GB
    - 1 GB AMD Radeon
    I want to install Oracle VM VirtualBox on my laptop and then install Oracle VM for SPARC inside VirtualBox. Is that possible?
    Please kindly give advice,
    Thanks and regards,
    Heng

    Heng Horn wrote:
    How about a desktop or workstation computer whose latest models have a CPU that supports Oracle VM for SPARC?
    Nope. The only place you find SPARC T4 processors is in Sun servers (and some Fujitsu servers, I think).

  • Best Practices to do software Patching and Software Deployment for bigger environment like 300 K computers

    Hi Friends,
    I am looking for low-level suggestions (and a PPT/document etc. as well). The client base is 300K users spread globally (mainly across three regions). The requirements:
    1) Methodology for software patching: can we patch everything in one go, or do we have to divide by region etc.?
    2) How many clients can be targeted for software patching in one go (e.g., can we target 20K clients at once)? I know other factors like bandwidth also play a key role here, but I am looking for answers based on real-world experience.
    3) What methodology should we follow when it comes to critical/emergency updates?
    Regards
    Tanoj
    OSLM ENGINEER - SCCM 2007 & 2012

    There is no single best practice for patching; if there were, SCCM would ship preconfigured :).  As an example, Microsoft internally patches 300,000 workstations with 98% success in about a week, according to their own podcast:
    Microsoft Podcast
    That said, I do follow a few rules when building a patching plan for a client.  Maybe you'll find it helpful:
    Always use a "soak tier".  I forget where I first heard the term, but the idea is to have a good cross-section of users get patches one or more weeks before your general deployment.  This will help identify potential issues with a patch before it hits general release.  Make sure said group is NOT just the IT department ... we make the worst guinea pigs (we aren't known for closing out end-of-the-month billing or posting legal documents).
    When it comes to workstations, avoid needlessly phased deployment.  99% of the time, using local time zones is enough of a phased deployment.  Unlike servers with very particular boot and patching orders, workstations can simply be patched.  You have enough collections in your environment ... so any new collection for patching should be justified.
    Keep your ADR count down.  It's tempting to build a new ADR for everything (workstations, general servers, Exchange servers, etc.).  The problem is that best practice also has you building a new SUG every time each ADR runs ... so you end up getting flooded with update groups and that much more maintenance.  When possible, simply use maintenance windows to break up patching schedules instead of using mostly duplicate ADRs that simply have separate start dates.
    Use Orchestrator.  To me, Orchestrator is to Software Updates what MDT is to Operating System Deployments: effectively mandatory.  Even if you don't have complicated cluster updates you need to automate with SCO integrated into SCCM (there are great examples on the web if you do), you can at the very least create runbooks to manage the monthly maintenance you otherwise have to handle manually in SCCM (which is a lot, IMO).  I have monthly runbooks that delete expired updates from SUGs, consolidate SUGs older than 6 months into a single annual group, and even create new update packages (and update all ADRs to use them) every 6 months to keep a single repository from getting too large.
    I'm sure others out there can give you more advice ... but that's my two cents.
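    The soak-tier-plus-time-zone idea can be sketched as a simple wave plan. This is purely illustrative (client names, regions, and the soak fraction are made up, and a real SCCM deployment would express these groups as collections, not Python):

```python
# Sketch: split a large client list into a small "soak" group that
# patches a week early, then region-based waves for general release.
from collections import defaultdict

def plan_waves(clients, soak_fraction=0.02):
    """clients: list of (name, region) tuples. Returns (soak, waves)."""
    soak_size = max(1, int(len(clients) * soak_fraction))
    # First N clients stand in for a representative cross-section here;
    # in practice you would hand-pick the soak group across departments.
    soak = clients[:soak_size]
    waves = defaultdict(list)           # one wave per region/time zone
    for name, region in clients[soak_size:]:
        waves[region].append(name)
    return soak, dict(waves)

clients = [(f"pc{i}", region)
           for i, region in enumerate(["EMEA", "APAC", "AMER"] * 100)]
soak, waves = plan_waves(clients)
print(len(soak), sorted(waves))
```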
