CLI framerate and serial jobs on a cluster

Hello,
I have a couple of questions about Compressor (version 4).
1) Is there a way to specify the frame rate for an image sequence using the command line interface?
The Compressor help lists the following, but does not explain how such an option works; I tried a few different ways in vain:
-jobpath  -- url to source file.                        -- In case of Image Sequence, URL should be a file URL pointing to directory with image sequence.
-- Additional parameters may be specified to set frameRate (e.g. frameRate=29.97) and audio file (e.g. audio=/usr/me/myaudiofile.mov).
2) I have a managed cluster with 8 nodes, each running 4 instances. For some reason Compressor only uses 6 instances spread across several nodes, not all 32.
3) Is there a way to specify and process just one job at a time for a given cluster? This would be the equivalent of a serialize option, but it does not seem to be available.
Currently, when I submit multiple jobs, a few of them run at once, create havoc on the NFS mounts, and fail. I can limit the job queue to one, but that is not ideal.
I would appreciate any pointers.
Thanks
--Amit

Hi Amit, just saw your post. I assume you are passing the settings via "-settingspath <my_specific_job_settings_in_a_file_path_somewhere.settings>" on the compressor command?
If so, it's a very simple matter to specify the frame rate (and so on) in the VIDEO setting itself, save it, and use it.
I don't recall that such "atomic settings" were actually available in v3 of Compressor.app. I'll check later for v4, but I'd be surprised if they are. :)
What I've done in the past is simply make my own personalised settings (and destinations) in the Compressor.app v4 UI (Save As... under my own name, e.g. "prores_noaudio_PAL") and pass those file paths via the -settingspath parm on the compressor CLI. In your case I'd imagine a simple VIDEO setting for your frame rate and you're set!
Compressor.app v4 keeps any of your own roll-your-own settings in your ~/Library/Application Support/Compressor/Settings folder. You can copy, move, or link these anywhere you like and pass the customised paths to the compressor CLI as above.
I also doubt these atomic settings are available as AppleScript properties; there are likely no vars like that there, methinks. I recall the same ones exist as they do for the CLI, and in Compressor.app 4 they probably support Qmaster. Yes, Compressor.app is AppleScriptable; it's not in the default library list, so just add it.
Lastly, at a guess, these Compressor ".setting" files are probably XML-based, so you might consider tinkering with one in an editor.
Anyway, try the "-settingspath" operand and see how you go.
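To make that concrete, a submission might look something like the sketch below (the paths, names, and especially the ?frameRate= suffix on the job URL are my assumptions based on the help text quoted above, not verified syntax):

    # Hypothetical Compressor 4 submission -- adjust every path and name for your setup.
    # The ?frameRate=29.97 suffix is one reading of the -jobpath help text quoted earlier.
    /Applications/Compressor.app/Contents/MacOS/Compressor \
        -jobpath "file:///Volumes/media/my_sequence/?frameRate=29.97" \
        -settingspath ~/Library/Application\ Support/Compressor/Settings/prores_noaudio_PAL.setting \
        -destinationpath /Volumes/media/out/my_sequence.mov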
2) The way Qmaster schedules across unmanaged transcode nodes is likely based on how busy each node is. You should be able to see if there is a pattern simply by looking at the Qmaster job controller log: see the Share Monitor.app info, or use Console.app to look in your ~/Library/Application Support/Apple Qmaster/logs directory. There will be something in there.
Also have a look for any errors in the cluster services logs, in case the services you expect to be there actually are not.
Are you using a managed cluster? Personally I have found this quite stable. Just make sure those services are for managed clusters only.
3) Yes, you can specify a specific cluster using the "-clusterid" operand. Should you have more than one managed cluster in the farm, this is a cool way to go. Consider the "-priority" operand as well for hot jobs, and make sure all other submissions are low priority... it's batch... works great!
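As a sketch (the cluster address and priority value are assumptions; check compressor -help on your install for the exact forms):

    # Hypothetical: target one managed cluster and keep routine batch work low priority.
    /Applications/Compressor.app/Contents/MacOS/Compressor \
        -clusterid "tcp://192.168.1.10:51737" \
        -priority low \
        -jobpath /Volumes/media/in/job01.mov \
        -settingspath ~/Library/Application\ Support/Compressor/Settings/prores_noaudio_PAL.setting \
        -destinationpath /Volumes/media/out/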
4) NFS mounts... well, the simple rules are: keep them on their own subnet; make sure all destinations and sources are available to all hosts; set the Compressor option to copy to cluster storage only when you must; and make the service time for NFS read requests as fast as you can (jumbo frames, dedicated NICs and switches, and optimally fast-reading file systems). Keep other traffic away from it. Works a treat!
Should you turn something up, please post it here for others to see. I'm certainly interested.
Sorry for the typos... just on the MTR with my iPhone on the way home. :)
Hth
Warwick
Hong Kong

Similar Messages

  • OFFLINE backup and DB13 jobs configuration errors in MSCS cluster with OFS

    Hello,
    We have recently installed SAP ERP 6.0 EhP4 SR1 in MSCS cluster using Oracle Fail Safe.
    The installation is successful and all the failover scenarios work fine.
    I need to schedule offline backups and DB13 jobs, which is not possible in the current scenario, as once the database shuts down it fails over to node2.
    I am following SAP Note 657999 to install a standalone gateway on both nodes of the cluster and have assigned the service to the ORACLE<SID> group. The gw<SID> is different from the ERP<SID>.
    However, the service SAP<gwSID>_<gwSN> fails to come up with the following error:
    The service SAPSGA_30 cannot be started on local computer
    error 1067: the process terminated unexpectedly.
    The ../SapCluster/sapgw and associated directories have full permission.
    Following are the contents of the profiles in ../SapCluster/sapgw
    default.pfl
    SAPDBHOST = ORACLEPRD
    gw30.pfl
    SAPSYSTEMNAME = SGA
    INSTANCE_NAME = GW30
    SAPSYSTEM = 30
    DIR_PROFILE=C:\WINDOWS\SapCluster\sapgw
    DIR_EXECUTABLE=C:\WINDOWS\SapCluster\sapgw
    DIR_INSTANCE=C:\WINDOWS\SapCluster\sapgw
    SAPLOCALHOST = ORACLEPRD
    SAPLOCALHOSTFULL = ORACLEPRD
    startgw.pfl
    SAPSYSTEMNAME = SGA
    INSTANCE_NAME = GW30
    SAPSYSTEM = 30
    DIR_EXECUTABLE = C:\WINDOWS\SapCluster\sapgw
    DIR_PROFILE = C:\WINDOWS\SapCluster\sapgw
    DIR_INSTANCE = C:\WINDOWS\SapCluster\sapgw
    _GW=gwrd.exe
    Start_Program_00 = local $(DIR_EXECUTABLE)\$(_GW) pf=$(DIR_PROFILE)\gw30.pfl
    Now, as the service is part of my ORACLE<ERP SID> group, every time I try to bring the group online it fails due to the gw service and toggles between the two nodes many times before coming to rest on one node with a failed gw service and the database up.
    Any insight in this regard will be really helpful.
    Thanks
    Nischal

    Hello,
    Yes, we were able to configure offline backups.
    We opted to install a standalone gateway on both nodes and added it to the DB group. That provides the essential shell console.
    You need to follow SAP Note 657999 and then 378648 to set the necessary environment variables.
    One piece of advice: use a SID and system number for the independent gateway that differ from any of your SAP installations.
    Thanks
    Nischal

  • Need help in query to display lot and serial numbers for a Wip Job Component

    Hi ALL,
    I have a requirement as below.
    I need to display lot and serial numbers for a WIP job component. I have an XML report in which, for each WIP job component, I need to check whether it is a lot-controlled or serial-controlled item, and display some data based on that. Can you please help me with a query to identify the lot and serial numbers for a WIP job component?
    Thanks
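    As a rough sketch of the kind of query involved (assuming the standard Oracle EBS tables MTL_SYSTEM_ITEMS_B and WIP_REQUIREMENT_OPERATIONS; the control-code values are assumptions to verify against your instance):

        -- Sketch: flag each WIP job component as lot- and/or serial-controlled.
        -- lot_control_code = 2 usually means full lot control; serial_number_control_code = 1
        -- usually means no serial control (verify the codes on your instance).
        SELECT wro.wip_entity_id,
               msi.segment1 AS component_item,
               DECODE(msi.lot_control_code, 2, 'LOT', 'NO LOT') AS lot_ctrl,
               DECODE(msi.serial_number_control_code, 1, 'NO SERIAL', 'SERIAL') AS serial_ctrl
          FROM wip_requirement_operations wro,
               mtl_system_items_b msi
         WHERE msi.inventory_item_id = wro.inventory_item_id
           AND msi.organization_id = wro.organization_id;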

    Thank you for replying, Gordon. I did not remember I had asked this before. I no longer have access to the other account.
    What I need from the query is a list of items with the on-order quantity and when we are expecting each order to be received. The receiving date is based on the PO delivery date. The trick here is that I need the master SKU to show the delivery date of the component SKU. In my scenario all components have the same delivery date for the master SKU. If there are multiple delivery dates because each warehouse is getting them on a different date, then I want the delivery date for that warehouse.
    Let me know if this is possible and if yes please guide towards a query for it.
    Thank you.

  • Adobe Store takes too long to deliver download and serial information

    Earlier this week we received some InDesign files for a job, but they were in InDesign 5 format, so we were unable to open them with InDesign CS4. Taking this as a sign that it was time to upgrade, yesterday at around 1pm UK time I purchased the upgrade from Creative Suite Design Premium 4 to Creative Suite Design Premium 5.5. I logged onto the Adobe.com store, looked at the options (Home and Home Business; Small and Medium Business; Education) and selected Small and Medium Business (because that is what we are). I purchased the upgrade (as a download), received a confirmation email from Adobe, and waited... and waited... and waited. At the end of the day I called Adobe Customer Support and asked why I had not received any email with links to download sites and serial numbers. I was told it can take 24 hrs to process the order.
    Well here we are over 24 hrs later and I have still not had any information from Adobe. I tried calling the customer support and it is only open between 9am and 5pm Monday to Friday. The customer support website is down. The Adobe Volume Licensing site is down - and was all day yesterday as well.
    In the meantime I have no way of opening the files. My deadline is Wednesday and we had planned to work on the job today and tomorrow (which is why I am sitting in my office now). It appears I cannot access customer support before Monday morning, and the customer support website is down until tomorrow evening UK time - at least. This means working all night on Monday and probably Tuesday - assuming we get access to the download information early Monday.
    Adobe is one of the biggest software vendors in the world. When I buy software - online and for download, with a credit card - I expect to be able to access the software immediately. There is no excuse for this delay. None. A business the size of Adobe should not have customer services down for 2 days. Support should run 24/7, 365 days a year. My enquiry yesterday was routed to an offshore call centre, so having a 9 to 5 cut off is simply ridiculous.
    The simple fact is that even the smallest software vendors we buy from deliver the goods within minutes. As you can tell I am mightily "annoyed" by this. At this point in time I am wishing these files were in Quark Xpress format (we have both). Quark used to have terrible support but they are now very good - and available.
    As a customer of Adobe for over 15 years I am disappointed. Today, in my email inbox I have emails from Adobe about new online cloud services. Well, why would I want to invest any more in Adobe software if I cannot even access the ones I have bought - quickly?
    If companies like Adobe want to know why piracy is rife, look at your own delivery systems. The only option I have now to do this job is either to download a pirated version (which I will not) or to download and run the trial for InDesign. How stupid is that?
    Apologies for the rant.

    ProDesignTools wrote:
    Kevin Quigley wrote:
    ... selected Small and Medium Business (because that is what we are). I purchased the upgrade (as a download), ... 
    Well here we are over 24 hrs later and I have still not had any information from Adobe. ...
    In the meantime I have no way of opening the files. My deadline is Wednesday ...
    Hi Kevin, not sure why it's taken longer than usual, but you can just download the free trial in the meantime.  It will work 100% for 30 days, and you can convert it over to the full version once you receive your permanent serial number without reinstalling.  This is fine even with a business license.
    There's really no reason not to proceed in this fashion, especially with an imminent deadline.  Adobe recommends this in cases where there is any sort of delay (for example, student validation) or immediate need.  And absolutely you should avoid downloading the software from anywhere else, which is not only illegal but also hazardous to the health of your computer (malware).
    Hope this helps!
    Acrobat has never had a Trial for Mac. Probably never will.

  • Archiving Equipments and Serial Numbers

    Dear PM Experts,
    I have searched a lot for how to archive equipment and serial numbers. I used SARA and maintained the variant and so on, but in the end the job was cancelled in the log. Can anyone please tell me, in brief, the steps to archive equipment and serial numbers? I really appreciate all the great support here.
    Much Thanks in advance

    Hi,
    [Archiving Serial Numbers|http://help.sap.com/saphelp_46c/helpdata/en/38/d2a784d02411d395c500a0c93029cf/content.htm]
    [Useful Link|http://www.sapfans.com/forums/viewtopic.php?f=7&t=169672]
    I have not tried this (equipment with serial number).
    BADI_CCM_EQUI_ARC is the equipment archiving BAdI => analyse this with your ABAPer.
    Regards,
    Maheswaran.

  • I have CS4 installed on my old computer; I have since lost the disk and serial #s. Is there any way to recover this information and install the programs on my new computer?

    I have CS4 installed on my old computer, and I have since lost the disk and serial #s. Is there any way to recover this information and install the programs on my new computer? I used my old computer for college and haven't used it in a while; now I have a job that can benefit from having those programs. I was wondering if Adobe keeps records or something so that I can download the programs onto my new MacBook and use my old serial #s.

    Find your serial number quickly
    Download CS4 products
    Mylenium

  • SCSM 2012 R2 Workflows and DW Jobs Do Not Come Up After SQL Node Switch

    Hello,
    We have an MS SQL Server cluster consisting of 2 nodes in our environment. After nightly OS updates and the consequent restart of one of the SQL cluster nodes (where an update takes place), SCSM workflows stop working; in certain cases Data Warehouse jobs also stop, until I restart the SCSM server and start the DW jobs manually.
    According to the logs, SQL Server switches nodes successfully, but it is still impossible to connect to the SQL Server for 1-3 minutes. Is this normal behavior? Is there any best practice for handling SQL connection issues and patch management on SQL cluster nodes?
    Thanks!

    Restart the System Center Data Access Service on the workflow server (the SCSM management group server you installed) any time the SQL Server migrates. The workflow service runs every 30 seconds looking for work to do, and if it fails a few times, it stops looking. Restarting the service is the only way that I'm aware of to fix it.
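    For example (assuming the default service name OMSDK for the System Center Data Access Service; verify it with "sc query" on your management server):

        rem Sketch: restart the data access service from an elevated prompt on the workflow server.
        net stop OMSDK
        net start OMSDK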

  • Hive and pig jobs to any container in storage account

    Hi
    While creating an HDInsight account, we choose a storage account.
    A container will be created with the name of the cluster, and the required JARs will be added to it.
    Can we submit Pig, Hive, and Sqoop jobs to any container in the storage account, or only to the container that is created while creating the cluster?
    If we can submit to any container, and I am using the Hive shell in an RDP session on the headnode, then how do I specify the container?

    Hi Sridevi_K
    It should work. While creating an external table in Hive, you can define the location (here assuming the file is at the root of a container) like 'wasb://[email protected]/'.
    The next time you query the table, it knows where to look for the data, since this info is in the metadata. If you want to check, type this command in the Hive console: DESCRIBE EXTENDED TABLE_NAME. In a Pig script you can likewise define the file location while loading the data, e.g. loaddata = LOAD 'wasb://[email protected]/FILE_NAME.EXTENSION';.
    You can check it in your environment. However, I am curious why you want to do that.
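    Spelled out as a sketch (the container, account, table, and file names are the placeholders from above; the column list is invented for illustration):

        -- Hive: external table whose data lives in an arbitrary container.
        CREATE EXTERNAL TABLE table_name (col1 STRING)
        LOCATION 'wasb://[email protected]/';
        -- Confirm where the table points:
        DESCRIBE EXTENDED table_name;
        -- Pig: load directly from a container.
        loaddata = LOAD 'wasb://[email protected]/FILE_NAME.EXTENSION';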
    Thanks,
    Please mark it answered if it's answer your question.
    Sudhir Rawat

  • Having difficulty starting a Compressor job using a cluster

    The Batch Monitor will error out; something to do with timing out before Qmaster can open its storage area on the host machine (not sure if that makes sense). Anyway, the cluster I created has 2 service nodes, both with Final Cut and DVDSP4 installed. I've never been able to submit a job to the cluster; it always errors out.

    Are you submitting directly from Final Cut using "Export to Compressor"? Are you using a QuickCluster or a managed cluster? Do you see your nodes in Qadministrator? Do you have shared storage? If not, did you turn on "Personal File Sharing" in System Preferences and enable AppleShare in network prefs on both machines? Did you disable the firewall?
    I'm not in front of my computers, but there is a setting in the Compressor preferences that lets you choose between "Always copy files to cluster storage", "Never", etc. Try changing that to Always. I really don't know if it will help, but it had me hung up on my setup for a while. Did you change the cluster storage in System Preferences on either machine? (You shouldn't.) Also try resetting the nodes by stopping the services on them, then holding Option-Apple and clicking Reset.
    Finally, see if you can post the exact error that you are receiving. There is a log file somewhere in "/var/log" that might help to narrow down the problem.
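    (A quick way to hunt for the relevant log from Terminal; the exact filename varies by version, so this is just a sketch:)

        # List the most recently written logs, then scan the system log for Qmaster chatter.
        ls -lt /var/log | head
        grep -i qmaster /var/log/system.log | tail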
    I use NFS with a managed cluster, but if you are using AppleShare I think you should be able to go to Go -> Connect to Server in the Finder and connect to your machine by typing in the IP address or hostname.
    Those are some ideas to get you started.
    ~half.italian

  • Submit remote job to HDInsight cluster using its IP address.

    Hi there,
    I am using HDInsight and trying to submit jobs to it. I am able to submit jobs using the API provided by Azure; this works fine. I am also able to submit a job on the remote machine by opening the remote machine as a VM.
    I am now trying to submit a job to the HDInsight cluster from my machine using the IP address of the remote machine. I am not able to submit any job to it; it throws some sort of error.
    Please help me with this.
    Regards,
    Athiram S

    Hi Sudhir,
    Thanks for looking into this.
    We can submit a job to a Hadoop cluster using the IP address by the following method:
    1) Configure certain XML files (core-site.xml, hdfs-site.xml, yarn-site.xml) on the cluster machine (namenode) with the IP address of the machine. I also make similar changes in the configuration files on my machine under the location "..\\etc\\hadoopcluster_IPAddress".
    2) Now execute the command pig --config "..\\etc\\hadoopcluster_IPAddress" on my machine (which is connected to the namenode machine of the cluster through the LAN). The MapReduce job then gets executed on the remote machine.
    I am trying a similar approach for submitting a job to the HDInsight cluster: I use the headnode IP address, modified the configuration files, and used the same command as above, but I am wondering why it is not working.
    The job executes successfully on my own cluster machine, but job submission to the HDInsight cluster fails.
    Please help me on this issue.
    Regards,
    Athiram S

  • Has anyone successfully sent a motion job to a cluster?

    I am trying to submit batch jobs to Compressor that have a Motion element associated with them. Whenever I submit jobs from the command line to Compressor via a cluster that do not contain Motion elements, the jobs submit and render fine.
    When I try to submit a batch that requires Motion, I get timeout errors with a "Service Down" message.
    I've filed the question under Qmaster too (see here for the thread: http://discussions.apple.com/thread.jspa?threadID=1984897&tstart=0), which details my setup.
    I am at a loss as to why Motion prevents jobs from being rendered once submitted to the cluster. The same problem occurs when I submit via Compressor, so it's not a command-line-only problem.
    Any suggestions would be very much appreciated.

    Hi Patrick,
    Thanks for the reply. I've been hammering away at this for two weeks solid now. I ended up having an almost 2-hour discussion with Apple about it on Thursday. So here is what they told me...
    1. A "headless" installation is not supported (i.e. plug in a monitor to the server otherwise it isn't a supported setup).
    2. The graphics card I have in the server, although rated between two compatible devices, is not officially supported
    So basically I ended up with "it's not possible running it from OS X Server, with Motion in a headless setup".
    Well I'm a die-hard when it comes to being told "it can't" and so this is where I am at.
    I reinstalled Final Cut on the server. I then set up a cluster with services and as a controller. Next I checked my Motion files to make sure they were accessing assets over an NFS connection (even though they were on the local machine, and this might have been the problem all along). Started services. Set up QAdministrator to manage the instances and submitted the jobs.
    Success!! Loads of unprintable words were spewed with a few thanks to various beings in the universe. I basically had set up the cluster and it worked as advertised with Motion. I then installed the latest updates - boom. Everything stopped working. More expletives and back to square one.
    After uninstalling and reinstalling as per Apple's recommended route (see the link in the previous post), I now have an install working (without the latest updates) that renders properly. Here's the rub, and I think I might be pushing the envelope of feasibility here: when I submit a large number of jobs (say, 1000-4000) the render queue builds fine. The server renders multiple Motion jobs at once (in my test case 3 jobs at a time using 3 instances). After about 1000 jobs it craps out and dies. The only way to get it working again is to run the QMaster Service Node installer again. The jobs will pick up right where they left off, but now I'm babysitting renders...
    As far as I'm concerned I've still not finished solving this issue. I have to be able to farm out Motion jobs to a cluster (the sheer volume of render jobs to be done means I have no choice or the work just won't get done).
    What I need to do is figure out how to get a stable environment...
    So if anyone is following this. That's where I'm at. I'm still very much open to ideas, but I can definitely, without a doubt say that submitting motion jobs to a cluster IS possible and it works.
    So I'm closing this question.

  • What is the difference between queued delta update and serialized delta update

    What is the difference between queued delta update and serialized delta update?

    Hi Ks Reddy,
    Queued Delta:
    In the case of queued delta, LUWs are posted to the extractor queue (LBWQ); by scheduling the V3 job we move the documents from the extractor queue to the delta queue (i.e. RSA7), and we extract the LUWs from the delta queue to the BW side by running delta loads. Generally we prefer queued delta, as it maintains the extractor log, which helps us handle any LUWs that are missed.
    Direct Delta:
    In the case of direct delta, LUWs are posted directly to the delta queue (i.e. RSA7), and we extract the LUWs from the delta queue to the BW side by running delta loads. Direct delta degrades OLTP system performance, because when LUWs are posted directly to the delta queue, the application is kept waiting until all the enhancement code has executed.
    Non-Serialized V3 Update: With this update mode, the extraction data for the application in question is written, as before, to the update tables with the help of a V3 update module, and is kept there until the data is read and processed by an update collective run. However, in contrast to the current default (serialized V3 update), the update collective run reads the data from the update tables without regard to sequence and transfers it to the BW delta queue.
    https://websmp102.sap-ag.de/~sapdownload/011000358700007535452002E/HOWTOCREATEGENERICDELTA.PDF (need id)
    http://help.sap.com/saphelp_bw33/helpdata/en/3f/548c9ec754ee4d90188a4f108e0121/frameset.htm
    Business Intelligence Performance Tuning [original link is broken]
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    How to load data
    /people/sap.user72/blog/2004/12/16/logistic-cockpit-delta-mechanism--episode-one-v3-update-the-145serializer146
    /people/sap.user72/blog/2004/12/23/logistic-cockpit-delta-mechanism--episode-two-v3-update-when-some-problems-can-occur
    /people/sap.user72/blog/2005/01/19/logistic-cockpit-delta-mechanism--episode-three-the-new-update-methods
    Delta Types and methods
    Hope this helps.
    ****Assign Points if Helpful*****
    Regards,
    Ravikanth

  • Assembling and configuring an Apple Workgroup Cluster for Bioinformatics

    I'm assembling and configuring an Apple Workgroup Cluster for Bioinformatics following the steps in Apple's seminar: http://seminars.apple.com/seminarsonline/wgcforbio/apple/index1.html
    I have an Xserve with 10.4 (the head) with a fixed IP, and 2 more Xserves with 10.5 (the nodes). All the servers are connected through a switch.
    I had trouble with the networking at the beginning, but I can ping between the servers at last.
    When I try to configure Open Directory I can't apply my configuration; the button is disabled.
    Do I have to configure a DHCP server for the servers without a fixed IP?
    Is it mandatory to configure Lights-Out Management and have it running?
    Any ideas?
    Thank you.

    What are they charging you (and why, if you have to go asking for help for them to be able to do their job), and what are you charging them?

  • IQ02 -Display material and Serial Number

    IQ02 -Display material and Serial Number
    In IQ02, if we give a serial number as input and that serial number has more than one material, it displays an additional ALV screen; if I select one item in that list, it goes to the final screen displaying the serial number details.
    Now, we have a program like IQ02 (first screen). I want that middle screen to be displayed in my program. I went into debug mode, but I do not understand how SAP is doing it. How can I call that middle screen?
    Is there any function module or some other way available to make my job easy?
    Thanks in advance,
    fractal

    I have written like this...
      SUBMIT riequi21 WITH dy_selm  = x_indsel
                      WITH matnr    IN range_matnr
                      WITH sernr    IN range_sernr
                      WITH dy_mode  = 'X'
                      WITH dy_tcode = 'IQ08'
                      AND RETURN.
    When I execute the program, it directly displays the final output screen. I want the middle screen (the ALV display for selecting a row) to be shown; in that middle screen, if I select one line it should go to the final screen.
    How can I do this?
    thanks,
    fractal

  • Fcron: serialized jobs

    My crontab contains several pacman-related jobs which must not get dispatched in parallel, so I use the serial keyword for them:
    # fcrontab
    !erroronlymail(true)
    @first(2),serial 12h /usr/bin/pacman --noconfirm -Sy
    %weekly,serial * * /usr/bin/pacman --noconfirm -Sc
    %weekly,serial * * /usr/bin/yaourt -B /mnt/archive/backup/thor/packman
    The first job never gets executed; can anyone tell me why?
    The man page says "Fcron runs at most 1 serial jobs [...] simultaneously." My understanding of this is that all jobs marked as "serial" get classically serialized, i.e. queued and executed one at a time. Maybe my interpretation is wrong and the "serial" keyword does not imply any queueing?
    Regards,
    lynix
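    (One workaround, independent of how fcron interprets serial: the two weekly jobs share a schedule, so they can be chained with && into a single entry that can never overlap itself. A sketch, untested:)

        # Chained weekly entry: the second command only runs after the first finishes.
        %weekly,serial * * /usr/bin/pacman --noconfirm -Sc && /usr/bin/yaourt -B /mnt/archive/backup/thor/packman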

    Okay, so I changed my fcrontab to look like this:
    @first(5),forcemail(true),erroronlymail(false) 2h /usr/bin/pacman --noconfirm -Sy
    And still there is no pacman invocation after 19 minutes of uptime. Fcron is running and there are no error messages in the logs.
    The entry looks exactly like the one from the man page saying "run after five minutes of execution the first time, then run every hour".
    Any ideas?
