Batch processing query

Using the Process Multiple Files tool in Elements 6 (Mac), can I convert a large number of color images to sepia with the Monotone Color photo effect? If this is not feasible, is there another way of doing this? Grateful for any advice.

Do you mean an example? Yes, in the reference file.
Here's an example I sent recently to someone on a different forum:
This code will create 3 bookmarks that go to pages 1 to 3:
this.bookmarkRoot.createChild({cName: "Page 1", cExpr: "this.pageNum = 0", nIndex: 0});
this.bookmarkRoot.createChild({cName: "Page 2", cExpr: "this.pageNum = 1", nIndex: 1});
this.bookmarkRoot.createChild({cName: "Page 3", cExpr: "this.pageNum = 2", nIndex: 2});

Similar Messages

  • FDM Scripting Query for last imported source file using Batch Processing

    Hi Experts,
    I'm currently in the process of automating the FDM load process on our version of FDM 9.3.3 using batch processing and the FDM Task Manager. Most of the process works fine, including an email alert which notifies users when a data load has taken place.
    As part of that email alert I am trying to attach the source file that has been loaded in batch processing. I have managed to get an attachment using the following FDM Script Object of:
    "API.MaintenanceMgr.fPartLastFile(strLoc, True, False)".
    But I have noticed that using this only attaches the last "manually" imported file rather than the last file imported using batch processing.
    My question is: can someone steer me in the right direction, either to a more appropriate API or to a step I may have missed in my script?
    Any help as always would be much appreciated.
    Cheers
    Pip

    Unfortunately the batch process does not work the same way as online. I am assuming you are using the normal batch load and not Multiload (although the batch is similar).
    The batch file name gets recorded in the tBatchContents table, and the file is moved to the import/batches folder under the folder for the current batch run. However, if successful, the file gets deleted (and from memory does not get archived). To add the import file to the e-mail after a successful load, I think you will need to store a copy of it prior to importing the file.
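    The workaround described above (archiving the source file before the batch import deletes it) can be sketched as follows. FDM scripts are normally written in VBScript; this is a language-neutral illustration in Python, and the function name and folder layout are purely hypothetical:

```python
import shutil
from pathlib import Path

def archive_import_file(source_file, archive_dir):
    """Copy the batch source file aside before the import runs, so it can
    still be attached to the notification email after the batch process
    deletes the original."""
    archive_dir = Path(archive_dir)
    archive_dir.mkdir(parents=True, exist_ok=True)
    target = archive_dir / Path(source_file).name
    shutil.copy2(source_file, target)  # copy2 preserves timestamps
    return target
```

    After a successful load, the email step would attach the archived copy instead of relying on fPartLastFile.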

  • Gather_table_stats in a batch process

    Hi everyone,
    we have a bit of a problem with several batch processes. The basic structure of a batch process is this:
    1. do some init tasks - record the batch process as running, load parameters etc.; generally very simple tasks (haven't seen those fail)
    2. prepare some data into a TMP table (regular table, used from this single process - to pass data between runs, etc.)
    3. process the data from a TMP table (e.g. insert records to other tables, etc.)
    4. do some final tasks - mark the process as finished, create some simple records describing results (this fails, although very rarely)
    Steps 2-4 are always in a single transaction (usually a single procedure). One thing we've noticed is that the TMP table may be very different between the runs (e.g. it may be truncated and filled with new data on each run), which causes problems to the optimizer - in case of several batch processes, this often results in a very bad plan.
    So we have modified the processes so that a dbms_stats.gather_table_stats is called after steps 2 and 4, i.e. it looks like this
    1. do some init tasks - record the batch process as running, load parameters etc.; generally very simple tasks (haven't seen those fail)
    2a. prepare some data into a TMP table (regular table, used from this single process - to pass data between runs, etc.)
    2b. gather stats on the TMP table
    3a. process the data from a TMP table (e.g. insert records to other tables, etc.)
    3b. gather stats on the modified tables (usually just one table)
    4. do some final tasks - mark the process as finished, create some simple records describing results (this fails, although very rarely)
    which works (the query plans are much better), but there's a BIG problem we hadn't noticed before - dbms_stats.gather_table_stats commits the transaction. That is not that big a deal in the case of (2b), as the previous steps are rather trivial and easy to undo manually, but (3b) is much worse.
    I thought we could fix this by running gather_table_stats from an autonomous transaction, but that does not work as it does not read uncommitted data :-(
    We could move the (3b) out of the batch process and run it at the very end, but that still does not solve the other gather_table_stats call (which is a PITA) and we would have to propagate what tables were modified in the process.
    Is there a way to do this? I.e. how to gather stats within a single batch process (PL/SQL procedure) without committing the previous changes and without ignoring the uncommitted data?

    First, you refer to the table as a TMP (temporary) table, but you also mention truncating it - so is this actually a permanent table used as a work table? With a work table, it is my experience that what you want to do is load it with a representative sample (usually a larger one when there is significant variation between runs), collect statistics, and then leave those statistics in place.
    I find the same is true when you use a global temporary table (GTT) in place of a permanent work table. Create the GTT, load it, collect statistics, verify the plan, and leave those statistics in place. With a GTT every user session gets their own private copy of the work table so concurrent session access takes no special logic. With a permanent work table we have added a key column where each concurrent session selects a sequence value and uses it on every row and we have used dbms_lock to limit the process to one user in cases where only one user in the department needs to create the online report but multiple users will query the result.
    If this is not helpful then a fuller description of how the table is loaded and accessed may provide someone with the information necessary to provide you with a more useful reply.
    HTH -- Mark D Powell --

  • Best practices for batch processing without SSIS

    Hi,
    The gist of this question is: in general, how should a C# client insert/update batches of records using stored procedures? The ideas I can think of are:
    1) create 1 SP with a parameter of type XML, and pass say 100 records at a time, on 1 thread.  The SP reads the XML as a table and does a single INSERT.
    2) create 1 SP with many parameters that inserts 1 record.  I can either build a big block of EXEC statements for say 100 records at a time, or call the SP one at a time, on 1 thread.  Obviously this seems the slowest.
    3) Parallel processing versions of either of the above: pass 100 records at a time via XML parameter, a big block of EXEC statements, or 1 at a time, and use PLINQ to make multiple connections to the database.
    The records will be fairly wide, substantial records.
    Which scenario is likely to be fastest and avoid lock contention?
    (We are doing batch processing and there is no current SSIS infrastructure, so it's manual: fetch data, call web services, update batches.  I need a batch strategy that doesn't involve SSIS - yet.)
    Thanks.

    The "streaming" option you mention in your linked thread sounds interesting, is that a way to input millions of rows at once?  Are they not staged into the tempdb?
    The entire TVP is stored in tempdb before the query/proc is executed.  The advantage of the streaming method is that it eliminates the need to load the entire TVP into memory on either the client or server.  The rowset is streamed to the server
    and SQL Server uses the insert bulk method is to store it in tempdb.  Below is an example C# console app that streams 10M rows as a TVP.
    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Collections;
    using System.Collections.Generic;
    using Microsoft.SqlServer.Server;

    namespace ConsoleApplication1
    {
        class Program
        {
            static string connectionString = @"Data Source=.;Initial Catalog=MyDatabase;Integrated Security=SSPI;";

            static void Main(string[] args)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("dbo.usp_tvp_test", connection))
                {
                    command.Parameters.Add("@tvp", SqlDbType.Structured).Value = new Class1();
                    command.CommandType = CommandType.StoredProcedure;
                    connection.Open();
                    command.ExecuteNonQuery();
                    connection.Close();
                }
            }
        }

        class Class1 : IEnumerable<SqlDataRecord>
        {
            private SqlMetaData[] metaData = new SqlMetaData[1] { new SqlMetaData("col1", SqlDbType.Int) };

            // Stream one SqlDataRecord at a time; the full rowset is never
            // materialized in client memory.
            public IEnumerator<SqlDataRecord> GetEnumerator()
            {
                for (int i = 0; i < 10000000; ++i)
                {
                    var record = new SqlDataRecord(metaData);
                    record.SetInt32(0, i);
                    yield return record;
                }
            }

            IEnumerator IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }
    }
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com
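    Whichever transport is chosen, the client-side chunking in option 1 (say 100 records per call) is the same simple idea in any language. A minimal sketch in Python, with the batch size purely illustrative:

```python
def chunk_records(records, batch_size=100):
    """Split a list of records into batches of at most batch_size,
    so each batch can be sent to the stored procedure in one call."""
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]
```

    Each resulting batch would then be serialized (e.g. to XML or a TVP) and passed as a single parameter.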

  • Batch processing and parallelism

    I have recently taken over a project that is a batch application that processes a number of reports. For the most part, the application is pretty solid from the perspective of what it needs to do. However, one of the goals of this application is to achieve good parallelism when running on a multi-CPU system. The application does a large number of calculations for each report, and each report is broken down into a series of data units. The threading model is such that only say 5 report threads are running, with each report thread processing say 9 data units at a time. When the batch process executes on a 16-CPU Sun box running Solaris 8 and JDK 1.4.2, the application utilizes on average 1 to 2 CPUs with some spikes to around 5 or 8 CPUs. Additionally, the average CPU utilization hovers around 8% to 22%. Another oddity of the application is that when the system is processing the calculations, and not reading from the database, the CPU utilization drops rather than increases. So the goal of good parallelism is not being met right now.
    There is a database involved in the app and one of the things that does concern me is that the DAOs are implemented oddly. For one thing, these DAO's are implemented as either Singletons or classes with all static methods. Some of these DAO's also have a number of synchronized methods. Each of the worker threads that process a piece of the report data does make calls to many of these static and single instance DAO's. Furthermore, there is what I'll call a "master DAO" that handles the logic of what work to process next and write the status of the completed work. This master DAO does not handle writing the results of the data processing. When each data unit completes, the "Master DAO" is called to update the status of the data unit and get the next group of data units to process for this report. This "Master DAO" is both completely static and every method is synchronized. Additionally, there are some classes that perform data calculations that are also implemented as singletons and their accessor methods are synchronized.
    My gut is telling me that having each thread call a singleton, or a series of static methods, is not going to help you gain good parallelism. Being new to parallel systems, I am not sure that I am right in even looking there. Additionally, if my gut is right, I don't know quite how to articulate the reasons why this design will hinder parallelism. I am hoping that anyone with any experience in parallel system design in Java can lend some pointers here. I hope I have been able to be clear while trying not to reveal much of the finer details of the application :)

    > There is a database involved in the app and one of the things that does concern me is that the DAOs are implemented oddly. [...]
    What you've described above suggests to me that what you are looking at may actually be good for parallel processing. It could also be an attempt that didn't come off completely.
    You suggest that these synchronized methods do not promote parallelism. That is true, but you have to consider what you hope to achieve from parallelism. If you have 8 threads all running the same query at the same time, what have you gained? More strain on the DB and the possibility of inconsistencies in the data.
    For example:
    Scenario 1:
    Say you have a DAO retrieval that is synchronized. The query takes 20 seconds (for the sake of the example). Thread A comes in and starts the retrieval. Thread B comes in and requests the same data 10 seconds later. It blocks because the method is synchronized. When Thread A's query finishes, the same data is given to Thread B almost instantly.
    Scenario 2:
    The method that does the retrieval is not synchronized. When Thread B calls the method, it starts a new 20 second query against the DB.
    Which one gets Thread B the data faster while using less resources?
    The point is that it sounds like you have a bunch of queries where the results of those queries are being used by different reports. It may be that the original authors set it up to fire off a bunch of queries and then start the threads that will build the reports. Obviously the threads cannot create the reports unless the data is there, so the synchronization makes them wait for it. When the data gets back, the report thread can continue on to get the next piece of data it needs; if that isn't back, it waits there.
    This is actually an effective way to manage parallelism. What you may be seeing is that the critical path of data retrieval must complete before the reports can be generated. The best you can do is retrieve the data in parallel and let the report writers run in parallel once the data they need is retrieved.
    I think this is what was suggested above by matfud.
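    Scenario 1 is not specific to Java's synchronized methods. Here is a minimal Python sketch of the same idea (the query function and result are stand-ins): serializing the retrieval behind a lock lets later threads reuse the first thread's result instead of re-running the query.

```python
import threading

class SynchronizedDao:
    """Serialize a slow retrieval so concurrent callers share one result
    instead of each issuing the same query (Scenario 1 above)."""
    def __init__(self, query_fn):
        self._query_fn = query_fn        # stand-in for the real DB call
        self._lock = threading.Lock()
        self._cache = {}

    def get(self, key):
        with self._lock:                 # plays the role of 'synchronized'
            if key not in self._cache:   # only the first caller runs the query
                self._cache[key] = self._query_fn(key)
            return self._cache[key]
```

    With 8 threads requesting the same key, the query runs once and the other 7 get the cached result almost instantly, which is exactly the trade-off described above.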

  • Frequency Analysis - batch automation Query

    Hi,
    I was wondering if it's possible within Adobe Audition to automate the process of performing frequency analysis on a batch of audio files and saving the frequency analysis windows as individual bmp files?
    There is a file batch utility which I am aware of for processing groups of files. However, with the frequency analysis window, the only way of saving it is by Alt-PrintScreen and manually saving the capture in another bitmap package.
    Is anyone aware of a way/system/tools of achieving this ?
    Thanks for any suggestions
    - Darragh

    ok, thanks for the tip.
    Date: Wed, 22 Jul 2009 17:54:18 -0600
    From: [email protected]
    To: [email protected]
    Subject: Frequency Analysis - batch automation Query
    There's no way of doing this within Audition, but I think that there are external batch process programs that might let you do this using Audition - if you see what I mean. I don't personally do things like that, but maybe somebody who does could recommend one? Alternatively, and probably rather a better bet, re-ask the question on the AudioMasters forum. You are more likely to get a meaningful response, quite frankly.

  • In Bridge CS6, the "Tools" tab is missing the "Photoshop" option to batch process images

    Hello!
    Does anyone know why in Bridge cs6 under the "Tools" tab, my "Photoshop" option is missing?  That is how I have always batch processed images.  In Bridge, I choose the images to process, go to the "Tools" tab, down to the "Photoshop" option then to "Image Processor" and process.  Now when I go into my "Tools" tab it only has the options: Batch Rename, 4 kinds of Metadata related options and Cache.  Can someone tell me why this has happened and how to fix it?  Thank you!!!
    Eric

    Hello Omke,
    This problem lasted for two days.  I tried many suggestions but nothing worked immediately.  Suddenly I turned my computer on last night and it was fixed...inexplicably.  Not sure what worked but something did.  I appreciate you taking the time to offer advice. 
    Best,
    Eric

  • Status code 105 batch being processed - Error 40 PI system error - Batch Status Query

    Dear all,
    We have a batch in GRC with the following details:
    - Status code: 105 "batch is being processed"
    - Batch status: 04 "request sent"
    - Error status: 40 "Batch status query: PI system error"
    Restarting the batch from the GRC monitor results in the error "Error starting process Send Batch (batch ID 000000000013825)".
    Any ideas or suggestions on how to proceed?
    Thanks
    Marc de Ruijter
    Key words for thread search:
    - Error status 40 Batch status query: PI system error
    - Batch status 04 request sent
    - Status code 105 batch being processed

    I believe I have the same problem.
    I have a batch with an error in status 5, message "Batch status query: PI system error", and when I restart the batch I get the message below:
    "Error initializing the process Send batch (batch no. 000000000000XXX)".
    The PI sxi_monitor shows no error at all! I checked the table mentioned in the thread; there were several records, one of them referring to my batch. I deleted only the one referring to my batch, but it still does not restart.

  • How to display custom error message in Job log for batch processing

    Hi All,
    I am executing an R/3 report in batch mode and I want to display all the custom errors I have handled in the job log when it is executed from SM36/SM37. The custom errors are like 'Delivery/Shipment does not exist' or others which we can display in online mode with MESSAGE e100(ZFI) or some other way, and accordingly we can handle the program control - come out of the program, leave to transaction 'Zxxx', or anything. But I want my program to be executed completely and accumulate all the errors in the job log of the batch processing.
    Can anyone tell me how I can do so?
    Thanks,
    Amrita

    Hi,
    That's what I have done from the beginning. I have written the message like this:
    MESSAGE i100(ZFI).
    I was hoping to see this message in the job log, but I can't see it. Can you help me please?

  • Batch process to add Javascript to PDF files

    Hi All,
    I have written a small piece of Javascript for my PDF files. The idea is to add a date stamp to each page of the document before printing. To do this, I have added the following code to the "Document Will Print" action:
    for (var pageNumber = 0; pageNumber < this.numPages; pageNumber++) {
        var dateStamp = this.addField("Date", "text", pageNumber, [700,10,500,40]);
        dateStamp.textSize = 8;
        dateStamp.value = "Date Printed: " + util.printd("dd/mmm/yyyy", new Date());
    }
    My question is this: Does anyone know of a way to batch process a whole directory (of around 600 PDF's) to insert my Javascript into the "Document Will Print" action of each file?
    Many thanks for any information you may have.
    Kind regards,
    Aaron

    > Can I just confirm a few things please? Firstly, should I be going into "Batch Sequences" -> "New Sequence" and selecting "Execute JavaScript" as my sequence type?
    Yes, you are creating new batch sequence that will use JavaScript.
    > My second question is: how can I insert my body of script into the variable "cScript"? I have quotation marks and other symbols that I imagine I will have to escape if I wish to do this?
    You can either use different quotation marks or use the JavaScript escape character '\' to insert quotation marks.
    Your WillPrint code will only work in the full version of Acrobat and not Reader, because Reader will not allow the addition of fields. Also, each time you print you will be creating duplicate copies of the field. So it might be better to add the form field only in the batch process and then just add the script that populates the date field in the WillPrint action.
    // add the form field to each page of the PDF
    for (var pageNumber = 0; pageNumber < this.numPages; pageNumber++) {
        var dateStamp = this.addField("Date", "text", pageNumber, [700,10,500,40]);
        dateStamp.textSize = 8;
    }
    // set the document-level WillPrint action that populates the field
    this.setAction("WillPrint", "this.getField(\"Date\").value = \"Date Printed: \" + util.printd(\"dd/mmm/yyyy\", new Date());");

  • Opening and closing files very slowly - Mac - Batch processing broke Illustrator...

    I have been batch processing between 15 and 30 thousand SVG files over the past few days (simply centering the objects on the artboard, merging overlapping objects and saving as AIs) - the individual files are very small, 20 KB max. This was working fine for a while, but suddenly Illustrator now takes nearly 30 seconds to open any one individual file - SVG or otherwise - and when I look at Illustrator in Activity Monitor as files are opened, Illustrator uses 100% of the CPU.
    I have no idea if this is related to all of the batch processing I've been doing lately, but it's the last thing I've done, so...
    I have also run Disk Utility on my machine and uninstalled and reinstalled Illustrator with no success.
    Anyone have any thoughts?
    Thanks!!

    Have you tried at least restarting AI or rebooting the computer?

  • Please Help, Issues when batch processing Multi Frame images in Fireworks

    I hope someone will know how to do this and tell me where I am going wrong.
    Objective 1
    I have a large number of images that all have the exact same number of frames (4) and are all exactly the same size, that I want to get resized, cropped and watermarked (when I say 'watermarked' I mean have my website's logo pasted in a specific location on each frame; I have my website's logo as a separate .png file which I copy from).
    Current Process
    I create a command which will crop the image, then paste my company's url/logo onto the first frame, move it to the correct location on the frame, then copy it again and paste it onto each of the other frames; the command then resizes the image to the exact proportions I want.
    I start a batch process and use my command, making sure that I also export from the .png to animated GIF (and I edit the settings to make it a 256-color animated GIF).
    Error 1
    The process described above resizes and crops my images; however, it does NOT put the watermark in the correct place. It seems to put it in the right place on the first frame and then in a different place on all the following frames, thus giving the effect of the watermark jumping around. Which is obviously NOT what I want.
    Question 1
    Please let me know what process I should be following to correct the Error 1 above.
    Objective & Question 2
    I want to do exactly the same thing as shown above, but this time the files have a varying number of frames (from 2 to 45 frames (or instances, as CS4 seems to call them)). Is there a way to paste my logo in the exact same location on ALL frames?
    Other information
    I have tried the WHXY Paste extension and I can not see how to use that to solve the issue. I have also tried 'Paste in Place', however I can not see how to use the Paste In Place extension as it is not appearing in my list of commands.
    Many thanks in advance for your help.
    Andy

    Andy, you could start a batch process which will do most of the things you are asking for. The batch process can be done using Fireworks. The right way to start is going to Archivo / Procesar por Lotes and then following all the different options this tool has to offer.  Yes, I know I gave you the names in Spanish, but that is the way I use Adobe Fireworks.
    I hope this was useful for you!
    Best regards,
    Frank Meier | Curso Arreglos Florales

  • How to extract data from offline PDF files as batch processing

    Hello.
    I want to use Adobe Interactive Forms for batch processing.
    For instance:
    1. Users download offline PDF files.
    2. Users input data on their local PCs.
    3. Users upload these PDF files to one folder.
    4. A program reads the data from the PDF files in that folder and puts the data into ERP at night.
    I'd like to know how to implement such a program with Java or ABAP.
    Regards.
    Koji.

    Hi,
    It's possible to do it, but first be sure that the SAP system can read the directory while your program is executed in the background.
    Then you have to read the content of the directory and process each file you find.
    Look at the standard ABAP class cl_gui_frontend_services; you will find methods for browsing a directory and retrieving the list of files.
    Afterwards you have to process each file. For this, have a look at this wiki code sample I wrote for processing inbound mail with Adobe Interactive Forms; it should help you: [Sample Code for processing Inbound Mail with Adobe Interactive Forms|https://www.sdn.sap.com/irj/sdn/wiki?path=/display/snippets/sampleCodeforprocessingInboundMailwithAdobeInteractive+Forms]
    Hope this helps you.
    Best regards.
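    The scan-and-process loop described above is the same in any language. Here is a minimal Python sketch (the folder layout, handler, and error handling are illustrative; in SAP you would implement this in ABAP or Java as discussed):

```python
from pathlib import Path

def process_upload_folder(folder, handler, pattern="*.pdf"):
    """Scan the upload folder and run the handler on every PDF found,
    collecting per-file results; an error in one file does not stop
    the rest of the batch."""
    results = {}
    for pdf in sorted(Path(folder).glob(pattern)):
        try:
            results[pdf.name] = handler(pdf)
        except Exception as exc:   # record the failure and continue
            results[pdf.name] = exc
    return results
```

    A nightly job would call this with a handler that extracts the form data and posts it to ERP.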

  • Report as batch process?

    Can we run a Z program or report as background process or batch process? if yes, how?

    Hi,
    1. Go to SE38, give the program name, and then use menu path Program->Execute->Background. You need to create a variant for the selection screen.
    2. Or go to SE38, give the program name, execute, and on the selection screen go to menu path Program->Execute->Background.

  • Need help with batch processing picture packages

    Hi, I am having trouble batch processing picture packages in CS2 (Windows).
    I have hundreds of images that need to be processed into picture packages and would love to find a speedier way to do this.
    I know how to create an action.  I know how to batch process from this action.  I also know how to create picture packages, but I cannot get the final result I am after - please read on....
    I have separated all the images into their separate folders for each style of picture package required.
    I can create an action for the picture package required and then do a batch process on the particular folder, but this leaves all the picture packages open on the desktop - when you choose close and save in the batch process, this only closes and saves the original image; the picture package has been created as a new document and is still open on the desktop, named Picture Package 1, Picture Package 2, etc.
    I hope I am making some kind of sense here... (??!!)
    What I would like to happen is for the picture package to be saved over the original file (or to a new folder) with the file name of the original image, or maybe even with an adjustment to the file name (e.g. original file name sc1234.jpeg, new file name sc1234packA.jpeg).
    So is this possible to do?? I'm thinking there must be a way... I'm sure there are many group photographers out there who come across this every day??
    Otherwise I have to save each picture package manually under the original file name (searching through files to match the original image to the picture package...). Very time consuming.
    Thanks for your help (in anticipation)...
    Jodie

    hmm - thanks for that - sounds like I will have to try and find some info and assistance regarding the scripting - it may be something I need to look into at a later time...
    At the moment though I will have to plod along with this method I guess!
    Thanks for your assistance...
    Jodie
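    The file-name mapping Jodie describes (sc1234.jpeg to sc1234packA.jpeg) is the easy half of any scripting solution for this thread. A Python sketch of just that mapping, with the "packA" suffix as an example:

```python
from pathlib import Path

def package_filename(original, suffix="packA"):
    """Derive the picture-package file name from the original image name,
    e.g. sc1234.jpeg -> sc1234packA.jpeg (the suffix is inserted before
    the extension)."""
    p = Path(original)
    return p.with_name(p.stem + suffix + p.suffix)
```

    A Photoshop script would use the same rule when saving each generated picture package next to (or over) its source image.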
