Post processing audio - workflow

Hi,
I am about to work on my audio and I don't know how to handle the process. Maybe someone can help me out. I am using Final Cut Pro X and edited a piece where multiple people speak separately (different locations). I need to normalize the audio and fix minor things, such as a little bit of background noise in some interviews and taking a clap out. I have Adobe Audition and iZotope RX. What steps do I take? Do I normalize first? If so, how: in Final Cut Pro X, or do I export all the audio, import it into Audition and normalize there, then take it to iZotope RX to denoise? Can someone help?
Thanks

I don't use Audition but I know Larry Jordan has a lot on it.
Here is his FCP>Audition workflow.
Good luck.
Russ
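For anyone wondering what "normalize" actually does before choosing a tool: peak normalization just scales the whole clip so the loudest sample hits a target level, which is why it matters little whether you do it in FCPX or Audition. A minimal sketch in Python (the function name and the 0.98 target are illustrative choices, not anything either app exposes):

```python
import array
import wave

def normalize_wav(src, dst, target_peak=0.98):
    """Peak-normalize a 16-bit PCM WAV: scale every sample by one gain
    factor so the loudest sample lands at target_peak of full scale."""
    with wave.open(src, "rb") as w:
        params = w.getparams()
        if params.sampwidth != 2:
            raise ValueError("sketch handles 16-bit PCM only")
        samples = array.array("h", w.readframes(params.nframes))
    peak = max((abs(s) for s in samples), default=0) or 1
    gain = target_peak * 32767 / peak
    out = array.array("h", (int(max(-32768, min(32767, s * gain)))
                            for s in samples))
    with wave.open(dst, "wb") as w:
        w.setparams(params)
        w.writeframes(out.tobytes())
```

Because a single gain factor is applied to everything, normalization never changes the balance between quiet and loud passages; it only removes headroom, so it is safe to do before or after noise reduction.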

Similar Messages

  • Workflow for Post Processing Magic Lantern RAW Video?

    Dear Adobe Community,
    I am interested in learning what steps/workflow fellow DSLR, Magic Lantern users take to Edit, Color Grade and add FX to their RAW video?  Here is my untested thought process so far:
    Adjust the Exposure, White Balance, Lens Distortion, Etc of DNGs in Adobe Camera Raw.
    Import DNG image sequence into Premiere Pro for Editing
    Use Dynamic Link to Color Grade in Speed Grade
    Use Dynamic Link to add effects in After Effects
    Render out in Premiere Pro
    One of the concerns I have is that I've read on the Magic Lantern Forums that Premiere Pro will reduce 14 Bit DNG to 8 Bit.  Is this something you've encountered?  Is there a workaround?
    I am fairly new to all of this and would greatly appreciate any feedback to point me in the right direction for Post Processing Magic Lantern Raw Video files.  I have a subscription to Lynda.com as well if you have any suggestions for courses to look at.
    Thank-you for your time.

    Nothing Adobe makes can read MLV or RAW video files, so you have to use something else to convert them into CinemaDNG (a folder of images) or a high bit depth movie file (ProRes etc.) - there are many tools listed in the ML forum which can do that.
    Premiere Pro can import cDNG footage but it struggles to play back in real time, and will not allow you to grade it using Camera Raw. To do that you have to use After Effects. The workflow is supposed to be "ingest in Prelude, edit in Premiere Pro, color in SpeedGrade", but quite frankly the learning curve for SG is massive. Adobe's attitude to CinemaDNG is strange; although the standard is open, support is only coded for very specific models of camera, and the CC suite assumes a Hollywood workflow where professional colorists (who spend years learning the software) operate offline from the rest of the edit, usually after all the cuts are made. It's not set up for a typical lone DSLR filmmaker, which is why a lot of ML users prefer another well-known way to resolve the problem.
    The 'fast and dirty' approach to getting a cinematic grade in Premiere would be to apply LUT files to the footage using the Lumetri effect (basically running SpeedGrade presets), or you can hand-grade it using the inbuilt Color Corrector tools - but that doesn't cover the other important stuff that ACR can do, such as lens correction, alignment, noise reduction, camera calibration profiles, etc. - for that, you should import the cDNG footage into After Effects (which does support ACR), correct and calibrate the frames, then export back to something high quality that Premiere can work with (such as DNxHD or ProRes 444). You won't notice any visual loss in quality, and Premiere then will play the timeline without struggling, but it takes a loooong time to chew through every clip.
    As to whether Adobe applications will ever support MLV files natively - well, since Adobe relies on close partnerships with camera manufacturers including Canon, you can guess what would happen if Adobe ever endorsed it. The only scenario I can imagine is if MLV is adopted in-camera by a major DC manufacturer (Alexa, etc.) so vendors can support it without slapping Canon in the face. Flying pigs come to mind.
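    The bit-depth concern above is easy to quantify: each bit doubles the number of tonal levels per channel, so an 8-bit intermediate collapses a 14-bit source's 16384 levels into 256. A quick illustration (Python, purely for the arithmetic):

```python
# Tonal levels per channel at common bit depths; a 14-bit -> 8-bit
# reduction is a 64:1 loss of gradation per channel.
for bits in (8, 10, 12, 14, 16):
    print(f"{bits:>2}-bit: {2 ** bits:>6} levels per channel")
print("14-bit -> 8-bit reduction factor:", 2 ** 14 // 2 ** 8)
```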

  • Audio post processing

    Hello everyone!
    I will start with a question as it goes:
    Is it possible to intercept the PCM stream that is routed by the system to the AudioSink (i.e. speakers or headphones) and inject some post-processing on system wide basis?
    My current investigation led me to the following status:
    - There is a system-wide audio destination, the CMMFAudioOutput sink. Maybe it is possible to create another sink that will perform the post-processing and then redirect the resulting stream to the original output.
    - There is a set of codecs that are instantiated by the DataPath depending on the FourCCs of the data source and data sink. Maybe it is possible to create a system-wide wrapper that will be called instead of any other codec, which will internally load the needed codec and then perform the post-processing before returning the stream.
    - There might be some other place for such a sound hook that I haven't found in the documentation.
    I absolutely understand that such a scenario is a very dangerous thing to allow in an OS for phones. This is the reason that makes me wonder if this problem is solvable at all. Still, there might be some ways, which will surely include driver signing and other measures preventing malicious usage.

    Sorry but I only use this support forum so can't tell you which section of the developers forum to use to get an answer to that question.  
    At a guess I would say here:
    http://discussion.forum.nokia.com/forum/forumdisplay.php?f=62 

  • Audio post processing and CS5

    I have a few questions regarding audio post processing done outside of PrPro.  The content is music that was recorded at 48khz/24-bit.  After post processing, it is in a 32-bit file, which is then combined with a video clip via Premiere.
    1)  Does Premiere change the audio bit rate when it compresses video clips during the DVD burn process?
    2) Does Premiere add dithering to the audio signal, prior to the compression process?
    3) Is it desirable to add dithering to audio clips that are added to video clips?
    Thanks,
    Steve

    Hi Hunt,
    Thanks.  A couple of questions - I am not considering DVD-audio, but audio that has been recorded off-camera, edited, and then combined with the corresponding video clips.
    1) Does 24-bit audio get converted to 16-bit audio by Premiere?  If so, this suggests that adding dither to audio files used within video clips could be beneficial.
    2) I am guessing BD refers to Blu-ray. Yes? If so, is it advisable to up-sample audio clips or leave them at the sample rate they were recorded at?
    Thanks,
    Steve
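    Since dithering comes up in both questions: before truncating 24-bit samples to 16-bit, a small amount of triangular (TPDF) noise is added so the rounding error becomes benign broadband noise instead of signal-correlated distortion. A rough sketch of the idea (function name and scaling are illustrative, not any Premiere internal):

```python
import random

def truncate_to_16bit(sample24, dither=True):
    """Reduce one 24-bit sample to 16 bits, optionally with TPDF dither.

    Adding about 1 LSB of triangular noise (one 16-bit step = 256 in
    24-bit units) before truncating decorrelates the rounding error from
    the signal, which is why dithering a bit-depth reduction is advised.
    """
    if dither:
        sample24 += int((random.random() - random.random()) * 256)
    return max(-32768, min(32767, sample24 >> 8))
```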

  • Workflow for post processing offline images

    I'm a professional real estate photographer who normally edits his own images after a photoshoot. I've recently become so busy that I'd like to outsource the post processing of my images to another individual who also uses Photoshop.
    Is there a way to send this individual lower resolution images so he can work on them and then have him send me the metadata from his edits, so I can apply them to the images I have on my system? The reason for this question is because I'd like to minimize the size of the images I'm sending to the editor, but still be able to produce high resolution stills.
    I understand there is a way to work with RAW images and save the metadata, but that doesn't fix the file size issue, and you can only save certain aspects like white balance, levels, etc... I really need a way to capture his history and save that to a file, that way if he were to use a certain filter, or brush on the image, the same results would be applied to my images.
    Thanks in advance for your help!

    You can't apply the history log; it is just information on what has been done to the file. You would have to read this file and simply repeat the same steps on the higher-resolution version of the file.
    As for the replacement workflow I mention: you would open the low-resolution file that your worker created. Then you would use the Place command (under the File menu) to add your original high-resolution version of the image as a new layer. This will make your image a Smart Object layer. Move this layer to the bottom of the layer stack, just above Background, if that is an option. Make sure your smart object is at 100% scale (when you place the file, it is shrunk to fit the existing document boundaries); you can do this by using Free Transform (under the Edit menu) and verifying the values at the top. Then use the Reveal All command (under the Image menu) to resize the canvas to fit the full-resolution version of the image. However, this really will only work if the editing involved was pretty much nothing but adjustment layers. Any pixel edits will have to be resized as well and may lose detail or not line up correctly once enlarged.

  • Output post processors and workflow engines doesn't come up after cloning

    ours is a RAC+ASM+PCP -enabled
    host1 : DB+ Concurrents
    host2 : DB+ Concurrents
    host3 : WEB+FORMS+OC4J
    host4 : WEB+FORMS+OC4J
    We are suffering from an “empty entropy pool” within the JDBC driver. Basically, the driver will attempt to gather a random number from the pool, but if the pool is empty the driver will wait (sometimes indefinitely) for values to come in. In most installations, the default configuration has the pool being “filled” by numbers created through a user interface such as mouse movements. On our servers, we are typically in a “headless” configuration so the pool may not be replenished properly. I’m not sure why we are seeing this new issue, but since the random number generator is related to security, maybe a security enhancement from the OS upgrade has made this more sensitive.
    Note that QA4 is “mostly” up and running. The concurrent managers are started on both wpsun144 and wpsun154, but the output post processors and workflow engines are still having issues
    The ultimate solution is to add some form of “entropy gathering daemon” to keep the pool filled, but we will need the Unix Admin team’s help and some research to find the best solution. Apparently, in Solaris 10, there is a kernel-level “cryptographic framework” for adding/removing entropy sources.
    The workaround is to add a java option to the java command line. For Solaris this seems to work: “-Djava.security.egd=file:///dev/urandom”. This is NOT a reasonable solution since this requires modifying multiple delivered scripts/templates. I made the following changes only on wpsun144.
    First, for the context file I had to modify the “s_adjreopts” variable to add the above workaround (per doc 1066312.1).
    For adgentns this was fixed by modifying the $AU_TOP/perl/ADX/util/Java.pm file and adding the option. Here are the new lines in the file:
    if ($javaCmd =~ /jre/) {
        return ("-mx256m -Djava.security.egd=file:///dev/urandom", 0);
    } else {
        return ("-Xmx512M -Djava.security.egd=file:///dev/urandom", 0);
    }
    After this change, adgentns completed successfully and both the “tnsnames.ora” and “listener.ora” files were created in the $TNS_ADMIN location.
    For adgendbc, I had to manually modify the template for the script (there are 10 java command lines in the file).
    $AD_TOP/admin/template/adgendbc_ux.sh
    For txkExecSetJaznCredentials.pl, this file was modified:
    $FND_TOP/patch/115/bin/txkSetJaznCredentials.pl
    At this point, adcfgclone did finally complete successfully! Also adautocfg is now completing successfully!
    Some references:
    http://www.usn-it.de/index.php/2009/02/20/oracle-11g-jdbc-driver-hangs-blocked-by-devrandom-entropy-pool-empty/
    Oracle E-Business Suite Applications Technology Readme for Release 12.1.3 (R12.ATG_PF.B.delta.3, Patch 8919491) (Doc ID 1066312.1)
    Look for a section titled, “Attention: JDBC connections may time out during the upgrade process when random number generation is slow on machines with inadequate entropy.”
    Can someone help me on this?

    ours is a RAC+ASM+PCP -enabled
    host1 : DB+ Concurrents
    host2 : DB+ Concurrents
    host3 : WEB+FORMS+OC4J
    host4 : WEB+FORMS+OC4J
    Please post the details of the application release, database version and OS.
    We are suffering from an “empty entropy pool” within the JDBC driver. Basically, the driver will attempt to gather a random number from the pool, but if the pool is empty the driver will wait (sometimes indefinitely) for values to come in. In most installations, the default configuration has the pool being “filled” by numbers created through a user interface such as mouse movements. On our servers, we are typically in a “headless” configuration so the pool may not be replenished properly. I’m not sure why we are seeing this new issue, but since the random number generator is related to security, maybe a security enhancement from the OS upgrade has made this more sensitive.
    Note that QA4 is “mostly” up and running. The concurrent managers are started on both wpsun144 and wpsun154, but the output post processors and workflow engines are still having issues.
    What is the error from the Workflow and OPP log files?
    The ultimate solution is to add some form of “entropy gathering daemon” to keep the pool filled, but we will need the Unix Admin team’s help and some research to find the best solution. Apparently, in Solaris 10, there is a kernel-level “cryptographic framework” for adding/removing entropy sources.
    The workaround is to add a java option to the java command line. For Solaris this seems to work: “-Djava.security.egd=file:///dev/urandom”. This is NOT a reasonable solution since this requires modifying multiple delivered scripts/templates. I made the following changes only on wpsun144.
    First, for the context file I had to modify the “s_adjreopts” variable to add the above workaround (per doc 1066312.1).
    For adgentns this was fixed by modifying the $AU_TOP/perl/ADX/util/Java.pm file and adding the option. Here are the new lines in the file:
    if ($javaCmd =~ /jre/) {
        return ("-mx256m -Djava.security.egd=file:///dev/urandom", 0);
    } else {
        return ("-Xmx512M -Djava.security.egd=file:///dev/urandom", 0);
    }
    After this change, adgentns completed successfully and both the “tnsnames.ora” and “listener.ora” files were created in the $TNS_ADMIN location.
    If you are on R12, please see these docs (search for "Djava.security.egd").
    Oracle E-Business Suite Applications Technology Readme for Release 12.1.3 (R12.ATG_PF.B.delta.3, Patch 8919491) [ID 1066312.1]
    Oracle E-Business Suite Release 12.1.3 Readme [ID 1080973.1]
    For adgendbc, I had to manually modify the template for the script (there are 10 java command lines in the file).
    $AD_TOP/admin/template/adgendbc_ux.sh
    For txkExecSetJaznCredentials.pl, this file was modified:
    $FND_TOP/patch/115/bin/txkSetJaznCredentials.pl
    Can someone help me on this?
    Are you trying to avoid manual changes? If yes, please make sure you have all the patches mentioned in the docs referenced above applied and that you have all the necessary configuration/setup.
    Thanks,
    Hussein
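    As an aside, the non-blocking pool's behaviour is easy to demonstrate outside the JVM: reading it always returns immediately, which is exactly why pointing java.security.egd at /dev/urandom stops the JDBC hangs on headless boxes. A small illustration (Python, not part of the EBS stack):

```python
import os
import time

# os.urandom() reads the non-blocking pool (/dev/urandom on Unix), the same
# source that -Djava.security.egd=file:///dev/urandom hands to the JVM's
# SecureRandom. Unlike the blocking pool, it never waits for entropy.
start = time.monotonic()
data = os.urandom(16)
elapsed = time.monotonic() - start
print(f"read {len(data)} random bytes in {elapsed:.6f}s")
```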

  • Custom Expression creation in Process controlled  workflow

    Hi All,
    I am working on SRM 7.0 and utilizing process-controlled workflow to model my approval workflow scenario. Below are the steps that I have done, but I still have not got the desired result. Maybe I am missing something which the forum members can point out.
    step 1: I have copied the standard function module /SAPSRM/WF_BRF_0EXP001 and made changes in the code based on the logic to populate ev_value.
    Basically, in the code it is done this way:
    * initialize
      EV_TYPE = 'B'.
      EV_LENGTH = 1.
      CLEAR EV_CURRENCY.
      EV_OUTPUT_LENGTH = 1.
      EV_DECIMALS = 0.
      EV_VALUE = ABAP_FALSE. " no processing
      EV_DATA_MISSING = 'X'.
    * get event object
      LO_WF_BRF_EVENT ?= IO_EVENT.
    * get context container from BRF event
      LO_CONTEXT_PROVIDER = LO_WF_BRF_EVENT->GET_CONTEXT_PROVIDER( ).
      CALL METHOD LO_CONTEXT_PROVIDER->GET_DOCUMENT
        IMPORTING
          EV_DOCUMENT_GUID = LV_DOCUMENT_GUID
          EV_DOCUMENT_TYPE = LV_DOCUMENT_TYPE.
      CASE LV_DOCUMENT_TYPE.
        WHEN 'BUS2121'.
    *     Get shopping cart instance
          LO_WF_PDO_SC ?= /SAPSRM/CL_WF_PDO_IMPL_FACTORY=>GET_INSTANCE(
            IV_DOCUMENT_GUID = LV_DOCUMENT_GUID
            IV_DOCUMENT_TYPE = LV_DOCUMENT_TYPE
            IV_PDO_EVENT_HANDLING = ABAP_FALSE ).
    *     custom logic with the GUID of the shopping cart, then populate the value accordingly
          EV_VALUE = ABAP_TRUE.
          CLEAR EV_DATA_MISSING.
    step 2: create an event ZEV_001 linked to expression ZEX_001.
    step 3: expression ZEX_001 is of type 0FB001, and the expression is as below:
      ZEX_002 = OB_WF_TRUE.
    step 4: ZEX_002 is of type 0CF001 and the output result type is B. The attached function module is the one created above by copying /SAPSRM/WF_BRF_0EXP001; no parameters are provided.
    step 5: the process step level config is completed.
    Now, when I create a shopping cart and go for approval preview, it gives an "exception occurred" error. I can't see any dump in the system. However, if in the custom function module code I do not clear EV_DATA_MISSING, then I do not get the error, but my step is not executed. In the SLG1 log I see the process level executed but returned as space.
    As per other posts in the forum we have to clear EV_DATA_MISSING, but that is causing the exception error for me. In the SLG1 log, however, I can see the step's expression executed with return = X.
    I hope I have made myself clear; feel free to ask for any more info.
    I have the questions below:
    1) do we need to copy /SAPSRM/WF_BRF_0EXP001 or /SAPSRM/WF_BRF_0EXP000 for creating a custom FM expression?
    2) what does the check box "Calculation of Parameter in Function Module/Badi/method" do?
    3) how can I do debugging for such an FM expression? Probably saving the cart and then debugging the expression?
    Thanks in advance for any help provided.
    Cheers
    Iftekhar Alam

    Hi ,
       Just put the below code in the FM and try..
        DATA: lo_wf_brf_event     TYPE REF TO /sapsrm/cl_wf_brf_event,
            lo_wf_pdo           TYPE REF TO /sapsrm/if_wf_pdo,
            lo_ctxt_provider    TYPE REF TO /sapsrm/cl_wf_context_provider.
      DATA: lv_document_guid    TYPE /sapsrm/wf_document_guid,
                lv_msg              TYPE string,
                lv_document_type    TYPE /sapsrm/wf_document_type.
      DATA: lx_exception        TYPE REF TO /sapsrm/cx_wf_abort.
    *=======================================================================
    * Preset return parameters
    *=======================================================================
      ev_type          = /sapsrm/if_wf_rule_c=>type_bool.
      ev_length        = 1.
      CLEAR ev_currency.
      ev_output_length = 1.
      ev_decimals      = 0.
    * get event object
      IF NOT io_event IS BOUND.
    *   BRF event object not bound. No further execution possible.
        MESSAGE e089(/sapsrm/brf) INTO lv_msg.
        TRY.
            CALL METHOD /sapsrm/cl_wf_brf_ccms=>send_message( ).
          CATCH /sapsrm/cx_wf_abort INTO lx_exception.
            ev_data_missing = /sapsrm/if_wf_rule_c=>brf_data_missing.
            EXIT.
        ENDTRY.
        ev_data_missing = /sapsrm/if_wf_rule_c=>brf_data_missing.
        EXIT.
      ENDIF.
    *=======================================================================
    * Get purchasing document
    *=======================================================================
    * get event object
      lo_wf_brf_event ?= io_event.
    * get context container from BRF event
      lo_ctxt_provider = lo_wf_brf_event->get_context_provider( ).
      IF NOT lo_ctxt_provider IS BOUND.
    *   BRF context container object not bound. No further execution possible.
        MESSAGE e090(/sapsrm/brf) INTO lv_msg.
        TRY.
            CALL METHOD /sapsrm/cl_wf_brf_ccms=>send_message( ).
          CATCH /sapsrm/cx_wf_abort INTO lx_exception.
            ev_data_missing = /sapsrm/if_wf_rule_c=>brf_data_missing.
            EXIT.
        ENDTRY.
        ev_data_missing = /sapsrm/if_wf_rule_c=>brf_data_missing.
        EXIT.
      ENDIF.
    * get document
      CALL METHOD lo_ctxt_provider->get_document
        IMPORTING
          ev_document_guid = lv_document_guid
          ev_document_type = lv_document_type.
    * get instance
      lo_wf_pdo ?= /sapsrm/cl_wf_pdo_impl_factory=>get_instance(
        iv_document_guid = lv_document_guid
        iv_document_type = lv_document_type ).
      CASE lv_document_type.
        WHEN 'BUS2121'.
    *     Pass the lv_document_guid to get SC details.
          IF <condition>. " your check on the shopping cart details
            CLEAR ev_data_missing.
            ev_value        = c_x.
          ELSE.
            ev_data_missing = c_x.
            ev_value        = c_blnk.
          ENDIF.
        WHEN OTHERS.
          ev_data_missing = c_x.
          ev_value        = c_blnk.
      ENDCASE.
    Make sure the check expression has the check as shown below:
    ZEX_002 = 0C_WF_B_FALSE
    FYI:
    You don't need expression ZEX_001 to check the result of ZEX_002, because both expressions' results are of type 'B'. You can directly attach ZEX_002 to the main event ZEV_001.
    Regards,
    Saravanan
    Edited by: Saravanan Dharmaraj on Jun 23, 2010 12:25 PM

  • Preflight keeps saying  "Unable to save the PDF File after post processing"

    I'm at a loss how to overcome this. Spent almost a whole day, together with another person, trying to fix it with no success!
    I use Adobe pro CC on a PC
    I usually receive pdf files from this one client who edits and formats a book in Mac Pages. Up until a few days ago I had no problems converting the client's pdfs into pdf/x3, but the last three versions of the latest file have stumped me.
    Just to test, I first just tried to convert the file (33MB), unchanged, to pdf/x3 using the save as other option - message reads  "the document has been saved, however, it could not be converted according to the selected standard profile: convert to PDF/X. Please use preflight with the profile "Convert to PDF/X" in order to identify those properties of the document which prevent it from being compliant to this profile"
    if I then choose under Profiles - convert to PDF/X3 - it says no problems found, and appears to have saved the file. If I try to save again as a pdf/x3, just to make sure, it then tells me it's not pdf/x3 compliant
    OK - so then I go back to preflight - and choose the standards function - then pdf/x3, then continue with the default colour profile. About halfway through the conversion, at the point of saving the file,  I get the message "unable to save the pdf file after post processing"
    So far I've had no luck figuring out what this is.
    I then choose the option of "verify compliance of pdf/x3" - message reads "pdf/x3 version key (GTS_PDFXVersion) missing" and "Trapped key not true or false"
    Help!! How can I be sure the file converted / or not?
    kim

    Yes, I was/am aware of Preflight's inability to play nicely over cloud technologies in certain cases, especially wrt Standards technologies; this will be partially addressed in an upcoming version of Preflight, without saying too much. But the same thing could theoretically have happened if the file was located on another local network client or server. Leaving aside the argument that this may violate the Acrobat User Agreement - in purchasing the software, the user agrees to employ its functionality on a single host system, which precludes host-client-based scenarios - this simply is not a supported use, meaning the user may not expect that it will work at all if the application's requirements are not observed. There do exist server solutions for Preflighting files within networks, but Acrobat, and by extension Acrobat Preflight, is not one of those solutions, and (still) belongs in the single-host desktop environment.
    However most (99.9%) functions within Acrobat <-> acrobat.com file exchange are supported, file syncing across multiple devices will soon be supported, but Standards compliance is still admittedly a problem at this point.  Some testing has been done using 3rd party cloud technologies starting with enterprise-based solutions, such as Office 365, and this will continue to ramp up to include other 3rd party products.
    As for the second point, Preflight will usually change the PDF version to be compliant; are you saying that it was unable to do so in this case? It seems that this error should have popped up during the normal Preflight conversion attempt. Personally I think solving a workflow problem using the print path is a bit of a heavy-handed approach, but if it helped and the results are acceptable, then that is good. Since that path is non-existent on a Mac, where one needs to Save as Adobe PDF from the Print dialog's PDF drop-down menu, I am assuming your workflow involved file creation on a Mac, then further processing on Windows using the PDF printer. I am wondering if a simple resave/Save As... to PDF with overwrite on a Mac, or Preflighting the file using a PDF version compatibility profile before the PDF/X conversion, would have helped. Since there are such a multitude of methods by which a PDF can be created, there are also many ways within Acrobat that a user can shape the file to be compatible with the expected workflow, i.e. 'many ways to skin a cat', without being morbid.
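    For quick triage, the two specific errors quoted earlier (missing GTS_PDFXVersion, bad Trapped key) can be checked with a crude byte-level scan before re-running the conversion. This is only a sketch with a hypothetical helper name; keys stored inside compressed object streams will not be found this way:

```python
import re

def missing_pdfx_keys(path):
    """Return which of the two PDF/X-related keys appear to be absent.

    Byte-level scan only: a real check needs a PDF parser, since
    dictionary keys can live inside compressed object streams.
    """
    data = open(path, "rb").read()
    missing = []
    if b"/GTS_PDFXVersion" not in data:
        missing.append("GTS_PDFXVersion")
    if not re.search(rb"/Trapped\s*/(True|False)\b", data):
        missing.append("Trapped")
    return missing
```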

  • Why does Premiere Pro CS5 process audio source before exporting clip in AME CS5?

    Hello everyone, my first post here. I am currently trying out the trial CS5 Master Collection on my MacBook Pro. I've been using Premiere Pro CS5 to put together family videos. I select a certain portion of a sequence and queue it for export in Media Encoder CS5 in the format H.264 for iPod with video and audio.
    Now in AME I have a queue list of about 5 clips from the sequence. But when I hit the Start Queue button, the bottom left of the progress bar states "Adobe Premiere Pro processing audio source from [then file name]"  for all 5 clips from the sequence then creates the first output file.
    Then it moves on to encode the second and again states "Adobe Premiere Pro processing audio source from [then file name]" for all 5 clips of my sequence, then spits out only the second output file.
    This continues for the remaining 3 output files. The total sequence length is ~32 minutes with transitions and it takes about 35 minutes to export which is all consumed by the process audio source of the 5 clips in my sequence for 5 separate times.
    Is there a certain reason why Media Encoder or Premiere Pro does this process of audio source? Is this process necessary or needed? I experimented with choosing different output formats but the results are the same. Any help would be appreciated, thanks.

    Now that's pretty good follow-up to your first post, and quickly too! Nice detective work.
    As you have found, there is basically a Render/Replace operation going on in PrPro, to get the material in the correct form for AME.
    Thank you for reporting, as it saved me, and others, from basically saying what you said.
    Good luck, and sorry that you beat us to the punch.
    Hunt

  • Inbound idoc processing by workflow

    Hello,
    How can I find if an inbound IDoc has been processed via workflow ?
    following are details -
    1)status records 50 and 64 for the idoc show RFC user ID whereas 62 and 53 show WORKFLOW_020 user ID.
    2)manual re process in test system using WE19 has all four status records by login user ID.
    3)BD67 values for this process code was checked for the start events.
    4)I checked standard task 30200090 following oss note 325361
    5)The invoice document posted via this idoc has created by user ID as WORKFLOW_020 
    I do not have full knowledge of inbound idoc processing via workflow and I am in the process of learning the same. Kindly help.
    thank you very much in advance,
    Bhakti


  • Get Page Count of PDF using Publication Post Processing Plugin

    Hi,
    I am using the publishing functionality of BOXI 3.1 to create a large volume of PDFs. I have a report that I am publishing using a publication with a post-processing plugin applied after distribution. I need to get the total page count of the PDF and store this value in the database. Because I am already using a post-processing plugin after distribution, it would be ideal to extract it there, but I cannot figure out how to get the value at this point.
    Currently we parse the actual binary PDF file, but I know that Crystal and BOXI know this total page count at some point in the processing cycle. Does anyone know how to get at the total page count of each PDF/report instance?
    Thanks,
    Kristina

    I would post-process the PDF file as you're doing.
    The total page count isn't metadata - it's something computable from the report instance, that is lost after the PDF is generated.
    Trying to work with that would lead to a more complex workflow - i.e., schedule to report format, open the report instance using a reporting SDK, calculate the total page count, export using the reporting SDK to PDF, add the PDF to the publication artifact and remove the report instance, other steps I may have missed....
    Sincerely,
    Ted Ueda
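    For what it's worth, the binary-parsing approach described above can be fairly short when the page dictionaries are stored uncompressed. A sketch (hypothetical helper, not a BusinessObjects SDK call):

```python
import re

def rough_pdf_page_count(path):
    """Count page objects by scanning raw bytes for "/Type /Page" markers.

    Reliable only when page dictionaries are not inside compressed object
    streams; the word boundary keeps the "/Type /Pages" page-tree node
    from being counted as a page.
    """
    data = open(path, "rb").read()
    return len(re.findall(rb"/Type\s*/Page\b", data))
```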

  • Post processing agent in partner profile

    Hi
    Inbound idoc fails. The partner profile has a user (US) in the post processing:permitted agent, both on the actual partner (LS) and on the inbound message type. I would expect to see the error in the inbox of the business workplace, but I see nothing! - what have I done wrong?
    Cheers
    Rob

    Hi,
    I think for this you have to configure the workflow notifications for inbound process.
    Check if they are in place.
    Check this link.
    http://sap-f3.blogspot.com/2009/11/workflow-configuration-of-inbound-idoc.html
    Hope it helps.
    Regards,
    Raj

  • Minimizing loss on post-processing of jpegs

    Today somehow the setting on my camera had raw file saving turned off. To try to minimize loss from post processing of the jpegs I assume I should save the edited files as PSD and develop my final pictures from that? Are there any other helpful tips someone can give for my predicament?

    KarlKrueger wrote:
    Today somehow the setting on my camera had raw file saving turned off. To try to minimize loss from post processing of the jpegs I assume I should save the edited files as PSD and develop my final pictures from that? Are there any other helpful tips someone can give for my predicament?
    Yes, save the edited files as PSD for further editing.
    Another good option is to open the jpegs in camera raw depending on your OS and Elements version.
    Do all the editing you can in camera raw. All those edits will be calculated in 16-bit mode and in the internal wide-gamut ProPhoto color space. No pixels will be changed; only the slider settings will be recorded in the metadata section: it's a non-destructive workflow preserving your original, and you don't need to duplicate your file in another large-size format if you simply click 'Done'. If you want to do further editing in the editor, then save your editor edits in a PSD or TIFF format version set. You can even open in the editor in 16 bits if you want, for further global adjustments, before you convert to 8 bits for layers or local tools (I don't: the situations where 16 bits is useful are dealt with in the ACR conversion).

  • Idoc Post processing

    Dear Experts,
    I have a scenario where, once the delivery IDoc is kicked off, I need to kick off a custom IDoc. Can someone please guide me on how to handle this?
    Please also guide me on how to find a user exit for post-processing of an outbound IDoc.
    Thanks for your time.
    -Amit

    Hi Amit,
    One suggestion would be to create your custom IDoc in a standard user exit called during creation of the delivery IDoc. Another way is to create a workflow and trigger it with an event once a change pointer is created for your delivery document. This workflow would be a single-step task calling a business object method, which internally calls a custom function module to populate and trigger the IDoc.
    In case you go for the second solution and face any issues, please let me know. We had a similar requirement in our project and solved it using solution 2.
    KR Jaideep,

  • Photoshop post processing

    I'm hitting a limitation with the way Lightroom handles TIFF files.
    I have nearly two thousand scanned slides saved as TIFF. In addition to the normal R, G and B channels, each TIFF file includes an additional "infrared" alpha channel. This channel is created by the scanner using an infrared light to isolate dust and scratches on the film surface. I can then use a Photoshop action to read this infrared channel and eliminate most of the dust and scratches from the TIFF image. This is a step I do when exporting the TIFF as a JPEG, for instance. I do not want to overwrite the original TIFF file: it is like my "raw" scan, and I want to keep it in case I come across a better de-dusting technique in the future.
    Now the problem. Lightroom is my tool of choice for cataloging and editing photos. It can read TIFF files, but any edit applied during the export stage discards the infrared alpha channel; only the R, G and B channels are kept. This means I can't use a Photoshop droplet or action to remove the dust as a post-processing step. I currently do not see a solution short of either dropping Lightroom or keeping duplicate images (the original raw scan and the de-dusted version). Keeping duplicates is not an appealing option; it would take hundreds of GB of extra space to start with.
    Has anyone come across a similar scenario? How can one handle Photoshop post processing that relies on alpha channel (or layers).
    Thanks
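    One workaround is to split the infrared channel into a small single-channel sidecar file before import, hand Lightroom an RGB-only copy, and bring the sidecar back when running the dust pass. A minimal sketch using Pillow (the function name and file paths are my own, and it assumes the scan opens as a plain 4-band image; real scanner TIFFs may store the IR data differently):

```python
from PIL import Image

def split_infrared(tiff_path, rgb_path, ir_path):
    """Split a 4-channel scan into an RGB file (for Lightroom)
    and a single-channel infrared sidecar (for the dust pass)."""
    img = Image.open(tiff_path)
    # Assumes the extra channel opens as a 4th band: R, G, B, infrared
    r, g, b, ir = img.split()
    ir.save(ir_path)                        # keep the IR data around
    Image.merge("RGB", (r, g, b)).save(rgb_path)
```

    The sidecar costs one extra channel per slide rather than a full duplicate image, so the storage overhead is roughly a quarter of keeping de-dusted copies.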

    Hi Jaap,
    Here are the steps that I use to implement my own ICE dust removal. I'm on Photoshop CS3 mac by the way.
    1) Copy the alpha channel (infrared) into a new channel named "Dust".
    2) Adjust the levels on the Dust channel to isolate only the dust. You should get something like below. Note the markers: I would overlap them, but I've spread them very slightly here to show where they are. I always err on the side of removing all image detail from this channel, even if that means some of the dust/scratches won't make it in. Of course, you can also edit the channel and manually add (by painting in Quick Mask mode) any missing dust, or remove any image detail left.
    3) Select the RGB channel
    4) Load the Dust channel as a selection (Select -> Load Selection, Channel: Dust, Invert: checked)
    5) Expand the selection by 1 pixel (Select -> Modify -> Expand)
    6) Apply the Dust and Scratches filter (Filter -> Noise -> Dust & Scratches, Radius: 6, Threshold: 0)
    The radius to use depends on the resolution of your scan. Play with it and try to find the smallest radius that will remove most of the dust. If you set the radius too large, the fill color will average surrounding pixels farther away, which isn't as accurate for fine details. A radius of 6 seems to give me good results with a 4000 ppi scanner like the Nikon CoolScan LS 5000.
    7) The work is done. Clean up by deselecting (Select -> Deselect, or simply Command-D on a Mac).
    I've created an action for step 1 and another for steps 3 to 7. Step 2 is fully manual unfortunately. I don't think there is a way to set the markers programmatically like in the image above since the histogram varies for each image.
    I also keep my alpha channel intact in case I find some better way to create the dust channel in the future. I'm a firm believer in the non destructive workflow. I won't scan these slides again. It may not even be feasible soon. Film scanners are being retired from the market much faster than I would have thought just a year ago.
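    The same sequence can also be scripted outside Photoshop for batch runs. A rough numpy/scipy sketch of the idea (the function name and default values are my own, and scipy's median filter stands in for the Dust & Scratches filter, which is median-based):

```python
import numpy as np
from scipy import ndimage

def remove_dust(rgb, infrared, threshold=128, radius=6):
    """Remove dust/scratches from an RGB scan using its infrared channel.

    rgb:      (H, W, 3) uint8 image
    infrared: (H, W) uint8 infrared channel (dust blocks IR, so it reads dark)
    threshold and radius are assumed defaults; tune them per scan.
    """
    # Step 2: threshold the infrared channel to isolate the dust
    dust = infrared < threshold
    # Step 5: expand the selection by 1 pixel
    dust = ndimage.binary_dilation(dust, iterations=1)
    # Step 6: median-filter each channel (stand-in for Dust & Scratches)
    size = 2 * radius + 1
    filled = ndimage.median_filter(rgb, size=(size, size, 1))
    # Replace only the dust pixels; everything else stays untouched
    out = rgb.copy()
    out[dust] = filled[dust]
    return out
```

    Because the median is only written into the dilated dust mask, the rest of the frame keeps its original pixels, which matches the selection-based approach in the steps above.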
    Hope this helps
