Workflow overhead

Hi Experts,
I have always struggled to find a method or tool that can give me an estimate of the system overhead of implementing workflow. I know that the answer for the most part 'depends' on a number of things, such as the scope of the implementation, the number of users, the number of workflows, and the expected number of work items per day. Still, is there any way I can get a fair estimate of the system resources workflow needs, so that the server sizing team knows what it takes to implement it? Being a workflow consultant, I know various ways to optimize workflow performance and the dos and don'ts (e.g. keeping the event trace off in PROD, checking whether the event queue can be enabled, exploring the possibility of reducing WF table sizes by archiving/deleting old work items while keeping audit requirements in mind, etc.), but I have never been able to figure out a method to calculate/estimate the system resource requirements and help the sizing team plan for them.
PS: I have posted this question on the BPM and Workflow forum as well, and thought I would also post it in this forum in case the ABAP experts can help me out.
Thanks,
Saurabh

Hello,
You can create a simple approval WF to send email. This approach will help if you need to send an email for all documents.
http://office.microsoft.com/en-in/sharepoint-help/all-about-approval-workflows-HA102771433.aspx
>whenever a document requires approval
If you have a specific requirement and a column that captures it, then SharePoint Designer is a good option for sending an email based on the column value.
Hope it helps.
Hemendra: Yesterday is just a memory, Tomorrow we may never see

Similar Messages

  • Large Still Images into PE - One Workflow

    Everyone wants the highest quality that they can obtain when doing their videos. It’s natural to want the best. Well, when dealing with still images, bigger is not necessarily better, for two reasons. First, overly large still images can really tax a system and second, one is limited to the frame size of the video, so these have to be resized somewhere - this resizing can be in the NLE (Non Linear Editor) program, or in an image processing program like PS (Photoshop), which does a better job anyway. Doing this in PS, or PSE, will result in better resized images, and they are easier for the NLE to work with. Quality is as high as your Project’s Preset will allow, and you are more efficient, with fewer crashes, slowdowns and hangs. It is a win-win situation.
    Here is my normal workflow when dealing with still images. This workflow is for NTSC 4:3 720x480 with a PAR (Pixel Aspect Ratio) of 0.9. If your Project’s Presets are different, use those specs to resize to.
    Since I shoot my still images in RAW, I Copy my files from the CF card to my system and catalog these images by location, subject and date (if necessary). I’ll do a quick conversion and Save_As Adobe DNG for backup. I then process these RAW images in PS with ACR (Adobe Camera Raw), correcting them and then doing a Save_As PSD into a sub-folder. All of this is in my still photo library.
    Normally, I will edit these PSD’s to find the images that I wish to use in a Video Project, and will Copy the selected images to another folder. You’ll see that I work with a lot of Copies, so my original files are always untouched and stored elsewhere. This guards against anything happening to them.
    At this point, I’ll decide how I wish to use these selected images in my Video Project. Let’s just say that they are all horizontal images, and are still full-size from my camera. As stated, my Video Projects are DV-NTSC 4:3 720x480 PAR 0.9. [Remember, your Video Project may vary, so you will need to plug in the dimensions for YOUR Video Project in that case.] I also will have done my Cropping on each image individually, to get them to 4:3 Aspect Ratio. I do this by eye and by hand, rather than via an Action, because I want full aesthetic control.
    In PS, I have a set of Actions for Video. An Action is like a Script, but less powerful and less involved in the writing. As I have already done all of my image enhancements and additional processing before I did my Copy to the selected folder, I only have to worry about my Action resizing these selected images for use in my Video Project. My Action here is to resize to 720x480 with a PAR of 0.9, and I normally use the Action that does this with a particular resizing algorithm, Bicubic-Smoother (though I also use Bicubic-Sharper on occasion).
    For the next step, I go to my folder structure (remember, this folder contains copies of my selected still images in PSD format), and create a new sub-folder "[Project Name]_720x480." Back in PS, I choose File>Automate>Batch. Here I set my Source Folder, my Destination folder and the Action to perform. In my case, it’ll be the Destination Folder, that I just created, [Project Name]_720x480, and my Action will be my NTSC 4:3 720x480 Smooth. I check to have the Open command by-passed, because I do not need to see this take place on my monitor. When I hit OK, PS grabs all files in my Source Folder, runs the commands of my Action and does a Save_As for all files into my Destination Folder. I can process hundreds of large images down to a great 720x480 PAR 0.9 via Bicubic-Smoother interpolation, in moments. Now, I’m ready to go. Last, I Copy my Destination Folder to my Video Project’s folder hierarchy (usually on another HDD), and then Import these processed stills into my NLE.
    What if I need to pan on one, or more of these images, while they are zoomed out completely? I don’t have enough pixels in my horizontal dimension to do this. I am just filling the frame with my still. Well, if I find that there are such images, I go back to my folder with the full sized images in my still images library, and select the ones that need to be larger. I run another Action on these, but it’s one that resizes to something larger than 720x480, say 1000x750. Now, I have another Destination Folder with the name [File Name]_1000x750. I’ll Copy this over to my Video Project, and Import these into the NLE. Here, I can go to Project Panel and remove the 720x480 versions if I so choose, but since a Premiere Project file (.PRPROJ or .PREL) is only an XML database, I may just leave them. It does not contain any media files, just links to where they are on the system and to what operations are performed on them.
    By doing my resizing in PS, rather than in Premiere, I have accomplished two things:
    1.) I have better quality resized images, using the algorithms in PS, plus have a choice of several interpolation methods to work with.
    2.) I have lessened the processing load on my NLE and on my system, while doing the editing.
    I get higher quality and lower resource overhead - hence my reference to "win-win."
    Now, back to my aesthetic control. I do not do any automatic zooming or panning. If one allows the NLE to do this, then they will probably want to process all of their images to 1000x750 (remember, this is for an NTSC 4:3 Project, so you will need to calculate what YOUR Project will require).
    The two programs that I use are Photoshop and Premiere Pro, but Photoshop Elements can do the same things, though the exact commands might be different. Premiere Elements will handle the resized still images, just like Premiere Pro and the only difference will be the terminology used when one wishes to Import the still images.
    I also keep all of my images in .PSD (the native format of PS), and do not convert to JPEG, or other. If one’s camera shoots only JPEG, I suggest writing the Action to do the Save_As to .PSD, as another JPEG compression will cost one quality. Yes, the JPEG’s will be smaller, but remember we are looking for the ultimate quality, so larger file sizes are just part of that equation.
    One does not have to deal with all of the Copies, as I do. However, this allows me to go back to the originals, or to the processed full-sized .PSD’s at any step along the way. There is only one thing worse than not being able to go back to an intermediate version with full Layers and Adjustment Layers, plus any Alpha Channels, and that is finding out that you’ve lost your original RAW and DNG backups! That’s why I do a lot of Save_As and also work from Copies all along the way.
    Hunt
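    (For anyone who wants to script that batch step outside Photoshop: a minimal Java sketch of the same 720x480 resize. The folder names are hypothetical, ImageIO cannot read PSDs so it assumes flattened PNG/TIFF/JPEG exports, and PAR flagging is left to the NLE.)
    import java.awt.Graphics2D;
    import java.awt.RenderingHints;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;
    // Rough stand-in for the "NTSC 4:3 720x480 Smooth" Action: scale every image in a
    // source folder to 720x480 with bicubic interpolation and save it to a destination folder.
    public class BatchResize720x480 {
        public static void main(String[] args) throws Exception {
            File source = new File("Selected_Stills");           // hypothetical source folder
            File destination = new File("ProjectName_720x480");  // hypothetical destination folder
            destination.mkdirs();
            File[] files = source.listFiles();
            if (files == null) throw new IllegalStateException("Source folder not found");
            for (File file : files) {
                BufferedImage in = ImageIO.read(file);
                if (in == null) continue;                         // skip anything that is not a readable image
                BufferedImage out = new BufferedImage(720, 480, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = out.createGraphics();
                g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                                   RenderingHints.VALUE_INTERPOLATION_BICUBIC);
                g.drawImage(in, 0, 0, 720, 480, null);            // assumes the image is already cropped to 4:3
                g.dispose();
                ImageIO.write(out, "png", new File(destination, file.getName() + ".png"));
            }
        }
    }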

    Your workflow looks good. I do similar, but use PS, in lieu of LightRoom. I also do DNG's for my archives.
    Provided that one chooses a JPEG compression algorithm setting that does not do too much compression, I doubt that anyone, but the most very critical, could tell the difference in Video. Most of my tests on PSD vs JPEG have been for print. There, one can more easily detect the differences. Video "hides" some of that.
    To date, I have not had a Project where the Asset size differences between equally sized PSD's vs JPEG's caused any slowdown, or problem. There could be a resource savings with the smaller JPEG files, but there is a tiny bit of overhead dealing with the JPEG compression. I have never tested this, so can only guess that the smaller Asset size of the JPEG would trump that overhead - just a guess on my part.
    For me, keeping the images in PSD does save a tiny bit of work in my Action (basically one less operation to perform), but I doubt that one could measure that time difference, even over the automation of hundreds of images. Besides, it's only one additional line in the Action. My feelings on JPEG vs PSD are firmly based in my print experience, and I am probably being too critical with images going to video. When I move up to HD and BD authoring, I need to apply a very critical eye, to see if I can tell the differences on an HD TV. So long as one does not apply too much JPEG compression, the differences should be very slight, at the worst, and maybe not even noticed, at best.
    I do minimize the impact of many files on my Project by sizing to what I need. If I will not be doing any pans on zoomed-out images, I size to my Project. For pans on zoomed-out images, I calculate just what I will need for those pans, and might end up with several groups of sizes, to accommodate each. Still, the vast majority will be sized to exactly what I need for the Project - very few extra pixels.
    In my case, and yours too, I have my RAW, my DNG, my working Layered PSD's, and then my sized output. I always keep all working PSD's, as I might change my mind, or my client might change theirs, and I do not want to have to go back and redo work, if I still have those working files. I also do as little destructive editing, as I can, using Dupe Layers, and Adjustment Layers, whenever possible. If I can, I never Flatten, or Merge Layers, so I can make any/all changes at any time, and only have to do the resizing via the same Actions. That is basically a "one-button" solution, once I have made the changes required.
    Good luck,
    Hunt

  • New to Mac: help me adjust my Windows workflow to iPhoto

    I recently switched to Mac, mostly in the hopes of spending less time thinking about how the computer works and more time doing what I want to do.
    In fact, what made me finally start seriously thinking about a Mac is when I heard about the iPhoto '08 events feature, where it automatically split the photos based on the dates. I thought that was genius in how simple, yet effective it was.
    I take quite a few photos. I take a lot of random photos of my car (I'm a car guy), but I also take some photos at parties, friend visits and family visits, etc. I often take several photos of the same thing, because, heck, there's plenty of space on the memory card, and maybe one of the subsequent photos will be better than the first.
    One common scenario for me is going to a social gathering of some kind and then posting the photos of that event online.
    My Windows XP workflow used to be as follows:
    1. Connect camera
    2. Import photos into a folder named after the event using Windows Photo import. If several different events, select several sets and import them separately into separate folders. E.g. "My Pictures\Spring Cleaning Day Apr 2008"
    3. Browse the photos using Windows File Viewer and delete any obviously poor photos and select the least blurry of several duplicates.
    So the above is fairly straightforward. Now, I need to get these photos up online. A while back, I set up for myself a website that runs Gallery 1.x software, so I could just upload photos via a web interface and have them viewable with automatic thumbnails.
    The thing is: I don't want to upload ALL images online. Just most of them (the best).
    4. Open the photos I'm interested in a photo editor, one by one.
    5. Crop, adjust levels, shadows, colors, etc.
    6. Resize photo to a "large but not too large" web size, usually either 1200 or 1400 pixels wide.
    7. Adjust sharpness on the resized photo to make sure it looks good when viewed at actual pixel size.
    8. Save photo in a subfolder called "forweb", e.g. My Pictures\Spring Cleaning Day Apr 2008\forweb. This way, I get to keep all the originals from the camera and my resized web-ready photos separate.
    9. Upload all photos from the "forweb" folder using a web interface, typing up descriptions for each.
    When I'm done, I have a gallery that has thumbnails, mid-size versions of the photos (800 pixels wide, for easy viewing) and a "full-size" version for a detailed view (but not as huge as the one that came out of the camera... so it's not original size). In fact, the original size photos are huge (as I take them at 7.1MP resolution with the lowest compression), about 2-3MB each, and so I'd never want to upload a hundred 3MB photos. A waste of space.
    Overall, the result is pretty good: most people view the midsize photos and save full-size (1200 or 1400 pix wide) photos of things they particularly like.
    However, the process is too time consuming. In particular, it takes forever to go through each photo and resize it and crop it. The process of resizing and saving each individual photo in the separate "forweb" folder is quite a bit of overhead. Sometimes I kind of dread having to post photos up online after events, and it should totally not be that way!
    I bought a .Mac subscription after seeing the fancy web galleries. I am hoping that the combination of iPhoto and .Mac can help me get my photos up on the web quicker.
    So now, the iPhoto workflow. What should it be?
    Here's my guess:
    1. Connect camera.
    2. Import all photos, naming each event after importing.
    3. Delete poor photos or blurry duplicates.
    4. Identify photos I want to display on the gallery (those worth keeping) and hide all others.
    5. Crop and adjust shadows/colors/etc.
    6. Select "Web gallery" to share the event.
    And then I'm done.
    So that sounds pretty good right there. Except, I have several reservations and questions:
    1. Is it better to mark photos I don't want in the web gallery as hidden (so I can just upload the entire gallery) or should I just select the photos I want to upload online into the web gallery? It seems to work either way. On one hand, it seems better to hide the photos, because then when they get synced to my iPhone, I have the most relevant photos and not the "extras." Also, if I'm showing photos from my laptop, I'm also showing the cream of the crop and not everything. On the other hand, the concept of hiding photos is kind of unusual to me. Feels strange to "hide" photos.
    2. What to do for a web gallery full size that's not as huge as the original camera size? When uploading to .Mac, I only have an option of Optimized or Actual Size. Optimized size is 1024 pixels wide. That's too small for detailed views or for wallpaper purposes. The problem is that the "Actual Size" from my camera is huge (~3000 pixels wide). I don't really want to burden people with the downloading of the 2-3MB "Actual Size."
    However, it seems that it's tricky to implement such a 1200-1400-pixel-wide "full size" with iPhoto. There is no resize option within iPhoto that I could find. The best I could find was a crop with a constrain size option. I guess I would have to constrain-size crop to something like 1400x1000 first? And then crop again to get the shape I want?
    Also, sharpness adjustment seems useless when adjusting 7MP photos. In my Windows workflow, I found sharpness much more useful when tweaking photos after they have been resized to this "full-size" 1200-1400 wide resolution. This again means that I'd have to do the crop "trick". It feels strange though that everything else is so straightforward, but resizing photos within iPhoto isn't.
    I guess that's where the whole workflow question is. The iPhoto workflow is to keep your originals intact and maintain adjusted duplicates. But resizing is not a suggested part of this workflow--instead, it is expected that any resizing is done on the way "out" of the library (e.g. File->Export or web gallery publishing).
    I wish I could introduce an intermediate "optimized" setting for Web Gallery upload (although then I don't have that precise control of sharpness after resizing).
    Maybe I am overthinking this and, given that I have 10GB of storage space with .Mac, I should just give up and let it upload my 3000-pix wide 2-3MB JPG originals (i.e. use the "actual size" setting)?
    3. Can anyone provide some practical uses for the "albums" setting and contrast their use vs events? I just want to get a better insight into the practical difference. It seems that albums are not very useful or rather would be rarely used, given the main categorization of all photos into events. It seems that the only use for albums would be to bring together photos across different events. For example, I could make an album of all of the photos of my car's odometer or put together my best random city shots. Maybe I just answered my own question? Still, interested into your use of albums.

    ilp
    Welcome to the Apple Discussions.
    So now, the iPhoto workflow. What should it be?
    This is pretty good:
    1. Connect camera.
    2. Import all photos, naming each event after importing.
    3. Delete poor photos or blurry duplicates.
    4. Identify photos I want to display on the gallery (those worth keeping) and hide all others.
    Well I would make a small change here. There are reports of folks getting hidden photos turning up in their web galleries. So, instead I would flag the photos I want to upload and then create an ALBUM from them. (or you can just drag and drop them to an album) Also, you can add to your gallery simply by adding to the album. Or use Ratings and/or keywords, see below
    5. Crop and adjust shadows/colors/etc.
    6. Select "Web gallery" to share the event.
    (or Album)
    You cannot change the size of images that are in the Gallery. It's a flash based "movie". Those size options refer to the file size of images that you allow folks to download from the gallery.
    Events are a really limited way of organising your pics. Events in the iPhoto Window correspond exactly with the Folders in the Originals Folder in the iPhoto Library package file (Right click on it in the Pictures Folder -> Show Package Contents).
    You can move photos between Events, you can rename Events, edit them, create them, as long as you do it via the iPhoto Window. Check out the Info Pane (wee 'i', lower left); the name and date fields are editable. Edit an Event Name using the Info Pane, and the Event Folder in iPhoto Library/Originals will also have the new name.
    That's it.
    Albums can be based on any criteria you can think of - and with Smart Albums this can be done automatically.
    For instance, back in your workflow above, you sort the pics and hide the ones you don't want to upload. Here's another way of doing it. Rate the pics, from 1 - 5, and then
    File -> New Smart Album
    Rating -> Is -> 5 stars
    will find your favourites. Best yet, it will update that album every time you rate new pics. You can base Smart Albums on a lot of criteria - date, rating, keyword, camera model... have a look and see the full list, and these can be combined together...
    The other great plus with Albums is that, because they are virtual, a pic can be in 1, 10 or 100 albums with no wasted disk space. But if a pic is in more than one Event you're dealing with a duplicate file.
    Frankly, an Event is a bucket of Pics, and an Album is a coherent organisation based on any criteria you can think of.
    Regards
    TD

  • Max number of records in MDM workflow

    Hi All
    Need urgent recommendations.
    We have a scenario where we need to launch a workflow upon import of records. The challenge is that the source file contains 80k records and it's always a FULL load (on a daily basis) in MDM. Do we have any limitation in MDM workflow on the max number of records? Will there be significant performance issues if we have a workflow with such a huge number of records in MDM?
    Please share your inputs.
    Thanks-Ravi

    Hi Ravi,
    Yes, it can cause performance overhead, and you will also have to optimise the MDIS parameters for this.
    Regarding WF, I think normally it is 100 records per WF. I think you can set a particular threshold of records after which the WF will autolaunch.
    It is difficult to say what the optimum number of records fed into Max Records per WF should be, so I would suggest a test run with 100/1000 records per WF. The Import Manager guide says there are several performance implications of importing records into a WF, so it is better to try different ranges.
    Thanks,
    Ravi

  • Better way of Sending E-Mail Notifications -- Workflow or Function Module ?

    Hi All,
    I have an implicit enhancement written in a t-code, and based on some conditions I am creating an event which in turn triggers the workflow, which in turn sends an e-mail notification via a send-mail step.
    My question is:
    Which one of the ways is better in terms of performance or overhead for sending an e-mail notification? (There are no approval processes in the workflow, just a one-step e-mail notification.)
    1) In the implicit enhancement, trigger an event which in turn triggers the workflow, and the e-mail is sent via a send-mail step.
    2) In the implicit enhancement, send the e-mail notification via the standard function modules available, such as
       "SO_OBJECT_SEND", "SO_DOCUMENT_SEND_API1".........
    Would be grateful if someone could post the advantages and disadvantages of the above 2 ways of sending e-mail notifications.
    Regards,
    PR.

    Just to throw in some additional factors, consider exception handling:
    An event-based send mail step is decoupled and thus independent of your application. This means your exception handling is separate. It means you do not need to hold up the transaction if there is a failure. All this depends on how important the mail is. You could of course add validation code to ensure the mail address is valid and send it elsewhere if not.
    Regarding performance, consider how often this happens. If it's an infrequent occurrence then I wouldn't worry about performance. Hundreds or thousands a day is a different story.
    So the answer is:
    Workflow for low-volume scenarios (low performance impact) that are important (better error handling in WF);
    Direct mail for high-volume and noncritical scenarios;
    In between these, use whatever you like.
    One more thing: perhaps consider the future. What are the chances of the mail being replaced by a work item in future? Sometimes it's only by observing a process after go-live that you can identify the best solution, e.g. you may decide to replace the mail with a "Please go fix this" work item because you need deadline monitoring.

  • Create Workflow template using WebPartPagesWebService

    Hello,
    I'm using the SharePoint web service "WebPartPagesWebService", and I used the method ValidateWorkflowMarkupAndCreateSupportObjects() to create a workflow template.
    Below is my code:
    string DefinitionFile = @"C:\Users\Administrator\Desktop\WorkflowFolder\testwf.xoml";
    string RulesFile = string.Empty;// @"C:\Users\Administrator\Desktop\WorkflowFolder\Workflow.xoml.rules";
    string ConfigFile = @"C:\Users\Administrator\Desktop\WorkflowFolder\testwf.xoml.wfconfig.xml";
    string flag = "2";
    WebPartPagesWebServiceSoapClient client = new WebPartPagesWebServiceSoapClient();
    client.ClientCredentials.Windows.AllowedImpersonationLevel = System.Security.Principal.TokenImpersonationLevel.Impersonation;
    client.ClientCredentials.Windows.AllowNtlm = true;
    // Note: the service appears to expect the XOML/wfconfig markup text itself rather than
    // local file paths, which would explain the XML parse error below; if so, the file
    // contents (e.g. File.ReadAllText(DefinitionFile)) should be passed instead of the paths.
    string str = client.ValidateWorkflowMarkupAndCreateSupportObjects(DefinitionFile, RulesFile, ConfigFile, flag);
    I get an error in the result: "Data at the root level is invalid. Line 1, position 1."
    Any idea how I can fix it?
    Many thanks

    Hi Jinming,
    If you are creating a template to allocate overhead cost to the final product, you have to use either environment 009 (Process Orders) or environment 012 (Production Orders).
    For planning, use environment 001 (Cost Estimate/Production Orders).
    I don't think environment 104 is correct. I don't see that environment in my list (ECC 5.0).
    Environments 009 and 012 are tried and tested. Use them as per your requirement.
    Please revert back for any further clarifications.
    Regards
    Sarada

  • Best practice how to retrieve & update data w/o any jsf-lifecycle-overhead

    I have a request scoped jsf managed bean called "ManagedBean". This bean has a method annotated with "@PostConstruct" that retrieves data from a database. The data is shown in a jsp "showAndEditData.jsp" in <h:inputText /> components - so the data is editable.
    The workflow is as follows:
    First, when navigating to "showAndEditData.jsp", the ManagedBean is created, the "@PostConstruct"-method is invoked, and the data retrieved from the database is shown to the user.
    Second, the user changes the data.
    Third, the user presses the submit button, the ManagedBean is created again, the "@PostConstruct"-method is invoked again, and the data is retrieved from the database again. Then the data is overridden by the changes the user made and passed to the business-tier (where it will be saved to the database).
    Every step that I marked with "*again*" is completely unnecessary and a huge overhead.
    Is there a way to prevent these unnecessary steps?
    Or, asking in other words: is there a best practice for retrieving and updating data efficiently and without any overhead using JSF?
    I do not want to use session scoped managed beans, because this would be a huge overhead as well.
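    (For reference, a minimal sketch of the request-scoped bean being described; all names are hypothetical, and the comments mark where the duplicated work happens.)
    import javax.annotation.PostConstruct;
    // Request-scoped managed bean (registered in faces-config.xml) backing showAndEditData.jsp.
    public class ManagedBean {
        // Placeholder for whatever the page edits via <h:inputText value="#{managedBean.title}"/>.
        private String title;
        @PostConstruct
        public void init() {
            // Runs on the initial GET *and* again on the postback, because a new bean
            // instance is created per request; this is the "again" overhead described above.
            title = loadFromDatabase();
        }
        public String save() {
            // By the time this action runs, JSF has already applied the submitted value,
            // overwriting what init() just re-loaded from the database.
            storeInDatabase(title);
            return "success";
        }
        private String loadFromDatabase() { return "current value"; }   // stand-in for the real DAO call
        private void storeInDatabase(String value) { /* stand-in for the real DAO call */ }
        public String getTitle() { return title; }
        public void setTitle(String title) { this.title = title; }
    }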

    The first "again" is neccessary, because after successfull validation, you need new object in request to store the submitted value.
    I agree to the second and third, really unneccessary and does not make sense.
    Additionally I think it�s bad practice putting data in session beansTotal agree, its a disadvantage of JSF that we often must use session.
    Think there is also an bigger problem with this.
    Dont know how your apps are working, my apps start an new database transaction per commit on every new request.
    So in this case, if you do an second query on postback, which uses an different database transaction, it could get different data as for the inital request.
    But user did his changes <b>accordingly</b> to values of the first snapshot during the inital request.
    If these values would be queried again on postback, and they have been changed meanwhile, it becomes inconsistent, because values of snapshot two, do not fit to user input.
    In my opionion zebhed has posted an major mistake in JSF.
    Dont now, where to store the data, perhaps page scope could solve this.
    Not very knowledge of that section, but still ask myself, if this data perhaps could be stored in the components and on an postback the data are rendered from components + submittedvalues instead of model.

  • Various Workflow Questions

    I'm new to workflow and I'm trying to figure out the best way to construct my app.
    First, what is the best method of generating the item key?
    Second, when creating a new process how are the attribute values set? CreateProcess doesn't have any method of doing this. Would I need to use SetItemAttr? How would it react if the item didn't yet exist (as createProcess would not have been called)?
    Third, what is the best practice for storing data? One could conceivably store the data in tables of their own and not use the attributes at all. They could also store it in both their own tables and the attributes (which seems redundant to me). They could also just use the attributes for storing data (as the examples in the Workflow guide seem to do).
    If I only store the data in the attributes, how would I go about presenting it to my users on various web pages?
    Perhaps the solution is to store the data in an initial table, load it into attributes during the first function, only access it in the attributes until a final decision is made, at which point the last function would take the data in the attributes and store it in the final table?
    Thanks,
    Tom

    1) Usually the item key is derived from the primary key of the application object being handled by the workflow. For example, in a requisition workflow, a good item key would be the requisition number. The item key does not have to be meaningful, though; from a workflow point of view the main point is that the item key and item type combination must be unique.
    2) Yes, SetItemAttr is the right way to set item attributes, and you're correct that CreateProcess must be called first. The best order in which to call the APIs is:
    CreateProcess
    SetItemUserKey
    SetItemOwner
    SetItemAttr...
    StartProcess
    CreateProcess and StartProcess are required; the Set... APIs are optional, if you want to set any of those values. If you do not need to set anything, you can call LaunchProcess instead, which combines CreateProcess and StartProcess in one wrapper; but note that LaunchProcess does not give you any opportunity to set any values in between creating and starting the process.
    3) Item attributes are basically meant to hold the information that the workflow process itself needs to use. There is a trade-off between the convenience of storing global process data in item type attributes and the overhead incurred by loading item type attributes when a process instance is created. So we recommend that you do not use item type attributes as a denormalized data store. Instead, always refer back to the base application to retrieve up-to-date values. Minimizing the number of item type attributes will improve the performance of your workflow process. Also, runtime workflow information is usually purged from the WF tables at some time after the process is complete, so any information that you want to access from your own application should be stored back in your own tables.
    We recommend that the following types of information should be defined as item type attributes:
    - Information required by notification messages, such as the display format of a currency value
    - Information required by function activities, such as values that link back to applications data like a person_ID
    - Information maintained by a workflow activity, such as data for activities that identify who to forward information to and from in a loop construct
    - Business event messages required by Send or Receive event activities, or event details required by Raise event activities
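    (For illustration, the call order in answer 2 sketched as an anonymous PL/SQL block sent over JDBC; the item type, item key, process and attribute names are made up, and in practice this would usually be done directly in PL/SQL.)
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    // Creates and starts a workflow process, setting values in between, following the
    // CreateProcess / SetItemUserKey / SetItemOwner / SetItemAttr... / StartProcess order above.
    // Requires the Oracle JDBC driver on the classpath; connection details are placeholders.
    public class StartWorkflowProcess {
        public static void main(String[] args) throws Exception {
            String itemType = "REQAPPRV";   // hypothetical item type
            String itemKey  = "REQ-1001";   // e.g. the requisition number
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "wf_user", "wf_pass")) {
                CallableStatement cs = con.prepareCall(
                    "begin "
                  + "  wf_engine.CreateProcess(?, ?, ?); "       // item type, item key, process name
                  + "  wf_engine.SetItemUserKey(?, ?, ?); "      // item type, item key, user key
                  + "  wf_engine.SetItemOwner(?, ?, ?); "        // item type, item key, owner role
                  + "  wf_engine.SetItemAttrText(?, ?, ?, ?); "  // item type, item key, attribute, value
                  + "  wf_engine.StartProcess(?, ?); "           // item type, item key
                  + "end;");
                int i = 1;
                cs.setString(i++, itemType); cs.setString(i++, itemKey); cs.setString(i++, "MAIN_PROCESS");
                cs.setString(i++, itemType); cs.setString(i++, itemKey); cs.setString(i++, "Requisition 1001");
                cs.setString(i++, itemType); cs.setString(i++, itemKey); cs.setString(i++, "SYSADMIN");
                cs.setString(i++, itemType); cs.setString(i++, itemKey); cs.setString(i++, "REQUESTER_ID"); cs.setString(i++, "42");
                cs.setString(i++, itemType); cs.setString(i++, itemKey);
                cs.execute();
                con.commit();
            }
        }
    }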

  • Any best practices on workflow design??

    I find it difficult to migrate applications from DEV -> TEST -> PROD.
           This is because I have created the web services in .NET.
           So, for each migration, I am supposed to change all the WSDL links in all forms and workflows.
           Currently:
              I open the processes in Notepad and replace all WSDL connections with the TEST/PROD connections.
              I open each form, go to the XML Source, and replace all WSDL strings.
           Is there any other best practice for doing that?
    Nith

    I followed a different approach for one of my projects:
    I created several form variables which hold the WSDL for each of the different servers.
    I call the web service from JavaScript code (instead of using data connections).
    This solves the problem but increases the development overhead.
    http://groups.google.com/group/livecycle/web/form%20variables.PNG
    Another way is to design all your web services within Adobe itself. In this case we will have the same host name (localhost) forever, which doesn't require any modification throughout its lifetime.
    Nith
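    (The same idea in a short Java sketch; the property key and URLs are hypothetical. The point is to keep the per-environment endpoints in one place and resolve them at runtime instead of hard-coding them into every form and process.)
    import java.util.HashMap;
    import java.util.Map;
    // One lookup for the WSDL endpoint per environment, so migrating DEV -> TEST -> PROD
    // means changing a single setting rather than editing every data connection.
    public class WsdlEndpoints {
        private static final Map<String, String> ENDPOINTS = new HashMap<>();
        static {
            ENDPOINTS.put("DEV",  "http://dev-services.example.com/MyService?wsdl");
            ENDPOINTS.put("TEST", "http://test-services.example.com/MyService?wsdl");
            ENDPOINTS.put("PROD", "http://services.example.com/MyService?wsdl");
        }
        public static String resolve() {
            // The environment name comes from outside (system property, env var, config file...).
            String env = System.getProperty("deploy.env", "DEV");
            return ENDPOINTS.getOrDefault(env, ENDPOINTS.get("DEV"));
        }
        public static void main(String[] args) {
            System.out.println("Using WSDL: " + resolve());
        }
    }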

  • Running a Windows Command From Workflow

    Hi All,
    I am working on a workflow to search for dormant accounts in AD. I have written Java code to convert a timestamp into AD format and then search AD on lastLogonTimestamp using the getResourceObjects method, but I feel it would be more accurate to use the command provided by AD to search for dormant accounts, i.e. dsquery user.
    I have a few questions around it:
    1. How can I execute it on a remote server from my workflow? Is it possible to execute an action even if there are no account-related activities?
    2. How can I use the output returned by the command or script?
    3. This command searches for users on the basis of the number of weeks of inactivity. Is there any way to do it on the basis of the number of days of inactivity?
    Any suggestions or sample code can be helpful.
    Thanks,
    Gaurav

    Note that in some cases you can use a normal
    Runtime.getRuntime().exec("program arg1 arg2");
    but in some cases the commands are built into the shell, so you have to do
    Runtime.getRuntime().exec("cmd.exe /c copy myfile here");
    "copy" is one of them, and "echo" is probably one too. The latter form should work for all programs and commands, but might involve some overhead, as a new instance of the shell is created and destroyed for every command.

  • Workflows and OWB 10.1

    We have run into a brick wall here and need some advice on how to proceed:
    Our DW has >50 fact and dimension tables; for each table we have a mapping that loads the table from an external table and a process flow (workflow) that:
    - looks out for the flat file generated by the extractor process. The extractor processes are written in Cobol and are executed outside OWB.
    - renames the flat file and validates its contents
    - executes the mapping
    - performs error handling (moves the file, send e-mail etc.)
    All the process flows are independent of each other and can be executed at any time.
    One of the requirements we have is to support data loads every N minutes, where N is in the range of 2 minutes to 1 month. This means each process flow can be executed very frequently, and this is where we run into problems.
    The issue we are seeing is that the OWB RTP service hangs every few hours. This causes the session that executes the process flow to hang. When we look at the log for RTP service 10.1.0.2, we see RPE-01003 (Infrastructure error caused by Closed Statement - next). Inside the Workflow Monitor, we see the activities in COMPLETE state and see no error. This can happen for any activity of the process flow.
    We have been working with Oracle Support to resolve this and have tried various things including upgrading to OWB 10.1.0.4 without any success.
    Lately, the message we have received from Oracle Support is that what we are doing here is unusual, i.e. the process flows are being executed too frequently and the number of process flows is too high. As per them, the OWB-Workflow interaction has an overhead that causes this error.
    We disagree, as we get this error even when the jobs are executed at a low frequency; obviously, the occurrence is then quite infrequent. We think the error happens regardless of the frequency, so Oracle should look at it as an issue. Also, if it is an overhead issue, we need to know what kind of resource issue we are running into and how we can avoid it.
    What we would like to know is:
    - Are we doing something unusual here? If yes, how can we change our process flow design to avoid this?
    - Has anyone else run into this issue? If yes, how did you workaround it?
    - Can we depend on OWB RTP and Workflow to be fail proof? If no, then how can we recover from the failures?
    Any relevant information would be helpful.
    Thanks,
    Prashant.

    We had this issue, and Oracle Support wasn't able to produce any stable solution. I tried stopping the OWB service and starting it again, which went quite well. You do not need to bounce the DB for this.
    Approximately how many processes do you have? I guess you can reduce the staging area (external to Oracle) to 1 or 2 process flows, with the fact & dimension flows based on the business rules. For what it's worth, I had over 40 processes for a DW with OWB 9.2.0.2.8.
    Verify v$resource_limit.
    If you want, you can try the following: from the Workflow repository (owf_mgr by default), exec wf_purge.total,
    then run wfrmitt.sql, which will remove everything you pass to it (almost clearing out the WF repository, if you pass all).
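    (For completeness, checking v$resource_limit is just a query; a small JDBC sketch with placeholder connection details:)
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    // Prints current vs. maximum utilisation for processes and sessions from v$resource_limit.
    public class ResourceLimitCheck {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "password");  // placeholders
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "select resource_name, current_utilization, max_utilization, limit_value "
                   + "from v$resource_limit where resource_name in ('processes','sessions')")) {
                while (rs.next()) {
                    System.out.printf("%s: current=%s max=%s limit=%s%n",
                            rs.getString(1), rs.getString(2), rs.getString(3), rs.getString(4));
                }
            }
        }
    }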

  • Sizing for Workflow

    Hello Gurus,
    I need information on sizing workflow for a customer who needs to activate a financial document workflow with a document attachment.
    This customer creates 5,000 FI documents daily, each document with one attachment.
    Regards,
    Carlos

    This is the answer from the sizing PM:
    "Hi there,
    In general workflow itself is not sized, but the business application behind the workflow.
    The overhead caused by the workflow is negligible.
    Best regards,
    Susanne"

  • Anyone care to write up a workflow solution using referenced images?

    So, I'm having a little bit of a hard time figuring this out, but I think I got something. I'd like others to share their thoughts and opinions about how to get a good referenced workflow going. Maybe we can come up with an efficient method of working on, say, two machines with only one folder of images and a way to archive them efficiently too.
    I'll post what I think I can now do with Aperture 1.5.
    1 - I shoot a bunch of shots, in the field, and download them to my Powerbook
    2 - I rename the raw files in the finder (my preferred choice) and save them out to an external hard drive
    3 - I fire up Aperture on my laptop and import those pix from my external hard drive as referenced images. This does a couple of things: One, it keeps my internal laptop hard drive from filling up and, two, it will allow me, when I get back to my fast killer desktop, to transfer, or work on, the images from this external hard drive.
    4 - I doodle away, in Aperture, on my laptop until I'm happy.
    5 - When finished, I can export a copy of the project I just worked on to the external drive, but make sure that the "consolidate images" option is off (no need to have 2 copies of the RAW file on the same external hard drive). This will give me, on the external drive, a project with all the adjustments (and preview images) along with a folder containing the RAW files. I'll also have a copy of the project on my laptop hard drive as a backup (I suppose I'd have a backup of the raw files on a burned DVD or another hard drive too).
    6 - I get back to my desktop and plug in the external hard drive.
    7 - I import the project from that external hard drive. Since the RAW files are still on the hard drive, there's no problem with the project finding the actual files.
    8 - But, I do not want to work the files from the external hard drive (speed is an issue). So I copy the folder of RAW files to my internal hard drive and delete the RAW files from the external drive.
    9 - Aperture tells me (through the badges underneath the thumbnails) that the RAW images are no longer connected to the project.
    10 - I select all the images in the project and right click (or control click) and choose from the contextual menu, "Manage Referenced Files."
    11 - From the dialog box that appears, I navigate to the internal hard drive where I copied the RAW files to and choose to "reconnect all images." The badges under the thumbnails update to let me know that the images are back online.
    12 - I futz with the images some more and do whatever else I need to do.
    13 - Done. Job finished. No need to have access to RAW files anymore, although I want to keep preview images in Aperture for websites, email, etc.
    14 - At this point I think I can do several things. I can export the project and "consolidate images into exported project," and archive that exported project to whatever medium I use for archiving. Or I can just archive the folder of RAW images and eventually delete them off my internal drive. This last option will leave me with preview images inside Aperture (along with badges that tell me the originals are offline). I can back up that project using the vaults method that comes built into Aperture. That way I'll have several copies of the project (with the JPEG previews) and offline RAW files (also copied and archived) that I can reconnect at anytime I want to in the future. This will help keep my Aperture library smaller and more manageable with plenty of backups.
    How does that sound?

    I tried out something like your flow, though I attempted to let Aperture do the intake step straight into the Aperture library on the laptop and then work from a portable drive by exporting the consolidated project to it (I tried my iPod as a temp drive, which worked just OK).
    The 17-inch G4 (low-res) is pretty marginal for Aperture. Working with a few images was just OK, but I really don't think it's up to a couple of gigs of NEF files. What I wanted was to accomplish the same goal: a reference set of pictures on the working hard drives, with the originals offline and archived in a couple of places.
    But after trying this, I'd say your scheme of importing to the portable drive, working in the Finder, and renaming with ABetterFinderRename prior to any import is a better one. The more I can hold down Aperture's processing overhead the better. The other thing I plan to try is to use iPhoto for the first look. No messing with the images, but I can look at them and toss out the garbage, do some tagging, and then, on the G5, let Aperture import the resulting file structure from the portable hard drive and carry on with your scheme.
    Once the images are to my liking in aperture I can export a finished set for iPhoto on my Laptop for emails and etc.
    After messing with this for awhile, I don't see anything in your workflow that is not going to work.
    I imagine as time goes on that lots of people will be going through all these steps. I hope the designers can figure out some simplifications. A media manager in Aperture, much like the one in Final Cut Pro, would be welcome for much of this. Or droplets or buttons that take care of the more tedious bits of the workflow would be welcome.

  • Custom workflow steps in BCC / Prepending hostnames to images in media text

    We need to be able to prepend hostnames to image paths that are inside media internal text (MIT) items.
    We have MITs that contain raw HTML from a 3rd party that get imported into Merchandizer.
    Some of this HTML has <img> tags or references images in inline <style> blocks, in the form of <img src="/foo/bar/xyz.png" alt="my image" /> or <style> .img { background-image: url(/foo/bar/xyz.png) } </style> respectively.
    We need to be able to programmatically prepend imgX.mydomain.com to the image paths (where X is calculated by an existing component, depending on some custom logic, to point the relevant images to the correct web server cluster). Also, the 'img' portion may be 'imgprev' or 'imgstg' depending on the environment, e.g. img1.mydomain.com/foo/bar/xyz.png for production and imgstg1.mydomain.com/foo/bar/xyz.png for staging.
    We also need to be able to strip out the domains when the MITs are exported from Merchandizer.
    My initial thoughts were to create a servlet filter with a custom response wrapper to do this, but this is additional overhead at request time we'd rather not have.
    Is there some way we can create a custom workflow step to do this task, where the step would know which environment it was publishing to and programmatically prepend the necessary host to the image paths?
    Any other suggested approaches welcome.
    We are on 9.4.
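    (Setting the custom-step question aside, the rewrite itself is plain string work; a minimal Java sketch, with simple regexes and the host assumed to come from the existing environment-aware component:)
    import java.util.regex.Pattern;
    // Prepends a host to root-relative image paths in MIT HTML, covering the two forms above:
    // <img src="/..."> and url(/...) inside <style> blocks. strip() is the export direction.
    public class ImageHostRewriter {
        private static final Pattern IMG_SRC = Pattern.compile("(src=\")(/[^\"]+)");
        private static final Pattern CSS_URL = Pattern.compile("(url\\()(/[^)]+)");
        public static String prepend(String html, String host) {
            // "//" keeps the result scheme-relative; adjust if an explicit scheme is required.
            String withImg = IMG_SRC.matcher(html).replaceAll("$1//" + host + "$2");
            return CSS_URL.matcher(withImg).replaceAll("$1//" + host + "$2");
        }
        public static String strip(String html, String host) {
            return html.replace("//" + host, "");   // naive reverse for the export case
        }
        public static void main(String[] args) {
            String mit = "<img src=\"/foo/bar/xyz.png\" alt=\"my image\" />"
                       + "<style> .img { background-image: url(/foo/bar/xyz.png) } </style>";
            // "img1" / "imgprev1" / "imgstg1" would be chosen by the existing component per environment.
            System.out.println(prepend(mit, "img1.mydomain.com"));
        }
    }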

    http://www.adobe.com/cfusion/mmform/index.cfm?name=wishform

  • FCP X to Vimeo colour workflow (plus vectorscope question)

    Two disclaimers first off:
    1. I know there has been discussion of gamma/colour/contrast shifts in videos uploaded to Vimeo for years eg:
    https://vimeo.com/forums/topic:46866
    https://discussions.apple.com/thread/3627103
    http://www.cinematography.com/index.php?showtopic=56262&st=0&gopid=372768&#entry372768 (post #6)
    2. I'm not an expert on the technicalities of colour editing and realise there are issues with my workflow that are probably to blame for at least some of the problem I'm encountering.
    So....
    I edit my footage in FCP X and get it looking how I want in the viewer (probably a mistake, but I now have a project looking how I want, and I need to figure out how to compensate for Vimeo uploading). I export with the x264 encoder via Compressor, and after uploading, the image is darker, less saturated, and has a slight colour shift. Below is the Vimeo result on the left and the original, how I want it, on the right.
    What differences do people see, and how would anyone suggest making alterations to the pre-upload version to compensate for them?
    I've tried making gamma and brightness/contrast alterations with Compressor filters, which gets it almost there, but there's still something about the FCP X image which is just more vivid. What is that?
    What is represented by the density/thickness of the vectorscope marks? Here is one for the original, then one for the Vimeo screenshot re-imported into FCP (is that a stupid thing to do?)
    All advice welcome and appreciated. Thanks!

    There is no "one size fits all" solution, of course.
    Nevertheless, I think the idea of creating a library for each "production" is a sound one.
    If you have material that you tend to reuse, it may also be a good idea to keep a library for this kind of thing.
    FCP X ensures that there is no interlibrary dependency - so, for example, if you use a clip from library A in a project that sits in library B, it will be copied over.
    If you fear that this will start filling your drive with many copies of the same thing, there is a solution for that, too. It is called "external media". FCP X makes it easy to organize this stuff: when importing, you get to choose where you want the files to be copied (into the library itself or somewhere else; or even to "keep in place" if the files are already on your drive).
    External media is never copied (nor, it must be stressed, is it deleted!) by FCP X, it simply works with symlinks (which are like aliases, very small files) that point to it. So you could reuse the same file in many libraries with minimal overhead.
