Process Version Control

What are most people doing to manage their process version numbers in both a DEV & PROD environment?
Because an LCA import follows the rule "A process with the same name and version is considered a duplicate and is not imported", it is of course important to make sure any changes to a process in a DEV environment are made on a version number that is different from the one in PROD.
That suggests three main methods.
1. Easiest in tracking terms: every process change starts with creating a new process revision. If this is the rule, I don't care what the process version is on PROD; the latest process on DEV will always be higher. The side effect is that I could create a TON of unnecessary revisions on DEV, with all the headaches of removing old revisions and checking whether any of them are RUNNING.
2. Whenever an LCA is imported to PROD, I go back to DEV and create revisions of all the processes that were just imported. This ensures that any changes are made against a revision that can be imported, and I don't have to constantly check whether a revision is required. This would take some discipline to enact.
3. I keep a version table in an XLS that tracks the top revision on both DEV and PROD. If I want to update a process, I check the table to see whether I need to create a new revision. This would require checking the table on every process change.
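A quick sketch of the version-table idea, assuming the table is exported to a simple CSV with columns `process`, `dev_rev`, `prod_rev` (the file name and column names here are my own, not anything LC provides):

```python
import csv

def needs_new_revision(table_path: str, process: str) -> bool:
    """Return True when the top DEV revision is not already ahead of PROD,
    i.e. the next change must start by creating a new DEV revision."""
    with open(table_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["process"] == process:
                return int(row["dev_rev"]) <= int(row["prod_rev"])
    raise KeyError(f"process {process!r} not found in version table")
```

The same check could live in the XLS itself as a formula; the point is just to make the "do I need a new revision?" decision mechanical instead of relying on memory.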
My other thought was that I could export the process XML for both the DEV and PROD versions and do a diff. I was hoping there would be no differences but, in fact, I had a lot of processes whose only difference was the ID on some of the variables. I am not totally sure why a process imported from an LCA would have different IDs on DEV and PROD, but I can guess it has to do with how LC identifies all this information in the database.
So my thinking is that if the only difference is the ID of some of the variables, the processes are in fact identical.
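That normalize-then-diff idea is easy to automate. A minimal sketch, assuming the generated attribute is literally named `id` (adjust `ignore_attrs` to whatever attribute names the export actually uses):

```python
import xml.etree.ElementTree as ET

def normalized(xml_text: str, ignore_attrs=("id",)) -> str:
    """Serialize the XML with the given attributes stripped everywhere,
    so two exports that differ only in generated IDs compare equal."""
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        for attr in ignore_attrs:
            elem.attrib.pop(attr, None)
    return ET.tostring(root, encoding="unicode")

def same_process(dev_xml: str, prod_xml: str) -> bool:
    """True if the two exports match once generated IDs are ignored."""
    return normalized(dev_xml) == normalized(prod_xml)
```

Note this compares the re-serialized trees as strings, so it ignores only the listed attributes, not differences in element order or whitespace.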
Any thoughts?

Hi Laura,
There are two ways you can map your requirements into the system.
Firstly, as you have mentioned, use different document statuses. To set controls on documents, use authorization object C_DRAW_TCS. This object controls which users can process which document info records, based on a combination of activity, document type, and status.
Secondly, you can go with Change Masters (Engineering Change Management). This will give you complete control of document creation, changes, release, etc.
/Tilak Raj

Similar Messages

  • Process Version 2010

    I'm just curious how many other people out there are like me and still use Process Version 2010?  I find that I cannot achieve the same quality results using PV 2012 - those controls simply do not offer me the flexibility and overall tonal control that I have with PV2010.  I shoot and edit fashion photography professionally, and I have very specific needs regarding tonality adjustments. I find I am reliant on the Recovery, Fill Light, Blacks and Brightness sliders in order to fine-tune lighting falloff on skin and clothing.  I have tried numerous times to use PV2012 but the results are far inferior to PV2010 for my needs.  Anyone else out there feel the same?  In case you're wondering, you can see examples of my PV 2010 work on pretty much all the photos on my website (davidwalden dot com). I just can't get these results with PV2012.
    Out of curiosity, is there a technical reason why Adobe can't combine both process version controls, and offer all sliders from both versions in one layout?
    thanks,
    d

    trshaner wrote:
    Combining PV2012's and PV2010's controls would be like adding a 10-band audio frequency equalizer to your sound system and then still using the Bass and Treble controls. They simply wouldn't play (pun) together very well and would make PV2012 even more difficult to use.
    That's one way of looking at it, but I don't think the difference is that straightforward. It's more like they've replaced some controls with other controls that they deem "superior", but honestly the two sets of controls yield completely different results.  Based on the results that you want, you might prefer one set of controls over the other.  I definitely do not think that having both sets in one interface would make it more difficult - it would be the best solution.  That would be the 10-band equalizer you mentioned.
    Highlights and Whites is supposed to be a more refined version of Recovery. I get that, and probably in some situations it is superior (architecture and nature photography). Btw, I looked at the Siggraph paper and yes, once again they are using architecture and naturally lit environments to show how much detail can be recovered.  Recovering detail, in a very basic sense, is only one (limited) use of a photo editing toolset.  Another (more important imho) use is controlling color and contrast on human skin at different tonality ranges.  This is my primary concern during photo editing, and architecture and nature photo editing does not address these concerns. I find that the contrast that I get in highlight regions using Recovery gives me better results. Also, I use this in combination with the ToneCurve - Lights and Highlights sliders - to tweak the highlight ratios.
    It is interesting that the 2012 Basic controls nearly mirror the ToneCurve sliders - Highlights, Lights, Darks and Shadows.  Now we have 2 sets of controls in 2012 - Basic and ToneCurve - that affect similar regions.  Is this redundant at all?  Perhaps.  I find that the ToneCurve and 2010 Basic controls actually complement each other better, because they have distinct functional differences.   For example, the Blacks slider is more akin to setting a black point using a Levels adjustment in Photoshop.  This is different from using the Shadows slider in the ToneCurve, and different from setting the ToneCurve black point manually - these things yield different results. From my experience it is better to use the Blacks slider for this, and then tweak the value using the ToneCurve black point manually (although I typically use the manual ToneCurve points for adjusting contrast, not for setting black points - that's why an additional Levels adjustment widget would be a superior solution imho).
    Also, getting back to my earlier point about contrast at different tonalities, one thing I have historically used Blacks and Fill Light for in PV2010 is adding/controlling *mid-tone* contrast.  Increasing these together has the effect of increasing contrast in mid-tonal regions where previously no contrast was perceptible. This is an important part of the color correction process. I have tried to replicate this effect in PV2012, but the mid-tones lose the contrast and detail. Adjusting the PV2012 Shadows and Blacks sliders from my experience produces "smooth" results, but not "desirable" results. Meaning that the darker tonalities are lifted, but the contrast and detail are lost.
    I will try to post one or more examples showing some of these concerns.  I appreciate all the feedback you guys give.
    -d

  • OWB Process Flow - What is the best version control tool?

    Hi all,
    I have just started working with OWB and I have a question about the best way to do something.
    Imagine the scenario below, where I have two or more requests, for example:
    Request 1: Create a Dimension City.
    Request 2: Create a Dimension Products.
    I have ONE process flow and I need to put my changes inside it. This is my problem.
    In my scenario I don't know which request goes to PROD first.
    If I put Request 1 and Request 2 in my PROCESS FLOW, maybe I need to change it if someone decides to change MY REQUEST PRIORITY.
    Is there something in OWB to "control the versions or changes"? For a mapping I export the MDL and commit it to SVN, but I don't know how I can do this with the process flow.
    Is there something that would let multiple people work on different mappings and the SAME PROCESS FLOW?
    What is the best way to work with process flows and version control?
    What are the best practices when it comes to version control?
    Thanks.

    Amit,
    Are you really doing this in 10.1.3.x and not 11g?
    At any rate, I don't see how #2 and #3 relate whatsoever to your choice of a version control system. OK, maybe in #2 if there is some "maintenance" activity to be done against the version control server. Subversion is the open source alternative that you listed there and is pretty commonly used. If your company is already using one of the mentioned tools, why change? About the only thing I'd mention is to advise you NOT to use CVS for well documented reasons (JDev does support it) - if you would have picked CVS otherwise, choose Subversion. As far as question #1 - I've only used Subversion (well, I did use CVS for a while) with JDeveloper, so I can say it was "effective enough for me." In 10.1.3.x, I also used the external svn tools for doing lots of things like merging and so forth; in 11g, the support is much much better.
    Best,
    John

  • Version control for word processing

    Hi, all.
    I've never used any kind of version control system (like git or cvs or anything), but now I think I have a use for one.
    Basically, my point is that I write documents mostly in markdown/xhtml (perhaps I'll start using LaTeX one day, but that's another story), so I only use a text editor and not a word processor. So I think that a simple revision control solution would provide me with a feature that many word processors have: the ability to track all changes and revert to any past state of the document.
    So what would you suggest? Something simple, lightweight and smart. I'm not gonna use it for coding, or even publish it anywhere, so my requirements are rather humble.

    The only problem with using a version control system with a text editor is that you have to either set it up to commit automatically on every save or remember to commit every few minutes yourself.  Word processors, especially docs.google.com, are very good about saving revisions for you.
    So look at the DVCS list here and pick one (mercurial, git, and bazaar are the more popular ones) and think of a good system for auto-committing.  The day you realize you forgot to commit a version before a major change you will kick yourself for not having a good system in place.
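    The "auto-commit" part only needs a couple of standard git commands. A rough sketch, assuming a plain git working copy (the function names are my own):

```python
import subprocess
import time

def commit_if_dirty(repo: str) -> bool:
    """Commit all pending changes in the git working copy at `repo`.
    Returns True if a commit was made, False if the tree was clean."""
    status = subprocess.run(
        ["git", "-C", repo, "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    )
    if not status.stdout.strip():
        return False  # nothing changed since the last commit
    subprocess.run(["git", "-C", repo, "add", "-A"], check=True)
    subprocess.run(
        ["git", "-C", repo, "commit", "-q", "-m",
         time.strftime("autosave %Y-%m-%d %H:%M:%S")],
        check=True,
    )
    return True

def autocommit_loop(repo: str, interval_seconds: int = 300) -> None:
    """Poll forever, committing whenever something has changed."""
    while True:
        commit_if_dirty(repo)
        time.sleep(interval_seconds)
```

    An editor save-hook calling `commit_if_dirty` directly would give per-save granularity instead of polling.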

  • How do I fix project after "Remove From Version Control" corrupted it?

    I am using RoboHelp 9.0.1 and installed both Tortoise SVN 1.6.9 and latest PushOK SVNSCC then added my large RoboHelp project to SVN. I was able to check in and out files from SVN but had several issues with it:
    1) Super super slow. Working with folders or any renames would take 10 seconds per file and up to 1 hour if needed to refresh the root folder.
    2) I could not perform some actions at all, such as delete, rename, or move folders. I kept getting COM errors.
    I therefore decided that working with SVN and RoboHelp is not practical, at least not on my VPN so I decided to disconnect the project from source control and just work locally. The only option that I saw that sounded like it would do that was the "Remove from Version Control". This started a process that lasted for several hours. At the end of it, I now have several significant issues:
    1) The order of the files and folders in my Project Manager is completely wrong now. I have almost 1000 topics and reordering all of them is not possible.
    2) The Table of Contents, Glossary, and Index files appear empty. They had content before.
    3) A couple of the Single Source Layouts I had created are completely missing.
    4) Many, but not all, of the folders have tons of files with names ending in "_temp_removed_by_svn".
    5) Many, but not all, of the files are actually gone from SVN, so I can't recover a clean image. There was no warning that this command would actually delete the files from SVN (I thought it would just remove the version control connection).
    6) Who knows what other issues exist that I haven't seen?
    Any idea how I can fix this?
    Thanks in advance,
    Dan

    Are the "_temp_removed_by_svn" files in your local folder or SVN? Let us know how you get on with the new project. It sounds like something is wrong with SVN. Can you use the SVN Log command to see whether there is a different version you can restore? This might also give you an indication of what might have caused the problem. You could try deleting your CPD file. It gets rebuilt if it isn't there anyway. This file can become bloated, and it is good practice to delete it when it gets close to 2 MB in size. Your project is fairly large and has a lot of folders, which may affect performance. Have you considered splitting it into smaller projects and merging the output? I know you probably don't want to consider this right now, but I think it may be a better long-term solution.
      The RoboColum(n)
      @robocolumn
      Colum McAndrew

  • Regarding version control

    Hi,
    Could you please help me out by giving some ideas about version control in SAP?
    First let me give an example as follows:
    If I develop something in the development server and later transport it to the QA server and then to the production server, is there any change in version?
    Please give me details on this issue.
    Thanks,
    Batista....

    hi priya,
    Version Control
    Version control is a mechanism that helps maintain the revision history of a development resource and track the changes done to it. It defines a set of constraints on how a development resource can be changed. A development resource that complies with the constraints defined by the version control is called a versioned resource. When a versioned resource is modified or deleted, a new version is created for the resource. A unique sequence number is associated with each version of the resource created in a particular workspace. This sequence number identifies the order in which the versions were created in that workspace. The DTR graphically represents the relationship between the different versions of a versioned resource in the form of a version graph.
    The following changes are tracked by the version control mechanism of the DTR:
    ●      Addition of the resource to the repository
    ●      Modification of the resource in the repository
    ●      Deletion of the resource from the repository
    In all the above cases, a new version of the resource is created.
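    The mechanism described above can be sketched as a tiny model (the class and field names below are illustrative, not DTR APIs):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Version:
    seq: int                           # creation order in this workspace
    content: Optional[str]             # None marks a deletion
    predecessor: Optional["Version"] = None

class VersionedResource:
    """Toy model of DTR-style versioning: every addition, modification,
    or deletion creates a new Version carrying the next sequence number."""
    def __init__(self, name: str, content: str):
        self.name = name
        self.head = Version(seq=1, content=content)   # addition

    def modify(self, content: str) -> None:
        self.head = Version(self.head.seq + 1, content, self.head)

    def delete(self) -> None:
        self.head = Version(self.head.seq + 1, None, self.head)

    def history(self) -> list:
        """Sequence numbers, newest first (a version graph collapsed to
        one line, since this sketch has no branching or merging)."""
        out, v = [], self.head
        while v is not None:
            out.append(v.seq)
            v = v.predecessor
        return out
```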
    Production Delivery
    Packaging
    To deliver your product, you have first to package it. There are different packages you can use for shipping your product to your customers:
    ●      Software Component Archives (SCAs) – this is the standard way to deliver software for the SAP NetWeaver platform.
    ●      Software Deployment Archives (SDAs) – for top-level applications you can deliver only the executable part of the software. You can directly deploy the SDA file.
    ●      Public Part Archives (PPAs) in Development Component Interface Archives (DCIAs) – for reusable components (Java EE server libraries, Web Dynpro components, Visual Composer components and so on). You can deliver only the metadata of the components. DCIA can be included in SCA file too.
    How to do that?
    Using the command line tool provided with the SAP NetWeaver Composition Environment you can:
    ●      package a collection of components into an SCA including only the deployable archives. This is required if you do not want others to reuse the delivered components.
    ●      package a collection of components into an SCA including the deployable archives and the corresponding interface archives. This allows customers to develop against these components. Those customers can directly import the SCA into their own SAP NetWeaver Development Infrastructure (NWDI) or into an SAP NetWeaver Developer Studio local installation.
    ●      package the public parts of a component together with the required metadata into a DCIA (and further into an SCA).
    ●      include source code into an SCA.
    ●      unpack a deliverable archive and drop the result into an existing version control system for example, or directly import them into an existing Design Time Repository (DTR).
    Delivery of Source Code for Further Customization
    In addition, you can deliver source code to your customers to allow further customizing or add-on development. The deliverable archive may contain sources for:
    ●      individual development components (DCs).
    ●      a collection of development components, for example a whole software component (SC).
    Example
    A customer can add a new source compartment to an existing configuration, and then locate that compartment in the file system where it is accessible by the version control system in charge. Then he or she extracts the sources with the command line tool to the compartments root directory and refreshes the configuration in the SAP NetWeaver Developer Studio. The compartment tree is populated with components from the archive. Afterwards, the customer may put those components under version control. Deliverables that contain only individual components may be treated accordingly.
    This mechanism may also be used for other purposes, for example for setting up a simple backup and restore mechanism for components in Developer Studio, or sharing DC sources without having a central version control system: a developer may pack a compartment and store the resulting SCA on a central share or backup system. Another developer may take that SCA and import it.
    Limitations
    Note the following limitations connected with this kind of source code delivery:
    ●      There is no support for handling conflicts when different actors in a delivery chain develop independently in the same source code. You cannot prevent the customer from modifying delivered sources. When you ship a new version of the sources, there is no special support for updating and no support for merging the update with modifications done by the customer. You and the customer have to agree on a process how those conflicts are handled. For example, the customer can decide not to import the update you deliver directly into the active development line, but to unpack the delivered sources to some unconnected sandbox system and perform the required merges manually.
    ●      When you deliver source code to customers, it is important that you also deliver the required libraries and generators that are needed to build these sources. For example, it may be necessary to ship some archive compartments that contain used components.
    ●      There is no support for delivering deletions in a new version. If a source file was deleted, the customer has to manually ensure that the file is also deleted in the Developer Studio or source code management system.
    ●      If a customer prefers to work with the SAP NetWeaver Development Infrastructure (NWDI), this customer cannot directly import the source delivery package into the NWDI landscape. Between NWDI landscapes at different places, sources usually are exchanged through a more sophisticated export format that contains not only the pure source code, but also the versioning meta information of the exporting DTRs. This ensures that the importing repository can detect conflicts that arise due to modifications. If this versioning information is not available, the only way to import source deliveries is to unpack them to a file system and manually put them under version control with the Design Time Repository perspective of the Developer Studio. In case of an update, the customer would have to check out all affected files, merge them with the new versions from the source delivery, and finally check them in as a new version.
    More information: Composition Environment Command Line Tool
    see this url
    http://www8.sap.com/businessmaps/0134713B1D6046C59DE21DD54E908318.htm
    thanks
    karthik
    reward me if useful

  • Application size in terms of pages and performance and Version Control

    Currently I'm looking into the best way to version control our APEX applications. From other threads, it seems it's an area that leaves much to be desired. We are on the verge of distributing a large APEX project commercially, but I cannot find a suitable versioning method to support bug fixes and new development happening at the same time on the same set of applications. I just hope everyone out there realises versioning is a vital area of the development process and VOTES for it in the V3 poll.
    Anyway, enough of my whinging. I did have a brain wave (quite rare!). What would the drawback be to having only 1 or 2 pages per application? This would allow a developer to always import the application at the start of work (ie from versioning software such as VSS or PVCS) and then export it at the end back into, say, PVCS. The application has everything self-contained, correct versions, etc.
    This would allow more developers to work on different areas at the same time, as opposed to having many pages in the one application where developers could step on each other's toes. I've considered importing/exporting pages, but the fact that you cannot lock shared objects means there is a possibility that if many developers are working on the same application, someone will change something that affects pages other than their own. It would also be a nightmare to tie up different versions of pages, shared objects, applications, etc. Would there be a performance problem with this method? Incidentally, why can't TABS be shared/subscribed across applications? It means they have to be created separately in each application, whereas things like Nav Bars and Templates can be shared across applications.
    Currently, my thoughts are that: bug fixing for a production release has to be in a separate stream (apex installation) from say new development work for the same set of applications BUT what this means is that the bug fixes have also to be manually applied in the new development stream - which is a considerable overhead (ie twice the work).
    Thanks for hearing me out - assuming you survived to the end !
    Any encouraging comments would be appreciated !

    Wim,
    I don't entirely understand the behavior. There should be little/no difference between the two cases. I'm assuming you have no indexes, which isn't recommended for such large containers anyway. Can you make your document set available to me so I can see if I can reproduce the behavior and look at it more closely? A single container, or dbxml_dump of a container is sufficient (both compress well).
    Contact me directly at george dot feinberg at you know where.
    George

  • Version Control for adobe forms

    Dear All,
    We have a situation where we have already gone live with version 0 of our Adobe form. Now we have an enhancement to the form, i.e. some new functionality and a new field. Previously we tried to transport the new changes to production without any version control, and processes that had already been started prior to the new changes encountered errors when the user continued the process.
    I am wondering how the version control works, and is there any documentation on how to configure it?
    Thanks in advance
    Regards,
    Bryan

    Hi Brian,
    Here is some information that I found in the IMG on Create ISR Scenario...
    Create ISR Scenario
    Use
    In this IMG activity, you create an ISR scenario that has a one-to-one relationship with a form scenario. To be able to use a form scenario in a process, one ISR scenario must exist for each form scenario. The ISR scenario and the form scenario must be linked with each other. You make this setting in Customizing for HR Administrative Services in the IMG activity Link ISR Scenario with Form Scenario.
    In the form scenario, you define primarily the basic set of form fields and their processing through the backend services. In the ISR scenario, you specify the definition of the user interface. You also specify which form is used for the display and how the layout of this interactive form is designed. You use interactive forms based on Adobe software to create and process the forms.
    ISR scenarios and form scenarios are version dependent. The version numbers of the ISR scenario are assigned automatically. Note that a form scenario must have exactly the same version as the linked ISR scenario. For this reason, you should always create a new version in the ISR scenario first and then use the same version number when you create a version in the form scenario manually.
    Note
    If an ISR scenario or form scenario (with an existing version) has already been used in a productive process, you should not change the configuration. If you want to make changes to a process or an ISR scenario or form scenario, you should always create a new version, and only ever use that new version in the future. In this way, processes that have been started can be concluded with the old version and new processes can be started simultaneously with the new version.
    This is from the Create Form Scenario documentation...
    Create version
    Form scenarios are version dependent, which means that there is at least one version of each form scenario. Versions are linked with processes. Since processes can vary, you must also be able to adjust the associated scenarios. To be able to provide different forms for process variants, you create versions.
    You can still process and change an existing version at a later point in time. Once a version has been used to execute a process, you should not make any more changes to this version; instead, you should create a new version.
    The form scenario and the (linked) ISR scenario are both version dependent. They must always have exactly the same version numbers. Note that the version number of the ISR scenario is generated and cannot be entered manually. When you create a new version in the form scenario, you therefore have to use the version number generated in the ISR scenario.
    If you have already made extensive Customizing settings for the form scenario and want to create a new version based on the settings, you should use the IMG activity Manage Form Scenario.
    Hope this helps...
    Cheers,
    Kevin

  • Version Control of PL/SQL programs in Database

    Is there a version control feature available in Oracle v9.x?
    I am trying to implement version control on PL/SQL programs (packages/functions/procedures). I should be able to roll back to an old version and keep the system running if the latest one fails. This should happen automatically, without bringing the database down or recompiling the PL/SQL programs, and users should not need to reconnect (users might be caught in a LOCK and might get their sessions killed by the DBA...).
    Ex: I have heard that in .NET, you can have more than one version of a DLL and have only one version active. If you want to go back to the old version of the DLL, you don't need to recompile it to make it active. I am looking for something similar to this...
    I have thought of several ways, like creating a small repository table for my PL/SQL programs, storing the PL/SQL code in the repository and, based on the situation, compiling only that program (which might be an OLD or a NEW version), and so forth... But this does not satisfy the requirement of rolling back to the old version without recompiling. (RENAMING a PL/SQL program doesn't seem to be implemented yet...)
    I don't want to use Designer just for this purpose...
    Any ideas..
    Thanks,
    Purush

    Are you dealing with code that's being called remotely (i.e. via a database link)?
    No
    I'd be concerned about the concept of rolling back code changes in production on a regular basis-- that would seem to indicate problems that ought to be addressed in development and QA. Rolling back code in production seems like it ought to be a rather painful process, if only because it indicates a massive failure elsewhere.
    This is not on a regular basis at all. Our database applications are tied to lots of programs which directly control the robots and machines. Certain machines and robots need to be working all the time, and any downtime costs time and money. To make sure that an implementation goes into production smoothly (without shutting down machines and robots) and then into maintenance mode, we are looking for some kind of source control to control the implementation and to make sure we can revert (without shutting down machines and robots) if there are major issues. (There are certain things here which cannot be tested outside of a shop floor due to physical and other constraints.)
    I have thought of other ways, like a compile flag (in the DB, e.g. a packaged variable) that is set before compiling and reset after compiling. The programs on the shop floor would always read this flag and check the buffer (the time taken to do this will have to be considered) before calling a DB txn; if the flag is set, they buffer the txn and move on to the next task the machine should do. The next time a txn is called, if the flag is reset, the program checks the buffer and, if buffered txns exist, executes them first and then proceeds to the actual txn. The things that bother me are the time taken to compile the huge package, the number of txns getting buffered, and the overall txn time.
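    The buffering scheme you describe can be modelled in a few lines (Python here just to show the control flow; the flag read and the txn execution would really be DB calls):

```python
from collections import deque

class BufferedTxnClient:
    """Toy model of the compile-flag idea: while the DB-side flag is set
    (package being recompiled), transactions are buffered locally; once
    the flag is clear, the backlog is drained in order before new work."""
    def __init__(self, read_flag, execute):
        self.read_flag = read_flag   # callable -> bool: recompile in progress?
        self.execute = execute       # callable(txn): run one transaction
        self.buffer = deque()

    def submit(self, txn) -> None:
        if self.read_flag():
            self.buffer.append(txn)  # hold it until the recompile finishes
            return
        while self.buffer:           # flag clear: drain the backlog first
            self.execute(self.buffer.popleft())
        self.execute(txn)
```

    The open questions raised above (compile time of the big package, backlog size, total txn latency) map directly onto how long the flag stays set and how fast the drain loop runs.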
    I am trying to come up with some kind of solution for this issue if possible.....
    Thanks,
    Purush

  • Version Control for BUSINESS OBJECTS repository

    Hi,
    Do we have any version control for business objects repository?
    Thanks

    Hi
    I am hoping someone can answer my Version Control queries. The LCM document is limited in its detail on VM.
    I am currently testing the BO LCM 3.1 and while it appears very easy to use especially for promotion, the Version Control Manager seems to be lacking in controls and a clear promotion path from dev to test to uat to prod.
    We have set up 2 identical environments for UAT and PROD.
    We are using the Version Control part of LCM to create version control for a universe.
    Logged into VM in UAT
    We have selected a universe
    Added it to VM
    Made a change to the universe in Designer
    Exported it
    Then Checked it in
    Can now see 2 versions in the history and the VMS Version. All good
    I then click on swap system and log into PROD
    The VM history is also there in PROD
    I have a number of concerns and questions and can't seem to find the solution to them anywhere.
    1. VM seems to be lacking a controlled promotion process across the environments. Basically we want to deploy following this path:
    Dev - Test - UAT - PROD
    There does not seem to be any controls or security which would stop you from GET VERSION from the DEV environment and putting that straight into PROD. Obviously we would not want that to happen.
    We would only want to GET VERSION from UAT
    Similarly for UAT We would only want to GET VERSION from TEST
    And for TEST We would only want to GET VERSION from DEV.
    Granted, we currently only have 2 identical environments.
    But are there controls that would stop you, when in PROD, from getting versions from any system other than UAT?
    Also, is there any reason why no promotion is required when using VM?
    This seems to negate the Promotion function of the LCM.
    Any advice on this would be greatly appreciated.
    Many thanks
    Eilish

  • 2012 Process Version Alters Exposure/Contrast/Tone Curve

    In working with the LR4 beta, I've come across a couple odd results.  I imported some image folders containing raw, tiff and jpeg.  The images were updated to the 2012 Process Version on import.  I noticed that I had not saved out the metadata on the raw files for earlier edits in LR3.  I closed LR4, opened LR3, saved out the metadata, reopened LR4, read the metadata from the files and this, unsurprisingly, reverted them back to the 2010 Process Version.  When I update the raw images to the 2012 Process Version, the Exposure is reduced by 1 stop and Contrast is set to -33.  Additionally, a significant Tone Curve adjustment is made.  No such alterations occur to tiff or jpeg files in toggling back and forth between 2010 and 2012.
    Please advise.
    Thanks.

    Eric, I guess what I'm trying to figure out is: if they aren't going to 'look' the same, why would that be?  Further, if no changes to the file were made (i.e., exposure, recovery, curves, etc.), then why would there be such a radical change to the image on updating to a new process version?
    As I noted earlier, I don't recall that updating to the 2010 PV from the prior one made such a change to images.  In fact, I just tried it.  I switched back to 2003, then to 2010, then to 2012.  No change to the image going from 2003 to 2010.  Big change going from either 2003 or 2010 to 2012.  To my way of thinking, if I take an image processed with the 2010 PV and am happy with the way it looks (it's displayed on my website, I've printed and sold it, etc.), then when I update to the 2012 PV I shouldn't have to re-edit the image to get the same 'look'.  I understand that due to the different controls some of the positions of the sliders may be different.  But the two images shouldn't 'look' different.  Take another example.  If I've got a set of images from a commercial shoot, have edited them in LR and provided them to the client, and then I update those images to the 2012 PV in LR 4 and the resulting images aren't the same, I've got to spend time (and money) re-editing to get back to what I had before.  Does that seem right?  It sure doesn't to me.  It seems, actually, the exact opposite of what should be.
    In the screen grabs above, this is a marked change to the image.  It's categorically not just a difference in slider/control positions to get the same 'look'.  It's a completely different image with a completely different look.  The file has been radically altered.  To me, that shouldn't be the case.
    EDIT:  I guess we were posting at the same time, Eric.  Sorry, I understand the explanation but it just doesn't make a lot of sense. 

  • Update to process version 2010 & smart collection "has adjustments"

    Having bought the new Lightroom 3 version, I of course wanted to benefit from the new process version. So I selected all images I had "not adjusted" before and selected Update to Process Version 2010. This works great.
    However I now have one problem.
    I had, and have, a Smart collection which helps me in my workflow find all images that have not been touched, i.e. have no adjustments.
    This Smart collection has become worthless because, after the update to process version 2010, all images count as adjusted.
    OK, this is technically correct, and yet I miss my ability to select images which I have not adjusted.
    Once I have set the default process version to 2010 all new images are correctly shown.
    Question: how can I select all those images which have only the process version updated, but have no further adjustments?

    Changing the process to version 2010 is considered as an adjustment; it will appear in the photo's history in the same way as any other development adjustment. The smart filter condition "Has Adjustments" only has a true or false setting so you can't distinguish the process version adjustment from any other. Instead of using this method to spot the photos I need to work on, I find that I have more control over my workflow by using keywords such as "review", "develop", "print"; you can make these keywords not exportable so that they are not part of the keyword list in exported photos.
    Interestingly, when one or more photos are selected in the film strip and the reset button is clicked in develop mode, the photo will revert back to its original state and the "Has Adjustments" condition will become false, but the process version will remain at 2010.
    If you have a backup of your catalog that precedes the step where you updated the process version of all photos, open this backup, add a keyword of your choice to all the photos that have not been touched, then apply the process version change. You can then change your smart catalog to use this keyword as opposed to the "Has Adjustment" condition and remove the keyword you have set on photos that have not been touched as soon as you are done editing them. IMPORTANT: Any changes you have made following the date and time of the backup will be lost.
    If you don't have a backup to go to, you will have to manually identify which photos, from your point of view, have not been touched.
    Some ideas:
    If you did not make other edits following the change of the process version, and have not made other edits on that date, you could build a smart collection based on the "Edit Date" and then reset the develop settings of all the photos in that collection.
    Another possible condition is a "Capture Date" range where you know photos haven't been touched yet.
    There may be other options, there may be a plugin that can help you, or ultimately there would be means to access the database outside of Lightroom - contact me if you get that desperate to fix this!
    http://www.BDLImagery.com
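    On that last resort: the .lrcat catalog is a SQLite database, so it can be inspected with any SQLite client. A minimal read-only sketch that lists develop-related tables to explore, assuming nothing about the schema itself (the catalog path is a placeholder, and you should always work on a copy of the catalog, never the live one):

```python
import sqlite3

def find_tables(catalog_path, keyword):
    """List tables in a Lightroom catalog (a SQLite file) whose
    names contain the given keyword, case-insensitively."""
    conn = sqlite3.connect(catalog_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"
        ).fetchall()
    finally:
        conn.close()
    return sorted(name for (name,) in rows if keyword.lower() in name.lower())

# Example (path is hypothetical):
# print(find_tables("My Catalog Copy.lrcat", "develop"))
```

    From there you could look at which rows carry develop settings for each photo, but treat any direct query strictly as read-only investigation; editing the catalog outside Lightroom is unsupported.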

  • Extremely slow performance on projects under version control using RoboHelp 11, PushOk, Tortoise SVN repository

    We are also experiencing extremely slow performance for RoboHelp projects under version control. We are using RoboHelp 11, PushOk and a Tortoise SVN repository on a Linux server. We are using a Linux server on our IT guys' advice, because we found SVN version control under Windows was unstable.
    When placing a Robohelp project under version control, and yes the project is on my local machine, it can take up to two hours to complete. We are using the RoboHelp sample projects to test.
    We have tried to put the project under version control from Robohelp, and also tried first putting the project under version control from Tortoise SVN, and then trying to open the project from version control in Robohelp. In both cases, the project takes a ridiculous amount of time to open. The Robohelp status bar displays Querying Version Control Status for about an hour before it starts to download from the repository, which then takes more than an hour to complete. In many cases Robohelp becomes unresponsive and we have to start the whole process again.
    If adding the project to source control completes successfully, and the project is opened from version control, performing any function also takes a very long time, such as creating a topic. When I generated a printed documentation layout it took an astonishing 218 minutes and 17 seconds to complete. Interestingly, when I generated the printed documentation layout again, it took 1 min and 34 seconds. However, when I closed the project, opened it from version control, and tried to generate a printed documentation layout, it again took several hours to complete. The IT guys are at a loss and say it is not a network issue, and I am starting to agree that this is a RoboHelp issue.
    I see there are a few other discussions here related to this kind of poor performance, none of which seem to have been answered satisfactorily. For example:
    Why does it take so long when adding a new topic in RH10 with PushOK SVN
    Does anybody have any ideas on what we can do or what we can investigate? I know that there are other options for version control, but I am reluctant to pursue them until I am satisfied that our current issues cannot be resolved.
    Thanks Mark

    Do other applications work fine with the source control repository? The reason I'm asking is that you must first rule out external factors causing this behaviour. It seems that your IT guys have already looked at it, but it's better to be safe than sorry.
    I have used both VSS and TFS and I haven't encountered such a performance issue. I would suggest filing it as a bug once you have ruled out external influences: https://www.adobe.com/cfusion/mmform/index.cfm?name=wishform&loc=en
    Kind regards,
    Willam
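    One way to separate RoboHelp from the repository itself is to time the same SVN operations from the command line: if a raw `svn list` or checkout of the project is fast while RoboHelp sits on "Querying Version Control Status" for an hour, the bottleneck is on the RoboHelp/PushOk side rather than the network. A rough sketch (the repository URL is a placeholder, not your actual server):

```python
import subprocess
import time

def time_command(cmd):
    """Run a command and return (elapsed_seconds, return_code)."""
    start = time.perf_counter()
    result = subprocess.run(cmd, capture_output=True)
    return time.perf_counter() - start, result.returncode

# Compare raw SVN speed against what RoboHelp sees
# (repository URL is a placeholder):
# elapsed, rc = time_command(["svn", "list", "--depth", "infinity",
#                             "http://svnserver/repo/project"])
# print(f"svn list took {elapsed:.1f}s (rc={rc})")
```

    Running this from the same workstation that runs RoboHelp gives a like-for-like comparison and hard numbers to hand to the IT team or to attach to a bug report.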

  • OWB 10g/11g version control

    I am using OWB 10g Release 2, but there is no feature for version control. The snapshot feature and export/import can help, but it's very difficult with big projects.
    Is there any new feature in OWB 11g Release 1 or 2 for version control? Do you have any suggestions for version control in 10g?
    Robin
    Edited by: user451399 on 2009-06-12 08:07

    Hi Robin,
    In the repository there can be only one version at the moment. If you have to work on an older release, you have to load it into another repository.
    We proceed as follows:
    1. Build a collection containing all objects that belong to the release
    2. Export that collection
    3. Check the file into CVS
    4. Import the file into production repository (in production db)
    5. Deploy from the production repository to the production target schema
    That way the release that is currently deployed on production is also in the production repository. You may do hotfixes directly here or just have a look at what is currently deployed.
    In our development repository we can work on new features.
    I gave a talk about automating this process on DOAG 2008. You may request the presentation here [http://www.metafinanz.de/leistungen/leistungsbereiche/bi-reporting/data-warehousing/kontaktformular/|http://www.metafinanz.de/leistungen/leistungsbereiche/bi-reporting/data-warehousing/kontaktformular/]
    Though the slides are in German, they show the architecture and should give you some idea.
    Regards,
    Carsten.
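    Steps 2-3 of the flow above (export the collection, then check the file into CVS) lend themselves to scripting. A minimal sketch of the CVS side only, assuming the export file already sits inside a CVS working copy and the `cvs` client is on the PATH; file names and messages are placeholders, and the `runner` parameter exists only so the commands can be exercised without a real repository:

```python
import subprocess
from pathlib import Path

def cvs_release_commands(export_file, message):
    """Build the CVS commands for step 3 of the release flow:
    checking an exported collection file into CVS."""
    name = Path(export_file).name
    return [
        ["cvs", "add", name],            # harmless error if already tracked
        ["cvs", "commit", "-m", message, name],
    ]

def run_release(export_file, message, runner=subprocess.run):
    """Execute the commands inside the export file's working copy."""
    for cmd in cvs_release_commands(export_file, message):
        runner(cmd, cwd=str(Path(export_file).parent))

# Example (file name and message are placeholders):
# run_release("exports/release_1_2.mdl", "Release 1.2 collection export")
```

    The same wrapper could be extended to drive the export itself and the import into the production repository, which is essentially what the DOAG talk describes automating.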

  • Version Control on BI7?

    Hi Guys,
    During my current project, I encountered a problem with version control.
    For various reasons we only have two environments (DEV & PROD), without a QA system, so there comes the problem.
    All development is performed on the DEV environment and then transported to the PROD system. We can't control all the development versions in BI. So the crisis happens: we don't know the current version of our PROD system. Very bad news :(
    Can any expert give some advice on how to control the versions (InfoObjects, process chains, cubes, queries, etc.)?
    Does the BI system have a version control component? Can the objects (created in BI and needing to be transported) be exported and marked with some version number?
    My purpose is as follows:
    1: Using some method to control the versions on the DEV and PROD systems; I need to see clearly what the differences are in some data models between the DEV and PROD systems.
    2: Can any tools be used to control the versions?
    3: If the versions conflict, can we roll the version back quickly and effectively?
    Also, can any experts explain the best practice for version control used in their own projects?
    Thanks in advance
    Jinwei Zhang From Beijing China

    Unfortunately there is no version control in BI. So once you have made changes to any objects, the previous changes are overwritten.
    Also, for comparison, you would need to do a side-by-side comparison between the DEV and PROD boxes in your case. There is no easy method.
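    In the absence of built-in versioning, one low-tech approach (similar to the XML diff mentioned at the top of this thread, where only internal IDs differed between DEV and PROD) is to export the object definitions from both systems and compare fingerprints after stripping environment-specific attributes. A hedged Python sketch, assuming the exports are plain XML and that `id` is the attribute that varies between boxes:

```python
import hashlib
import xml.etree.ElementTree as ET

def fingerprint(xml_text, ignore_attrs=("id",)):
    """Hash an exported object definition while ignoring
    environment-specific attributes such as internal IDs."""
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        for attr in ignore_attrs:
            elem.attrib.pop(attr, None)  # drop the attribute if present
    canonical = ET.tostring(root)
    return hashlib.sha256(canonical).hexdigest()

# Two exports that differ only in an internal variable ID:
dev  = '<process name="Invoice" version="3"><var id="101" name="total"/></process>'
prod = '<process name="Invoice" version="3"><var id="777" name="total"/></process>'
print(fingerprint(dev) == fingerprint(prod))  # prints True
```

    Comparing the per-object fingerprints from both systems then gives a quick list of which data models have actually drifted, without a manual side-by-side review of every object.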
