Expressions - best practice for duplicating layers

Hello,
I have a fairly complex set of expressions on multiple layers. Together, these layers make up an overall effect for a piece of footage. For convenience, I have the expressions linked to a controller null object so I can control them from one place. Now that it's set up, I want to duplicate the group of layers (footage item, various effects, and null object controller) multiple times. I basically have a template for a footage effect that I want to duplicate, so I end up with many pieces of footage with this effect applied. When I duplicate the group of layers, I'd love for the expression variables to point to the duplicated controller object. I've noticed this happens if the expression is a simple pick whip from one property to another, a direct connection. But if I store that pick-whipped reference in a variable to manipulate it further, this behavior stops working. So my question is: how can I set things up so that I can duplicate the layers and have the expressions update as if they were a simple direct pick whip from one property to another? Is this possible? If not, how can I restructure this to accomplish my end result?
Thanks!
-Justin

If your layer names end with a space and a number (so that the numbers increment when you duplicate the layers), you should be able to use each layer's own name to navigate to the other layers in its set. For example, if your layers are named "layer 1", "layer 2", etc. and your control layers are named "control 1", "control 2", etc., you could do something like this in your expressions to construct the control layer's name on the fly:
n = name.split(" "); // e.g. "layer 3" -> ["layer", "3"]
ctrl = thisComp.layer("control " + n[n.length-1]); // the matching layer "control 3"
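Because ctrl is rebuilt from the layer's own name on every frame, the reference survives duplication the way a plain pick whip does. For example, a position expression that reads a Slider Control on the matching control layer might look like this (the "Offset" effect name here is a hypothetical example, not from the original setup, and this assumes a 2D position property):
n = name.split(" ");
ctrl = thisComp.layer("control " + n[n.length-1]);
offset = ctrl.effect("Offset")("Slider"); // slider on this group's own controller
value + [offset, 0]
Duplicating the whole group increments every trailing number together ("layer 2", "control 2"), so each copy automatically points at its own controller.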
Dan

Similar Messages

  • Best practice for nested scripting on UCCX

    Hello All!
    Newbie on scripting here; I inherited a HIGHLY customized UCCX environment.  Looking at the scripts, it seems the system integrator built out the logic for every script within every script.  That is, if one script queues calls into CSQ1 and, when they have waited more than 2 minutes, sends them to CSQ2, it contains all of the logic that the regular CSQ2 script has as well.  So any time a change has to be made, it has to be made to the CSQ2 logic in both the CSQ1 script AND the CSQ2 script.
    Is there a best practice for this?  I would think it would be easier, in the above scenario, to redirect a caller waiting in CSQ1 for more than 2 minutes to a trigger for the application that uses CSQ2.  That way the logic lives in the respective scripts and is not duplicated across the board.
    Any thoughts and experiences on how this is best accomplished and maintained would be greatly appreciated!
    Thanks!
    JG

    Yes, if the queue logic is common across your different CSQs, you can build one common script, reference it as a subflow from your main scripts, and simply pass the CSQ name to it as a variable.  This way you avoid what you described: the need to rebuild logic in several places whenever you make a global change; you change only the one common script.
    HTH, please rate all helpful posts!
    Chris

  • Best practices for development / production environments

    Our current scenario:
    We have one production database server containing the APEX development install, plus all production data.
    We have one development server that is cloned nightly (via RMAN duplicate) from production. It therefore also contains a full APEX development environment, and all our production data, albeit 1 day old.
    Our desired scenario:
    We want to convert the production database to a runtime only environment.
    We want to be able to develop in the test environment, but since this is an RMAN-duplicated database, every night the runtime-only APEX will overlay it, along with the production versions of the apps. However, we still want up-to-date data against which to develop.
    Questions: What is best practice for this sort of thing? We've considered a couple of options:
    1.) Find a way to clone the database (RMAN or something else) that leaves the existing APEX environment intact. If that is doable, we can modify our nightly refresh procedure to refresh the data, but not APEX.
    2.) Move apex (in both prod and dev environments) to a separate database containing only APEX, and use DBLINKS to point to the data in both cases. The nightly refresh would only refresh the data and the APEX database would be unaffected. This would require rewriting all apps to use DBLINKS though, as well as requiring a change to the code when moving to production (i.e. modify the DBLINK to the production value)
    3.) Require the developers to export their apps when done for the day, and reimport the following morning. This would leave the RMAN duplication process unchanged, but would add a manual step which the developers loathe.
    We basically have two mutually exclusive requirements - refresh the database nightly for the sake of fresh data, but don't refresh the database ever for the sake of the APEX environment.
    Again, any suggestions on best practices would be helpful.
    Thanks,
    Bill Johnson

    Bill,
    To clarify, you do have the ability to export/import, happily, at the application level. The issue is that if you have an application that consists of more than a couple of pages, you will find yourself in a situation where changes to page 3 are tested and ready but changes to pages 2, 5 and 6 are still in various stages of development. You then need to get the change for page 3 in to resolve a critical production issue. How do you do this without also sending pages 2, 5 and 6 in their current state, if you have to move the application all at once? The issue is that you are absolutely going to need to version control at the page level, not at the application level.
    Moreover, the only supported way of exporting items is via the GUI. While practically everyone doing serious APEX development has moved on to either PL/SQL or utility hacks, Oracle still will not release a supported method for doing this. I have no idea why that is... maybe one of the developers would care to comment on the matter. Obviously, if you want to automate, you will have to accept this caveat.
    As to which backend source control tool you use, the short answer is that it really doesn't matter. As far as the VC system is concerned, your APEX exports are simply files. Some versioning systems allow promotion of code through various SDLC stages. I am not sure about GIT in particular but, if it doesn't support this directly, you could always mimic the behavior with multiple repositories. That is, create a development repository into which you automatically update via exports every night. Whenever particular changes are promoted to production, you can at that time export from the development repository into the production one. You could, of course, create as many of these "stops" as necessary to mirror your shop's SDLC stages, e.g. dev, qa, integration, staging, production, etc.
    -Joe
    Edited by: Joe Upshaw on Feb 5, 2013 10:31 AM

  • SAP Best Practices for Data Migration :repositories only on MS SQL Server ?

    Hi,
    I'm implementing the "SAP Best Practices for Data Migration" (see https://websmp109.sap-ag.de/bp-datamigration).
    As part of the installation you have to install MS SQL Server Express Edition. The installation guide contains detailed steps to do this. All repositories for Data Services should be running on SQL Server, according to the installation guide.
    The customer I'm working for now does not want to use SQL Server, but DB2, as company standard.
    So I use DB2 for the local and profiler repositories.
    I notice, however, that the web application http://localhost:8080/MigrationServices does not support DB2. The only database type you can select in the configuration area is MS SQL Server.
    Is this a limitation, or is it by design?

    Hans,
    The current release of SAP Best Practices for Data Migration, v1.32, supports only MS SQL Server. The intent when developing the DM content was to quickly set up a temporary, standardized data migration environment, using tools that are available to everyone. SQL Server Express was chosen to host the repositories because it is easy to set up and can be downloaded for free. Some users have successfully deployed the content on Oracle XE, but as you have found, the MigrationServices web application works only with SQL Server.
    The next release, including the web app, will support SQL Server and Oracle, but not DB2.
    Paul

  • Best practices for graphic divs and classes

    Nice day webmates, I am looking for good and simple tutorials on creating divs with attractive looks, such as rounded corners, own-customized (made in FW), displaying shadows... so I can type content inside them... what are the best practices for this?
    Thanks a lot in advance...

    Bad example. When you increase text size, that site becomes unusable.
    Best practice: don't use absolutely positioned divisions (aka layers).
    http://www.roundedcornr.com/
    --Nancy O.
    Alt-Web Design & Publishing
    www.alt-web.com
    "Sw Jiten" <[email protected]> wrote in message
    news:g1f2i7$rve$[email protected]..
    > here you can see an example: www.peru.com
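    For what it's worth, current browsers can draw rounded corners and drop shadows natively through the CSS border-radius and box-shadow properties, with no sliced images required. A minimal JavaScript sketch, assuming a hypothetical "panel" class on the divs (the same two properties would normally just live in a stylesheet):
    // Round the corners and add a soft drop shadow on every div with class "panel"
    document.querySelectorAll(".panel").forEach(function (el) {
      el.style.borderRadius = "8px";                      // native rounded corners
      el.style.boxShadow = "2px 2px 6px rgba(0,0,0,0.4)"; // native drop shadow
    });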

  • JSR 168 best practice for saving inter-portlet state

    The portlet specification doesn't yet cover inter-portlet communication. Until it does, what is the best practice for saving state so that multiple portlets can use the same data?

    The portlet session is layered on top of the HTTP session except for
    attribute scoping. So, all portlets in a webapp share the same
    HTTP/portlet session for a given client.
    All APPLICATION_SCOPEd portlet session attributes are automatically
    available to other portlets. With PORTLET_SCOPEd attributes, portlet
    containers namespace attribute names, so it would be hard to get
    attributes by name (but still possible).
    Subbu
    Chris Jennings said the following on 10/14/2003 09:17 AM:
    If I set an attribute in a PortletSession, though, it isn't visible from another PortletSession... right? Please tell me that's wrong ;-) If that's right, how can I have portlet B get to something set by portlet A?
    Subbu Allamaraju <[email protected]> wrote:
    Chris,
    For sharing transient state, the only possible mechanism in V1.0 is to
    use sessions.
    Subbu

  • Best Practices for JMS Service Documentation

    Our software consists of a variety of JMS producers and consumers used to transform and transmit business to business messages on a large scale.
    The service nodes do a variety of things, and we've had trouble over the years ensuring every queue-based service clearly identifies the parameters and payload it accepts, so that everyone from programmers to system administrators can easily see what services are available and how they are to be used.
    Some have advocated always adding a web service in front of each message-based service to guarantee interface contracts are all well publicized. I think there are reasons to choose web services and reasons to choose message-based services, and am not convinced this is the right answer to address limitations in expressing the design contract for a message-based service. That said, I really like what we've been able to do with self-documenting web services based on annotations, and wonder if there's a conceptual equivalent in message-based software.
    What are your best practices for ensuring your message-based services are as self-documenting as your modern web service?
    Thanks in advance for your advice!
    Edited by: lsamaha on Apr 16, 2012 11:46 AM

    *bump*

  • Best practices for data migration

    Hi All,
    This thread is for those who would like to use the opportunity to share ideas and knowledge about making the best use of data migration with Business Objects Data Services.


  • Best practices for setting up users on a small office network?

    Hello,
    I am setting up a small office and am wondering what the best practices/steps are to set up and manage the admin, user logins and sharing privileges for the below setup:
    Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
    Video Editing Suite: Want to connect a new iMac and a Mac Pro, on an open login (multiple users)
    All machines need to be able to connect to the network, peripherals and the external hard drive. Also, I would like to set up drop boxes to easily share files between the computers (I was thinking of using the external hard drive for this).
    Thank you,

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more detailed information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang

  • Best-practice for Catalog Views ? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000) and 300 end users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is how I can implement this.
    My first idea is:
    1. Create 110 Procurement Catalogs (1 for every product category). Each catalog should contain only its product category.
    2. Assign, in my Org Model at the user level, all the "catalogs" that the user should access.
    Do you have any ideas on how to improve this?
    Saludos desde Mexico,
    Diego

    Hi,
    Your way of doing it will work, but you'll run into maintenance issues (too many catalogs, and a catalog link to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
    My advice:
    - Challenge your customer about views, and try to limit the number of views, for example to strategic and non-strategic
    - With CCM 1.0, stick to the procurement catalogs, or implement BADIs to assign items to the views (I have experienced it, it works, but it is quite difficult), but with a limited number of views
    Good luck.
    Vadim

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same oracle application server.
    Defining agent for each?
    Gateway for all?
    BPEL security extension?
    The top-level service that is defined as a business process is itself secured through OWSM and usernames/passwords, but what is the best practice for establishing security for each of the low-level services?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not; if it is performing any business-critical operation, then it should be secured.
    The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service which uses one of the existing lower-level services.
    If all the services are in one Application server you can make the configuration/development environment lot easier by securing them using the Gateway.
    The typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the App server only from the Gateway.
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    The next BPEL developer in your project may not be aware of security extensions.
    Centralizing security enforcement will keep your development and security operations loosely coupled, and it addresses scalability.
    Thanks
    Ram

  • Best practice for multi-language content in common areas

    I've got a site with some text in the header/footer/nav that needs to be translated between an English and a Spanish site, which use the same design. My intention was to set up all the text as content to facilitate this. However, if I use a standard dialog with the component's path set to a child of the current page node, I would need to re-enter the text on every page. If I use a design dialog, or a standard dialog with the component's path set absolutely, the English and Spanish sites will share the same text. If I use a standard dialog with the component's path set relatively (e.g. path="../../jcr:content/myPath"), the pages using the component would all need to be at the same level of the hierarchy.
    It appears that the Geometrixx demo doesn't address this situation, and leaves copy in English. Is there a best practice for this scenario?

    I'm finding that something to the effect of <cq:include path="<%= strCommonContentPath + "codeEntry" %>" resourceType ...
    works fine for most components, but not for parsys, or a component containing a parsys. When I attempt that, I get a JS error that says "design.path is null or not an object". Is there a way around this?

  • Best Practice for utility in Sol Man 4.0

    We have software component ST-ICO of release 150_700 with patch level 5.
    We want a template selection for the ‘Utility’ industry. I checked in the service marketplace and found that 'Baseline Package United Kingdom V1.50, Template: BP_BLKU150' is available in the above software component.
    But we are not getting any templates other than 'BP_UTUS147 - Best Practices for Water Utility' in the SOLAR_PROJECT_ADMIN transaction.
    Kindly suggest whether any patch needs to be applied or some configuration needs to be done.
    Regards
    Mani

    Hi Mani,
    Could you please give me the link to where you found the template BP_BLKU150?
    It will be helpful for me.
    Thanks
    Senthil

  • Best Practices for SRM Installation !!

    Hi
    Can someone share the best practices for SRM installation?
    What is the typical timeframe to install SRM on a development server, as well as on the production server?
    Appreciate the responses.
    Thanks,
    Arvind

    Hi
    I don't know whether this will help you.
    See these links as well.
    http://help.sap.com/bp_epv170/EP_US/HTML/Portals_intro.htm
    http://help.sap.com/bp_scmv150/index.htm
    http://help.sap.com/bp_biv170/index.htm
    http://help.sap.com/bp_crmv250/CRM_DE/index.htm
    Hope this will help.
    Please reward suitable points.
    Regards
    - Atul

  • Best practices for ARM - please help!!!

    Hi all,
    Can you please help with any pointers / links to documents describing best practices for who should be creating the GRC request in the below ARM workflow in GRC 10.0?
    Create GRC request -> role approver -> risk manager -> security team
    The options are: end user / manager / functional super users / security team.
    End user and manager are not possible; we cannot train that many people. The functional team is refusing since it is a lot of work. Please help me with pointers to any best practices documents.
    Thanks!!!!

    In this case, I recommend proposing that the department managers create GRC Access Requests.  In order for the managers to comprehend the new process, you should create a separate "Role Catalog" that describes what abilities each role enables.  This Role Catalog needs to be taught to the department Managers, and they need to fully understand what tcodes and abilities are inside of each role.  From your workflow design, it looks like Role Owners should be brought into these workshops.
    You might consider a Role Catalog that the manager could filter on and make selections from.  For example, an AP manager could select "Accounts Payable" roles, and then choose from a smaller list of AP-related roles.  You could map business functions or tasks to specific technical roles.  The design flaw here, of course, is the way your technical roles have been designed.
    The point being, GRC AC 10 is not business-user friendly, so using an intuitive "Role Catalog" really helps the managers understand which technical roles they should be selecting in GRC ARs.  They can use this catalog to spit out a list of technical role names that they can then search for within the GRC Access Request.
    At all costs, avoid having end-users create ARs.  They usually select the wrong access, and the process then becomes very long and drawn out because the role owners or security stages need to mix and match the access after the fact.  You should choose a Requestor who has the highest chance of requesting the correct access.  This is usually the user's Manager, but you need to propose this solution in a way that won't scare off the manager - at the end of the day, they do NOT want to take on more work.
    If you are using SAP HR, then you can attempt HR Triggers for New User Access Requests, which automatically fill out and submit the GRC AR upon a specific HR action (New Hire, or Termination).  I do not recommend going down this path, however.  It is very confusing, time consuming, and difficult to integrate properly.
    Good luck!
    -Ken
