Best practice - applying Filters

What's the best practice in terms of performance versus code reuse, when applying filters?
Right now, I see 3 ways to specify a filter:
1. as an <mx:filters> or <s:filters> tag
2. in actionscript:
var glowFilter:GlowFilter = new GlowFilter();
--OR--
3. in MXML graphics (FXG):
<s:Rect>
  <s:filters>
       <s:DropShadowFilter blurX="20" blurY="20" alpha="0.32" distance="11" angle="90" knockout="true" />
  </s:filters>
</s:Rect>
What's the best way to apply this, in terms of performance? Thanks.

Fredrik,
Though similar to your workflow, here is how I would do it:
1. Import those "raw" Clips into a Project, and do my Trimming in that Project, relying on the Source Monitor to establish the In & Out Points for each, and also using different Instances of any longer "master Clip." I would also do my CC (Color Correction) and all density (Levels, etc.) Effects here. Do not Trim too closely, as you will want to make sure that you have adequate Handles to work with later on.
2. Use the WAB (Work Area Bar) to Export the material that I needed in "chunks," using either the Lagarith Lossless CODEC or the UT Lossless CODEC.*
3. Import my music into a new Project and listen over and over, making notes on what visuals (those Exported/Shared Clips from above) I have at my disposal. At this point, I would also be making notes as to some of the Effects that I felt went with the music, based on my knowledge of the available visuals.
4. Import my Exported/Shared, color-graded Clips.
5. Assemble those Clips, and Trim even more.
6. Watch and listen carefully, going back to my notes.
7. Apply any additional Effects now.
8. Watch and listen carefully.
9. Tighten any edits, adjust any applied Effects, and perhaps add more Effects (or remove existing ones).
10. Watch and listen carefully.
11. Output an "approval" AV for the band/client.
12. Tweak, as is necessary.
13. Output a "final approval" AV.
14. Tweak, as is necessary.
15. Export/Share to the desired delivery formats.
16. Invoice the client.
17. Cash the check.
18. Declare "wine-thirty."
This is very similar to your proposed workflow.
Good luck,
Hunt
* I have used the Lagarith Lossless CODEC with my PrE 4.0, but have not tried UT. Both work fine in PrPro, so I assume that UT Lossless will work in PrE too. These CODECs are fairly quick in processing/Exporting, and offer the benefit of smaller files than Uncompressed AVI. They are visually lossless. The resultant files will NOT be tiny, so one would still need a good amount of HDD space. Neither CODEC introduces any artifacts or color degradation.

Similar Messages

  • Obiee 11g : Best practice for filtering data allowed to user

    Hi gurus,
    I have a table of the allowed areas for each user.
    I want to show only the data facts associated with these allowed areas.
    For instance my user scott can see France and Italy data.
    I made a session variable and put it in a filter.
    It works, but only one value (the first one, I think) is taken into account (for instance, with my solution Scott will see only France data).
    I need all the possible values.
    I tried with the row-wise parameter of the session variable, but it doesn't work (OBIEE error).
    I've read things on the internet about using STRAGG or VALUELISTOF, but neither worked.
    What would be the best practice to achieve this goal of filtering data with per-user conditions stored in the database?
    Thanks in advance, Emmanuel

    Check this link
    http://oraclebizint.wordpress.com/2008/06/30/oracle-bi-ee-1013332-row-level-security-and-row-wise-intialized-session-variables/

  • Best Practice on Creating Queries in Production

    We are a fairly new BI installation. I'm interested in the approach other installations take to creation of queries in the production environment. Is it standard to create most queries directly into the production system? Or is it standard to develop the queries in the development system and transport them through to production?

    Hi,
    Best practices apply to all development, whether it is R/3, BI modelling, or Reporting: as per best practice, we do development in the Development system, testing in the test box, and finally deploy successful developments to Production. For ad-hoc analysis, users can create user-specific custom queries directly in Production (sometimes referred to as X-queries, created by super users).
    So it is always best to do all your development in the Development box and then transport to Production after successful QA testing.
    Dev

  • Best Practice on Moving Standard Hierarchy (Cost /Profit)  into production

    Hi,
    What is the best practice to move standard hierarchy into production? Is it better to move it as a transport? Or is it better to upload into production with LSMW?
    Thanks,
    Santoshi

    Hi,
    Best practices apply to all development, whether it is R/3, BI modelling, or Reporting: as per best practice, we do development in the Development system, testing in the test box, and finally deploy successful developments to Production. For ad-hoc analysis, users can create user-specific custom queries directly in Production (sometimes referred to as X-queries, created by super users).
    So it is always best to do all your development in the Development box and then transport to Production after successful QA testing.
    Dev

  • RTLS best practices for outdoor environments

    Hi there,
    I was curious as to what the requirements are for outdoor location-based services. Do the indoor best practices apply to outdoor environments? As in APs located 40 - 70 feet apart, clients heard at -75 dBm at 3 APs, etc.
    I have a client that we surveyed using these specs and 1532i APs. However, they are now running fibre outdoors, so we are thinking of 1552i models to accommodate that. However, I can't find the Tx power levels to know if the 1552i could emit a cell size small enough.
    Any advice?
    -Brett

    If you look at the data sheet for both models, the maximum transmit power is close. So lowering the TX power would be the same on both models. 
    -Scott

  • Best Practice for applying patches in WL 8.1

    Below is what I did to apply patches to WL 8.1 SP2. Is there a better way?
    Best Practice? Thanks in advance.
    1) Created directory.
    C:\bea\weblogic81\server\lib\patches
    2) Copied jar files to patches directory.
    CR127930_81sp2.jar
    CR124746_81sp2.jar
    3) Modified startWebLogic.cmd to include jar files first in the classpath.
    set CLASSPATH=%WL_HOME%\server\lib\patches\CR124746_81sp2.jar;%WL_HOME%\server\lib\patches\CR127930_81sp2.jar;%WEBLOGIC_CLASSPATH%;%POINTBASE_CLASSPATH%;%JAVA_HOME%\jre\lib\rt.jar;%WL_HOME%\server\lib\webservices.jar;%CLASSPATH%
    4) Restarted server and saw in console that the patches were applied.

    Hi:
    SAP Standard does not recommend you to update the quantity field in asset master data. Just leave the Qty field blank and mention the Unit of Measure as EA. When you post an acquisition through F-90 or MIGO, this field will get updated in the asset master data automatically. Hope this will help you.
    Regards

  • Does JSP best practice of putting under WEB-INF apply to JSF pages?

    I'm new to JSF and wondering if the "best practice" advice that used to be given of storing your jsp pages under WEB-INF (when using Model2) to keep them from being served up without going through your controller still applies with JSF.
    Since the component lifecycle is so different, I'm wondering whether it still applies. If anyone can explain why it might or might not, I'd appreciate it!
    Thanks!

    The rule is:
    keep all the pages you don't want the user to "browse" to directly under the WEB-INF directory.
    MeTitus
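    For illustration, a war layout following that rule might look like the sketch below (file names are hypothetical, not from the original question):

```
webapp/
├── index.xhtml            <- browsable entry point
└── WEB-INF/
    ├── web.xml
    └── pages/
        └── checkout.xhtml <- reachable only via the controller/navigation,
                              never directly by URL
```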

  • Best Practice for where to apply ACL's on a router

    I have a 1760 router with a 4-port Ethernet card. It has the Vlan1 interface on it for f0/0 in the IOS. I need to apply an ACL to that interface/subnet, with the physical cable in f0/0 and the IP range of Vlan1. When applying the ACL, should I apply it to the physical interface or to the Vlan (mgt) interface? What is the best practice, and are there any docs on this from Cisco?
    Thanks
    Chris

    Chris
    The f0/0 is operating as a switch port and as such you can not apply the access list directly to the physical interface. You should apply the access list to the vlan interface.
    HTH
    Rick
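    Rick's advice could be sketched like this (a hypothetical IOS configuration; the ACL number and addresses are placeholders, not from the original post):

```
! Hypothetical example: define the ACL, then apply it to the VLAN (SVI)
! interface rather than to the switched FastEthernet port.
access-list 101 permit ip 192.168.1.0 0.0.0.255 any
!
interface Vlan1
 ip access-group 101 in
```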

  • Best practices for applying sharpening in your workflow

    Recently I have been trying to get a better understanding of some of the best practices for sharpening in a workflow.  I guess I didn't realize it but there are multiple places to apply sharpening.  Which are best?  Are they additive?
    My typical workflow involves capturing an image with a professional digital SLR in RAW or JPEG or both, importing into Lightroom, and exporting to a JPEG file for screen use or for printing, both lab and local.
    There are three places in this workflow to add sharpening: in the SLR, manually in Lightroom, and during the export to a JPEG file or when printing directly from Lightroom.
    It is my understanding that sharpening is not added to RAW images even if you have enabled sharpening in your SLR. However, sharpening will be added to JPEGs by the camera.
    Back to my question, is it best to add sharpening in the SLR, manually in Lightroom or wait until you export or output to your final JPEG file or printer.  And are the effects additive?  If I add sharpening in all three places am I probably over sharpening?

    You should treat the two file types differently. RAW data never has any sharpening applied by the camera, only jpegs. Sharpening is often considered in a workflow where there are three steps (See here for a founding article about this idea).
    I. A capture sharpening step that corrects for the loss of sharp detail due to the Bayer array and the antialias filter and sometimes the lens or diffraction.
    II. A creative sharpening step where certain details in the image are "highlighted" by sharpening (think eyelashes on a model's face), and
    III. output sharpening, where you correct for loss of sharpness due to scaling/resampling or for the properties of the output medium (like blurring due to the way a printing process works, or blurring due to the way an LCD screen lays out its pixels).
    All three of these are implemented in Lightroom. I. and III. are essential and should basically always be performed. II. is up to your creative spirits. I. is the sharpening you see in the Develop panel. You should zoom in at 1:1 and optimize the parameters. The default parameters are OK but fairly conservative. Usually you can increase the mask value a little so that you're not sharpening noise, and play with the other three sliders. Jeff Schewe gives an overview of a simple strategy for finding optimal parameters here. This is for ACR, but the principle is the same. Most photos will benefit from a little optimization. Don't overdo it; just correct for the softness at 1:1.
    Step II, as I said, is not essential, but it can be done using the local adjustment brush, or you can go to Photoshop for this. Step III, however, is very essential. This is done in the export panel, the print panel, or the web panel. You cannot really preview these things (especially the print-directed sharpening) and it will take a little experimentation to see what you like.
    For jpeg, the sharpening is already done in the camera. You might add a little extra capture sharpening in some cases, or simply lower the sharpening in camera and then have more control in post, but usually it is best to leave it alone. Step II and III, however, are still necessary.

  • Any best practice to apply role based access control?

    Hi,
    I am starting to apply access permissions for new users, as set by the admin. I am choosing Role-Based Access Control for this task.
    Can you please share the best practices or any built-in feature in JSF to achieve my goal?
    Regards,
    Faysi

    Hi,
    The macro pattern is my work. I've received a lot of help from forums as this one and from the Java developers community in general and I am very happy to help others and share my work.
    Regarding the architect's responsibility of defining the pages according to the roles that have access to them: there is the enterprise.software infrastructure.facade
    java package.
    Here I implemented the Facade GoF software design pattern in the GroupsAndRolesAccessFacade java class. Thus, this is the only class the developer uses in order to define groups and roles of users and to define their access per page.
    This is according to the Java EE 6 tutorial, section VII Security, page 471.
    A group, role or user is created with an Identity Management application or by a custom application.
    Pages of the application and their sections are defined or modified together with the group, role or user who has access to them.
    For this you can use the createActiveGroup and createActiveRole methods of the GroupsAndRolesAccessFacade class.
    I've been in situations where end users were very strict about the functionality of the application.
    If you try to abstract web development, you can think of writing to the database, reading from the database and modifying the database as actions.
    Each of these actions should have a suggester, an approver and an implementor.
    Thus you can't call the createActiveGroup method, for example, without first calling the requestActiveGroupCreationHelper and then the approveOrDeclineActiveGroupCreationHelper method.
    After the pages a group has access to have been defined with the createActiveGroup method, a developer can find out the pages and their sections a group has access to by calling the getMinimumInformationAboutGroup method.
    Furthermore, if the application is very strict, that is, if every action which involves writing to the database must be recorded, this concept of suggester, approver and implementor is available through the recordActiveGroupAction method.
    For example, there is a web shop whose managers can change the prices of the products, but the boss will want to know who dared to lower prices.
    This action of lowering prices is an action of modifying the information in the database, and you can save in the database who suggested it, who approved it and who implemented it.
    Now that I write about the functionality of the macro pattern, I realise that some methods should have more proper names and I haven't had time to write documentation in the API, but this will be complete when I add the web pages for the architect to use for defining access control and for the end users to view who is doing what with their application.
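    The suggester/approver/implementor flow described above can be sketched in plain Java as follows (the class and method names here are simplified stand-ins for illustration, not the actual GroupsAndRolesAccessFacade API):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the suggester -> approver -> implementor idea:
// a group can only be created after its creation has been requested
// and then approved.
class AccessFacade {
    private final Set<String> pendingGroups = new HashSet<>();
    private final Set<String> approvedGroups = new HashSet<>();
    private final Map<String, Set<String>> groupPages = new HashMap<>();

    // Suggester: request that a group be created.
    void requestGroupCreation(String group) {
        pendingGroups.add(group);
    }

    // Approver: approve a pending request.
    void approveGroupCreation(String group) {
        if (pendingGroups.remove(group)) {
            approvedGroups.add(group);
        }
    }

    // Implementor: actually create the group with the pages it may access.
    // Fails if the approval step was skipped.
    void createActiveGroup(String group, Set<String> pages) {
        if (!approvedGroups.contains(group)) {
            throw new IllegalStateException("group not approved: " + group);
        }
        groupPages.put(group, new HashSet<>(pages));
    }

    // Which pages may this group access?
    Set<String> pagesFor(String group) {
        return groupPages.getOrDefault(group, Collections.emptySet());
    }
}
```

    Skipping the request/approve steps makes createActiveGroup throw, which is the enforcement the macro pattern relies on.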

  • Content filters based on Group Best Practice

    What is the best practice for content filters based on group?
    What we want to accomplish:
    We have a few groups, but I'll make an example with two.
    One group is allowed "Media" attachments and another group is allowed "Exe" attachments.
    What is the best practice if one user is in both groups?
    How would you do the content filtering?
    I don't see a content-filter condition like
    if (Envelope Recipient does not match group) then Block.
    Is the best way to first create
    if (attachment.type = "Media") then (insert header "sometext");
    and then, in a content filter below it,
    if (Envelope Recipient) and (Header does not contain "sometext") then Block?

    Hi,
    I understand that I will have to use BPM. What is the best way?

  • DKIM/DomainKeys Content Filters Best Practices?

    Hi All,
    I was wondering if anyone has some best practices on implementing content filters for DomainKeys/DKIM results on incoming mail. I am having a tough time figuring out a good solution to this problem, as we have various users who also subscribe to mailing lists, which obviously break DomainKeys if the server doesn't re-sign the message.
    Any suggestions would be helpful.
    Thanks!

    Hi,
    SDN is using Web Page Composer. You should also take a look into WPC for publishing EFP. WPC is based on KM, but has several advantages over XML Forms.
    Regarding the security aspect:
    There are several SDN articles / documentations about how to implement an EFP (like: Look & Feel, Framework Pages and Portal Navigation).
    You can restrict access for anonymous users - in fact, you'll have to explicitly allow access for anonymous users. If you don't want your users to access anything other than /etc/public/, just don't give the guest user read access. You can also use a reverse proxy to allow access to only the necessary KM folders and redirect the rest to the start page.
    br,
    Tobias

  • Applying common styles to multiple HNCS: What is the best practice?

    Hi Community
    Adhering to best practices, we have built a SharePoint 2013 intranet with multiple Host Named Site Collections all accessible via HTTPs, for example
    https://home.domain.com   -  Landing Page
    https://this.domain.com
    https://that.domain.com
    https://other.domain.com
    We have noticed issues with the home page on each site having an effect on the Meta Data Navigation Menu, so we thought it was time we reviewed our references.
    OK, we want to have a common master page and CSS, JavaScript, fonts etc. throughout the intranet. So what is the best way of implementing this,
    and what is a candidate provisioning strategy, say from Dev?
    My thoughts are copy a common custom master page to each Master Page Gallery but with options as to how we reference external files
    Option 1: replicate on each of the HNSCs:
    local copies of CSS, JS etc. in /SiteAssets/ and/or /Style Library/styles.css
    Or
    Option 2: explicit reference to the styles held on the home site collection.
    The master page might have a common reference to
    https://home.domain.com/SiteAssets/css/styles.css
    Or
    Option 3: use the _layouts file structure - not my favourite, as it is not accessible in SPD 2013 or via the SP2013 built-in document management.
    Use the hive and not the content database structure. Hence, all master pages would have references similar to _layouts/15/styles/mystyles.css or _layouts/15/images/client/home.jpg
    Would be interested to hear your thoughts, as clearly there is more than one way to achieve styles nirvana!
    Daniel
    Freelance consultant

    Hi Daniel,
    If you need to use the master page for multiple site collections, then you'd better choose option 2 or option 3, as you do not need to make copies of the CSS or JS files and re-upload them to each site collection.
    And per my knowledge, option 3 is better: because the CSS and JS files are stored on the local file system of the SharePoint server in option 3, it is faster than referring to a file stored in the database as in option 2.
    That said, it depends on your situation, as you noted you don't like option 3 when the CSS or JS files are not accessible in SPD.
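    For illustration, option 2 inside the master page might look like the fragment below (the URL is the hypothetical one from Daniel's question; SharePoint's own CssRegistration/ScriptLink controls could be used instead of plain tags):

```
<link rel="stylesheet" type="text/css" href="https://home.domain.com/SiteAssets/css/styles.css" />
<script type="text/javascript" src="https://home.domain.com/SiteAssets/js/scripts.js"></script>
```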
    Thanks,
    Victoria
    Forum Support

  • Subclass design problems/best practices

    Hello gurus -
    I have a question regarding the domain objects I'm sticking in my cache. I have a Product object and would like to create a few subclasses, say BookProduct and MovieProduct (along with the standard Product objects). These really need to be contained in the same cache. The issue/concern here is that both subclasses have attributes that I'd like to index AND query on.
    When I try to create an index on a subclass's attribute while there are also "standard" Products in the cache, I get the following error (the attribute only exists on one of the subclasses):
    2010-10-20 11:08:43.280/227.055 Oracle Coherence GE 3.5.2/463 <Error> (thread=DistributedCache:MyCache, member=2): Exception occured during index rebuild: java.lang.RuntimeException: Missing or inaccessible method: com.test.domain.product.Product.getAuthors()
    So I'm not sure the indexing is working or stopping once it hits this exception.
    Furthermore, I get a similar error when attempting to Filter based on that attribute. So if I want to add the following filter:
    Filter filter = new ContainsAnyFilter( "getAuthors", authors );
    I will receive the following exception:
    Caused by: Portable(java.lang.RuntimeException): Missing or inaccessible method: com.test.domain.product.Product.getAuthors()
    What is considered best practice for this, assuming these really should be part of the same named cache? Should I attempt to subclass the extractors to "inspect" the object for its class type during indexing or when applying filters? Or should I just add all the attributes of BookProduct and MovieProduct into the parent object and forget about subclassing? That seems to have a pretty high "yuck" factor to me. I'm assuming people have run into this issue before and am looking for some best practices or perhaps something that deals with this that I'm missing. We're currently using Coherence 3.5.2. Not sure if it matters, but we are using the POF format for serialization.
    Thanks!
    Chris

    Hi Chris,
    I had a similar problem. The way I solved it was to use a subclass of ChainedExtractor that catches all RuntimeExceptions occurring during the extraction, like the following:
    import com.tangosol.util.extractor.ChainedExtractor;
    import java.util.Map.Entry;
    /**
     * {@link ChainedExtractor} that catches any exceptions during extraction and returns null instead.
     * Use this for cases where you're not certain that an object contains the necessary methods to be extracted.
     * E.g. an object in the cache does not contain the getSomeProperty() method, but other objects do.
     * When these are put together in the same cache we might want to use a {@link ChainedExtractor} like the following:
     * new ChainedExtractor("getSomeProperty.getSomeNestedProperty"). However, this will result in a RuntimeException
     * for those entries that don't contain the object with someProperty. Using this class instead won't result in the exception.
     */
    public class SafeChainedExtractor extends ChainedExtractor
    {
         public SafeChainedExtractor()
         {
              super();
         }
         public SafeChainedExtractor( String sMethod )
         {
              super( sMethod );
         }
         @Override
         public Object extract( Object entry )
         {
              try
              {
                   return super.extract( entry );
              }
              catch ( RuntimeException e )
              {
                   return null;
              }
         }
         @Override
         public Object extractFromEntry( Entry entry )
         {
              try
              {
                   return super.extractFromEntry( entry );
              }
              catch ( RuntimeException e )
              {
                   return null;
              }
         }
    }
    For all indexes and filters we then use extractors that subclass SafeChainedExtractor, like the following:
    public class NestedPropertyExtractor extends SafeChainedExtractor
    {
         private static final long serialVersionUID = 1L;
         public NestedPropertyExtractor()
         {
              super("getSomeProperty.getSomeNestedProperty");
         }
    }
    // adding an index:
    myCache.addIndex( new NestedPropertyExtractor(), false, null );
    // using a filter:
    myCache.keySet(new EqualsFilter(new NestedPropertyExtractor(), "myNestedProperty"));
    This way, the extractor will just return null when a property doesn't exist on the target class.
    Regards
    Jan
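    The principle Jan describes (swallow the reflection failure and return null instead of propagating it) can be illustrated outside Coherence with plain reflection; SafeGetter here is an illustrative stand-in, not part of the Coherence API:

```java
import java.lang.reflect.Method;

// Illustrative stand-in for the SafeChainedExtractor idea above:
// invoke a no-arg getter by name, returning null when the target's
// class does not declare it, instead of propagating the exception.
class SafeGetter {
    static Object extract(Object target, String getterName) {
        try {
            Method m = target.getClass().getMethod(getterName);
            return m.invoke(target);
        } catch (ReflectiveOperationException e) {
            // Missing or inaccessible method: behave like the
            // null-returning extractor rather than failing the query.
            return null;
        }
    }
}
```

    The same tolerance is what lets the index build and the ContainsAnyFilter query proceed over cache entries that lack getAuthors().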

  • SOA Suite 11g Coding Best Practice Document

    Hello,
    I am looking for a coding best-practices document for SOA Suite 11g. I have seen the "soa_best_practices_1013x_drop3" document, but that was for SOA 10g. I could not find any such document for SOA 11g. Please let me know if someone has a document covering best practices, coding standards and naming conventions for BPEL, OSB, B2B etc.
    Regards,
    Prashant

    Now we need to publish our services on the internet. I am looking for the security mechanism that I should apply in order to make the services secure. I may even like to verify that the request invoking service A is only coming from a specified context.
    One approach we followed at a customer:
    - SOA was installed within the internal firewall zone.
    - An F5 BigIP load balancer was set up in the DMZ. This load balancer terminated one-way SSL connections coming from service consumers over the internet. The load balancer forwarded the requests to a pool of Apache web servers within the DMZ.
    - The Apache web servers had a redirection rule which forwarded the requests to the SOA server ports within the internal firewall zone. The internal firewall was opened to allow connections between the Apache web servers and the SOA server ports.
    - WS-Security username token/plain-text password was used for message-level security at the SOA services layer.
    Some alterations you can make:
    1. Enforce two-way SSL and make the load balancer validate the CN of the client certificate. This can make sure only authorized clients are able to call the service.
    2. OR set up some sort of IP filtering at the DMZ firewall, i.e. allow traffic only from authorized clients' IP addresses to the load balancer's virtual address for this specific service.
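    The Apache redirection rule mentioned above could be as simple as a mod_proxy reverse-proxy directive (the hostname and port below are placeholders, not from the original deployment):

```
# Hypothetical httpd.conf fragment on the DMZ web servers:
# forward service traffic to the SOA server inside the firewall.
ProxyPass        /soa-infra http://soa-internal.example.com:8001/soa-infra
ProxyPassReverse /soa-infra http://soa-internal.example.com:8001/soa-infra
```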
