Best Practice for commenting clips?

I have started working on a project that involves going through a large amount of previously shot video for a series of marketing campaigns. Each video is a short 2-3 minute vignette about one part of the institution I work for.
The clips I get are already dumped from the camera and come in the form of:
{date}-{time}-{incremental}.mov
I import them all into the clips area and, while watching them, I add comments in the Comment A and Comment B fields so that I have an idea of what each clip contains. However, as I was doing this, I found that there was more information I really needed, so here is how I was categorizing things:
Clips were all renamed Clip-{incremental} (i.e. Clip-001, Clip-002, etc.) so that I could have an easy reference method when talking to clients.
*Comment A:* Generally what is occurring in the clip
If there are a number of subjects in a clip (some of the interviews are contained in a single clip that is very long), I will put markers in and add comments to the marker points.
*Comment B (renamed to "Talent"):* Who the primary speaker is (makes it easier to find clips of whomever I am looking for)
*Master Comment A:* Technical (audio/video) Quality from 1 to 5
*Master Comment B:* Content Quality from 1 to 5
This allows me to sort on any of the columns and find what I am looking for easily.
Because I am new to this, I have no idea if there is a better or standard system that others use. How do you go about categorizing clips? Is it different if you shot the video?
Also, is there a way I can customize the clip bin to be the same each time (based on my own settings)?

If you are going to be at this project for a while and will need to reference a bunch of clips, you might want to look into CatDV by Squarebox. It is a well-regarded digital asset manager/database that integrates very well with Final Cut Pro. Not sure how well it will work with FCE, but it is worth investigating.
http://www.squarebox.co.uk/products.html
Good luck,
x

Similar Messages

  • Best practices for animation clips in FCPX

    I used to be able to treat Motion projects just like any other clip on the timeline. Is this no longer possible?
    For example, I want to set in and out points in FCPX, but a Motion project imported as a Generator will only allow me to start from the first frame. I cannot set an in point. Did I import it improperly? I'm not really sure of the best way to bring in my Motion animations. They are not really Titles, Generators or Transitions. Any advice would be appreciated.

    Ah, then you have some options, if it's something you'll be using a lot, even if only in one production.
    Use Save As Template and save it as a Generator template; it'll work pretty much the same way. When you right-click it in the Generator, you can Open In Motion to make changes. Once you make the changes and save, come back to FCP X and just drag-and-drop the new "updated" Generator onto the old one in the Timeline (Replace). Not the same, but still pretty easy.
    I'm doing a mixed media (animations of various types, photos, real life video, etc) project now, and am finding saving my stuff as Generator Templates, in a Theme I've titled the same as my production, very easy to use.  There's the extra step of doing a replace edit once a change is made, but it's not a big deal, very easy, very fast.
    All of these custom templates are stored in your Movies folder, inside a "Motion Templates" sub-folder, if you ever want to archive or access them directly.

  • Basic Strategy / Best Practices for System Monitoring with Solution Manager

    I am very new to SAP and the Basis group at my company. I will be working on a project to identify the best practices of System and Service level monitoring using Solution Manager. I have read a good amount about SAP Solution Manager and the concept of monitoring but need to begin mapping out a monitoring strategy.
    We currently utilize the RZ20 transaction and basic CCMS monitors, such as watching for update errors, availability, short dumps, etc. What else should be monitored in order to proactively find possible issues? Are there any best practices you have found when implementing monitoring for new solutions added to the SAP landscape? What are common things we would want to monitor across, say, ERP, CRM, and SRM?
    Thanks in advance for any comments or suggestions!

    Hi Mike,
    Did you try the following link?
    If not, it may be useful to some extent:
    http://service.sap.com/bestpractices
    ---> Cross-Industry Packages ---> Best Practices for Solution Management
    You have quite a few documents there - those on BPM may also cover Solution Monitoring aspects.
    Best regards,
    Srini

  • JSF - Best Practice For Using Managed Bean

    I want to discuss what the best practice is for managed bean usage, especially using session scope or request scope to build database-driven pages.
    ---- Session Bean ----
    - In the book Core JavaServer Faces, the author mentioned that in most cases a session bean should be used, unless the processing is passed on to another handler. Since JSF can store state on the client side, I think storing everything in the session is not a big memory concern (can some expert confirm this is true?). Session objects are easy to manage and state can be shared across pages. It can make programming easy.
    In the case of a page bound to a result set, the bean usually holds a java.util.List for the result, which is initialized in the constructor by querying the database first. However, this approach has a problem: when the user navigates to another page and comes back, the data is not refreshed. You can of course solve this by issuing the query every time in your getXXX method, but you need to be careful not to bind the XXX property too many times. When querying in getXXX, setXXX is also tricky, as you don't have a member to set. You usually don't want to persist result set changes in setXXX, as the changes may not be final; instead, you want to handle them in an action listener (like a save(ActionEvent)).
    I would be glad to hear your thoughts on this.
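    For illustration, here is a minimal sketch of the lazy, query-per-getter session bean described above (all class and method names are hypothetical):
    import java.util.ArrayList;
    import java.util.List;
    // Hypothetical session-scoped managed bean (registered in faces-config.xml
    // with <managed-bean-scope>session</managed-bean-scope>).
    public class CustomerListBean {
        public static class Customer { }      // stand-in domain class
        private List<Customer> customers;     // cached query result
        private boolean stale = true;         // true forces a re-query
        // The getter re-queries only when the cache is stale, so binding the
        // property several times on one page hits the database at most once.
        public List<Customer> getCustomers() {
            if (stale) {
                customers = queryCustomers();
                stale = false;
            }
            return customers;
        }
        // Persistent changes are handled in an action listener rather than in a
        // setter; marking the cache stale refreshes the list on the next visit.
        public void save(javax.faces.event.ActionEvent event) {
            // ... persist the edits via a DAO here (omitted) ...
            stale = true;
        }
        private List<Customer> queryCustomers() {
            return new ArrayList<Customer>(); // stand-in for a real DAO/JDBC lookup
        }
    }
    This keeps the convenience of session scope while avoiding the stale-data problem described above.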
    --- Request Bean ---
    A request bean is initialized every time a request is made. It sometimes drove me nuts because JSF seems not to be very consistent in updating model values. Suppose you have a page showing a parent with a list of child records from the database, and you also allow the user to change the children directly. I bind the parent to a bean called #{Parent} and the children to an ADF table (value="#{Parent.children}" var="rowValue"). If I set Parent to request scope, the setChildren method is never called when I submit the form. Not sure if this is specific to ADF or a JSF problem, but if you change the bean to session scope, everything works fine.
    I believe JSF doesn't update the bindings for all component attributes; it only updates the input components' value bindings. Someone please verify this is true.
    In many cases, I found a request bean very hard to work with if there are lots of updates (I have had lots of trouble updating the binding values for rendered attributes).
    However, a request bean works fine for read-only pages and simple bound forms. It definitely frees up memory quicker than a session bean.
    ----- any comments or opinions are welcome!!! ------

    I think it should be either Option 2 or Option 3.
    Option 2 would be necessary if the bean data depends on some request parameters.
    (Example: Getting customer bean for a particular customer id)
    Otherwise Option 3 seems the reasonable approach.
    But, I am also pondering on this issue. The above are just my initial thoughts.

  • Best practice for integrating oracle atg with external web service

    Hi All
    What is the best practice for integrating Oracle ATG with an external web service? Is it using the integration repository, or calling the web service directly from a Java class using a WS client?
    With Thanks & Regards
    Abhishek

    Using the Integration Repository might cause performance overhead depending on the operation you are doing; I have never used the Integration Repository for third-party integration, so I am not able to comment on it.
    Calling the web service directly from a Java client is an easy approach, and you can use the ATG component framework to support it by making the endpoint, security credentials, etc. configurable properties.
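    As a rough illustration of that approach (the class and property names below are hypothetical, not part of the ATG API), a component could expose the endpoint and credentials as bean properties so that each environment's .properties file supplies its own values:
    // Hypothetical web service client component. In ATG this would typically be
    // wired up as a Nucleus component, with the fields below injected from a
    // .properties file so the endpoint and credentials can differ per environment.
    public class InventoryServiceClient {
        private String endpointUrl;   // e.g. https://partner.example.com/ws/inventory
        private String username;      // service account for the external system
        private String password;
        // Standard bean setters let the component framework inject configuration.
        public void setEndpointUrl(String endpointUrl) { this.endpointUrl = endpointUrl; }
        public void setUsername(String username) { this.username = username; }
        public void setPassword(String password) { this.password = password; }
        // The actual call: a JAX-WS generated stub (or a plain HTTP client)
        // would be pointed at the configured endpointUrl here.
        public String checkStock(String skuId) {
            // ... build and send the request using endpointUrl/username/password ...
            return "stubbed response for " + skuId;
        }
    }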
    Cheers
    R

  • Best practice for distributing/releasing J2EE applications.

    Hi All,
    We are developing a J2EE application and would like some information on the best
    practices to be followed for distributing/releasing J2EE applications, in general.
    In particular, the dilemma we have is centered around the generation of stub, skeleton
    and additional classes for the application.
    Most App. Servers can generate the required classes while deploying the EJBs in the
    application, i.e. at install time, while some (BEA WebLogic and IBM WebSphere are
    two that we are aware of) allow these classes to be generated before installation,
    so that the .ear file containing the additional classes is the one that is uploaded.
    For instance, say we have assembled the application "myapp.ear". There are two ways
    in which the classes can be generated. The first is using 'ejbc' (assume we are
    using BEA WebLogic), which generates the stub, skeleton and additional classes for
    the application and returns a file, say "Deployable_myapp.ear", containing all
    the necessary classes and files. This file is the one that is then installed. The
    other option is to install the file "myapp.ear" and let the WebLogic App. Server
    itself generate the required classes at installation time.
    If the first way, of 'pre-generating' the stubs, is followed, does it require us to
    separately generate the stubs for each version of the App. Server that we support?
    That is, if we generate a deployable file having the required classes using the
    'ejbc' of WebLogic Ver5.1, can the same file be installed on WebLogic Ver6.1, or
    do we have to generate a separate file?
    If the second method, of 'install-time generation' of stubs, is used, what is the
    nature/magnitude of the risk that we are taking in terms of the failure of the installation?
    Any links to useful resources as well as comments/suggestions will be appreciated.
    TIA
    Regards,
    Aasif

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrades, and these include the "Copy Database" wizard, BACKUP/RESTORE, detach/attach, script generation, Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database and then truncate the tables to purge data.
    Backing up and restoring your database is by far the most reliable method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs; that is less of an issue if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it may prove unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. It will generate downtime for your detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine; it is not as practical as the other options, but it is the best way to distribute databases when their version is being downgraded.
    With all this said, there is no single "best practice" for this. There are multiple features, each offering their own advantages and drawbacks, which allows them to align with different business requirements.

  • Best Practice for Customization of ESS 50.4

    Hi,
    We have implemented ESS 50.4 on EP 6.0 SP 14 and R/3 4.6C. I want to know the best practice for minor modifications of ESS transactions. For example, I need to hide the Change button on the Personal Information screen.
    Please let me know.
    PS: Guaranteed award points
    Aneez

    @Aneez
    "Best Practice" is just going to be good ole' ITS custom development. All the "old" ESS services are ITS based. What cannot be done through configuration is done by developing custom versions of the ESS services. For what you describe (i.e. the typical "hide a button" scenario) it is simply a matter of:
    (1) Create a custom version (i.e. a "Z" version) of the standard service. The service file will still call the same backend transaction via the ITS parameter ~transaction.
    (2) Since you are NOT making changes that require anything to change in the backend transaction (such as adding new fields or changing business logic), you are lucky to ONLY have to change the web templates. Locate the web template in your new custom service that corresponds to the screen in the transaction where the CHANGE button appears. The ITS naming convention for web templates is <sapprogramname>_<screennumber>.
    (3) After locating the web template that corresponds to your screen, simply find the HTMLb code for the CHANGE button and comment it out. Just that easy!
    (4) Publish your new customized service and test it out directly through ITS, i.e. via the direct URL to it: http://<yourdomain>/scripts/wgate/<yourservice>!
    (5) Once you see that it works, you can make an iView for it in your portal (or simply change the iView you have to point to your custom ITS service).
    LOTS and LOTS more info on ITS development all around this site and in the ITS-specific forum.
    Hope this helps!
    Award points or save them...I really don't care. I think the points system here is one of the dumbest ideas since square wheels. =)

  • Best practice for a site with a lot of images?

    I am working on a site that will have over a hundred images, and I
    wanted to see what the best practice is for designing a site like
    this. Should I go with XML (please give examples or an
    explanation), a text file, or just loadMovie("image1project1.jpg",
    "bottomsec") with named external images that will stay the same?
    Any help is appreciated on staying up to date with this kind of
    site.
    Thanks,
    Randy

    OK, I am new, please be nice. I think I want to set it up like
    this:
    <project1>
    <section>Architecture</section>
    <name>New Building for CREATiVENESS</name>
    <comment>The major challenge to designing this new
    tower was the site constraints: a small 3-acre urban corner site.
    It is located adjacent to a community center to facilitate extended
    use in the evenings and weekends for the entire community.
    </comment>
    <thumb>thumbs/project1.jpg</thumb>
    <img1>images/project1img1.jpg</img1>
    <img2>images/project1img2.jpg</img2>
    <img3>images/project1img3.jpg</img3>
    <img4>images/project1img4.jpg</img4>
    </project1>
    <project2>
    <section>Interiors</section>
    <name>New Building for Me</name>
    <comment>The major challenge to designing this new
    tower was the site constraints: a small 3-acre urban corner site.
    It is located adjacent to a community center to facilitate extended
    use in the evenings and weekends for the entire community.
    </comment>
    <thumb>thumbs/project2.jpg</thumb>
    <img1>images/project2img1.jpg</img1>
    <img2>images/project2img2.jpg</img2>
    <img3>images/project2img3.jpg</img3>
    <img4>images/project2img4.jpg</img4>
    </project2>
    <project3>
    <section>Architecture</section>
    <name>New Building for You</name>
    <comment>The major challenge to designing this new
    tower was the site constraints: a small 3-acre urban corner site.
    It is located adjacent to a community center to facilitate extended
    use in the evenings and weekends for the entire community.
    </comment>
    <thumb>thumbs/project3.jpg</thumb>
    <img1>images/project3img1.jpg</img1>
    <img2>images/project3img2.jpg</img2>
    <img3>images/project3img3.jpg</img3>
    <img4>images/project3img4.jpg</img4>
    </project3>
    <project4>
    <section>Interiors</section>
    <name>New Building for that guy</name>
    <comment>The major challenge to designing this new
    tower was the site constraints: a small 3-acre urban corner site.
    It is located adjacent to a community center to facilitate extended
    use in the evenings and weekends for the entire community.
    </comment>
    <thumb>thumbs/project4.jpg</thumb>
    <img1>images/project4img1.jpg</img1>
    <img2>images/project4img2.jpg</img2>
    <img3>images/project4img3.jpg</img3>
    <img4>images/project4img4.jpg</img4>
    </project4>
    but I am not sure how to run through it to find whether a project
    belongs to a given section (to build the menu), and then how to
    call the images and text once I am in a project area. I don't
    know if
    this.firstChild.nextSibling.childNodes[0].childNodes[2]
    is the best way to address things in the file. Any help is
    appreciated. Please let me know the best practices and the
    easiest way to work with a large XML file.
    Thanks,
    Randy

  • Best practice for use of spatial operators

    Hi All,
    I'm trying to build a .NET toolkit to interact with Oracle's spatial operators. The most common use of this toolkit will be to find results that fall within a given geometry - for example, selecting parish boundaries within a county.
    Our boundary data is highly detailed, commonly containing upwards of 50,000 vertices for a county-sized polygon.
    I've currently been experimenting with queries such as:
    select *
    from
    uk_ward a,
    uk_county b
    where
    UPPER(b.name) = 'DORSET COUNTY' and
    sdo_relate(a.geoloc, b.geoloc, 'mask=coveredby+inside') = 'TRUE';
    However the speed is unacceptable, especially as most of the implementations of the toolkit will be web based. The query above takes around a minute to return.
    Any comments or thoughts on the best practice for use of Oracle spatial in this way will be warmly welcomed. I'm looking for a solution which is as quick and efficient as possible.

    Thanks again for the reply... the query currently takes just under 90 seconds to return. Here are the results of the execution plan, run in SQL*Plus:
    Elapsed: 00:01:24.81
    Execution Plan
    Plan hash value: 598052089
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 156 | 46956 | 76 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 156 | 46956 | 76 (0)| 00:00:01 |
    |* 2 | TABLE ACCESS FULL | UK_COUNTY | 2 | 262 | 5 (0)| 00:00:01 |
    | 3 | TABLE ACCESS BY INDEX ROWID| UK_WARD | 75 | 12750 | 76 (0)| 00:00:01 |
    |* 4 | DOMAIN INDEX | UK_WARD_SX | | | | |
    Predicate Information (identified by operation id):
    2 - filter(UPPER("B"."NAME")='DORSET COUNTY')
    4 - access("MDSYS"."SDO_INT2_RELATE"("A"."GEOLOC","B"."GEOLOC",'mask=coveredby+inside')='TRUE')
    Statistics
    20431 recursive calls
    60 db block gets
    22432 consistent gets
    1156 physical reads
    0 redo size
    2998369 bytes sent via SQL*Net to client
    1158 bytes received via SQL*Net from client
    17 SQL*Net roundtrips to/from client
    452 sorts (memory)
    0 sorts (disk)
    125 rows processed
    The wards table has 7545 rows, the county table has 207.
    We are currently on release 10.2.0.3.
    All I want to do with this is generate results that fall within a particular geometry. Most of my testing has been successful; I just seem to run into issues when querying against a county-sized polygon - I guess due to the number of vertices.
    Also looking through the forums now for tuning topics...

  • Best practice for putting together scenes in a Flash project?

    Hi, I'm currently working on a flash project with the following characteristics:
    using a PC
    2048x1080 pixels
    30 fps
    One audio file that plays (once) continuously across the whole project
    there are actions that relate to the audio, so the timing is important
    at least 10 scenes
    about 7 minutes long total
    current intent is for it to be played in a modern theater as a surprise
    What is the best practice for working on this project and then compiling it together?
    Do it all in one project file?
    Split the work into different project (xfl) files for each scene and then put it together when all the scenes are finalized?
    Use one project file but create different "scenes" for each respective scene?  I think this is the "classic" way (?).
    Make the scenes "movie clips" and then insert them into the timeline with the audio as its own layer?
    Other?
    I'm currently working on it by having it all in one project file.  But I've noticed that there's some lag (or it gets choppy) at certain parts during playback and the SWF history shows 3.1 MB with a yellow triangle with exclamation point symbol.  Thanks in advance. 

    You would only do that if it makes your job easier. Generally speaking, it would not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols, because you can see their animation when scrubbing the main timeline. With movie clips you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movie clips.

  • Best Practices for Removing Shots from BDMV folder

    CS6 Production Premium Suite
    Win7x64
    Canon XA10
    I would appreciate feedback on the best practices for the following situation:
    Using Windows Explorer, I copy the BDMV folder from the XA10 to my Talk2 project folder.
    The BDMV folder has three one-hour shots (talks), called Talk1, Talk2, and Talk3.
    Each shot consists of several MTS files in the STREAM folder, since MTS files have a maximum file size and a new MTS file is created when a given MTS file reaches that maximum.
    Since I only want to have Talk2 stuff in my Talk2 project folder, I need to remove the Talk1 and Talk3 stuff from the BDMV folder.
    I delete the Talk1 and Talk3 MTS files from the STREAM folder.
    I delete the Talk1 and Talk3 CPI files from the CLIPINF folder.
    I leave the PLAYLIST folder as is.
    Using the Media Browser, I import Talk2 (which consists of two MTS files).
    I edit the clip.
    This procedure seems to work, but I do not know if there are any "gotcha" issues.
    Thanks in advance.

    Oh, don't do it that way.  I know a lot of people do, heck, my boss does, but it's just asking for trouble.
    Treat your card as if it was your original tape master (because it is).  It is the most important thing you have.  Don't delete or move any part of it. 
    If you want to break up the talks, do it as you shoot them.  Use separate cards for each talk and archive each one separately.  There is too much valuable information in the structure of the card format.  You may not need it now but your editing program may need it later.
    Hard drive space is cheap, but digital recordings are priceless.

  • Best Practices for Export

    I have recently begun working with a few AIC-encoded home movie files in FCPX. My goal is to compress them using h.264 for viewing on computer screens. I had a few questions about the best practices for exporting these files, as I haven't worked with editing software in quite some time.
    1) Is it always recommended that I encode my video in the same resolution as its source? For example, some of my video was shot at 1440x1080, which I can only assume is anamorphic. I originally tried to export at 1920x1080 but then changed my mind as I assumed the 1440x1080 would just stretch naturally. Does this sound right?
    2) FCPX is telling me that a few of my files are in 1080i. I'd like to encode them in 1080p, as it tends to look better on computer screens. In FCPX, is it as simple as dragging my interlaced footage into a progressive timeline and then exporting? I've heard about checking the "de-interlace" box under clip settings and then doubling the frame rate, but that seemed to make my video look worse.
    3) I've heard that it might be better practice to export my projects as master files and then encode h.264 in Compressor. Is there any truth to this? Might it be better for the interlaced to progressive conversion as well?
    Any assistance is greatly appreciated.

    1) Yes. 1440 will display as 1920.
    2) Put everything in a 1080p project.
    3) Compressor will give you more options for control. The H.264 from FCP uses a very high data rate and makes large files.

  • Best Practices for Accessing the Configuration data Modelled as XML File in

    Hi,
    I have referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is described in:
    Re: OSB: What is best practice for reading configuration information
    Another could be uploading the XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML).
    I need expert answers to the following.
    1] I have an .xsd file representing the configuration data. The structure of the XSD is:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property values will change according to the environment...
    For Dev:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Stage</Config>
    </FrameworkConfig>
    3] Let say I create the following Folder structure to store the Configuration file specific for dev/stage/prod instance
    OSB Project Folder
    |
    |---Dev
    |
    |--Dev_Config_file.xml
    |
    |---Stage
    |
    |--Stage_Config_file.xml
    |
    |---Prod
    |
    |-Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model a value that specifies the current server type (Dev/Stage/Prod) on which the OSB message flow is running - some construct that acts as a global configuration and is accessible inside the OSB message flow. If the value of that global variable is Dev, I will load the XML config file under the Dev directory at runtime, containing the key-value pairs for the Dev environment.
    6] This thread: Re: OSB: What is best practice for reading configuration information
    suggests designing a web application that serves the XML file over HTTP, then reading the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependencies? I read about the configuration file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    Hope I am clear... I really appreciate your comments and suggestions.
    Sushil

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    E.g. http://osb_host/service/prod/service1 ==> Prod and http://osb_host/service/stage/service1 ==> Stage; then $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you will be able to extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822
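    Inside the message flow itself you would do this with XQuery string functions (fn:tokenize, fn:substring-before, etc.); purely to illustrate the parsing logic, here is the same extraction in plain Java against the hypothetical /service/<env>/<serviceName> layout above:
    // Minimal sketch: pull the environment segment out of a URI of the
    // hypothetical form /service/<env>/<serviceName>.
    public class EnvFromUri {
        public static String environmentOf(String uri) {
            String[] parts = uri.split("/");   // "", "service", "<env>", "<serviceName>"
            return parts.length > 2 ? parts[2] : "unknown";
        }
        public static void main(String[] args) {
            System.out.println(environmentOf("/service/prod/service1"));  // prod
            System.out.println(environmentOf("/service/stage/service1")); // stage
        }
    }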

  • Best practices for development / production environments

    Our current scenario:
    We have one production database server containing the APEX development install, plus all production data.
    We have one development server that is cloned nightly (via RMAN duplicate) from production. It therefore also contains a full APEX development environment, and all our production data, albeit 1 day old.
    Our desired scenario:
    We want to convert the production database to a runtime only environment.
    We want to be able to develop in the test environment, but since this is an RMAN-duplicated database, every night the runtime-only APEX will overlay it, and the production versions of the apps will overlay the development versions. However, we still want to have up-to-date data against which to develop.
    Questions: What is best practice for this sort of thing? We've considered a couple options:
    1.) Find a way to clone the database (RMAN or something else) that leaves the existing APEX environment intact. If this is doable, we can modify our nightly refresh procedure to refresh the data, but not APEX.
    2.) Move APEX (in both prod and dev environments) to a separate database containing only APEX, and use DBLINKS to point to the data in both cases. The nightly refresh would only refresh the data, and the APEX database would be unaffected. This would require rewriting all apps to use DBLINKS, though, as well as a change to the code when moving to production (i.e. modifying the DBLINK to the production value).
    3.) Require the developers to export their apps when done for the day, and reimport the following morning. This would leave the RMAN duplication process unchanged, but would add a manual step which the developers loathe.
    We basically have two mutually exclusive requirements - refresh the database nightly for the sake of fresh data, but don't refresh the database ever for the sake of the APEX environment.
    Again, any suggestions on best practices would be helpful.
    Thanks,
    Bill Johnson

    Bill,
    To clarify, you do happily have the ability to export/import at the application level. The issue is that if you have an application that consists of more than a couple of pages, you will find yourself in a situation where changes to page 3 are tested and ready, but changes to pages 2, 5 and 6 are still in various stages of development. You will need to get the change for page 5 in to resolve a critical production issue. How do you do this without sending pages 2, 5 and 6 in their current state, if you have to move the application all at once? The point is that you absolutely are going to need version control at the page level, not at the application level.
    Moreover, the only supported way of exporting items is via the GUI. While practically everyone doing serious APEX development has moved on to either PL/SQL or utility hacks, Oracle still will not release a supported method for doing this. I have no idea why this would be... maybe one of the developers would care to comment on the matter. Obviously, if you want to automate, you will have to accept this caveat.
    As to which backend source control tool you use, the short answer is that it really doesn't matter. As far as the VC system is concerned, your APEX exports are simply files. Some versioning systems allow promotion of code through various SDLC stages. I am not sure about Git in particular, but if it doesn't support this directly, you could always mimic the behavior with multiple repositories. That is, create a development repository into which you automatically commit exports every night. Whenever particular changes are promoted to production, you can at that time export from the development repository into the production one. You could, of course, create as many of these "stops" as necessary to mirror your shop's SDLC stages, e.g. dev, qa, integration, staging, production, etc.
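    As a sketch of the nightly export automation (the APEXExport command-line utility does ship with the APEX distribution, but treat the connection details and flags below as placeholders to verify against your version):
    import java.io.IOException;
    // Hypothetical nightly job: shells out to the APEXExport utility from the
    // APEX distribution so the exported application SQL can be committed to the
    // development repository. All connection values are placeholders.
    public class NightlyApexExport {
        public static void main(String[] args) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(
                    "java", "oracle.apex.APEXExport",
                    "-db", "devhost:1521:DEVSID",  // placeholder connect string
                    "-user", "APEX_DEV",           // placeholder schema
                    "-password", "secret",         // placeholder password
                    "-applicationid", "100")       // application to export
                    .inheritIO()
                    .start();
            System.exit(p.waitFor());
        }
    }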
    -Joe

  • What’s the best practice for this scenario?

    Hi,
    My users want the ability to change the WHERE and/or ORDER BY clause at runtime. They may define user preferences on each screen (which is bound to a view object). They want to see the same records based on the WHERE/ORDER BY defined on their last visit. That is why I keep the user's preferences and load the screen based on them, using:
    View.setWhereClause(...);
    View.setOrderByClause(...);
    View.executeQuery();
    This works well when only one user is working with the application, but performance drops badly when more than one user is working with it.
    What are the points to increase the performance and what is the best practice for this scenario?
    Thanks for your help in advance.
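    For illustration, here is a minimal sketch of that sequence using a WHERE-clause bind parameter for the user-supplied value instead of a string literal (the view object and column names are hypothetical); binding the value at least lets the database reuse the parsed statement across users:
    import oracle.jbo.ApplicationModule;
    import oracle.jbo.ViewObject;
    // Minimal sketch: apply stored user preferences before executing the query.
    // "OrdersView" and the status column are hypothetical names.
    public class PreferenceQuery {
        public static void applyPreferences(ApplicationModule am,
                                            String orderByPref,
                                            String statusValue) {
            ViewObject vo = am.findViewObject("OrdersView");
            vo.setWhereClause("status = :1");        // positional bind parameter
            vo.setWhereClauseParam(0, statusValue);  // value from the user's preferences
            vo.setOrderByClause(orderByPref);        // e.g. "order_date desc"
            vo.executeQuery();
        }
    }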

    Sung,
    I am only talking about 2 users in my testing. I am sure I missed something, but I could not recognize what.
    This page is my custom query page, including a tag at the top to instantiate the app module in stateful mode (<jbo:ApplicationModule..>), a tag to instantiate the data source (<jbo:Datasource...>), a release tag at the bottom (<jbo:ReleasePageResources..>), and some Java code in the middle (body). The Java code constructs the query statement and then fires the query to set up the view object based on that statement, using the above methods.
    So, I am facing very slow performance when two clients load this page at the same time. It looks like the entire application locks for the others when one client loads this page and fires the query. I realized the bottleneck is where executeQuery() is executing.
    What do you think?
    Thanks in advance for your comments.
