Best Practice for full motion

I'm new to this whole publishing-video-to-the-web thing, and
I'm quickly learning that a full motion recording is a bit large to
deliver via the web. So I'm asking for advice on how to
show/demonstrate a full motion action in the most efficient manner
possible. When I import the SWF created by Captivate into Flash
8, it becomes blocky (black pixel-looking blocks) and does not
export in a usable fashion. I think it's from changing the frame
rate from 30 to 10.
Is it possible to record at a lower frame rate for full
motion? If not, is there a better compression scheme I can
use to deliver the video?
Any help is greatly appreciated. Thank you!

http://www.macromedia.com/devnet/flash/articles/flash_to_video.html
~~~~~~~~~~~~~~~~
--> Adobe Certified Expert
--> www.mudbubble.com
--> www.keyframer.com
~~~~~~~~~~~~~~~~
shadeland wrote:
> As I start my venture into a web cartoon we are starting, I am having
> issues with keeping sounds and video together for a video. I have tried
> almost everything, but I would just like to know what the best practice
> is for getting a cartoon to full motion video. I am very good with
> ActionScript, writing games and apps, but I am struggling to get my
> content into full motion video with audio. Please help!
>

Similar Messages

  • What is the best practice for full browser video to achieve the highest quality?

    I'd like to get your thoughts on the best way to deliver full-browser (scale to the size of the browser window) video. I'm skilled in the creation of the content but learning to make the most out of Flash CS5 and would love to hear what you would suggest.
    Most of the tutorials I can find on full-browser/scalable video are for earlier versions of Flash; what is the best practice today, and what are the best resolution and format for the video?
    If there is an Adobe guide to this I'm happy to eat humble pie if someone can redirect me to it; I'm using CS5 Production Premium.
    I like the full-screen video effect they have on the "Sounds of Pertussis" website; this is exactly what I'm trying to create, but I'm not sure of the best way to approach it. Any hints/tips you can offer would be great.
    Thanks in advance!

    Use the little squares over your video to mask the quality. Sounds of Pertussis is not full-screen video, but rather full-stage video, which is easier to work with since all the controls and other assets stay on screen. You set up your HTML file to allow full screen, then bring in your video (NetStream or FLVPlayback component) and scale it to the full size of your stage (since in this case it's basically the background). I made a quickie demo here. (The video is from a cheapo SD consumer camera, so pretty poor quality to start.)
    In AS3 it would look something like:
    import flash.display.StageAlign;
    import flash.display.StageScaleMode;
    import flash.display.StageDisplayState;
    import flash.events.Event;
    import flash.events.MouseEvent;
    import flash.media.Video;
    import flash.net.NetConnection;
    import flash.net.NetStream;

    stage.align = StageAlign.TOP_LEFT;
    stage.scaleMode = StageScaleMode.NO_SCALE;

    // set up the connection and stream for progressive download
    var nc:NetConnection = new NetConnection();
    nc.connect(null);
    var ns:NetStream = new NetStream(nc);
    var vid:Video = new Video(656, 480); // native size of the video
    this.addChildAt(vid, 0);
    vid.attachNetStream(ns);

    // client object for onMetaData/onCuePoint callbacks
    // (without it the NetStream throws an async reference error)
    var netClient:Object = new Object();
    netClient.onMetaData = function(info:Object):void {};
    ns.client = netClient;

    // path to your video file
    ns.play("content/GS.f4v");

    // listen for stage resizes so we can rescale our assets
    stage.addEventListener(Event.RESIZE, resizeHandler);
    stage.dispatchEvent(new Event(Event.RESIZE));

    function resizeHandler(e:Event = null):void
    {
        // determine current stage size
        var sw:int = stage.stageWidth;
        var sh:int = stage.stageHeight;

        // scale video size depending on stage size
        vid.width = sw;
        vid.height = sh;

        // don't scale the video smaller than its native size
        if (vid.height < 480)
            vid.height = 480;
        if (vid.width < 656)
            vid.width = 656;

        // match both scale properties to the larger one so the
        // video stays proportional and covers the whole stage
        if (vid.scaleX > vid.scaleY)
            vid.scaleY = vid.scaleX;
        else
            vid.scaleX = vid.scaleY;
    }

    // full screen toggle button
    fullScreenStage_mc.buttonMode = true;
    fullScreenStage_mc.mouseChildren = false;
    fullScreenStage_mc.addEventListener(MouseEvent.CLICK, goFullStage, false, 0, true);

    function goFullStage(event:MouseEvent):void
    {
        //vid.fullScreenTakeOver = false; // keeps the FLVPlayback component from taking over full screen if you use it instead
        if (stage.displayState == StageDisplayState.NORMAL)
            stage.displayState = StageDisplayState.FULL_SCREEN;
        else
            stage.displayState = StageDisplayState.NORMAL;
    }
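    For the full screen toggle to work in the browser, the hosting HTML must also opt in; a minimal sketch of the relevant object/embed parameters (file names here are placeholders):

    <object width="100%" height="100%">
      <param name="movie" value="player.swf" />
      <param name="allowFullScreen" value="true" />
      <embed src="player.swf" width="100%" height="100%" allowFullScreen="true" />
    </object>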

  • Best practices for Full fledged CQ5 Development environment

    Hi All,
    I am working on CQ5.5. We are using the following environment for development:
    CRXDE Lite <-> Eclipse Java Content Repository Perspective <-> VLT check-in/check-out <-> J2EE Perspective <-> SVN
    CRXDE Lite, Eclipse Java Content Repository Perspective ----> component/template creation (mainly JSP coding)
    J2EE Perspective ----> Java servlet coding (mainly Java coding)
    VLT check-in/check-out ----> moving the data between the J2EE Perspective and CRXDE Lite
    SVN -----> versioning from the J2EE Perspective
    Is there any other possible way to set up a full-fledged development environment using Eclipse?
    If you are using any other plugins/best practices, kindly let me know.
    It will be really helpful.
    Regards,
    Raja R

    Hi,
    we have a similar setup to the one described, but use VLT mostly without CRXDE. Most of our developers use other tools to develop, such as IDEA/IntelliJ for OSGi development and script development. We have also chosen to use the CQ Blueprints Maven artifacts (http://www.cqblueprints.com/xwiki/bin/view/Blue+Prints/The+CQ+Project+Maven+Archetype), separate the different parts of our sites into modules, and use Maven to build and deliver packages.
    If you are not forced by policy to use SVN, I would advise you to use Git instead.
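    For reference, a typical VLT round trip from the command line looks roughly like this (a sketch; the instance URL and credentials are placeholders):

    # one-time checkout of the repository to the local filesystem
    vlt --credentials admin:admin co http://localhost:4502/crx
    vlt st    # show local changes against the repository
    vlt up    # pull changes made in CRXDE Lite down to the filesystem
    vlt ci    # push local edits back (then commit the same files to SVN/Git)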

  • Best Practices for new iMac

    I posted a few days ago re a failing HDD on a mid-2007 iMac. Long story short, I took it into the Apple Store, and a Genius worked on it for 45 minutes before decreeing it in need of a new HDD. After considering the expense of adding memory, a new drive, hardware and installation costs, I got a brand new entry-level iMac (21.5" screen,
    2.7 GHz Intel Core i5, 8 GB 1600 MHz DDR3 memory, 1 TB HDD, running Mavericks). I also got a SuperDrive. I do not need to migrate anything from the old iMac.
    I was surprised that a physical disc for the OS was not included. So I am looking for any Best Practices for setting up this iMac, specifically in the area of backup and recovery. Do I need to make a boot DVD? Would that be in addition to making a Time Machine full backup (using external G-drive)? I have searched this community and the Help topics on Apple Support and have not found any "checklist" of recommended actions. I realize the value of everyone's time, so any feedback is very appreciated.

    OS X has not been officially issued on physical media since OS X 10.6 (arguably 10.7 was issued on some USB drives, but this was a non-standard approach for purchasing and installing it).
    To reinstall the OS, your system comes with a recovery partition that can be booted by holding Command-R immediately after you hear the boot chime. This partition boots to the OS X Tools window, where you can select options to restore from a backup or reinstall the OS. If you choose the option to reinstall, the OS installation files will be downloaded from Apple's servers.
    If for some reason your entire hard drive is damaged and even the recovery partition is not accessible, then your system supports the ability to use Internet Recovery, which is the same thing except instead of accessing the recovery boot drive from your hard drive, the system will download it as a disk image (again from Apple's servers) and then boot from that image.
    Both of these options will require you have broadband internet access, as you will ultimately need to download several gigabytes of installation data to proceed with the reinstallation.
    There are some options available for creating your own boot and installation DVD or external hard drive, but for most intents and purposes this is not necessary.
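    (If you do want your own install media anyway, recent installers ship a createinstallmedia tool; a rough sketch, assuming a downloaded Mavericks installer and an external volume named "Untitled" that will be erased:)

    sudo /Applications/Install\ OS\ X\ Mavericks.app/Contents/Resources/createinstallmedia \
      --volume /Volumes/Untitled \
      --applicationpath /Applications/Install\ OS\ X\ Mavericks.app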
    The only "checklist" option I would recommend for anyone with a new Mac system, is to get a 1TB external drive (or a drive that is at least as big as your internal boot drive) and set it up as a Time Machine backup. This will ensure you have a fully restorable backup of your entire system, which you can access via the recovery partition for restoring if needed, or for migrating data to a fresh OS installation.

  • Best practice for install oracle 11g r2 on Windows Server 2008 r2

    Dear all,
    May I know what the best practice is for installing Oracle 11g R2 on Windows Server 2008 R2? Should I create a special Windows account for the Oracle database installation? What permissions should I grant on the folders where Oracle is installed and where the database-related files are located (datafiles, controlfiles, etc.)?
    Just grant Full Control to Administrators and System and remove permissions for all other accounts?
    Also, how should I configure the Windows firewall to allow clients to connect to the database?
    Thanks for your help.

    Hi Christian,
    Check this on MOS
    *RAC Assurance Support Team: RAC Starter Kit and Best Practices (Windows) [ID 811271.1]*
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN&id=811271.1
    DOC Modified: 14-DEC-2010
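    On the firewall part of your question, a minimal sketch for opening the default listener port (assuming the standard port 1521; adjust if your listener differs):

    netsh advfirewall firewall add rule name="Oracle Listener" dir=in action=allow protocol=TCP localport=1521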
    Regards,
    Levi Pereira

  • Best practice for use of spatial operators

    Hi All,
    I'm trying to build a .NET toolkit to interact with Oracle's spatial operators. The most common use of this toolkit will be to find results which are within a given geometry - for example, selecting parish boundaries within a county.
    Our boundary data is high detail, commonly containing upwards of 50,000 vertices for a county-sized polygon.
    I've currently been experimenting with queries such as:
    select *
    from
    uk_ward a,
    uk_county b
    where
    UPPER(b.name) = 'DORSET COUNTY' and
    sdo_relate(a.geoloc, b.geoloc, 'mask=coveredby+inside') = 'TRUE';
    However, the speed is unacceptable, especially as most implementations of the toolkit will be web based. The query above takes around a minute to return.
    Any comments or thoughts on the best practice for use of Oracle spatial in this way will be warmly welcomed. I'm looking for a solution which is as quick and efficient as possible.

    Thanks again for the reply... the query currently takes just under 90 seconds to return. Here are the results from the execution plan run in SQL*Plus:
    Elapsed: 00:01:24.81
    Execution Plan
    Plan hash value: 598052089
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 156 | 46956 | 76 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 156 | 46956 | 76 (0)| 00:00:01 |
    |* 2 | TABLE ACCESS FULL | UK_COUNTY | 2 | 262 | 5 (0)| 00:00:01 |
    | 3 | TABLE ACCESS BY INDEX ROWID| UK_WARD | 75 | 12750 | 76 (0)| 00:00:01 |
    |* 4 | DOMAIN INDEX | UK_WARD_SX | | | | |
    Predicate Information (identified by operation id):
    2 - filter(UPPER("B"."NAME")='DORSET COUNTY')
    4 - access("MDSYS"."SDO_INT2_RELATE"("A"."GEOLOC","B"."GEOLOC",'mask=coveredby+i
    nside')='TRUE')
    Statistics
    20431 recursive calls
    60 db block gets
    22432 consistent gets
    1156 physical reads
    0 redo size
    2998369 bytes sent via SQL*Net to client
    1158 bytes received via SQL*Net from client
    17 SQL*Net roundtrips to/from client
    452 sorts (memory)
    0 sorts (disk)
    125 rows processed
    The wards table has 7545 rows, the county table has 207.
    We are currently on release 10.2.0.3.
    All I want to do with this is generate results which fall in a particular geometry. Most of my testing has been successful; I just seem to run into issues when querying against a county-sized polygon - I guess due to the number of vertices.
    Also looking through the forums now for tuning topics...
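    For what it's worth, one variation worth timing is driving the domain index with the single county geometry via a scalar subquery instead of the join (a sketch only; whether it helps depends on the optimizer and your data):

    select a.*
    from uk_ward a
    where sdo_relate(
            a.geoloc,
            (select b.geoloc
             from uk_county b
             where upper(b.name) = 'DORSET COUNTY'),
            'mask=coveredby+inside') = 'TRUE';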

  • Best practice for application help for a custom screen?

    Hi,
    The system is NetWeaver 7.0 SP15 with E-Recruiting.
    We have some custom SAP GUI transactions and have written Word documents with screen prints and explanations. I would like to make the procedure document accessible from the custom transaction or at least provide custom help text that includes a link to the full documents.
    Can anyone help me out with options and best practices for providing customized application help for custom SAP GUI transactions?
    Thanks,
    Margaret

    Hello Margaret,
    sorry, I thought you might still be in a design or proof-of-concept phase where the decision for the technology is still adjustable.
    If the implementation is already done, things change, of course. The standard in-system documentation surely does not fit your needs, as including screenshots won't work well.
    I would solve the task the following way:
    I'd make a web or PDF document out of the Word document and put it on a web resource - as you run E-Recruiting, you probably have the possibility for that.
    I would then just put a button into the transaction and open a web container to show the document.
    I am not sure if this solution really qualifies as "best practice", but SAP does the same if you call the help for an application in the help menu. This is implemented in function module SAPGUIHC_OPEN_HELP_CENTER. I'd just copy it, throw out what I do not need, and hard-code the URL to call.
    Perhaps someone can offer a better solution, but I think this works at least without exaggerated costs.
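    A minimal sketch of what the button's handler could do, assuming the document is reachable under a plain URL (the URL here is a placeholder; CALL_BROWSER is the standard function module for opening the frontend browser):

    DATA lv_url(255) TYPE c VALUE 'http://yourserver/docs/zcustom_procedure.pdf'. "placeholder URL

    CALL FUNCTION 'CALL_BROWSER'
      EXPORTING
        url                    = lv_url
      EXCEPTIONS
        frontend_not_supported = 1
        frontend_error         = 2
        prog_not_found         = 3
        no_batch               = 4
        unspecified_error      = 5
        OTHERS                 = 6.
    IF sy-subrc <> 0.
      MESSAGE 'Help document could not be opened.' TYPE 'I'.
    ENDIF.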
    Kind Regards
    Roman

  • SQL 2008 R2 Best Practices for Updating Statistics for a 1.5 TB VLDB

    We currently have a ~1.5 TB VLDB (SQL 2008 R2) that services both OLTP and DSS workloads pretty much on a 24x7x365 basis. For many years we have been updating statistics (full scan- 100% sample size) for this VLDB once a week on the weekend, which
    is currently taking up to 30 hours to complete.
    Somewhat recently we have been experiencing intermittent issues while statistics are being updated, which I doubt is just a coincidence. I'd like to understand exactly why the process of updating statistics can cause these issues (timeouts/errors). My theory
    is that the optimizer is forced to choose an inferior execution plan while the needed statistics are in "limbo" (stuck between the "old" and the "new"), but that is again just a theory. I'm somewhat surprised that the "old" statistics couldn't continue to
    get used while the new/current statistics are being generated (like the process for rebuilding indexes online), but I don't know all the facts behind this mechanism yet so that may not even apply here.
    I understand that we have the option of reducing the sample percentage/size for updating statistics, which is currently set at 100% (full scan).  Reducing the sample percentage/size for updating statistics will reduce the total processing time, but
    it's also my understanding that doing so will leave the optimizer with less than optimal statistics for choosing the best execution plans. This seems to be a classic case of not being able to have one’s cake and eat it too.
    So in a nutshell I'm looking to fully understand why the process of updating statistics can cause access issues and I'm also looking for best practices in general for updating statistics of such a VLDB. Thanks in advance.
    Bill Thacker

    I'm with you. Yikes is exactly right with regard to suspending all index optimizations for so long. I'll probably start a separate forum thread about that in the near future, but for now let's stick to the best practices for updating statistics.
    I'm a little disappointed that multiple people haven't already chimed in about this and offered up some viable solutions; like I said previously, I can't be the first person in need of such a thing. This database has 552 tables, with far more statistics objects than that associated with those tables. The metadata has to be there for determining which statistics objects can go (those not utilized much, if at all, so delete them - and produce an actual script to delete the useless ones identified) and what
    the proper sample percentage/size should be for updating the remaining, utilized statistics (again, also producing a script that can be used to execute the appropriate UPDATE STATISTICS commands for each table based on cardinality).
    The above solution would be much more ideal IMO than just issuing a single update statistics command that samples the same percentage/size for every table (e.g. 10%). That's what we're doing today at 100% (full scan).
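    If it helps, here is a rough starting point for the metadata side (a sketch, assuming SQL Server 2008 R2 SP2 or later, since sys.dm_db_stats_properties was added in SP2; the row-count thresholds are placeholders to tune for your workload):

    -- Generate tailored UPDATE STATISTICS commands per statistics object,
    -- sampling less aggressively as tables get bigger (thresholds are placeholders)
    SELECT 'UPDATE STATISTICS '
           + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + '.'
           + QUOTENAME(OBJECT_NAME(s.object_id)) + ' ' + QUOTENAME(s.name)
           + CASE WHEN sp.rows >= 100000000 THEN ' WITH SAMPLE 5 PERCENT'
                  WHEN sp.rows >= 10000000  THEN ' WITH SAMPLE 20 PERCENT'
                  ELSE ' WITH FULLSCAN' END AS update_cmd,
           sp.rows, sp.rows_sampled, sp.modification_counter, sp.last_updated
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE OBJECT_SCHEMA_NAME(s.object_id) <> 'sys'
      AND sp.modification_counter > 0   -- skip stats with no changes since last update
    ORDER BY sp.rows DESC;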
    Come on SQL Server Community. Show me some love :)
    Bill Thacker

  • Best practices for dealing with Exceptions on storage members

    We recently encountered an issue where one of our DistributedCaches was terminating itself and restarting due to a RuntimeException being thrown from our code (see below). As usual, the issue was in our own code, and we have updated it to not throw a RuntimeException under any circumstances.
    I would like to know if there are any best practices for Exception handling, other than catching Exceptions and logging them. Should we always trap Exceptions and ensure that they do not bubble back up to code that is running from the Coherence jar? Is there a way to configure Coherence so that our DistributedCaches do not terminate even when custom Filters and such throw RuntimeExceptions?
    thanks, Aidan
    Exception below:
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=201, BackupPartitions=204}
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException

    Bob - Here is the full stacktrace:
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=205, BackupPartitions=204}
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47):
    java.lang.RuntimeException: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:84)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2599)
         at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheRequest.partialRequest.FilterRequest.read(FilterRequest.CDB:8)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$AggregateFilterRequest.read(DistributedCache.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:169)
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:82)
         ... 25 more
    2010-02-09 13:04:23.122/90182.743 Oracle Coherence GE 3.4.2/411 <Info> (thread=Main Thread, member=47): Restarting Service: StyleCache
    Our code was doing something simple like:
    catch (Exception e) {
        throw new RuntimeException(e);
    }
    Would using the ensureRuntimeException call do anything for us here?
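    On the narrower goal of keeping a misbehaving filter from terminating the cache service, one defensive option is to wrap your filters so evaluation failures are logged and treated as non-matches rather than propagating into Coherence. A sketch only; note it would not have helped with the ClassNotFoundException above, which occurs while the request is being deserialized, before any filter code runs - that one needs the missing class on the storage nodes' classpath.

    import com.tangosol.util.Base;
    import com.tangosol.util.Filter;

    /**
     * Wraps any Filter and swallows RuntimeExceptions thrown during
     * evaluation, treating the entry as a non-match instead of letting
     * the exception reach the cache service. Sketch only; a real version
     * must also be serializable/POF-enabled like the delegate.
     */
    public class SafeFilter implements Filter {
        private final Filter delegate;

        public SafeFilter(Filter delegate) {
            this.delegate = delegate;
        }

        public boolean evaluate(Object o) {
            try {
                return delegate.evaluate(o);
            } catch (RuntimeException e) {
                Base.err("Filter failed on entry, skipping: " + e);
                return false;
            }
        }
    }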
    Edited by: aidanol on Feb 12, 2010 11:41 AM

  • Noticing a lot of database index fragmentation yet no Health Analyzer alerts...? Best practice for database maintenance in 2013?

    Could someone point me to a document on best practices for database maintenance with SharePoint 2013? I have read the 2010 document, but I'm hoping there is an updated one that I'm just missing.
    My problem is that our DBA recently noticed that many of our SharePoint databases have high index fragmentation.  I have the Health Analyzer rules enabled for index fragmentation and they run daily, but I've never received an alert despite the majority
    of our databases having greater than 40% fragmentation and some are even above 95%.  
    Obviously it has our attention now and we want to get this addressed.  My understanding (which I now fear is at best incomplete, more likely just plain wrong) was that a maintenance plan wasn't needed for index fragmentation in 2010/2013 like it was
    in 2007. 
    Thanks,
    Troy

    It depends. Here are the rules for that job (sampled mode):
    Page count > 24 and avg fragmentation in percent > 5,
    or
    Page count > 8 and avg page space used in percent < fill_factor * 0.9
    (The fill factor in SharePoint 2013 varies from 80 to 100 depending on the index; it is important not to adjust index fill factors.)
    I have seen cases where the indexes are not automatically managed by the rule and require a manual defragmentation with a Full Scan, instead of Sampled. Once the Full Scan defrag completed, the timer job started handling the index fragmentation automatically.
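    To see which indexes currently fall outside those thresholds yourself, a minimal sketch against one content database (same numbers as the sampled-mode rule above):

    -- List indexes exceeding the Health Analyzer rule's sampled-mode thresholds
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.page_count > 24
      AND ips.avg_fragmentation_in_percent > 5
    ORDER BY ips.avg_fragmentation_in_percent DESC;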
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Best practice for SD Cube

    Hi Gurus!
    I have to activate these cubes:
    Deliveries: 0SD_C02
    Delivery Service: 0SD_C04
    I can't find the best practices for that.
    Any ideas?
    I will assign full points.
    Thanks in advance!

    Did you want the Best Practice for installation?
    Find out the DataSources feeding 0SD_C02 from the help documentation: http://help.sap.com/saphelp_nw2004s/helpdata/en/3d/5fb13cd0500255e10000000a114084/frameset.htm
    Activate the same in ECC - RSA5.
    Replicate in BW.
    Install 0SD_C02 with Grouping as "Dataflow Before".

  • Best practices for development / production environments

    Our current scenario:
    We have one production database server containing the APEX development install, plus all production data.
    We have one development server that is cloned nightly (via RMAN duplicate) from production. It therefore also contains a full APEX development environment, and all our production data, albeit 1 day old.
    Our desired scenario:
    We want to convert the production database to a runtime only environment.
    We want to be able to develop in the test environment, but since this is an RMAN-duplicated database, every night the runtime-only APEX and the production versions of the apps will overlay our development work. However, we still want to have up-to-date data against which to develop.
    Questions: What is best practice for this sort of thing? We've considered a couple options:
    1.) Find a way to clone the database (RMAN or something else), that will leave the existing APEX environment intact? If such is doable, we can modify our nightly refresh procedure to refresh the data, but not APEX.
    2.) Move apex (in both prod and dev environments) to a separate database containing only APEX, and use DBLINKS to point to the data in both cases. The nightly refresh would only refresh the data and the APEX database would be unaffected. This would require rewriting all apps to use DBLINKS though, as well as requiring a change to the code when moving to production (i.e. modify the DBLINK to the production value)
    3.) Require the developers to export their apps when done for the day, and reimport the following morning. This would leave the RMAN duplication process unchanged, and would add a manual step which the developers loath.
    We basically have two mutually exclusive requirements - refresh the database nightly for the sake of fresh data, but don't refresh the database ever for the sake of the APEX environment.
    Again, any suggestions on best practices would be helpful.
    Thanks,
    Bill Johnson

    Bill,
    To clarify, you do have the ability to export/import, happily, at the application level. The issue is that if you have an application that consists of more than a couple of pages, you will find yourself in situations where changes to page 3 are tested and ready, but changes to pages 2, 5 and 6 are still in various stages of development, and you need to get the change for page 3 in to resolve a critical production issue. How do you do that without also sending pages 2, 5 and 6 in their current state, if you have to move the application all at once? The point is that you absolutely are going to need version control at the page level, not at the application level.
    Moreover, the only supported way of exporting items is via the GUI. While practically everyone doing serious APEX development has gone on to either PL/SQL or utility hacks, Oracle still will not release a supported method for doing this. I have no idea why that is... maybe one of the developers would care to comment on the matter. Obviously, if you want to automate, you will have to accept this caveat.
    As to which version of the backend source control tool you use, the short answer is that it really doesn't matter. As far as the VC system is concerned, your APEX exports are simply files. Some versioning systems allow promotion of code through various SDLC stages. I am not sure about Git in particular but, if it doesn't support this directly, you could always mimic the behavior with multiple repositories. That is, create a development repository into which you automatically commit exports every night. Whenever particular changes are promoted to production, you can at that time export from the development repository into the production one. You could, of course, create as many of these "stops" as necessary to mirror your shop's SDLC stages, e.g. dev, qa, integration, staging, production etc.
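    As a rough sketch of the nightly automation (the APEXExport utility ships in the apex/utilities directory of the APEX distribution; the connect string, credentials and application id below are placeholders):

    # export application 100 to f100.sql (run from apex/utilities, JDBC driver on the classpath)
    java -cp .:${ORACLE_HOME}/jdbc/lib/ojdbc6.jar oracle.apex.APEXExport \
      -db devhost:1521:DEV -user system -password secret -applicationid 100
    # version the export in the development repository
    git add f100.sql
    git commit -m "nightly APEX export of app 100"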
    -Joe
    Edited by: Joe Upshaw on Feb 5, 2013 10:31 AM

  • What is best practice for installing Yosemite

    I am currently on OS X Mavericks version 10.9.5, on a 13" MacBook Pro: 2.6 GHz Intel Core i5, 8 GB 1600 MHz DDR3.
    I am now downloading Yosemite 10.10.1, but since I've been reading so much negative feedback, I am having second thoughts about whether I should continue with the upgrade or not.
    Any suggestions on the best practice for installing Yosemite? Or is it not yet time to upgrade, since the platform is still premature?
    Thanks in advance.

    Check your apps are compatible with 10.10 - roaringapps.com
    http://www.etresoft.com/etrecheck can show what is running & installed - look for updates on the developer own sites.
    If you have many kernel extensions or startup items, look for updates to them too.
    Take a full bootable backup to another disk via Carbon Copy Cloner, SuperDuper! or Disk Utility.
    Disconnect the backup before you begin any install (ideally set it aside and leave it untouched in case you need to go back to 10.9).
    Personally I prefer a clean install when there are signs of multiple migrations (if you have upgraded across several OS versions over a period of years). Setup Assistant/Migration Assistant can import user data from a backup, but consider that apps and 'other data' should be manually reinstalled from the latest versions.
    If you clean install (erase the HD before installation), then make sure you deauthorize iTunes and any other apps that are associated online (like Find My Mac).
    Basically the steps you would take before selling a Mac…
    What to do before selling or giving away your Mac - Apple Support

  • Best Practices for zVM/SLES10/zDB2 environment for dialog instances.

    Hi,  I am a zSeries system programmer who has just completed an IBM led Proof of Concept which demonstrated the viability of running SAP instances on SUSE SLES10 Linux booted in zVM guests and accessing zDB2 data via hipersockets. Before we build a Linux infrastructure using the 62 IFLs we just procured, we are wondering if any best practices for this environment have been developed as an OSS note or something else by SAP.    Below you will find an email which was sent and responded to by IBM and Novell on these topics...
    "As you may know, Home Depot has embarked on an IBM led proof of concept using SUSE SLES10 running in zVM guests on IBM zSeries hardware to host SAP server instances.  The Home Depot IT organization is currently in the midst of a large scale push to modernize our merchandising and people systems on SAP platforms.  The zVM/SUSE/SAP POC is part of that effort, as is a parallel POC of an Intel Blade/Red Hat/SAP platform.  For our production financial systems we now use a pSeries/AIX/SAP platform.
          So far in the zVM/SUSE/SAP POC, we have been able to create four zVM LPARS on IBM z9 hardware, create twelve zVM guests on those LPARS, boot SLES10 in those guests, install and run SAP instances in those guests using hipersockets for access to our DB2 SAP databases running on zOS, and direct user workloads to the SAP instances with good results.  We have also successfully developed cloning scripts that have made it possible to create new SLES10 instances, configured and ready for SAP installs, in about 10 seconds using FLASHCOPY and IBM DASD.
          I am writing in the hope that you can direct us to technical resources at IBM/Novell/SAP who may be able to field a few questions that have arisen.  In our discussions about optimization of the zVM/SUSE/SAP platform, we wondered if any wisdom about the appropriateness of and support for using zVM capabilities to virtualize SAP has ever been developed or any best practices drafted.  Attached you will find an IBM Redbook and a PowerPoint presentation which describes the use of the zVM discontiguous shared segments and the zVM named saved system features for the sharing of reentrant code and other  elements of Linux and its applications, thereby conserving storage and disk resources allocated to guest machines.   The specific question of the hour is, can any SAP code be handled similarly?  Have specific SAP elements eligible for this treatment been identified? 
          I've searched the SUSE Knowledgebase for articles on this topic to no avail.  Any similar techniques that might help us reduce the total cost of ownership of a zVM/SUSE/SAP platform as we compare it to Intel Blade/Red Hat/SAP and pSeries/AIX/SAP platforms are of great interest as we approach the end of our POC.  Can you help?
          Greg McKelvey is a Client I/T Architect at IBM.  He found the attached IBM documents and could give a fuller account of our POC.  Pat Downs, IBM zSeries IT Architect, has also worked to guide our POC. Akshay Rao, IBM Systems IT Specialist - Linux | Virtualization | SOA, is acting as project manager for the POC.  Jim Hawkins is the Home Depot Architect directing the POC.  I've CC:ed their email addresses.  I am sure they would be pleased to hear from you if there are the likely questions about what the heck I am asking about here.  And while writing, I thought of yet another question that I hoping somebody at SAP might weigh in on; are there any performance or operational benefits to using Linux LVM to apportion disk to filesystems vs. using zVM to create appropriately sized minidisks for filesystems without LVM getting involved?"
    As you can see, implementation questions need to be resolved.  We have heard from Novell that the SLES10 Kernel and other SUSE artifacts can reside in memory and be shared by multiple operating system images.  Does SAP support this configuration?  Also, has SAP identified SAP components which are eligible for similar treatment?  We would like to make sure that any decisions we make about the SAP platforms we are building will be supportable.  Any help you can provide will be greatly appreciated.  I will supply the documents referenced above if they are not known to any answerer.  Thanks,  Al Brasher 770-433-8211 x11895 [email protected]

    Hello Al,
    First, let me welcome you on board. I am sure you won't be disappointed with your choice to run SAP on z/OS.
    As for your questions,
    it wasn't easy to find them in this long post, so I suggest you take the time to write a short summary that contains a very short list of questions.
    As for answers,
    here are a few useful sources of information:
    1. The SAP on DB2 for z/OS SDN page:
    SAP on DB2 for z/OS
    In it you can find 2 relevant docs:
    a. best practices for ...
    b. database administration for DB2 UDB for z/OS.
    This second publication is excellent; apart from DB2-specific info, it contains information on all the components of SAP on DB2 for z/OS, like zLinux, z/VM and so on...
    2. I can see that you are already familiar with the IBM Redbooks, but it seems that you haven't taken the time to get the most out of that resource.
    From your post it is clear that you have found one useful publication, but I know there are several.
    3. A few months ago I wrote a short post on a similar subject.
    I'm sure it's not exactly what you are looking for at this moment, but it's a good start, and with some patience you may be able to get some answers.
    Here's a link:
    http://blogs.ittoolbox.com/sap/db2/archives/index-of-free-documentation-on-sap-db2-administration-14245
    Good luck,
    Omer Brandis

  • Best practices for search service in a sharepont farm

    Hi,
    In a SharePoint web application, many BI dashboards are deployed, and we also plan to configure enterprise search for this application.
    In our SharePoint 2010 farm we have:
    2 application servers
    2 WFE servers
    One application server runs Central Administration + the Web Analytics service, and is itself a domain controller.
    The second application server runs only the Secure Store service + the PerformancePoint service.
    1 - If we run the Search Server service on the second application server, can it cause any issues for BI performance?
    2 - Is it best practice to run the PerformancePoint service and the search service on one server?
    3 - Also, is it best practice to run the search service on an application server where other services are already running, given that we have only one SharePoint web application that needs to be crawled and indexed, with the crawl schedule below?
    We only run a full crawl once per week and an incremental crawl at midnight daily.
    adil

    Hi adil,                      
    Based on your description, you want to know the best practices for search service in a SharePoint farm.
    Different farms have different search topologies; for the best search performance, I recommend that you follow the guidance for small, medium, and large farms.
    The articles below describe that guidance for the different farm sizes.
    The search service can run alongside other services on the same server if conditions permit, but if you want better performance for the search service and the other services (including BI performance), you can deploy the search service on a dedicated server.
    If conditions permit, I recommend combining a query component with a front-end web server, to avoid putting crawl components and query components on the same server.
    In your SharePoint farm, you can deploy the query components on a WFE server and the crawl components on an application server.
    The articles below describe the best practices for enterprise search.
    https://technet.microsoft.com/en-us/library/cc850696(v=office.14).aspx
    https://technet.microsoft.com/en-us/library/cc560988(v=office.14).aspx
    Best regards      
    Sara Fan
    TechNet Community Support

Maybe you are looking for

  • Boot Camp Partition Failed

    So I tried to partition my hard drive and I got the following error message: "Verification failed. This disk could not be partitioned. Use Disk Utility to repair this disk." And so I continue on to Disk Utility and I try to verify the disk and I ge

  • I am getting an error -50 when downloading a purchased tv episode

    Recently, I purchased a TV series, "The Fades", and proceeded to download past episodes. Episodes 1, 2 and 4 downloaded without any problems, but episode 3 stopped after 440.1 MB of 608.1 MB with the explanation stopped (err = -50). a popup window app

  • What are all these duplicates in iTunes 11?

    Why am I seeing all these duplicates in an album in iTunes 11? How can I get rid of them? Why do some have download from iCloud buttons? I don't want to think. What's going on here. What has Apple done to a nice simple interface? doug

  • Custom Date Intervals

    I am looking for a way for the end user to pick date intervals such as "This Month", "This Quarter", "This Year", "Last Month", "Last Quarter", "Last Year". I know about how to setup queries that select dates between two date ranges, but I have a req

  • iPhoto does not recognize devices

    Can't upload any photos to iPhoto because it won't recognize phones or iPads. Help?