Best practices for animated 3D renderings of products for Muse portfolio site?

We design books for independent authors. Page layout is done with ID CC, and covers are designed using PS CC, AI CC, etc.
My idea for our Muse portfolio site is to create animated videos from image sequences exported from Luxology Modo. I've built 3D meshes of the various book formats: perfect bound, case laminate, hard bound with dust jacket, etc. I've created an animation which starts out with the book standing upright on a white surface and angled 30° to the right to show the spine and the front cover. The book then rotates clockwise to straight on, pauses, opens to a spread, still centered on the recto page, pauses, pans left to view the verso page, pauses, closes right to left to show the back cover, pauses, then rotates back around to its starting position.
I use Premiere Pro CC to make the videos from image sequences generated from Luxology Modo. The reason for image sequences is so that I can change durations of various hold frames of the products during the animation.
I'm thinking of using an HD-like resolution of 1080 pixels tall, but also with a square format. So it'd be 1080 x 1080.
Not sure about the frame rate, but I'd like to try 12 fps to keep the file sizes down.
Is mp4 the only format? What about bit rate?
So far, my videos look kind of soft. I'd like them to be nice and crisp like an Apple keynote stream.
Any suggestions?
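For reference, here is a minimal encode sketch, assuming an ffmpeg-based export (ffmpeg is not part of the Adobe workflow described above, and the file names and quality values are placeholders). It shows the kind of settings that keep a 1080 x 1080, 12 fps image sequence crisp: constant-quality H.264 in an MP4 container rather than a low fixed bitrate.

    # Sketch only - encode a numbered PNG sequence to a square 1080 MP4.
    # 'book_%04d.png' and the CRF value are assumptions; lower CRF = higher quality, larger file.
    ffmpeg -framerate 12 -i book_%04d.png \
           -c:v libx264 -preset slow -crf 18 \
           -pix_fmt yuv420p \
           -movflags +faststart \
           book_spin_1080x1080.mp4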

Hi!
My suggestion is to skip the folders altogether! It will end up as a total mess after a couple of years. My recommendation is to use document type classification and classify each document with the right information. You can then search for documents, and you don't need to look through tons of folders to find the right one.
I know that you have to put the document in a folder to be able to create it in EasyDMS, but at my customers' sites we have a year folder and then month folders underneath where they just dump the documents. We then work with either object links or classification to find the right documents in the business processes. Another recommendation is to implement the TREX engine to be able to find your documents. I don't know if this was the answer you wanted to get, but I think this is the way forward if you would like a DMS system that can still be used in 10+ years. Imagine replacing Google with a file browser!
Best regards,
Kristoffer P

Similar Messages

  • Not a question, but a suggestion on updating software and best practice (Adobe we need to create stickies for the forums)

    Lots of you are hitting the brick wall in updating, and the end result is a non-recoverable project. In a production environment and with projects due, it's best that you never update while in the middle of projects. Wait until you have a day or two of down time, then test.
    For best practice, get into the habit of saving off your projects to a new name by incremental versions.  i.e. "project_name_v001", v002, etc.
    Before you close a project, save it, then save it again to a new version. In this way you'll always have two copies and will not lose the entire project. Most projects crash upon opening (at least in my experience).
    At the end of the day, copy off your current project to an external drive.  I have a 1TB USB3 drive for this purpose, but you can just as easily save off just the PPro, AE and PS files to a stick.  If the video corrupts, you can always re-ingest.
    Which leads us to the next tip: never clear off your cards or wipe the tapes until the project is archived.  Always cheaper to buy more memory than recouping lost hours of work, and your sanity.
    I've been doing this for over a decade and the number of projects I've lost?  Zero.  Have I crashed?  Oh, yeah.  But I just open the previous version, save a new one and resume the edit.

    Ctrl + B to show the Top Menu
    View > Show Sidebar
    View > Show Status Bar
    Deactivate Search Entire Library to speed things up.
    This should make managing your iPhone the same as it was before.

  • What is the best practice to roll out ApEx to production?

    Hi,
    My first ApEx application :) What is the best practice to deploy an ApEx application to production?
    Also, I created end-user accounts and use those accounts to log in to ApEx via the URL (http://xxx.xxx.xxx:8080/apex/f?p=111:1). However, how come sometimes it's still in development mode (i.e. with the Home|Application#|Edit Page#|Create|Session|... toolbar showing up at the bottom), but sometimes not?
    Thanks much :)
    Helen

    When you set up your users, make sure the radio buttons for both workspace admin and developer are set to No. This makes them an "end user" and they should not see the links. Only developers and workspace admins can see it.
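    If it helps, here is a hedged PL/SQL sketch of creating a pure end-user account programmatically; the workspace name, user name, and password are placeholders, and the APEX_UTIL parameter names may vary slightly by release:

        -- Sketch only: an account with no developer/admin privileges never sees
        -- the developer toolbar (Home|Application#|Edit Page#|...).
        BEGIN
          apex_util.set_security_group_id(
              apex_util.find_security_group_id(p_workspace => 'MY_WORKSPACE'));
          apex_util.create_user(
              p_user_name       => 'HELEN_ENDUSER',
              p_web_password    => 'ChangeMe#1',
              p_developer_privs => NULL);  -- NULL = end user
          COMMIT;
        END;
        /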

  • Best practice implementation. What are the steps for this?

    We had an upgrade from CRM 4 to 7 undertaken by some outfit (name deleted). After the upgrade was complete, it looks as though the company carried on using the GUI interface rather than the WebUI interface, mainly because that's how CRM 4 worked. Now they would like to use the WebUI interface of CRM 7, as the GUI interface is no longer supported, but they are receiving a number of errors when using certain features of the WebUI. It looks as though a lot of config is missing, especially in the UI Framework area. I can only assume that whichever company performed the upgrade simply skipped this section.
    I assume that I could download the best practice install/upgrade (?) and then just execute the section regarding the UI Framework, if such a section exists. Bearing in mind that there seems to be a lot of config missing in the UI Framework section, would you recommend the course of action that I have mentioned?
    Our WebUI Interaction centre is giving errors when we go in and I have been informed that I need to complete the config for:
    Customer Relationship Management->UI Framework->UI Framework Definition->Maintain Runtime Framework Profile
    But as I mentioned, there are lots of other sections in the UI Framework area that are empty, hence the suggestion I made above. However, I would specifically be interested to hear from anyone who can tell me what the entries are in the view table BSPWDV_RF_PROF_C and possibly the tables BSPWDV_CTRL_REPL and BSPWDV_DEF_CC.
    I know this only completes part of the config, but it might be enough so that the WebUI IC can be viewed.
    On another subject, I have just come into this company; if I wanted to see what had been installed, how do I go about that? For example, if I wanted to know whether there had been an upgrade from 4 to 7 for a particular industry solution, where do I check this?
    Jason

    I have been through the following steps:
    Entered this URL http://help.sap.com/bp/initial/index.htm
    Clicked on 'Cross-industry Packages'
    Clicked on 'CRM'
    Clicked on 'English'
    Then the following page is displayed:
    http://help.sap.com/bp_crm70/CRM_DE/HTML/index.htm
    But now what? How do I get the Best Practice instructions for a CRM implementation?
    Jason

  • Best Practice to use one Key on ACE for new CSR?

    We generate multiple CSRs on our ACE, but our previous network admin was only using one key for all new CSR requests.
    I.e., we have a samplekey.pem key on our ACE, and we use samplekey.pem to generate CSRs for multiple certs.
    Is this best practice, or should we be using a new key for each new CSR?
    Also, is it OK to delete old CSRs on the load balancer, since the limit is only 8? Thanks.


  • Best practice with respect to wcf configuration files for SSIS

    So after reading a lot of posts and blogs on how to configure SSIS to read from configuration files, I am still not clear and would like any expert to provide a definitive stance. In my case the WCF service consumption is wrapped in a separate assembly.
    I am referencing the assembly in an embedded C# script within my SSIS package.
    When I make the helper class call to the web service, I get the "endpoint not found" WCF exception.
    Keep in mind I am running this from VS 2012 IDE and did the following to make sure the WCF call works:
    1. Googled and found that you need to have config entries in the DtsDebugHost.exe.config file, but it still did not work.
    2. Had the same entries in the associated app.config file for the C# script, but it still did not work.
    It seems like SSIS is very fragile with respect to consuming WCF entries in a config file. Is the best practice to just have the endpoint created in code and externalize the endpoint as an SSIS variable / XML file, or is there really a way to get these config files working?
    Attached is the wcf snippet of my config file.
    <system.serviceModel>
      <bindings>
        <basicHttpBinding>
          <binding name="ITransactionProcessor">
            <security mode="TransportWithMessageCredential" />
          </binding>
        </basicHttpBinding>
      </bindings>
      <client>
        <endpoint address="https://ics2wstest.ic3.com/commerce/1.x/transactionProcessor" binding="basicHttpBinding" bindingConfiguration="ITransactionProcessor" contract="CyberSource.ITransactionProcessor" name="portXML" />
      </client>
    </system.serviceModel>
    SM

    I have the code working without use of config files. I am just disappointed that it is not working using the configuration files. That was one of the primary intents of my code re-factoring. 
    Katherine Xiong, if you are proposing this as an answer, does this imply that Microsoft's stance is not to use configuration files with SSIS? Please answer.
    SM
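    A minimal sketch of the code-only approach discussed above, i.e. building the endpoint in C# instead of relying on DtsDebugHost.exe.config. The URL and binding mode come from the posted config; the class name and credential values are assumptions:

        // Sketch only: create the WCF client endpoint entirely in code so the SSIS
        // script has no dependency on DtsDebugHost.exe.config / app.config.
        using System.ServiceModel;

        public static class TransactionProcessorClientFactory
        {
            public static CyberSource.ITransactionProcessor Create()
            {
                // Binding mode taken from the posted config.
                var binding = new BasicHttpBinding(BasicHttpSecurityMode.TransportWithMessageCredential);
                var address = new EndpointAddress(
                    "https://ics2wstest.ic3.com/commerce/1.x/transactionProcessor");

                var factory = new ChannelFactory<CyberSource.ITransactionProcessor>(binding, address);
                factory.Credentials.UserName.UserName = "yourMerchantId";      // placeholder
                factory.Credentials.UserName.Password = "yourTransactionKey";  // placeholder
                return factory.CreateChannel();
            }
        }

    The endpoint URL could just as easily come from an SSIS variable passed into the script, which keeps it configurable per environment without any .config file.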

  • Best practice - which server OS should I use for Exchange 2010 install

    We currently have two Exchange 2007 boxes running Server 2003. The plan is to upgrade to Exchange 2010 (I'd prefer 2013 but the powers that be want 2010), and I have been asked to follow Microsoft best practice. The problem is I can't find anything to point me in the direction of the recommended server OS; I can find ones it will work on, but nothing to say Microsoft recommends this...
    We have licenses for Server 2008, 2008 R2 and 2012 available, which one should I advise is the Microsoft recommendation?

    Thanks Andy,
    So is there no actual best practice recommendation for a server OS to run Exchange 2010 on? I agree that 2012 would be the one to go for, but the people making the decision on how we do this want reasons, and as they don't really have a lot of technical
    understanding I need to be able to go to them with "Use Server 20xx because it's Microsoft best practice".
    If there isn't a best practice recommendation, I will make the case for 2012 based on its longer support life and more options for high availability.
    Well, you probably won't find a "best practice" so much as an "it's supported" stance from Microsoft.
    As in all these things, there may be other reasons a business chooses to use 2008 over 2012 etc...

  • Best practice data source from ECC 6.0 for legal consolidation in BPC NW7.5

    Hi there,
    after scanning every message in this forum for "data source" I wonder if there isn't any standard approach from SAP to extract consolidation data from ECC 6.0. I have to say that my customer is not using New G/L so far, and therefore the great guide "how to get balances from ECC 6.0 ..." does not fully work for us.
    Coming from the old world of EC-CS, the first option is to go via the GLT3 table. This option requires clever customization and the need to keep both GLT0 and GLT3 in line. Who has experience with maintaining these tables in a production environment?
    We therefore plan to use data source 0FI_GL_4 which contains all line items to the financial documents posted. Does this approach make sense or will it fail because of performance issues?
    Any help is appreciated!
    Kind regards,
    Dierk

    Hi Dierk,
    Do you have a need for the level of detail provided by the 0FI_GL_4 extractor? Normally I would recommend going for the basic 0FI_GL_6 extractor, which provides a much more manageable data volume since it only gives the periodic activity and balances as well as a much smaller selection of characteristics. Link: [http://help.sap.com/saphelp_nw70/helpdata/en/0a/558cabb2e19a4db3097b81bba4fd0e/frameset.htm]
    Despite this initial recommendation, every client I've had has eventually had a need for the level of detail provided by the 0FI_GL_4 extractor (or the New G/L equivalent - 0FI_GL_14). Most BW systems can handle the output of the line-item extractors without much issue, but you should test using production data and make sure your system sizing takes into account the load.
    The major problem you can run into with the line-item extractors is that if your delta somehow gets compromised it can take a very long time (days, sometimes weeks) to reinitialize and this can cause a large load in your ECC and BW system. Also, for the first transport to production, it is important to plan time to initialize the delta.
    Ethan

  • Best practice when it comes to inserting audio for powerpoint slides

    I'm creating a mixture of PowerPoint slides with animation and software simulation projects from Captivate.
    Currently I have developed the audio files associated with the animation objects for the PowerPoint slides. My question:
    Should I insert the audio files into PowerPoint and edit the timing of the animation based on the duration of the audio? I have been playing around with the editing and, to be honest, I'm getting a tad frustrated trying to coordinate the timing.
    or
    Should I insert the audio via Captivate after importing the PowerPoint slides?
    Any help is appreciated.
    Regards
    AJ

    All audio clips are stored in the Library. Either from the Library or using the Audio Management panel, you can edit the audio clips directly in Audition (part of the Adobe Creative Cloud, one of the best audio applications available), and when you save they are automatically updated in Captivate as well. The same workflow exists with Photoshop: you can edit a source Photoshop file, Captivate keeps the wanted layers as separate images in the Library, and you can edit them from Captivate right away in Photoshop. The integration of Captivate with other Adobe applications is unique among eLearning tools.

  • Best Practice loading Dimension Table with Surrogate Keys for Levels

    Hi Experts,
    how would you load an Oracle dimension table with a hierarchy of at least 5 levels, with surrogate keys in each level and a unique dimension key for the dimension table?
    With OWB it is an integrated feature to use surrogate keys in every level of a hierarchy. You don't have to care about the parent-child relation; the load process of the mapping generates the right keys and takes care of the relation between parent and child inside the dimension key.
    I tried to use one interface per level and created a surrogate key with a native Oracle sequence.
    After that I put all the interfaces into one big interface with a union data set per level and added lookups for the right parent-child relation.
    I think it is a bit too complicated to build the interface like that.
    I would be more than happy for any suggestions. Thank you in advance!
    negib

    Hi,
    I do like the level keys feature of OWB - it makes aggregate tables very easy to implement if you're sticking with a star schema.
    Sadly there is nothing off the shelf in the built-in knowledge modules with ODI; it doesn't support creating dimension objects in the database by default, but there is nothing stopping you from coding up your own knowledge module (perhaps use flex fields on the datastore to tag column attributes as needed).
    Your approach is what I would have done, possibly using a view (if you don't mind having it external to ODI) to make the interface simpler.
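    To illustrate the lookup-based approach, here is a hedged Oracle SQL sketch; the table, column, and sequence names are invented for the example. Each level gets its surrogate key from a sequence, and the child load looks up the parent's surrogate key generated in the previous step:

        -- Sketch only: load one child level after its parent level is loaded.
        CREATE SEQUENCE seq_prod_group_sk;

        INSERT INTO dim_product_group (group_sk, group_code, group_name, family_sk)
        SELECT seq_prod_group_sk.NEXTVAL,
               src.group_code,
               src.group_name,
               fam.family_sk                 -- parent surrogate key looked up by natural key
        FROM   stg_product_group src
        JOIN   dim_product_family fam
               ON fam.family_code = src.family_code;

    A view over the staging tables that already resolves the parent natural keys would keep a single ODI interface per level very simple, as suggested above.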

  • Best practice needed: how to dynamically change the rowset for a dataTableModel

    Hello Creator folks,
    I need advice on the following problem.
    I started from the insertUpdateDelete tutorial, and I stuck to the very first part - creation of the first page with a dropdown and a table.
    Now I add a second dropdown to add another control level on my table, on tripType for example - simple, it works without problem.
    My problem: my dropdowns have an "off" value - that is, a value indicating that filtering on this value should be disabled. For example, I want to filter displayed data according to person, tripType, or both.
    As a result, we now have 3 different requests: one with personId = ?, one with tripTypeId = ?, and the last one with both. But the displayed table is the same.
    I have already done such a page by using the "rendered" option: my JSP contains the same table 3 times, each with a dedicated rowset, but only one is rendered at a time. But I don't like this solution, it is hell to maintain, and I don't want to imagine what happens if my client asks for a third dropdown!
    Another possibility: create a separate page for each combination. Well, much the same as the previous one.
    Is it possible at runtime to change the command associated with a rowset, and thus with the linked RowSetDataModel? I tried the following way:
    In the constructor of the page:
        if (isPersonAndTripType()) {
            myRowSet.setCommand(REQUEST_PERSON_TRIPTYPE);
            myDataTableModel.setObject(1, this.getSessionBean1().getPersonId());
            myDataTableModel.setObject(2, this.getSessionBean1().getTripTypeId());
        } else if (isTripTypeOnly()) {
            ewslive_lasteventIlotRowSet.setCommand(REQUEST_TRIPTYPE);
            myDataTableModel.setObject(1, this.getSessionBean1().getTriptypeId());
        } else {
            // the default rowset, no change.
            myDataTableModel.setObject(1, this.getSessionBean1().getPersontId());
        }
        myDataTableModel.execute();
    And in each dropdown_processValueChange, after updating tripId or personId:
        if (isPersonAndTripType()) {
            myRowSet.setCommand(REQUEST_PERSON_TRIPTYPE);
            myDataTableModel.setObject(1, this.getSessionBean1().getPersonId());
            myDataTableModel.setObject(2, this.getSessionBean1().getTripTypeId());
        } else if (isTripTypeOnly()) {
            ewslive_lasteventIlotRowSet.setCommand(REQUEST_TRIPTYPE);
            myDataTableModel.setObject(1, this.getSessionBean1().getTriptypeId());
        } else {
            myRowSet.setCommand(REQUEST_PERSON);
            myDataTableModel.setObject(1, this.getSessionBean1().getPersontId());
        }
        myDataTableModel.execute();
    On the first run (one person selected by default), everything is OK. But when I change a dropdown I get an exception: the page constructor is called, all OK. The dropdown_processValueChange is called, the correct request is linked to the dataTableModel, and the function returns normally, then the exception occurs:
    Exception Details:  javax.faces.el.EvaluationException
      javax.faces.FacesException: java.sql.SQLException: [OraDriver] Not on a valid row.
    Possible Source of Error:
       Class Name: com.sun.faces.el.ValueBindingImpl
       File Name: ValueBindingImpl.java
       Method Name: getValue
       Line Number: 206
    Help needed!!!

    I've done something similar in my current app, the only difference I see being that I retrieve the value from the dropdown directly rather than going through the sessionbean as I don't need to save the selection.
    I've managed to iron out all the bugs and it works well now. Not near my development machine or I'd post the code. I do have a couple of questions:
    Why do you have the if/else setup in the constructor? If the page is being called for the first time I don't see why you need it.
    Why do you use ewslive_lasteventIlotRowSet.setCommand(REQUEST_TRIPTYPE); instead of myRowSet.setCommand(REQUEST_TRIPTYPE);?
    I think this is causing your problem as you haven't shown where you set the datacache for myDataTableModel
    to ewslive_lasteventIlotRowSet instead of myRowSet.
    You can also set all of your dropdowns to use the same event handler, which cuts down on the duplicate code :)
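    As a concrete sketch of that consolidation (not code from the thread - the REQUEST_* constants and getter names follow the earlier snippets), a single helper that always drives the same rowset can be called from every dropdown's processValueChange handler:

        // Sketch only: one refresh method, one rowset, shared by all dropdowns.
        private void refreshTripTable() {
            try {
                if (isPersonAndTripType()) {
                    myRowSet.setCommand(REQUEST_PERSON_TRIPTYPE);
                    myDataTableModel.setObject(1, getSessionBean1().getPersonId());
                    myDataTableModel.setObject(2, getSessionBean1().getTripTypeId());
                } else if (isTripTypeOnly()) {
                    myRowSet.setCommand(REQUEST_TRIPTYPE);   // same rowset in every branch
                    myDataTableModel.setObject(1, getSessionBean1().getTripTypeId());
                } else {
                    myRowSet.setCommand(REQUEST_PERSON);
                    myDataTableModel.setObject(1, getSessionBean1().getPersonId());
                }
                myDataTableModel.execute();
            } catch (Exception e) {
                throw new javax.faces.FacesException(e);
            }
        }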

  • Best practice to set up a media server/central location for iTunes, iPhoto

    Hey guys,
    I currently have an Airport Extreme with two Western Digital My Books connected that hold my iPhoto and iTunes libraries.
    This is not ideal, since copying things over the network is extremely slow and accessing the media (especially photo libraries) is super janky and often leads to disconnects.
    I am looking to revamp my setup. What do you guys recommend?

    Should I just get a Mac Mini and connect it to the Airport Extreme?

  • The system requirements for Mavericks use years of production for the computer requirements. Why can't they use the generational numbering system?

    The office location where the Mac Pro is located has a slow internet connection. I downloaded the Mavericks installer onto a thumb drive. Attempted to install on a dual 3-core Xeon Mac Pro, model 2,1. After a few hours, I figured out that the error message generated by the installer was telling me that my computer doesn't support Mavericks. Do other people get confused about which generation of hardware is supported by which software? I understand the difference between PowerPC and Intel. The tricky part is the difference between 32-bit and 64-bit with the EFI and the CPU instruction set. Also, why can't they stick with a clear numbering system that is visible when you Get Info on the Mac?

    If you are desperate you can use Hackintosh tools to boot in 'legacy' mode, which allows 10.9 to run on unsupported Macs. The Chameleon bootloader and some Google fu will get it running in an unsupported way (assuming your graphics card is supported).
    I think Apple dug this hole when they told everyone that these were '64-bit Macs'. Sadly, if they don't support 32-bit EFI in their new OS, it's not going to work by default. In other discussions I have read that drivers need to be rewritten for actual 64-bit support on a 32-bit EFI.
    I have run 10.8 on my 1,1 MP, but 10.6 was nicer; at some point I'll try 10.9 on it.
    P.S. http://mactracker.ca and http://www.everymac.com are handy for working out what model supports what features etc.

  • Best Practice for CQ Updates in complex installations (clustering, replication)?

    Hi everybody,
    we are planning a production setup of CQ 5.5 with an authoring cluster replicating to 4 publisher instances. We were wondering what the best update process looks like in a scenario like this. Let's say we need to install the latest CQ 5 update, which we actually have to:
    Do we need to do this on every single instance, or can replication be utilized to distribute updates?
    If updating a cluster - same question: one instance at a time? Just one, and the cluster does the rest?
    The question is really: can update packages (official or custom) be automatically distributed to multiple instances? If yes, is there a "best practice" way to do this?
    Thanks for any help on this!
    Henning

    Hi Henning,
    The CQ 5.5 service packs are distributed as CRX packages. You can replicate these packages, and on the publish instances they are unpacked and installed.
    In a cluster the situation is different: you have only 1 repository. So when you have installed the service pack on one node, the new versions of bundles and other stuff are unpacked to the repository (most likely to /libs). Then the magic (essentially the JcrInstaller) takes care that the bundles are extracted and started.
    I would not recommend activating the service pack in a production environment, because then all publish instances will be updated at the same time. And as a restart is required, you might encounter downtime. Of course you can make it work if you play with the replication agents :-)
    cheers,
    Jörg
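    For a single instance, a package can also be pushed over HTTP; a hedged example using the CRX Package Manager service, where the host, credentials, and file name are placeholders:

        # Sketch only: upload and install one package on one instance.
        curl -u admin:admin \
             -F file=@cq-5.5-servicepack.zip \
             -F install=true \
             http://localhost:4502/crx/packmgr/service.jsp

    Scripting this against the publishers one at a time avoids restarting them all simultaneously, in line with the downtime concern above.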

  • Best practice for number of result objects in webi

    Hello all,
    I am just wondering if SAP has any recommendation or best practice document regarding the number of fields in the Result Objects area for WebI. We are currently running on XI 3.1 SP3... one of the end users is running a WebI report with close to 20 objects/dimensions and 2 measures in the Result Objects. The report runs for 45-60 minutes and sometimes times out. The cube which stores the data has around 250K records, and the report would return pretty much all the records from the cube.
    Any recommendations/ best practices?
    On a similar issue - our production system is around 250 GB; what would the memory on your server typically be? Currently we have 8 GB of memory on the SAP instance server.
    Thanks in advance.

    Hi,
    You mention cubes, so I suspect BW or MSAS. Yes, OLAP data access (ODA) to OLAP data sets is a struggle for Web Intelligence, which is best at consuming relational row sets.
    Inefficient MDX queries can easily be generated by the WebI tool, primarily due to substandard (or excessive) query and document design. Mandatory filters and focused navigation (i.e. targeted BI questions) are the best for success.
    Here's an interesting article about "when is a webi doc too big": https://weblogs.sdn.sap.com/pub/wlg/18706
    Here's a best practice doc about webi report design and tuning on top of BW MDX: https://service.sap.com/~sapidb/011000358700000750762010E
    Optimization of the cube itself, including aggregates and cache warming, is important. But especially the use of "suppress unassigned nodes" in the BW hierarchy, and "query stripping" in the webi document.
    Finally, the patch level of the BW (BW-BEX-OT-MDX) component is critical; anything lower than 7.01 SP09 is trouble (memory management, MDX optimization, functional correctness).
    Regards,
    H
