Best Practices: iPad/MacBook Pro syncing for video production in education

My organization just bought 14 MacBook Pros and 14 iPad Minis. Our goal is to have students in single-day classes use the iPads to film something, then sync/export the video to a MacBook Pro, where they can then edit that video in iMovie. Once that single-day class is over, all of the video will (likely) be deleted, and new students come in a couple of days later and start fresh. I'm trying to figure out the best practices for this to make it as painless as possible for all involved.
So, matching Apple IDs for each pair? One Apple ID for all devices and manual sync through iTunes? Dropbox/cloud syncing instead of iTunes?
All of these devices are brand new. I have already started prepping the MacBook Pros, but have not even turned on the iPads, since I'm not sure which Apple ID I should attach to them -- I assume the first Apple ID on an iPad will accept the iLife apps much the same way they do on the MacBook Pros.
Any help is appreciated.
Thanks
Jack

Well, the most important fact to accept is that ALL DRIVES WILL FAIL. It's just a matter of when. I can tell you about a nightmare situation with G-Drives (before Hitachi bought them). What format are you shooting? If you shoot on tape, you can always recapture, as long as you captured with "Abort capture on dropped frames" and "Make new clip on timecode break" enabled. But that's going to take real time. If you shot on a chip-based format, backing up the cards in multiple places (and I mean multiple) can provide a sense of security. But if you need to be able to get back to work immediately after a drive failure, having a backup of your media, or storing it on a redundant RAID, is crucial. I also seriously recommend keeping a clone of your startup drive, so that if your startup (boot) drive fails, you can get back to work quickly.
https://discussions.apple.com/docs/DOC-2494

Similar Messages

  • The best MacBook for video production

    Hello,
    I want to start working on video production (from camera shooting to video editing and compositing).
    As a first step, mobility is very important, so I want to know what is the best MacBook to start with (best price/performance ratio)?
    Also, is the Final Cut Studio package all I need for video production?
    Thanks!

    Have you ever worked with video before?
    What software do you intend to use? What formats will you acquire, edit, output?
    Do you intend to make money from this or is it a hobby?
    If you intend to make money from your efforts, do you have a capitalization plan and have you talked to your banker? Do you have a business plan in place that has been vetted by knowledgeable people?
    Head down to your local library and pull anything you can find in the business section regarding starting a small business, then read whatever you can get your hands on about video production. Then walk over to the public access channel and sign up for one of their "intro to video" sessions.
    Honestly, the speed of the laptop is the least of your concerns.
    On the other hand, if this is just a way to play, purchase whatever you want.

  • Best practice to define length for a varchar field of a table in SQL Server

    What is the best practice for defining the length of a varchar field in a table?
    For example, should a "Remarks By Person" field be varchar(max) or varchar(4000)?
    Could it affect optimization in the future?
    Experts, please reply ...
    Dilip Patil..

    Hi Dilip,
    Varchar(n | max) is variable-length, non-Unicode character data. N defines the string length and can be a value from 1 through 8,000; max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size is the actual length of the data entered plus 2 bytes. Use varchar when the sizes of the column data entries vary considerably, and use varchar(max) only when the field's data might exceed 8,000 bytes.
    So the conclusion is, just as Uri said: whether to use varchar(max) or varchar(4000) depends on how many characters you are going to store.
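    For a concrete (and deliberately tiny) illustration of the two declarations being compared, here is a sketch in Java using the Microsoft JDBC driver; the connection string, table names, and column name are hypothetical placeholders, not values from this thread:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class VarcharSizingSketch {
            public static void main(String[] args) throws SQLException {
                // Hypothetical connection string; adjust host, database, and credentials.
                String url = "jdbc:sqlserver://localhost;databaseName=SandboxDb;user=appuser;password=secret";
                try (Connection con = DriverManager.getConnection(url);
                     Statement st = con.createStatement()) {
                    // varchar(4000): storage is actual length + 2 bytes, appropriate while
                    // remarks are known to stay under 4,000 characters.
                    st.executeUpdate("CREATE TABLE RemarksCapped "
                            + "(Id INT PRIMARY KEY, RemarksByPerson VARCHAR(4000))");
                    // varchar(max): up to 2^31-1 bytes; reserve it for fields whose entries
                    // may exceed the 8,000-byte in-row limit.
                    st.executeUpdate("CREATE TABLE RemarksUnbounded "
                            + "(Id INT PRIMARY KEY, RemarksByPerson VARCHAR(MAX))");
                }
            }
        }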
    The following document about varchar in SQL Server is for your reference:
    http://technet.microsoft.com/en-us/library/ms176089.aspx
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Best practice on Oracle VM for Sparc System

    Dear All,
    I want to test Oracle VM for SPARC, but I don't have a new-model server to test it on. What is the best practice for Oracle VM for SPARC?
    I have a Dell laptop with the following specs:
    - Intel® Core™ i7-2640M (2.8 GHz, 4 MB cache)
    - RAM: 8 GB DDR3
    - HDD: 750 GB
    - 1 GB AMD Radeon graphics
    I want to install Oracle VM VirtualBox on my laptop and then install Oracle VM for SPARC inside VirtualBox -- is that possible?
    Please kindly give advice,
    Thanks and regards,
    Heng

    Heng Horn wrote:
    How about a desktop or workstation computer with the latest version that has a CPU supporting Oracle VM or SPARC?
    Nope. The only place you find SPARC T4 processors is in Sun servers (and some Fujitsu servers, I think).

  • Best practices to reduce downtime for Database releases(rolling changes)

    Hi,
    What are the best practices to reduce downtime for database releases on 10.2.0.3? Which DB changes can be made in a rolling fashion, and which can't?
    Thanks in advance.
    Regards,
    RJiv.

    I would be very dubious about any sort of universal "best practices" here. Realistically, your practices need to be tailored to the application and the environment.
    You can invest a lot of time, energy, and resources into minimizing downtime if that is the only goal. But you'll generally pay for that goal in terms of developer and admin time and effort, environmental complexity, etc. And you generally need to architect your application with rolling upgrades in mind, which necessitates potentially large amounts of redesign to existing applications. It may be perfectly acceptable to go full-bore into minimizing downtime if you are running Amazon.com and any downtime is unacceptable. Most organizations, however, need to balance downtime against other needs.
    For example, you could radically minimize downtime by having a second active database, configuring Streams to replicate changes between the two master databases, and configuring the middle tier environment so that you can point different middle tier servers against one or the other database. When you want to upgrade, you point all the middle tier servers against database A other than one that lives on a special URL. You upgrade database B (making sure to deal with the Streams replication environment properly, depending on requirements) and do the smoke test against the special URL. When you determine that everything works, you configure all the app servers to point at B (with the Streams replication process configured to replicate changes from the old data model to the new data model), upgrade A, repeat the smoke test, and then return the middle tier environment to its normal state of balancing between databases.
    This lets you upgrade with 0 downtime. But you've got to license another primary database. And configure Streams. And write the replication code to propagate the changes on B during the time you're smoke testing A. And you need the middle tier infrastructure in place. And you're obviously going to be involving more admins than you would for a simpler deploy where you take things down, reboot, and bring things up. The test plan becomes more complicated as well since you need to practice this sort of thing in lower environments.
    Justin

  • Upgrading PowerMac G4 to the MAX for video production?????

    I need to know if I can just upgrade my Power Mac G4 (AGP graphics) to the max for video production? I'll be using 4 main programs.
    1. Final Cut Studio 2
    2. After Effects
    3. Pro Tools
    4. Photoshop CS 3
    Final Cut recommends this for all applications:
    * 2GB of RAM when working with compressed HD and uncompressed SD sources
    * 4GB of RAM when working with uncompressed HD sources
    Here are my computer stats:
    Machine Model: PowerMac3,1
    CPU Type: PowerPC G4 (2.9)
    Number Of CPUs: 1
    CPU Speed: 450 MHz
    L2 Cache (per CPU): 1 MB
    Memory: 768 MB
    Bus Speed: 100 MHz
    Boot ROM Version: 3.2.2f1
    Can anyone please help? This is my first mac.

    Hey DJ,
    I'm thinking that, for those specific needs, dollars are better thrown at a newer model than upgrades for your existing one. Here's why:
    1) You do not have native support for large hard drives, something that's nice for video work. You'd have to spend money on a PCI adaptor card to add that support. Figure US$40-100.
    2) Your video card is likely not up to the task unless it's already been upgraded. Another $70-250 depending on how good a scrounger you are.
    3) You need a minimum 1.25 GHz G4 processor and, if you get the upgrade, who's to say whether the next version of FC will still support a G4? 1.2 GHz processor upgrades start at over $200, and faster ones run up to $700 (OWC pricing for new gear).
    4) You can't put more than 2 GB of RAM in your computer.
    And, after all the upgrades, you still have the basic bus speed limitation inherent in all G4s.
    Bottom line is that you can quickly invest enough to buy a faster used Mac. Examples of used G5 prices (usually at the top of the price scale) are here. Even an entry level G5 should be much faster than a fully upgraded G4.
    Don't get me wrong--I'm an upgrader and love to do it regardless of cost. In this case, I see the potential for ending up with an upgraded computer that struggles at its intended task and could be obsoleted by the next upgrade of the software.

  • Best practices with LDIF Development for RBAC?

    I'm currently working on enforcing RBAC (Role Based Access controls) in OID that may be subject to change every few months. What I've currently been doing is writing LDIF files to make changes to the existing RBAC once the changes have been finalized.
    Unfortunately, now we have ended up with a growing list of LDIF files that must be run in sequential order if we were to build a new environment. Any defects or development errors that slip through developer unit testing must be handled in the same manner.
    What is the best-practice process for performing this type of development? Would it make more sense to have one LDIF file that removes all of the RBAC enforcement (via ldapmodify -c), and then a separate file that installs the latest and most up-to-date version? I've also considered just using one LDIF file, appending any updates to the end of it, and running it with ldapmodify and the -c parameter.
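    For illustration only, a drop-and-recreate pass along those lines might look like the following Java/JNDI sketch; the OID host, port, bind DN, and group DNs are hypothetical placeholders, not values from this thread:

        import java.util.Hashtable;
        import javax.naming.Context;
        import javax.naming.NamingException;
        import javax.naming.directory.Attribute;
        import javax.naming.directory.Attributes;
        import javax.naming.directory.BasicAttribute;
        import javax.naming.directory.BasicAttributes;
        import javax.naming.directory.DirContext;
        import javax.naming.directory.InitialDirContext;

        public class RbacRebuildSketch {
            public static void main(String[] args) throws NamingException {
                Hashtable<String, String> env = new Hashtable<>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
                env.put(Context.PROVIDER_URL, "ldap://oid.example.com:3060"); // hypothetical host/port
                env.put(Context.SECURITY_AUTHENTICATION, "simple");
                env.put(Context.SECURITY_PRINCIPAL, "cn=orcladmin");          // placeholder bind DN
                env.put(Context.SECURITY_CREDENTIALS, "changeit");            // placeholder password
                DirContext ctx = new InitialDirContext(env);
                String roleDn = "cn=AppAdmins,cn=Groups,dc=example,dc=com";   // hypothetical role entry
                try {
                    // Pass 1: tear down the existing role entry, ignoring "no such object"
                    // in the same spirit as ldapmodify -c continuing past errors.
                    try {
                        ctx.destroySubcontext(roleDn);
                    } catch (NamingException ignored) {
                    }
                    // Pass 2: recreate the role from the current, authoritative definition,
                    // so building a new environment is a single, repeatable step.
                    Attributes attrs = new BasicAttributes(true);
                    Attribute objectClass = new BasicAttribute("objectclass");
                    objectClass.add("top");
                    objectClass.add("groupOfUniqueNames");
                    attrs.put(objectClass);
                    attrs.put("cn", "AppAdmins");
                    attrs.put("uniquemember", "cn=jdoe,cn=Users,dc=example,dc=com");
                    ctx.createSubcontext(roleDn, attrs);
                } finally {
                    ctx.close();
                }
            }
        }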

    With regard to the 29.97/30 thing, you'll find that video people are idiosyncratically imprecise about that. We say 60 when we mean 59.94, we say 30 when we mean 29.97 and we say 24 when we mean 23.976.
    We're quirky.
    Whenever somebody says one of those nice, round numbers, you can assume they're really talking about the corresponding ugly fraction.
    Unless they're film people, in which case 24 means 24, dangit.
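    For anyone who wants the exact fractions behind those round numbers, a small, purely illustrative sketch (Java):

        public class NtscFrameRates {
            public static void main(String[] args) {
                // The "ugly fractions" hiding behind the round numbers quoted above.
                System.out.printf("\"24\" fps is really 24000/1001 = %.5f%n", 24000.0 / 1001.0); // 23.97602
                System.out.printf("\"30\" fps is really 30000/1001 = %.5f%n", 30000.0 / 1001.0); // 29.97003
                System.out.printf("\"60\" fps is really 60000/1001 = %.5f%n", 60000.0 / 1001.0); // 59.94006
            }
        }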

  • What is the best practice to roll out ApEx to production?

    Hi,
    My first ApEx application :) What is the best practice to deploy an ApEx application to production?
    Also, I created end-user accounts and use those accounts to log in to ApEx via the URL (http://xxx.xxx.xxx:8080/apex/f?p=111:1). However, how come sometimes it's still in development mode (i.e., with the Home | Application # | Edit Page # | Create | Session | ... toolbar showing up at the bottom), but sometimes not?
    Thanks much :)
    Helen

    When you set up your users, make sure the radio buttons for both workspace admin and developer are set to No. This should make them an "end user," and they should not see the links. Only developers and workspace admins can see them.

  • Best practices or design framework for designing processes in OSB(11g)

    Hi all,
    We have been working with Oracle 10g; now, in the new project, we are going to use SOA Suite 11g. For 10g we designed our services very much along the lines of the AIA framework, but in 11g, since OSB has been introduced, we are not able to fit the AIA framework exactly, because OSB has a different structure than ESB.
    Can anybody suggest best practices or a design framework for designing processes in OSB or 11g SOA Suite?

    http://download.oracle.com/docs/cd/E12839_01/integration.1111/e10223/04_osb.htm
    http://www.oracle.com/technology/products/integration/service-bus/index.html
    Regards,
    Anuj

  • Archiving Best Practices / How To Guide for Oracle 10g - need urgently

    Hi,
    I apologize if this is a silly question, but I need a step-by-step archiving guide for Oracle 10g and cannot find any reference document. I am in a rather remote part of S.E. Asia and can't seem to find DBAs with the requisite experience to do the job properly. I have had one database lock up this week at a big telecoms provider, and another one at a major bank is about to go. I can easily add LUNs and restructure mirrors etc. at the Unix level [I am a Unix engineer],
    but I know that is not the long-run solution. I am sure the two databases I am concerned about have never been archived properly.
    This is the sort of thing DBAs must do all the time. Can someone point me to the proper documentation so I can do a proper job and archive a few years of data out of these databases? I do not want to do a hack job. At least I can clone the databases and practise on the clones first before I actually touch production.
    -thanks very much
    -gregoire
    [email protected]

    I'm not so sure this is a general database question, as it would be specific to an application and implementation, and as the technology has changed, the database options to support it have too.
    So for example, if you have bought the partitioning option, there may be some sensible procedure for partitioning off older data.
    Things may depend on whether you are talking about an OLTP, a DW, a DSS, or mixed systems.
    DBA's do it all the time because the requirements are different everywhere. Simply deleting a lot of data after copying the old data to another table (as some older systems do) may just wind up giving you performance problems scanning swiss-cheesed data.
    Some places may not archive at all, if they've separated out OLTP from reporting. If all the OLTP stuff is accessed through indices, all the older stuff just sits there. The reporting DB may only have what is needed to be reported on, or be on a standby DB where range scans are sufficient to ignore old data. Then there's Exadata, which has its own strengths.
    Best Practices have to be on similar enough systems, otherwise they are a self-contradiction.
    Get yourself someone who understands your requirements and can evaluate the actual problem. No apology needed, it is not a silly question. But what is silly is assuming what the problem is with no evidence.

  • Best Practice setting up NICs for Hyper V 2008 r2

    I am looking for suggestions on best practices for setting up a Hyper-V 2008 R2 host at a remote location with 5 NICs: one on the management VLAN and the other 4 on the data VLAN. This server will host 2 virtual machines: one is a DC and the other is a member server running the local DHCP service. The server is set up now with one NIC on the management VLAN and the other NICs set to get their IPs from the local DHCP server on the host. We have the virtual networks set up in Hyper-V to point to each of the NICs using the "external connection" type. The virtual servers (DHCP and AD) have their own IPs set within them. The issue we are seeing: when the site loses external connectivity for a while, clients can no longer get IP addresses from the local DHCP server. The current layout is:
    1. NIC on management Vlan -- IP Static -- Physical host
    2. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V  -- virtual server DHCP
    3. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- Virtual server domain controller
    4. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- extra
    5. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- extra
    Thanks in advance

    It looks like you may be overcomplicating things here. More and more of the recommendations from Microsoft at this point would be to create a Logical Switch and then layer on Logical Networks for your management layers, but here is what I would do for your simple remote office.
    Management NIC: Looks good. (Teaming would be better, but only if you had two different switches to protect against link failures at the switch level; that doesn't seem relevant in this case.)
    NIC for the Data Network VLAN: I would use one NIC in your case, if you have the ability to trunk multiple VLANs at the switch level to that NIC. That way you set the VLAN on each VM's NIC that you want it to access, and your virtual switch configuration stays very simple. On this virtual switch, however, I would uncheck IPv4 and IPv6. There is no need to give this NIC an address, as you are just passing traffic through it from the VMs that are marked with VLAN tags. Again, if you have multiple physical switches in the building, teaming could be an option, but it probably adds more complexity than is necessary for a small office.
    Even if you keep your virtual switches linked to separate NICs, unchecking IPv4 and IPv6 makes sense.
    Disable all the other NICs.
    Beyond that, check your routing. Can you ping between all hosts when there is no interruption? Which DHCP server are they getting their addresses from normally? Where are your name-resolution servers (DNS, WINS)?
    No silver bullet here, but maybe a step in the right direction.
    Rob McShinsky (VirtuallyAware.com)
    VirtuallyAware - Experiences in a Virtual World (Microsoft MVP - Virtual Machine)

  • Best Practice : Creating Custom Renderer for Standard Component

    I've been reading the docs and a few threads about Custom Renderers. The best practice seems to be to create a Custom Component where you need a Custom Renderer. Is this the case?
    See [this post|http://forums.sun.com/thread.jspa?forumID=427&threadID=520422]
    I've created several Custom Renderers to override the HTML provided by the Standard Components, however I can't see the benefit in also creating a Custom Component when the behaviour of the standard component is just fine.
    Thanks,
    Damian.

    It all depends on what you are trying to accomplish. Generally speaking, if all you need is for the user-interface output to be changed, then a renderer will work just fine. A new component is usually made in order to provide some fundamental change in server-side functionality not related to the user interface. - Ponderator
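    As a rough illustration of the renderer-only route, here is a minimal sketch; the class name, target markup, and renderer type are hypothetical, not taken from the linked post:

        import java.io.IOException;
        import javax.faces.component.UIComponent;
        import javax.faces.context.FacesContext;
        import javax.faces.context.ResponseWriter;
        import javax.faces.render.Renderer;

        // Replaces only the generated HTML for a component; the standard component's
        // server-side behaviour (value binding, validation, events) is untouched.
        public class DivTextRenderer extends Renderer {
            @Override
            public void encodeEnd(FacesContext context, UIComponent component) throws IOException {
                ResponseWriter writer = context.getResponseWriter();
                writer.startElement("div", component);
                writer.writeAttribute("id", component.getClientId(context), "id");
                Object value = component.getAttributes().get("value");
                if (value != null) {
                    writer.writeText(value, "value");
                }
                writer.endElement("div");
            }
        }

    You would then register the class in faces-config.xml against the standard component's family and the renderer type you want to override, so the existing component keeps its behaviour and only picks up the new markup.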

  • Best Practice Advice - Using ARD for Inventorying System Resources Info

    Hello All,
    I hope this is the place I can post a question like this. If not please direct me if there is another location for a topic of this nature.
    We are in the process of utilizing ARD reporting for all the Macs in our district (3500 +/- a few here and there). I am looking for advice and would like some best practices ideas for a project like this. ANY and ALL advice is welcome. Scheduling reports, utilizing a task server as opposed to the Admin workstation, etc. I figured I could always learn from those with experience rather than trying to reinvent the wheel. Thanks for your time.

    Hey, I am also interested in any tips. We are gearing up to use ARD for all of our Macs, current and future.
    I am having a hard time with entering the user/pass for each machine -- is there an easier way to do so? We don't have nearly as many Macs running as you do, but it's still a pain to do each one over and over. Any hints? Or am I doing it wrong?
    thanks
    -wilt

  • BPC 5 - Best practices - Sample data file for Legal Consolidation

    Hi,
    We are following the steps indicated in the SAP BPC Best Practices: http://help.sap.com/bp_bpcv151/html/bpc.htm
    A Legal Consolidation prerequisite is to have the sample data file that we do not have: "Consolidation Finance Data.xls"
    Does anybody have this file or know where to find it?
    Thanks for your time!
    Regards,
    Santiago

    Hi,
    From [https://websmp230.sap-ag.de/sap/bc/bsp/spn/download_basket/download.htm?objid=012002523100012218702007E&action=DL_DIRECT] you can obtain the .zip file for the Best Practices, including all scenarios and the CSV files (under the misc directory) used in those scenarios.
    Consolidation Finance Data.txt is in there as well.
    Regards,
    ergin ozturk

  • Best practices when carry forward for audit adjustments

    Dear experts,
    I would like to know if someone can share his best practices when performing carry forward for audit adjustments.
    We are actually doing legal consolidation for one customer and we are facing one issue.
    The accounting team needs to pass audit adjustments around April-May for last year.
    So from January to April / May, the opening balance must be based on December closing of prior year.
    Then from May / June to December, the opening balance must be based on Audit closing of prior year.
    We originally planned to create two members for December period, XXXX.DEC and XXXX.AUD
    Once the accountants would know their audit closing balance, they would have to input it on the XXXX.AUD period and a business rule could compute the difference between the closing of AUD and DEC periods and store the result on an opening flow.
    The opening flow hierarchy would be as follow:
    F_OPETOT (Opening balance Total)
        F_OPE (Opening balance from December)
        F_OPEAUD (Opening balance from the difference between closing balance of Audit and December periods)
    Now, assume that we are in October and, for whatever reason, the accountant runs a carry forward for February: he is going to impact the opening balance, because at this time (October) we already have the audit adjustments.
    How to avoid such a thing? What are the best practices in this case?
    I guess it is something you may have encountered if you have done a consolidation project.
    Any help will be greatly appreciated.
    Thanks
    Antoine Epinette
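    As a purely numeric illustration of the rule described above (the balances are made up, not from this thread), the F_OPEAUD flow is just the delta between the two December closings:

        public class OpeningFlowSketch {
            public static void main(String[] args) {
                // Hypothetical closing balances for one account.
                double closingDec = 1000.0;   // XXXX.DEC closing, used from January until the audit
                double closingAud = 1040.0;   // XXXX.AUD closing, keyed in around April/May
                double fOpe = closingDec;                  // F_OPE: opening carried from December
                double fOpeAud = closingAud - closingDec;  // F_OPEAUD: audit adjustment delta
                double fOpeTot = fOpe + fOpeAud;           // F_OPETOT: full audited opening balance
                System.out.printf("F_OPE=%.2f  F_OPEAUD=%.2f  F_OPETOT=%.2f%n", fOpe, fOpeAud, fOpeTot);
            }
        }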

    Cookman and I have been arguing about this since the Paleozoic era. Here's my logic for capturing everything.
    Less wear and tear on the tape and the deck.
    You've got everything on the system. I can't tell you how many times a client has said, "I know there was a better take." The only way to disabuse them of this notion is to look at every take. If it's not on the system, you've got to spend more time finding the tape, adding wear and tear on the tape and the deck. And then there's the moment when you need to replace the audio for one word from another take; you can quickly check all the other takes (particularly if you've done a thorough job logging the material -- see below).
    Once it's on the system, you still need to log and learn the material. You can scan through material much faster once it's captured. Jumping around the material is much easier.
    There's no question that logging the material before you capture makes you learn the material in a more thorough way, but with enough self-discipline you can learn the material just as thoroughly once it's been captured.
