Best Practice: Editing

I'm using the version previous to iMovie 8, which I call iMovie 7. I think iMovie converts all DV to Apple Intermediate Codec (AIC). Knowing that, I did not want iMovie to go through all the heavy work at the end of my edit during compression. So I created QT file(s) using AIC and imported that into my final edit, basically creating complete video packages as QT files and then importing them into my final video creation.
1. Am I doing this all for naught? (Does it matter?)
2. Am I better off creating the entire 60 min plus video in one edit?
3. When compressing as a QT file, the details show medium quality; is there any way to make it high? (Does it matter?)

"..the version previous to iMovie 8.." ..is called iMovie HD 6, which is where you've posted this message, so you must realise that it is called iMovie HD 6!
"..I think iMovie converts all DV to Apple Intermediate Codec (AIC).." ..No: it only converts non-DV into AIC, for instance it converts HDV into AIC. Ordinary tape-based DV (recorded on full-size DV or miniDV tapes) can be imported directly into iMovie without any conversion: it's what iMovie is built to handle!
"..I did not want iMovie to go through all the heavy work at the end of my edit during compression.." ..during compression for what? It would compress for burning a DVD, but it wouldn't compress for exporting an edited movie back to a camcorder tape, it wouldn't compress if just producing a DV file. It might compress for a QuickTime file or other format ..what exactly do you want to do with your movie after it's been edited in iMovie?
"..I created QT file(s) using AIC and imported that into my final edit.." ..Not necessary if you really are importing from a miniDV camcorder. But if your material comes from a hard-disc camcorder, or a mini-DVD camcorder, or from a memory chip or some other source then it might be necessary.
You don't say what your source is ..it'd be very helpful if you explained where your source video is coming from.
1. Am I doing this all for naught? ..Possibly; what's your source?
2. Am I better off creating the entire 60 min plus video in one edit? ..Huh? D'you mean in one import?
3. When compressing as a QT file, the details show medium quality; is there any way to make it high? (Does it matter?)
To make it high, play it back in QuickTime Pro, select 'Window' and 'Movie Properties' (or press ⌘J), choose the Video Track and 'Visual Settings', and tick the High Quality box at the bottom right of the window.
..or just set the QuickTime encoding to DV Stream.
..or, under the QuickTime movie settings, choose the appropriate size (720x480, or maybe 640x480, for American/NTSC, 720x576 for European PAL video ..unless you're using 16:9 shaped video) and it should be saved correctly (the screenshot that accompanied the original post showed sample compression settings for web streaming, but you'd choose what's appropriate for your purpose).
..then play it and tick the High Quality box if you want High Quality.
Does it matter? ..That depends on what you want to do with it: there's no point in setting it High for YouTube or for web distribution, but if you want to display it on a High-Definition TV then you'd want it High. You haven't said what you want to do with your export from iMovie..

Similar Messages

  • Best practices for Edit Proxies in Final Cut Server?

    We just bought Final Cut Server, and for the most part are pretty happy with the product. We are a small production facility, and primarily work with DVCPro HD footage at 720P. One feature we'd like to use would be the edit proxies feature in FCS, but they don't seem to be working for us.
    _Hosting Computer_
    We're using a Mac mini server with Snow Leopard Server and Final Cut Server 1.5.
    Problem
    Whenever we specify in the Administrator Pane of FCS, under Preferences and Analyze, that we'd like to use a custom transcode setting for the edit proxies (keeping Frame Size, TC, and Frame Rate the same, but changing the codec to H.264 least quality), the result is a different compression than expected. We tested the compression settings using Compressor on the server, and we get the desired results:
    Input: DVCPro HD 720P QuickTime 960X720 (1248 x 702) 23.98 fps
    Output: H.264 QuickTime 960X720 (1248 x 702) 23.98 fps
    Also did the same outputs using Apple ProRes and Photo JPEG and got the following outputs:
    Output: Apple ProRes QuickTime 960X720 (1248 x 702) 23.98 fps
    Output: Photo JPEG QuickTime 960X720 (1248 x 702) 23.98 fps
    (We also made new transcode settings for these Compressor settings, and we were not able to control the compression through FCS with these settings either. Our target codec is H.264.)
    After connecting the new compression settings on the Mac mini server to a new transcode setting in the FCS Admin Pane, we restarted the Java client, logged in to FCS as an FCS admin, opened the Admin Pane, and under Preferences/Analyze changed the edit proxy setting to the new H.264 setting that worked perfectly when using Compressor. After uploading a Final Cut project with one associated media file (DVCPro HD 720P QuickTime 960X720 (1248 x 702) 23.98 fps), the resulting edit proxy did not match the specification: 384 x 216, 23.98 fps, TC matches source.
    The proxy also did not dynamically connect when checking out the project from FCS, selecting edit proxies, keeping media with the project, and saving to the desktop.
    Question 1:
    Does anyone have any best practice transcode setting(s) for creating edit proxies from DVCPro HD 720P that dynamically connect and are smaller in size than the original? H.264? Photo JPEG?
    Question 2:
    Why is Final Cut Server's Compressor giving a different output, when the same settings work well with just Compressor?
    Question 3:
    Does H.264 work for creating dynamically linked edit proxies (i.e., no need to reconnect)?
    I imagine this information would be very useful to the community, so any input or solutions will be greatly appreciated.
    Thank you.

    H.264 is a puzzling choice of codec for edit proxies. H.264 is not an edit-friendly codec since it is a complex long-GOP structure and will require a ton of rendering just to play back the timeline. ProRes 422 Proxy would be a much better choice for editing. I'm afraid I can't account for the differences between Compressor and FCServer in this case, but my gut tells me the non-I-frame codec you are trying to use for edit proxies might have something to do with it. Maybe for an edit-proxy workflow to operate properly on the server side the codec must be I-frame? Is there any reason you do not want to use ProRes 422 Proxy? They are about 1/3 the footprint of 720p24 DVCPRO-HD, which is already very efficient. For that frame rate and frame size they would be roughly half the heft of good ol' DV25.
    So my answer to all three questions would be to try ProRes 422 Proxy for your edit proxies and see if everything lines up.

  • Best practice: interface for editing documents

    Hello
    I use JDeveloper 11g 11.1.1.3.0, ADF Faces.
    I have got a task to create a web interface for editing documents.
    Every document has a head and a specification.
    The head has a lot of fields, and every row in a specification also has a lot of fields.
    There are a few PL/SQL procedures I need to call to save a document in the database, and I need to call them in a single transaction.
    So I need to fill in the whole document and only after that save it to the database.
    To fill in some of the fields I need to use a component like a List of Values (with autoSuggestBehavior and with selecting a value from the list).
    So the question is: what is the best practice for developing an interface like this?
    I had some trouble when I tried to use ADF BC.
    Maybe there are tutorials?
    I will be very thankful for any advice or links.
    Anatolii

  • Best Practices for Professional video editing

    Hi
    I'd like to know your thoughts on what the most professional / efficient method for editing is. At the moment, I archive all the footage from a DV tape through iMovie (I just find iMovie easier for doing this), save / archive all the imported segments of clips I need, name them, then import them into FCP.
    When I finish an edit I export an uncompressed QuickTime movie, then back up the entire project on an external drive.
    Is this good practice? Should I export the final edit to tape?
    I've just started out as a video-maker as a paid profession and I'd like to know the most 'by the book' methods.
    Thanks
    G5 Dual   Mac OS X (10.4.8)  

    Sounds to me like you're doing a whole lot of extra steps using iMovie for your import. You're going to lose some of FCP's best media features by not digitizing with FCP. Batch Capture in FCP isn't hard to learn.
    I wouldn't say there's any "rulebook" for professional editors. We all work a little differently, but here are some of my "best practices":
    Always clearly name and label all of the tapes that you are using in a fashion that makes sense to you. When I cut a large project I may have multiple tapes. If I lose a piece of media accidentally, it's easier to go back and re-digitize if I have organized the project early on.
    Clearly label bins and use them wisely. For example, on a small project I might have a "video" bin, a "music" bin and a "graphics" bin. This saves searching through one large bin.
    On larger projects, I try to think ahead to how I will edit and make bins accordingly. For example I might have bins as follows, interviews, b-roll location a, b-roll location b and so on. Then I'll have music bins, animation bins and still graphic bins. I generally try to save all to one hard drive which saves me looking through three or four drives. This isn't always possible depending upon the size of the project.
    As for back-up: lots of people buy hard drives for each project and then store them until they need them next. Of course, keep all of your raw footage and you can always re-digitize.
    When I'm done with a project I save the completed project to tape...this is for dubs and library. I save the FCP information on a DVD and clear the media off the drive, because I can't afford multiple hard drives. I would rather re-digitize my raw footage if I need to re-do the project in the future.
    That's how I do it, but other editors have other methods. I would highly suggest digitizing in FCP and not iMovie, but that's entirely up to you. You're not doing anything "wrong."
    G4 Dual Processor   Mac OS X (10.4.1)  

  • Best practices for setting up users on a small office network?

    Hello,
    I am setting up a small office and am wondering what the best practices/steps are to setup/manage the admin, user logins and sharing privileges for the below setup:
    Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
    Video Editing Suite: Want to connect a new iMac and a Mac Pro, on an open login (multiple users)
    All machines are to be able to connect to the network, peripherals and external hard drive. Also, I would like to setup drop boxes as well to easily share files between the computers (I was thinking of using the external harddrive for this).
    Thank you,

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more and detail information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang

  • Best practice for if/else when one outcome results in exit [Bash]

    I have a bash script with a lot of if/else constructs in the form of
    if <condition>
    then
        <do stuff>
    else
        <do other stuff>
        exit
    fi
    This could also be structured as
    if ! <condition>
    then
        <do other stuff>
        exit
    fi
    <do stuff>
    The first one seems more structured, because it explicitly associates <do stuff> with the condition.  But the second one seems more logical because it avoids explicitly making a choice (then/else) that doesn't really need to be made.
    Is one of the two more in line with "best practice" from pure bash or general programming perspectives?

    I'm not sure if there are 'formal' best practices, but I tend to use the latter form when (and only when) it is some sort of error checking.
    Essentially, this would be when <do stuff> was more of the main purpose of the script, or at least that neighborhood of the script, while <do other stuff> was mostly cleaning up before exiting.
    I suppose more generally, it could relate to the size of the code blocks.  You wouldn't want a long involved <do stuff> section after which a reader would see an "else" and think 'WTF, else what?'.  So, perhaps if there is a substantial disparity in the lengths of the two conditional blocks, put the short one first.
    But I'm just making this all up from my own preferences and intuition.
    When nested this becomes more obvious, and/or a bigger issue.  Consider two scripts:
    if [[ test1 ]]
    then
        if [[ test2 ]]
        then
            echo "All tests passed, continuing..."
        else
            echo "failed test 2"
            exit
        fi
    else
        echo "failed test 1"
    fi

    if [[ ! test1 ]]
    then
        echo "failed test 1"
        exit
    fi
    if [[ ! test2 ]]
    then
        echo "failed test 2"
        exit
    fi
    echo "passed all tests, continuing..."
    This just gets far worse with deeper levels of nesting.  The second seems much cleaner.  In reality though I'd go even further to
    [[ ! test1 ]] && echo "failed test 1" && exit
    [[ ! test2 ]] && echo "failed test 2" && exit
    echo "passed all tests, continuing..."
    edit: added test1/test2 examples.
    Last edited by Trilby (2012-06-19 02:27:48)
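    For what it's worth, a minimal sketch of the same guard-clause idea wrapped in a helper function. The die() helper and the file tests here are invented for illustration and are not from the posts above:
    #!/bin/bash
    # Hypothetical helper: print the message to stderr and exit non-zero,
    # so each guard fits on one line and the failure status is explicit.
    die() {
        echo "$1" >&2
        exit 1
    }

    [[ -r "$1" ]] || die "failed test 1: cannot read $1"
    [[ -s "$1" ]] || die "failed test 2: $1 is empty"
    echo "passed all tests, continuing..."
    Compared with the && chains above, "test || die" still exits even if the echo itself fails, and it hands a non-zero status back to the caller.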

  • Best Practice to implement row restriction level

    Hi guys,
    We need to implement a security row filter scenario in our reporting system. Following several recommendations already posted in the forum we have created a security table with the following columns
    userName    Object Id
    U1          A
    U2          B
    where our fact table is something like this:
    Object Id    Fact A
    A            23
    B            4
    Additionally we have created row restriction on the universe based on the following where clause:
    UserName = @Variable('BOUSER')
    If the report only contains objects based on the fact table, the restriction is never applied. This makes sense, as the docs specify that row restrictions are only applied if the table is actually invoked in the SQL statement (the SELECT statement, presumably).
    The question is the following: what is the best practice recommended in this situation? Create a dummy column in the security table, map it into the universe and include the object in the query?
    Thanks
    Edited by: Alfons Gonzalez on Mar 8, 2012 5:33 PM

    Hi,
    This solution also seemed to be the most suitable for us. The problem we have discovered: when the restriction set is not applied for a given user (the advantage of using a restriction set is that it is not always applied), the query joins the fact table with the security table without applying any WHERE clause based on @variable('USER'). This is not a problem if the security table contains a 1:1 relationship between users and secured objects, but when (as in our case) the relationship is 1:n, the query returns "additional wrong rows".
    For the moment we have discarded the use of restriction sets. Putting a dummy column based on the security table may have undesired effects when the condition is not applied.
    I don't know if anyone has found a workaround for this.
    Alfons
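    For reference, a rough sketch of the SQL the restriction is meant to produce once both the join and the user filter are injected. The table and alias names are invented for illustration; only the columns and the @Variable('BOUSER') clause come from the posts above, and @Variable() is resolved by the universe at query-generation time, so this is not standalone SQL:
    -- Fact table joined to the security table with the user filter applied.
    -- Without the WHERE clause, a 1:n security table multiplies the fact rows,
    -- which is exactly the "additional wrong rows" problem described above.
    SELECT f.object_id,
           f.fact_a
    FROM   fact_table f
           INNER JOIN security_table s
                   ON s.object_id = f.object_id
    WHERE  s.username = @Variable('BOUSER')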

  • Best Practice for setting systems up in SMSY

    Good afternoon - I want to cleanup our SMSY information and I am looking for some best practice advice on this. We started with an ERP 6.0 dual-stack system. So I created a logical component Z_ECC under "SAP ERP" --> "SAP ECC Server" and I assigned all of my various instances (Dev, QA, Train, Prod) to this logical component. We then applied Enhancement Package 4 to these systems. I see under logical components there is an entry for "SAP ERP ENHANCE PACKAGE". Now that we are on EhP4, should I create a different logical component for my ERP 6.0 EhP4 systems? I see in logical components under "SAP ERP ENHANCE PACKAGE" there are entries for the different products that can be updated to EhP4, such as "ABAP Technology for ERP EHP4", "Central Applications", ... "Utilities/Waste&Recycl./Telco". If I am supposed to change the logical component to something based on EhP4, which should I choose?
    The reason that this is important is that when I go to Maintenance Optimizer, I need to ensure that my version information is correct so that I am presented with all of the available patches for the parts that I have installed.
    My Solution Manager system is 7.01 SPS 26. The ERP systems are ECC 6.0 EhP4 SPS 7.
    Any assistance is appreciated!
    Regards,
    Blair Towe

    Hello Blair,
    In this case you have to assign the products EHP 4 for ERP 6 and SAP ERP 6 to your system in SMSY.
    You will then have 2 entries in SMSY, one under each product: the main instance for EHP 4 for ERP 6 must be Central Applications, and the one for SAP ERP 6 is SAP ECC Server.
    This way your system should be correctly configured to use the MOPZ.
    Unfortunately I'm not aware of a guide explaining these details.
    Sometimes the System Landscape guide at service.sap.com/diagnostics can be very useful. See also note 987835.
    Hope it can help.
    Regards,
    Daniel.
    Edited by: Daniel Nicol on May 24, 2011 10:36 PM

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that makes tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2,500 tables in our database, and they all have the same columns and only differ in table name. Each device sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice for designing a database in this situation?
    When accessing the database from a C# application, which is better to use: direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    Tables columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I've come across in my 9 months of work at the company (I'm working as a software engineer, but I am planning to take over database maintenance since no one is maintaining it right now and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate a report for the previous month for all their cars; some clients have 100+ cars, and some have few. This is when the real issue starts: they are pulling their data from our server over the internet while 2,000 units are sending data to our server, and they keep getting read timeouts since SQL Server gives priority to the inserts and holds all the SELECT commands. I solved it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve; the problem is that the person who wrote the C# app used hard-coded SQL statements, AND the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now, talking about reports: there are summary reports, stops reports, zone reports, etc. Most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that SELECT statements don't get kicked out in favor of INSERT commands, but does SQL Server automatically select from the snapshots or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case since our database size is 78GB.
    When I run code analysis on the app, Visual Studio tells me I'd be better off using stored procedures and views than hard-coded SELECT statements; what difference will this make in terms of performance?
    Thanks in advance.
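    As a rough sketch of the snapshot and single-table ideas discussed in this thread (the database name, table name and column types below are invented for illustration, and READ_COMMITTED_SNAPSHOT needs SQL Server 2005 or later, so it depends on the version actually running):
    -- With READ_COMMITTED_SNAPSHOT on, ordinary READ COMMITTED SELECTs read row
    -- versions instead of blocking behind the constant INSERT traffic, with no
    -- application change. The ALTER needs exclusive access to the database
    -- (or add WITH ROLLBACK IMMEDIATE).
    ALTER DATABASE TrackingDB SET READ_COMMITTED_SNAPSHOT ON;

    -- The alternative, ALLOW_SNAPSHOT_ISOLATION, is not automatic: each reporting
    -- connection would have to run SET TRANSACTION ISOLATION LEVEL SNAPSHOT itself.

    -- One consolidated table keyed by device instead of 2,500 identical tables;
    -- the column names come from the list above, the types are guesses.
    CREATE TABLE dbo.DeviceMessages (
        DeviceID         int        NOT NULL,
        MessageID        int        NOT NULL,
        MessageDate      datetime   NOT NULL,
        MessageTime      varchar(8) NOT NULL,
        MessageSpeed     float      NULL,
        MessageSatNumber int        NULL,
        MessageIO        int        NULL,
        CONSTRAINT PK_DeviceMessages PRIMARY KEY (DeviceID, MessageDate, MessageID)
    );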

  • Best Practice: Dynamically changing Item-Level permissions?

    Hi all,
    Can you share your opinion on the best practice for Dynamically changing item permissions?
    For example, given this scenario:
    Item Creator can create an initial item.
    After the item creator creates it, the item becomes read-only for him. Other users can create items, but they can only see their own entries (Created By).
    At any point in time, other users can be given Read access (or any other access) to a specific item by an Administrator.
    Edit permission on the item is then given to a Reviewer and an Approver. Reviewers can only edit, and Approvers can only approve.
    After the item has been reviewed, the item becomes read-only to everyone.
    I read that there is only a specific number of unique permissions for a List / Library before performance issues start to set in. Given the requirements above, it looks like item-level permission is unavoidable.
    Do you have certain ideas how best to go with this?
    Thank you!

    Hi,
    According to your post, my understanding is that you wanted to change item level permission.
    There is no out of the box way to accomplish this with SharePoint.               
    You can create a custom permission level using Visual Studio to allow users to add & view items, but not edit permission.   
    Then create a group with the custom permission level. The users in this group would have permission to create and add items, but they could not edit them.
    On CodePlex there is a set of custom workflow activities, but by default it only has four permission levels:
    Full Control, Design, Contribute and Read.
    You should also create some custom permission levels for your scenario.
    What's more, when using SharePoint 2013 Designer, you should use only the 2010 platform to create the workflow using these activities:
    https://spdactivities.codeplex.com/wikipage?title=Grant%20Permission%20on%20Item
    Thanks & Regards,
    Jason
    Jason Guo
    TechNet Community Support

  • What is a best practice for managing a large amount of ever-changing hyperlinks?

    I am moving an 80+ page printed catalog online. We need to add hyperlinks to our Learning Management System courses to each reference of a class - there are 100s of them. I'm having difficulty understanding what the best practice is for consistent results when I need to go back and edit (which we will have to do regularly).
    These seem like my options:
    Link the actual text - sometimes when I go back to edit the link I can't find it in InDesign but can see it's there when I open up the PDF in Acrobat
    Draw an invisible box over the text and link it - this seems to work better but seems like an extra step
    Do all of the linking in Acrobat
    Am I missing anything?
    Here is the document in case anyone wants to see it so far. For the links that are in there, I used a combination of adding the links in InDesign then perfecting them using Acrobat (removing additional links or correcting others that I couldn't see in InDesign). This part of the process gives me anxiety each month we have to make edits. Nothing seems consistent. Maybe I'm missing something obvious?

    What exactly needs to be edited: the hyperlink, the content, or something else?

  • Best practices of having a different external/internal domain

    In the midst of migrating from a joint Windows/Mac server environment to a completely Apple one. Previously, DNS was hosted on the Windows machine using the companyname.local internal domain. When we set up the Apple server, our Apple contact created a new internal domain, called companyname.ltd. (Supposedly there was some conflict in having a 10.5 server be part of a .local domain; either way, it was no big deal.) Companyname.net is our website.
    The goal now is to have the Leopard server run everything - DNS, Kerio mailserver, website, the works. In setting up the DNS on the Mac server this go around, we were advised to just use companyname.net as the internal domain name instead of .ltd or .local or something like that. I happen to like having a separate local domain just for clarity's sake - users know if they are internal/external, but supposedly the Kerio setup would respond much better to just the one companyname.net.
    So after all that - what's the best practice of what I should do? Is it ok to have companyname.net be the local domain, even when companyname.net is also the address to our external website? Or should the local domain be something different from that public URL? Or does it really not matter one way or the other? I've been running companyname.net as the local domain for a week or so now with pretty much no issues, I'd just hate to hit a point where something breaks long term because of an initial setup mixup.
    Thanks in advance for any advice you all can offer!

    Part of this is personal preference, but there are some technical elements to it, too.
    You may find that your decision is swayed by the number of mobile users in your network. If your internal machines are all stationary then it doesn't matter if they're configured for companyname.local (or any other internal-only domain), but if you're a mobile user (e.g. on a laptop that you take to/from work/home/clients/starbucks, etc.) then you'll find it a huge PITA to have to reconfigure things like your mail client to get mail from mail.companyname.local when you're in the office but mail.companyname.net when you're outside.
    For this reason we opted to use the same domain name internally as well as externally. Everyone can set their mail client (and other apps) to use one hostname and DNS controls where they go - e.g. if they're in the office or on VPN, the office DNS server hands out the internal address of the mail server, but if they're remote they get the public address.
    For the most part, users don't know the difference - most of them wouldn't know how to tell anyway - and using one domain name puts the onus on the network administrator to make sure it's correct which IMHO certainly raises the chance of it working correctly when compared to hoping/expecting/praying that all company employees understand your network and know which server name to use when.
    Now one of the downsides of this is that you need to maintain two copies of your companyname.net domain zone data - one for the internal view and one for external (but that's not much more effort than maintaining companyname.net and companyname.local) and make sure you edit the right one.
    It also means you cannot use Apple's Server Admin to manage your DNS on a single machine - Server Admin only understands one view (either internal or external, but not both at the same time). If you have two DNS servers (one for public use and one for internal-only use) then that's not so much of an issue.
    Of course, you can always drive DNS manually by editing the zone files directly.
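    To make the "two copies of the zone" point concrete, here is a rough BIND sketch of a split-view setup. The network range and file names are invented, and note that once views are used, every zone you serve has to live inside a view:
    view "internal" {
        match-clients { 192.168.0.0/24; localhost; };
        zone "companyname.net" {
            type master;
            file "db.companyname.net.internal";   // internal addresses, e.g. the LAN IP of the mail server
        };
    };

    view "external" {
        match-clients { any; };
        zone "companyname.net" {
            type master;
            file "db.companyname.net.external";   // public addresses only
        };
    };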

  • Unicode Migration using National Characterset data types - Best Practice ?

    I know that Oracle discourages the use of the national character set and national character set data types (NCHAR, NVARCHAR) but that is the route my company has decided to take, and I would like to know what the best practice is regarding this, specifically in relation to stored procedures.
    The database schema is being converted by changing all CHAR, VARCHAR and CLOB data types to NCHAR, NVARCHAR and NCLOB data types respectively and I would appreciate any suggestions regarding the changes that need to be made to stored procedures and if there are any hard and fast rules that need to be followed.
    Specific questions that I have are :
    1. Do CHAR and VARCHAR parameters need to be changed to NCHAR and NVARCHAR types ?
    2. Do CHAR and VARCHAR variables need to be changed to NCHAR and NVARCHAR types ?
    3. Do string literals need to be prefixed with 'N' in all cases ? e.g.
    in variable assignments - v_module_name := N'ABCD'
    in variable comparisons - IF v_sp_access_mode = N'DL'
    in calls to other procedures passing string parameters - proc_xyz(v_module_name, N'String Parameter')
    in database column comparisons - WHERE COLUMN_XYZ = N'ABCD'
    If anybody has been through a similar exercise, please share your experience and point out any additional changes that may be required in other areas.
    The application is written in COBOL, and it is also being changed to be Unicode compliant. Database details are as follows:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    NLS_CHARACTERSET = WE8MSWIN1252
    NLS_NCHAR_CHARACTERSET = AL16UTF16

    ##1. While doing a test conversion I discovered that VARCHAR parameters need to be changed to NVARCHAR2 and not VARCHAR2, same for VARCHAR variables.
    VARCHAR columns/parameters/variables should not by used as Oracle reserves the right to change their semantics in the future. You should use VARCHAR2/NVARCHAR2.
    ##3. Not sure I understand, are you saying that Unicode columns (NVARCHAR2, NCHAR) in the database will only be able to store character strings made up from WE8MSWIN1252 characters?
    No, I meant literals. You cannot include non-WE8MSWIN1252 characters into a literal. Actually, you can include them under certain conditions but they will be transformed to an escaped form. See also the UNISTR function.
    ## Reason given for going down this route is that our application works with SQL Server and Oracle and this was the best option
    ## to keep the code/schemas consistent between the two databases
    First, you have to keep two sets of scripts anyway because syntax of DDL is different between SQL Server and Oracle. There is therefore little benefit of just keeping the data type names the same while so many things need to be different. If I designed your system, I would use a DB-agnostic object repository and a script generator to produce either SQL Server or Oracle scripts with the appropriate data types or at least I would use some placeholder syntax to replace placeholders with appropriate data types per target system in the application installer.
    ## I don't know if it is possible to create a database in SQL Server with a Unicode characterset/collation like you can in Oracle, that would have been the better option.
    I am not an SQL Server expert but I think VARCHAR data types are restricted to Windows ANSI code pages and those do not include Unicode.
    -- Sergiusz
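    Purely as an illustration of the points above, a small PL/SQL sketch (the body and parameter names are made up, apart from proc_xyz from the original question): parameters and variables moved to NVARCHAR2, an N-prefixed literal, and UNISTR for a character outside WE8MSWIN1252.
    CREATE OR REPLACE PROCEDURE proc_xyz (
        p_module_name IN NVARCHAR2,               -- was VARCHAR2
        p_parameter   IN NVARCHAR2                -- was VARCHAR2
    ) AS
        v_sp_access_mode NVARCHAR2(2) := N'DL';   -- N-prefixed literal
    BEGIN
        IF p_parameter = v_sp_access_mode THEN
            -- UNISTR builds a string from escaped Unicode code points, e.g. \20AC is the euro sign
            DBMS_OUTPUT.PUT_LINE(UNISTR('Sample: \20AC'));
        END IF;
    END proc_xyz;
    /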

  • Printing best practices

    I've been asking a bunch of questions recently on TS printing and realize that I should just start from scratch. Since I'm not sure what best practices are for this environment, I would like to get everyone's opinion. 
    This environment:
    20 Server 2008 R2 terminal servers and approximately 200 fat clients (mixed XP and 7). Currently, all network printers are installed on each TS individually (not shared). We also have about 10 USB printers that redirect. Our network printers are set up on 5 different local servers since we have multiple locations. We print both from local desktops and terminal servers.
    What we need is for all network printers to be on each server like they are currently, but I'd like to eliminate the need to manage each one on each and every server whenever there is a change. Our current environment was set up by previous IT personnel and I'm not sure if it's optimal.
    I understand there are multiple ways to deploy printers but I don't know what is best for our environment. I've tried Print Management but I need to be able to set preferences. I've tried GPP in Computer Configuration but it doesn't seem to work (possibly because of the current set up). I would like to know how others would manage the printers in this environment, even if I need to delete everything and start over. I am also inexperienced with servers and group policy, so I will ask follow-up questions to most responses.
    Sorry in advance!
    Edit: To be more clear about my scope of knowledge, I know where Active Directory and Group Policy Management reside. I have modified existing group policies but not made new ones. Since all of our changes always apply to all users/terminal servers/roaming profiles, I've never needed to create OUs or use any kind of item-level targeting, so I am not familiar with those.
    Also, I would greatly appreciate not being redirected to another site/forum for answers. I've read hundreds and am getting mixed responses since I'm not sure what is appropriate for this particular environment. That, and because I need layman's terms :) Thank you!

    Hey Lynnette
    I read through some of the other questions you were asking. 
    Deployed Printers from Print Management is only for adding printer connections; it's not for adding local printers, and Deployed Printers does not support setting a default printer.
    Group Policy Preferences supports adding local printers and connections. It can be used to set the default, but I'm not sure whether that's for connections or local printers.
    If the end result is to have the same configuration of local printers on multiple machines, I suggest using \windows\system32\spool\tools\PrintBRM.exe to backup the local printers from your Primary machine, then restore to all the other targets. 
    You can create a scheduled task to perform the backup and restores.
    If you are looking to add printer connections in the "Computer" context (all users logging on will get the connection to the shared printer), you can achieve this using the local machine policy or using a domain policy that only applies to a specific set of computers. But once again no default is set, though it's fairly easy to set the default with printui.exe or prnmngr.vbs, both included with the operating system.
    Alan Morris Windows Printing Team
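    As a concrete (but untested here) sketch of the PrintBRM suggestion, something along these lines would back up the queues on the reference machine and restore them elsewhere. The paths, target server name and schedule are invented for illustration:
    rem Run on the machine that holds the reference set of printers
    cd /d %WINDIR%\System32\Spool\Tools
    rem -B = backup, -F = export file (queues, drivers and ports go into it)
    PrintBrm.exe -B -F C:\PrinterBackup\printers.printerExport

    rem Restore onto a target terminal server (run it there, or point -S at the target)
    PrintBrm.exe -R -S \\TS01 -F C:\PrinterBackup\printers.printerExport

    rem Example scheduled task for a nightly backup at 02:00
    schtasks /Create /TN "PrinterBackup" /SC DAILY /ST 02:00 /TR "C:\Scripts\printer-backup.cmd"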

  • Home Movie Cataloging - BEST PRACTICES

    I have about 200 hours of old home movies on VHS which I am in the process of adding to my iMac. I am wondering about 'best practices' on how much video can be stored inside of iMovie '08, when how much video becomes too much inside of the program, etc.
    In a perfect world, I'd like to simply import all of my home videos into iMovie, leave them in the 'library' section, and make 2-5 minute long clips in the 'projects' section for sharing with family members, but never deleting anything from the 'library'. Is this a good way to store original data? Would it be smarter to export all of the original video content to .DV files or something like that for space saving, etc?
    Can I use iMovie to store and catalog all of my old home movies in the same way I use iPhoto to store ALL of my photos, and iTunes to store ALL of my music/hollywood-movies, etc?

    We-ell, since no-one else has replied:
    1 hour of DV (digital video in the file system which iMovie uses) needs 13GB of hard disc space.
    You have 200 hours of video. 200 x 13 = 2,600 gigabytes. Two point six terabytes. If you put all that on one-and-a-bit 2TB hard discs, and a hard disc fails - oops! - where's your backup? ..Ah, on another one-and-a-bit 2TB hard discs ..or, preferably, spread over several hard discs, so that if one fails you haven't lost everything!
    iMovie - the program - can handle video stored on external discs. But are you willing to pay the price for those discs? If so, fine! Digitise all your VHS and store it on computer discs (prices come down month by month).
    Yes, you can "mix'n'match" clips between different projects, making all sorts of "mash-ups" or new videos from all the assorted video clips. But you'll need more hard disc space for the editing, too. You could use your iMac's internal hard disc for that ..or use one of the external discs for doing the editing on. That's how professionals edit: all the video "assets" on external discs, and edit onto another disc. That's what I do with my big floorstander PowerMac, or whatever those big cheesegraters were called..
    So the idea's fine, as long as you have all the external storage you'd need, plus the backup in case one of those discs fails, and all the time and patience to digitise 200 hours of VHS.
    Note that importing from VHS will import material as one long, continuous take - there'll be no automatic scene breaks between different shots - so you'll have to spend many hours chopping up the material into different clips after importing it.
    Best way to index that? Dunno; there have been several programs which supposedly do the job for you (..I can't remember their names; I've tried a few: find them by Googling..) but they've been more trouble - and taken up more disc space - than I've been prepared to bother with. I'd jot down the different clips as you create them, either by jotting in TextEdit (simplest) or in a database or spreadsheet program such as Excel or Numbers or similar ..or even in a notebook.
    Jot down the type of footage (e.g; 16th Birthday party), name of clip (e.g; 016 party), duration (e.g; 06:20 mins and seconds) and anything else you might need to identify each clip.
    Best of luck!
