Which is the correct way to handle a document workflow?

We are starting a project to handle a document workflow.
We have an Enterprise with some users that edit and publish documents,
and related subcompanies whose users only read the documents.
We have to handle the revisions of documents, the process phases, the validation
by a reviewer for each phase, and also control access to the documents.
Our backend database is Oracle 8 for Workgroups (in the Enterprise) and IIS handles the client access (from the subcompanies).
We want to use this architecture: client (HTML) and server (servlets, JSP).
The documents are written with Word 97 and are stored in Oracle 8.
In the future we plan to upgrade Oracle 8 FWG > Oracle8 Enterprise > Oracle8i, and to migrate from IIS (NT web server) to UNIX (Solaris).
My questions are:
1) How do we control the opening and saving of documents over the Oracle connection?
2) Is it better to store the documents inside Oracle or just store their URLs in tables?
3) If I want to use the ConText cartridge as the search mechanism, where do I have to store these documents?
4) Do I need an application server for the servlets? Which release?
Could you help me find the correct solution? I would appreciate any suggestions.
Thanks,
Lorenzo Baldovini
[email protected]
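
Regarding questions 1 and 2: a common pattern is to keep each document (or revision) as a BLOB in the database, which also makes it straightforward to build a ConText/Oracle Text index over the content, and to have a servlet stream it to the browser on demand. Below is a minimal sketch using current servlet/JDBC idioms rather than the 1999-era APIs of the original post; the table DOCUMENTS (DOC_ID, FILE_NAME, CONTENT) and the JNDI data source name are illustrative, not from the post.

    // Minimal sketch: a servlet that streams a Word document stored as a BLOB.
    // Table/column names and the JNDI data source name are hypothetical.
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.sql.DataSource;

    public class DocumentServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            long docId = Long.parseLong(req.getParameter("id"));
            try {
                DataSource ds = (DataSource) new InitialContext()
                        .lookup("java:comp/env/jdbc/DocStore");   // hypothetical JNDI name
                try (Connection con = ds.getConnection();
                     PreparedStatement ps = con.prepareStatement(
                             "SELECT file_name, content FROM documents WHERE doc_id = ?")) {
                    ps.setLong(1, docId);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (!rs.next()) {
                            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                            return;
                        }
                        resp.setContentType("application/msword");
                        resp.setHeader("Content-Disposition",
                                "attachment; filename=\"" + rs.getString(1) + "\"");
                        // Stream the BLOB to the browser in chunks.
                        try (InputStream in = rs.getBinaryStream(2);
                             OutputStream out = resp.getOutputStream()) {
                            byte[] buf = new byte[8192];
                            int n;
                            while ((n = in.read(buf)) > 0) {
                                out.write(buf, 0, n);
                            }
                        }
                    }
                }
            } catch (Exception e) {
                throw new ServletException(e);
            }
        }
    }

Saving goes the other way: an INSERT or UPDATE on the CONTENT column with PreparedStatement.setBinaryStream. Storing only a URL keeps the database small, but then revision control and searching have to be handled on the file system instead.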

Similar Messages

  • Best way to handle multi-page documents?

    Hi,
    I'm creating a multi-page editor on top of TLF and I'd like to hear some opinions about the best way to handle multi-page documents.
    I'm basically considering two options:
    1.) A single RichEditableText with several ContainerControllers.
    2.) Several RichEditableText components.
    Thanks.

    I'm not sure you can create a RichEditableText and give it multiple ContainerControllers. If you use multiple RichEditableText
    components, then you will have to decide in advance which text goes in which one.
    We have posted sample code for doing this in straight ActionScript, which you could presumably also host inside a Flex container. See:
    http://blogs.adobe.com/tlf/examples/
    Look for "ActionScript Pagination Example".
    - robin

  • What's the correct way to handle changes in RDBMS/DBadapter?

    In my project all changes to the database are made not via JDeveloper but via TOAD. This means the DBAdapters must be made aware of changes in the database.
    I tried to re-run the DBAdapter wizard twice (2 different services) to make it aware of changes in the DB. Both times it failed; I think it was after the import database tab. The next tab was just blank.
    So what's the correct way of reconciling changes in the DB back into JDeveloper?
    BTW, in the DBAdapter wizard it's not possible to remove an already imported table. How do I handle the situation where I want the DBAdapter to point to a different table? - and possibly remove old references to another one, which might have been removed in the DB.
    As it is now, I have to re-work all my DBAdapters, which is not very much fun...
    Rgds, Henrik

    Trust me, I have done that umpteen number of times.
    I hate BAs coming to me with changes for which I have to modify the DB adapter.
    One big loophole with BPEL is that if we try to modify the adapters/TopLink, it doesn't tend to work properly.
    The mantra for such modifications is ... "recreate", which is definitely not good practice.
    You may not like it, but you've got to live with it, my friend.
    Pointing to a different table, I achieve by doing a "Shift+Delete" on all the references to the old table in the BPEL project ... :|
    There isn't a specific provision in the wizard (I am not sure about the latest version, though).

  • Correct way to handle the updated object

    Hi,
    I have a thread, test2.java, that periodically updates an object passed from test.java. In test.java, the "data" object needs to stay up to date because it is also used in other threads. I can ensure that only test2.java does the writing; the other threads only read.
    My question is: I can write some dirty code to do what I want, but as a programmer I want to know the formal, OO way to handle this situation.
    Thanks.
    Tommy
    public class test {
        data d;
        test2 t2;
        public test() {
            d = new data(1, getClass().toString());
            t2 = new test2(d);
            t2.start();
        }
        public void runServices() {
            while (true) {
                // If I don't do anything, the following code only prints out
                // the instance that I initialised here, not the updated one.
                System.out.println(d.toString());
                try {
                    Thread.sleep(5000);
                } catch (InterruptedException ex) {}
            }
        }
        public static void main(String[] args) {
            test t = new test();
            t.runServices();
        }
    }

    /**
     * This class will periodically update the "data" object that
     * is passed from test.java.
     */
    public class test2 extends Thread {
        int count = 1;
        data d;
        public test2(data a) {
            this.d = a;
        }
        public void run() {
            while (true) {
                d = new data(count++, getClass().toString());
                System.out.println(d.toString());
                try {
                    sleep(5000);
                } catch (InterruptedException ex) {}
            }
        }
    }

    public class data {
        int count;
        String s = "";
        public data(int a, String b) {
            count = a;
            s = b;
        }
        public String toString() {
            return s + " count:" + count;
        }
    }

    Sorry nearly missed that :(
    You should try to modify the instance of data you have been given in the constructor for test2 instead of creating a new object:
    /**
     * This class will periodically update the "data" object that
     * is passed from test.java.
     */
    public class test2 extends Thread {
        int count = 1;
        data d;
        public test2(data a) {
            this.d = a;
        }
        public void run() {
            while (true) {
                d.refreshWith(count++, getClass().toString());
                System.out.println(d.toString());
                try {
                    sleep(5000);
                } catch (InterruptedException ex) {}
            }
        }
    }
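
    The reply above assumes data has a method that updates the existing instance in place; the data class in the original post does not have one. A minimal sketch of what it might look like - the method name refreshWith comes from the reply, and the synchronized keyword is an extra precaution (not in the thread) so the writer thread's updates are visible to the reading threads:

    public class data {
        private int count;
        private String s = "";

        public data(int a, String b) {
            count = a;
            s = b;
        }

        // Mutate this instance in place so every thread holding a reference
        // sees the new values; synchronized so the write is visible to readers.
        public synchronized void refreshWith(int a, String b) {
            count = a;
            s = b;
        }

        public synchronized String toString() {
            return s + " count:" + count;
        }
    }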

  • Correct way to handle an exception in a constructor?

    Hi I was wondering if anyone could tell me if I implemented my code correctly. It compiles and runs fine, I'm just wondering if there is a more efficient or better way to do this.
    static Cabin cabinTest;

    public static void main(String[] args) {
        testEquals();
        testCompareTo();
        try {
            cabinTest = new Cabin(5, 2, true);
        } catch (Exception e) {
            System.out.println("Invalid Input");
        }
    }

    public Cabin(int cabinNumber, int rooms, boolean kitchen) throws Exception {
        super("C" + cabinNumber, rooms == 1 ? ONE_ROOM_RATE : TWO_ROOM_RATE,
              rooms == 1 ? ONE_ROOM_GUESTS : TWO_ROOM_GUESTS);
        this.rooms = rooms;
        this.kitchen = kitchen;
    }

    Thanks for your response.
    I think I have done all I can do as far as what you pointed out. This is a class assignment so I am working with code and instructions I have been given.
    The teacher said the constructor throws Exception. Here is the UML she provided for the constructor of the class she had us write.
    +Cabin(in cabinNumber: int, in rooms: int,
           in kitchen: boolean) throws Exception
    And this is simple test code :)
    Thanks again
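
    A standalone sketch of the pattern being discussed - a constructor that validates its arguments and throws, and a caller that reports why. This is not the assignment's Cabin hierarchy (its superclass and constants aren't shown in the thread), just an illustration:

    // Standalone sketch: constructor-side validation plus a caller that
    // reports the reason instead of a fixed "Invalid Input" string.
    public class CabinDemo {
        private final int rooms;
        private final boolean kitchen;

        public CabinDemo(int rooms, boolean kitchen) throws Exception {
            if (rooms < 1 || rooms > 2) {
                // Refuse to finish constructing an invalid object.
                throw new Exception("rooms must be 1 or 2, got " + rooms);
            }
            this.rooms = rooms;
            this.kitchen = kitchen;
        }

        public static void main(String[] args) {
            try {
                CabinDemo ok = new CabinDemo(2, true);
                CabinDemo bad = new CabinDemo(5, false);   // triggers the exception
            } catch (Exception e) {
                System.out.println("Invalid input: " + e.getMessage());
            }
        }
    }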

  • Clients connect to wifi with certificate that expires every month - correct way to handle expired certificates?

    Hi all
    I'm sorry if this is the wrong forum to ask this question. Also my knowledge in this area is somewhat limited, which is why I need your help :-)
    We use wireless networks primarily in my company for all our clients and use a certificate to authenticate to the network. This certificate expires after 1 month and we automatically renew them 1 week before expiry. Relatively often we have users that
    are not connected to the network for a few weeks or more and then the certificate expires before being renewed. Then we have to connect them to the wired network to get the certificate updated, so they can connect to the wireless network again.
    What is the correct approach to solve this issue? We feel extending the life of the certificate would be too big a security compromise. Is there some way to automatically allow an expired certificate briefly, with the sole purpose of renewing the certificate?
    Or how would you normally resolve this issue?
    Thanks for any help/knowledge you can provide :-)

    > Setting the validity period that high, means that the certificate could be cracked before expiry.
    Then you should be scared of CAs whose validity is 10 or more years. And they use the same cryptography as end-entity certificates (key length and signature algorithms). That is paranoia. Just make sure the client certificates use at least 2048-bit keys and a SHA1 (or better) signature algorithm. In that case there is little chance that a certificate will be successfully cracked within 2 years.
    If there is evidence (or an indication) of client private key compromise -- immediately revoke the certificate and publish a new CRL ASAP. You cannot protect clients from key compromise by using short-lived certificates, because key compromise is usually achieved by gaining control over the private key (malware on the client computer). Therefore, there is nothing wrong with issuing client certificates with 1 or 2 year validity.
    My weblog: en-us.sysadmins.lv
    PowerShell PKI Module: pspki.codeplex.com
    PowerShell Cmdlet Help Editor pscmdlethelpeditor.codeplex.com
    Check out new: SSL Certificate Verifier and PowerShell FCIV tool.
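
    Whichever validity period is chosen, the client side still needs to know how close a certificate is to expiry so it can renew inside the window. The environment in the thread is Windows auto-enrolment, so the following is only an illustration of the date check itself - a small Java sketch, with the certificate path as a placeholder:

    import java.io.FileInputStream;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509Certificate;
    import java.util.concurrent.TimeUnit;

    public class CertRenewalCheck {
        public static void main(String[] args) throws Exception {
            // Load the client certificate from disk (path is illustrative).
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            X509Certificate cert;
            try (FileInputStream in = new FileInputStream("client-cert.cer")) {
                cert = (X509Certificate) cf.generateCertificate(in);
            }
            long msLeft = cert.getNotAfter().getTime() - System.currentTimeMillis();
            long daysLeft = TimeUnit.MILLISECONDS.toDays(msLeft);
            if (daysLeft < 0) {
                System.out.println("Certificate already expired; needs out-of-band re-enrolment.");
            } else if (daysLeft <= 7) {
                // Matches the "renew one week before expiry" policy from the post.
                System.out.println("Within the renewal window (" + daysLeft + " days left): renew now.");
            } else {
                System.out.println(daysLeft + " days of validity remaining.");
            }
        }
    }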

  • What's the correct way to handle some simple database actions with EF 6.1.3?

    I realize the title is very generic, so I'll go into more detail. I have experience working with Entity Framework, but most of my experience is with older versions of EF, specifically EF 4.x. I'm writing a very simple app; it will return records from 2 lookup tables and insert a record into a data table. With EF 4.x what I would do is create a data model, a .EDMX file, and then place the relevant tables onto the design surface. Then I'd have a data context object to work with in my C# code.
    However, I get the feeling that it's different with newer versions of EF. I'm not even sure that there's a design surface any more. I've seen things like DbSet objects and other things. And I've done a little bit of development using newer versions of EF, specifically code first, or, what might be more appropriate in this case, code first with an existing database. Because most certainly that is what I've got here. I don't want to replace or wipe out the existing data! And yet I tend to think more in terms of data contexts; I want to use what's appropriate for the newer versions of EF.
    I'm sure that ultimately it would be good for me to get into a class (which unfortunately won't happen) or do some training on my own. I'll do that as I can, but in the interim I'd like to know how I can do what I want to do with two lookup tables and one data table that I've got to insert one record at a time into. Could someone please give me a quick rundown of how to do this?
    Rod

    Never mind. I found a good example of what I'm looking for on Channel 9,
    Code First to Existing Database (EF 6.1 Onwards). This is what I've done before, but not too often. Anyway, I hope this will help others.
    Rod

  • Correct way to handle updates of XMLtype columns in standard tables.

    Hello to whoever may read this,
    I am currently studying the XML functionality of Oracle DB for a uni project.
    We have been asked to compare/contrast solutions for publishing product and price data, for data stored in standard relational tables and data stored in XMLType tables. For extra marks, I am looking at a table containing an XMLType column holding multiple items of data relating to the primary key.
    I have managed to get my head around publishing the data - pretty straightforward - but we have also been asked to show how we can update data, which isn't a problem with the standard tables/columns, but when it comes to the XMLType columns/tables, I don't have a clue.
    At the moment I am working on trying to update an XMLType column. The table itself is a "product" table, and contains product information as well as an XMLType column containing multiple changes to the prices. In the relational version, this "product" table has a one-to-many link to another table called price_history which contains details about past prices (and is populated by a trigger on update/insert of a new price). But in this table all the price changes are stored in XML format in the XMLType column "prices".
    Table columns: id number(4), name varchar2(25), prices xmltype;
    example data: 1781, CDW 20/48/E, <product_prices><price_change>
    <change_id>1</change_id>
    <date_changed>2009-10-13</date_changed>
    <details>price increased</details>
    <new_value>234</new_value>
    </price_change>
    <price_change>
    <change_id>2</change_id>
    <date_changed>2009-10-13</date_changed>
    <details>price increased</details>
    <new_value>235</new_value>
    </price_change></product_prices>
    We need to give examples of an update. I have been looking around the net, and these forums, for a solution now for about 4 hours. My own thought is that to update this with a new price change I need to SELECT the current data INTO a variable, then concatenate that variable with the new price change info, e.g.
    <price_change>
    <change_id>3</change_id>
    <date_changed>2009-10-13</date_changed>
    <details>price decreased</details>
    <new_value>230</new_value>
    </price_change>
    then insert that whole chunk of data again to overwrite the old data.
    Now I'm fairly certain there is some function somewhere which will allow me to do this update/insert operation without going through this process... After I am done with this update of XMLType column data, I need to tackle updating data in an XMLType table with XQuery (? apparently), so if you have any pointers for that please let me know.
    Could one of you experts point me in the right direction for this? Any advice at this stage is a great help and will stop me losing my mind.
    P.S. I'm sorry about the lengthy description of the problem/solution required. How do you describe something you don't understand? I ask myself.

    Hi,
    You really need to take a look at the XMLDB Developers guide.
    For updating XML with SQL/XML see UPDATEXML and for XQuery see [Using XQuery with Oracle XMLDB|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14259/xdb_xquery.htm#sthref1673]
    HTH,
    Chris
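
    To make the pointer concrete for the specific case in the question (adding one more <price_change> element without selecting and rewriting the whole document), Oracle's SQL/XML functions can do the rewrite server-side. A minimal sketch from Java/JDBC, assuming a release where APPENDCHILDXML is available (10gR2 onwards) and using the table, column and element names from the post; the connection details are placeholders, and the XMLDB Developer's Guide for your version remains the reference:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class AppendPriceChange {
        public static void main(String[] args) throws Exception {
            // Connection details are placeholders.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
                // APPENDCHILDXML adds a child node under the XPath target,
                // so the existing <price_change> history is left untouched.
                String sql =
                    "UPDATE product p " +
                    "   SET p.prices = APPENDCHILDXML(p.prices, '/product_prices', " +
                    "       XMLTYPE('<price_change>" +
                    "<change_id>3</change_id>" +
                    "<date_changed>2009-10-13</date_changed>" +
                    "<details>price decreased</details>" +
                    "<new_value>230</new_value>" +
                    "</price_change>')) " +
                    " WHERE p.id = ?";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setInt(1, 1781);
                    System.out.println("Updated rows: " + ps.executeUpdate());
                }
            }
        }
    }

    UPDATEXML, mentioned above, is the counterpart for replacing an existing node (for example correcting a <new_value>) rather than appending a new one.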

  • Correct way to use SSL?

    I have just implemented an SSL certificate on my site. By default, the site structure is such that I have an HTTPDocs folder where I keep all my web pages etc., but I also have an HTTPSDocs folder which is empty.
    The HTTPSDocs folder appears to be a mirror of the HTTPDocs folder, in that I do not need to manually put web pages that I wish to be secured into the HTTPS folder. In fact, if I put the pages I want to be secured in the HTTPSDocs folder then the browser cannot find these pages.
    So, I have left all the pages etc. in the HTTP folder, and for the pages I want secured I have linked from the form on the previous page by the full name, e.g. https://www.mysite.com/securepage.asp instead of just ../securepage.asp as it was before, and this works fine. To leave the security I do the opposite, e.g. http://www.mysite.com/exitsecurepage.asp, and this takes the browser back to a non-secured state. (The above links are dummy links.)
    Is this the correct way to handle SSL?
    Thanks.

    Well, is it???
    "GrantB" <[email protected]> wrote in message news:f0vieu$eeh$[email protected]..

  • (workflow question) - What is the best way to handle audio in a large Premiere project?

    Hey all,
    This might probably be suitable for any version of Premiere, but just in case, I use CS4 (Master Collection)
    I am wrestling in my brain about the best way to handle audio in my project to cut down on the time I am working on it.
    This project I just finished was a 10 minute video for a customer shot on miniDV (HVX-200) cut down from 3 hours of tape.
    I edited my whole project down to what looked good, and then I decided I needed to clean up all the audio using Soundbooth, so I had to go in clip by clip, using the Edit in Soundbooth --> Render and Replace method on every clip. I couldn't find a way to batch edit any audio in Soundbooth.
    For every clip, I performed similar actions---
    1) both tracks of audio were recorded with 2 different microphones (2 mono tracks), so I needed only audio from 1 track - I used SB to cut and paste the good track over the other track.
    2) amplified the audio
    3) cleaned up the background noise with the noise filter
    I am sure there has to be a better workflow option than what I just did (going clip by clip). Can someone give me some advice on how best to handle audio in a situation like this?
    Should I have just rendered out new audio for the whole tape I was using, and then edited from that?
    Should I have rendered out the audio after I edited the clips into one long track and performed the actions I needed on it? Or something entirely different? It was a very slow, tedious process.
    Thanks,
    Aza

    Hi, Aza.
    Given that my background is audio and I'm just coming into the brave new world of visual bits and bytes, I would second Hunt's recommendation regarding exporting the entire video's audio as one wav file, working on it, and then reimporting. I do this as one of the last stages, when I know I have the editing done, with an ear towards consistency from beginning to end.
    One of the benefits of this approach is that you can manage all audio in the same context. For example, if you want to normalize, compress or limit your audio, doing it a clip at a time will make it difficult for you to match levels consistently or find a compression setting that works smoothly across the board. It's likely that there will instead be subtle or obvious differences between each clip you worked on.
    When all your audio is in one file you can, for instance, look at the entire waveform, see that limiting to -6 dB would trim off most of the unnecessary peaks, trim it down, and then normalize it all. You may still have to do some tweaking here and there, but it gets you much farther down the road, much more easily. The same goes for reverb, EQ or other effects where you want the same feel throughout the entire video.
    Hope this helps,
    Chris

  • Is there a way to handle system exception ERROR_MESSAGE?

    Hi,
    I have a program executed in the background which produces a bunch of consecutive documents for a set of Bulk Shipments -> TD Loading and TD Delivery Confirmation. To create those documents I use the function modules 'OIGI_LOADING_CREATE' and 'OIGI_DEL_CONF_CREATE' - both from the Industry Solution Oil & Gas (IS-Oil).
    In some cases these FMs produce error messages (type E) which cancel execution of the program and break my flow logic.
    Below are a few messages recorded in the job log for my task:
    18.08.2005 15:56:41 Job started                                                                         
    18.08.2005 15:56:41 Step 001 started (program /PTRL/TAS_POSTPONDED_SYNC, variant , user name IMUTAFCHIEV)
    18.08.2005 15:56:58 Shipment 180753 saved                                                               
    18.08.2005 15:57:06 The plant data of the material 177 is locked by the user BMINKOV                    
    18.08.2005 15:57:06 The plant data of the material 177 is locked by the user BMINKOV                    
    18.08.2005 15:57:06 The plant data of the material 177 is locked by the user BMINKOV                    
    18.08.2005 15:57:06 Job cancelled after system exception ERROR_MESSAGE                                  
    Neither function module is designed to raise any exceptions, and in our environment (4.6C) there is no documented system exception 'ERROR_MESSAGE' that could be handled in a CATCH-ENDCATCH block.
    Is there a way to handle this exception and to collect the list of error messages produced by the FM into an internal table, a log, whatever, as is done in the log of the background job? I need to find a way to write these messages to my log tables and to proceed further with my flow logic.
    FYI: my program makes an RFC call to a remote system and retrieves a list of documents which need to be synchronized with R/3. I lose information if R/3 breaks my flow logic.
    Any help would be highly appreciated.
    Many thanks in advance.
    Ivaylo Mutafchiev

    Sven,
    I made a few programs where we used the business scenario:
    IS-Oil Shipment => IS-Oil Loading Confirmation => IS-Oil Delivery Confirmation.
    All of them are based on Function Module call:
    1. OIGI_LOADING_CREATE and
    2. OIGI_DEL_CONF_CREATE.
    To load a shipment I call the 1st FM this way:
      CALL FUNCTION 'OIGI_LOADING_CREATE' DESTINATION 'NONE'
           EXPORTING
                I_SUBRC     = 9  "save and commit
                I_SHNUMBER  = shNumber
                I_VEHICLE   = vehicle
                I_LDPLT     = plant
                I_LDDATE    = loadDate
                I_LDTIME    = loadTime
                I_LDCDAT    = loadDate
                I_VEH_NR    = veh_nr
           TABLES
                T_OIGISVMQ  = quantity_items
                T_OIGISVMQ2 = hpm_append
                T_OIGISIQ   = doc_quan_items
           EXCEPTIONS
                COMMUNICATION_FAILURE = 1 MESSAGE p_error
                SYSTEM_FAILURE = 2 MESSAGE p_error.
    To confirm the shipment (status 4) I call the same FM with:
      CALL FUNCTION 'OIGI_LOADING_CREATE' DESTINATION 'NONE'
           EXPORTING
                I_SUBRC    = 39  "confirm & commit 2nd step
                I_SHNUMBER = shNumber
                I_VEHICLE  = vehicle
                I_LDPLT    = werks
           EXCEPTIONS
                COMMUNICATION_FAILURE = 1 MESSAGE sh_error
                SYSTEM_FAILURE = 2 MESSAGE sh_error.
    And finally, to finish the process (status = 6), I call the 2nd FM this way:
      CALL FUNCTION 'OIGI_DEL_CONF_CREATE' DESTINATION 'NONE'
           EXPORTING
                I_SUBRC         = 19  "save, confirm and commit
                I_SHNUMBER      = shNumber
                I_RAPID_CONFIRM = 'X'
                I_DDCDAT        = loadDate
                I_DLDATE        = loadDate
                I_DLTIME        = loadTime
           EXCEPTIONS
                COMMUNICATION_FAILURE = 1 MESSAGE p_error
                SYSTEM_FAILURE = 2 MESSAGE p_error.
    FYI: It took me some time to 'investigate' and find the correct use of these function modules, and I worked VERY CLOSELY with our SD consultant.
    For details (what the export parameters and tables consist of) and sample code, please contact me at:
    ivaylo dot mutafchiev at vbs dot bg
    I would be glad to share my knowledge.
    Regards,
    Ivaylo

  • Proper way to handle people raising dead threads?

    Someone just 'bumped' a thread from 2008
    Re: Problem providing download link for BLOB data in apex report
    What is the polite way to handle this?
    thanks
    MK

    Yep, as jgarry points out, the lifetime of a thread is not fixed, and the mods have to look at each one in context.
    As an example, yesterday, someone dragged up an old(ish) thread that already had answers marked as correct and helpful (and I could see they were from experts and they were correct answers).  This new member had added their answer to it, which was just repeating answers that had already been given and added nothing new to the thread.  The nature of the question and the answers given meant that there really was nothing new that I could envisage being added to the discussion, and so I put a standard message about not reviving old threads on there, and locked it.
    In other examples, we have people dragging up old threads from years ago, just to ask the OP how they solved their problem.... an OP who quite clearly in a lot of cases is no longer active on the forums and isn't likely to answer them.  If the person dragging up the thread has an issue, they should start their own question, referencing that old thread if they need to, but not drag it back out of history.
    The other problem is when people drag up an old thread to provide an answer, often saying that the previous answers are not good and that it can be done "this way...", providing some method of resolving the issue that uses new features of the database.... new features that weren't around when the question was originally asked, e.g. using the LISTAGG function of 11g to solve an issue that was asked around the time of 9i, when sys_connect_by_path had to be used.  Posting such answers on old threads helps nobody, as there are plenty of recent threads that already have examples of that new functionality being used to answer similar questions.  In these cases, I suspect that the person is just seeing a question that is 'unanswered' and thinking they can get some points for themselves, and perhaps hasn't realised just how old the thread is, or that the OP is no longer around.
    Of course, there are those discussions which are open-ended, and can be revived after some time if people have new and relevant information to add.  Those sorts of discussions I wouldn't necessarily consider locking.
    There are no hard and fast rules for it; the moderators have to use their own judgement.

  • Best way to handle all errors and get performance (OCI)?

    Hi there,
    I'm using the Oracle Call Interface to execute a batch file process, but I have run into a problem.
    I set ExecuteBatch to the same value as the commit interval, e.g. 100, 1000 or just 10, and that part is OK.
    ((OraclePreparedStatement)globalStmt).setExecuteBatch(commit);
    And I wrap executeUpdate to catch all SQLExceptions. I made some files with deliberately invalid rows, but when I call "executeUpdate" it doesn't report the correct error line; it reports another line that is actually correct.
    ((OraclePreparedStatement)globalStmt).executeUpdate();
    Checking the code, I concluded that the error line it reports always matches the commit/batch boundary rather than the real error line. For example, if there is an error at line 31 somewhere between lines 1 and 50 and I set the commit size to 50, it tells me the error line is 50 instead of 31. If I set setExecuteBatch to '1' it reports the correct error lines, but system performance drops badly. What is the best way to handle all errors and keep the performance?
    Sorry for my English, I am not a native speaker. Thanks all.
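
    For reference: with Oracle's proprietary setExecuteBatch the statements are buffered and only sent when the batch value is reached, so the exception surfaces at the batch boundary rather than at the row that actually failed, which matches what is described above; a batch of 1 pins down the line but costs a round trip per row. One common compromise is to keep batching and fall back to row-by-row execution only for a chunk that fails. A minimal sketch with standard JDBC batching - the table, column and connection details are made up for the illustration:

    import java.sql.BatchUpdateException;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Savepoint;
    import java.util.List;

    public class BatchErrorDemo {

        // Insert one chunk of file lines as a standard JDBC batch. If the batch
        // fails, roll back to the savepoint and redo this chunk row by row, so
        // the exact offending line can be reported without giving up batching
        // for the rest of the file.
        static void insertChunk(Connection con, List<String> lines, int firstLineNo)
                throws SQLException {
            String sql = "INSERT INTO import_lines (line_no, payload) VALUES (?, ?)";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                for (int i = 0; i < lines.size(); i++) {
                    ps.setInt(1, firstLineNo + i);
                    ps.setString(2, lines.get(i));
                    ps.addBatch();
                }
                Savepoint beforeChunk = con.setSavepoint();
                try {
                    ps.executeBatch();
                } catch (BatchUpdateException e) {
                    con.rollback(beforeChunk);   // discard any partially applied rows
                    ps.clearBatch();
                    for (int i = 0; i < lines.size(); i++) {
                        try {
                            ps.setInt(1, firstLineNo + i);
                            ps.setString(2, lines.get(i));
                            ps.executeUpdate();
                        } catch (SQLException rowError) {
                            // Now the failure is attributable to a single line.
                            System.err.println("Error at file line " + (firstLineNo + i)
                                    + ": " + rowError.getMessage());
                        }
                    }
                }
            }
        }

        public static void main(String[] args) throws Exception {
            // Connection details and table name are placeholders.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
                con.setAutoCommit(false);
                insertChunk(con, List.of("first line", "second line", "third line"), 1);
                con.commit();
            }
        }
    }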

    So by doing this, everything will transfer and look exactly the way I have it on the old machine?
    That is correct: if your old machine is Intel-based, then after using MA the new machine will look just like the old machine. Here is information from Apple on MA; I'd recommend looking it over.
    My recommendation is to answer NO when the new machine asks "Are you moving from another Mac?" during setup. The reason is to let the new machine get set up and run for a couple of hours to ensure it's fine. Then launch MA and follow the prompts; it's very easy, and if you use a fast connection like FireWire it should go smoothly.
    Regards,
    Roger

  • Best way to handle source files

    Hi there,
    After some pretty general advice please.
    The company I work for looks after a lot of websites, and one of the headaches we have is the best way to handle source files. By source files I'm referring to Photoshop files, Flash .fla files and also other non-Adobe files that relate to a site, not the .html, .asp, .aspx, .css, .js etc. type files.
    Now I'm NOT after a version control system, just a simple way to store the source files in a location that is separate from the website but still be able to have a smooth workflow between the Dreamweaver site and its source files.
    At the moment, and I know this is unwise, we have a subdirectory within the site where we store the source files, and use WebDAV to transfer both site and source files to and from the server. But I really want to separate the site from the source but still maintain a link between site and source...... if you see what I mean. I think the upshot is I would like to be able to open a site within Dreamweaver and instantly be able to access that site's source files if needed. This method needs to be shared across a small team spread around the UK.
    I looked at Subversion repository version control, but like I said I'm not after a source control system, plus it appeared to conflict with WebDAV and Contribute, which some of our clients use to maintain content on their sites. I also looked at Version Cue, which looked promising, but I can't see a clear workflow between Dreamweaver and Version Cue which separates site from source. I might be missing something.... part of my brain perhaps. :)
    Would be grateful for any advice please.
    Cheers,
    @ndyB

    Take a deep breath. Relax. All is fine.
    iDVD does not look at the size of your video file, it looks at the length. iDVD can accommodate up to 2 hours of movie.
    iDVD gives you different options depending on the length of your movie. Although I won't agree with your friend about reducing the length of your movie to 15 minutes, if you could trim out a few minutes to get it under an hour, that setting in iDVD (Best Performance, though the new version may have renamed it) gives you the best quality. Still, any iDVD setting will give you good quality even at 64 minutes.
    In FCE, export as QuickTime Movie, NOT any flavour of QuickTime Conversion. Select chapter markers if you have them. If everything is on one system, uncheck the Make Movie Self-Contained button. Drop the QT file into iDVD.

  • Best way to handle tcMultipleMatchFoundException

    Can anyone tell me the best way to handle tcMultipleMatchFoundException during reconciliation?
    One way I know of is to manually correct the data. Apart from that, is there any other way?
    Thanks,
    Venkatesh.

    Hi,
    I've done a great deal of work with mobile accounts in Snow Leopard and I'm now having a "play" with Lion. To be honest you have to sit down and think about why you need mobile accounts.
    If your user only uses one computer then you're safer having a local account backed up by a network Time Machine; this avoids the many, many woes that the server's FileSyncAgent brings to the table.
    If your users are going to be accessing multiple computers on the network and leaving the network, then a mobile account is good for providing a uniform user experience and access to files etc. However, your users will have to make a choice as to whether they want their iPhoto libraries on one local machine (backed up by Time Machine) or whether they want their library to be hosted on the server and not part of the Mobile Home Sync schedule (adding ~/Pictures to the excluded items in the home sync settings).
    With the latter, users will be able to access their iPhoto libraries on any computer when they are within the network (as it's accessed from the user's server home folder).
    With the first option the user would have their iPhoto library on one computer (say the laptop they used the most) but then would not be able to access it from other computers they log on to.
    iPhoto libraries are a pain, and I'm working hard to come up with a workaround. If your users moved over to using Aperture then you could include the Aperture library as part of the home sync thanks to Deeport (http://deepport.net/archives/os-x-portable-home-directories-and-syncing-flaw-with-bundles/)
    He does suggest that the same would work with iPhoto libraries - but it doesn't, for a number of mysterious reasons regarding how the OS recognizes the iPhoto bundle (it does so differently compared to Aperture).
    Hope this helps...
