Recommended workflow to manage exports for stock sites

Dear All,
Is there an efficient workflow that someone has come up with and could recommend for managing exports for stock-site photography easily?
The main problem is that if you upload 10 images, for instance, to 3 different sites, you will most probably get a different set of accepted and rejected images from each site.
Then you have to modify adjustments and re-upload the rejected images for a second try, or even a third one.
And this is the exact point where the problem occurs.
There must be an efficient way to keep, in different folders (or smart albums or whatever), the same images with different statuses (rejected or accepted, for instance, or red or green or whatever) and at the same time with the different settings you had to apply in order to get the images accepted. Keep also in mind that the images may have different keywords as well (in case an image gets rejected due to keywording).
It's important to mention that in the end the folders must match the real status of the images submitted to the stock site, regarding state (accepted or not) and of course settings. (It's important in the end to know that on site ''A'' you have exactly these images and on site ''B'' the others, and so on.)
It's more than certain that those 3 different folders (or smart albums or whatever) will eventually contain the same images, but most of them with different statuses, settings and keywords.
The only way I found to manage it is by creating export folders using virtual copies in the Publish Services section of LR. Note that for the publish service I used my hard drive.
But the drawbacks of this way are:
1. You double or triple the files (depending on the number of stock sites), although at least they are virtual ones.
2. When an image is rejected by a stock site and you mark the corresponding image as ''marked for republish'', even though in LR it is clear which files were rejected and should be edited and republished, in the export folder you can't track them; you must export to a new folder every time, creating double or triple JPEG files. I think it would be better if LR could automatically sync the folder, somehow showing the images marked for republish.
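To make the bookkeeping concrete, here is a minimal sketch in Python, outside LR, of the kind of per-site status manifest I have in mind. The status names and CSV layout are just my own assumptions, not anything LR provides:

```python
import csv, io

# Hypothetical per-site status manifest: image filename -> {site: status}.
# Status names ("accepted", "rejected", "republish") are my own convention.
def rejected_for(manifest, site):
    """Return the images that still need rework for one site."""
    return sorted(name for name, sites in manifest.items()
                  if sites.get(site) in ("rejected", "republish"))

def to_csv(manifest, sites):
    """Flatten the manifest to CSV, one column per stock site."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["image"] + list(sites))
    for name in sorted(manifest):
        writer.writerow([name] + [manifest[name].get(s, "") for s in sites])
    return buf.getvalue()

manifest = {
    "IMG_001.jpg": {"siteA": "accepted", "siteB": "rejected"},
    "IMG_002.jpg": {"siteA": "republish", "siteB": "accepted"},
}
print(rejected_for(manifest, "siteA"))  # images to re-edit for site A
```

A little script like this could be regenerated after each submission round, so the folders (or a CSV next to them) always reflect the real per-site state.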
Sorry for the long post, but how could it be shorter?
Thanks in advance.
Vasilis

The point of Publish is not that the image disappears afterwards, but that it remains there afterwards, after moving between the categories into which the publish window is divided. IIRC you can collapse the parts you are not interested in... (?) I don't have LR in front of me just now. Pictures then move categories again, the moment you make further changes. It's an ongoing relationship between the Catalog and the publish location, and you use Publish when this is what you want.
It is not very easy AFAICT to terminate this relationship, using the built-in Publish function, without also marking the externally published file for removal. However some 3rd-party LR plugins offering a Publish facility, can do this for you cleanly.
Personally, I have concluded that the standard one-off (but repeatable, via named preset) Export inherently meets my needs better than Publish does, for this very reason - and have therefore ducked fully dealing with this Publish life-cycle issue. So often, a change is made to an image which is not really substantive and does not justify re-publishing.
But I definitely suggest looking into plugins if you like the idea of Publish but currently find it restrictive, including (in no particular order, and not a comprehensive list):
http://www.photographers-toolbox.com/
http://regex.info/blog/lightroom-goodies
http://www.robcole.com/Rob/ProductsAndServices/PublishServiceAssistantLrPlugin/

Similar Messages

  • Is there any way to raise alerts for stock outages in inventory


    You can use the Check Material Shortages attribute in the Org items under the Inventory tab. Oracle will trigger shortage alert notifications for the checked items. This works in conjunction with the Shortage Parameters defined at the Inv Org level (these parameters are nothing but what the business wants to consider a shortage in WIP and OM). You can notify the component planner/buyer, assembly planner/job creator, etc.

  • Edit Color Sync for Exports for Web Sites

    Hi,
    Perhaps this has been covered, but I've not found the thread. Most of the export color-sync questions seem to revolve around printing or displaying on one's own monitor. Neither is an issue for me. However, I note that my RAW images exported from Aperture 2.0 as JPEGs and then uploaded to web sites such as Jalbum and Orkut are washed out. Some forums have suggested this is due to those websites being optimized for sRGB. I understand that Aperture uses a proprietary color sync profile; however, my default EXIF - Expanded shows "Color Model" as simply "RGB", and "Profile Name" is Adobe RGB (1998) (my Canon 50D is set to sRGB). Again, I have seen suggestions on this forum that one can change these settings, but the process appears to apply only to printers and monitors. Where/how can I change the export settings for uploading JPEGs to the web?
    Thanks,
    Nick

    I understand that Aperture uses a proprietary color sync profile
    If the profile were proprietary, it would be an illegal profile under the ICC Specification.
    However, I note that my RAW images exported from Aperture 2.0 as jpg's and then uploaded to web sites such as Jalbum and Orkut are washed out.
    If you did not disable embedding of an ICC source colour space in Aperture, and did not use a non-ICC-enabled browser, the OS X system-level services will set up a ColorWorld and convert the pixels in your JPEGs, matching from the ICC source colour space to the ICC destination colour space of your display. Thus you will get what you uploaded in the first place.
    I have seen suggestions on this forum that one can change these settings, but the process appears to be only for printers and monitors.
    The idea is that each and every colour device is calibrated, and that calibration is captured in a colourimetric colour characterisation from which an ICC colour device profile is computed. Colourants that form certain colours on one colour device at one calibration will form other colours on the same colour device at another calibration, and on other colour devices at yet other calibrations. By means of device profiles, the COLOURS can be kept constant while the COLOURANTS are recomputed to compel the device to paint the closest possible match it is capable of.
    Some forums have suggested this is due to those websites being optimized for sRGB.
    The issue is whether people who post pictures do or do not embed ICC source colour spaces into their pictures and whether they pick browsers that are ICC-enabled or pick colour blind browsers.
    To determine if there is an embedded ICC source colour space in a picture, download the picture to disk, open Apple Preview and select Tools > Get Info > Summary.
    Colour Model: This defines the format or model of the colourants, e.g. three component RGB, four component CMYK.
    ColorSync Profile: This defines the ICC source colour space. If there is none in the image file, the system-level service assigns the Generic RGB Profile for model RGB, the Generic CMYK Profile for model CMYK, and so forth. Therefore, any ICC source colour space other than a Generic Profile source colour space is embedded in the image file itself.
    In technical terms, the rule is that to construct a ColourWorld that does a colour space conversion through the PCS Profile Connection Space, you have to have an ICC source colour space and an ICC destination colour space (typically your display profile). It follows that the system-level service ASSIGNS an ICC source colour space automatically where none is embedded. The ICC source colour space that is assigned when there is none embedded is the Generic RGB Profile for colourant model RGB, as explained.
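    To make the matching step concrete, here is a rough single-pixel sketch of an Adobe RGB (1998) to sRGB conversion. This is my own simplification in pure Python: a real CMM works from the embedded profiles rather than hard-coded matrices, the matrices below are the standard D65 RGB/XYZ ones, and Adobe RGB gamma is approximated as 2.2:

```python
# Single-pixel Adobe RGB (1998) -> sRGB match, the kind of conversion a
# ColorSync-style CMM performs through the PCS. Standard D65 matrices;
# Adobe RGB tone curve approximated as a plain 2.2 gamma.
ADOBE_TO_XYZ = [(0.5767309, 0.1855540, 0.1881852),
                (0.2973769, 0.6273491, 0.0752741),
                (0.0270343, 0.0706872, 0.9911085)]
XYZ_TO_SRGB = [(3.2404542, -1.5371385, -0.4985314),
               (-0.9692660, 1.8760108, 0.0415560),
               (0.0556434, -0.2040259, 1.0572252)]

def _mul(m, v):
    return [sum(row[i] * v[i] for i in range(3)) for row in m]

def _srgb_encode(x):
    x = min(max(x, 0.0), 1.0)  # clamp out-of-gamut values
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def adobe_to_srgb(rgb):
    linear = [(c / 255.0) ** 2.2 for c in rgb]       # decode Adobe gamma
    srgb_linear = _mul(XYZ_TO_SRGB, _mul(ADOBE_TO_XYZ, linear))
    return tuple(round(255 * _srgb_encode(c)) for c in srgb_linear)

print(adobe_to_srgb((255, 255, 255)))  # neutral white stays (255, 255, 255)
```

    Saturated Adobe RGB colours fall outside sRGB and get clamped, which is exactly the "washed out" versus "correct" difference the colour-blind browser exposes.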
    /hh

  • Does Apple have any good way of managing iPhones for business?

    I finally broke down and allowed iPhones into our company when Apple introduced the ability to provision phones like phones (e.g. not having to tether them to a computer to set them up).  This and the "business" management tools opened the door, so we got about 30 of them.
    WHAT A HUGE MISTAKE!!!!
    Apple is now going backward.  When a phone comes into our shop, we have to load up iTunes and get the phone to sync before we can do simple things like wipe the phone or troubleshoot "why did my Bluetooth quit working".  All of their "enterprise focus" seems to be around security and app control, with nothing I've seen for day-to-day keeping things running.  (Not saying app control and security are unimportant, just not enough.)
    Does Apple have any good business management tools that I'm missing?  I know about MDM, but too much setup for a relatively small number of phones.  Besides, my understanding is the MDM only does app management / security policy.  Right now we use Apple's stand-alone configurator.  Fine for pushing policy and apps, but useless for troubleshooting and day-to-day management (like resetting a phone back to factory configs).
    Also, is my carrier correct that Apple will not allow them to replace phones that have gone bad under warranty?  They told me that for the iPhones we have to physically take them to an Apple store for troubleshooting / replacement even after 100% identifying the issue!  That is an hour (at least) wasted sending one of my folks to the store.  I know we don't just have a bad carrier, because they happily send out replacements for other brands after troubleshooting over the phone.
    I actually do like the overall look/feel of the iPhones.  But unless Apple becomes much more corporate-friendly, I can only recommend that companies refuse to purchase any iDevice.  BYOD, sure...but when you buy tens or hundreds of phones stick to Android, Blackberry, or even (sorry) Microsoft based devices.

    Does not seem to work that way for us.  We plug the phone in, and the first thing iTunes does is say the phone needs to be "authorized for this computer".  It also states that the phone can only be authorized for x computers.
    It then goes through its jig while we wait, and finally comes up with the phone.  Only then can we try to wipe / update the phone.  We do stop the sync process that automatically starts.  Maybe it has to do with a version difference between what you use and what we use.  I did notice that an older version of iTunes seemed a bit friendlier, but it would not support our iPads.
    However, when we were troubleshooting the bug where wireless / Bluetooth quits working, it was quite a bit more difficult than this for a phone a user handed us.  Troubleshooting steps required that the phone be "restored" in iTunes.  However, iTunes insisted on our providing a password that neither we nor the user knew.  Maybe it was just the learning curve, but it took about an hour before we were able to get that phone to a point where we could reset it.
    Here is a sincere question for you, since you say you support a fleet of phones.  Do you really NOT get annoyed when your tech has to take an iPhone to a computer with iTunes, plug it in, and work from there whenever there is a user issue that needs the phone wiped back to factory standard?  This compared to being able to punch a few buttons on a different phone, right there in the user's office, then hand it back to the user and say "you are fixed".
    My only assumption is that IT staff want to solve issues as quickly and efficiently as possible.  Requiring a tech to deal with iTunes for simple procedures is not, IMHO, "as quick and efficient as possible".

  • Can anyone recommend a good Password manager / safe for iPhone & Mac?

    Looking for a combination of an App and an Mac application that can safely encrypt all of my Passwords, memberships, bank details etc. etc.
    *I've found the following, but was curious if anyone can recommend one over another, or a completely different solution altogether?*
    SplashID
    http://www.splashdata.com/splashid/iphone/
    1Password
    http://agilewebsolutions.com/onepassword/iphone
    eWallet
    http://www.iliumsoft.com/site/iphone/products_ewmac.php
    Thoughts & suggestions gladly appreciated.

    1Password has a very strong reputation among Mac owners--lots of people love it.
    I use and like Wallet on Mac and iPhone, but not really for the password manager aspects, although it is one:
    http://www.acrylicapps.com/wallet/
    For me, one really important issue is "will it sync without me having to think about it, or do I have to consciously open it and hit sync while on the same wireless network?" Wallet uses WebDAV or MobileMe (which works for me), 1Password has a Dropbox solution, not sure about the others.
    Download the trial versions for the Mac, see which one you are most comfortable with.

  • Managing statistics for object collections used as table types in SQL

    Hi All,
    Is there a way to manage statistics for collections used as table types in SQL.
    Below is my test case
    Oracle Version :
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL> Original Query :
    SELECT
         9999,
         tbl_typ.FILE_ID,
         tf.FILE_NM ,
         tf.MIME_TYPE ,
         dbms_lob.getlength(tfd.FILE_DATA)
    FROM
         TG_FILE tf,
         TG_FILE_DATA tfd,
          (
               SELECT *
               FROM
                    TABLE (
                         SELECT
                              CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                              OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                         FROM
                              dual
                    )
          )     tbl_typ
    WHERE
         tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:02.90
    Execution Plan
    Plan hash value: 3970072279
    | Id  | Operation                                | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                         |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  1 |  HASH JOIN                               |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  2 |   HASH JOIN                              |              |  8168 |   287K|   695   (3)| 00:00:09 |
    |   3 |    VIEW                                  |              |  8168 |   103K|    29   (0)| 00:00:01 |
    |   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|              |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   5 |      FAST DUAL                           |              |     1 |       |     2   (0)| 00:00:01 |
    |   6 |    TABLE ACCESS FULL                     | TG_FILE      |   565K|    12M|   659   (2)| 00:00:08 |
    |   7 |   TABLE ACCESS FULL                      | TG_FILE_DATA |   852K|   128M|  3863   (1)| 00:00:47 |
    Predicate Information (identified by operation id):
       1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
    Statistics
              7  recursive calls
              0  db block gets
          16783  consistent gets
          16779  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
               2  rows processed
     Indexes are present in both the tables ( TG_FILE, TG_FILE_DATA ) on column FILE_ID.
    select
         index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
    from
         all_indexes
    where table_name in ('TG_FILE','TG_FILE_DATA');
    INDEX_NAME                     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR     NUM_ROWS SAMPLE_SIZE
    TG_FILE_PK                          2        2160        552842             21401       552842      285428
     TG_FILE_DATA_PK                     2        3544        852297             61437       852297      852297
     Ideally the view should have used a NESTED LOOP, to use the indexes, since the no. of rows coming from the object collection is only 2.
     But it is taking the default of 8168, leading to a HASH join between the tables, and so to FULL TABLE access.
     So my question is: is there any way by which I can change the statistics while using collections in SQL?
     I can use hints to use the indexes, but I am planning to avoid that for now. Currently the time shown in the explain plan is not accurate.
    Modified query with hints :
    SELECT    
        /*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
        9999,
        tbl_typ.FILE_ID,
        tf.FILE_NM ,
        tf.MIME_TYPE ,
        dbms_lob.getlength(tfd.FILE_DATA)
    FROM
        TG_FILE tf,
        TG_FILE_DATA tfd,
         (
             SELECT *
             FROM
                 TABLE (
                         SELECT
                              CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                              OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                         FROM
                              dual
                 )
         ) tbl_typ
    WHERE
        tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 1670128954
    | Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                          |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   1 |  NESTED LOOPS                             |                 |       |       |            |          |
    |   2 |   NESTED LOOPS                            |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   3 |    NESTED LOOPS                           |                 |  8168 |  1363K| 16379   (1)| 00:03:17 |
    |   4 |     VIEW                                  |                 |  8168 |   103K|    29   (0)| 00:00:01 |
    |   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   6 |       FAST DUAL                           |                 |     1 |       |     2   (0)| 00:00:01 |
    |   7 |     TABLE ACCESS BY INDEX ROWID           | TG_FILE_DATA    |     1 |   158 |     2   (0)| 00:00:01 |
    |*  8 |      INDEX UNIQUE SCAN                    | TG_FILE_DATA_PK |     1 |       |     1   (0)| 00:00:01 |
    |*  9 |    INDEX UNIQUE SCAN                      | TG_FILE_PK      |     1 |       |     1   (0)| 00:00:01 |
    |  10 |   TABLE ACCESS BY INDEX ROWID             | TG_FILE         |     1 |    23 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
           filter("TF"."FILE_ID"="TFD"."FILE_ID")
    Statistics
              0  recursive calls
              0  db block gets
             16  consistent gets
              8  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Thanks,
    B

    Thanks Tubby,
     While searching I had found that we can use the CARDINALITY hint to set statistics for a TABLE function.
     But I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it when posting for the first time.
    http://www.oracle-developer.net/display.php?id=427
     If we go through the document, it mentions the following ways to set statistics:
    1) CARDINALITY (Undocumented)
    2) OPT_ESTIMATE ( Undocumented )
    3) DYNAMIC_SAMPLING ( Documented )
    4) Extensible Optimiser
     Tried it out with the different hints and it is working as expected,
     i.e. cardinality and opt_estimate take the value set in the hint.
     But using the dynamic_sampling hint provides the most correct estimate of the rows (which is 2 in this particular case).
    With CARDINALITY hint
    SELECT
        /*+ cardinality( e, 5) */*
    FROM
     TABLE (
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     5 |    10 |    29   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With OPT_ESTIMATE hint
    SELECT
         /*+ opt_estimate(table, e, scale_rows=0.0006) */*
    FROM
     TABLE (
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Execution Plan
    Plan hash value: 4043204977
    | Id  | Operation                              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                       |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   1 |  VIEW                                  |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   2 |   COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   3 |    FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With DYNAMIC_SAMPLING hint
    SELECT
        /*+ dynamic_sampling( e, 5) */*
    FROM
     TABLE (
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     2 |     4 |    11   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     2 |     4 |    11   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    Note
        - dynamic sampling used for this statement (level=2)
     I will be testing the last option, the "Extensible Optimizer", and will put my findings here.
     I hope Oracle, in future releases, improves the statistics gathering for collections that can be used in DML, rather than just using the default based on the block size.
     By the way, are you aware of why it uses the default based on the block size? Is it because it is the smallest granular unit which Oracle provides?
    Regards,
    B

  • Automatic PR creation for stock items (Category - L)

    Hi All,
    My requirement is as follows:
    When I create PM orders for stock item, system generates a reservation number.
    But the actual qty in the system for such items is less than the planned qty, or NIL.
    How to create PR's for such items automatically by system ?
    Is MRP run required to be activated for such items (i.e: maintenance of MRP views in material master) or any other way to do it for stock items (L) ?
    If so, what all fields need to be filled in material master?
    Is there a way for doing MRP run only for material type ERSA (spare parts) or only against reservation numbers created through maintenance orders or against MRP controller?
    Kindly post your valuable feedback.
    Thanks in advance and best regards.
    Kannan J

    Hi,
         For automatic p.o creation, you can use reorder point.
    In material master mrp1 view, use mrp type VB, reorder level ex. 50 pc.
    Maintain lot size HB, maintain max stock level 100 pc.
    When the requirement comes, it will consume from stock. If the stock goes below the reorder point (50), it will create a planning file entry.
    When you run mrp (MD01), it will automatically create p.o.
    But without running MRP, the system cannot create the p.o.
    With that you can maintain your stock.
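    The reorder-point arithmetic described above can be sketched as follows. This is a simplified illustration in Python of the general technique, not SAP's MRP type VB / lot size HB behaviour in detail:

```python
def replenishment_qty(on_hand, reorder_point, max_stock):
    """Reorder-point check: when stock falls below the reorder point,
    propose an order that tops stock back up to the maximum level."""
    if on_hand < reorder_point:
        return max_stock - on_hand
    return 0

print(replenishment_qty(40, 50, 100))  # stock below 50 -> propose 60
print(replenishment_qty(60, 50, 100))  # above reorder point -> 0
```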
    Hope it will help.
    Regards,
    Dharma

  • Property Management software for OSX?

    Does anyone know of (or can recommend) quality property management software for OS X?  I'd prefer not to use Boot Camp / virtualization for Quicken Property Manager if at all possible.
    Thanks

    Google will be your friend as will MacUpdate or CNET Downloads.

  • MySite as a Publishing Site Collection (for Cross Site Publishing)

    Hello,
    Is it possible to connect the MySite in SharePoint with a Product Catalog? I have an Authoring Site Collection and a MySite Collection. In the MySite I have already activated the Publishing feature and then connected my catalog from the Authoring Site.
    After that I made a navigation from the Term Store Management Tool.
    The problem is that I can't see any changes, and no navigation at the MySite.
    Is it somehow possible to get it working?
    Thanks

    Hi Arnold,
    As MySite uses the My Site Host site template in SharePoint, the master page is different, and MySite uses a custom master page for rendering the navigation.
    If you want to use Managed Navigation for MySite, then you need to create a custom master page for rendering the terms in navigation.
    Here is a similar thread for you to take a look:
    https://social.technet.microsoft.com/forums/sharepoint/en-US/5c48c32f-f102-4935-83d3-1ba06be072ec/using-managed-navigation-for-my-sites
    Here is a link for creating a custom master page for rendering the managed navigation:
    http://blogs.c5insight.com/Home/tabid/40/entryid/378/SharePoint-2013-Managed-Navigation-Lessons-Learned.aspx
    Best regards.
    Thanks
    Victoria Xia
    TechNet Community Support

  • What's a good way to manage custom schema for DS  5.1?

    What's a good way to manage custom schema?
    Custom Schema for Object Class and Attributes
    The reason I ask this is because there might be a need in the future to export that custom schema into a differently branded directory server. I just want to make this as painless as possible.
    Right now, I thought of 2 options
    1) Create my own LDIF file with my custom attributes and object classes, so if one day I need to export to another directory server, I can just copy that custom created LDIF file over. (Will this work?)
    2) Create a Java application using JNDI. What this Java app will do is read through an XML file and create those object classes and attributes on the fly. (Of course, the XML structure will be predefined by me, so that my Java app will be able to parse it correctly. Will this work?)
    Any more suggestions? I would like to hear more advice and suggestions.
    Also, I assume that will work even with replication. All I need to update is the master server, and the slaves will replicate automatically.
    Thank you very much! :)
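    For option 1, the custom LDIF could even be generated from a simple data structure rather than hand-written. A minimal Python sketch, where the OID, attribute name and syntax below are invented placeholders, not real schema elements:

```python
# Hypothetical custom attributes; the OID and name are placeholders.
ATTRS = [
    {"oid": "1.3.6.1.4.1.99999.1.1", "name": "myEmployeeCode",
     "syntax": "1.3.6.1.4.1.1466.115.121.1.15"},  # DirectoryString syntax
]

def attr_ldif(a):
    """Render one attribute definition in RFC 4512 attributeTypes style."""
    return ("attributeTypes: ( %s NAME '%s' SYNTAX %s SINGLE-VALUE )"
            % (a["oid"], a["name"], a["syntax"]))

ldif = "\n".join(attr_ldif(a) for a in ATTRS)
print(ldif)
```

    Keeping the definitions in one data file and generating per-vendor LDIF from it is what makes the later migration painless.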

    Demo: I'm using the nul character to represent the end of the word, so that the data structure can represent that "hell" and "hello" are both in the vocabulary:
    import java.util.*;

    class Node {
        private SortedMap<Character, Node> children = new TreeMap<Character, Node>();

        // 0 <= index <= word.length()
        private void add(String word, int index) {
            if (index == word.length()) {
                // The nul character marks the end of a complete word
                children.put(Character.valueOf('\u0000'), null);
            } else {
                char ch = word.charAt(index);
                Node child = children.get(ch);
                if (child == null) {
                    children.put(ch, child = new Node());
                }
                child.add(word, index + 1);
            }
        }

        public void add(String word) {
            if (word == null || word.length() == 0)
                throw new IllegalArgumentException();
            add(word, 0);
        }

        public String toString() {
            return children.toString();
        }
    }

    public class Example {
        public static void main(String[] args) throws Exception {
            Node root = new Node();
            root.add("hello");
            root.add("how");
            root.add("who");
            root.add("hell");
            System.out.println(root.toString());
        }
    }

  • Best Way of Exporting for iDVD when using FCP?

    What would anyone recommend in terms of getting the best output from FCP for making an iDVD?
    I usually export as a self-contained QT movie. Is there a better way?
    J.M. Prater

    There's an easier way... export a non-self-contained movie. The result will be the same... just a faster file export and a smaller file to boot.
    DVD SP will make much better-looking DVDs if you have it. The encoding done through Compressor is better than what can be done in iDVD.
    Jerry

  • What is best way to manage external storage for 160 GB Tiger iMac

    Unfortunately I bought my 160 GB G5 a month before the Intel iMacs came out. I never thought the teens would take to movie making the way they have. I am constantly at 150 GB plus and have filled a 300 GB Maxtor external with a whole-system backup image and then a whole boatload of iMovie files. I have purchased a new 500 GB WD, but it is still unopened. I have also purchased but not installed Leopard. What is the recommended way to manage the external storage of all of my video - they want to make more movies! Would Time Machine work by keeping movies on for a couple of weeks and then deleting them, so I would have access through that interface? Help, I can't afford a newer iMac yet.

    If they make movies, you will eat space. Be grateful, because if they get real good, they will be able to buy you a room full of storage. LOL.
    Seriously, you do have a problem. Even though you could exchange the internal drive for a much larger one, they will keep filling them. Movie files take a lot of real estate. Just keep daisy-chaining until you reach your tolerance limit, and if they want to make a movie after that, burn any you must delete to DVD, and then erase it.
    Of course, remember: any file that does not exist in at least 2 copies is at risk. You may want to burn some of those to DVD as backups anyway. DVDs are very cheap.

  • What's the best way to manage Apple IDs for multiple devices?

    Hi,
    We have
    a shared Macbook air
    a shared iPad
    my iPhone
    his iPhone
    We want to put one ID on the Macbook and iPad that we can use to have the same iMessage on both and that we can put a card on and use in the iTunes store etc.
    We also want to have our own iMessages on our iPhones, plus be able to use the account with the cards on them to purchase and share across devices...
    What's the best way to manage them all?

    Welcome to the Apple Community.
    iTunes is straightforward: just use the same ID on all of them.
    What exactly do you want to see in Messages on the Mac and iPad: messages combined from each of your phones, or a different account just for the two of you?
    You should also think about what you want in Calendars, Contacts, etc. on your shared devices.

  • What's the best way to export for print?

    I usually upload images to my local photo-printing shop. What's the best way to do that with Lightroom? Since Lightroom does the final sharpening in the Print module, how do I get that into the exported file when there is no export function in the Print module? Also, how do I get the proper dpi and sizing info into the photo for the print shop? Thanks.

    There is no good way to prepare a file for printing by a commercial printer using Lightroom alone. You need to export the image without any resizing or color space conversion, and then use some other program of your choice to 'finish' the file.
    It's just plain missing functionality... Lightroom assumes that you will be printing photos yourself and it is missing a whole lot of stuff in order to support external printing, including:
    a) Resize to exact pixel dimensions. Many labs require an exact pixel count for a given print size, so for example for an 8x10 they need a 2400 x 3000 pixel image. Can't be done in Lightroom.
    b) Trim adjustment. If you are doing full bleed photos (no margin), you might need to provide about 0.05 inch of 'trim space' to compensate for image loss during printing. Can't be done in Lightroom.
    c) Exporting to a specific printer profile. Can't be done.
    d) Adding margins, text, etc. Many times my clients want a 1/4 inch margin in their photos to facilitate framing, or I put a dim watermark type text in the bottom corner. Can't be done.
    My solution: do most processing in Lightroom, do an export, and then do a second export via qImage on a PC. It does all of the above plus much more. It's just a matter of using the right tool for the job, and Lightroom has no facilities at all for printing to a file.
    I have been asking for "Print to File" ever since I was under NDA during the betas.
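The pixel and trim numbers in points (a) and (b) are simple arithmetic; a minimal Python sketch (the 300 dpi value and 0.05-inch trim allowance come from the examples above, but your lab's requirements may differ) shows how they are derived:

```python
# Compute the exact pixel dimensions a lab needs for a given print size,
# plus the extra "trim space" needed for full-bleed (borderless) prints.

def print_pixels(width_in, height_in, dpi=300):
    """Exact pixel count for a print of the given size at the given DPI."""
    return round(width_in * dpi), round(height_in * dpi)

def with_trim(width_in, height_in, dpi=300, trim_in=0.05):
    """Pixel count including a trim allowance on every edge (full bleed)."""
    return print_pixels(width_in + 2 * trim_in, height_in + 2 * trim_in, dpi)

# An 8x10 at 300 dpi is exactly 2400 x 3000 pixels, as mentioned above.
print(print_pixels(8, 10))   # (2400, 3000)
# With 0.05 in of trim on each edge, slightly more image is needed.
print(with_trim(8, 10))      # (2430, 3030)
```

The point is that a lab wants these exact counts, not "close enough" dimensions, which is why a resize step outside Lightroom is needed.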

  • Exporting a HD clip lossless for stock footage.

    Hi all,
    I want to start submitting some of my archive footage to a stock site. I want to trim the clips in FCP but I'm not sure of the best way to export them without losing any quality. It's HD footage at 1920 x 1080 (PAL).
    Any advice on the best way to do this would be greatly appreciated.
    Many thanks,
    Simon.

    I have experience both with buying and providing clips. The most common format is Photo JPEG, and it is pretty darn good. If you weren't impressed, you did something wrong. Some people like Photo JPEG at 75%, but I have always provided Photo JPEG at 100%.
    Some do ProRes, but that is RARE. Animation too, but also rare, due to file size. H.264 is not unheard of, but is so uncommon that it is barely a blip.  I have only seen that format a handful of times. 
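Outside of FCP's own export dialog, the same Photo JPEG encode can be sketched with ffmpeg (a hedged example: the filenames are placeholders, and `mjpeg` is ffmpeg's Photo JPEG-compatible encoder, not something FCP itself uses):

```shell
# Re-encode a trimmed 1920x1080 clip as Photo JPEG in a QuickTime container.
# -q:v 1 is the highest quality on mjpeg's quality scale (roughly "100%");
# -an drops the audio track, which stock sites generally do not want.
ffmpeg -i input_1080p.mov -c:v mjpeg -q:v 1 -pix_fmt yuvj422p -an output_photojpeg.mov
```

This is only an illustration of the format choice; exporting directly from FCP with the Photo JPEG codec at 100% achieves the same result.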
