Keywording any better in 1.0?

Compared to the checklist keywording in Bridge, I've found Lightroom's drag-and-drop system, uhm, a drag.
In Bridge, you can create categories of keywords -- People, Places, Trees, etc. -- and add specific keywords within those categories -- Fred, London, Birch. You can access the list anytime, select multiple photos and simply tick a checkbox beside each keyword to add it to your selects. Fast, clean, easy. No tugging little words across the screen.
I had hoped, even assumed, Adobe would incorporate this feature into 1.0 because it's so superior (in my opinion) to dragging. But I've seen no mention of it yet. Anyone know?

Here is a keywording question that it seems those chatting about this subject will know the answer to. Where do the keywords one assigns in Lightroom reside? In Bridge they are written into the metadata of the image file itself, so when one keywords a file in Bridge, the keywords will be recognized by other programs that read metadata keywords. But there is so much talk about Lightroom being a database-driven program that I am afraid those keywords just reside in a Lightroom database and nowhere else. For example, if I keyword a folder full of files in Lightroom and then open that same folder in Bridge, will the keywords still be there? Or will I have to use Lightroom forever to access them? Any light on this subject will be greatly appreciated. Thanks
Lito Tejada F

Similar Messages

  • IMAQ Resample performance. Any better choice for 50% downsample? (average 2x2 → 1 pixel)

    My video source is a 4 Mpixel (2k x 2k resolution) USB3 camera. This is displaying a live image OK in LabVIEW at 45 fps using only 20% CPU. So far, so good.
    I added an "IMAQ Resample" block to downsize this to a 1024 x 1024 image. That works with almost no additional processing time if I select "Zero Order" interpolation (e.g. plain subsampling to the value of the nearest pixel). However, I want to average each 2x2 block (4 pixels) in the input image into 1 output pixel. I *think* that is the effect of selecting Bi-Linear interpolation. Doing that works, but takes about 45% of CPU. I want to do some other processing but am worried I will quickly run out of CPU time and start dropping frames.
    Is there any better way to do this simple 50% downsize (2x2 average) that would take less CPU overhead, or is this the best way?

    Hi jbeale1,
    In NI-MAX (Measurement & Automation Explorer), select your camera. Under the 'Acquisition Attributes' tab, do you see an option to change the Video Mode of your camera to a different resolution? If your camera supports it, it would be more efficient to change the resolution there.
    If not, here is a little more info regarding the IMAQ Resample VI:
    http://zone.ni.com/reference/en-XX/help/370281P-01/imaqvision/imaq_resample/
    You are correct, the Bi-Linear option uses a more intensive interpolation technique which is why it is more taxing on your CPU. I hope this is helpful.
    Robert S.
    Applications Engineer
    National Instruments
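The 2x2 block average the poster wants is cheap to express outside LabVIEW too; here is a minimal NumPy sketch (the small 4x4 `frame` is made up for illustration; a real frame would be the 2k x 2k camera image):

```python
import numpy as np

# Hypothetical small frame standing in for the 2k x 2k camera image.
frame = np.arange(16, dtype=np.uint16).reshape(4, 4)

# Average each non-overlapping 2x2 block into one output pixel:
# split each axis into (block, within-block) pairs, then average
# the two size-2 axes in one pass.
h, w = frame.shape
small = frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(small.shape)  # half the resolution in each dimension
```

This is the exact 4-pixel average described above; bilinear resampling at a 0.5 scale factor computes a weighted interpolation instead, which is close in effect but does more arithmetic per output pixel, which may explain the CPU cost.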

  • How to create a function with dynamic sql or any better way to achieve this?

    Hello,
    I have created the SQL query below, which works fine; however, when it is created as a scalar function, it throws the error "Only functions and extended stored procedures can be executed from within a function." In the code below, the first cursor reads all client database names and the second cursor reads client locations.
    DECLARE @clientLocation nvarchar(100), @locationClientPath nvarchar(MAX);
    DECLARE @ItemID int;
    SET @locationClientPath = char(0);
    SET @ItemID = 67480;
    -- building dynamic SQL to replace the database name at runtime
    DECLARE @strSQL nvarchar(MAX);
    DECLARE @DatabaseName nvarchar(100);
    DECLARE @localClientPath nvarchar(MAX);
    DECLARE databaselist_cursor CURSOR FOR
        SELECT [DBName] FROM [DataBase].[dbo].[tblOrganization]
    OPEN databaselist_cursor
    FETCH NEXT FROM databaselist_cursor INTO @DatabaseName
    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT 'Processing DATABASE: ' + @DatabaseName;
        SET @strSQL = 'DECLARE organizationlist_cursor CURSOR
            FOR SELECT ' + @DatabaseName + '.[dbo].[usGetLocationPathByRID]([LocationRID])
            FROM ' + @DatabaseName + '.[dbo].[tblItemLocationDetailOrg]
            WHERE ItemId = ' + CAST(@ItemID AS nvarchar(20));
        EXEC sp_executesql @strSQL;
        -- Open the cursor
        OPEN organizationlist_cursor
        SET @localClientPath = '';
        -- go through each location path and build the list
        FETCH NEXT FROM organizationlist_cursor INTO @clientLocation
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SELECT @localClientPath = @clientLocation;
            SELECT @locationClientPath = @locationClientPath + @clientLocation + ',';
            FETCH NEXT FROM organizationlist_cursor INTO @clientLocation
        END
        PRINT 'current database client location: ' + @localClientPath;
        -- Close the cursor
        CLOSE organizationlist_cursor;
        DEALLOCATE organizationlist_cursor;
        FETCH NEXT FROM databaselist_cursor INTO @DatabaseName
    END
    CLOSE databaselist_cursor;
    DEALLOCATE databaselist_cursor;
    -- Trim the last comma from the string
    SELECT @locationClientPath = SUBSTRING(@locationClientPath, 1, LEN(@locationClientPath) - 1);
    PRINT @locationClientPath;
    I would like to create the above query as a function so that the return value can be used in another query's SELECT statement. I am using SQL 2005.
    I would like to know if there is a way to make this work as a function, or any better way to achieve this?
    Thanks,

    This is very simple: we cannot use dynamic SQL from user-defined functions written in T-SQL. This is because you are not permitted to do anything in a UDF that could change the database state (as the UDF may be invoked as part of a query). Since you can do anything from dynamic SQL, including updates, it is obvious why Microsoft does not permit dynamic SQL there.
    In SQL 2005 and later, you could implement your function as a CLR function. Recall that all data access from the CLR is dynamic SQL. (Here you are safeguarded, so that if you perform an update operation from your function, you will get caught.) A word of warning though: data access from scalar UDFs can often give performance problems, and it's not recommended either.
    Raju Rasagounder Sr MSSQL DBA
          Hi Raju,
           Can you help me write the CLR version of my function above? I am a newbie to SQL CLR programming.
           Thanks in advance!
           Satya
              

  • Is there any better way for updating table other than this?

    Hi all, I need to update a row in a table that requires me to search for it first (the table will have more than a hundred thousand rows). Now, I am using a LOV that returns the primary key of the row; I put that primary key in the DEFAULT_WHERE property of the block and use the execute-query command to fetch the row that needs updating. This works fine except that it requires two query trips per update (the LOV and the execute_query). Is there any better way of doing this? This update is the main objective of my application and I need to use the most efficient way to do it, since we need to update many records per hour.

    Thanks Rama, I will try your method. Others: how do I query the row instead of the primary key? I thought that querying by primary key was faster due to the index?
    BTW, what do people do when they need to update a table using Forms? I have been using the LOV-then-execute-query approach since I first started developing forms. But I am building a bigger database recently, and I am starting to worry about multiple query trips to the DBMS.
    FYI, my table will have up to a million rows in it. Each row will be very active (updated) within 1-2 weeks after its creation. After that it will exist for record purposes only (select only). The active rows are probably less than 1% of all the rows.

  • Is there any better option than this slow query?

    Hi all,
    I want to find out whether ticket numbers in a given range, say Variable1 - Variable2, already exist or are missing from my ticket master table, which has a million records, and the number grows over time.
    For example, I want to find out, in the range 30000 - 50000,
    if there are any missing numbers, it should report them:
    34567
    45678 etc. etc.
    I wrote a FOR loop and I'm checking one by one against the ticket master table using select count(*) from ticket_master where ticket_no = var, which is time consuming, and the server becomes slow when I issue this query. My ticket_master.ticket_no column is indexed.
    Any better idea? Please advise.

    I am not sure I understand your problem correctly.
    Here is some test data:
    create table ticket_masters ( ticket_no number );
    exec for i in 1..1000 loop insert into ticket_masters values (round(i/0.97)); end loop
    select 200-1+rownum missing from ticket_masters where rownum<=300-200
      minus
    select ticket_no from ticket_masters where ticket_no between 200 and 300
       MISSING
           217
           250
           283
    I am selecting rownum from ticket_masters, but I could select from anything actually: a pl/sql table, dual group by cube(1,1,1,1,1,1,1,1,1,1), all_objects, ...
    Could you please do a desc ticket_masters to show me your datatype, and also select a few ticket_no values?
    Regards
    Laurent
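Laurent's MINUS query is just a set difference between the candidate range and the ticket numbers already present; the same idea sketched in Python (the `existing` set here is made up for illustration; in practice it would come from one range query against ticket_master):

```python
# Hypothetical ticket numbers already present in the table.
existing = {200, 201, 202, 204, 207}

# Candidate range to audit, inclusive on both ends.
lo, hi = 200, 208

# One set difference replaces the per-number count(*) loop.
missing = sorted(set(range(lo, hi + 1)) - existing)
print(missing)
```

The point of both versions is the same: scan the range once and diff it against the candidates, instead of issuing one indexed lookup per ticket number.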

  • If I host with Business Catalyst, will search engines find me any "better" or worse than GoDaddy?

    If I host with Business Catalyst, will search engines find me any "better" or worse than GoDaddy?
    I am new to Muse and love it! Not sure if the hosting site matters one way or the other.

    Liam has some great points. Another thing to consider is that Google (just talking about one search engine at the moment) more than likely knows what a BC site looks like and is made of, and would know the best ways to index it, assuming you use some of the features of BC and not just straight-up HTML.
    GoDaddy is straight-up HTML, so as far as indexing goes I think BC has an advantage, as it is a known system, much like WordPress.
    As far as IP blocks, bad neighbours, etc., only Google will know that information, and it's not easy to say which is better.
    For example, if someone on BC spams their website everywhere or engages in dodgy SEO practices, spambots, etc., their site is going to be pushed down; if you happen to be on the same IP or close to that site, you will be in a "bad neighbourhood" and it may affect your site in the short term. This is the case for ANY hosting solution, so take it all with a grain of salt.

  • Quality of photos BAD in iDvd5- is iDvd 6 any better? Jaggies, blurring...

    I've found that the quality of photos is BAD in iDvd5- and I want to know is iDvd 6 any better? Jaggies, blurring, pixelation... nothing NEAR the quality of the original photos. I've read a bunch of old posts on this topic, and I know I can't expect pristine quality from my 10 megapixel camera files, but I CAN expect something better than the crap I'm getting out of iDVD. The slideshows I'm getting in iMovie are great, I just can't burn to DVD for clients. I've tried making them in iMovie but I have the same problem- plus I'd like to bypass iMovie altogether as all my images are stills, no video (for this project anyway).
    I've tried everything other people have recommended- saving to quicktime and then importing or dragging into iDvd, applying ken burns first so it won't render, changing the quality settings... anything that anyone has said might help. Still getting crappy DVDs. I think I've wasted about 20 DVDs by now.
    I'll buy the new iLife pack if iDvd 6 is any better (currently running iDvd5). I'm hearing that this is apple's problem, not ours, and none of the fixes I've been trying are working. If I can't find a solution soon I'm going to have to buy some non-apple software.

    I've found that the quality of photos is BAD in iDvd5- and I want to know is iDvd 6 any better? Jaggies, blurring, pixelation... nothing NEAR the quality of the original photos. I've read a bunch of old posts on this topic, and I know I can't expect pristine quality from my 10 megapixel camera files, but I CAN expect something better than the crap I'm getting out of iDVD. The slideshows I'm getting in iMovie are great, I just can't burn to DVD for clients.
    Your expectations are probably too high for what can be achieved using today's DVD technology. Read Preparing images for DVD slideshows at http://docs.info.apple.com/article.html?path=iDVD/6.0/en/17.html for information about image size requirements for iDVD. 'Throwing' too large a file at iDVD can actually make the final image quality poorer than using a smaller image size as recommended in the article, because of QuickTime resizing issues.
    Today's NTSC DVDs are less than 640x480 pixels. That's all you get. Period.
    Roxio's Toast Titanium 8 offers a photo disc option that produces an auto-running slideshow on an XP machine and only requires a double-click to run under Mac OS 10.4. This approach puts your full-res images on the disc, where they are available to anyone who has the disc. Since this approach is COMPUTER based and NOT DVD based, image quality on a computer screen is very good.
    I have recently put together several large slideshows with FotoMagico. The default setting was less than optimum, but with a little playing around I was able to produce DV movies with excellent image quality. After iDVD compression, the image quality was better than I could get with a slideshow created in iDVD directly. Image sharpness was good. However, it is still only 640x480 resolution and thus hardly high resolution.
    It would be possible to put the DV movies I created in FotoMagico on a DVD data disc. I suspect users would have to then copy the DV movie to their hard drive in order to get a high enough data rate for good playback.
    Bottom line: for maximum image quality on playback stay away from creating a video DVD and use Toast 8 to create a photo disc that is played back on a computer.
    If you need a video DVD, FotoMagico (or Photo to Movie) will probably give you better slideshow image quality than iDVD. But 640x480 is still only 640x480!
    F Shippey

  • Will VS 2015 offer any better JavaScript support for IntelliSense compared to 2013 or 2012?

    I've started using AngularJS, and the IntelliSense / code-completion features are kinda blah in 2012 and don't appear to be much better in 2013. Is there hope for 2015?
    Other developers doing a lot of front-end work with JavaScript, especially with some of the newer frameworks, are using tools like WebStorm for the front-end code and leaving VS for the back-end code. I'd rather not use two different IDEs if possible.
    Luckily, for now the front-end code isn't that extensive on my projects, but that is starting to change.
    eg.
    http://wildermuth.com/2014/12/13/Visual_Studio_and_WebStorm_Am_I_Mad

    Hi shiftbit,
    Thank you for posting in MSDN forum.
    Based on your issue, as far as I know VS2015 does offer JavaScript support for IntelliSense. For more information, please see the JavaScript Editor support sections at the following links:
    https://www.visualstudio.com/en-us/news/vs2015-preview-vs.aspx
    https://www.visualstudio.com/en-us/news/vs2015-vs.aspx
    In addition, regarding AngularJS and IntelliSense, please see the following blog:
    http://blogs.msdn.com/b/scicoria/archive/2015/02/27/angularjs-intellisense-nuget-package-added.aspx
    Since VS2015 is still a preview version rather than the final released version, I suggest that you submit this feedback to the Microsoft Connect feedback portal:
    http://connect.microsoft.com/VisualStudio/feedback/CreateFeedback.aspx
    Microsoft engineers will evaluate it seriously. After you submit the feedback, you can post the link here, which will be beneficial for other members with a similar issue.
    Thanks for your understanding.
    Best Regards,

  • Are the books any better in Aperture vs iPhoto?

    Are the books any better in Aperture vs iPhoto?  Is it worth getting Aperture or is it a glorified version of iPhoto?

    Do you know if the images are printed right on the cover of the book or if it is just a sleeve?
    Look at this page:
    http://www.apple.com/aperture/resources/print-products.html
    It is printed on the cover and the sleeve.

  • How to find out whether my iPhone 3GS is officially unlocked (factory unlocked) or "made" unlocked? Can I upgrade its OS to iOS 5 even if my phone is "made" unlocked? How do I upgrade its OS? Are there any better ways to do it?

    How do I find out whether my iPhone 3GS (OS version 3.1.3) is officially unlocked (factory unlocked) or "made" unlocked? Can I upgrade its current OS 3.1.3 to iOS 5 even if my phone is not officially unlocked? How do I upgrade its OS? Are there any better ways to do it?
    Thanks,
    PRANAJ

    Depends where you obtained the iPhone from and its original supplier.
    If the iPhone is an authorised unlock (approved by the carrier) or was
    purchased from Apple as an unlocked iPhone, updating the iOS
    will have no effect on the iPhone and its lock status.
    HOWEVER, if the software has been tampered with to remove the lock,
    updating the iOS will lock the iPhone back to the original carrier who holds the lock.
    To find out the status of your iPhone, you could call Apple Support,
    and they may tell you if the iPhone is locked or not, and if it is, which carrier.

  • Garageband struggling with Plugins. Is Logic going to be any better?

    Hey all
    I've got what I thought was a pretty decent iMac (3.06 GHz Intel Core 2 Duo, 4GB 1067 MHz DDR3, 500GB with 120GB of free space) for DAW music production, but GarageBand is seriously struggling and becomes very slow when I use plugins with it.
    Does anyone know if Logic handles plugins any better/quicker?
    I'm only running a couple of plugins on a couple of tracks.
    Thanks

    It depends on the plugin.
    The plugins that come with Logic run very efficiently with it. Some 3rd party plugins are CPU hogs.
    It's not a GarageBand vs. Logic thing; it really boils down to the plugin and, of course, your computer.

  • Hi, is there any programme to replace iDVD, which is no longer supported with Lion? Toast 11 had terrible reviews. Is Aimersoft DVD Creator any better?

    Hi, is there any programme to replace iDVD, which is no longer supported with Lion? Toast 11 had terrible reviews. Is Aimersoft DVD Creator any better?

    Not quite true. iDVD IS compatible with Lion, but does not come with Lion. I have a fully updated version of Lion 10.7.4, and I installed a previous copy of iDVD from a backup of Snow Leopard; iDVD works fine. You just need to find a copy of it in iLife 11 or somewhere.
    Hope this helps

  • I want to put my .m2ts movies (Sony HD recorder) on my iPad 3. It looks like they need to be converted to H.264 format. The question is: will QuickTime Pro work, or are there any better software solutions? Running an XP system. Thanks

    I want to put my .m2ts movies (Sony HD recorder) on my iPad 3. It looks like they need to be converted to H.264 format. The question is: will QuickTime Pro work, or are there any better software solutions? Running an XP system. Thanks

    You could try Handbrake, it works quite well.

  • Is short any better than int on cheap mobile phones?

    Hello,
    I'm writing an open source 6502 emulator (http://jbit.sourceforge.net/). I've written the CPU code very quickly to focus on the rest of the system. Sooner or later I will refactor the CPU code and in the process I will speed it up a bit.
    The point I'm not so sure is this:
     class VM {
       byte[] memory;
       short get(int address) {
         return (short)(memory[address] & 0xFF);
       }
     }
     class CPU {
       VM vm;
       short accumulator;
       void step() {
          accumulator = (short)((accumulator + vm.get(address)) & 0xFF);
       }
     }
     The code is simplified but the relevant information should be here. As a side note, replacing VM.get with direct access to memory is out of the question (the real get is more complex than that).
     The 6502 is an 8-bit CPU. The intuitive solution would be to use byte for accumulator/VM.get, and that was my first version (before releasing). But some code was a pain to get right (I've found working with signed bytes very inconvenient), and in the test phase I decided to look for an easier solution to quickly get a working CPU. If I was developing for J2SE I would have gone for ints, but since I don't know CLDC very well I looked at other emulators and found one that used short. It seemed a reasonable compromise; not as convenient as int, but perhaps faster on cheap phones (16-bit CPU?). But now that I'm planning a refactoring, I'm checking this assumption. Of course, I would rather work with ints than with shorts (let alone bytes).
     Does anyone have insight into whether shorts are actually faster than ints on most real low-end devices (e.g. CLDC 1.0, 2-3 years old, 50-100 USD/Euros at the time of release)? By how much? Or is byte so much faster as to be worth the pain?
    Also note that VM.get is part of a public interface (see http://jbit.sourceforge.net/doc/hello.html); I can change it, but I need some solid reasons to do so.
    Thanks,
    Emanuele

    Just to let you know.
    I've looked into the generated bytecode and I've reviewed the JVM specification. My impression is that short is pretty much useless, unless you want to save space in fields.
    The J2ME VM uses basically the same class format as the J2SE VM and thus has very limited support for shorts (and bytes). Even assuming that a mobile phone is using a 16-bit CPU, I doubt that the JVM implementation would be smart enough to figure out when 16-bit operations could be used.
    Of course, if you know any better I would appreciate your comments...
    Emanuele

  • Is there any better and faster way to copy...

    Can anyone tell me a better and faster way to copy?
    InputStream in = null;
    OutputStream out = null;
    try {
         in = new FileInputStream(src);
         out = new FileOutputStream(dest);
         byte[] buf = new byte[1024];
         int len;
         while ((len = in.read(buf)) > 0) {
              out.write(buf, 0, len);
         }
    } catch (Exception e) {
    }

    Here's a small program as a sample and for testing. Just ran a few tests with a file of 1.5 MB (buffered slightly faster) and a file of 45 MB (NIO much faster) ...
    import java.io.*;
    import java.nio.channels.*;

    public class Copy {
         public static void main(String[] args) {
              if (args.length == 3) {
                   File from = new File(args[1]);
                   File to = new File(args[2]);
                   if (from.exists()) {
                        long start = System.currentTimeMillis();
                        try {
                             if (args[0].equals("nio")) {
                                  copyNIO(from, to);
                             } else {
                                  copyBuffered(from, to);
                             }
                        } catch (Exception ex) {
                             ex.printStackTrace();
                        }
                        System.out.println("Time: " + (System.currentTimeMillis() - start) + " ms");
                   }
              }
         }

         private static void copyBuffered(File from, File to) throws IOException {
              FileInputStream fis = null;
              FileOutputStream fos = null;
              try {
                   fis = new FileInputStream(from);
                   fos = new FileOutputStream(to);
                   BufferedInputStream in = new BufferedInputStream(fis);
                   BufferedOutputStream out = new BufferedOutputStream(fos);
                   byte[] buf = new byte[8192];
                   int r = 0;
                   while ((r = in.read(buf)) > 0) {
                        out.write(buf, 0, r);
                   }
                   out.flush(); // flush buffered bytes before the underlying stream is closed
              } finally {
                   if (fis != null) {
                        try {
                             fis.close();
                        } catch (Exception ex) {}
                   }
                   if (fos != null) {
                        try {
                             fos.close();
                        } catch (Exception ex) {}
                   }
              }
         }

         private static void copyNIO(File from, File to) throws IOException {
              FileInputStream fis = null;
              FileOutputStream fos = null;
              try {
                   fis = new FileInputStream(from);
                   fos = new FileOutputStream(to);
                   FileChannel chin = fis.getChannel();
                   FileChannel chout = fos.getChannel();
                   long size = from.length();
                   long total = 0;
                   while (total < size) {
                        // transfer from the current position, not always from 0
                        total += chin.transferTo(total, size - total, chout);
                   }
              } finally {
                   if (fis != null) {
                        try {
                             fis.close();
                        } catch (Exception ex) {}
                   }
                   if (fos != null) {
                        try {
                             fos.close();
                        } catch (Exception ex) {}
                   }
              }
         }
    }
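For comparison only, the buffered half of the Java program above maps onto Python's standard library like this (a sketch; `src` and `dst` are placeholder paths):

```python
import shutil

def copy_buffered(src, dst, bufsize=8192):
    # Read/write loop equivalent to the copyBuffered method above:
    # copyfileobj reads up to bufsize bytes at a time and writes them out.
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout, bufsize)
```

The NIO transferTo path has a rough analogue in `shutil.copyfile`, which on recent Python versions can use platform fast-copy calls (e.g. `os.sendfile` on Linux) instead of a user-space buffer loop; the distinction "buffered copy vs. kernel-side transfer" is the same one the timings above illustrate.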

Maybe you are looking for

  • Receiver file adapter - support for attachments

    Hi, Is it possible that the receiver file adapter can process a message with an attachment and generate two files, one for the main payload and the other for the attachment? Thanks, Amol

  • Use of Singleton pattern in Distributed environment

    Can somebody say why it is not advisable to use the Singleton pattern for developing client-server applications?

  • Automatic User Device Affinity - Audit logs retention

    Hello, We have problems on generating primary user info on a lot Computers and we suspect that problem is because audit logs are kept for too short time. So the config is following: 1) User device affinity threshold (minutes): 2880 2) User device aff

  • Saving state within regions

    Hi I have a form page with 2 regions; the first region has fields like name, surname... You filled in your details name etc and region 2 is hidden at this point and then you press the button 'Continue' this button then branches to the same page and s

  • Number range error in Assessment cycle in CO-PA

    Dear All, During performing "Actual Assessment cycle in CO-PA" , system gives the error message "An error occurred when the SAP System attempted to determine the document number. No interval was found for number range object RK_BELEG, sub-object 1000