Collections Supervisor and Statistics

How do you access the statistics for the Collections Supervisor? If I execute the program UDM_WL_STATISTICS, no data comes up. Is this done within the UDM_SUPERVISOR transaction? If so, how?

What stats are you after?
Most metric-based reporting should be done via BW.
You might find your Partner has some standard ABAP reports that you could purchase from them - this is a service we offer.
Within the worklist you can view open items and actioned items, so during the day you can see how productive the team has been and what is outstanding.
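If you just need quick counts outside BW, note that the worklist items sit in plain database tables you can query directly. A hedged sketch - UDM_WL_ITEM is the standard Collections Management worklist item table, but the column names below are placeholders you should verify in SE11 first:

SELECT processor,            -- placeholder column: check SE11
       status,               -- placeholder column: check SE11
       COUNT(*) AS items
  FROM udm_wl_item
 GROUP BY processor, status;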

Similar Messages

  • Collecting Process Chain Statistics

    Hi anyone,
I'm looking to write some ABAP code to collect process chain statistics so they can be evaluated coherently rather than by looking at the process chains individually. I'll be looking to identify variances in # of records, load times, etc. so I can send out alerts.
     I've been able to collect a lot of process chain process information so far using existing API and FM calls. I'm able to get down to the process log level so far, but I'm trying to think about how to go about collecting the actual key figures I need.
I'm looking to capture things like the following:
- Statistics of each process chain and related process chain process, e.g. I need to know that my ODS activation started at 10:00 am, activated 10,000 records, and ended at 10:01 am (i.e. how long it took).
    - I need to know that my data load loaded 1,000 records and how long it took.
- I need to know that my rollup compressed x number of records and how long it took, etc.
    I know the BI Administration Cockpit in 2004s might provide some/most of this information, but it isn't an option right now.
The way I see it, I really have only one option, with a second potential option:
    1. The most likely option is to write a program that goes out and collects the statistics for all the processes, executing the program on a scheduled basis.
    2. Modify/copy the process chain process classes to somehow output the statistics I need. No idea how I would do this yet.
    So again, I am currently down to the process-level logs and just don't know how to collect the statistical key figures I need, like how many records were loaded, compressed, activated, etc. I'm thinking of somehow parsing the logs to get the key figures, but that seems like it would be difficult.
    Anyone have any suggestions or ideas how to collect this information so I can report on it?
    I posted this here because I thought the bw abappers would frequent this area.  Also, If anyone thinks this would be better suited in another forum then please suggest.
    Thanks,
    Ken Murray

Do you subscribe to BW Expert?
    There is an article a person posted specifically on this...
    BW Expert Online: http://www.bwexpertonline.com/
    "Identify Failed Data Loads with This Check Tool"
    Thanks.
    /smw

  • Performance Problems - Index and Statistics

    Dear Gurus,
I am having problems losing indexes and statistics on cubes. It seems my indexes are too old, which in fact they are not: they were created just a month back, and we check indexes daily, yet the check returns RED on the Manage tab.
    please help

Dear Mr Syed,
    The solution steps I mentioned in my previous reply already explain the so-called re-org of tables; however, to clarify more on that issue:
    Occasionally, the Oracle Cost-Based Optimizer may calculate the estimated cost of a full table scan as lower than that of an index scan, although the actual runtime of an access via the index would be considerably lower than the runtime of the full table scan. Some points are imperative to consider in order to improve performance in problem areas such as extensive running times for change runs and aggregate activation and fill-up.
    Performance problems based on a wrong optimizer decision would show that something serious is missing at the database level, and we need to re-org the degenerated indexes in order to improve overall performance and avoid daily manual (RSRV + RSNAORA) activities on almost identical indexes.
For re-organizing degenerated indexes, three options are available:
    1) DROP INDEX ..., and CREATE INDEX ...
    2) ALTER INDEX <index name> REBUILD (ONLINE PARALLEL x NOLOGGING)
    3) ALTER INDEX <index name> COALESCE [as of Oracle 8i (8.1) only]
    Each option has its pros and cons; option 2 seems to have the most advantages (a sketch follows the list below).
    Advantages of option 2:
1) Fast storage in a different tablespace possible
    2) Creates a new index tree
    3) Gives the option to change storage parameters without deleting the index
    4) As of Oracle 8i (8.1), you can avoid a lock on the table by specifying the ONLINE option. In this case, Oracle waits until the resource has been released, and then starts the rebuild. The "resource busy" error no longer occurs.
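A hedged sketch of option 2 on a hypothetical BW fact-table index (the index name and owner below are assumptions; run as the index owner or a DBA):
    -- Rebuild online (no table lock, Oracle 8i+), in parallel, minimal redo.
    -- "/BIC/FSALES~010" and SAPR3 are hypothetical names.
    ALTER INDEX "/BIC/FSALES~010" REBUILD ONLINE PARALLEL 4 NOLOGGING;
    -- Refresh optimizer statistics afterwards so the CBO costs the new tree:
    BEGIN
      DBMS_STATS.GATHER_INDEX_STATS(ownname => 'SAPR3',
                                    indname => '/BIC/FSALES~010');
    END;
    /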
I would still let the database tech team be the best judge and take the call on these.
    This modus operandi could be institutionalized for all affected cubes and their indexes as well.
    However, I leave the thoughts with you.
    Hope it helps
    Chetan
    @CP..

  • How can I exit a collection window and return to parent folder?

    I would "love" to have a place where I could ask the simple questions like this that come up and seem so simple to those who know, they don't mention the details I need like how to close a working collection window and move back to the parent folder of that collection.
    I've been stuck for two days (it happened before) in a window (library module) with a collection of 8 photos.  I cannot find any way to close that window or go back to the library collection of hundreds of photos from which it was chosen.  It's very frustrating to be stuck in this window, perhaps forever?
    How do I close these windows.  Isn't there a "back" button?
    I started in Photoshop 3, then 5, then 7, then stopped renewing due to difficulty learning PS until the Adobe CC became available with the many tutorials.  It was great learning how to make a collection, but the tutorials didn't mention how to get back to the main folder where the collection was chosen.  So again, I"m stuck at an even lower level of functioning than with photoshop alone.
    I had to copy a photo and open it separatly in Photoshop just to be able to work on it.  Yesterday I could "edit in Photoshop cc" with right click.  Today, it won't even do that.
    I suspect now two things are working to prevent me from working properly on my photos but have no clue how to fix this.

In the Library module's left-hand panel, there are different sections or tabs. Collections is one of those sections near the bottom of that panel. If you scroll up in that panel you should see other sections, one of which is "Folders". If that isn't showing, then right-click on the Collections section heading and make sure there are checkmarks on the sections that you want displayed. To return to the folder, go to the Folders section and find the folder in your list.
    Another way to return to a folder is to click on the source path shown above the filmstrip (the original reply included an illustration of this area).
    Doing so will present a list of folders that you have recently been inside of. You can click on one of those folders and return directly to it.

  • How to set a security group as primary site collection admin and secondary site collection admin using PowerShell in a SharePoint Online site - Office 365?

How to set a security group as primary site collection admin and secondary site collection admin using PowerShell in a SharePoint Online site - Office 365?

    Hi,
According to your description, my understanding is that you want to set a security group as the primary and secondary site collection administrator using PowerShell commands in Office 365.
    I suggest you use the command below to set the group as the site owner; it will then have site collection admin permission.
    Set-SPOSite -Identity https://contoso.sharepoint.com/sites/site1 -Owner [email protected] -NoWait
    Here are some detailed articles for your reference:
    https://technet.microsoft.com/en-us/library/fp161394(v=office.15)
    http://blogs.realdolmen.com/experts/2013/08/16/managing-sharepoint-online-with-powershell/
    Thanks
    Best Regards
    Jerry Guo
    TechNet Community Support

  • Declare and initialize a varray of collection Object and pass it as OUT Parameter

Hi,
    How do I declare and initialize a VARRAY of a collection object and pass it as an OUT parameter to a procedure?
Following are the object and VARRAY types I have created. I am trying to pass an EmployeeList VARRAY-type variable as an OUT parameter to my stored procedure, but it is not working. I tried different possibilities for declaring and initializing the VARRAY-type variable, but it did not work. Any help would be appreciated.
CREATE TYPE Employee IS Object
    (
          employeeId     Number,
          employeeName   VARCHAR2(31),
          employeeType   VARCHAR2(20)
    );
    CREATE TYPE EmployeeList IS VARRAY(100) OF Employee;
    /* Procedure execution block */
    declare
    employees EmployeeList;
    begin
    EXECUTE displayEmployeeDetails(100, employees);
    end;
    Thanks in advance,
    Raghu.

"but it is not working"
    What's the definition of not working?
    Error messages are always helpful.
    SQL> CREATE OR REPLACE TYPE Employee IS Object
      2  (
      3  employeeId Number,
      4  employeeName VARCHAR2(31),
      5  employeeType VARCHAR2(30)
      6  );
      7  /
    Type created.
    SQL>
    SQL> CREATE OR REPLACE TYPE EmployeeList IS VARRAY(100) OF Employee;
      2  /
    Type created.
    SQL> CREATE OR REPLACE PROCEDURE getEmployeeDetails (
      2    o_employees OUT employeelist
      3  )
      4  AS
      5  BEGIN
      6   o_employees := employeelist();
      7   o_employees.EXTEND;
      8   o_employees(1) := employee(1,'Penry','Mild Mannered Janitor');
      9  END;
    10  /
    Procedure created.
    SQL> set serveroutput on
    SQL> declare
      2   employees employeelist;
      3  begin
      4   getemployeedetails(employees);
      5   for i in 1 .. employees.count
      6   loop
      7    dbms_output.put_line(employees(i).employeeid||' '||
      8                         employees(i).employeename||' '||
      9                         employees(i).employeetype);
    10   end loop;
    11  end;
    12  /
    1 Penry Mild Mannered Janitor
    PL/SQL procedure successfully completed.
    SQL>
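Note the call style, which may be the original problem: EXECUTE is a SQL*Plus command, not PL/SQL, so it cannot appear inside BEGIN ... END. Inside an anonymous block, call the procedure directly, as the demo above does:
    -- Invalid: EXECUTE inside a PL/SQL block.
    --   begin
    --     EXECUTE displayEmployeeDetails(100, employees);
    --   end;
    -- Valid: a direct call.
    declare
      employees employeelist;
    begin
      getemployeedetails(employees);
    end;
    /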

  • How to use BULK COLLECT, FORALL and TREAT

There is a need to read, match, and update data from and into a custom table. The table would have about 3 million rows and holds key numbers. Based on a field value of this custom table, relevant data needs to be fetched from joins of other tables and updated in the custom table. I plan to use BULK COLLECT and FORALL.
    All examples I have seen do an insert into a table. How do I go about reading all values of a given field, fetching other relevant data, and then updating the custom table with the data fetched?
I defined an object type with specifics like this:
    CREATE OR REPLACE TYPE imei_ot AS OBJECT (
    recid NUMBER,
    imei VARCHAR2(30),
    STORE VARCHAR2(100),
    status VARCHAR2(1),
    TIMESTAMP DATE,
    order_number VARCHAR2(30),
    order_type VARCHAR2(30),
    sku VARCHAR2(30),
    order_date DATE,
    attribute1 VARCHAR2(240),
    market VARCHAR2(240),
    processed_flag VARCHAR2(1),
last_update_date DATE
    );
Now, within a package procedure, I have defined this:
    type imei_ott is table of imei_ot;
    imei_ntt imei_ott;
    begin
    SELECT imei_ot (recid,
    imei,
    STORE,
    status,
    TIMESTAMP,
    order_number,
    order_type,
    sku,
    order_date,
    attribute1,
    market,
    processed_flag,
last_update_date
    )
    BULK COLLECT INTO imei_ntt
    FROM (SELECT stg.recid, stg.imei, cip.store_location, 'S',
    co.rtl_txn_timestamp, co.rtl_order_number, 'CUST',
    msi.segment1 || '.' || msi.segment3,
    TRUNC (co.txn_timestamp), col.part_number, 'ZZ',
    stg.processed_flag, SYSDATE
    FROM custom_orders co,
    custom_order_lines col,
    custom_stg stg,
    mtl_system_items_b msi
    WHERE co.header_id = col.header_id
    AND msi.inventory_item_id = col.inventory_item_id
    AND msi.organization_id =
    (SELECT organization_id
    FROM hr_all_organization_units_tl
    WHERE NAME = 'Item Master'
    AND source_lang = USERENV ('LANG'))
    AND stg.imei = col.serial_number
    AND stg.processed_flag = 'U');
    /* Update staging table in one go for COR order data */
    FORALL indx IN 1 .. imei_ntt.COUNT
    UPDATE custom_stg
    SET STORE = TREAT (imei_ntt (indx) AS imei_ot).STORE,
    status = TREAT (imei_ntt (indx) AS imei_ot).status,
    TIMESTAMP = TREAT (imei_ntt (indx) AS imei_ot).TIMESTAMP,
    order_number = TREAT (imei_ntt (indx) AS imei_ot).order_number,
    order_type = TREAT (imei_ntt (indx) AS imei_ot).order_type,
    sku = TREAT (imei_ntt (indx) AS imei_ot).sku,
    order_date = TREAT (imei_ntt (indx) AS imei_ot).order_date,
    attribute1 = TREAT (imei_ntt (indx) AS imei_ot).attribute1,
    market = TREAT (imei_ntt (indx) AS imei_ot).market,
    processed_flag =
    TREAT (imei_ntt (indx) AS imei_ot).processed_flag,
    last_update_date =
    TREAT (imei_ntt (indx) AS imei_ot).last_update_date
    WHERE recid = TREAT (imei_ntt (indx) AS imei_ot).recid
    AND imei = TREAT (imei_ntt (indx) AS imei_ot).imei;
DBMS_OUTPUT.put_line ( TO_CHAR (SQL%ROWCOUNT)
    || ' rows updated using Bulk Collect / For All.');
    EXCEPTION
    WHEN NO_DATA_FOUND
    THEN
    DBMS_OUTPUT.put_line ('No Data: ' || SQLERRM);
    WHEN OTHERS
    THEN
    DBMS_OUTPUT.put_line ('Other Error: ' || SQLERRM);
    END;
    Now for the unfortunate part. When I compile the pkg, I face an error
    PL/SQL: ORA-00904: "LAST_UPDATE_DATE": invalid identifier
I am not sure where I am wrong. The object type has the last_update_date field, and the custom table also has the same field.
    Could someone please throw some light and suggestion?
    Thanks
    uds

    I suspect your error comes from the »bulk collect into« and not from the »forall loop«.
At first glance, you need to alias SYSDATE as last_update_date, and some of the other select items need to be aliased as well.
    But a simplified version would be:
    select imei_ot (stg.recid,
                    stg.imei,
                    cip.store_location,
                    'S',
                    co.rtl_txn_timestamp,
                    co.rtl_order_number,
                    'CUST',
                    msi.segment1 || '.' || msi.segment3,
                    trunc (co.txn_timestamp),
                    col.part_number,
                    'ZZ',
                    stg.processed_flag,
                    sysdate)
    bulk collect into imei_ntt
      from custom_orders co,
           custom_order_lines col,
           custom_stg stg,
           mtl_system_items_b msi
    where co.header_id = col.header_id
       and msi.inventory_item_id = col.inventory_item_id
       and msi.organization_id =
                  (select organization_id
                     from hr_all_organization_units_tl
                    where name = 'Item Master' and source_lang = userenv ('LANG'))
       and stg.imei = col.serial_number
       and stg.processed_flag = 'U';
    ...
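For reference, an untested minimal sketch of the whole pattern on a hypothetical table t and object type t_ot, mirroring the TREAT usage inside FORALL from the post above:
    create table t (id number primary key, val varchar2(10));
    create or replace type t_ot as object (id number, val varchar2(10));
    /
    declare
      type t_tab is table of t_ot;
      l_rows t_tab;
    begin
      -- bulk collect object instances built row by row from the table
      select t_ot(id, 'X') bulk collect into l_rows from t;
      -- forall update, reading the attributes back through TREAT
      forall i in 1 .. l_rows.count
        update t
           set val = treat(l_rows(i) as t_ot).val
         where id = treat(l_rows(i) as t_ot).id;
      dbms_output.put_line(sql%rowcount || ' rows updated.');
    end;
    /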

  • Which event classes should I use for finding good indexes and statistics for queries in an SP?

    Dear all,
I am trying to use Profiler to create a trace, so that it can be used as the workload in
    "Database Engine Tuning Advisor" for optimization of one stored procedure.
    Please tell me about the event classes which I should use in the trace.
    The stored proc contains three insert queries which insert data into a table variable.
    Finally, a select query is used on the same table variable, with one union of the same table variable, to generate a sequence for records based on certain conditions on a few columns.
    There are three cases where I am using the above structure for the SP, so there are three SPs; out of the three, I will choose one based on their performance.
    1) There is only one table with three inserts which go into a table variable, with a final sequence-creation block.
    2) There are 15 tables with 45 inserts which go into a table variable, with a final sequence-creation block.
    3) There are 3 tables with 9 inserts which go into a table variable, with a final sequence-creation block.
    In all the above cases the number of records will be around 5 lakh (500,000).
    The purpose is optimization of the queries in the SP:
    which event classes I should use for finding good indexes and statistics for the queries in the SP.
    yours sincerely

    "Database Engine Tuning Advisor" for optimization of one stored procedure.
    Please tel me about the Event classes which i  should use in trace.
    You can use the "Tuning" template to capture the workload to a trace file that can be used by the DETA.  See
    http://technet.microsoft.com/en-us/library/ms190957(v=sql.105).aspx
If you are capturing the workload of a production server, I suggest you not do that directly from Profiler, as that can impact server performance. Instead, start/stop the Profiler Tuning template against a test server and then script the trace definition (File --> Export --> Script Trace Definition). You can then customize the script (e.g. file name) and run the script against the prod server to capture the workload to the specified file. Stop and remove the trace after the workload is captured with sp_trace_setstatus:
    DECLARE @TraceID int = <trace id returned by the trace create script>
    EXEC sp_trace_setstatus @TraceID, 0; --stop trace
    EXEC sp_trace_setstatus @TraceID, 2; --remove trace definition
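If you lose track of the trace id, you can list the server-side traces first (sys.traces is the standard catalog view):
    SELECT id, path, status FROM sys.traces;  -- status 1 = running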
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • How does index fragmentation and statistics affect SQL query performance?

    Hi,
How does index fragmentation and statistics affect SQL query performance?
    Thanks
    Shashikala

How does index fragmentation and statistics affect SQL query performance?
    Very simple answer: outdated statistics will lead the optimizer to create bad plans, which in turn will require more resources, and this will impact performance. If an index is fragmented (mainly the clustered index, though it holds true for nonclustered as well), the time spent finding a value will be greater, as the query would have to search the fragmented index to look for the data, and the additional space will increase search time.
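A hedged sketch of how to actually check both factors for a table (dbo.MyTable is a placeholder; the LIMITED mode keeps the scan cheap):
    -- Fragmentation per index:
    SELECT i.name, ps.avg_fragmentation_in_percent, ps.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'),
                                        NULL, NULL, 'LIMITED') ps
    JOIN sys.indexes i
      ON i.object_id = ps.object_id AND i.index_id = ps.index_id;
    -- Age of each statistics object:
    SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
    FROM sys.stats
    WHERE object_id = OBJECT_ID('dbo.MyTable');
    -- Common rule of thumb: REORGANIZE at 5-30% fragmentation, REBUILD above 30%.
    ALTER INDEX ALL ON dbo.MyTable REBUILD;
    UPDATE STATISTICS dbo.MyTable;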
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • Tables for Collection, dispute and credit management

    Hi SAP Gurus,
I would appreciate it if anyone could provide the list of tables for collection, dispute, and credit management. Thanks!
    Regards,
    aj

I think you mean the tables for FSCM.
    The easiest way to find them is to run SE16 on FDM* (that covers collections and the dispute integration; if I recall correctly, the dispute case data itself sits in the SCMG* case-management tables, and SAP Credit Management tables start with UKM*):
FDM_AR_WRITEOFFS               FSCM-DM: Automatic Write-Offs Not Executed
    FDM_BUFFER                     Cluster for Decoupling in 1-System Scenario
    FDM_BW_INV_DELTA               Delta Queue for BI Invoice Extractor
    FDM_COLL_CCOLOAD               Company Codes for which Initial Load Performed
    FDM_COLL_CFIELD                FSCM-COL: Relevant Fields for Document Changes
    FDM_COLL_COMPCOD               FSCM-COL: Active Company Codes of Collections Management
    FDM_COLL_DUNNLEV               Harmonized Dunning Levels
    FDM_COLL_LASTPAY               Last Payments of Business Partner
    FDM_COLL_LTRIG                 Missing Entries of Table FDM_COLL_STRIG
    FDM_COLL_SFIELD                FSCM-COL: Relevant Fields for Document Changes
    FDM_COLL_STRIG                 FSCM-COL: Control of Trigger Update in TROBJ
    FDM_COLL_TROBJ                 FSCM-COL: Trigger Table for Collections Management
    FDM_CONTACT_BUF                Personalization of Contact Person Data
    FDM_DCOBJ                      FSCM-DM Integration: Disputed Objects
    FDM_DCPROC                     FSCM-DM Integration: Dispute Case Processes
    FDM_P2P_ATTR                   Attributes of Promise to Pay; Required for ...
    FDM_PERSONALIZE                Personalization of Collections Management
    FDM1                           Cash Management & Forecast: Line Items of ...
    FDM2                           Cash Management Line Items from MM Purchasing
    FDMV                           Cash Planning Line Items of Earmarked Funds
    Hope this helps, award points if useful.

  • Supervisor and Subordinate Report

    Hi,
I am trying to create an HR report where one can search for an employee (who is a supervisor) and have the report return all the subordinates of that employee. I've looked into the CONNECT BY clause but have not had much luck.
    Any input would be appreciated.
    Thanks

    SELECT
    LPAD(' ',10*(LEVEL-1)) || peo.full_name name,
    org.name assignment,
    job.name job,
    loc.location_code location,
    asg.person_id person_id
    FROM
    per_all_assignments_f asg,
    per_all_people_f peo,
    hr_locations_all loc,
    hr_all_organization_units_tl org,
    per_jobs_tl job
    WHERE 1=1
    AND peo.person_id = asg.person_id
    AND NVL(peo.effective_end_date,TO_DATE('01-01-2200','dd-mm-yyyy')) > SYSDATE
    AND NVL(asg.effective_end_date,TO_DATE('01-01-2200','dd-mm-yyyy')) > SYSDATE
    AND asg.assignment_status_type_id = 1
    AND loc.location_id = asg.location_id
    AND org.organization_id = asg.organization_id
    AND job.job_id = asg.job_id
    START WITH 1=1
    AND asg.person_id = 408
    AND NVL(peo.effective_end_date,TO_DATE('01-01-2200','dd-mm-yyyy')) > SYSDATE
    AND NVL(asg.effective_end_date,TO_DATE('01-01-2200','dd-mm-yyyy')) > SYSDATE
    AND peo.employee_number <> 'iExpense_Admin'
    CONNECT BY PRIOR
    asg.person_id = asg.supervisor_id
    AND NVL(peo.effective_end_date,TO_DATE('01-01-2200','dd-mm-yyyy')) > SYSDATE
    AND NVL(asg.effective_end_date,TO_DATE('01-01-2200','dd-mm-yyyy')) > SYSDATE
    AND peo.employee_number <> 'iExpense_Admin'
    ORDER SIBLINGS BY
    peo.full_name,
    org.name
NOTES: The line "AND asg.person_id = 408" is where you set the id of the supervisor. The lines with the NVL()s prevent duplication of people with multiple records for the same person. (There are other ways to do this, but this works fine for this example.) The line "AND peo.employee_number <> 'iExpense_Admin'" exists to prevent infinite loops in some setups using iExpense. A minimal version of the hierarchical walk follows below.
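If the full query is hard to digest, the hierarchical core is just the following (illustrated on Oracle's classic SCOTT.EMP demo table):
    SELECT LPAD(' ', 2*(LEVEL-1)) || ename AS name, empno, mgr
    FROM emp
    START WITH empno = 7839          -- the supervisor you searched for
    CONNECT BY PRIOR empno = mgr     -- walk down from manager to subordinates
    ORDER SIBLINGS BY ename;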

  • Which one is the best way to collect config and performance details in Azure?

    Hi ,
I want to collect both configuration and performance information for the cloud service, virtual machine, and web role. I am going to collect all these details using Java, so please suggest which one is the best way:
    1) REST API
    2) Azure SDK for java
    Regards
    Rathidevi

    Hi,
    There are four main tasks to use Azure Diagnostics:
    Setup WAD
    Configuring data collection
    Instrumenting your code
    Viewing data
The original Azure SDK 1.0 included functionality to collect diagnostics and store them in Azure storage, collectively known as Azure Diagnostics (WAD). This software, built upon the Event Tracing for Windows (ETW) framework, fulfills two design requirements introduced by the Azure scale-out architecture:
    Save diagnostic data that would be lost during a reimaging of the instance.
    Provide a central repository for diagnostics from multiple instances.
    After including Azure Diagnostics in the role (ServiceConfiguration.cscfg and ServiceDefinition.csdef), WAD collects diagnostic data from all the instances of that particular role. The diagnostic data can be used for debugging and troubleshooting, measuring performance, monitoring resource usage, traffic analysis and capacity planning, and auditing. Transfers to an Azure storage account for persistence can be either scheduled or on-demand.
    To know more about Azure Diagnostics, please refer to the article below (Section: Designing More Supportable Azure Services > Azure Diagnostics):
    https://msdn.microsoft.com/en-us/library/azure/hh771389.aspx?f=255&MSPPError=-2147217396
    https://msdn.microsoft.com/en-us/library/azure/dn186185.aspx
    https://msdn.microsoft.com/en-us/library/azure/gg433048.aspx
    Hope this helps !
    Regards,
    Sowmya

  • DAM: integrate/harmonize Collections, Keywords and IPTC location

    Lightroom currently presents the user with two independent methods to assign photos to a hierarchical structure: collections and keywords. Unless I missed something, keywords do everything that collections do, and more.
    In addition, there is one structure that is inherently hierarchical, but is not currently implemented as such: the IPTC (metadata) location. For instance, the city of Amsterdam lies in the province of North Holland in The Netherlands, so entering Amsterdam in the 'City' field implies a province and a country.
    Based on these observations, I have a couple of suggestions:
    * Enhance the usability of collections with a few of the keyword features: listing all assigned collections upon selection, and dragging collections onto photos to assign.
    * Allow users to convert collection trees to keyword trees, and possibly back (convenient for import/export, or changing ones mind as to what paradigm to use).
* Allow users to either export a keyword tree to the IPTC location fields, or to make a dynamic link between the two (i.e. updating the keyword would immediately affect the location data). This feature would also require an 'exclusivity switch', see below.
    * Introduce an 'exclusivity switch' for keywords or collections, implying that you may only assign *one* keyword in its sub-tree to any particular image. This makes sense for location, because an image was made in one particular location, but it can also be used in other contexts. For example, to indicate where the latest version of a file has been backed up to, or what the status (imported, selected, processed, finished) of a file is.
    * Introduce dynamic collections *and* dynamic keywords. I have read things about query-based dynamic collections coming to Lightroom, but please also create 'dynamic keywords'. This would allow keywords to remain the catch-all organization structure, instead of dividing functionality across collections and keywords.
    I think these features could make a significant contribution to Lightroom's DAM capabilities. If implemented alongside improved search capabilities, they would allow me (and possibly others) to stop using my stand-alone DAM application.
    Simon

    John, I had forgotten the fact that keywords do not have any output settings associated with them, unlike collections. I support your idea of indicating which collections have these settings attached to them.
    Still, except for this point, keywords can currently be made to do the same things that collections do. You can even keep them private by choosing not to export certain keywords. Note that even though this is the case, I do not ask for the abolishment of collections per se, because it is convenient to maintain the *conceptual* difference (in addition to the output metadata in collections).
    On the locations, I don't think the 'problem' is solved simply by exporting the location data to a keyword tree. It's the other direction that would greatly speed up the workflow. For example, dragging Amsterdam (potentially one of many Amsterdams) onto an image, would automatically assign the higher level fields as well. A metadata preset would indeed do the same, but only after one has created such a preset for every location, which can quickly add up.
    Also, I did not mean to imply that dynamic keywords would be beneficial for keeping track of the location metadata. In general, dynamic categories (collections or keywords) are very useful for saving searches or keeping track of an internal workflow (like: if an image is not in a backup collection, it is not backed up). Specifically, I can imagine someone wanting to couple a search result to an exported keyword. For example, for a personal website or a Flickr account, you may want to tag images with the keyword '5stars' that is automatically generated.
    The way I see it, there is currently no real difference in the logical structures that can be constructed using collections and keywords. The distinctions between the two are in the input or output phase. It would be a shame to see the logical capabilities of collections enhanced with 'smart' collections, whilst leaving keywords behind.
    By the way, I'm all for a scripting interface, but I think that it's best to get the basics implemented in the right way first.
    Simon
    PS - You pointed out the existence of custom fields as a workflow solution. I have no experience with them, and don't have access to Lightroom on this machine, so I'll get back on that later.

  • I created a new collection and put all of my same photos in them as a previous collection, then I changed them to black and white and both collections changed. How do I keep the first collection colour and make the second collection black and white.

    I created a new collection and put all of my same photos in them as a previous collection, then I changed them to black and white and both collections changed. How do I keep the first collection colour and make the second collection black and white.

    For the second collection, choose the option to make virtual copies. Then you can turn the virtual copies to black & white, leaving the first collection in color. If you want file copies of the B & W images you can export copies.

  • I was wondering, when I set up my collections, groups, and metadata in Adobe Bridge CS6, if there was any way to transfer that information to Adobe Bridge CC?

I was wondering, when I set up my collections, groups, and metadata in Adobe Bridge CS6, if there was any way to transfer that information to Adobe Bridge CC?

    Funds cannot be transferred from one Apple ID account to another.
    Try here > Rescue email address and how to reset Apple ID security questions
    If that doesn't help, contact Apple for assistance with your security questions > Contacting Apple for support and service
