COLLECT stmt - counter

Experts,
Basically
DATA: count TYPE i.
works with the COLLECT statement as a counter.
I have a DB structure with a field RKE2_VVQ09 (QUAN type) in 4.6C that was used as a counter field with COLLECT.
However, in ECC 6.0 it no longer seems to work as a counter with COLLECT.
So the DDIC team changed the field in the DB structure from RKE2_VVQ09 (QUAN type) to CHAR20, but that does not seem to work as a counter with COLLECT either (I did not test it).
So could you please let me know whether RKE2_VVQ09 (QUAN type) will work with COLLECT in ECC?
Or do I need to change it to INT4?
I cannot change the DB structure and test it myself, which is why I am asking for your help.
Your help will be appreciated.
Thanks in advance.

sam kumar wrote:
> I cannot change the DB structure and test it myself, which is why I am asking for your help.
You don't have to change the DB structure. You could simply create an internal table that has fields of both types and see if either works.
Rob
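
A quick way to try Rob's suggestion, as a minimal sketch (the names below are made up; MENGE_D is used only as a stand-in for the QUAN data element behind RKE2_VVQ09):
DATA: BEGIN OF ls_test,
        matnr   TYPE matnr,     " character field -> part of the default key
        cnt_i   TYPE i,         " INT4 counter
        cnt_qty TYPE menge_d,   " QUAN-typed counter (stand-in for RKE2_VVQ09)
      END OF ls_test.
DATA lt_test LIKE STANDARD TABLE OF ls_test.
ls_test-matnr   = 'MAT1'.
ls_test-cnt_i   = 1.
ls_test-cnt_qty = 1.
COLLECT ls_test INTO lt_test.
COLLECT ls_test INTO lt_test.   " same key: both numeric fields should now hold 2
LOOP AT lt_test INTO ls_test.
  WRITE: / ls_test-matnr, ls_test-cnt_i, ls_test-cnt_qty.
ENDLOOP.
If the QUAN-typed field sums up here, the COLLECT statement itself is not the problem; a CHAR20 field, on the other hand, becomes part of the default key and is never summed by COLLECT.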

Similar Messages

  • Problem in collect stmt

    hi all,
    I am writing a report that shows opening stock, closing stock, and movement-wise stock.
    I can do all of this for one material, but when I try it for more than one material, the value of the last material overwrites the others.
    I am using the COLLECT statement to sum across all movements.
    Please help me.

    hi all,
    please see my code; it looks like this:
    TABLES : MSEG , MKPF , MAKT , MBEW.
    DATA : BEGIN OF ITAB OCCURS 0,
            LBKUM LIKE MSEG-LBKUM,
            LBKUM1 LIKE MSEG-LBKUM,
            MAKTX LIKE MAKT-MAKTX,
            LGORT LIKE MSEG-LGORT,
            BWART LIKE MSEG-BWART,
            ZEILE LIKE MSEG-ZEILE,
            MENGE LIKE MSEG-MENGE,
            MEINS LIKE MSEG-MEINS,
            MATNR LIKE MSEG-MATNR,
            WERKS LIKE MSEG-WERKS,
            SHKZG LIKE MSEG-SHKZG,
            MBLNR LIKE MKPF-MBLNR,
            BUDAT LIKE MKPF-BUDAT,
            SIGN(2),
            121_122 LIKE MSEG-MENGE,
    END OF ITAB.
    DATA : BEGIN OF ITAB_TOT OCCURS 0,
            MATNR LIKE MSEG-MATNR,
            LBKUM1 LIKE MSEG-LBKUM,
            TOTAL1 LIKE MSEG-MENGE,
            CLO TYPE P DECIMALS 3,
            121_122 LIKE MSEG-MENGE,
    END OF ITAB_TOT.
    PARAMETERS : WERKS LIKE MSEG-WERKS DEFAULT 'BRHP' OBLIGATORY.
    SELECT-OPTIONS : MATNR FOR MSEG-MATNR OBLIGATORY.
    SELECT-OPTIONS: DAT  FOR MKPF-BUDAT.
    SELECT-OPTIONS: LGORT FOR MSEG-LGORT.
    START-OF-SELECTION.
      SELECT DISTINCT  MSEG~MATNR
             MSEG~LGORT
             MSEG~BWART
             MSEG~ZEILE
             MSEG~MENGE
             MSEG~MEINS
             MSEG~WERKS
            MSEG~LBKUM
             MSEG~SHKZG
             MKPF~MBLNR
             MKPF~BUDAT
             INTO CORRESPONDING FIELDS OF TABLE ITAB
             FROM MSEG
             INNER JOIN MKPF ON MSEG~MBLNR = MKPF~MBLNR
             WHERE MSEG~MATNR IN MATNR
            AND MSEG~LGORT IN LGORT
             AND MSEG~WERKS EQ WERKS
             AND mKPF~BUDAT IN DAT
             AND MSEG~ZEILE = 1
             order by mkpf~MBLNR.
    LOOP AT ITAB.
        IF ITAB-SHKZG = 'S'.
          ITAB-SIGN = '+'.
          MODIFY ITAB.
        ELSE.
          ITAB-SIGN = '-'.
          MODIFY ITAB.
        ENDIF.
      ENDLOOP.
    SORT ITAB BY MATNR.
    LOOP AT ITAB.
        IF ITAB-BWART = '121' OR ITAB-BWART = '122'.
          SELECT MENGE INTO (ITAB-121_122) FROM MSEG
            WHERE MATNR = ITAB-MATNR
              AND MBLNR = ITAB-MBLNR
              AND LGORT = ITAB-LGORT
              AND WERKS = ITAB-WERKS
              AND ZEILE = ITAB-ZEILE.
            MODIFY ITAB.
          ENDSELECT.
        ENDIF.
         LOOP AT ITAB.
        IF ITAB-SHKZG = 'H'.
            ITAB-121_122 = ITAB-121_122 * -1.
            MODIFY ITAB.
        ELSE.
            ITAB-121_122 = ITAB-121_122.
            MODIFY ITAB.
         ENDIF.
         ENDLOOP.
      ENDLOOP.
    LOOP AT ITAB.
    SELECT LBKUM INTO (ITAB-LBKUM) FROM MSEG
      WHERE MATNR = ITAB-MATNR
        AND MBLNR = ITAB-MBLNR
        AND WERKS = ITAB-WERKS
        AND ZEILE = 1.
      MODIFY ITAB.
    ENDSELECT.
    ENDLOOP.
    SORT ITAB BY  matnr mblnr.
    LOOP AT ITAB.
    IF ITAB[] IS NOT INITIAL.
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING MATNR .
    MOVE ITAB-lBKUM TO ITAB-LBKUM1.
    MODIFY ITAB.
    *APPEND ITAB_TOT1.
    ENDIF.
    ENDLOOP.
    LOOP AT ITAB .
      MOVE-CORRESPONDING ITAB TO ITAB_TOT.
      COLLECT ITAB_TOT.
    ENDLOOP.
    loop at itab_tot.
    write : / itab_tot-matnr , itab_tot-121_122 , itab_tot-lbkum1.
    endloop.
    I am not getting the correct value of 121_122.
    What should I do?
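    For what it's worth, the usual COLLECT pattern is to keep the grouping fields (material, movement type) character-like so that they form the default key, and let COLLECT add up only the quantity field. A minimal sketch along those lines (WA_SUM and IT_SUM are illustrative names; it assumes the ITAB filled above):
    DATA: BEGIN OF wa_sum,
            matnr LIKE mseg-matnr,     " key part: material
            bwart LIKE mseg-bwart,     " key part: movement type
            menge LIKE mseg-menge,     " numeric part: summed per key
          END OF wa_sum.
    DATA it_sum LIKE STANDARD TABLE OF wa_sum.
    LOOP AT itab.
      CLEAR wa_sum.
      wa_sum-matnr = itab-matnr.
      wa_sum-bwart = itab-bwart.
      wa_sum-menge = itab-menge.
      IF itab-shkzg = 'H'.             " credit postings reduce the total
        wa_sum-menge = wa_sum-menge * -1.
      ENDIF.
      COLLECT wa_sum INTO it_sum.      " adds MENGE into the line with the same MATNR/BWART
    ENDLOOP.
    Because MATNR and BWART are key fields here, the totals for one material cannot overwrite those of another.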

  • Collections and counts

    Greetings!
    I have a procedure in the following format:
    PROCEDURE test as
    type p1 is table of number;
    tab_t1 p1;
    cursor c1 is
    (select ....... from table1);
    begin
    open c1;
    loop
    fetch c1 BULK COLLECT INTO tab_t1... limit 1000;
    FORALL i in 1..tab_t1.count
    insert into table2 values tab_t1(i);
    commit;
    exit when c1%notfound;
    end loop;
    msg := 'Count of records inserted ' || to_char(sql%rowcount);
    test_util.write_to_log (handle, msg);
    close c1;
    commit;
    end;
    My question is: where should sql%rowcount be read when we use collections (after closing the cursor, before end loop, or ...) to capture the count of all the records that have been inserted so far? Thank you,
    Lakshmi

    Ramana:
    There is one major difference between your version and Adrian's when accumulating SQL%ROWCOUNT for each of the INSERTs: Adrian uses a count of 0 in the collection to exit, and you are using the cursor's %NOTFOUND.
    I created two procedures: p, based on your logic, and p1, based on Adrian's. I also set the limit to something less than the number of records in the table, and not an even multiple of the number of records. So the procedures are:
    CREATE or replace PROCEDURE p AS
       TYPE array_tp IS TABLE OF t%ROWTYPE;
       l_array array_tp;
       CURSOR c IS
          SELECT * FROM t;
       l_cnt1 NUMBER := 0;
       l_cnt2 NUMBER := 0;
       l_cnt3 NUMBER := 0;
    BEGIN
       OPEN c;
       LOOP
          FETCH c BULK COLLECT INTO l_array LIMIT 9;
          l_cnt1 := c%ROWCOUNT;
          FORALL i IN 1 .. l_array.COUNT
             INSERT INTO t1 VALUES l_array(i);
          EXIT WHEN c%NOTFOUND;
          l_cnt2 := l_cnt2 + SQL%rowcount;
       END LOOP;
       l_cnt3 := c%ROWCOUNT;
       CLOSE c;
       DBMS_OUTPUT.Put_Line('Cursor rowcount after fetch gives: '||l_cnt1);
       DBMS_OUTPUT.Put_Line('Accumulate sql rowcount after insert gives: '||l_cnt2);
       DBMS_OUTPUT.Put_Line('Cursor rowcount before close gives: '||l_cnt3);
    END;
    CREATE or replace PROCEDURE p1 AS
       TYPE array_tp IS TABLE OF t%ROWTYPE;
       l_array array_tp;
       CURSOR c IS
          SELECT * FROM t;
       l_cnt1 NUMBER :=0;
       l_cnt2 NUMBER :=0;
       l_cnt3 NUMBER :=0;
    BEGIN
       OPEN c;
       LOOP
          FETCH c BULK COLLECT INTO l_array LIMIT 9;
          EXIT WHEN l_array.COUNT = 0;
          l_cnt1 := c%ROWCOUNT;
          FORALL i IN 1 .. l_array.COUNT
             INSERT INTO t1 VALUES l_array(i);
          l_cnt2 := l_cnt2 + SQL%rowcount;
       END LOOP;
       l_cnt3 := c%ROWCOUNT;
       CLOSE c;
       DBMS_OUTPUT.Put_Line('Cursor rowcount after fetch gives: '||l_cnt1);
       DBMS_OUTPUT.Put_Line('Accumulate sql rowcount after insert gives: '||l_cnt2);
       DBMS_OUTPUT.Put_Line('Cursor rowcount before close gives: '||l_cnt3);
    END;
    Note that the line l_cnt3 := c%ROWCOUNT after the LOOP but before closing the cursor would be my preference as a way to get the count if you really needed it. Now running both I get:
    SQL> SELECT COUNT(*) FROM t;
      COUNT(*)
            40
    SQL> SELECT COUNT(*) FROM t1;
      COUNT(*)
             0
    SQL> exec p
    Cursor rowcount after fetch gives: 40
    Accumulate sql rowcount after insert gives: 36
    Cursor rowcount before close gives: 40
    PL/SQL procedure successfully completed.
    SQL> SELECT COUNT(*) FROM t1;
      COUNT(*)
            40
    SQL> ROLLBACK;
    SQL> exec p1;
    Cursor rowcount after fetch gives: 40
    Accumulate sql rowcount after insert gives: 40
    Cursor rowcount before close gives: 40
    PL/SQL procedure successfully completed.
    SQL> SELECT COUNT(*) FROM t1;
      COUNT(*)
            40
    Your version using SQL%ROWCOUNT misses the last partial set of rows because c%NOTFOUND is TRUE after fetching the last 4 rows from the table, so you exit the loop prior to accumulating. Moving the EXIT WHEN after assigning SQL%ROWCOUNT would give the correct results.
    John

  • Collection method COUNT and performance

    Hi,
    I use associative arrays in my application, this kind:
    TYPE ADRESSETT IS TABLE OF mig.ADRESSE%ROWTYPE INDEX BY BINARY_INTEGER;
    A good many times I need the count of elements in these arrays.
    Of course I can use the COUNT method for this purpose.
    However could it be faster to maintain a simple counter which is incremented and decremented on inserting/deleting elements
    from the array?
    How expensive is the COUNT method? Does the method really calculate the count anew every time?
    Thomas

    user447000 wrote:
    Hi,
    I use associative arrays in my application, this kind:
    TYPE ADRESSETT IS TABLE OF mig.ADRESSE%ROWTYPE INDEX BY BINARY_INTEGER;
    A good many times I need the count of elements in these arrays.
    Of course I can use the COUNT method for this purpose.
    However could it be faster to maintain a simple counter which is incremented and decremented on inserting/deleting elements
    from the array?
    How expensive is the COUNT method? Does the method really calculate the count anew every time?
    Thomas
    If you know anything about arrays/collections/linked lists etc. (all the sort of stuff taught in the basics of programming in the first year of university), then you'll know that retrieving the count of the number of 'elements' in those things is part of what they are about. The count method doesn't have to 'calculate' the count every time, as it is held in the collection's internal structure and internally maintained every time something is added/deleted. It's not going to be an expensive operation to perform, as it just returns an internal value that is already known.
    Of course, it's far easier to avoid collections altogether as they use expensive PGA memory, and instead process your data using SQL directly against the database tables.

  • Collection classes, count

    I am attempting to count occurrences of a string being added in a collection class. This is what I have:
    class CountOccurrencesListener implements ActionListener
    public void actionPerformed(ActionEvent event)
    String target = targetText.getText( );
    feedback.append(target + " occurs ");
    int answer = countOccurrences(answer);
    feedback.append(answer + " times.\n");
    public int countOccurrences(String target, int answer)
    int index;
    answer = 0;
    for (index = 0; index < manyItems; index++)
    if (target == data[index])
    answer++;
    return answer;
    It should count how many times a string was added from the text field, targetText, but I get this error message:
    "BagApplet.java": Error #: 300 : method countOccurrences(int) not found
    Any ideas on what I am doing wrong?

    To cure that error, change "int answer = countOccurrences(answer)" to:
    int answer = countOccurrences(target,answer);
    You either left out some code in your post or you will find additional errors.

  • Can I get my music collection (play counts and playlists inc.) from my iPad to my PC?

    I had a problem with iTunes where I couldn't add songs to the library because I didn't have sufficient permissions to edit my iTunes library (even though I did). After a few days of tweaking administrator permissions and the like, I got really annoyed and tried re-installing iTunes to see if it would make it go away. Whilst the iTunes64Setup.exe file was downloading, I was fiddling with the .itl and .xml files in my iTunes folder and when I re-installed iTunes I had lost all my play counts and playlists. When iTunes opened, I assumed it must have overwritten the two files because when I looked at them in Windows Explorer they were both a lot smaller in size.
    Obviously, I didn't sync my iPad or iPhone after this had happened, so I still have the playlists and play counts stored on them. Is there any way I can:
    a) Recover a previous version of the .xml and .itl files on my laptop and use them to restore my plays?
    or
    b) Use iTunes Match to restore them onto my laptop?
    I'm not sure about b). When you subscribe to Match, does it analyze your current library and then save it to the Cloud so you can redownload it?
    I'm going to buy the new iPhone when it is released in the next couple of months so I'm willing to jailbreak my iPhone 4 if need be.
    Thanks for any help.

    Empty/corrupt library after upgrade/crash
    Hopefully it's not been too long since you last upgraded iTunes, in fact if you get an empty/incomplete library immediately after upgrading then with the following steps you shouldn't lose a thing or need to do any further housekeeping. In the Previous iTunes Libraries folder should be a number of dated iTunes Library files. Take the most recent of these and copy it into the iTunes folder. Rename iTunes Library.itl as iTunes Library (Corrupt).itl and then rename the restored file as iTunes Library.itl. Start iTunes. Should all be good, bar any recent additions to or deletions from your library.
    See iTunes Folder Watch for a tool to catch up with any changes since the backup file was created.
    When you get it all working make a backup!
    Or see Recover your iTunes library from your iPod or iOS device.
    tt2

  • Collect file count and add to CSV file.

    I have this script, efficiently crafted by Jacques Rioux, and I now want to do a little more with it.
    What it currently does is look at a select number of folders on my desktop. It then looks at the keyword information and returns the results to a csv file.
    It looks for all the photographs (shot by Matthew, edited by Matthew, etc.), with the date appended to the start of the line, and the next time the script is run it adds the new data below the last.
    The result looks like this
    19/12/2012,255,412,37,68
    27/12/2012,197,342,16,26
    From the first line you can see that on the 19th of December 2012 I shot 255 images.
    Now what I would like it to do is:
    a) Specifically look in the folders on the desktop whose names begin with BH, BU, DA, DI, DO, FR, IN, NO, MA, TM, WA, PR, SE (these folders may or may not exist at the time, but are the only folders it should look at)
    b) also do a file count of the contents of the above individual folders and append it to the csv file. Again a folder may not exist. Where it doesn't exist the file count must = 0 so that it can then be added to the CSV file.
    This is how I hope the line in the CSV file will look:
    19/12/2012,255,412,37,68, 5,3,20,25,60,101,25,0,85,5,40,0,0
    From the line above you can see that the folders NO, PR, and SE were all non-existent and therefore a 0 was written in their place in the CSV file.
    Below is the working script that looks for the keywords.
    set spotlightqueryList to {"Shot by Matthew", "Editted by Matthew", "Shot by Shah", "Editted by Shah"}
    set thefolders to {"Desktop"}
    set thekind to "PSD"
    set csvFileName to "ProductivityLog.csv"
    set tHome to path to home folder as string
    set tc to count spotlightqueryList
    set theseCount to {}
    repeat tc times
              set end of theseCount to 0
    end repeat
    repeat with i in thefolders
              set thepath to my existsItem(tHome & i)
              if thepath is not "" then -- exists
                        repeat with j from 1 to tc
                                  set tQuery to item j of spotlightqueryList
                                  do shell script "mdfind -onlyin " & thepath & " " & tQuery & " " & thekind & " | wc -l" -- wc return the number of lines
                                  set item j of theseCount to (item j of theseCount) + (the result as integer) -- add the number of lines
                        end repeat
              end if
    end repeat
    set csvPath to "DCKGEN:Brands:Zoom:Online Photography:" & csvFileName
    set oTID to text item delimiters
    set text item delimiters to "," -- CSV delimiter
    set thisLine to (theseCount as text) -- convert list to text, each number is separated by comma
    set text item delimiters to oTID
    tell (current date) to set tDate to short date string
    set beginning of theseCount to tDate -- insert the date (first column)
    set csvPath to "DCKGEN:Brands:Zoom:Online Photography:" & csvFileName
    set oTID to text item delimiters
    set text item delimiters to "," -- CSV delimiter
    set thisLine to (theseCount as text) -- convert list to text, each number is separated by comma
    set text item delimiters to oTID
    --- append this line to CSV file
    do shell script "echo " & (quoted form of thisLine) & " >>" & quoted form of POSIX path of csvPath
    on existsItem(f)
              try
                        return quoted form of POSIX path of (f as alias) -- exists
              end try
              return "" -- else not exists
    end existsItem
    (* just a way to visually see it working
    set dialog to "Matt Shot: \"" & item 1 of theseCount & "\"" & return & return & "Matt Edit: \"" & item 2 of theseCount & "\"" & return & return & "Shah Shot: \"" & item 3 of theseCount & "\"" & return & return & "Shah Edit: \"" & item 4 of theseCount & "\"" & return & return
    display dialog dialog
    *)
    This is what I began to write, but I really have no idea how to write the data into the CSV file, and I was also struggling to get a non-existent folder to count as 0.
    tell application "Finder"
              set folderA to (get first folder of desktop whose name starts with "BH")
              set folderB to (get first folder of desktop whose name starts with "Bu")
              set folderC to (get first folder of desktop whose name starts with "Da")
              set folderD to (get first folder of desktop whose name starts with "DI")
              set folderE to (get first folder of desktop whose name starts with "Do")
              set folderF to (get first folder of desktop whose name starts with "Fr")
              set folderG to (get first folder of desktop whose name starts with "In")
              set folderH to (get first folder of desktop whose name starts with "Ma")
              if (exists (get first folder of desktop whose name starts with "No")) is true then
                        set folderI to (get first folder of desktop whose name starts with "No")
              else
                        set folderI to "0"
                        set folderJ to (get first folder of desktop whose name starts with "To")
                        set folderK to (get first folder of desktop whose name starts with "Wa")
                        if (exists (get first folder of desktop whose name starts with "SE")) is true then
                                  set folderL to (get first folder of desktop whose name starts with "SE")
                        else
                                  set folderL to "0"
                                  if (exists (get first folder of desktop whose name starts with "PR")) is true then
                                            set folderM to (get first folder of desktop whose name starts with "PR")
                                  else
                                            set folderM to "0"
                                            set folderM to (get first folder of desktop whose name starts with "PR")
                                  end if
                        end if
              end if
              tell application "System Events"
                        set contentsA to (number of files in folderA)
                        set contentsB to (number of files in folderB)
                        set contentsC to (number of files in folderC)
                        set contentsD to (number of files in folderD)
                        set contentsE to (number of files in folderE)
                        set contentsF to (number of files in folderF)
                        set contentsG to (number of files in folderG)
                        set contentsH to (number of files in folderH)
                        set contentsI to (number of files in folderI)
                        set contentsJ to (number of files in folderJ)
                        set contentsK to (number of files in folderK)
                        set contentsL to (number of files in folderL)
                        set contentsM to (number of files in folderM)
              end tell
    end tell
    I hope someone can help me compile the remaining data.
    Many thanks
    Matt

    OK, I've done my homework and I have been able to get a lot closer. I just need to make the search specific to a number of folders on the desktop.
    Line 7 explains how I would like it to search.
    set spotlightqueryList to {"Shot_by_Matthew", "Editted_by_Matthew", "Shot_by_Shah", "Editted_by_Shah"}
    set spotlightqueryList2 to {"AL70", "BH70", "BH70", "BU40", "ES20", "DV25", "DJ30", "RA30", "FR10", "GT55", "MA65", "MB65", "MC65", "FI65", "MF65", "MH65", "NN_", "TM15", "WA35", "PR_", "SE_"}
    set thefolders to {"Desktop"}
    --Here I need to limit the search so that it only looks in folders of the desktop whose name begins with "BH", "BU", "DA", "DI", "DO", "FR", "IN", "MA", "NO", "TM", "WA", "PR", "SE"
    set thekind to "PSD"
    set csvFileName to "ProductivityLog.csv"
    set tHome to path to home folder as string
    set tc to count spotlightqueryList
    set theseCount to {}
    repeat tc times
              set end of theseCount to 0
    end repeat
    set tc2 to count spotlightqueryList2
    set theseCount2 to {}
    repeat tc2 times
              set end of theseCount2 to 0
    end repeat
    repeat with i in thefolders
              set thepath to my existsItem(tHome & i)
              if thepath is not "" then -- exists
                        repeat with j from 1 to tc
                                  set tQuery to item j of spotlightqueryList
                                  do shell script "mdfind -onlyin " & thepath & " " & tQuery & " " & thekind & " | wc -l" -- wc return the number of lines
                                  set item j of theseCount to (item j of theseCount) + (the result as integer) -- add the number of lines
                        end repeat
              end if
    end repeat
    repeat with i2 in thefolders
              set thepath2 to my existsItem2(tHome & i2)
              if thepath2 is not "" then -- exists
                        repeat with j2 from 1 to tc2
                                  set tQuery2 to item j2 of spotlightqueryList2
                                  do shell script "mdfind -onlyin " & thepath2 & "  -name " & tQuery2 & " " & thekind & " | wc -l" -- wc return the number of lines
                                  set item j2 of theseCount2 to (item j2 of theseCount2) + (the result as integer) -- add the number of lines
                        end repeat
              end if
    end repeat
    set csvPath to "DCKGEN:Brands:Zoom:Online Photography:" & csvFileName
    set oTID to text item delimiters
    set text item delimiters to "," -- CSV delimiter
    set thisLine to (theseCount as text) -- convert list to text, each number is separated by comma
    set thisLine2 to (theseCount2 as text) -- convert list to text, each number is separated by comma
    set text item delimiters to oTID
    tell (current date) to set tDate to short date string
    set beginning of theseCount to tDate -- insert the date (first column)
    set csvPath to "DCKGEN:Brands:Zoom:Online Photography:" & csvFileName
    set oTID to text item delimiters
    set text item delimiters to "," -- CSV delimiter
    set thisLine to (theseCount as text) -- convert list to text, each number is separated by comma
    set thisLine2 to (theseCount2 as text) -- convert list to text, each number is separated by comma
    set text item delimiters to oTID
    --- append this line to CSV file
    do shell script "echo " & (quoted form of thisLine) & (quoted form of thisLine2) & " >>" & quoted form of POSIX path of csvPath
    on existsItem(f)
              try
                        return quoted form of POSIX path of (f as alias) -- exists
              end try
              return "" -- else not exists
    end existsItem
    on existsItem2(f)
              try
                        return quoted form of POSIX path of (f as alias) -- exists
              end try
              return "" -- else not exists
    end existsItem2
    (* just a way to visually see it working
    set dialog to "Matt Shot: \"" & item 1 of theseCount & "\"" & return & return & "Matt Edit: \"" & item 2 of theseCount & "\"" & return & return & "Shah Shot: \"" & item 3 of theseCount & "\"" & return & return & "Shah Edit: \"" & item 4 of theseCount & "\"" & return & return
    display dialog dialog
    *)

  • Collect stmt

    hi
    COLLECT adds up all the numeric fields based on the non-numeric fields, but I want to add up only a single numeric field. How can I do that? Please help me.

    Hi Kiran,
    go through the following documentation.
    COLLECT [wa INTO] itab.
    Addition
    ... SORTED BY f
    Effect
    COLLECT is used to create unique or compressed datasets. The key fields are the default key fields of the internal table itab.
    If you use only COLLECT to fill an internal table, COLLECT makes sure that the internal table does not contain two entries with the same default key fields.
    If, besides its default key fields, the internal table contains number fields (see also ABAP/4 number types ), the contents of these number fields are added together if the internal table already contains an entry with the same key fields.
    If the default key of an internal table processed with COLLECT is blank, all the values are added up in the first table line.
    If you specify wa INTO , the entry to be processed is taken from the explicitly specified work area wa . If not, it comes from the header line of the internal table itab .
    After COLLECT , the system field SY-TABIX contains the index of the - existing or new - table entry with default key fields which match those of the entry to be processed.
    Notes
    COLLECT can create unique or compressed datasets and should be used precisely for this purpose. If uniqueness or compression are unimportant, or two values with identical default key field values could not possibly occur in your particular task, you should use APPEND instead. However, for a unique or compressed dataset which is also efficient, COLLECT is the statement to use.
    If you process a table with COLLECT , you should also use COLLECT to fill it. Only by doing this can you guarantee that
    the internal table will actually be unique or compressed, as described above and
    COLLECT will run very efficiently.
    If you use COLLECT with an explicitly specified work area, it must be compatible with the line type of the internal table.
    Example
    Compressed sales figures for each company
    DATA: BEGIN OF COMPANIES OCCURS 10,
            NAME(20),
            SALES TYPE I,
          END   OF COMPANIES.
    COMPANIES-NAME = 'Duck'.  COMPANIES-SALES = 10.
    COLLECT COMPANIES.
    COMPANIES-NAME = 'Tiger'. COMPANIES-SALES = 20.
    COLLECT COMPANIES.
    COMPANIES-NAME = 'Duck'.  COMPANIES-SALES = 30.
    COLLECT COMPANIES.
    The table COMPANIES now has the following appearance:
    NAME SALES
    Duck 40
    Tiger 20
    Addition
    ... SORTED BY f
    Effect
    COLLECT ... SORTED BY f is obsolete and should no longer be used. Use APPEND ... SORTED BY f which has the same meaning.
    Note
    Performance
    The cost of a COLLECT in terms of performance increases with the width of the default key needed in the search for table entries and the number of numeric fields with values which have to be added up, if an entry is found in the internal table to match the default key fields.
    If no such entry is found, the cost is reduced to that required to append a new entry to the end of the table.
    A COLLECT statement used on a table which is 100 bytes wide, with a key which is 60 bytes wide and seven numeric fields, costs approx. 50 msn (standardized microseconds).
    Note
    Runtime errors
    COLLECT_OVERFLOW : Overflow in integer field when calculating totals.
    COLLECT_OVERFLOW_TYPE_P : Overflow in type P field when calculating totals.
    Thanks & regards
    Sreenivasulu P

  • Regarding COLLECT stmt usage in an ABAP Program.

    Hi All,
    Could anyone please explain whether the COLLECT statement really hampers the performance of a program to a large extent.
    If it does, please explain how the performance can be improved without using it.
    Thanks & Regards,
    Goutham.

    COLLECT allows you to create unique or summarized datasets. The system first tries to find a table entry corresponding to the table key. (See also Defining Keys for Internal Tables). The key values are taken either from the header line of the internal table itab, or from the explicitly-specified work area wa. The line type of itab must be flat - that is, it cannot itself contain any internal tables. All the components that do not belong to the key must be numeric types ( ABAP Numeric Types).
    Notes
    COLLECT allows you to create a unique or summarized dataset, and you should only use it when this is necessary. If neither of these characteristics are required, or where the nature of the table in the application means that it is impossible for duplicate entries to occur, you should use INSERT [wa INTO] TABLE itab instead of COLLECT. If you do need the table to be unique or summarized, COLLECT is the most efficient way to achieve it.
    If you use COLLECT with a work area, the work area must be compatible with the line type of the internal table.
    If you edit a standard table using COLLECT, you should only use the COLLECT or MODIFY ... TRANSPORTING f1 f2 ... statements (where none of f1, f2, ... may be in the key). Only then can you be sure that:
    -The internal table actually is unique or summarized
    -COLLECT runs efficiently. The check whether the dataset
    already contains an entry with the same key has a constant
    search time (hash procedure).
    If you use any other table modification statements, the check for entries in the dataset with the same key can only run using a linear search (and will accordingly take longer). You can use the function module ABL_TABLE_HASH_STATE to test whether the COLLECT has a constant or linear search time for a given standard table.
    Example
    Summarized sales figures by company:
    TYPES: BEGIN OF COMPANY,
            NAME(20) TYPE C,
            SALES    TYPE I,
          END OF COMPANY.
    DATA: COMP    TYPE COMPANY,
          COMPTAB TYPE HASHED TABLE OF COMPANY
                                    WITH UNIQUE KEY NAME.
    COMP-NAME = 'Duck'.  COMP-SALES = 10. COLLECT COMP INTO COMPTAB.
    COMP-NAME = 'Tiger'. COMP-SALES = 20. COLLECT COMP INTO COMPTAB.
    COMP-NAME = 'Duck'.  COMP-SALES = 30. COLLECT COMP INTO COMPTAB.
    Table COMPTAB now has the following contents:
              NAME    | SALES
              Duck    |   40
              Tiger   |   20

  • Collection inaccessible from Before Header process

    Hi,
    I have a form based on the Matrix Order application. There's a collection-based report with a bunch of text fields that can be updated and a column with a "Remove" item that basically calls a conditional Before Header process to remove the item from the collection. What I want to do is update the collection (with the values that the end-user may have entered in the updateable columns) prior to removing any item. So when the user clicks on "remove", first I do an update_member_attribute and finally I remove the item the user chose. My problem is that the updates are skipped altogether; the code is not even executed, but the removal is. It's like the collection doesn't exist at one point, but exists the next. I added an insert into a dummy table to see how many items the collection had, and it returned zero. However, the delete worked just fine. Can you have a look at my code, to see if I'm missing something obvious? Thanks a lot!
    Here's the code in the Before Header:
    --Here we find out how many rows in the collection
    select count(*) into c from apex_collections where collection_name = 'ORDER';
    insert into table1 (column1) values ('to update '||c); --This inserts "to update 0" into my dummy table.
    --Update collection - nice try!
    FOR x IN 1..APEX_APPLICATION.g_f01.COUNT --This code is skipped; according to the previous select, the collection has 0 rows
    LOOP
    c := x;
    apex_collection.update_member_attribute
    ( p_collection_name => 'ORDER',
    p_seq => APEX_APPLICATION.g_f03 (x),
    p_attr_number => 4,
    p_attr_value => APEX_APPLICATION.g_f01 (x) );
    apex_collection.update_member_attribute
    ( p_collection_name => 'ORDER',
    p_seq => APEX_APPLICATION.g_f03 (x),
    p_attr_number => 6,
    p_attr_value => APEX_APPLICATION.g_f02 (x) );
    END LOOP;
    --This works fine, however!!!
    apex_collection.delete_member(p_collection_name => 'ORDER', p_seq => :P612_SEQ_ID);
    I've tried placing the code in a "On submit" process but it's never executed. I guess that the links on report columns are not considered submits, just redirects or something else.
    Gabe

    There is data, though:
    (Report HTML source, abridged: each row renders the updateable columns as text inputs named f01 and f02, plus a hidden input named f03 carrying the SEQ_ID and a hidden fcs checksum, so the f01/f02/f03 arrays are present in the submitted page.)

  • Using FOR .. LOOP counter in handling of PL/SQL procedures with nest. table

    Hi all!
    I'm learning PL/SQL from Steve Bobrovsky's book (the sample below is from it) and I have a question.
    The procedure in the program below uses an integer variable currentElement to reference the rows of a nested table whose elements are of a %ROWTYPE datatype.
    Meanwhile, the program itself uses a common FOR .. LOOP counter i.
    DECLARE
    TYPE partsTable IS TABLE OF parts%ROWTYPE;
    tempParts partsTable := partsTable();
    CURSOR selectedParts IS
      SELECT * FROM parts ORDER BY id;
    currentPart selectedParts%ROWTYPE;
    currentElement INTEGER;
    PROCEDURE printParts(p_title IN VARCHAR2, p_collection IN partsTable) IS
      BEGIN
       DBMS_OUTPUT.PUT_LINE(' ');
       DBMS_OUTPUT.PUT_LINE(p_title || ' elements: ' || p_collection.COUNT);
       currentElement := p_collection.FIRST;
       FOR i IN 1 .. p_collection.COUNT
       LOOP
        DBMS_OUTPUT.PUT('Element #' || currentElement || ' is ');
         IF tempParts(currentElement).id IS NULL THEN DBMS_OUTPUT.PUT_LINE('an empty element.');
         ELSE DBMS_OUTPUT.PUT_LINE('ID: ' || tempParts(currentElement).id || ' DESCRIPTION: ' || tempParts(currentElement).description);
         END IF;
        currentElement := p_collection.NEXT(currentElement);
       END LOOP;
    END printParts;
    BEGIN
    FOR currentPart IN selectedParts
    LOOP
      tempParts.EXTEND(2);
      tempParts(tempParts.LAST) := currentPart;
    END LOOP;
    printParts('Densely populated', tempParts);
    FOR i IN 1 .. tempParts.COUNT
    LOOP
      IF tempParts(i).id is NULL THEN tempParts.DELETE(i);
      END IF;
    END LOOP;
    FOR i IN 1 .. 50
    LOOP
      DBMS_OUTPUT.PUT('-');
    END LOOP;
    printParts('Sparsely populated', tempParts);
    END;
    /
    When I substituted the INTEGER global variable with such a FOR .. LOOP counter, APEX returned the error "ORA-01403: no data found".
    DECLARE
    TYPE partsTable IS TABLE OF parts%ROWTYPE;
    tempParts partsTable := partsTable();
    CURSOR selectedParts IS
      SELECT * FROM parts ORDER BY id;
    currentPart selectedParts%ROWTYPE;
    PROCEDURE printParts(p_title IN VARCHAR2, p_collection IN partsTable) IS
      BEGIN
       DBMS_OUTPUT.PUT_LINE(' ');
       DBMS_OUTPUT.PUT_LINE(p_title || ' elements: ' || p_collection.COUNT);
       FOR i IN 1 .. p_collection.COUNT
       LOOP
        DBMS_OUTPUT.PUT('Element is ');
         IF tempParts(i).id IS NULL THEN DBMS_OUTPUT.PUT_LINE('an empty element.');
         ELSE DBMS_OUTPUT.PUT_LINE('ID: ' || tempParts(i).id || ' DESCRIPTION: ' || tempParts(i).description);
         END IF;
       END LOOP;
    END printParts;
    BEGIN
    FOR currentPart IN selectedParts
    LOOP
      tempParts.EXTEND(2);
      tempParts(tempParts.LAST) := currentPart;
    END LOOP;
    printParts('Densely populated', tempParts);
    FOR i IN 1 .. tempParts.COUNT
    LOOP
      IF tempParts(i).id is NULL THEN tempParts.DELETE(i);
      END IF;
    END LOOP;
    FOR i IN 1 .. 50
    LOOP
      DBMS_OUTPUT.PUT('-');
    END LOOP;
    printParts('Sparsely populated', tempParts);
    END;
    /
    When I tried to run this code in SQL*Plus, the following output appeared:
    Densely populated elements: 10
    Element is an empty element.
    Element is ID: 1 DESCRIPTION: Fax Machine
    Element is an empty element.
    Element is ID: 2 DESCRIPTION: Copy Machine
    Element is an empty element.
    Element is ID: 3 DESCRIPTION: Laptop PC
    Element is an empty element.
    Element is ID: 4 DESCRIPTION: Desktop PC
    Element is an empty element.
    Element is ID: 5 DESCRIPTION: Scanner
    Sparsely populated elements: 5
    DECLARE
    ERROR at line 1:                                 
    ORA-01403: no data found                         
    ORA-06512: at line 14                            
    ORA-06512: at line 35
    What's wrong in the code (or what have I not understood)? Help please!

    942736 wrote:
    > What's wrong in the code (or what have I not understood)? Help please!
    First code. You have a collection of 10 elements:
    1 - null
    2 - populated
    3 - null
    4 - populated
    5 - null
    6 - populated
    7 - null
    8 - populated
    9 - null
    10 - populated
    Then you delete null elements and have 5 element collection
    2 - populated
    4 - populated
    6 - populated
    8 - populated
    10 - populated
    Now you execute:
    printParts('Sparsely populated', tempParts);
    Inside the procedure you execute:
    currentElement := p_collection.FIRST;
    This assigns currentElement the value 2. Then the procedure loops 5 times (the collection element count is 5). Element 2 exists. Inside the loop the procedure executes:
    currentElement := p_collection.NEXT(currentElement);
    which assigns currentElement values 4,6,8,10 - all existing elements.
    Now second code. Everything is OK until you delete null elements. Again we have:
    2 - populated
    4 - populated
    6 - populated
    8 - populated
    10 - populated
    Again you execute:
    printParts('Sparsely populated', tempParts);
    Now the procedure loops 5 times (i takes the values 1, 2, 3, 4, 5):
    FOR i IN 1 .. p_collection.COUNT
    The very first iteration assigns i the value 1. And since the collection has no element with subscript 1, the procedure raises ORA-01403: no data found.
    SY.

  • Counting TTL pulses at high speed

    Hi all,
    I am using PCI-6221 board with DAQmx to count the number of TTL pulses (which varies in its frequency between 0Hz to 10MHz) at a high speed (200,000 samples/sec.) and I am having a problem when the TTL pulse frequency drops below a certain level.
    I am using CTR0 to generate continuous pulse train at 200kHz frequency to feed to CTR1 Gate input. I verified that the pulse train is being generated fine.
    I am using CTR1 with buffered counting to collect the count for 200,000 samples at a time (a duration of 1 sec.). I got the example code (Cnt-Buf-Cont-ExtClk) and pretty much used it as is.
    CTR1 Gate is coming from CTR0 Out, which is 200kHz pulse train with 50% duty cycle, and CTR1 Source is the TTL signal that I am trying to count. At first, I thought that everything was working fine with the Source signal being at around 5MHz. Then, when I had the Source signal down below about 300kHz, I noticed that the program is taking longer than 1 sec. to collect the same 200k samples. Then, when I got the Source signal down to 0Hz, the program timed out.
    I am guessing that somehow the counter is not reading for the next sample when there has been no change to the count, but I cannot figure out why and how.
    Any information on this and a way to get around would be greatly appreciated.
    Kwang

    One thing you can try is to set the counter input CI.DupCounterPrevention property. This setting filters the input, and it is possible that when CTR0 is slow, many of the values you are counting become zero as well and are filtered out; since they are no longer counted as points, the counter will not collect enough points before the timeout occurs, and the counter input read times out. I am not sure if this is your issue, but I found out the hard way that this occurs when I switched to DAQmx, where this feature was added. Let me know if it worked,
    Paul
    Paul Falkenstein
    Coleman Technologies Inc.
    CLA, CPI, AIA-Vision
    Labview 4.0- 2013, RT, Vision, FPGA

  • Collect information from an external SQL DB into a SCOM property bag so we can collect the data and report on it

    The DB is collecting counts.  We want to collect the counts into a bag, then once in the bag be able to report the counts (need to collect on an hourly basis)

    Hi TLC426,
    you can use a timed powershell or vb script to connect to the db and run a sql command against it. The result must then be parsed and returned in the property bag.
    As a starting point you should have a look at Pete Zerger's blog post
    http://www.systemcentercentral.com/scom-query-sql-database-for-value-in-monitoring-script/ .
    Just wrap around the creation of a property bag and replace the eventlog entry generation with returning the value.
    Please keep in mind that you should always return a property bag, even if it is empty.
    Regards
    Boris

  • Get-Counter : The \\ServerNameHere\\SQLSERVER:Locks(_Total)\Lock Waits/sec performance counter path is not valid.

    I have the following code:
    Import-Module "sqlps" -DisableNameChecking
    #####http://www.travisgan.com/2013/03/powershell-and-performance-monitor.html
    function ExtractPerfmonData
    {
        param(
            [string]$server,
            [string]$instance
        )
        [Microsoft.PowerShell.Commands.GetCounter.PerformanceCounterSampleSet]$collections
        $monitorServer = "MonitoringServerNameHere"
        $monitorDB = "MonitoringDatabaseNameHere"
        $counters = @(
            "\$($instance):Memory Manager\Memory Grants Pending",
            "\$($instance):Memory Manager\Target Server Memory (KB)",
            "\$($instance):Memory Manager\Total Server Memory (KB)",
            "\$($instance):Buffer Manager\Buffer Cache Hit Ratio",
            "\$($instance):Buffer Manager\Checkpoint pages/sec",
            "\$($instance):Buffer Manager\Page Life Expectancy",
            "\$($instance):General Statistics\User Connections",
            "\$($instance):General Statistics\Processes Blocked",
            "\$($instance):Access Methods\Page Splits/sec",
            "\$($instance):SQL Statistics\Batch Requests/sec",
            "\$($instance):SQL Statistics\SQL Compilations/sec",
            "\$($instance):SQL Statistics\SQL Re-Compilations/sec",
            "\$($instance):Locks(_Total)\Lock Waits/sec"
        $collections = Get-Counter -ComputerName $server -Counter $counters -SampleInterval 1 -MaxSamples 1
        $sampling = $collections.CounterSamples | Select-Object -Property TimeStamp, Path, Cookedvalue
        $xmlString = $sampling | ConvertTo-Xml -As String
        $query = "dbo.usp_InsertPerfmonCounters_SQLServer '$xmlString';"
        Invoke-Sqlcmd -ServerInstance $monitorServer -Database $monitorDB -Query $query
    }
    #####ExtractPerfmonData -server "YourRemoteServerName" -instance "MSSQL`$SQLTest"
    ExtractPerfmonData -server "ServerName1Here" -instance "SQLSERVER" 
    ExtractPerfmonData -server "ServerName2Here" -instance "SQLSERVER"
    ExtractPerfmonData -server "ServerName3Here" -instance "MSSQL`$InstanceNameHere"  
    ExtractPerfmonData -server "ServerName4Here" -instance "SQLSERVER" 
    (I have 93 instances listed; here I just gave a sample of 4.)
    For only one instance I get the following error message:
    Get-Counter : The
    \\ServerNameHere2\\SQLSERVER:Locks(_Total)\Lock Waits/sec performance counter path  is not valid.
    All of the instances should have been set up the same, so I am confused as to why this error only occurs on this one instance.  I do not know where to look.  Has anyone ever seen this type of error before?
    lcerni

    Here is the output of the first script:
    CounterSetName                                                                                                                     
    ...       (eliminating non sql server counters here)              
    SQLAgent:Alerts                                                        
    SQLAgent:Jobs                                                          
    SQLAgent:JobSteps                                                        
    SQLAgent:Statistics                                      
    SQLServer:Access Methods                                                                                          
    SQLServer:Availability Replica
    SQLServer:Backup Device
    SQLServer:Batch Resp Statistics                                                                                                    
    SQLServer:Broker Activation                                                                                                        
    SQLServer:Broker Statistics                                                                                                        
    SQLServer:Broker TO Statistics                                                                                                     
    SQLServer:Broker/DBM Transport                                                                                                     
    SQLServer:Buffer Manager                                                                                                           
    SQLServer:Buffer Node                                                                                                              
    SQLServer:Catalog Metadata                                                                                                         
    SQLServer:CLR                                                                                                                      
    SQLServer:Cursor Manager by Type                                                                                                   
    SQLServer:Cursor Manager Total                                                                                                     
    SQLServer:Database Mirroring                                                                                                       
    SQLServer:Database Replica                                                                                                         
    SQLServer:Databases                                                                                                                
    SQLServer:Deprecated Features                                                                                                      
    SQLServer:Exec Statistics                                                                                                          
    SQLServer:FileTable                                                                                                                
    SQLServer:General Statistics                                                                                                       
    SQLServer:Latches                                                                                                                  
    SQLServer:Locks                                                                                                                    
    SQLServer:Memory Broker Clerks                                                                                                     
    SQLServer:Memory Manager                                                                                                           
    SQLServer:Memory Node                                                                                                              
    SQLServer:Plan Cache                                                                                                               
    SQLServer:Query Execution                                                                                                          
    SQLServer:Replication Agents                                                                                                       
    SQLServer:Replication Dist.                                                                                                        
    SQLServer:Replication Logreader                                                                                                    
    SQLServer:Replication Merge                                                                                                        
    SQLServer:Replication Snapshot                                                                                                     
    SQLServer:Resource Pool Stats                                                                                                      
    SQLServer:SQL Errors                                                                                                               
    SQLServer:SQL Statistics                                                                                                           
    SQLServer:Transactions                                                                                                             
    SQLServer:User Settable                                                                                                            
    SQLServer:Wait Statistics                                                                                                          
    SQLServer:Workload Group Stats                                                                                                     
    ...       (eliminating non sql server counters here)
    lcerni

  • Smart collections not reading correct number of images inside

    I have an issue with smart collections for the first time. I have just set up a new workflow with 4 stages, and the number counter next to the collection is counting wrong despite the images being represented in the collection. For example, if I click on the smart collection I can see 10 images inside, but the counter reads 3. Does anybody have this experience?
    I have optimised my catalogue; however, I do have some 20,000 images inside... will this reduce the ability of LR to keep up with such smart collection operations, giving incorrect readings?
    What are best practices for image numbers, and what is the argument for running multiple catalogues? I've just tried in a fresh catalogue and the results were perfect. Is this an inherent limitation of Lightroom? I was under the impression that I could manage my whole archive and current projects under one roof, as it were.
    Thanks,
    Graeme

    I suppose it would be worth optimising the catalog, if you haven't already done so.
    Another idea, may be to make a fresh catalog and "import from another catalog" your current one, into that. This transfers pictures and keywords, collections, smart collections etc. But if there is something "structural" or "infrastructural" wrong with your live catalog, I'd expect that to get left behind by this process - since everyting should AFAIK get re-indexed after being brought in.
    Then you'd go forward with the new catalog, if successful. Doing this makes no change to your present catalog so can do no harm to try - the various pictures' source files are simply shared across both catalogs - single file referenced by both, IOW.
    One thing in your post (which I did not fully understand) suggested to me that your workflow relies at some point on external metadata being written. If your workflow and smart collection criteria rely on LR's tracking of the status of external metadata, such as "is up to date", I believe this specific aspect has been found quite buggy and unreliable by many people: the metadata status badges etc. are sometimes rather approximate in their correspondence to reality. So if some other criterion could be found on which to base your workflow, that would probably go more smoothly.
    However, if you do have more than one catalog pointing to just one set of photo source files, that aspect of writing metadata out, will of course be a little problematic - since the latest catalog to have written to the file will "overwrite", and the other catalog(s) will regard that same image as having been externally modified meanwhile.
    regards, RP
