Performance of Collections

Hi,
I plan to test this, but I wanted to ask if anyone has any rules of thumb or experience in this matter.
When returning result sets from the database, my colleagues have got into the habit of also getting a row count (the query runs twice in the stored procedure: once for the count, once for the rows).
The row count is used to size the array appropriately.
Does anyone know if knowing the number of elements in advance is such a big win that it justifies the extra work on the database side?
We normally use HashMaps and ArrayLists. I have also used a LinkedHashMap once (to maintain the order of the elements).
Thanks
Rakesh

Hi,
Resizing an ArrayList is probably a lot faster than
performing database queries.
/Kaj

Agreed, although you could probably find out for sure with some simple test code.
~Cheers
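
For what it's worth, here is a minimal sketch of such a test (the class name and sizes are invented for illustration). It times filling an ArrayList that grows from the default capacity against one pre-sized with the row count, which is exactly the trade-off the extra COUNT query is meant to buy:

    import java.util.ArrayList;
    import java.util.List;

    public class PresizeTest {
        static final int ROWS = 1000000;

        // Fills a list with ROWS dummy elements; capacity <= 0 means "use the default".
        static long timeFill(int capacity) {
            long start = System.nanoTime();
            List<Integer> list = (capacity > 0)
                    ? new ArrayList<Integer>(capacity)
                    : new ArrayList<Integer>();
            for (int i = 0; i < ROWS; i++) {
                list.add(i);
            }
            return System.nanoTime() - start;
        }

        public static void main(String[] args) {
            // Warm up the JIT before taking the measurements.
            timeFill(0);
            timeFill(ROWS);
            System.out.printf("default sizing: %d ms%n", timeFill(0) / 1000000);
            System.out.printf("pre-sized     : %d ms%n", timeFill(ROWS) / 1000000);
        }
    }

Even where pre-sizing wins, it only saves a handful of in-memory array copies as the backing array grows, which is microseconds of work, while the extra COUNT query costs a full round trip to the database.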

Similar Messages

  • Performance of Collection Types

    Hiya,
    We are having severe performance problems using collections in one part of our system. I was hoping that someone may have some suggestions re init.ora parameters, memory allocation, etc.
    The situation is: we have a form which is based on a view. This view in turn is based on a collection within a package specification on the database. This collection is populated before the user enters the form by a call to code on the database; let us call this create_collection. The create_collection code runs a SQL statement which does a CAST( MULTISET and SELECTs from 3 inline views, each of which itself joins to another collection via TABLE(CAST. It should be noted that one of the inline views joins to a table of nine and a half million rows, which is large for our application. Having said that, I have explain-planned the code, and the execution plan correctly uses the appropriate indexes when you remove the TABLE( CASTs from the FROM clause of the SQL.
    As a temporary solution we could change the view to be based on a series of temporary tables (which are populated for each user on a session-by-session basis). Having benchmarked this, we found that performance changed from 90 seconds to query the view using collections to 0.4 seconds to query the view when all collections are removed and temporary tables are substituted instead.
    Obviously this is a good solution; however, the changes required across our system would be numerous to say the least. We use collections extensively elsewhere within the system and have not encountered this performance issue before. Having said that, nowhere else do we use CAST( MULTISET and TABLE(CAST within the same SQL statement.
    So, my question is: does anyone have any tips for increasing the performance of collections when the underlying SQL is processing large volumes of data?
    Cheers
    Chris

    No. Java handles memory alignment internally; you should not be worrying about that yourself. Whatever you think you can do, Java does it better. A lot better. Write dumb code instead to help the JVM optimize.
    http://java.sun.com/developer/technicalArticles/Interviews/devinsight_1/

  • Log files/troubleshooting performance data collection

    Hello: 
    Trying to use MAP 9.0, 
    When doing performance data collection, I am getting errors. Is there a log file or event log that captures why the errors are occurring?
    One posting said to look in bin\log - but there is no log directory under BIN for this version, it seems.
    Thank you, 
    Mustafa Hamid, System Center Consultant

    Hi Mark,
    There's no CLEANER_ADJUST_UTILIZATION in EnvironmentConfig for BDB JE 5.0.43, which I'm currently using. I also tried
       envConfig.setConfigParam("je.cleaner.adjustUtilization", "false");
    but it fails to start up with the error below:
    Caused by: java.lang.IllegalArgumentException: je.cleaner.adjustUtilization is not a valid BDBJE environment parameter
        at com.sleepycat.je.dbi.DbConfigManager.setConfigParam(DbConfigManager.java:412) ~[je-5.0.43.jar:5.0.43]
        at com.sleepycat.je.EnvironmentConfig.setConfigParam(EnvironmentConfig.java:3153) ~[je-5.0.43.jar:5.0.43]

  • Performance Analysis: how to use collect -j to collect performance data for web apps on Tomcat?

    How do I use Performance Analysis collect -j to collect performance data for web apps running on Tomcat?
    Thank you!

    If you are using Studio 12.4, collect can profile a Java program launched from a script.
    For example, if Tomcat is started from a shell script run.sh, then running
    > collect -j on sh run.sh
    will collect data for all the web apps loaded after Tomcat starts.
    If your web app only runs during a certain period and you only care about data from that period, you can control when data collection starts and stops with a signal:
    > collect -j on -y PROF sh run.sh
    Then send the SIGPROF signal to the target process (say PID 1111) to start or pause data collection:
    > kill -SIGPROF 1111
    On Linux SIGPROF cannot be used; SIGVTALRM is recommended instead.
    If you are on Solaris, collect can attach to a running process (say PID 1111):
    > collect -j on -P 1111
    Studio 12.4 page:
    Oracle Solaris Studio Performance Analyzer

  • Need to increase performance: bulk collect in a cursor with LIMIT, inserting into trigger tables in the FOR loop

    Hi all,
    I have a performance issue in the code below, where I am trying to insert data from table_stg into the target_tab and parent_tab tables and then into child tables via a cursor with BULK COLLECT. The target_tab and parent_tab tables are huge and have a mandatory row-level trigger enabled on them. The time taken for this block to execute is 5000 seconds; my requirement is to reduce it to 5 to 10 minutes.
    Can someone please guide me here? It's a bit urgent. Awaiting your response.
    declare
    vmax_Value NUMBER(5);
      vcnt number(10);
      id_val number(20);
      pc_id number(15);
      vtable_nm VARCHAR2(100);
      vstep_no  VARCHAR2(10);
      vsql_code VARCHAR2(10);
      vsql_errm varchar2(200);
      vtarget_starttime timestamp;
      limit_in number :=10000;
      idx           number(10);
              cursor stg_cursor is
             select
                   DESCRIPTION,
                   SORT_CODE,
                   ACCOUNT_NUMBER,
                     to_number(to_char(CORRESPONDENCE_DATE,'DD')) crr_day,
                     to_char(CORRESPONDENCE_DATE,'MONTH') crr_month,
                     to_number(substr(to_char(CORRESPONDENCE_DATE,'DD-MON-YYYY'),8,4)) crr_year,
                   PARTY_ID,
                   GUID,
                   PAPERLESS_REF_IND,
                   PRODUCT_TYPE,
                   PRODUCT_BRAND,
                   PRODUCT_HELD_ID,
                   NOTIFICATION_PREF,
                   UNREAD_CORRES_PERIOD,
                   EMAIL_ID,
                   MOBILE_NUMBER,
                   TITLE,
                   SURNAME,
                   POSTCODE,
                   EVENT_TYPE,
                   PRIORITY_IND,
                   SUBJECT,
                   EXT_PRD_ID_TX,
                   EXT_PRD_HLD_ID_TX,
                   EXT_SYS_ID,
                   EXT_PTY_ID_TX,
                   ACCOUNT_TYPE_CD,
                   COM_PFR_TYP_TX,
                   COM_PFR_OPT_TX,
                   COM_PFR_RSN_CD
             from  table_stg;
    type rec_type is table of stg_rec_type index by pls_integer;
    v_rt_all_cols rec_type;
    BEGIN
      vstep_no   := '0';
      vmax_value := 0;
      vtarget_starttime := systimestamp;
      id_val    := 0;
      pc_id     := 0;
      success_flag := 0;
              vstep_no  := '1';
              vtable_nm := 'before cursor';
        OPEN stg_cursor;
              vstep_no  := '2';
              vtable_nm := 'After cursor';
       LOOP
              vstep_no  := '3';
              vtable_nm := 'before fetch';
    --loop
        FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
                  vstep_no  := '4';
                  vtable_nm := 'after fetch';
    --EXIT WHEN v_rt_all_cols.COUNT = 0;
        EXIT WHEN stg_cursor%NOTFOUND;
    FOR i IN 1 .. v_rt_all_cols.COUNT
      LOOP
       dbms_output.put_line(upper(v_rt_all_cols(i).event_type));
        if (upper(v_rt_all_cols(i).event_type) = upper('System_enforced')) then
                  vstep_no  := '4.1';
                  vtable_nm := 'before seq sel';
              select PC_SEQ.nextval into pc_id from dual;
                  vstep_no  := '4.2';
                  vtable_nm := 'before insert corres';
              INSERT INTO target1_tab
                           (ID,
                            PARTY_ID,
                            PRODUCT_BRAND,
                            SORT_CODE,
                            ACCOUNT_NUMBER,
                            EXT_PRD_ID_TX,         
                            EXT_PRD_HLD_ID_TX,
                            EXT_SYS_ID,
                            EXT_PTY_ID_TX,
                            ACCOUNT_TYPE_CD,
                            COM_PFR_TYP_TX,
                            COM_PFR_OPT_TX,
                            COM_PFR_RSN_CD,
                            status)
             VALUES
                            (pc_id,
                             v_rt_all_cols(i).party_id,
                             decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
                             v_rt_all_cols(i).sort_code,
                             'XXXX'||substr(trim(v_rt_all_cols(i).ACCOUNT_NUMBER),length(trim(v_rt_all_cols(i).ACCOUNT_NUMBER))-3,4),
                             v_rt_all_cols(i).EXT_PRD_ID_TX,
                             v_rt_all_cols(i).EXT_PRD_HLD_ID_TX,
                             v_rt_all_cols(i).EXT_SYS_ID,
                             v_rt_all_cols(i).EXT_PTY_ID_TX,
                             v_rt_all_cols(i).ACCOUNT_TYPE_CD,
                             v_rt_all_cols(i).COM_PFR_TYP_TX,
                             v_rt_all_cols(i).COM_PFR_OPT_TX,
                             v_rt_all_cols(i).COM_PFR_RSN_CD,
                             NULL);
                  vstep_no  := '4.3';
                  vtable_nm := 'after insert corres';
        else
              select COM_SEQ.nextval into id_val from dual;
                  vstep_no  := '6';
                  vtable_nm := 'before insertcomm';
          if (upper(v_rt_all_cols(i).event_type) = upper('REMINDER')) then
                vstep_no  := '6.01';
                  vtable_nm := 'after if insertcomm';
              insert into parent_tab
                 (ID ,
                 CTEM_CODE,
                 CHA_CODE,            
                 CT_CODE,                           
                 CONTACT_POINT_ID,             
                 SOURCE,
                 RECEIVED_DATE,                             
                 SEND_DATE,
                 RETRY_COUNT)
              values
                 (id_val,
                  lower(v_rt_all_cols(i).event_type), 
                  decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
                  'Email',
                  v_rt_all_cols(i).email_id,
                  'IADAREMINDER',
                  systimestamp,
                  systimestamp,
                  0);  
         else
                vstep_no  := '6.02';
                  vtable_nm := 'after else insertcomm';
              insert into parent_tab
                 (ID ,
                 CTEM_CODE,
                 CHA_CODE,            
                 CT_CODE,                           
                 CONTACT_POINT_ID,             
                 SOURCE,
                 RECEIVED_DATE,                             
                 SEND_DATE,
                 RETRY_COUNT)
              values
                 (id_val,
                  lower(v_rt_all_cols(i).event_type), 
                  decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
                  'Email',
                  v_rt_all_cols(i).email_id,
                  'CORRESPONDENCE',
                  systimestamp,
                  systimestamp,
                  0); 
            END if; 
                  vstep_no  := '6.11';
                  vtable_nm := 'before chop';
             if (v_rt_all_cols(i).ACCOUNT_NUMBER is not null) then 
                      v_rt_all_cols(i).ACCOUNT_NUMBER := 'XXXX'||substr(trim(v_rt_all_cols(i).ACCOUNT_NUMBER),length(trim(v_rt_all_cols(i).ACCOUNT_NUMBER))-3,4);
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)
              values
                (id_val,
                 'IB.Correspondence.AccountNumberMasked',
                 v_rt_all_cols(i).ACCOUNT_NUMBER);
             end if;
                  vstep_no  := '6.1';
                  vtable_nm := 'before stateday';
             if (v_rt_all_cols(i).crr_day is not null) then 
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)
              values
                (id_val,
                 --'IB.Correspondence.Date.Day',
                 'IB.Crsp.Date.Day',
                 v_rt_all_cols(i).crr_day);
             end if;
                  vstep_no  := '6.2';
                  vtable_nm := 'before statemth';
             if (v_rt_all_cols(i).crr_month is not null) then 
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)
              values
                (id_val,
                 --'IB.Correspondence.Date.Month',
                 'IB.Crsp.Date.Month',
                 v_rt_all_cols(i).crr_month);
             end if;
                  vstep_no  := '6.3';
                  vtable_nm := 'before stateyear';
             if (v_rt_all_cols(i).crr_year is not null) then 
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)
              values
                (id_val,
                 --'IB.Correspondence.Date.Year',
                 'IB.Crsp.Date.Year',
                 v_rt_all_cols(i).crr_year);
             end if;
                  vstep_no  := '7';
                  vtable_nm := 'before type';
               if (v_rt_all_cols(i).product_type is not null) then
                  insert into child_tab
                     (COM_ID,                                            
                     KEY,                                                                                                                                        
                     VALUE)
                  values
                    (id_val,
                     'IB.Product.ProductName',
                   v_rt_all_cols(i).product_type);
                end if;
                  vstep_no  := '9';
                  vtable_nm := 'before title';         
              if (trim(v_rt_all_cols(i).title) is not null) then
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE )
              values
                (id_val,
                 'IB.Customer.Title',
                 trim(v_rt_all_cols(i).title));
              end if;
                  vstep_no  := '10';
                  vtable_nm := 'before surname';
              if (v_rt_all_cols(i).surname is not null) then
                insert into child_tab
                   (COM_ID,                                            
                   KEY,                                                                                                                                          
                   VALUE)
                values
                  (id_val,
                  'IB.Customer.LastName',
                  v_rt_all_cols(i).surname);
              end if;
                            vstep_no  := '12';
                            vtable_nm := 'before postcd';
              if (trim(v_rt_all_cols(i).POSTCODE) is not null) then
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)                              
               values
                (id_val,
                 'IB.Customer.Addr.PostCodeMasked',
                  substr(replace(v_rt_all_cols(i).POSTCODE,' ',''),length(replace(v_rt_all_cols(i).POSTCODE,' ',''))-2,3));
              end if;
                            vstep_no  := '13';
                            vtable_nm := 'before subject';
              if (trim(v_rt_all_cols(i).SUBJECT) is not null) then
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)                              
               values
                (id_val,
                 'IB.Correspondence.Subject',
                  v_rt_all_cols(i).subject);
              end if;
                            vstep_no  := '14';
                            vtable_nm := 'before inactivity';
              if (trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) is null or
                  trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '3' or
                  trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '6' or
                  trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '9') then
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)                              
               values
                (id_val,
                 'IB.Correspondence.Inactivity',
                  v_rt_all_cols(i).UNREAD_CORRES_PERIOD);
              end if;
                          vstep_no  := '14.1';
                          vtable_nm := 'after notfound';
        end if;
                          vstep_no  := '15';
                          vtable_nm := 'after notfound';
        END LOOP;
        end loop;
                          vstep_no  := '16';
                          vtable_nm := 'before closecur';
        CLOSE stg_cursor;
                          vstep_no  := '17';
                          vtable_nm := 'before commit';
        DELETE FROM table_stg;
      COMMIT;
                          vstep_no  := '18';
                          vtable_nm := 'after commit';
    EXCEPTION
    WHEN OTHERS THEN
      ROLLBACK;
      success_flag := 1;
      vsql_code := SQLCODE;
      vsql_errm := SUBSTR(sqlerrm,1,200);
      error_logging_pkg.inserterrorlog('samp',vsql_code,vsql_errm, vtable_nm,vstep_no);
      RAISE_APPLICATION_ERROR (-20011, 'samp '||vstep_no||' SQLERRM:'||SQLERRM);
    end;
    Thanks

    It's a bit urgent.
    NO - it is NOT urgent. Not to us.
    If you have an urgent problem you need to hire a consultant.
    I have a performance issue in the code below,
    Maybe you do and maybe you don't. How are we to really know? You haven't posted ANYTHING indicating that a performance issue exists. Please read the FAQ for how to post a tuning request and the info you need to provide. First and foremost you have to post SOMETHING that actually shows a performance issue exists. Troubleshooting requires FACTS, not just a subjective opinion.
    where I am trying to insert data from table_stg into the target_tab and parent_tab tables and then into child tables via a cursor with BULK COLLECT. The target_tab and parent_tab tables are huge and have a mandatory row-level trigger enabled on them. The time taken for this block to execute is 5000 seconds; my requirement is to reduce it to 5 to 10 minutes.
    Personally I think 5000 seconds (about 1 hr 20 minutes) is very fast for processing 800 trillion rows of data into parent and child tables. Why do you think that is slow?
    Your code has several major flaws that need to be corrected before you can even determine what, if anything, needs to be tuned.
    This code has the EXIT statement at the beginning of the loop instead of at the end
        FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
                  vstep_no  := '4';
                  vtable_nm := 'after fetch';
    --EXIT WHEN v_rt_all_cols.COUNT = 0;
        EXIT WHEN stg_cursor%NOTFOUND;
    The correct place for the %NOTFOUND test when using BULK COLLECT with LIMIT is at the END of the loop; that is, as the last statement in the loop.
    You can use a COUNT test at the start of the loop instead, but ironically you have commented that out and have done the %NOTFOUND test wrong. Either move the %NOTFOUND test to the end of the loop, or remove it and uncomment the COUNT test. The corrected skeleton is sketched just below.
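    Concretely, reusing the names from the posted code, a minimal sketch of the corrected loop shape:

        LOOP
          FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
          FOR i IN 1 .. v_rt_all_cols.COUNT LOOP
            -- process row i; the last fetch may return fewer than limit_in rows,
            -- and this inner loop simply does not run when the fetch returns none
            NULL;
          END LOOP;
          EXIT WHEN stg_cursor%NOTFOUND;  -- tested at the END, after the rows are processed
        END LOOP;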
    WHEN OTHERS THEN
      ROLLBACK;
    That basically says you don't even care what problem occurs or whether the problem is for a single record of your 10,000 in the collection. You pretty much just throw away any stack trace and substitute your own message.
    Your code also has NO exception handling for any of the individual steps or blocks of code.
    The code you posted also raises the question: why are you using NAME=VALUE pairs for the child data rows? Why aren't you using a standard relational table for this data?
    As others have noted, you are using slow-by-slow (row by row) processing. Let's assume that PL/SQL, the bulk collect, and row-by-row processing are actually necessary.
    Then you should be constructing the parent and child records into collections and then inserting them in BULK using FORALL.
    1. Create a collection for the new parent rows
    2. Create a collection for the new child rows
    3. For each set of LIMIT source row data
      a. empty the parent and child collections
      b. populate those collections with new parent/child data
      c. bulk insert the parent collection into the parent table
      d. bulk insert the child collection into the child table
    And unless you really want to either load EVERYTHING or abandon everything you should use bulk exception handling so that the clean data gets processed and only the dirty data gets rejected.
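    For illustration only, here is a rough sketch of that pattern reusing the names from your posted block (it assumes the cursor, v_rt_all_cols, and logging declarations from that block; the collection types and row-building logic are placeholders, not a drop-in replacement). SAVE EXCEPTIONS supplies the bulk exception handling just mentioned; a real version would give each FORALL its own handler:

        DECLARE
          TYPE parent_rows_t IS TABLE OF parent_tab%ROWTYPE INDEX BY PLS_INTEGER;
          TYPE child_rows_t  IS TABLE OF child_tab%ROWTYPE  INDEX BY PLS_INTEGER;
          l_parents   parent_rows_t;
          l_children  child_rows_t;
          bulk_errors EXCEPTION;
          PRAGMA EXCEPTION_INIT(bulk_errors, -24381);
        BEGIN
          OPEN stg_cursor;
          LOOP
            FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
            EXIT WHEN v_rt_all_cols.COUNT = 0;      -- the COUNT form of the exit test
            l_parents.DELETE;                       -- a. empty the collections
            l_children.DELETE;
            FOR i IN 1 .. v_rt_all_cols.COUNT LOOP  -- b. populate them; no inserts here
              -- build l_parents(l_parents.COUNT + 1) and l_children(l_children.COUNT + 1)
              -- from v_rt_all_cols(i), including the sequence values
              NULL;
            END LOOP;
            BEGIN
              FORALL j IN 1 .. l_parents.COUNT SAVE EXCEPTIONS   -- c. bulk insert parents
                INSERT INTO parent_tab VALUES l_parents(j);
              FORALL j IN 1 .. l_children.COUNT SAVE EXCEPTIONS  -- d. bulk insert children
                INSERT INTO child_tab VALUES l_children(j);
            EXCEPTION
              WHEN bulk_errors THEN
                -- the clean rows are already inserted; log only the dirty ones
                FOR k IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
                  error_logging_pkg.inserterrorlog('samp',
                    -SQL%BULK_EXCEPTIONS(k).ERROR_CODE,
                    SQLERRM(-SQL%BULK_EXCEPTIONS(k).ERROR_CODE),
                    vtable_nm, vstep_no);
                END LOOP;
            END;
          END LOOP;
          CLOSE stg_cursor;
        END;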

  • Performance data collection issue

    Hi,
    We are using SCOM 2007 R2. We have some servers that are not collecting performance data. These servers are up and running fine and generating alerts (monitoring is working fine). Can anyone please suggest a workaround for this?
    Thanks & Regards,
    Padmaja M.

    Try clearing the management server health service cache by stopping the System Center Management service, renaming the Health Service State folder, and starting the service.
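    On a default install that sequence looks roughly like this (HealthService is the service name behind "System Center Management"; the install path below is an assumption and varies by environment):

        net stop HealthService
        ren "C:\Program Files\System Center Operations Manager 2007\Health Service State" "Health Service State.old"
        net start HealthService

    Renaming rather than deleting the folder lets you restore the old cache if anything goes wrong.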
    Juke Chou
    TechNet Community Support

  • Performance manager collection

    The majority of the host end-device data is collected, but for some hosts there is no data, just n/a, in the following fields:
    Avg. Rx/sec, Avg. Tx/sec, (Rx+Tx)/sec, Peak Rx/sec, Peak Tx/sec, Errors, and Discards all have n/a entries. How do I enable collection for these hosts?

    Performance Manager does not collect statistics on nonmanageable and non-MDS switches. Loop devices (FL/NL) are not collected.
    One year's worth of data for two variables (Rx and Tx bytes) requires a round-robin database (rrd) file size of 76 KB. If errors and discards are also collected, the rrd file size becomes 110 KB. The default internal values are as follows:
    • 600 samples of 5 minutes (2 days and 2 hours)
    • 700 samples of 30 minutes (12.5 days)
    • 775 samples of 2 hours (50 days)
    • 300 samples of 1 day
    A 1000-port SAN requires 110 MB for a year's worth of historical data that includes errors and discards (1000 ports x 110 KB). If there were 20 switches in this SAN with equal distribution of fabric ports, about two to three SNMP packets per switch would be sent every 5 minutes, for a total of about 100 request or response SNMP packets required to monitor the data.
    Because of their variable counter requests, storage requirements for flows are more difficult to predict, but in general you can expect each extra flow to add another 76 KB.

  • Performance manager - collection, flow

    Hi, in the collection wizard I can't find port-channel interfaces (ISLs), but in real-time monitoring I can monitor all ISLs and port-channels. How can I collect the history of port-channels? I also have IVR configured and am not able to set up flows for an IVR source and destination. I don't see the devices in the flow wizard; I see only devices within a single VSAN. How can I do that? Thanks in advance for your help.

    I opened a TAC case and the answer is:
    Customers can create a collection with Performance Manager for ISLs, hosts, storage, and flows, but not for port channels (port channels are not supported). The wizard cannot be used for flow creation for IVR devices, since a flow is created within a single VSAN.
    Unfortunately this is as designed.
    I think it's a really big problem that you can't monitor port-channels and IVR flows! That means these features are not usable in a production environment where monitoring is a must. Why is this not written in bold on the first page of the documentation? It's really frustrating. Thanks for your help.

  • Required account rights for Alert and Performance/Topology collection and operations SDK

    Hi,
    My question is with regard to the user role privileges required for performing certain operations using the Operations Manager 2007 R2 SDK.
    - Registering an internal connector to MS SCOM for alert subscription:
    http://msdn.microsoft.com/en-us/library/bb437580.aspx
    - Collection and acknowledging (or closing) of alerts
    http://msdn.microsoft.com/en-us/library/bb424130.aspx
    http://msdn.microsoft.com/en-us/library/hh327168.aspx
    Based on experience and this article (http://technet.microsoft.com/en-us/library/hh212758.aspx) on how to configure a product connector subscription, I'm guessing that the user account needs to have Operations Manager Administrators user role privileges; however, I was unable to find this documented in the context of the SDK.
    Are requirements the same for GetMonitoringPerformanceData and GetConfigurationGroups methods?
    http://msdn.microsoft.com/en-us/library/microsoft.enterprisemanagement.monitoring.monitoringperformancedatareader.getmonitoringperformancedata.aspx
    http://msdn.microsoft.com/en-us/library/dd848560.aspx
    Thank you and kind regards,
    Zsolt

    Hi,
    As far as I know, to close an alert the user should have at least the Operator role. Please refer to the article below for more information about SCOM user role profiles:
    Operations Associated with User Role Profiles
    http://technet.microsoft.com/en-us/library/hh872885.aspx
    Regards,
    Yan Li
    Please remember to mark the replies as answers if they help and unmark them if they provide no help.

  • How to perform Data Collection on single SFC with QTY = 1 with material lot size 1?

    Dear experts,
    We are working with SFC qty > 1 on a relaxed routing. At a given operation we want to collect the data on a single quantity; i.e. the SFC qty on that operation, where the collection will happen, will be 1. The corresponding material lot size is, for example, 10. The operator must be able to collect data on the SFC with qty = 1 multiple times until the quantities are consumed. He must also be able to collect other values on the remaining quantities on the same operation with the same DC group or other DC groups. How many times the data must be collected depends on the shop order build quantity. The data may be collected several times, but not more than the build qty. In other words, some specific data will be collected on a qty of a product while other data will be collected against the remaining quantity. The data collection must also be done in a serialized manner.
    Here's what we have set up so far:
    1) 3 DC groups, each DC group has 3 data fields.
    2) Each data field has the following restrictions:  Required Data Entries = 0 and Optional Data Entries = 1
    3) All DC groups are attached on the same combination of operation\material\routing
    4) we are using relaxed routing
    Process description:
    The operator must be able to collect any data field on a single product. For that he will enter the operation where the data collection is attached, enter the SFC with qty = 1, then run the data collection after selecting the appropriate DC group and entering the needed information. The operator will then complete the SFC with qty = 1.
    The operator will pick the next product, select the same SFC, enter qty 1, and collect another value against this product.
    Problem is:
    Once the first collection is done on a given SFC with entered qty = 1, the system does not allow the operator to do further collections on the same SFC with qty = 1 or any other quantity. He cannot select any DC group from the DC group list. We also tried the table selection menu on the DC group list, but nothing can be selected.
    So we tried to play around with the DC group definitions as follows:
    A) We set Required Data Entries = 0 and Optional Data Entries = 10. Still the operator was not able to select any DC group after collecting data the first time. We tried reopening the POD and listing again, but we get the same blocking behavior.
    B) We set Required Data Entries = 10 and Optional Data Entries = 1. The operator was able to select the DC group after collecting data the first time, BUT the operator must enter the data fields 10 times on one SFC quantity, which is not what we want. Besides, again he cannot collect other information on the remaining quantities on the same operation.
    C) There's an option to serialize the SFC before reaching the operation where the collection happens, then merge after completion. Automation is needed here, hence customization. We are strongly avoiding customization now, since we expect data collection to work well on single quantities even when the main SFC has qty > 1.
    Questions:
    1) Are we missing any kind of further configuration\setup?
    2) Or does the current system design not allow collecting data on single quantities of an SFC whose main quantity is greater than 1?
    3) Looking at this link, Approaches to Collection of Data - SAP Manufacturing Execution (SAP ME) - SAP Library, nothing is mentioned about the same SFC number with multiple quantities.
    We are using SAP ME 15.0.3.0.
    Thanks in advance for your help
    Ali

    Ali
    To collect data for the same SFC multiple times, your system rule "Allow Multiple Data Collection" needs to be set to true for the site.
    Stuart

  • After adding separate performance collections to the MAP 9.1 database, the database has become corrupted

    Hi,
    I have installed MAP 9.1 (9.1.265.0) on my notebook, with Windows 7 Enterprise. The inventory of our environment was successful and I have successfully added some performance collections to the database.
    First I ran a performance collection for one hour, then added a performance collection of one week. This was okay.
    Then I waited one week and added another collection of about five days. That collection would not stop: it was scheduled to run from 2014-07-28 12:49:31 until 2014-08-01 05:00:04, but it was still running at 2014-08-01 05:30:25. I waited a little bit longer, but the collection kept on running according to the status screen.
    So I cancelled out of the collection at 2014-08-01 05:45. In the performance data it said that the collection ran from Jul 14 2014 08:04 AM until Aug 1 2014 4:57 AM. So that looked okay.
    But now, when I try to get the performance metrics data from MAP, it states that I have to do a "Refresh Assessment", because I probably cancelled out of a collection. This "Refresh Assessment" runs for about an hour and then completes with a message "Failed".
    I get these errors in the MapToolkit.log
    <2014-08-05 05:14:51.09 AssessInventoryWorker@StoredProcAssessment,I> RunAssessment() - [Perf] [[Perf_Assessment].[ClearPerfdata]] : 125 ms
    <2014-08-05 05:14:56.13 AssessInventoryWorker@StoredProcAssessment,I> RunAssessment() - [Perf] [[Perf_Assessment].[CreateTimeIntervals]] : 5039 ms
    <2014-08-05 05:22:47.76 AssessInventoryWorker@StoredProcAssessment,I> RunAssessment() - [Perf] [[Perf_Assessment].[CreateMetricsPerTimeInterval]] : 471591 ms
    <2014-08-05 05:52:54.66 AssessInventoryWorker@DataAccessCore,W> DoWorkInTransaction<T>() - Caught InvalidOperationException trying to roll back the transaction: This SqlTransaction has completed; it is no longer usable.
    <2014-08-05 05:52:54.79 AssessInventoryWorker@DataAccessCore,W> DoWorkInTransaction<T>() - Caught a SQL transaction timeout exception. Will retry 3 more time(s). Retrying in 5000 milliseconds.
    <2014-08-05 05:53:15.86 AssessInventoryWorker@DataAccessCore,W> OpenConnection() - Caught a SqlException trying to connect to the database. Will retry connection 3 more time(s). Retrying in 5000 milliseconds.
    <2014-08-05 05:53:20.99 AssessInventoryWorker@DataAccessCore,W> OpenConnection() - Caught a SqlException trying to connect to the database. Will retry connection 2 more time(s). Retrying in 10000 milliseconds.
    <2014-08-05 05:53:45.00 AssessInventoryWorker@DataAccessCore,W> OpenConnection() - Caught a SqlException trying to connect to the database. Will retry connection 1 more time(s). Retrying in 15000 milliseconds.
    <2014-08-05 06:24:03.69 AssessInventoryWorker@DataAccessCore,W> DoWorkInTransaction<T>() - Caught a SQL transaction timeout exception. Will retry 2 more time(s). Retrying in 10000 milliseconds.
    <2014-08-05 06:54:13.91 AssessInventoryWorker@DataAccessCore,W> DoWorkInTransaction<T>() - Caught a SQL transaction timeout exception. Will retry 1 more time(s). Retrying in 15000 milliseconds.
    <2014-08-05 07:24:28.99 AssessInventoryWorker@Analyzer,E> RunAssessments() - Assessment threw an exception:
    <2014-08-05 07:24:29.03 AssessInventoryWorker@AssessInventoryWorker,I> AssessmentCompletedEventHandler: Assessment completed event.
    <2014-08-05 07:24:29.09 AssessInventoryWorker@TaskProcessor,I> WorkerCompleted: Worker: 'AssessInventoryWorker'
    <2014-08-05 07:24:29.15 TID-16@TaskProcessor,I> Run: Completed. Status: Failed
    Is there maybe a restriction on the intervals for adding performance collection data, or is there something else I am doing wrong?
    (I made a backup of the database after the first week of performance data; that database is still usable, so I can try to add more performance collections to that version of the database.)
    I hope someone has an idea what is going on.
    Thanks!

    The time between collections isn't the problem. If you look in the log file, SQL is timing out. I think the problem is machine resources and time related. After the performance data collection has run for the predefined amount of time, MAP has SQL execute various assessments on the data to aggregate the raw data into something MAP can use. The more raw data that exists, the longer SQL will take and the more CPU and memory resources SQL will need.
    I would recommend that you have at least 4 cores or vCPUs and 6-8 GB of memory dedicated to the machine on which MAP is running. I would also follow the directions in this Wiki article to increase the timeout in MAP so that MAP will give SQL the time it needs to complete the job.
    http://social.technet.microsoft.com/wiki/contents/articles/10397.map-toolkit-increasing-the-sql-database-timeout-value.aspx
    Please remember to click "Mark as Answer" on the post that helps you, and to click "Unmark as Answer" if a marked post does not actually answer your question. Please VOTE as HELPFUL if the post helps you. This can be beneficial to other community members reading the thread.

  • Regarding Internal table and access performance

    hey guys,
    In my report, I somehow reduced the query time by selecting minimum key fields and moving the selected records to an internal table.
    Now I am restricting the loop over this internal table as per my requirements using WHERE statements (believing that internal table retrieval is faster than database access via a query).
    But my performance still goes down.
    Could you please suggest how to reduce the execution time in my ABAP programming?
    I used the commands below:
    READ using BINARY SEARCH.
    LOOP ... WHERE statement.
    PERFORM statements.
    COLLECT statements.
    DELETE itab (DELETE duplicates statements too).
    SORT itab (sorting).
    For each of the above statements, is there a faster way to retrieve records?
    If I look at my bottleneck in SE30, it shows:
    ABAP programming at 70 percent
    database access at 20 percent
    R/3 system at 10 percent
    Now how do I reduce this ABAP processing?
    Could you please reply.
    ambichan.

    Hello Ambichan,
    It is difficult to suggest the improvements without looking at the actual code that you are running. However, I can give you some general information.
    1. READ using the BINARY SEARCH addition.
    This is indeed a good way of doing a READ. But have you made sure that the internal table is sorted by the required fields before you use this statement?
    2. LOOP...WHERE statement.
    This is also a good way to avoid looping through unnecessary entries. But further improvement can certainly be achieved if you use FIELD-SYMBOLS.
    LOOP AT itab ASSIGNING <field_symbol_of_the_same_line_type_as_itab>.
    ENDLOOP.
    (With a field symbol the addition is ASSIGNING, not INTO; this avoids copying each row into a work area.)
    3. PERFORM statements.
    A PERFORM statement cannot itself be optimized; what matters is the code that you write inside the FORM (or subroutine).
    4. COLLECT statements.
    I trust you have used the COLLECT statement to simplify the logic. Let that be as it is; the code is more readable and elegant.
    The COLLECT statement is somewhat performance-intensive. It takes more time with a normal (STANDARD) internal table. See if you can use an internal table of type SORTED. Even better, you can use a HASHED internal table.
    5. DELETE itab (DELETE duplicates statements too).
    If you are deleting several entries based on a condition, then this should be okay. You cannot avoid the DELETE statement if your functionality requires it.
    Also, before deleting the DUPLICATES, ensure that the internal table is sorted by the comparison fields.
    6. SORT statement.
    It depends on how many entries there are in the internal table. If you are using most of the above points on the same internal table, then it is better to define your internal table as type SORTED. That way, inserting entries will take a little more time (to ensure that the table is always sorted), but all the other operations are going to be much faster.
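    To make points 1, 4 and 6 concrete, here is a small sketch of a SORTED table (the field names are invented for illustration) that gets the binary-search READ for free and keeps COLLECT and DELETE ADJACENT DUPLICATES cheap:

        TYPES: BEGIN OF ty_total,
                 matnr(18) TYPE c,            " hypothetical key field
                 menge     TYPE p DECIMALS 3, " numeric field that COLLECT sums
               END OF ty_total.

        DATA: itab_totals TYPE SORTED TABLE OF ty_total
                          WITH NON-UNIQUE KEY matnr,
              wa_total    TYPE ty_total.

        FIELD-SYMBOLS: <fs_total> TYPE ty_total.

        wa_total-matnr = 'MAT-001'.
        wa_total-menge = 10.

        " COLLECT on a SORTED table locates the group through the table key
        " instead of scanning, as it would with a STANDARD table.
        COLLECT wa_total INTO itab_totals.

        " READ on a SORTED table performs a binary search implicitly,
        " so no explicit SORT is needed beforehand.
        READ TABLE itab_totals ASSIGNING <fs_total>
             WITH TABLE KEY matnr = 'MAT-001'.

        " The table is always ordered by matnr, so adjacent duplicates
        " can be dropped without a preceding SORT.
        DELETE ADJACENT DUPLICATES FROM itab_totals COMPARING matnr.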
    Get back to me if you need further assistance.
    Regards,
    Anand Mandalika.

  • AE CC 13.0.2 on Mac OSX 10.9.5 - file collecting feature does not work anymore

    hi, I am experiencing issues with the file collecting feature in AE CC 13.0.2 on Mac OSX 10.9.5. I have the same kind of problems on my MacBook Pro Retina 15'' (2014) with 1 TB SSD and 16 GB RAM and on my iMac (mid 2011, 3.4 GHz Intel Core i7, 16 GB RAM). I open a project without any problem, then try to collect onto an external hard drive. AE gives me an error message: it cannot find a certain file (never the same file, I tried many times). I hand-collected a current project onto an external HD from my MacBook Pro, went to my iMac, opened the hand-collected project from the external HD (again no issue while opening), then again tried to perform the collect command. Again an error message notifying me that a certain file is missing. This is the case with different AE projects! The After Effects collect files command is to me a very important feature that provides backup safety. With this feature not working properly I feel insecure; I need to fix this asap.

    Oh yes. My bad. It is in fact the one-second mark. And it is in every project I have tried.
    The assets in the first project that I noticed this happening on (which was also the first project I opened after upgrading to OSX 10.10.2) include:
    - Solid background with a color gradient
    - Alpha Track Matte
    - Optical Flare
    - Particular
    - Null object
    But this exact same problem occurs with every project I've tried to export, regardless of what's in the project/comp.

  • Need to generate a report for Event collection rules created in SCOM

    Hi All,
    Can anyone let me know where I can get a report of the event-based collection rules created in SCOM (both default and custom).
    I used the report below, but it gives the collection rules that are performance-counter based, not event-collection based.
    Reporting -> Microsoft Generic report library -> Performance detail (gives performance-based collection rules, but not event-based). Can anyone help?
    Gautam.75801

    What is your objects setting?
    Suppose I want to display the Windows Restart Event, which is collected by "collection rule for windows Restarted Events". From the Rules pane in the Authoring workspace, "collection rule for windows Restarted Events" is targeted at Windows Server Operating System. As a result, my "custom event" report settings are:
    1) Objects: object instances of Windows Operating System
    2) Rules: filter by rule name contains "windows Restarted Events"
    Roger

  • Campus Manager No Data Collected from Network

    We just integrated CiscoWorks with ACS. Now when we select Campus Manager | Visualization | Topology Services we get the following error message:
    No Data Collected from Network
    1. Devices are not available from DCR, or
    2. You have not performed data collection at least once
    When we look at Common Services | Devices and Credentials | Device Discovery, the total devices discovered and the devices reachable indicate there are devices that were found.
    We could see devices fine before the ACS integration, but now we cannot see any devices.
    Any thoughts?

    The devices (original clients) that I defined are as follows:
    10.0.1.*
    10.0.10.*
    10.0.2.*
    I was able to see these clients. I deleted the above clients from ACS and added:
    10.0.1-10.*
    so that I can see the above clients and others (10.0.3.*, 10.0.4.*, etc.).
    I can still see only the original clients. The others show up in the "Devices not configured in ACS" report.
    What do I need to do?  What did I miss?
