Convert array to fewer samples per second

I have a large array with 9 channels of data per timestamp.
The timestamps show that there are about 100 samples per second.
I need to generate an array that holds only one average value per channel, per second.
I'm struggling with "build array" and "indexing" to get the proper array.
Suggestions??

I might even attach the VI this time!! (V 8.6)
Attachments:
Array -channel data.vi 9 KB
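
The core of what's needed is just chunked averaging: for each second, take the next 100 rows and compute one mean per channel. Below is a minimal sketch of that logic in Java (in LabVIEW terms, roughly a For Loop pulling each 100-sample chunk with Array Subset and taking a Mean per channel); the data[sample][channel] layout and the fixed rate parameter are assumptions, not taken from the attached VI.

public class Downsample {
    /**
     * Average 'rate' consecutive samples per channel, producing one row per
     * second. data is laid out as data[sample][channel]; any trailing
     * partial second is dropped.
     */
    public static double[][] averagePerSecond(double[][] data, int rate) {
        int channels = data[0].length;
        int seconds = data.length / rate;           // whole seconds only
        double[][] out = new double[seconds][channels];
        for (int s = 0; s < seconds; s++) {
            for (int c = 0; c < channels; c++) {
                double sum = 0;
                for (int i = 0; i < rate; i++) {
                    sum += data[s * rate + i][c];   // e.g. 100 samples per second
                }
                out[s][c] = sum / rate;             // one average per channel
            }
        }
        return out;
    }
}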

Similar Messages

  • Svchost.exe with "Dhcp, eventlog, lmhosts" services is generating thousands of page faults and I/O reads per second?

    On one of our Windows 2008 R2 Enterprise (SP1) servers, we're noticing a strange phenomenon: the svchost.exe that hosts the "Dhcp, eventlog, lmhosts" services is constantly generating page faults, a few thousand per second, accumulating to billions of total page faults. I/O reads and I/O other are also rising every second. CPU is consistently 2%, and memory is constant (~40 MB).
    I'm guessing that it's the eventlog service, because our HP OpenView log reader (opcle.exe) is also working hard to keep up. I've searched for others posting a similar problem but am coming up empty-handed.
    This is an MS Analysis Services 2008 server, but we haven't noticed any problems coming from SSAS. We have other file-sharing-related jobs that interact with this server that sometimes take 30 minutes and sometimes 6 hours for the same workload, and we're thinking that the 6-hour runs are somehow related to this process's unusual page faults.
    Has anyone else seen this strange eventlog behavior?
    Thanks
    -Mark

    Hi,
    The best thing would be to download Process Explorer and analyze the problem.
    Process Explorer
    http://technet.microsoft.com/en-us/sysinternals/bb896653
    For how to use Process Explorer to troubleshoot the performance issue, please refer to the following Microsoft TechNet blogs:
    HIGH CPU – SVCHOST.EXE
    http://blogs.technet.com/b/askperf/archive/2009/04/10/prf-high-cpu-svchost-exe.aspx
    Getting Started with SVCHOST.EXE Troubleshooting
    http://blogs.technet.com/b/askperf/archive/2008/01/11/getting-started-with-svchost-exe-troubleshooting.aspx
    If you find the cause is Automatic update, please also refer to the following Microsoft TechNet blog:
    Automatic Update causes SVCHOST.exe high CPU
    http://blogs.technet.com/b/asiasupp/archive/2007/05/29/automatic-update-causes-svchost-exe-high-cpu.aspx
    Regards,

  • 1,000,000 updates per second?

    How could you configure a Coherence cluster to handle processing a million stock quotes per second? The data feed could be configured as a single app spewing out all 1,000,000/sec, or it could be many apps producing proportionately fewer ticks/sec, but in any case it's going to total a million/sec. Fractions of the feed spread among multiple physical servers sounds smartest. The quote Map.Entry would probably have a Key of String (or char[] if that's more efficient; I know the max length). The Value would be a price and a size, so maybe just those two elements byte[]{Float,Integer} or a Java object with Float and Integer member variables. I'd want to trigger actions based on market conditions when the planets align just right, so I'm not simply ignoring these values or pub/sub'ing them out to client apps; I'm evaluating many of them simultaneously and using them as event triggers. Is something like that remotely possible? On how much hardware?
    Thanks,
    Andrew

    Andrew,
    Using partitioning, Coherence can handle 1 million updates per second, but the big question is how many updates per second do you need on the hottest instrument at the hottest time?
    The other question is related to "the planets lining up", because that may imply a global view of the market, which becomes more difficult in a partitioned system.
    To provide a high rate of change to data in a partitioned system, the data providers (those with a large amount of data or a high rate of change) should be in the cluster (not coming in over *Extend) to eliminate one hop. To avoid blocking on the tick update from the data provider, it should locally enqueue the update. The queue servicer (a separate thread) should either coalesce whatever ticks are in the queue into a single putAll(), or if every tick needs to be recorded (i.e. all three ticks in the queue like "change to 3.5", "change to 3.55", "change to 3.6" have to be published, instead of just the latest "change to 3.6") then it would batch up everything in the queue until it hits an item that it already has in its batch, and then do a putAll().
    The use of that async publishing mode is what allows for the much higher throughput, particularly when a data provider is producing a huge number of ticks in a given period of time. You can make it even smoother (e.g. avoid outliers caused by some servers being slower) by having more local queues+services (partitioned by Coherence partition, or at the extreme by instrument). You can determine the Coherence partition using the KeyPartitioningStrategy returned from the PartitionedService for the ticks cache.
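    To make the queue servicer concrete, here is a minimal sketch of the batch-until-duplicate-key logic in plain Java, with an ordinary Map standing in for the Coherence NamedCache (a NamedCache exposes putAll() the same way); the class and method names are illustrative, not from Coherence:

    import java.util.AbstractMap.SimpleEntry;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class TickPublisher implements Runnable {
        private final Map<String, Double> cache;   // stand-in for the NamedCache
        private final BlockingQueue<Map.Entry<String, Double>> queue = new LinkedBlockingQueue<>();

        public TickPublisher(Map<String, Double> cache) { this.cache = cache; }

        // The data provider calls this; it never blocks on the cache update.
        public void publish(String instrument, double price) {
            queue.offer(new SimpleEntry<>(instrument, price));
        }

        @Override
        public void run() {
            Map<String, Double> batch = new HashMap<>();
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    Map.Entry<String, Double> tick = queue.take();   // block for the first tick
                    batch.put(tick.getKey(), tick.getValue());
                    // Drain until the queue is empty or the next tick's key is already
                    // in the batch -- overwriting it would lose an unpublished
                    // intermediate tick, and here every tick must be recorded.
                    Map.Entry<String, Double> next;
                    while ((next = queue.peek()) != null && !batch.containsKey(next.getKey())) {
                        queue.poll();
                        batch.put(next.getKey(), next.getValue());
                    }
                    cache.putAll(batch);   // one publish for the whole batch
                    batch.clear();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    If only the latest value per instrument matters, drop the containsKey() check and the loop coalesces ticks instead of batching them all.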
    Peace,
    Cameron Purdy | Oracle Coherence
    http://coherence.oracle.com/

  • What is the most Frames Per Second NI-CAN can do?

    My goal is to send 1000 frames per second on my CAN bus using the NI-CAN PCI 2-slot card I have. However, the closest I have been able to get is 666 frames per second. This is sending 8 frames every 12 ms using an edited readmult example. Is there a way to do this with writemult? Or is there a hardware limit that I am trying to go past?
    What can I tweak to get more frames? Increase the baud rate? Decrease the size of the frames? (I've tried both of those.)
    Other questions that should probably go in other posts  (Frame API):
    Is there a way to send/read the frames at the bit-level?  I have found ways to manipulate Arbitration ID, Remote Frame, Data Length, and Data, but there are several other bits in a frame.
    Is there a way to send a bad frame, one that would raise/cause an error frame?

    Yes, I did break 1,000 frames per second. I got up to 1,714 and 1,742 using two different methods. This is at 250 kbps; if you used 500 kbps or 1 Mbps, you could get more frames. If you have 125 kbps, you might not be able to break 1,000 frames per second.
    ncWriteMult is the key. You can load 512 frames into a queue at a time. I would put 256 on at a time and check whether there were fewer than 256 frames left in the queue; if so, load it up, so the queue would never be empty. I went about it two ways. One was using ncGetAttribute to determine the space left, and that was the faster method; however, I was also trying to read the messages back to verify that it worked, and I had problems with logging every frame. It would also send the first 40 ncWriteMults quickly, as if the queue it was filling were much larger than 512.
    The other way was using trial and error to determine how many ms to sleep before writing the next batch of 256 frames. There are variables outside of my control that determined the time it would take, and it would vary a few ms. I wanted a stable environment that could send forever without filling the queue, so I went with a value that would wait 2 or 3 ms, depending on conditions, before writing again. The value I used was 142 ms, I think. Your mileage may vary.
    There is also a way to do some error handling that I did not try to utilize. Instead of the process crashing, there is a way to tell it to wait if this error is returned. That might be the best way for me to implement what I wanted to do, but I was assigned another task before I tried to get it to work.
    There is a timing element in ncWriteMult's documentation I didn't look into very much, but it would space the frames out and could send 1,000 frames a second evenly distributed, instead of sending them as quickly as possible, waiting some ms, then sending another batch.
    If anyone could link us, or provide some code snippets of the error handling, proper usage of ncGetAttribute, or some way to read faster, that would be greatly appreciated.
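    For what it's worth, here is a rough sketch of the keep-the-queue-full loop described above. The two helper methods are hypothetical wrappers: the real Frame API calls are ncGetAttribute (to read the pending-frame count) and ncWriteMult (to load a batch), whose exact signatures are not shown in this thread.

    public class CanWriteLoop {
        static final int QUEUE_CAPACITY = 512;   // device write queue size
        static final int BATCH_SIZE = 256;       // frames loaded per ncWriteMult

        // Hypothetical wrapper around ncGetAttribute: frames still pending.
        static int pendingFrames() { return 0; /* native call goes here */ }

        // Hypothetical wrapper around ncWriteMult: queue a batch of frames.
        static void writeBatch(byte[][] frames) { /* native call goes here */ }

        public static void main(String[] args) throws InterruptedException {
            byte[][] batch = new byte[BATCH_SIZE][8];   // 8 data bytes per frame
            while (true) {
                // Top up whenever fewer than one batch remains, so the queue
                // never runs empty between writes.
                if (pendingFrames() < BATCH_SIZE) {
                    writeBatch(batch);
                } else {
                    Thread.sleep(2);   // the post settled on a 2-3 ms wait
                }
            }
        }
    }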

  • Pages Per Second, User Modes

    What is being recorded in the Pages per second metrics, is it:
    1) The DOWNLOAD and (emulated) RENDERING of the page with its attributes/objects (pages, images, frames etc)
    OR
    2) Just the DOWNLOAD of the page with its attributes/objects.
    Does this change between User Modes i.e. Thick and Thin Client

    Hi Paul,
    A page is the eTester representation of what would be a web page, in terms of the navigations and actions that are performed in the web browser after the document is completely rendered and before the last transition.
    Open the navigation editor and take a look at the Navigations tree. There you can see how the pages are divided. Each page will have between zero and multiple navigations. Pages with zero navigations don't count; these pages won't even be shown in the report. If the page has frames, it will show more than one navigation. If the web application uses Ajax or a similar technology with HTTP transactions, or if the page uses Java applets, Flash objects, or ActiveX controls that make HTTP transactions and the proxy recorder was on, then the pages will most likely show more than one navigation.
    eLoad will request all the navigations contained in any given page of any script in your load test scenario; if the web server responds successfully for all the navigations, this is counted as one page received. eLoad will be requesting multiple pages from the different scripts that exist in the submitted scenario, then it counts how many of those pages are received in an interval of time and converts the interval to seconds for display.
    Pages received per second is therefore the average number of pages, with all the navigations contained in them, successfully obtained from the web servers every second.
    Pages per second is the same statistic regardless of whether you are running thick or thin client.
    Pages per second doesn't take into consideration the download of images, scripts, CSS, or any other objects. Those are counted in hits per second.
    A similar thread was created earlier:
    http://qazone.empirix.com/thread.jspa?threadID=11&tstart=30
    The link was inserted here for cross-reference if required later.
    Regards,
    Zuriel

  • Number of IDOC created per second

    Hi, I am using middleware (such as Cast Iron or IBM DataStage) to convert external data into IDocs and send them to SAP using ALE. The speed of creating the IDocs in SAP is about 1 IDoc per second. Is there any parameter/configuration change that can be made in SAP to speed up IDoc creation? I heard that R/3 will open additional ALE channels at run time depending on load; can this be configured?
    I am running SAP 4.7 (6.2) with Unicode.

    Hi Chee Hean Liew,
    the basic situation is that one IDoc takes one dialog process in one tRFC.
    Some partner middlewares are certified for sending several IDocs in one LUW (tRFC), so several IDocs share one dialog process.
    You have the following possibilities to increase performance:
    Optionally, if you collect IDocs:
    - merge several IDocs into one tRFC, if your partner is certified for it
    - merge several IDocs into one file
    General:
    - increase the number of dialog processes, depending on your system performance
    - increase the dialog processes available for tRFC (RZ12, RZ04)
    There is a general rule: the sum of dialog tRFC processes must be higher
    than the sum of non-dialog processes.
    Regards
    Tibor

  • Logical reads per second

    I have two databases; one is a clone of the other, made a few months ago. Database A has somewhat more data, since it's the active production database, but not significantly more, perhaps 10% greater. They are on different boxes: Database A is on a Sun 280R 2-processor box, and Database B is on a Dell 2950 with 2 dual-core processors, so this isn't exactly comparing apples to apples. However, when I run the same query on the two databases, I get radically different results. Against Database A, the query takes about 7 minutes. On Database B, it takes about 2 seconds. Logical reads per second on Database A reach 80,000-90,000; on Database B, they're about 3,000. There are a few configuration differences (both databases use automatic memory management):
    Parameter                       Database A      Database B
    db_file_multiblock_read_count   64              16
    log_buffer                      14290432        2104832
    open_cursors                    1250            300
    sga_max_size                    4194304000      536870912
    sga_target                      2634022912      536870912
    shared_pool_reserved_size       38587596        7340032
    The timings were taken off-hours so neither database would be busy. I'm baffled by the extreme difference in execution times. Any help appreciated!
    Thanks,
    Harry
    Edited by: harryb on Apr 8, 2009 7:26 PM

    OK, let's start here....
    Database A (TEMPOP)
    SQL> show parameter optimizer
    NAME TYPE VALUE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 10.2.0.3
    optimizer_index_caching integer 0
    optimizer_index_cost_adj integer 100
    optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging boolean TRUE
    SQL> show parameter db_file_multi
    NAME TYPE VALUE
    db_file_multiblock_read_count integer 64
    SQL> show parameter db_block_size
    NAME TYPE VALUE
    db_block_size integer 8192
    ===================================================
    Database B (TEMPO11)
    SQL> show parameter optimizer
    NAME TYPE VALUE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 10.2.0.1
    optimizer_index_caching integer 0
    optimizer_index_cost_adj integer 100
    optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging boolean TRUE
    SQL> show parameter db_file_multi
    NAME TYPE VALUE
    db_file_multiblock_read_count integer 16
    SQL> show parameter db_block_size
    NAME TYPE VALUE
    db_block_size integer 8192
    =================================================================
    Now for the query that's causing the problem:
    SELECT dsk_document_attribute.value_text inspect_permit_no,
              NVL (activity_task_list.revised_due_date,
                   activity_task_list.default_due_date)
                 inspect_report_due_date,
              agency_interest.master_ai_id agency_interest_id,
              agency_interest.master_ai_name agency_interest_name,
              get_county_code_single (agency_interest.master_ai_id)
                 parish_or_county_code,
              agency_interest_address.physical_address_line_1 inspect_addr_1,
              agency_interest_address.physical_address_line_2 inspect_addr_2,
              agency_interest_address.physical_address_line_3 inspect_addr_3,
              agency_interest_address.physical_address_municipality inspect_city,
              agency_interest_address.physical_address_state_code state_id,
              agency_interest_address.physical_address_zip inspect_zip,
              person.master_person_first_name person_first_name,
              person.master_person_middle_initial person_middle_initial,
              person.master_person_last_name person__last_name,
              SUBSTR (person_telecom.address_or_phone, 1, 14) person_phone,
              activity_task_list.requirement_id
       FROM dsk_document_attribute,
            agency_interest,
            activity_task_list,
            agency_interest_address,
            dsk_central_file dsk_aaa,
            dsk_central_file dsk_frm,
            person,
            person_telecom
       WHERE agency_interest.int_doc_id = 0
             AND agency_interest.master_ai_id =
                   agency_interest_address.master_ai_id
             AND agency_interest.int_doc_id = agency_interest_address.int_doc_id
             AND agency_interest.master_ai_id = dsk_frm.master_ai_id
             AND dsk_aaa.int_doc_id = activity_task_list.int_doc_id
             AND dsk_frm.int_doc_id = dsk_document_attribute.int_doc_id
             AND dsk_frm.doc_type_specific_code =
                   dsk_document_attribute.doc_type_specific_code
             AND dsk_frm.activity_category_code = 'PER'
             AND dsk_frm.activity_class_code = 'GNP'
             AND dsk_frm.activity_type_code IN ('MAB', 'NAB', 'REB')
             AND dsk_frm.program_code = '80'
             AND dsk_frm.doc_type_general_code = 'FRM'
             AND dsk_frm.doc_type_specific_code = 'PERSET'
             AND dsk_aaa.doc_template_id = 2000
             AND dsk_frm.master_ai_id = dsk_aaa.master_ai_id
             AND dsk_frm.activity_category_code = dsk_aaa.activity_category_code
             AND dsk_frm.program_code = dsk_aaa.program_code
             AND dsk_frm.activity_class_code = dsk_aaa.activity_class_code
             AND dsk_frm.activity_type_code = dsk_aaa.activity_type_code
             AND dsk_frm.activity_year = dsk_aaa.activity_year
             AND dsk_frm.activity_num = dsk_aaa.activity_num
             AND dsk_document_attribute.doc_attribute_code = 'PERMIT_NO'
             AND activity_task_list.requirement_id IN ('3406', '3548', '3474')
             AND activity_task_list.reference_task_id = 0
             AND NVL (activity_task_list.status_code, '$$$') <> '%  '
             AND person.master_person_id(+) =
                   f_get_gp_contact (agency_interest.master_ai_id)
             AND person.int_doc_id(+) = 0
             AND person.master_person_id = person_telecom.master_person_id(+)
             AND person.int_doc_id = person_telecom.int_doc_id(+)
             AND person_telecom.telecom_type_code(+) = 'wp';
    Here's the explain plan for Database A, where the query takes 7-8 minutes or more:
    PLAN_TABLE_OUTPUT
    | Id  | Operation                           | Name                       | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                    |                            |     1 |   253 |    34   (3)|
    |   1 |  NESTED LOOPS                       |                            |     1 |   253 |    34   (3)|
    |   2 |   NESTED LOOPS                      |                            |     1 |   224 |    32   (0)|
    |   3 |    NESTED LOOPS OUTER               |                            |     1 |   169 |    31   (0)|
    |   4 |     NESTED LOOPS OUTER              |                            |     1 |   144 |    29   (0)|
    |   5 |      NESTED LOOPS                   |                            |     1 |   122 |    27   (0)|
    |   6 |       NESTED LOOPS                  |                            |     1 |    81 |    26   (0)|
    |   7 |        NESTED LOOPS                 |                            |     1 |    48 |    19   (0)|
    |   8 |         INLIST ITERATOR             |                            |       |       |            |
    |*  9 |          TABLE ACCESS BY INDEX ROWID| ACTIVITY_TASK_LIST         |     1 |    21 |    17   (0)|
    |* 10 |           INDEX RANGE SCAN          | ACTIVITY_TASK_LIST_FK11    |   106 |       |     4   (0)|
    |* 11 |         TABLE ACCESS BY INDEX ROWID | DSK_CENTRAL_FILE           |     1 |    27 |     2   (0)|
    |* 12 |          INDEX UNIQUE SCAN          | PK_DSK_CENTRAL_FILE        |     1 |       |     1   (0)|
    |* 13 |        TABLE ACCESS BY INDEX ROWID  | DSK_CENTRAL_FILE           |     1 |    33 |     7   (0)|
    |* 14 |         INDEX RANGE SCAN            | CF_MASTER_AI_ID_IND        |     9 |       |     2   (0)|
    |  15 |       TABLE ACCESS BY INDEX ROWID   | AGENCY_INTEREST            |     1 |    41 |     1   (0)|
    |* 16 |        INDEX UNIQUE SCAN            | PK_AGENCY_INTEREST         |     1 |       |     0   (0)|
    |  17 |      TABLE ACCESS BY INDEX ROWID    | PERSON                     |     1 |    22 |     2   (0)|
    |* 18 |       INDEX UNIQUE SCAN             | PK_PERSON                  |     1 |       |     1   (0)|
    |  19 |     TABLE ACCESS BY INDEX ROWID     | PERSON_TELECOM             |     1 |    25 |     2   (0)|
    |* 20 |      INDEX UNIQUE SCAN              | PK_PERSON_TELECOM          |     1 |       |     1   (0)|
    |  21 |    TABLE ACCESS BY INDEX ROWID      | AGENCY_INTEREST_ADDRESS    |     1 |    55 |     1   (0)|
    |* 22 |     INDEX UNIQUE SCAN               | PK_AGENCY_INTEREST_ADDRESS |     1 |       |     0   (0)|
    |  23 |   TABLE ACCESS BY INDEX ROWID       | DSK_DOCUMENT_ATTRIBUTE     |     1 |    29 |     1   (0)|
    |* 24 |    INDEX UNIQUE SCAN                | PK_DSK_DOCUMENT_ATTRIBUTE  |     1 |       |     0   (0)|
    Predicate Information (identified by operation id):
       9 - filter("ACTIVITY_TASK_LIST"."REFERENCE_TASK_ID"=0 AND
                  NVL("ACTIVITY_TASK_LIST"."STATUS_CODE",'$$$')<>'%  ')
      10 - access("ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3406 OR
                  "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3474 OR "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3548)
      11 - filter("DSK_AAA"."DOC_TEMPLATE_ID"=2000 AND "DSK_AAA"."ACTIVITY_CLASS_CODE"='GNP' AND
                  "DSK_AAA"."PROGRAM_CODE"='80' AND "DSK_AAA"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  ("DSK_AAA"."ACTIVITY_TYPE_CODE"='MAB' OR "DSK_AAA"."ACTIVITY_TYPE_CODE"='NAB' OR
                  "DSK_AAA"."ACTIVITY_TYPE_CODE"='REB'))
      12 - access("ACTIVITY_TASK_LIST"."INT_DOC_ID"="DSK_AAA"."INT_DOC_ID")
      13 - filter("DSK_FRM"."ACTIVITY_CLASS_CODE"='GNP' AND "DSK_FRM"."PROGRAM_CODE"='80' AND
                  "DSK_FRM"."DOC_TYPE_SPECIFIC_CODE"='PERSET' AND "DSK_FRM"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  "DSK_FRM"."DOC_TYPE_GENERAL_CODE"='FRM' AND ("DSK_FRM"."ACTIVITY_TYPE_CODE"='MAB' OR
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"='NAB' OR "DSK_FRM"."ACTIVITY_TYPE_CODE"='REB') AND
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"="DSK_AAA"."ACTIVITY_TYPE_CODE" AND
                  "DSK_FRM"."ACTIVITY_YEAR"="DSK_AAA"."ACTIVITY_YEAR" AND
                  "DSK_FRM"."ACTIVITY_NUM"="DSK_AAA"."ACTIVITY_NUM")
      14 - access("DSK_FRM"."MASTER_AI_ID"="DSK_AAA"."MASTER_AI_ID")
      16 - access("AGENCY_INTEREST"."MASTER_AI_ID"="DSK_FRM"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST"."INT_DOC_ID"=0)
      18 - access("PERSON"."MASTER_PERSON_ID"(+)="F_GET_GP_CONTACT"("AGENCY_INTEREST"."MASTER_AI_ID
                  ") AND "PERSON"."INT_DOC_ID"(+)=0)
      20 - access("PERSON"."MASTER_PERSON_ID"="PERSON_TELECOM"."MASTER_PERSON_ID"(+) AND
                  "PERSON_TELECOM"."TELECOM_TYPE_CODE"(+)='wp' AND
                  "PERSON"."INT_DOC_ID"="PERSON_TELECOM"."INT_DOC_ID"(+))
      22 - access("AGENCY_INTEREST"."MASTER_AI_ID"="AGENCY_INTEREST_ADDRESS"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST_ADDRESS"."INT_DOC_ID"=0)
      24 - access("DSK_FRM"."INT_DOC_ID"="DSK_DOCUMENT_ATTRIBUTE"."INT_DOC_ID" AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_ATTRIBUTE_CODE"='PERMIT_NO' AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_TYPE_SPECIFIC_CODE"='PERSET')============================================================================
    Here's the explain plan output for Database B, where the query takes 2-3 seconds:
    PLAN_TABLE_OUTPUT
    | Id  | Operation                           | Name                       | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                    |                            |     1 |   289 |    39   (0)|
    |   1 |  NESTED LOOPS OUTER                 |                            |     1 |   289 |    39   (0)|
    |   2 |   NESTED LOOPS                      |                            |     1 |   260 |    37   (0)|
    |   3 |    NESTED LOOPS                     |                            |     1 |   205 |    36   (0)|
    |   4 |     NESTED LOOPS OUTER              |                            |     1 |   172 |    35   (0)|
    |   5 |      NESTED LOOPS                   |                            |     1 |   145 |    34   (0)|
    |   6 |       NESTED LOOPS                  |                            |     1 |   104 |    33   (0)|
    |   7 |        NESTED LOOPS                 |                            |     1 |    61 |    26   (0)|
    |   8 |         INLIST ITERATOR             |                            |       |       |            |
    |*  9 |          TABLE ACCESS BY INDEX ROWID| ACTIVITY_TASK_LIST         |     1 |    25 |    24   (0)|
    |* 10 |           INDEX RANGE SCAN          | ACTIVITY_TASK_LIST_FK11    |   145 |       |     4   (0)|
    |* 11 |         TABLE ACCESS BY INDEX ROWID | DSK_CENTRAL_FILE           |     1 |    36 |     2   (0)|
    |* 12 |          INDEX UNIQUE SCAN          | PK_DSK_CENTRAL_FILE        |     1 |       |     1   (0)|
    |* 13 |        TABLE ACCESS BY INDEX ROWID  | DSK_CENTRAL_FILE           |     1 |    43 |     7   (0)|
    |* 14 |         INDEX RANGE SCAN            | CF_MASTER_AI_ID_IND        |     9 |       |     2   (0)|
    |  15 |       TABLE ACCESS BY INDEX ROWID   | AGENCY_INTEREST            |     1 |    41 |     1   (0)|
    |* 16 |        INDEX UNIQUE SCAN            | PK_AGENCY_INTEREST         |     1 |       |     0   (0)|
    |  17 |      TABLE ACCESS BY INDEX ROWID    | PERSON                     |     8 |   216 |     1   (0)|
    |* 18 |       INDEX UNIQUE SCAN             | PK_PERSON                  |     1 |       |     0   (0)|
    |  19 |     TABLE ACCESS BY INDEX ROWID     | DSK_DOCUMENT_ATTRIBUTE     |     1 |    33 |     1   (0)|
    |* 20 |      INDEX UNIQUE SCAN              | PK_DSK_DOCUMENT_ATTRIBUTE  |     1 |       |     0   (0)|
    |  21 |    TABLE ACCESS BY INDEX ROWID      | AGENCY_INTEREST_ADDRESS    |     1 |    55 |     1   (0)|
    |* 22 |     INDEX UNIQUE SCAN               | PK_AGENCY_INTEREST_ADDRESS |     1 |       |     0   (0)|
    |  23 |   TABLE ACCESS BY INDEX ROWID       | PERSON_TELECOM             |     1 |    29 |     2   (0)|
    |* 24 |    INDEX UNIQUE SCAN                | PK_PERSON_TELECOM          |     1 |       |     1   (0)|
    Predicate Information (identified by operation id):
       9 - filter("ACTIVITY_TASK_LIST"."REFERENCE_TASK_ID"=0 AND
                  NVL("ACTIVITY_TASK_LIST"."STATUS_CODE",'$$$')<>'%  ')
      10 - access("ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3406 OR
                  "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3474 OR "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3548)
      11 - filter("DSK_AAA"."DOC_TEMPLATE_ID"=2000 AND "DSK_AAA"."ACTIVITY_CLASS_CODE"='GNP' AND
                  "DSK_AAA"."PROGRAM_CODE"='80' AND "DSK_AAA"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  ("DSK_AAA"."ACTIVITY_TYPE_CODE"='MAB' OR "DSK_AAA"."ACTIVITY_TYPE_CODE"='NAB' OR
                  "DSK_AAA"."ACTIVITY_TYPE_CODE"='REB'))
      12 - access("ACTIVITY_TASK_LIST"."INT_DOC_ID"="DSK_AAA"."INT_DOC_ID")
      13 - filter("DSK_FRM"."DOC_TYPE_SPECIFIC_CODE"='PERSET' AND
                  "DSK_FRM"."ACTIVITY_CLASS_CODE"='GNP' AND "DSK_FRM"."PROGRAM_CODE"='80' AND
                  "DSK_FRM"."DOC_TYPE_GENERAL_CODE"='FRM' AND "DSK_FRM"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  ("DSK_FRM"."ACTIVITY_TYPE_CODE"='MAB' OR "DSK_FRM"."ACTIVITY_TYPE_CODE"='NAB' OR
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"='REB') AND "DSK_FRM"."ACTIVITY_TYPE_CODE"="DSK_AAA"."ACTIVITY_TY
                  PE_CODE" AND "DSK_FRM"."ACTIVITY_YEAR"="DSK_AAA"."ACTIVITY_YEAR" AND
                  "DSK_FRM"."ACTIVITY_NUM"="DSK_AAA"."ACTIVITY_NUM")
      14 - access("DSK_FRM"."MASTER_AI_ID"="DSK_AAA"."MASTER_AI_ID")
      16 - access("AGENCY_INTEREST"."MASTER_AI_ID"="DSK_FRM"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST"."INT_DOC_ID"=0)
      18 - access("PERSON"."MASTER_PERSON_ID"(+)="F_GET_GP_CONTACT"("AGENCY_INTEREST"."MASTER_AI_ID
                  ") AND "PERSON"."INT_DOC_ID"(+)=0)
      20 - access("DSK_FRM"."INT_DOC_ID"="DSK_DOCUMENT_ATTRIBUTE"."INT_DOC_ID" AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_ATTRIBUTE_CODE"='PERMIT_NO' AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_TYPE_SPECIFIC_CODE"='PERSET')
      22 - access("AGENCY_INTEREST"."MASTER_AI_ID"="AGENCY_INTEREST_ADDRESS"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST_ADDRESS"."INT_DOC_ID"=0)
      24 - access("PERSON"."MASTER_PERSON_ID"="PERSON_TELECOM"."MASTER_PERSON_ID"(+) AND
                  "PERSON_TELECOM"."TELECOM_TYPE_CODE"(+)='wp' AND
                  "PERSON"."INT_DOC_ID"="PERSON_TELECOM"."INT_DOC_ID"(+))===============================================================================
    Edited by: harryb on Apr 9, 2009 3:29 PM

  • Can 10,000 samples per second be acquired from a sensor through cRIO?

    I just want to know: can I acquire 10,000 samples per second per channel, or more, when the program runs in real time, taking data from a sensor through the cRIO?

    Bikash wrote:
    Can I apply this method in a simulation loop, where I want to build my control algorithm?
    It sounds like a perfect fit for your needs. Remember to buy powerful enough hardware so you don't have to worry about performance.
    http://sine.ni.com/nips/cds/view/p/lang/en/nid/210400
    Or something similar performance with a PXI solution.
    Br,
    /Roger

  • Too High Wakeups-from-idle per second

    I have been using Arch for the past few months, and I have noticed that the battery drains much more quickly than it used to under Windows.
    I used powertop and found that the "Wakeups-from-idle per second" value is way too high. It is ~300 per second on average, and once I even saw numbers like 8000!!
    The link http://www.linuxpowertop.org/powertop.php says it is possible to get down to 3 wakeups running a full GNOME desktop, and that 193 is a lot more than 3.
    But in my case the numbers are far higher than expected. I run Arch with XFCE4.
    Can somebody explain why I am seeing such high numbers, and whether this is the reason for my battery draining?

    nachiappan wrote: Can somebody explain why I am seeing such high numbers, and whether this is the reason for my battery draining?
    Powertop shows what is causing the wakeups. If it is the kernel, trying a different frequency governor might help.
    More than likely, tweaking a few different settings together will get you more battery life; the wakeups alone won't be doing all the battery draining.
    There's lots of helpful hints on power saving here:
    https://wiki.archlinux.org/index.php/La … ttery_life
    edit: it seems your CPU isn't being monitored correctly. 3000% seems so wrong.
    What kernel and CPU are you using?
    Last edited by moetunes (2012-02-23 00:12:46)

  • Import setting: pictures per second

    Hello,
    The default setting in Adobe CS3 Pro for video import is 25 pictures per second.
    My problem is that I don't really know how many pictures per second my footage was actually recorded at.
    The user manual of my Panasonic camcorder says that it records "about 6 single pictures per second".
    But wouldn't that be way too few?
    Thanks for helping me

    It's this camera:
    http://www.directshopper.de/image/zoom/pan/panasonic-nv-gs-150-eg.jpg
    Its name is Panasonic NV-GS-150, but that's the European name. I figured that since it's a regular camcorder it should record the standard 25 pictures per second... any doubters?

  • Monitor Transactions Per second

    Sometimes I see my SQL Server running fewer than 2,000 transactions per second (from the "Transactions/sec" Perfmon counter), and sometimes I see 17,000 transactions per second.
    If I want to find out which application or which query is executing these transactions, is there a way to find that out?

    If it's a production and busy server, running Profiler *for a few hours* won't be a good idea. You may opt to run server-side traces with only specific events.
    http://msdn.microsoft.com/en-us/library/cc293613.aspx
    If you are using SQL server 2012 you can use extended events
    http://msdn.microsoft.com/en-IN/library/bb630282.aspx
    Satheesh
    My Blog |
    How to ask questions in technical forum

  • Use several still images per second

    For a time-lapse video that I shot, I want the play speed to be several frames per second.
    For example: I took 400 photos and now I want to play them all in 40 seconds (10 fps).
    What I tried was the following:
    load all the separate images, in order, into Premiere Pro
    select all images
    -> "Speed/Duration" menu
    Now here is the part where I get stuck.
    If I lower the duration per picture to as low as 1 second, everything works perfectly fine, but if I try to set the seconds field to "0" and the column next to it to some value [e.g. 00:00:00:20], my video gets messed up.
    The images indeed only last for a period shorter than a second, but I get a lot of "empty video" between the images.
    For some reason the ripple edit no longer seems to work going into sub-second durations, since it does not automatically paste all the separate images together anymore.
    How could I try to fix it?
    Also, what format is the column after the seconds? Is it milliseconds? Does it go from 0-99?

    Great news. For time-lapse, I usually go with 3-5 frames for my still duration, but it depends on the look and feel I am trying to obtain, and also on the interval at which the shooting took place. Sometimes I end up with less, sometimes more; it just depends on the look. I usually do a test project and import a few images at 3 frames, then play it many times to see if I like it. If not, off I go to 4 frames, etc. You were going in the other direction, as you already had a good feel for the pacing of your piece; you just needed to do the math to get there.
    Good luck,
    Hunt
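
    To spell out that math for this example: 400 stills over 40 seconds is 10 stills per second. Assuming a 25 fps PAL sequence (the thread doesn't state the timebase), 25 / 10 = 2.5 frames per still, so a still duration of 2 or 3 frames (00:00:00:02 or 00:00:00:03) brackets the 10 fps target, in line with the 3-5 frame range suggested above. And to the earlier question about the field after the seconds: it is frames, not milliseconds, running from 0 to 24 in a 25 fps project.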

  • Calls Per Second

    I am working on a Tech Refresh upgrade, and as part of the upgrade the client will be changing the way they route calls to their IP IVRs. That being said, I think we are going to be short on translation routes and would like to create some new ones. I saw in the SRND that the recommendation is Translation Route Pool = 20 * CPS. I was wondering what the best way is to determine calls per second with IP IVR. Any help would be very much appreciated. Thanks.

    By far the easiest way is to look at the Calls Per Second coming into UCCE.
    I just did this with a customer the other day and plotted the values across the day in 5-minute intervals (a tad under 16 calls per second).
    You need to jump onto your Logger and run this query (this table is not on an AW/HDS):
    -- RouteCallDetailTo5 is the call count per 5-minute interval;
    -- dividing by 300.0 (seconds in 5 minutes) converts it to calls per second.
    SELECT Time = CONVERT(char,DateTime,108), CPS = CONVERT(decimal(5,2), RouteCallDetailTo5/300.0)
    FROM Logger_Meters
    WHERE DateTime BETWEEN '05/07/2012 00:01'
    AND '05/07/2012 23:59'
    ORDER BY Time
    Then send the results to Excel and draw a pretty picture.
    Of course, this assumes all your calls arriving in the Call Router are trans routed to IPIVR.
    Regards,
    Geoff

  • Bytes per Second

    Dear Reader,
    I have written a simple download utility program in Java.
    In it I can count the number of bytes downloaded.
    The problem I am facing is getting the bytes per second of the download in progress.
    For this purpose I take the system time at definite intervals (one or two seconds) and count the bytes downloaded in that specific interval.
    Doing so slows down the program's execution speed.
    If you have an alternative way to get bytes per second that does not consume time, please let me know...
    Thank you

    Make sure you don't have a lot of System.out.prints when you calculate the speed, and instead of reading just one byte at a time, you could read x bytes at a time using a byte array.
    // 'in' is the InputStream being downloaded from
    byte[] buffer = new byte[1024];
    long start = System.currentTimeMillis();
    long bytesRead = 0;
    int len;
    while (System.currentTimeMillis() - start < 2000 && (len = in.read(buffer)) != -1) {
      bytesRead += len;
      // ... do something with the read bytes.
      // Valid bytes are in the buffer from index 0 to len-1, the rest is garbage
    }
    // Math.max guards against division by zero if the stream ends immediately
    long rate = 1000 * bytesRead / Math.max(1, System.currentTimeMillis() - start);
    System.out.println("Current rate is: " + rate + " bytes/s");
    Something like that...

  • Queries Per Second

    Is there any way to get the number of queries per second on an Oracle database?
    More specifically, I want a result like this:
    Number of SELECT statements per second:
    Number of INSERT statements per second:
    Number of UPDATE statements per second:
    Number of DELETE statements per second:
    My database is:
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Release 10.2.0.4.0 - 64bit Production
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production

    Nimish Garg wrote:
    yes, our tech HOD is asking how many query executions (basically SELECTs) are being executed on a particular server from a defined website which uses that particular DB server.
    He is basically asking for SELECT statement executions per second (average). I also believe this is meaningless, but if there is any way, please provide a solution.
    There is no solution to this in terms of querying the Oracle v$ tables. There are numerous issues, including the same cursor being shared by, for example, a web session and a job process. There are no means to determine how many times the one has executed the same cursor versus the other. You can look at AWR tables and views, provided you have the appropriate Oracle license too. This will not give you session statistics that you can divide into web sessions versus other sessions, but it will contain some metrics that could be useful in determining the workload of the web app tier.
    The correct approach would be instrumentation. Instrumenting the web app code to record the metrics that are required.
    If you use Oracle Apex, then this is already available. Apex keeps track of click counts, web page performance and so on - allowing you to create a picture of how the web site is used, what the slower pages are and so on.
    Besides instrumentation, there are no real viable alternative solutions. Yes, you can add a SQL proxy between the web application and the database server, and this proxy server can record the required metrics. But such proxies are few and far between, introduce another layer of complexity, will likely have a negative impact on performance, and can introduce security complexities.
    And I'm still not convinced that having an answer to the question posed, will have the slightest benefit. For example, let's say the answer is 10 SQLs/second. What does this mean? Is it good? Is it bad? Does it say anything about potential bottleneck? Does it say anything about performance?
    No, it does not. It is meaningless.
    What is meaningful for example, is looking at the hottest query (SQL) in the Shared Pool. The one that is executed the most. And then determining if this cannot be optimised. Or looking at the Shared Pool for queries that are not using bind variables - and addressing this problem with the developers, fixing their code. Or looking at the queries generating the most I/O and determining if the amount of I/O is justified and warranted.
    I cannot see how knowing the SQL/sec of your web application tier has any meaningful information about workload or performance.
