Bytes per Second

Dear Reader,
I have written a simple download utility program in Java, in which I can count the number of bytes downloaded.
The problem I am facing is getting the bytes per second of the download in progress.
For this purpose I take the system time at fixed intervals (one or two seconds) and count the bytes downloaded in each interval.
Doing so slows down the execution speed of the program.
If you have an alternative way to get bytes per second that does not consume time, please let me know.
Thank you

Make sure you don't have a lot of System.out.print calls when you calculate the speed, and instead of reading just one byte at a time, you could read x bytes at a time using a byte array.
byte[] buffer = new byte[1024];
long start = System.currentTimeMillis();
long bytesRead = 0;
int len;
// Sample for roughly two seconds, or stop at end of stream.
while (System.currentTimeMillis() - start < 2000 && (len = in.read(buffer)) != -1) {
  bytesRead += len;
  // ... do something with the read bytes.
  // Valid bytes are in the buffer from index 0 to len-1; the rest is garbage.
}
long elapsed = Math.max(1, System.currentTimeMillis() - start); // guard against division by zero
long rate = 1000 * bytesRead / elapsed;
System.out.println("Current rate is: " + rate + " bytes/s");
Something like that..

Similar Messages

  • 1,000,000 updates per second?

    How could you configure a Coherence cluster to handle processing a million stock quotes per second? The data feed could be configured as a single app spewing out all 1,000,000/sec, or it could be many apps producing proportionately fewer ticks/sec, but in any case it's going to total a million/sec. Fractions of the feed spread among multiple physical servers sounds smartest. The quote Map.Entry would probably have a Key of String (or char[] if that's more efficient; I know the max length). The Value would be a price and a size, so maybe just those two elements, byte[]{Float,Integer}, or a Java object with Float and Integer member variables. I'd want to trigger actions based on market conditions when the planets align just right, so I'm not simply ignoring these values or pub/sub'ing them out to client apps; I'm evaluating many of them simultaneously and using them as event triggers. Is something like that remotely possible? On how much hardware?
    Thanks,
    Andrew

    Andrew,
    Using partitioning, Coherence can handle 1 million updates per second, but the big question is how many updates per second do you need on the hottest instrument at the hottest time?
    The other question is related to "the planets lining up", because that may imply a global view of the market, which becomes more difficult in a partitioned system.
    To provide a high rate of change to data in a partitioned system, the data providers (those with a large amount of data or a high rate of change) should be in the cluster (not coming in over *Extend) to eliminate one hop. To avoid blocking on the tick update from the data provider, it should locally enqueue the update. The queue servicer (a separate thread) should either coalesce whatever ticks are in the queue into a single putAll(), or if every tick needs to be recorded (i.e. all three ticks in the queue like "change to 3.5", "change to 3.55", "change to 3.6" have to be published, instead of just the latest "change to 3.6") then it would batch up everything in the queue until it hits an item that it already has in its batch, and then do a putAll().
    The use of that async publishing mode is what allows for the much higher throughput, particularly when a data provider is producing a huge number of ticks in a given period of time. You can make it even smoother (e.g. avoid outliers caused by some servers being slower) by having more local queues+services (partitioned by Coherence partition, or at the extreme by instrument). You can determine the Coherence partition using the KeyPartitioningStrategy returned from the PartitionedService for the ticks cache.
    Peace,
    Cameron Purdy | Oracle Coherence
    http://coherence.oracle.com/
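    A minimal sketch of the enqueue-then-coalesce pattern Cameron describes (assumptions of mine: the cache is reachable through a Map-style putAll, and the Tick class and key choice are illustrative, not from the thread):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical tick record; field names are illustrative.
    class Tick {
        final String symbol;
        final double price;
        Tick(String symbol, double price) { this.symbol = symbol; this.price = price; }
    }

    class TickPublisher implements Runnable {
        private final BlockingQueue<Tick> queue = new LinkedBlockingQueue<>();
        private final Map<String, Tick> cache; // stand-in for the Coherence cache

        TickPublisher(Map<String, Tick> cache) { this.cache = cache; }

        // The data provider calls this; it never blocks on the cache update.
        void enqueue(Tick t) { queue.add(t); }

        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Map<String, Tick> batch = new LinkedHashMap<>();
                    Tick t = queue.take();          // wait for at least one tick
                    batch.put(t.symbol, t);
                    // Coalesce whatever else is queued: later ticks for the same
                    // symbol overwrite earlier ones, so only the latest is published.
                    while ((t = queue.poll()) != null) {
                        batch.put(t.symbol, t);
                    }
                    cache.putAll(batch);            // one bulk update per drain
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }

    The publish-every-tick variant described above would instead stop draining as soon as it sees a symbol already in the batch, do the putAll(), and start a new batch.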

  • Logical reads per second

    I have two databases; one is a clone of the other, made a few months ago. Database A has somewhat more data, since it's the active production database, but not significantly more; perhaps 10% greater. They are on different boxes. Database A is on a Sun 280R 2-processor box. Database B is on a Dell 2950 with 2 dual-core processors. So this isn't exactly comparing apples to apples. However, when I run the same query on the two databases, I get radically different results. Against Database A, the query takes about 7 minutes. On Database B, it takes about 2 seconds. Logical reads per second on Database A reach 80,000-90,000; on Database B, they're about 3,000. There are a few configuration differences (both databases use automatic memory management):
    Parameter                       Database A    Database B
    db_file_multiblock_read_count   64            16
    log_buffer                      14290432      2104832
    open_cursors                    1250          300
    sga_max_size                    4194304000    536870912
    sga_target                      2634022912    536870912
    shared_pool_reserved_size       38587596      7340032
    The timings were taken off-hours so neither database would be busy. I'm baffled by the extreme difference in execution times. Any help appreciated!
    Thanks,
    Harry
    Edited by: harryb on Apr 8, 2009 7:26 PM

    OK, let's start here....
    Database A (TEMPOP)
    SQL> show parameter optimizer
    NAME                            TYPE     VALUE
    optimizer_dynamic_sampling      integer  2
    optimizer_features_enable       string   10.2.0.3
    optimizer_index_caching         integer  0
    optimizer_index_cost_adj        integer  100
    optimizer_mode                  string   ALL_ROWS
    optimizer_secure_view_merging   boolean  TRUE
    SQL> show parameter db_file_multi
    NAME                            TYPE     VALUE
    db_file_multiblock_read_count   integer  64
    SQL> show parameter db_block_size
    NAME                            TYPE     VALUE
    db_block_size                   integer  8192
    ===================================================
    Database B (TEMPO11)
    SQL> show parameter optimizer
    NAME                            TYPE     VALUE
    optimizer_dynamic_sampling      integer  2
    optimizer_features_enable       string   10.2.0.1
    optimizer_index_caching         integer  0
    optimizer_index_cost_adj        integer  100
    optimizer_mode                  string   ALL_ROWS
    optimizer_secure_view_merging   boolean  TRUE
    SQL> show parameter db_file_multi
    NAME                            TYPE     VALUE
    db_file_multiblock_read_count   integer  16
    SQL> show parameter db_block_size
    NAME                            TYPE     VALUE
    db_block_size                   integer  8192
    =================================================================
    Now for the query that's causing the problem:
    SELECT dsk_document_attribute.value_text inspect_permit_no,
              NVL (activity_task_list.revised_due_date,
                   activity_task_list.default_due_date)
                 inspect_report_due_date,
              agency_interest.master_ai_id agency_interest_id,
              agency_interest.master_ai_name agency_interest_name,
              get_county_code_single (agency_interest.master_ai_id)
                 parish_or_county_code,
              agency_interest_address.physical_address_line_1 inspect_addr_1,
              agency_interest_address.physical_address_line_2 inspect_addr_2,
              agency_interest_address.physical_address_line_3 inspect_addr_3,
              agency_interest_address.physical_address_municipality inspect_city,
              agency_interest_address.physical_address_state_code state_id,
              agency_interest_address.physical_address_zip inspect_zip,
              person.master_person_first_name person_first_name,
              person.master_person_middle_initial person_middle_initial,
              person.master_person_last_name person__last_name,
              SUBSTR (person_telecom.address_or_phone, 1, 14) person_phone,
              activity_task_list.requirement_id
       FROM dsk_document_attribute,
            agency_interest,
            activity_task_list,
            agency_interest_address,
            dsk_central_file dsk_aaa,
            dsk_central_file dsk_frm,
            person,
            person_telecom
       WHERE agency_interest.int_doc_id = 0
             AND agency_interest.master_ai_id =
                   agency_interest_address.master_ai_id
             AND agency_interest.int_doc_id = agency_interest_address.int_doc_id
             AND agency_interest.master_ai_id = dsk_frm.master_ai_id
             AND dsk_aaa.int_doc_id = activity_task_list.int_doc_id
             AND dsk_frm.int_doc_id = dsk_document_attribute.int_doc_id
             AND dsk_frm.doc_type_specific_code =
                   dsk_document_attribute.doc_type_specific_code
             AND dsk_frm.activity_category_code = 'PER'
             AND dsk_frm.activity_class_code = 'GNP'
             AND dsk_frm.activity_type_code IN ('MAB', 'NAB', 'REB')
             AND dsk_frm.program_code = '80'
             AND dsk_frm.doc_type_general_code = 'FRM'
             AND dsk_frm.doc_type_specific_code = 'PERSET'
             AND dsk_aaa.doc_template_id = 2000
             AND dsk_frm.master_ai_id = dsk_aaa.master_ai_id
             AND dsk_frm.activity_category_code = dsk_aaa.activity_category_code
             AND dsk_frm.program_code = dsk_aaa.program_code
             AND dsk_frm.activity_class_code = dsk_aaa.activity_class_code
             AND dsk_frm.activity_type_code = dsk_aaa.activity_type_code
             AND dsk_frm.activity_year = dsk_aaa.activity_year
             AND dsk_frm.activity_num = dsk_aaa.activity_num
             AND dsk_document_attribute.doc_attribute_code = 'PERMIT_NO'
             AND activity_task_list.requirement_id IN ('3406', '3548', '3474')
             AND activity_task_list.reference_task_id = 0
             AND NVL (activity_task_list.status_code, '$$$') <> '%  '
             AND person.master_person_id(+) =
                   f_get_gp_contact (agency_interest.master_ai_id)
             AND person.int_doc_id(+) = 0
             AND person.master_person_id = person_telecom.master_person_id(+)
             AND person.int_doc_id = person_telecom.int_doc_id(+)
             AND person_telecom.telecom_type_code(+) = 'wp';
    Here's the explain plan for Database A, where the query takes 7-8 minutes or more:
    PLAN_TABLE_OUTPUT
    | Id  | Operation                           | Name                       | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                    |                            |     1 |   253 |    34   (3)|
    |   1 |  NESTED LOOPS                       |                            |     1 |   253 |    34   (3)|
    |   2 |   NESTED LOOPS                      |                            |     1 |   224 |    32   (0)|
    |   3 |    NESTED LOOPS OUTER               |                            |     1 |   169 |    31   (0)|
    |   4 |     NESTED LOOPS OUTER              |                            |     1 |   144 |    29   (0)|
    |   5 |      NESTED LOOPS                   |                            |     1 |   122 |    27   (0)|
    |   6 |       NESTED LOOPS                  |                            |     1 |    81 |    26   (0)|
    |   7 |        NESTED LOOPS                 |                            |     1 |    48 |    19   (0)|
    |   8 |         INLIST ITERATOR             |                            |       |       |            |
    |*  9 |          TABLE ACCESS BY INDEX ROWID| ACTIVITY_TASK_LIST         |     1 |    21 |    17   (0)|
    |* 10 |           INDEX RANGE SCAN          | ACTIVITY_TASK_LIST_FK11    |   106 |       |     4   (0)|
    |* 11 |         TABLE ACCESS BY INDEX ROWID | DSK_CENTRAL_FILE           |     1 |    27 |     2   (0)|
    |* 12 |          INDEX UNIQUE SCAN          | PK_DSK_CENTRAL_FILE        |     1 |       |     1   (0)|
    |* 13 |        TABLE ACCESS BY INDEX ROWID  | DSK_CENTRAL_FILE           |     1 |    33 |     7   (0)|
    |* 14 |         INDEX RANGE SCAN            | CF_MASTER_AI_ID_IND        |     9 |       |     2   (0)|
    |  15 |       TABLE ACCESS BY INDEX ROWID   | AGENCY_INTEREST            |     1 |    41 |     1   (0)|
    |* 16 |        INDEX UNIQUE SCAN            | PK_AGENCY_INTEREST         |     1 |       |     0   (0)|
    |  17 |      TABLE ACCESS BY INDEX ROWID    | PERSON                     |     1 |    22 |     2   (0)|
    |* 18 |       INDEX UNIQUE SCAN             | PK_PERSON                  |     1 |       |     1   (0)|
    |  19 |     TABLE ACCESS BY INDEX ROWID     | PERSON_TELECOM             |     1 |    25 |     2   (0)|
    |* 20 |      INDEX UNIQUE SCAN              | PK_PERSON_TELECOM          |     1 |       |     1   (0)|
    |  21 |    TABLE ACCESS BY INDEX ROWID      | AGENCY_INTEREST_ADDRESS    |     1 |    55 |     1   (0)|
    |* 22 |     INDEX UNIQUE SCAN               | PK_AGENCY_INTEREST_ADDRESS |     1 |       |     0   (0)|
    |  23 |   TABLE ACCESS BY INDEX ROWID       | DSK_DOCUMENT_ATTRIBUTE     |     1 |    29 |     1   (0)|
    |* 24 |    INDEX UNIQUE SCAN                | PK_DSK_DOCUMENT_ATTRIBUTE  |     1 |       |     0   (0)|
    Predicate Information (identified by operation id):
       9 - filter("ACTIVITY_TASK_LIST"."REFERENCE_TASK_ID"=0 AND
                  NVL("ACTIVITY_TASK_LIST"."STATUS_CODE",'$$$')<>'%  ')
      10 - access("ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3406 OR
                  "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3474 OR "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3548)
      11 - filter("DSK_AAA"."DOC_TEMPLATE_ID"=2000 AND "DSK_AAA"."ACTIVITY_CLASS_CODE"='GNP' AND
                  "DSK_AAA"."PROGRAM_CODE"='80' AND "DSK_AAA"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  ("DSK_AAA"."ACTIVITY_TYPE_CODE"='MAB' OR "DSK_AAA"."ACTIVITY_TYPE_CODE"='NAB' OR
                  "DSK_AAA"."ACTIVITY_TYPE_CODE"='REB'))
      12 - access("ACTIVITY_TASK_LIST"."INT_DOC_ID"="DSK_AAA"."INT_DOC_ID")
      13 - filter("DSK_FRM"."ACTIVITY_CLASS_CODE"='GNP' AND "DSK_FRM"."PROGRAM_CODE"='80' AND
                  "DSK_FRM"."DOC_TYPE_SPECIFIC_CODE"='PERSET' AND "DSK_FRM"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  "DSK_FRM"."DOC_TYPE_GENERAL_CODE"='FRM' AND ("DSK_FRM"."ACTIVITY_TYPE_CODE"='MAB' OR
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"='NAB' OR "DSK_FRM"."ACTIVITY_TYPE_CODE"='REB') AND
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"="DSK_AAA"."ACTIVITY_TYPE_CODE" AND
                  "DSK_FRM"."ACTIVITY_YEAR"="DSK_AAA"."ACTIVITY_YEAR" AND
                  "DSK_FRM"."ACTIVITY_NUM"="DSK_AAA"."ACTIVITY_NUM")
      14 - access("DSK_FRM"."MASTER_AI_ID"="DSK_AAA"."MASTER_AI_ID")
      16 - access("AGENCY_INTEREST"."MASTER_AI_ID"="DSK_FRM"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST"."INT_DOC_ID"=0)
      18 - access("PERSON"."MASTER_PERSON_ID"(+)="F_GET_GP_CONTACT"("AGENCY_INTEREST"."MASTER_AI_ID
                  ") AND "PERSON"."INT_DOC_ID"(+)=0)
      20 - access("PERSON"."MASTER_PERSON_ID"="PERSON_TELECOM"."MASTER_PERSON_ID"(+) AND
                  "PERSON_TELECOM"."TELECOM_TYPE_CODE"(+)='wp' AND
                  "PERSON"."INT_DOC_ID"="PERSON_TELECOM"."INT_DOC_ID"(+))
      22 - access("AGENCY_INTEREST"."MASTER_AI_ID"="AGENCY_INTEREST_ADDRESS"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST_ADDRESS"."INT_DOC_ID"=0)
      24 - access("DSK_FRM"."INT_DOC_ID"="DSK_DOCUMENT_ATTRIBUTE"."INT_DOC_ID" AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_ATTRIBUTE_CODE"='PERMIT_NO' AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_TYPE_SPECIFIC_CODE"='PERSET')============================================================================
    Here's the explain plan output for Database B, where the query takes 2-3 seconds:
    PLAN_TABLE_OUTPUT
    | Id  | Operation                           | Name                       | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                    |                            |     1 |   289 |    39   (0)|
    |   1 |  NESTED LOOPS OUTER                 |                            |     1 |   289 |    39   (0)|
    |   2 |   NESTED LOOPS                      |                            |     1 |   260 |    37   (0)|
    |   3 |    NESTED LOOPS                     |                            |     1 |   205 |    36   (0)|
    |   4 |     NESTED LOOPS OUTER              |                            |     1 |   172 |    35   (0)|
    |   5 |      NESTED LOOPS                   |                            |     1 |   145 |    34   (0)|
    |   6 |       NESTED LOOPS                  |                            |     1 |   104 |    33   (0)|
    |   7 |        NESTED LOOPS                 |                            |     1 |    61 |    26   (0)|
    |   8 |         INLIST ITERATOR             |                            |       |       |            |
    |*  9 |          TABLE ACCESS BY INDEX ROWID| ACTIVITY_TASK_LIST         |     1 |    25 |    24   (0)|
    |* 10 |           INDEX RANGE SCAN          | ACTIVITY_TASK_LIST_FK11    |   145 |       |     4   (0)|
    |* 11 |         TABLE ACCESS BY INDEX ROWID | DSK_CENTRAL_FILE           |     1 |    36 |     2   (0)|
    |* 12 |          INDEX UNIQUE SCAN          | PK_DSK_CENTRAL_FILE        |     1 |       |     1   (0)|
    |* 13 |        TABLE ACCESS BY INDEX ROWID  | DSK_CENTRAL_FILE           |     1 |    43 |     7   (0)|
    |* 14 |         INDEX RANGE SCAN            | CF_MASTER_AI_ID_IND        |     9 |       |     2   (0)|
    |  15 |       TABLE ACCESS BY INDEX ROWID   | AGENCY_INTEREST            |     1 |    41 |     1   (0)|
    |* 16 |        INDEX UNIQUE SCAN            | PK_AGENCY_INTEREST         |     1 |       |     0   (0)|
    |  17 |      TABLE ACCESS BY INDEX ROWID    | PERSON                     |     8 |   216 |     1   (0)|
    |* 18 |       INDEX UNIQUE SCAN             | PK_PERSON                  |     1 |       |     0   (0)|
    |  19 |     TABLE ACCESS BY INDEX ROWID     | DSK_DOCUMENT_ATTRIBUTE     |     1 |    33 |     1   (0)|
    |* 20 |      INDEX UNIQUE SCAN              | PK_DSK_DOCUMENT_ATTRIBUTE  |     1 |       |     0   (0)|
    |  21 |    TABLE ACCESS BY INDEX ROWID      | AGENCY_INTEREST_ADDRESS    |     1 |    55 |     1   (0)|
    |* 22 |     INDEX UNIQUE SCAN               | PK_AGENCY_INTEREST_ADDRESS |     1 |       |     0   (0)|
    |  23 |   TABLE ACCESS BY INDEX ROWID       | PERSON_TELECOM             |     1 |    29 |     2   (0)|
    |* 24 |    INDEX UNIQUE SCAN                | PK_PERSON_TELECOM          |     1 |       |     1   (0)|
    Predicate Information (identified by operation id):
       9 - filter("ACTIVITY_TASK_LIST"."REFERENCE_TASK_ID"=0 AND
                  NVL("ACTIVITY_TASK_LIST"."STATUS_CODE",'$$$')<>'%  ')
      10 - access("ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3406 OR
                  "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3474 OR "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3548)
      11 - filter("DSK_AAA"."DOC_TEMPLATE_ID"=2000 AND "DSK_AAA"."ACTIVITY_CLASS_CODE"='GNP' AND
                  "DSK_AAA"."PROGRAM_CODE"='80' AND "DSK_AAA"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  ("DSK_AAA"."ACTIVITY_TYPE_CODE"='MAB' OR "DSK_AAA"."ACTIVITY_TYPE_CODE"='NAB' OR
                  "DSK_AAA"."ACTIVITY_TYPE_CODE"='REB'))
      12 - access("ACTIVITY_TASK_LIST"."INT_DOC_ID"="DSK_AAA"."INT_DOC_ID")
      13 - filter("DSK_FRM"."DOC_TYPE_SPECIFIC_CODE"='PERSET' AND
                  "DSK_FRM"."ACTIVITY_CLASS_CODE"='GNP' AND "DSK_FRM"."PROGRAM_CODE"='80' AND
                  "DSK_FRM"."DOC_TYPE_GENERAL_CODE"='FRM' AND "DSK_FRM"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  ("DSK_FRM"."ACTIVITY_TYPE_CODE"='MAB' OR "DSK_FRM"."ACTIVITY_TYPE_CODE"='NAB' OR
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"='REB') AND "DSK_FRM"."ACTIVITY_TYPE_CODE"="DSK_AAA"."ACTIVITY_TY
                  PE_CODE" AND "DSK_FRM"."ACTIVITY_YEAR"="DSK_AAA"."ACTIVITY_YEAR" AND
                  "DSK_FRM"."ACTIVITY_NUM"="DSK_AAA"."ACTIVITY_NUM")
      14 - access("DSK_FRM"."MASTER_AI_ID"="DSK_AAA"."MASTER_AI_ID")
      16 - access("AGENCY_INTEREST"."MASTER_AI_ID"="DSK_FRM"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST"."INT_DOC_ID"=0)
      18 - access("PERSON"."MASTER_PERSON_ID"(+)="F_GET_GP_CONTACT"("AGENCY_INTEREST"."MASTER_AI_ID
                  ") AND "PERSON"."INT_DOC_ID"(+)=0)
      20 - access("DSK_FRM"."INT_DOC_ID"="DSK_DOCUMENT_ATTRIBUTE"."INT_DOC_ID" AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_ATTRIBUTE_CODE"='PERMIT_NO' AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_TYPE_SPECIFIC_CODE"='PERSET')
      22 - access("AGENCY_INTEREST"."MASTER_AI_ID"="AGENCY_INTEREST_ADDRESS"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST_ADDRESS"."INT_DOC_ID"=0)
      24 - access("PERSON"."MASTER_PERSON_ID"="PERSON_TELECOM"."MASTER_PERSON_ID"(+) AND
                  "PERSON_TELECOM"."TELECOM_TYPE_CODE"(+)='wp' AND
                  "PERSON"."INT_DOC_ID"="PERSON_TELECOM"."INT_DOC_ID"(+))===============================================================================
    Edited by: harryb on Apr 9, 2009 3:29 PM

  • Why can't I send any more than 5 TCP messages per second on an NT system?

    Help!
    I am trying to send a message (approximately 50 bytes long) every 22 ms (45.4 Hz) to the only other machine on our network (the server). I am using the following function:
    Status = ClientTCPWrite(Conversation_Handle, Pointer_to_the_data, Size_of_the_data, Timeout_value);
    I have tried various values (0, 1, & 10) for the Timeout value, to no avail. The status always comes back saying it wrote fine (no error status). I am running this application on Windows NT 4.0. No matter what I have tried, it can only muster about 5 messages per second. I think NT might be limiting me, but I'm not sure. I am updating the screen (which ought to take longer than writing the message) every time I send, with a count of how many messages I have sent. This appears to be working at the proper rate (I am using a timer set to go off every 0.022 seconds to call the send). I am the administrator on this NT machine with all rights (as far as I can determine). I have tried using the SDK function SetPriorityClass to set the priority of this task to REALTIME_PRIORITY_CLASS, but the return value comes back 0, which means it was unsuccessful. I haven't explored this any further yet, but 22 ms should be a long time to a computer, so it ought to work without having to go to these extraordinary measures (at least that is what I think). Anybody familiar with the inner workings of NT or the TCP library, PLEASE RESPOND.
    Thanks,
    Greg Filis
    [email protected]

    22ms is a long time for your computer, but I am wondering about your network. Are you 10 base or 100 base? Switch or hub? Have you tried a direct connection with a crossover cable? Just trying to spark ideas here. I use TCP all the time but only at 500 ms intervals.
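    One more possibility worth checking, though this is my assumption and is not mentioned in the thread: a hard ceiling of roughly five small messages per second is the classic signature of Nagle's algorithm interacting with delayed ACKs (about 200 ms per exchange). I don't know whether the CVI TCP library exposes a switch for this, but for comparison, disabling it on a socket in Java looks like this (host, port, and payload are hypothetical):

    import java.io.OutputStream;
    import java.net.Socket;

    public class NoDelaySender {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("server", 5000)) {
                socket.setTcpNoDelay(true); // disable Nagle's algorithm (TCP_NODELAY)
                OutputStream out = socket.getOutputStream();
                byte[] message = new byte[50];   // stands in for the ~50-byte message
                for (int i = 0; i < 45; i++) {   // ~45 messages/second, one every 22 ms
                    out.write(message);
                    out.flush();
                    Thread.sleep(22);
                }
            }
        }
    }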

  • Slow response - but over 500 activities per second doing something!

    Hi. I'm having trouble with my iMac running slowly with lots of hangs. The system.log.7.bz2 is showing the following activity, apparently over 500 times per second!
    Apr 24 11:15:12 kelvin-08e2b175 sandboxd[7358]: mDNSResponder(19) deny file-read-data /private/var/db/com.apple.parentalcontrols.keychain.jZOTiS
    * process 7919 exceeded 500 log message per second limit - remaining messages this second discarded *
    I've checked the parental controls and they are all off except possibly for the Guest login which seems to show it as the default.
    There's also a line saying something about a slow response on refreshing the log as follows -
    Apr 25 00:31:23 kelvin-08e2b175 configd[14]: Kernel configd com.apple.powermanagement.applicationresponse.slowresponse 6960 ms
    Can anybody give me a pointer as to what could be wrong? Many thanks. Kev

    Macintosh HD:
    Capacity: 297.29 GB (297,292,267,520 bytes)
    Available: 204.04 GB (204,044,525,568 bytes)
    Writable: Yes
    File System: Journaled HFS+
    BSD Name: disk0s2
    Mount Point: /

  • Horrible video skip / lag problem - once per second in all apps!

     I built a new system last month (my first AMD) and I am having a really aggravating problem. In all games and all video playback I get an annoying skip once per second, every second. It affects sound during gameplay but not during movie or mp3 playback. It even happens with the visualization mode in Windows Media Player.
    My system is as follows: MSI K8N Neo4-F, A64 3200+ venice core, MSI 6800GT 256MB PCI-E, two sticks of Corsair valueselect DDR400 512MB each, 500 watt PS, 160GB 7200 SATA HDD. Most recent NVIDIA drivers for everything. WinXP Pro with SP2 and all updates, DX9C. Nothing overclocked, all settings standard.
    I have tried the following solutions:
    1) BIOS upgrades, started with 1.4, installed 1.5, MSI tech support gave me 1.6b2 and I installed that. No luck with any of them.
    2) Memory, installed per MSI directions, but I've tried all legal combinations, including one stick at a time. No change.
    3) full format and reinstall of WinXP. No luck.
    4) Switching between WinXP IDE drivers and NVidia drivers, with and without RAID drivers, No luck.
    5) Removal of 6800GT PCI-E card and replacing with Ancient 8MB PCI Permedia2 video card. Problem still persists.
    6) Disable onboard sound and LAN. No luck.
    7) Running Fedora Core 4 on second partition. Installed NVIDIA video drivers, tried some games. THIS WORKS! No hitch, no skip, no nothing. Framerates are noticeably slower but very stable. In WinXP I saw framerates bounce all over the place, from 230 FPS down to about 70 with one game. That same game on Linux ran smoothly at about 166 FPS with only occasional slight drops. The big FPS drops in Windows usually came right after one of the skips but didn't occur after every skip.
    Right now I'm stumped. Linux uses totally different drivers for sound, LAN and SATA support. Some of those drivers don't fully use the Nforce4 chipset's features, maybe that's part of the difference.

    Thanks TireSmoke:
    I had found that sticky, but I took your advice and went through it in detail last night.  Lots of great info, fixes, tweaks and tools; sadly none of them fixed my problem.  The lag problem most people are reporting is not really like the weird problem I am having.  I have tried the recommended fixes with absolutely no change in my system's behavior.
    I am beginning to suspect a faulty motherboard component.
    Russ_XP:
    I think you are correct about fast writes.  I googled the heck out of that last night and couldn't find any reference to enabling or disabling fast writes on PCI-E.
    The drive is SATA-1.  The Neo4-F is not SATA-2 enabled (there is a hack for it though).  From memory I think it's a Western Digital WD1600-something, 7200 RPM drive.  I've tried it on both SATA buses and tried disabling the unused bus in BIOS.
    I'm pretty sure I can dig up an old PATA drive somewhere and give that a try.
    Gpalmer:
    True enough, and I don't have these problems under Fedora.  Sadly this is a cross-platform game development box, I need both XP and Fedora working.
    Black_God:
    Nope, this is a clean install.  Although I wonder, could any of the built in XP update and security tools be causing this?  I have disabled Windows firewall and virus protection monitoring.

  • Downloads per Second slower than normal

    Hi, I've moved from the ADSL Max profile to ADSL2+; however my downloads are averaging around 200-400 KB/s. My connection speed is just over 7 Mb and my downloads have previously been around 850 KB/s.
    Any reason to why this is?
    I don't do much heavy downloading; most of my connection is used either for gaming or for downloading the odd app on the PlayStation, and it's really slow at downloading anything.

    Sorry, my computer crashed. Here it is:
    FAQ
    Test 1 comprises two tests.
    1. Best Effort Test: provides background information.
       Download Speed: 4339 Kbps (range 0-7150 Kbps)
       Download speed achieved during the test was 4339 Kbps.
       For your connection, the acceptable range of speeds is 2000-7150 Kbps.
       Additional Information:
       Your DSL Connection Rate: 7192 Kbps (downstream), 1060 Kbps (upstream)
       IP Profile for your line is 6345 Kbps.
    2. Upstream Test: provides background information.
       Upload Speed: 821 Kbps (range 0-1060 Kbps)
       Upload speed achieved during the test was 821 Kbps.
       Additional Information:
       Upstream Rate IP profile on your line is 1060 Kbps.
    We were unable to identify any performance problem with your service at this time.
    It is possible that any problem you are currently experiencing, or had previously experienced, may have been caused by traffic congestion on the Internet or by the server you were accessing responding slowly.
    If you continue to encounter a problem with a specific server, please contact the administrator of that server in the first instance.

  • Acquire, display, and write data at 50 samples per second

    I have a VI running on a PXI which samples data using two 4220s (all 4 channels) and one 6031 (only 6 channels).  I am acquiring data at 100 samples per second, but only need to write the data out at 50 samples per second.  The data needs to be displayed at a minimum of 10 samples per second.  The problem is that the VI cannot get 50 samples per second written to the file; it writes about 20 to 30 samples per second.
    I don't know if the issue is the display of the data holding up the writing at 50 samples per second, or if it is something else in the VI.  I have moved the writing of the data outside the while loop, but this did not help enough to reach 50 samples/sec.
    Would it be better to change the waveform data types to dynamic waveforms?  Would this increase the speed of operations?
    Galen
    Attachments:
    ATM_FrictionTests_v1.2.vi ‏375 KB

    Galen,
    Looking at your vi, I would recommend writing to your file in a different way.  The function you are using is actually opening, writing, and then closing the file every time you call it.  This greatly increases the amount of resources being used.  Take a look at the Cont Acq to Spreadsheet File.vi example and note that the file is only being opened and closed once.  The data is being written to the file during execution of the program, and then closed when the app is done running.  The example is done in traditional DAQ but you should be able to do something similar with DAQmx.  Try this and let me know if it helps. 
    Regards,
    LA
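    The open-once/write-many/close-once pattern LA describes, sketched in Java for illustration (the file name and sample source are hypothetical, and the original fix would of course be done with LabVIEW file I/O primitives):

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;

    public class LogSamples {
        public static void main(String[] args) throws IOException {
            // Open the file once, outside the acquisition loop.
            try (BufferedWriter out = new BufferedWriter(new FileWriter("samples.csv"))) {
                for (int i = 0; i < 3000; i++) {      // stand-in acquisition loop
                    double sample = readSample();      // hypothetical data source
                    if (i % 2 == 0) {                  // keep every other sample: 100 Hz in, 50 Hz out
                        out.write(Double.toString(sample));
                        out.newLine();
                    }
                }
            } // the file is closed once here, not on every write
        }

        private static double readSample() { return Math.random(); }
    }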

  • How many of these objects should I be able to insert per second?

    I'm inserting these objects using default (not POF) serialization with putAll(myMap). I receive about 4000 new quotes per second to put in the cache. I try coalescing them to various degrees but my other apps are still slowing down when these inserts are taking place. The applications are listening to the cache where these inserts are going using CQCs. The apps may also be doing get()s on the cache. What is the ideal size for the putAll? If I chop up myMap into batches of 100 or 200 objects then it increases the responsiveness of other apps but slows down the overall time to complete the putAll. Maybe I need a different cache topology? Currently I have 3 storage enabled cluster nodes and 3 proxy nodes. The quotes go to a distributed-scheme cache. I have tried both having the quote inserting app use Extend and becoming a TCMP cluster member. Similar issues either way.
    Thanks,
    Andrew
    import java.io.Serializable;

    public class Quote implements Serializable {
        public char type;
        public String symbol;
        public char exch;
        public float bid = 0;
        public float ask = 0;
        public int bidSize = 0;
        public int askSize = 0;
        public int hour = 0;
        public int minute = 0;
        public int second = 0;
        public float last = 0;
        public long volume = 0;
        public char fastMarket; // askSource for NBBO
        public long sequence = 0;
        public int lastTradeSize = 0;

        public String toString() {
            return "type='" + type + "'\tsymbol='" + symbol + "'\texch='" + exch + "'\tbid=" +
                    bid + "\task=" + ask +
                    "\tsize=" + bidSize + "x" + askSize + "\tlast=" + lastTradeSize + " @ " + last +
                    "\tvolume=" + volume + "\t" +
                    hour + ":" + (minute < 10 ? "0" : "") + minute + ":" + (second < 10 ? "0" : "") + second +
                    "\tsequence=" + sequence;
        }

        public boolean equals(Object object) {
            if (this == object) {
                return true;
            }
            if (!(object instanceof Quote)) {
                return false;
            }
            final Quote other = (Quote) object;
            if (!(symbol == null ? other.symbol == null : symbol.equals(other.symbol))) {
                return false;
            }
            if (exch != other.exch) {
                return false;
            }
            return true;
        }

        public int hashCode() {
            final int PRIME = 37;
            int result = 1;
            result = PRIME * result + ((symbol == null) ? 0 : symbol.hashCode());
            result = PRIME * result + (int) exch;
            return result;
        }

        public Object clone() throws CloneNotSupportedException {
            Quote q = new Quote();
            q.type = this.type;
            q.symbol = this.symbol;
            q.exch = this.exch;
            q.bid = this.bid;
            q.ask = this.ask;
            q.bidSize = this.bidSize;
            q.askSize = this.askSize;
            q.hour = this.hour;
            q.minute = this.minute;
            q.second = this.second;
            q.last = this.last;
            q.volume = this.volume;
            q.fastMarket = this.fastMarket;
            q.sequence = this.sequence;
            q.lastTradeSize = this.lastTradeSize;
            return q;
        }
    }

    Well, firstly, I'm surprised you are using "float" fields in a financial object, but that's a different debate... :)
    Second, why aren't you using POF? Much more compact from my testing; better performance too.
    I've inserted similar objects (but with BigDecimal for the numeric types) and seen insert rates in the 30-40,000/second range (single machine, one node). Obviously you take a whack when you start the second node (backups being maintained, plus that node is probably on a separate server, so you are introducing network latency). Still, I would have thought 10-20,000/second would be easily doable.
    What are the thread counts on the services you are using? I've found this to be quite a choke point on high-throughput caches. What stats are you getting back from JMX for the Coherence components? What stats from the server (CPU, memory, swap, etc.)? What spec of machines are you using? Which JVM are you using? How is the JVM configured? What are the GC times looking like? Are your CQC queries using indexes? Are your get()s using indexes, or just using keys? Have you instrumented your own code to get some stats from it? Are you doing excessive logging? So many variables here... Very difficult to say what the problem is with so little info/insight into your system.
    Also, maybe look at using a multi-threaded "feeder" client program for your trades. That's what I do (as well as upping the thread count on the cache service) and it seems to run fine (with smaller batch sizes per thread, say 50). We "push" as well as fully "process" trades (into positions) at a rate of about 7-10,000/sec on a 4-server set-up (two cache storage nodes per server; two proxies per server). Machines are dual-socket, quad-core 3 GHz Xeons. The clients use CQC and get()s, similar to your set-up.
    Steve
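    A minimal sketch of the batched, multi-threaded feeder Steve describes (my illustration; it assumes the cache is reachable as a Map, and the batch size and thread count are placeholders to tune):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class QuoteFeeder {
        private static final int BATCH_SIZE = 50; // per-thread batch, as suggested above

        // Splits the incoming map into batches and pushes them from a thread pool,
        // so no single huge putAll() monopolizes the cache service.
        public static void feed(Map<String, Quote> cache, Map<String, Quote> quotes, int threads) {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            Map<String, Quote> batch = new HashMap<>();
            for (Map.Entry<String, Quote> e : quotes.entrySet()) {
                batch.put(e.getKey(), e.getValue());
                if (batch.size() == BATCH_SIZE) {
                    final Map<String, Quote> toSend = batch;
                    pool.submit(() -> cache.putAll(toSend));
                    batch = new HashMap<>();
                }
            }
            if (!batch.isEmpty()) {
                final Map<String, Quote> toSend = batch;
                pool.submit(() -> cache.putAll(toSend));
            }
            pool.shutdown();
        }
    }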

  • 10,000 Records Per Second (In EJB 3.0)

    Hi all,
    I have some mission-critical tasks in my project. Is it possible to persist 10,000 records per second?
    1. AS - JBoss Application Server 4.0.4GA
    2. Database - Oracle 10g 10.2.0.1
    3. EJB - 3.0 Framework
    4. OS - SunOS 5.10
    5. Server - Memory: 16G phys mem, 31G swap, 16 CPUs
    I know that I need performance.
    Here is my configuration for performance:
    1. JVM Config Into JBoss
    JAVA_OPTS="-server -Xmx3168m -Xms2144m -Xmn1g -Xss256k -d64 -XX:PermSize=128m -XX:MaxPermSize=256m
       -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
        -XX:ParallelGCThreads=20 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
        -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=31 -XX:+AggressiveOpts
        -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+PrintTenuringDistribution"
    2. Also I configure my database.xml file:
    <?xml version="1.0" encoding="UTF-8"?>
    <datasources>
      <xa-datasource>
        <jndi-name>XAOracleDS</jndi-name>
        <track-connection-by-tx/>
        <isSameRM-override-value>false</isSameRM-override-value>
        <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class>
        <xa-datasource-property name="URL">jdbc:oracle:thin:@192.168.9.136:1521:STR</xa-datasource-property>
        <xa-datasource-property name="User">SRVPROV</xa-datasource-property>
        <xa-datasource-property name="Password">SRVPROV</xa-datasource-property>
        <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
        <min-pool-size>50</min-pool-size>
        <max-pool-size>200</max-pool-size>    
        <metadata>
             <type-mapping>Oracle9i</type-mapping>
          </metadata>
      </xa-datasource>
      <mbean code="org.jboss.resource.adapter.jdbc.vendor.OracleXAExceptionFormatter"
             name="jboss.jca:service=OracleXAExceptionFormatter">
        <depends optional-attribute-name="TransactionManagerService">jboss:service=TransactionManager</depends>
      </mbean>
    </datasources>
    3. Also I have one simple Stateless Session Bean:
    @Stateless
    @Remote(UsageFasade.class)
    public class UsageFasadeBean implements UsageFasade {
        @PersistenceContext(unitName = "CustomerCareOracle")
        private EntityManager oracleManager;

        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public long createUsage(UsageObject usageObject, UserContext context)
                throws UserManagerException, CCareException {
            try {
                oracleManager
                        .createNativeQuery("INSERT INTO USAGE "
                                + " (ID, SESSION_ID, SUBSCRIBER_ID, RECDATE, STARTDATE, APPLIEDVERSION_ID, CHARGINGPROFILE_ID, TOTALTIME, TOTALUNITS, IDENTIFIERTYPE_ID, IDENTIFIER, PARTNO, CALLTYPE_ID, USAGETYPE, APARTY, BPARTY, CPARTY, IMEI, SPECIFICCALLTYPE, APN, SOURCELOCATION, SMSCADDRESS, MSC_ID, ENDREASON, USAGEORIGIN, BILL_ID, CONTRACT_ID) "
                                + " VALUES(SEQ_USAGE_ID.NEXTVAL, NULL, NULL, SYSDATE, SYSDATE, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL) ")
                        .executeUpdate(); // without executeUpdate() the INSERT never runs
                return 1;
            } catch (Exception e) {
                // exception handling was cut off in the original post
                return 0;
            }
        }
    }
    3. And on the client side I have 200 threads, each of which tries to call this method 50 times.
    My result is that I can persist 10,000 records in 20 seconds without Hibernate; with Hibernate I got worse results :(
    Also, I hear that it is a good idea to use a JDBC 3.0 driver for performance.
    I downloaded the newest Oracle JDBC jar file from the Oracle site:
    http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/htdocs/jdbc_10201.html
    Is this jar file a JDBC 3.0 driver?
    Is there any Hibernate performance configuration?
    Is there any more performance tuning for JBoss or EJB with entity beans?
    Can anybody help me? Or is there any doc which can help me?
    Regards,
    Paata,
    Message was edited by:
    paata

    What makes you think that your database, just the database (with the box that it is on) can handle that rate?
    What makes you think that your network can handle that?
    While this is going on is this the ONLY traffic that will be on the network?
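    One approach the replies hint at but don't spell out (my suggestion, not from the thread): batch the inserts through plain JDBC instead of issuing one native query per EJB call. A minimal sketch, assuming a DataSource for the same schema (column list shortened for the sketch):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.sql.DataSource;

    public class UsageBatchWriter {
        // Inserts 'count' rows in batches of 1000 using a single prepared statement.
        public static void writeBatch(DataSource ds, int count) throws Exception {
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO USAGE (ID, RECDATE, STARTDATE) "
                       + "VALUES (SEQ_USAGE_ID.NEXTVAL, SYSDATE, SYSDATE)")) {
                con.setAutoCommit(false);
                for (int i = 1; i <= count; i++) {
                    ps.addBatch();
                    if (i % 1000 == 0) {
                        ps.executeBatch();  // one round trip per 1000 rows
                    }
                }
                ps.executeBatch();          // flush the remainder
                con.commit();
            }
        }
    }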

  • Newbie trying to understand the frame/fields per second concept on video

    Do video camera shutter speeds (1/60sec) reflect an interlaced field or a full frame comprised of two interlaced fields per part of a second.
    I suspect that a shutter setting of 1/60sec means it's not a full frame of video but is just an odd or even lined interlace field. Meaning that if I want to shoot 30fps I need to keep my shutter setting on 1/60sec.
    But if that's really the case, then what am I exactly shooting per second if the actual NTSC frame rate is 29.97 fps? I mean, the first 29 frames can easily be divided into two fields each, but what about the remaining .97 frame? How can you divide that into two fields of interlaced video lines?
    Forgive my ignorance but books have a bad habit of not answering back when you don't understand something they say.
    iMac Intel Duo-Core; Intel Mac mini single-core   Mac OS X (10.4.6)  
    iMac Intel Duo-Core; Intel Mac mini single-core   Mac OS X (10.4.6)  

    The shutter speed is not relevant. It can be either field-based or progressive. The frame rate is not dependent on the shutter speed.
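    For the arithmetic behind the ".97" (standard NTSC background, added here for clarity; it is not from the reply above): the NTSC rates are exact ratios, 29.97 ≈ 30000/1001 frames per second and 59.94 ≈ 60000/1001 = 2 × 30000/1001 fields per second. Every frame still carries exactly two fields; the non-integer rate just means frame boundaries don't line up with wall-clock seconds, not that some fraction of a frame has to be split.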

  • Calculating Frames Per Second Accurately

    I tried searching the forums for suggestions or code used to calculate FPS accurately... But was ultimately unsuccessful.
    I think I have a pretty basic understanding of how it works... but I'm still left wondering if there's a better... or correct way of attacking the problem.
    What I started out doing was taking a pre-cycle time sample using System's currentTimeMillis method, running the main parts of my program, and then taking a second time sample. I subtracted the pre-cycle sample from the post-cycle sample to get the elapsed time... and it occurred to me that currentTimeMillis is not exactly reliable. As it says in the API when discussing that method: "For example, many operating systems measure time in units of tens of milliseconds."
    I then noticed the nanoTime method and decided to use that.
    It quite honestly seems to work perfectly for what I'm trying to accomplish.
    I pretty much used the same steps as before.
    But now my problem is that I want to limit the amount of frames per second to 60.
    I decided that I should try using the Thread class's sleep(long millis, int nanos) method.
    It seemed like it would work... but to my dismay, it did not.
    The milliseconds, for the most part, were correctly timed, but still not always. The nanoseconds even less so... but I knew that the nanoseconds would be less reliable, so I decided to use microseconds. Using microseconds doesn't even work that well.
    So I wondered if there was an even better way... maybe a more 'manual' approach to fixing my problem.
    I would greatly appreciate any input/knowledge on the matter.

    It's quite simple. Windows has an API call that can set the timer precision (it's all based on some interrupt interval; I forget the precise details). This precision is system-wide, so if one application sets it, it is immediately active for all other applications running at that time. Applications can only lower it, so if you set it to 1 ms then some other application cannot force it to 5 ms, for example.
    Now here comes JVM bug number two.
    As said, the interval differs per system; sometimes it is 10 ms, sometimes it is 15 ms. The JVM wrongly assumes it is always 10 ms, however. Do a test: try to sleep for some time that is a multiple of 10 (without the long-running-thread hack active); you'll find that the precision still sucks. That is because in this specific case the JVM does NOT change the system-wide timing precision. But if you sleep for any number of milliseconds that is not a multiple of 10, it will actually temporarily set the precision to 1 ms.
    So the rule is: as long as one thread goes to sleep the precision is set to 1ms. When the last sleeping thread wakes up, it is reset to what it was. Then it also doesn't matter for what amount of time you make your real thread sleep as the long sleeping thread will not be sleeping for an amount that is a multiple of 10, thus forcing the precision to 1ms.
    Now say that you don't do the long sleeping thread hack and you make your own thread sleep for say 9ms, switching to 1ms precision temporarily. This behavior makes it so that sampling the passed time (before and after sleeping) can be imprecise; you'll find that most of the time you'll get sampling that matches the number of milliseconds you slept and sometimes it jumps to 10/15ms depending on the granularity of your system. This is a concurrency problem; sometimes the precision is reset before you get a chance to sample the current time.
    So to recap: to get 100% accuracy with System.currentTimeMillis(), you need to keep a thread sleeping at all times so the precision stays at 1 ms.
    And then finally we come to bug #3, which is a problem in Windows itself: rapidly changing the precision (which happens when you make a single thread sleep for short intervals) can screw with the system clock. I don't know if this problem still exists in later iterations of Windows, but it is again a reason to do the long sleeping thread hack. Because this is a known issue I still call this a bug in the JVM because of the way they implemented the precision timer activation, which can trigger the problem in Windows. The command line switch mentioned in the above bug report SHOULD have fixed that... but you know, facepalm bug #4.
    But at the end of the day: even if at least 4 bugs can be named regarding precise timing in Java under Windows, the root of all evil is still the way timing is implemented in Windows itself making life too difficult for the poor JVM devs. What were the MS devs thinking at the time?
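    A minimal sketch of the "long sleeping thread" hack plus a 60 FPS cap built on System.nanoTime() (my illustration of the ideas above; the sleep-granularity caveats discussed in the thread still apply):

    public class FrameLimiter {
        public static void main(String[] args) throws InterruptedException {
            // The hack described above: a daemon thread that sleeps (for a duration
            // that is not a multiple of 10 ms) keeps the Windows timer at 1 ms precision.
            Thread timerHack = new Thread(() -> {
                try {
                    Thread.sleep(Long.MAX_VALUE);
                } catch (InterruptedException ignored) {
                }
            });
            timerHack.setDaemon(true);
            timerHack.start();

            final long frameNanos = 1_000_000_000L / 60; // target: 60 frames per second
            while (true) {
                long frameStart = System.nanoTime();
                // ... render one frame here ...
                long sleepNanos = frameNanos - (System.nanoTime() - frameStart);
                if (sleepNanos > 0) {
                    // Sleep in whole milliseconds only; sub-millisecond sleeps are unreliable.
                    Thread.sleep(sleepNanos / 1_000_000L);
                }
            }
        }
    }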

  • Final Cut Pro X Image Sequence Export missing frames per second option?

    I am using the trial version of Final Cut Pro X.  I can export an image sequence but it will only allow me to do so at 30 frames per second -every single frame! 
    So a 10 min movie takes half an hour to export 20,000+ frames that no one on earth has time to look thru.
    In my old Final Cut Express I was able to choose 1 or 2 frames per second, which was just right.
    What am I missing?
    Do I need to buy Quicktime Pro or Compressor to allow me to export image sequences without exporting every frame?
    Why would Apple even have an image sequence export if it only allows you to export every frame, or is that only in the trial version?
    Thank you

    I found a free option for you, if you don't already have Compressor 4.
    In FCPX create your movie. Then SHARE as a Master File.
    Get MPEG Streamclip, which is a free app available here.
    http://www.squared5.com/svideo/mpeg-streamclip-mac.html
    Drag your Master File into MPEG Streamclip.
    In MPEG Streamclip, select FILE/EXPORT TO OTHER FORMATS.
    In FORMAT: choose IMAGE SEQUENCE
    In OPTIONS, choose your frame rate.

  • How to create a $ per second report.

    I have a scenario where I have several stores all posting sales in real time to a BAM object.
    The fields are:
    StoreID "string"
    PurchaseDate "datetime"
    PurchaseAmount "integer"
    I'd like to create a report that will display the sales per second for a given store over a defined period of time.
    As an example:
    If the report is run and 60 seconds is entered as a report parameter for the period of time, and StoreID is set to "1" (also via parameter), then I'd like to know the average sales per second over the last 60 seconds for StoreID 1.
    Obviously this report would be continually updating giving me a real time gauge indicating when sales per second has dropped or spiked.
    A real example:
    Lets say there are 3 sales in the last 60 seconds for StoreID 1 as follows -
    1,3/20/1977 3:08:08 PM,100
    1,3/20/1977 3:08:14 PM,160
    1,3/20/1977 3:08:33 PM,100
    If I ran the report at 3/20/1977 3:08:34 PM there would only be 3 entries which fall within the last 60 seconds (report parameter) for StoreID 1 (report parameter). Averaging out the sales total would give me (100+160+100)/60 = $6/second.
    Can anyone point me in the right direction on how to solve this?
    D
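    For illustration, the windowed average itself is straightforward outside BAM (a hypothetical sketch of mine; class and field names are not from the thread):

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class SalesRate {
        private record Sale(long timestampMillis, int amount) {}

        private final Deque<Sale> window = new ArrayDeque<>();

        public void record(long timestampMillis, int amount) {
            window.addLast(new Sale(timestampMillis, amount));
        }

        // Average dollars per second over the last 'seconds' seconds.
        public double perSecond(long nowMillis, int seconds) {
            long cutoff = nowMillis - seconds * 1000L;
            while (!window.isEmpty() && window.peekFirst().timestampMillis() < cutoff) {
                window.removeFirst();   // drop sales older than the window
            }
            long total = 0;
            for (Sale s : window) {
                total += s.amount();
            }
            return (double) total / seconds;  // e.g. (100+160+100)/60 = 6
        }
    }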

    Not sure I understand what you want, but I set up a page with 2 items on one row, 3 on the next. http://apex.oracle.com/pls/apex/f?p=23834:30 is that the sorta thing you want to do? - control where the items appear?
    If so - you can just use drag and drop layout, or on the items settings (Displayed settings), specify whether or not it appears on a new line or not.
    Ta,
    Trent

  • CO Pulse Frequency doesn't actually generate 1 Pulse per Second?

    Hello all,
    I have a VI laid out in the attachment below.  I seem to have a lack of understanding of how to program this VI; I just don't understand what could possibly be going wrong.
    The VI is very basic.  The frequency has been set to 1, and the units are Hertz.  To me, this means that the program should send one pulse to my linear actuator ONCE per second.  I have a simple pulse counter set up in the VI as well to count how many pulses are actually being sent (using the DAQ Assistant).  Why is it that when I run the program, I get around 300 pulses per second?  Raising the value makes it go slightly faster, but lowering the value doesn't really make it go any slower.  There seems to be no real correlation between the input frequency and the actual number of pulses that are sent.
    I simply want a program where I can input "1 pulse per second", or however many pulses I want per second, and have the card send exactly that many.  Where do I start?  I have a whole program written out and ready to go, but this basic concept completely eludes me.
    Thanks,
    James
    Attached:  1) Picture of concept that I'm completely baffled about  2) VI of my program which said concept is being used in
    Attachments:
    Voltage vs Distance SM v1.1.vi ‏475 KB
    COPulseFreq.jpg ‏75 KB

    In the simple image, you are running a loop as fast as possible (there's no timing mechanism).  Inside that loop, you configure the pulse task, start it, then immediately stop and clear it.  You need to create the channel and configure the timing outside the while loop, before it starts, and you need to clear the task outside the loop as well, after the while loop terminates.  Depending on what you want to do, you may be able to move the task start outside the loop as well, or just let the task auto-start.
    You'll need to restructure your VI a bit.  I can't tell if you want to clear the task after each step, or just change the frequency.  If it's just the frequency, you can use DAQmx Write to change it; if you need to start and stop the task, you'll need some logic to do that once each time you want to restart (you may get an error if you start a task that's already running).  There's no need for "Is Task Done?" since you're not using the output for anything.
    EDIT: Also, it is always a good idea to put a wait inside loops that execute indefinitely.  Otherwise they will spin as fast as possible, consuming all available processor time and preventing other code from running.  If you configure your counter task properly the loop timing won't affect the pulse rate (because that's done in hardware) but there's no need to run the loop that fast.
