ReadString() gets "Time limit exceeded to complete operation" error

I have a device named Time Electronics 5075. I am trying to use the SimpleReadWrite.2010 example in National Instruments\NI-488.2\Examples\DotNET4.0\SimpleReadWrite\cs to communicate with the device.
I can open the connection and send commands to the device using this code:
device = new Device((int)boardIdNumericUpDown.Value,(byte)primaryAddressNumericUpDown.Value,(byte)currentSecondaryAddress);
device.IOTimeout = TimeoutValue.T100s;
device.Write("X0/A1/F0{13}");
 But I am stuck at 
stringReadTextBox.Text = device.ReadString();
I get the error "Time limit exceeded to complete operation."
I am not sure whether I should add
device.Write("T/D{13}");
before the device.ReadString() or not, but I get the same error in both cases.
Some info: Visual Studio 2010, Measurement Studio 2013, NI-488.2, Windows 7 Pro.
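For reference, here is the bare write/read sequence as a minimal stand-alone sketch. The {13} in the commands above presumably stands for a carriage return (ASCII 13); in a plain Write call it is sent as literal text, so the sketch writes "\r" explicitly (swap in "\n" if a line feed is what the 5075 really wants). The board number, primary address and T10s timeout are placeholders; adjust them to the real setup.
using System;
using NationalInstruments.NI4882;

class GpibReadSketch
{
    static void Main()
    {
        Device device = new Device(0, 16);          // board 0, primary address 16 -- placeholders, adjust to your instrument
        device.IOTimeout = TimeoutValue.T10s;       // a shorter timeout makes debugging quicker

        try
        {
            device.Write("X0/A1/F0\r");             // setup command, no reply expected
            device.Write("T/D\r");                  // query that should make the instrument talk
            Console.WriteLine(device.ReadString()); // still times out if the instrument has nothing to send
        }
        catch (GpibException ex)
        {
            // A timeout here usually means the command produced no response,
            // the terminator is wrong, or the GPIB address does not match.
            Console.WriteLine(ex.Message);
        }
        finally
        {
            device.Dispose();                       // release the GPIB handle
        }
    }
}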
Any idea?

Thank you Curt_C,
I have attached a screen capture from NI I/O Trace. I just open the session, send the two commands T\n and D\n to the device, and call ReadString. The red entries are errors on the reads. The last entry is the session close.
I think \n is the correct terminator. It works with most commands; I only get stuck on ReadString.
Kind regards,
Thang
PS: I have also attached the GPIB Interface Properties for more info.
Attachments:
NI IO Trace_01.png (54 KB)
GPIB Interface Properties.png (54 KB)

Similar Messages

  • Time limit Exceeded problem

    Hi,
    I have a file-to-IDoc scenario.
    I have a file which generates many IDocs (the file size is at most 5 MB).
    I am aware SAP recommends limiting the file size to at most 5 MB. Do they mean the size of the payload? Because the size increases in XI as well.
    I just want to process such a file quickly. For that I am employing queue prioritization at both sender and receiver. Would IDoc packaging help at the receiver?
    Besides, I am getting 'time limit exceeded' in SM58, although I can see my IDocs in IDX5.
    Here are some of the things I have checked:
    1) Parameters in SXMB_ADM -> Integration Engine Configuration -> Specific Configuration: tried various tuning parameters, but it didn't work.
    2) In the partner profile in the receiving CRM system, changed the setting from "trigger immediately" to background task (I am not certain how relevant that is).
    If it is a problem with the CRM system's ability to accept mass IDocs (because a small number is acceptable, i.e. under 1 MB file size), then the settings should be changed in CRM so that a large number of IDocs get successfully posted.
    I have posted this issue several times. Kindly help: what parameters should I tweak in XI, or for that matter in the receiving CRM system, to take care of this problem?

    Hi,
    Regarding processing of large files:
    In general, extra sizing for XI memory consumption is not required. The total memory of the SAP Web Application Server
    should be sufficient except in the case of large messages (>1 MB).
    To determine the memory consumption for processing large messages, you can use the following rules of thumb:
    Allocate 3 MB per process (for example, the number of parallel messages per second may be an indicator).
    Allocate 4 kB per 1 kB of message size in the asynchronous case, or 9 kB per 1 kB of message size in the synchronous case.
    Example: asynchronous concurrent processing of 10 messages with a size of 1 MB requires 70 MB of memory:
    (3 MB + 4 * 1 MB) * 10 = 70 MB (see the small calculation sketch at the end of this reply).
    With mapping or content-based routing, where an internal representation of the message payload may be necessary, the memory requirements can be much higher (possibly exceeding 20 kB per 1 kB of message, depending on the type of mapping).
    The size of the largest message thus depends mainly on the size of the available main memory. On a normal 32-bit operating system, there is an upper boundary of approximately 1.5 to 2 GB per process, limiting the largest possible message size.
    Please check these links:
    /people/michal.krawczyk2/blog/2006/06/08/xi-timeouts-timeouts-timeouts (original link is broken)
    Input Flat File Size Determination
    /people/shabarish.vijayakumar/blog/2006/04/03/xi-in-the-role-of-a-ftp
    data packet size  - load from flat file
    How to upload a file of very huge size on to server.
    Regards
    Chilla
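    PS: as a rough illustration, here is the rule of thumb above written as a tiny self-contained calculation (a sketch only; the 3 MB, 4x and 9x figures are the ones quoted above):
    using System;
    class XiSizingSketch
    {
        // 3 MB per process plus 4x the message size (asynchronous) or 9x (synchronous).
        static double EstimateMemoryMb(int parallelMessages, double messageSizeMb, bool synchronous)
        {
            double sizeFactor = synchronous ? 9.0 : 4.0;
            return parallelMessages * (3.0 + sizeFactor * messageSizeMb);
        }
        static void Main()
        {
            // 10 asynchronous messages of 1 MB each -> 70 MB, matching the example above.
            Console.WriteLine(EstimateMemoryMb(10, 1.0, false));
        }
    }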

  • Time Limit exceeded error in R & R queue

    Hi,
    We are getting a "time limit exceeded" error in the R & R queue when we try to extract the data for a site.
    The error is happening with the message SALESDOCGEN_O_W. It is observed that whenever the time limit error is encountered, the usual solution is to run the job in the background. But in this case, is there any possibility to run the particular subscription for sales documents in the background?
    Any pointers on this would be of great help.
    Thanks in advance,
    Regards,
    Rasmi.

    Hi Rasmi
    I suppose that the usual answer would be to increase the timeout for the R&R queue.
    We have increased the timeout on ours to 60 mins and that takes care of just about everything.
    The other thing to check would be the volume of data that is going to each site for SALESDOCGEN_O_W. These are pretty big BDocs and the sales force will not thank you for huge ConnTrans times.
    If you have a subscription for sales documents by business partner, then it is worth seeing if the business partner subscription could be made more intelligent to fit your needs.
    Regards
    James

  • Mail to one particular user is getting bounced - Command time limit exceeded

    I have about 100 mail clients. Since yesterday morning, I've had one user whose incoming emails have been getting bounced. I've run "mailbfr -m user" without success. The server is current on updates and has been restarted. All other users are functioning correctly. Notes:
    - IMAP account
    - This user has a 4 GB Sent folder. All other folders are under 2 GB.
    - This user has recently started syncing Notes from a Blackberry.
    THE SMTP LOG:
    Oct 7 08:26:52 myserver postfix/pipe[488]: D1BCE11CE230: to=<[email protected]>, relay=cyrus, delay=1000, delays=0.01/0/0/1000, dsn=5.3.0, status=bounced (Command time limit exceeded: "/usr/bin/cyrus/bin/deliver")
    A PORTION OF THE BOUNCE EMAIL SENT TO SENDER:
    This is the mail system at host myserver.com.
    I'm sorry to have to inform you that your message could not
    be delivered to one or more recipients. It's attached below.
    For further assistance, please send mail to postmaster.
    If you do so, please include this problem report. You can
    delete your own text from the attached returned message.
    The mail system
    <[email protected]>: Command time limit exceeded:
    "/usr/bin/cyrus/bin/deliver"
    Reporting-MTA: dns; myserver.com
    X-Postfix-Queue-ID: AD79711CE1DD
    X-Postfix-Sender: rfc822; [email protected]
    Arrival-Date: Thu, 7 Oct 2010 08:07:57 -0700 (PDT)
    Final-Recipient: rfc822; [email protected]
    Original-Recipient: rfc822;[email protected]
    Action: failed
    Status: 5.3.0
    Diagnostic-Code: x-unix; internal software error
    Any help?

    Solved. This person was in the process of moving Blackberry notes to the IMAP account. There was an interruption during the transfer phase and the server's Notes folder became corrupted somehow. When Mailbfr hit that particular folder, it stopped the rebuild process.
    I removed the Notes folder and ran Mailbfr again. The Notes folder was recreated and the rebuild function completed.
    Mail delivery has been restored to this user.

  • Message getting stuck in XBQO queue - Time limit exceeded

    Hi All,
    We have a BPM scenario in our project (on PI 7.0 SP18), where a bundle of PEXR2002 payment IDocs is received as a single flat file. This file is then consumed by the BPM, which splits the message into multiple payments using a Java mapping.
    However, when we get an IDoc file larger than 5 MB (more than 500 IDocs), the message gets stuck in the XBQO queue and eventually gives a SYSFAIL with the message "Time limit exceeded". Could you please let us know if you have encountered a similar issue and are aware of a possible solution?
    Any pointers to this will be really appreciated.
    Thanks & Regards,
    ROSIE SASIDHARAN

    Hi Rosie,
    1) Go to SXMB_ADM -> Integration Engine Configuration -> parameter EO_MSG_SIZE_LIMIT -> possible values 0 - 2,097,151 (KB).
    The parameter EO_MSG_SIZE_LIMIT enables serial processing of messages of a particular size. This applies to messages with the quality of service Exactly Once (EO). If the message is larger than the parameter value, the message is processed in a separate queue.
    2) Go to SXMB_ADM -> Integration Engine Configuration -> parameter HTTP_TIMEOUT -> possible values n seconds, where n is a whole number.
    The parameter specifies the timeout for HTTP connections (the time between two data packages at line level). This value overrides the system profile parameter icm/server_port_n (for example, icm/server_port_0 : PROT=HTTP, PORT=50044, TIMEOUT=900). If you do not set the parameter HTTP_TIMEOUT, or if you set it to 0, the setting from the system profile parameter is used.
    See SAP Note 335162 for the SYSFAIL issue.
    Hope this helps.
    Regds,
    Pinangshuk.

  • Time Limit Exceeded in File - Java - IDoc

    Hello,
    I have an interface which reads a text file in XI and uses a Java mapping to produce IDocs in an R/3 system synchronously; the interface had been running fine for more than 2 years. Since the text file is larger than 20 MB, we split it into smaller text files (2 MB each) for easier processing. While processing, each file holds the outbound queue (SMQ2) until it is completed. After 2 years of using this interface, it started giving errors in the queue (Time Limit Exceeded) for files larger than 1 MB.
    Any hints?

    Swarup,
    Increasing the RFC adapter timeout in Visual Admin has solved the problem,
    thank you for that.
    But what I was looking for is a way to find out the root cause, since it ran for 2 years with a 300,000 ms timeout without a problem. Now I must increase this timeout, which means my system is not performing well!
    Agasthuri,
    As I mentioned, it was running fine with 2 MB files; now even 1 MB cannot be processed.
    Thanks again.

  • SMQ2 Time limit exceeded

    Hi Experts,
    IDoc-to-IDoc scenario: some entries in SMQ2 are showing "time limit exceeded". I try to unlock the queue every time I get the error; after that the queue goes into Running status for some time and then gives the same error again with status "SYSFAIL". Queue registration is also done. We are facing this problem in production.
    Messages for all other interfaces are processing fine; only one interface is affected, and even for that interface some IDocs are processed. I assumed that the message size is too large, so the message is not processed and the mapping step is not executed.
    See below an example of one message which was not processed by XI:
    <SAP:MessageSizePayload>467228</SAP:MessageSizePayload>
    <SAP:MessageSizeTotal>479041</SAP:MessageSizeTotal>
    <SAP:PayloadSizeRequest>467228</SAP:PayloadSizeRequest>
    We restarted the complete XI server after a kernel upgrade, and we have been getting this problem since then. Can anybody suggest how we can solve it?

    Hi
    The size of the messages is also not huge; we tested it in the test system and it takes less than one minute to process the message. We have also checked the gateway and JCo connections, which are working fine. The problem is still there, as I explained in my first post.
    Every time it gives "time limit exceeded" after unlocking the queue.
    Are there any other suggestions to resolve this issue?
    Thanks in Advance
    Neeru

  • TRFC error "time limit exceeded"

    Hi Prashant,
    There has been no reply to my thread below...
    Hi Prashant,
    We are facing this issue quite often, as I stated in my previous threads.
    As you mentioned some steps, I have already followed all of them, and I furnished the job log and tRFC details for reference long back.
    I posted this issue one month back with full details and the workaround we temporarily follow to execute this element successfully.
    A number of times I have stated that I need to know the root cause and a permanent solution, as the log clearly states that it is due to stuck LUWs (source system).
    Even after executing the LUWs manually, the status is the same (request still running and the status is in yellow).
    I have no idea why this is happening to this element in particular, as we have sufficient background jobs.
    Do we need to change some settings, like increasing or decreasing the data package size, or something else to resolve the issue permanently?
    For you, I am giving the details once again:
    Data flow: Standard DS --> PSA --> Data Target (DSO)
    In the process monitor screen the request is in yellow. No clear error message is given here; under "Update" it shows 0 records updated and a missing message in yellow, but apart from this the status against each log entry is green.
    Job log: job is finished, with a TRFCSSTATE=SYSFAIL message.
    tRFCs: time limit exceeded.
    What I do to resolve the issue: set the request to green and manually update from PSA to the data target, and the job completes successfully.
    Can you please tell me how to proceed in this scenario to resolve the issue, as I have been waiting for an answer for a long time now?
    Until now I haven't got any clue; whatever I have investigated, I get replies up to that point and no further update beyond it.
    with regards,
    musai

    Hi,
    You have mentioned that you have already checked the LUWs, so the problem is not there.
    In the source system, go to WE02 and check for IDocs of type RSRQST and RSINFO. If any of them are in yellow status, process them in BD87. If the processed IDoc is of type RSRQST, it will create the job in the source system for carrying out the data load. If it is of type RSINFO, it will finish the data load on the SAP BI side as well.
    If any are in red, check the reason.

  • Short dump "Time limit exceeded" when searching for Business Transactions

    Hello Experts,
    We migrated from SAP CRM 5.2 to SAP CRM 7.0. After migration, our business transaction search (quotation, sales order, service order, contract etc) ends with the short dump "Time limit exceeded" in class CL_CRM_REPORT_ACC_DYNAMIC, method DATABASE_ACCESS. The select query is triggered from line 5 of this method.
    Number of Records:
    CRMD_ORDERADM_H: 5,115,675
    CRMD_ORDER_INDEX: 74,615,914
    We have tried the following so far, but the search is still either slow or times out:
    1. The DB team checked the Oracle parameters and confirmed they are fine. They also checked the health of the indices on table CRMD_ORDER_INDEX, and the indices are healthy.
    2. Created additional indices on CRMD_ORDERADM_H and CRMD_ORDER_INDEX. After the creation of these indices, some of the searches (without any criteria) work, but it takes more than a minute to fetch 1 or 2 records.
    3. An ST05 trace confirmed that the selection on CRMD_ORDER_INDEX takes the most time. It takes about 103 seconds to fetch 2 records (max hits + 1)
    4. If we specify search parameters, say for example a date or status, then again we get a short dump with the message "Time limit exceeded".
    5. Observed that only if a matching index is available for the WHERE clause, the results are returned (albeit slowly). In the absence of an index, we get the dump.
    6. Searched for notes and there are no notes that could help us.
    Any idea what is causing this issue and what we can do to resolve this?
    Regards,
    Bala

    Hi Michael,
    Thanks. Yes we considered the note 1527039. None of the three scenarios mentioned in the note helped us. But we ran CRM_INDEX_REBUILD to check if the table CRMD_ORDER_INDEX had a problem. That did not help us either.
    The business users told us that they mostly search using the date fields or Object ID. We did not have any problem with search by Object ID. So we created additional indices to support search using the date fields.
    Regards,
    Bala

  • Error while running query "time limit exceeding"

    While running a query I am getting the error "time limit exceeded". Please help.

    Hi Devi,
    Use the following links:
    queries taking long time to run
    Query taking too long
    with hopes
    Raja Singh

  • Time Limit exceeded while running in RSA3

    Hi BW Experts,
    I am trying to pull 1 lakh (100,000) records from CRM to the BI system.
    Before scheduling, I am trying to execute the extraction in RSA3, and I am getting the error message "Time Limit Exceeded".
    Please suggest why this is happening.
    Thanks in advance.
    Thanks,
    Ram.

    Hi,
    Because of the huge amount of data, the extraction cannot finish and display all the records within the stipulated time, so it is better to use a selection option, for example by document type or something else, and then add up all the documents. In any case, on the BW side we run this job in the background, so there is no problem there. If you want to see all the records at once, you can ask your Basis people to extend the time limit.
    Thanks & Regards
    sathish

  • Time Limit Exceeded while executing Proxy Program

    Hi all,
    We are frequently facing the Time Limit Exceeded problem in the R/3 system while executing a proxy program for large payloads (approx. 5-7 MB). Sometimes we are able to restart the message successfully and sometimes we have to delete these messages. How can we resolve this issue?
    Thanks,
    Mayank

    Hi Joerg,
    We are getting this error in the inbound queue of the R/3 system. Also, this is an asynchronous call, so there is no chance of any communication interruption between the SAP systems. From the PI system, the message is successfully passed to the R/3 system, and Time Limit Exceeded appears in the R/3 system's inbound queue (SMQ2). Is it possible that the timeout happens within the R/3 system?
    Thanks,
    Mayank

  • Time Limit exceeded Error while updating huge number of records in MARC

    Hi experts,
    I have a interface requirement in which third party system will send a big file say.. 3 to 4MB file into SAP. in proxy we
    used BAPI BAPI_MATERIAL_SAVEDATA to save the material/plant data. Now, because of huge amount of data the SAP Queues are
    getting blocked and causing the time limit exceeded issues. As the BAPI can update single material at time, it will be called as many materials
    as we want to update.
    Below is the part of code in my proxy
    * Call the BAPI to update the safety stock value.
        CALL FUNCTION 'BAPI_MATERIAL_SAVEDATA'
          EXPORTING
            headdata   = gs_headdata
    *       clientdata  =
    *       clientdatax =
            plantdata  = gs_plantdata
            plantdatax = gs_plantdatax
          IMPORTING
            return     = ls_return.
        IF ls_return-type <> 'S'.
          CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
          MOVE ls_return-message TO lv_message.
    *     Populate the error table and process the next record.
          CALL METHOD me->populate_error
            EXPORTING
              message = lv_message.
          CONTINUE.
        ENDIF.
    Can anyone please let me know the best possible approach for this issue?
    Thanks in Advance,
    Jitender

    Hi Raju,
    Use the following routine to get the fiscal year/period from the calendar day.
    * Data definition:
    DATA: l_Arg1 TYPE RSFISCPER,
          l_Arg2 TYPE RSFO_DATE,
          l_Arg3 TYPE T009B-PERIV.
    * Calculation:
    l_Arg2 = TRAN_STRUCTURE-POST_DATE.   " this is the date that you have to supply
    l_Arg3 = 'V3'.
    CALL METHOD CL_RSAR_FUNCTION=>DATE_FISCPER(
      EXPORTING I_DATE    = l_Arg2
                I_PER     = l_Arg3
      IMPORTING E_FISCPER = l_Arg1 ).
    RESULT = l_Arg1.
    Hope it will solve your problem!
    Please assign points.
    Best Regards,
    SG

  • Program terminated: Time limit exceeded, ABAP performance, max_wprun_time

    Hi,
    I am running an ABAP program, and I get the following short dump:
    Time limit exceeded. The program has exceeded the maximum permitted runtime and has therefore been terminated. After a certain time, the program terminates to free the work process for other users who are waiting. This is to stop work processes being blocked for too long by:
    - endless loops (DO, WHILE, ...),
    - database accesses with large result sets,
    - database accesses without an appropriate index (full table scan),
    - database accesses producing an excessively large result set.
    The maximum runtime of a program is set by the profile parameter "rdisp/max_wprun_time". The current setting is 10000 seconds. After this, the system gives the program a second chance. During the first half (>= 10000 seconds), a call that blocks the work process (such as a long-running SQL statement) can occur. While the statement is being processed, the database layer will not allow it to be interrupted. However, to stop the program terminating immediately after the statement has been successfully processed, the system gives it another 10000 seconds. Hence the maximum runtime of a program is at least twice the value of the system profile parameter "rdisp/max_wprun_time".
    Last error logged in SAP kernel
    Component............ "NI (network interface)"
    Place................ "SAP-Dispatcher ok1a11cs_P06_00 on host ok1a11e0"
    Version.............. 34
    Error code........... "-6"
    Error text........... "connection to partner broken"
    Description.......... "NiPRead"
    System call.......... "recv"
    Module............... "niuxi.c"
    Line................. 1186
    Long-running programs should be started as background jobs. If this is not possible, you can increase the value of the system profile parameter "rdisp/max_wprun_time".
    Program cannot be started as a background job. We have now identified two options to solve the problem:
    - Increase the value of the system profile parameter "rdisp/max_wprun_time"
    - Improve the performance of the following SELECT statement in the program:
    SELECT ps_psp_pnr ebeln ebelp zekkn sakto FROM ekkn
    INTO CORRESPONDING FIELDS OF TABLE i_ekkn
    FOR ALL ENTRIES IN p_lt_proj
    WHERE ps_psp_pnr = p_lt_proj-pspnr
    AND ps_psp_pnr > 0.
    In EKKN we have 200 000 entries.
    Are there any other options we could try?
    Regards,
    Jarmo

    Thanks for your help; this problem seems to be quite challenging...
    In EKKN we have 200 000 entries. 199 999 of them have the value 00000000 in column ps_psp_pnr, and only one has a value which identifies a WBS element.
    I believe the problem is that there isn't any WBS element in PRPS with the value 00000000. I guess that is the reason why EKKN is read sequentially.
    I also tried the following, but it doesn't help at all. Before the SELECT statement is executed, there are 594 entries in the internal table p_lt_proj_sel:
      DATA p_lt_proj_sel LIKE p_lt_proj OCCURS 0 WITH HEADER LINE.
      p_lt_proj_sel[] = p_lt_proj[].
    * Drop the initial (00000000) WBS numbers so FOR ALL ENTRIES only uses real keys.
      DELETE p_lt_proj_sel WHERE pspnr = 0.
      SORT p_lt_proj_sel BY pspnr.
      SELECT ps_psp_pnr ebeln ebelp zekkn sakto FROM ekkn
        INTO CORRESPONDING FIELDS OF TABLE i_ekkn
        FOR ALL ENTRIES IN p_lt_proj_sel
        WHERE ps_psp_pnr = p_lt_proj_sel-pspnr.
    I also checked that the index P in EKKN is active.
    Can I somehow force the optimizer to use the index?
    Regards,
    Jarmo

  • IDoc on outbound side from XI - time limit exceeded error

    Hi,
    I have a File-to-IDoc scenario in which I create a lot of IDocs (20,000) in a single push. I'm getting a "time limit exceeded" error on the outbound side, with a red flag in SXMB_MONI. How do I increase this limit parameter, and where: in PI or in R/3?
    Thank you,
    Olian

    Hi,
    Check the threads below:
    Re: XI timeout error
    /people/michal.krawczyk2/blog/2006/06/08/xi-timeouts-timeouts-timeouts
    Regards,
    Srini
