Buffer size problem

SQL*Plus, Oracle 8.1.7, Win2000Pro
I have a script that spools the output to a file, but I fill the buffer or something like that. If I limit the quantity of characters sent to the file, everything works fine.
ORA-20000: ORU-10027: buffer overflow, limit of 2000 bytes
Where can I enlarge the buffer size?

set serveroutput on size 1000000
Richard

Similar Messages

  • PLEASE can an AE from NI take a look at my problem. Sound input reads behave in a strange manner when the buffer size is larger than 2x the number of samples to read.

    On my computer I have discovered some strange behavior when reading data from the sound card. When the buffer size is 2x the samples to read, everything is as expected. But since I read the sound card 10 times per second, I feel a 0.2 second buffer is too small. I am using XP, and XP is not an RTOS, so with a buffer set to 0.2 seconds I may lose data. Therefore I set the buffer size (number of samples/ch on Sound Input Configure.vi) to be in the range of 2 seconds. The result is that when reading from Sound Input.vi, a read often takes more than 0.1 second; on my computer it is often 500 ms. Then the next 5 reads follow with almost zero interval. I do not lose data, but on my front panel the graphs look like a very early silent movie. This error was introduced in LabVIEW 8.x. To be honest I think the LabVIEW 7.x sound system was much better in many ways.
    But before I point any finger at NI, other people have to verify the behavior I experience. I have made an example showing this error. It is a modified version of the "Continuous Sound Input.vi" example. When the "buffer in seconds" control is set to 0.2, everything works OK. Changing this to a larger number will produce the hiccup mentioned above; the larger the number in this control, the larger the hiccup. Is there any way to fix this? My solution up to now has been to use free third-party software (http://www.zeitnitz.de/Christian/index.php?sel=waveio), but I guess it will soon be outdated and may not work with newer Windows versions.
    Any help at all will be appreciated 
    And yes, I have the most up-to-date version of DirectX. I also see this in LabVIEW 2009, of which I have a trial version. The VI I have made is in 8.6.
    Attachments:
    Continuous Sound Input with timing.vi (23 KB)

    macaba wrote:
    If you take a moving average of the 0.2s buffer vs. the 3s buffer at an update rate of 10, then they are the same (just under 100 ms), so the average refresh rate is the same. I agree that it is odd behaviour that the time between sound reads goes to zero quite a lot and then takes a long time once in a while (presumably to fill the buffer).
    I guess it goes to zero because it is reading data from the buffer, so it does not have to wait for data from the sound card. The mysterious thing is the periodic delay. You are also correct in saying that the average timing is correct. And in my application I have no data loss.
    If you search for sound in this forum you will find that many people have reported trouble with the sound system.

  • Svrmgrl problem: illegal redo buffer size

    Hello experts,
    I installed Oracle 8.1.5 on Linux 6.1 successfully.
    Then I rebooted the computer and went into svrmgrl to start the DB.
    I can connect using "connect internal"
    But I cannot start the db using "startup;"
    I get this error message,
    ORA-07304: ksmcsg: illegal redo buffer size.
    I have no idea what to do now.
    Please help me out.
    And many many thanks in advance !!!

    Check the log_buffer parameter in init.ora; the redo buffer size must be a multiple of the OS block size.
    If the OS block size is 512, a value of 10*512 = 5120 will be valid.
    Rounding it off to 5100 would generate this error.

  • Linux Serial NI-VISA - Can the buffer size be changed from 4096?

    I am communicating with a serial device on Linux, using LV 7.0 and NI-VISA. About a year and a half ago I had asked customer support if it was possible to change the buffer size for serial communication. At that time I was using NI-VISA 3.0. In my program the VISA function for setting the buffer size would send back an error of 1073676424, and the buffer would always remain at 4096, no matter what value was input into the buffer size control. The answer to this problem was that the error code was just a warning, letting you know that you could not change the buffer size on a Linux machine, and 4096 bytes was the pre-set buffer size (unchangeable). According to the person who was helping me: "The reason that it doesn't work on those platforms (Linux, Solaris, Mac OSX) is that is it simply unavailable in the POSIX serial API that VISA uses on these operating systems."
    Now I have upgraded to NI-VISA 3.4 and I am asking the same question. I notice that an error code is no longer sent when I input different values for the buffer size. However, in my program, the bytes returned from the device max out at 4096, no matter what value I input into the buffer size control. So, has VISA changed, and it is now possible to change the buffer size, but I am setting it up wrong? Or, have the error codes changed, but it is still not possible to change the buffer size on a Linux machine with NI-VISA?
    Thanks,
    Sam

    The buffer size still can't be set, but it seems that we are no longer returning the warning. We'll see if we can get the warning back for the next version of VISA.
    Thanks,
    Josh

  • Synchronize AO/AI buffered data graph and measure data larger than the buffer size

    I am trying to measure the response time (around 1 ms) of the pressure drop indicated by AI channel 0 when AO channel 0 gives a negative single pulse to the unit under test (valve). DAQ board: Keithley KPCI-3108, LabVIEW version: 6.1, OS: Win2000 Professional.
    My problem is that I get a different timed graph between the AI and AO channels every time I run my program, except the first time, when I get a real-time graph. I tried to decrease the buffer size below the max buffer size of the DAQ board (2048 samples), but it still shows a non-real-time graph from the AI channel; it seems it was still reading old data from the buffer when AO writes the new buffer data, that is my guess. In my program, the AO and AI parts are separate: the AO write buffer is in a while loop while the AI read is not. Would that be a problem? Or is it something else?
    Also, I am trying to measure data much larger than the board buffer size limit. Is it possible to make the measurement by modifying the program?
    I really appreciate any of your help. Thank you very much!
    Best,
    Jenna

    Jenna,
    You can modify the X-axis of a chart/graph in LabVIEW to display real-time. I have included a link below to an example program that illustrates how to do this.
    If you are doing a finite, buffered acquisition, make sure that you are always reading everything from the buffer for each run. In other words, if you set a buffer size of 5000, then make sure you are reading 5000 scans (set number of scans to read to 5000). This will ensure you are reading new data every time you run your program. You could always put the AI Read VI within a loop and read a smaller number from the buffer until the buffer is empty (monitor the scan backlog output of the AI Read VI to see how many scans are left in the buffer).
    You can set a buffer size larger than the FIFO buffer of the hardware. The buffer size you set in LabVIEW is actually a software buffer within your computer's memory. The data is acquired by the hardware, stored temporarily in the on-board FIFO, transferred to the software buffer, and then read in LabVIEW.
    Are you trying to create a TTL square wave with the analog output of the DAQ device? If so, the DAQ device has counters that can generate a highly accurate digital pulse as well. Just a suggestion. LabVIEW has a variety of shipping examples that are geared toward using counters (find examples>>DAQ>>counters). I hope this helps.
    Real-Time Chart Example
    http://venus.ni.com/stage/we/niepd_web_display.DISPLAY_EPD4?p_guid=B45EACE3E95556A4E034080020E74861&p_node=DZ52038&p_submitted=N&p_rank=&p_answer=&p_source=Internal
    Regards,
    Todd D.
    National Instruments
    Applications Engineer

  • Network Stream Error -314340 due to buffer size on the writer endpoint

    Hello everyone,
    I just wanted to share a somewhat odd experience we had with the network stream VIs.  We found this problem in LV2014 but aren't aware if it is new or not.  I searched for a while on the network stream endpoint creation error -314340 and couldn't come up with any useful links to our problem.  The good news is that we have fixed our problem but I wanted to explain it a little more in case anyone else has a similar problem.
    The specific network stream error -314340 should seemingly occur if you are attempting to connect to a network stream endpoint that is already connected to another endpoint or in which the URL points to a different endpoint than the one trying to connect. 
    We ran into this issue on attempting to connect to a remote PXI chassis (PXIe-8135) running LabVIEW real-time from an HMI machine, both of which have three NICs and access different networks.  We have a class that wraps the network stream VIs and we have deployed this class across four machines (Windows and RT) to establish over 30 network streams between these machines.  The class can distinguish between messaging streams that handle clusters of control and status information and also data streams that contain a cluster with a timestamp and 24 I16s.  It was on the data network streams that we ran into the issue. 
    The symptoms of the problem were that if we would attempt to use the HMI computer with a reader endpoint specifying the URL of the writer endpoint on the real-time PXI, the reader endpoint would return with an error of -314340, indicating the writer endpoint was pointing to a third location. Leaving the URL on the writer endpoint blank and running in real-time interactive mode or as a startup VI made no difference. However, the writer endpoint would return without error and eventually catch a remote endpoint destroyed error. To make things more interesting, if you specified the URL on the writer endpoint instead of the reader endpoint, the connection would be made as expected.
    Ultimately, through experimenting with it, we found that the buffer size of the create writer endpoint for the data stream was causing the problem, and that we had fat-fingered the constants for this buffer size. Also, pre-allocating or allocating the buffer on the fly made no difference. We imagine that it may be because we are using a complex data type (a cluster with an array inside it), and it can be difficult to allocate a buffer for this data type. We guess that the issue may be that when the reader endpoint establishes the connection to a writer with a large buffer size specified, the writer endpoint ultimately times out somewhere in the handshaking routine that is hidden below the surface.
    I just wanted to post this so others would have a reference if they run into a similar situation and again for reference we found this in LV2014 but are not aware if it is a problem in earlier versions.
    Thanks,
    Curtiss

    Hi Curtiss!
    Thank you for your post!  Would it be possible for you to add some steps that others can use to reproduce/resolve the issue?
    Regards,
    Kelly B.
    Applications Engineering
    National Instruments

  • Where can I change the buffer size for LKM File to Oracle (EXTERNAL TABLE)?

    Hi all,
    I had a problem with the buffer size of the "LKM File to Oracle (EXTERNAL TABLE)", as follows:
    2801 : 72000 : java.sql.SQLException: ORA-12801: error signaled in parallel query server P000
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-29400: data cartridge error
    KUP-04020: found record longer than buffer size supported, 524288, in D:\OraHome_1\oracledi\demo\file\PARTIAL_SHPT_FIXED_NHF.dat
    Do you know where I can change the buffer size?
    Remarks: the size of the file is ~2 MB.
    Tao

    Hi,
    The behavior is explained in Bug 4304609.
    You will encounter ORA-29400 and KUP-04020 errors if the RECORDSIZE clause in the access parameters for the ORACLE_LOADER access driver is larger than 10 MB and you are loading records larger than 10 MB. This means there is another limitation on the read size of a record, termed the granule size: if the default granule size is less than RECORDSIZE, it limits the size of the read buffer to the granule size.
    Use the _px_xtgranule_size parameter to change the size of the granule to a number larger than the size specified for the read buffer. You can use the query below to determine the current size of the granule.
    SELECT KSPFTCTXPN PARAMETER_NUMBER,
           KSPPINM PARAMETER_NAME,
           KSPPITY PARAMETER_TYPE,
           KSPFTCTXVL PARAMETER_VALUE,
           KSPFTCTXDF IS_DEFAULT,
           KSPPIFLG MODIFICATION_FLAG,
           KSPFTCTXVF VALUE_FLAG
      FROM X$KSPPI X, X$KSPPCV2 Y
     WHERE (X.INDX+1) = KSPFTCTXPN
       AND KSPPINM LIKE '%_px_xtgranule_size%';
    There is no 'ideal' or recommended value for the _px_xtgranule_size parameter; it is safe to increase it to work around this particular problem. You can set it using an ALTER SESSION/SYSTEM command:
    SQL> alter system set "_px_xtgranule_size"=10000;
    Thanks,
    Sutirtha

  • Changing input buffer size slows down simulated device?!?

    Hi,
    My basic problem is a 200279 buffer overflow error.
    For a quick fix I increased the buffer size using DAQmxSetBufInputBufSize, but now the signal coming from the simulated device is much slower.
    How can that be?
    Thanks!

    Hi,
    Buffer overflow errors occur when data is written to the buffer faster than it is being read off of it. Buffer underflow errors occur when data is being read off the buffer faster than new data is being added. In order to avoid either of these, a general rule of thumb for NI devices is to acquire around 1/10th of a second of data. For example, for a sample rate of 100 Hz, set samples to read at 10 samples.
    Is it possible for you to decrease the sample rate?

  • IMP-00020: long column too large for column buffer size (22)

    Hi friends,
    I have exported (through the conventional path) a complete schema from Oracle 7 (SCO Unix platform).
    Then I transferred the export file from the Unix server to a laptop (Windows platform)
    and tried to import this file into Oracle 10.2 on Windows XP.
    (Database configuration of Oracle 10g:
    User tablespace 2 GB
    Temp tablespace 30 MB
    Rollback segments of 15 MB each
    Undo tablespace of 200 MB
    SGA 160 MB
    PGA 16 MB)
    All the tables imported successfully except 3 tables which have around 1 million rows each.
    The error messages that come during import for these 3 tables are as follows:
    IMP-00020: long column too large for column buffer size (22)
    IMP-00020: long column too large for column buffer size (7)
    The main point here is that none of the 3 tables has a LONG or TIMESTAMP column (only VARCHAR/NUMBER columns are there).
    To solve the problem I tried the following options:
    1. Increased the buffer size up to 20480000/30720000.
    2. COMMIT=Y INDEXES=N (in this case it does not import the complete tables).
    3. First exported the table structures only and then the data.
    4. Created the tables manually and tried to import the tables.
    But all efforts failed; I am still getting the same errors.
    Can someone help me with this issue?
    I will be grateful to all of you.
    Regards,
    Harvinder Singh
    [email protected]

    Thanks, but this note is for older releases, 7.3 to 8.0...
    In my case both the export and the import were made on an 11.2 database.
    I didn't use Data Pump because we use the same processes for different releases of Oracle, and some of them do not support Data Pump. By the way, shouldn't EXP/IMP work anyway?

  • Doing buffered event counting using Count Buffered Edges.vi, what is the max buffer size allowed?

    I'm currently using Count Buffered Edges.vi to do buffered event counting with the following settings:
    Source: internal timebase, 100 kHz, 10 µs for each count
    Gate: a function generator sending in a 50 Hz signal (for testing purposes only), period of 0.02 s
    The max internal buffer size that I can allocate is only about 100~300. Whenever I change both the internal buffer size and counts to read to a higher value, this VI doesn't seem to function well. I need to have a buffer size of at least 2000.
    1. Is it possible to have a buffer size of 2000? What is the problem causing the wrong counter value?
    2. Also note that the max internal buffer size varies with the frequency of the signal sent to the gate; why is this so? E.g. the buffer size gets smaller as the frequency decreases.
    3. I get funny responses and counter values when the internal buffer size and counts to read are not set to the same value. Why is this so? Is it a must to set both values the same?
    Thanks and best regards
    lyn

    Hi,
    I have tried the same example and used a 100 Hz signal on the gate. I increased the buffer size to 2000 and did not get any errors. The buffer size does not get smaller when increasing the frequency of the gate signal; simply, the number of counts gets smaller when the gate frequency becomes larger. The buffer size must be able to contain the number of counts you want to read; otherwise, the VI might not function correctly.
    Regards,
    RamziH.

  • How do you increase the Streaming buffer size in iTunes 10

    When I try to listen to the radio in iTunes, it plays for 30 seconds and then says "rebuffering stream".
    I just upgraded to an iMac, and my old dinosaur G4 does not have this problem.
    I've read things saying to increase the streaming buffer size, but I don't see how to do this anywhere.
    J

    I don't think that is possible with newer iTunes versions.

  • Smartcardio ResponseAPDU buffer size issue?

    Greetings All,
    I've been using the javax.smartcardio API to interface with smart cards for around a year now, but I've recently come across an issue that may be beyond me. My issue is that whenever I try to extract a large data object from a smart card, I get a "javax.smartcardio.CardException: Could not obtain response" error.
    The data object I’m trying to extract from the card is around 12KB. I have noticed that if I send a GETRESPONSE APDU after this error occurs I get the last 5 KB of the object but the first 7 KB are gone. I do know that the GETRESPONSE dialogue is supposed to be sent by Java in the background where the responses are concatenated before being sent as a ResponseAPDU.
    At the same time, I am able to extract this data object from the card whenever I use other APDU tools or APIs, where I have oversight of the GETRESPONSE APDU interactions.
    Is it possible that the ResponseAPDU runs into buffer size issues? Is there a known workaround for this? Or am I doing something wrong?
    Any help would be greatly appreciated! Here is some code that will demonstrate this behavior:
    /* test program */
    import java.io.*;
    import java.util.*;
    import javax.smartcardio.*;
    import java.lang.String;
    public class GetDataTest{
        public GetDataTest(){}
        public static void main(String[] args){
            Card card = null;
            try{
                byte[] aid = {(byte)0xA0, 0x00, 0x00, 0x03, 0x08, 0x00, 0x00};
                byte[] biometricDataID1 = {(byte)0x5C, (byte)0x03, (byte)0x5F, (byte)0xC1, (byte)0x08};
                byte[] biometricDataID2 = {(byte)0x5C, (byte)0x03, (byte)0x5F, (byte)0xC1, (byte)0x03};
                //get the first terminal
                TerminalFactory factory = TerminalFactory.getDefault();
                List<CardTerminal> terminals = factory.terminals().list();
                CardTerminal terminal = terminals.get(0);
                //establish a connection with the card
                card = terminal.connect("*");
                CardChannel channel = card.getBasicChannel();
                //select the card app (select and verify are helper methods defined elsewhere)
                select(channel, aid);
                //verify pin
                verify(channel);
                /* trouble occurs here:
                 * the error occurs only when extracting a large data object (~12KB) from the card.
                 * works fine when used on other data objects, e.g. works with biometricDataID2
                 * (data object ~1KB) and not biometricDataID1 (data object ~12KB in size) */
                //send a "GetData" command
                System.out.println("GETDATA Command");
                ResponseAPDU response = channel.transmit(new CommandAPDU(0x00, 0xCB, 0x3F, 0xFF, biometricDataID1));
                System.out.println(response);
            }catch(Exception e){
                System.out.println(e);
            }finally{
                try{
                    if(card != null) card.disconnect(false);
                }catch(CardException ce){
                    System.out.println(ce);
                }
            }
        }
    }
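    For reference, below is a minimal sketch (not part of the program above) of driving the GET RESPONSE dialogue manually with javax.smartcardio, so the concatenation is under your own control. It assumes a card that signals remaining data with 61xx status words; the readAll helper name is made up for illustration.
    import java.io.ByteArrayOutputStream;
    import javax.smartcardio.*;
    class ApduUtil {
        // Transmit a command, then keep issuing GET RESPONSE (INS 0xC0) while the
        // card reports more data with SW1 = 0x61, concatenating all the pieces.
        static byte[] readAll(CardChannel channel, CommandAPDU cmd) throws CardException {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ResponseAPDU resp = channel.transmit(cmd);
            out.write(resp.getData(), 0, resp.getData().length);
            while (resp.getSW1() == 0x61) {
                // SW2 holds the number of bytes still available; 0x00 means 256 more
                int le = (resp.getSW2() == 0x00) ? 0x100 : resp.getSW2();
                resp = channel.transmit(new CommandAPDU(0x00, 0xC0, 0x00, 0x00, le));
                out.write(resp.getData(), 0, resp.getData().length);
            }
            return out.toByteArray();
        }
    }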

    Hello Tapatio,
    I was looking for a solution to my problem and I found your post; first, I hope you get your answer.
    I am a beginner in card development and am now using javax.smartcardio. I can select the file I would like to use,
    but the problem is: I can't read from it, and I don't know exactly how to use the hex codes.
    I'm working with a CCID Smart Card Reader as the card reader and Payflex as the smart card.
    try {
        TerminalFactory factory = TerminalFactory.getDefault();
        List<CardTerminal> terminals = factory.terminals().list();
        System.out.println("Terminals: " + terminals);
        CardTerminal terminal = terminals.get(0);
        if (terminal.isCardPresent())
            System.out.println("card present");
        else
            System.out.println("card absent");
        Card card = terminal.connect("*");
        CardChannel channel = card.getBasicChannel();
        ResponseAPDU resp;
        // getHexString is a hex-formatting helper defined elsewhere
        // this part selects the DF
        byte[] b = new byte[]{(byte)0x11, (byte)0x00};
        CommandAPDU com = new CommandAPDU((byte)0x00, (byte)0xA4, (byte)0x00, (byte)0x00, b);
        resp = channel.transmit(com);
        System.out.println("Result: " + getHexString(resp.getBytes()));
        // this part selects the data file
        b = new byte[]{(byte)0x11, (byte)0x05};
        com = new CommandAPDU((byte)0x00, (byte)0xA4, (byte)0x00, (byte)0x00, b);
        System.out.println("CommandAPDU: " + getHexString(com.getBytes()));
        resp = channel.transmit(com);
        System.out.println("Result: " + getHexString(resp.getBytes()));
        // attempt at a READ RECORD (INS 0xB2) command
        byte[] b1 = new byte[]{(byte)0x11, (byte)0x05};
        com = new CommandAPDU((byte)0x00, (byte)0xB2, (byte)0x00, (byte)0x04, b1, (byte)0x0E);
        // the problem is that I don't know how to build a CommandAPDU to read from the file
        System.out.println("CommandAPDU: " + getHexString(com.getBytes()));
        resp = channel.transmit(com);
        System.out.println("Result: " + getHexString(resp.getBytes()));
        card.disconnect(false);
    } catch (Exception e) {
        System.out.println("error " + e.getMessage());
    }
    read record : 00 A4 ....
    If you know how to do this, I'm waiting for your answer.
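    For the READ RECORD question above: a minimal sketch of how such a CommandAPDU is usually built, following the ISO 7816-4 convention (the class and helper names are made up, and Payflex may expect different P1/P2 values).
    import javax.smartcardio.*;
    class ReadRecordExample {
        // INS = 0xB2 is READ RECORD; P1 = record number; P2 = 0x04 means
        // "P1 is a record number" in the currently selected file; the last
        // argument is the number of bytes expected back.
        static ResponseAPDU readRecord(CardChannel channel, int recordNumber, int expectedLength)
                throws CardException {
            return channel.transmit(new CommandAPDU(0x00, 0xB2, recordNumber, 0x04, expectedLength));
        }
    }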

  • Increasing USB buffer size on HP Pavilion g6 / Windows 8

    I have recently loaded Traktor Audio 2 DJ software and have run diagnostics on the system because of sound drag and distortion. The error message suggested increasing the USB buffer size to solve the problem. How do I do that - and do any forum members who use Traktor reckon that will solve the problem?

    Thanks for the information. Here is a link to the keyboard troubleshooting steps for your computer. Click on the link and select the option for the wrong character appearing when typing. This should allow you to change the keyboard layout. What keyboard language do you have selected? What country was the computer purchased in?

  • Determining the max allowable buffer size

    I am working on a program which works with very large amounts of data. Each job that is processed requires the program to read through as many as several hundred files, each of which has tens of thousands of values. The result of the operation is to take the data values in these files, rearrange them, and output the new data to a single file. The problem is with the new order of the data. The first value in each of the input files (all several hundred of them) must be output first, then the 2nd value from each file, and so on. At first I tried doing this all in memory, but for large datasets, I get OutOfMemoryErrors. I then rewrote the program to first output to a temporary binary file in the correct order. However, this is a lot slower. As it processes one input file, it must skip all around in the output file, and do this several hundred times in a 60+ MB file.
    I could tell it to increase the heap size, but there is a limit to how much it will let me give it. I'd like to design it in a way that I could allocate as big a memory buffer as I can, read through all the files and put as much data as it can into the buffer, output the block of data to the output file, and then run through everything again for the next block.
    So my question is, how do I determine the biggest possible byte buffer I can allocate, yet still have some memory left over for the other small allocations that will need to be done?

    The program doesn't append; it has to write data to the binary file out of order because the order in which it reads data from the input files is not the order in which it will need to be output. The first value from all of the input files has to be output first, followed by the second value from every input file, and so on. Unless it is going to have several hundred input files open at once, the only way to output it in the correct order is to jump around the output file. For example, say that there are 300 input files with double values. The program opens the first input file, gets the first value and outputs it at position 0. Then it reads the second value and must write it to position 300 (or byte position 8*300, considering the size of a double). The third value must go to position 600, and so on. That is the problem: it must output the values in a different order than it reads them.
    Without giving any VM parameters to increase the heap size, Java will let me create a byte buffer of 32MB in size, but not much more. If the data to be output is more than 32MB, I will need to work on the data in 32MB chunks. However, if the user passes VM parameters to increase the heap size, the program could create a larger buffer to work on bigger chunks and get better performance. So somehow I need to find out what a reasonable buffer size is to get the best performance.
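    One rough way to pick that buffer size is to ask the Runtime how much heap is still obtainable and keep some headroom for the program's other allocations. Below is a minimal sketch; the 75% headroom factor and the pickBufferSize helper are illustrative choices, not a recommendation from this thread.
    import java.nio.ByteBuffer;
    public class BufferSizer {
        // Estimate how large a byte buffer can be allocated while leaving
        // headroom for the rest of the program.
        static int pickBufferSize(long bytesWanted) {
            Runtime rt = Runtime.getRuntime();
            long used = rt.totalMemory() - rt.freeMemory();   // heap currently in use
            long available = rt.maxMemory() - used;           // heap still obtainable
            long candidate = Math.min((long) (available * 0.75), bytesWanted);
            // Arrays are int-indexed, so cap below Integer.MAX_VALUE.
            return (int) Math.min(candidate, Integer.MAX_VALUE - 8);
        }
        public static void main(String[] args) {
            int size = pickBufferSize(512L * 1024 * 1024);    // ask for up to 512 MB per chunk
            ByteBuffer buf = ByteBuffer.allocate(size);       // process the data in chunks of this size
            System.out.println("Working chunk size: " + buf.capacity() + " bytes");
        }
    }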

  • I need to increase my FMS buffer size to display video smoothly

    I need to increase my buffer size in FMS.
    Please let me know where I can increase my FMS buffer size.
    My video runs over RTMP and I am facing a loading problem: the video stalls while it is running, so I need to increase the buffer size to run the video smoothly.
    Please reply as soon as possible so I can resolve my problem.
    Thanks in advance.

    Hi,
    You can increase the buffer size by using the bufferTime property of the NetStream object you are using. For example, if ns is your NetStream object, it can be done with ns.bufferTime = <desired value>.
    Also, you can go through this:
    http://livedocs.adobe.com/flash/9.0/ActionScriptLangRefV3/flash/net/NetStream.html#bufferTime
