NI-DNET Eurotherm Mini8 Output Buffer

Hello,
I am working on a LabVIEW application with an NI-DNET card and a Eurotherm Mini8 controller, which has a polled input of 80 bytes and an output of 48 bytes.
I started everything off with EasyIOConfig (feeding it the correct I/O sizes), indexed out the device handles, and passed them into a While Loop. Inside the While Loop I have Read DeviceNet I/O and Write DeviceNet I/O, fed by Convert From DeviceNet Read and Convert For DeviceNet Write respectively, with the desired byte offsets specified.
Reading works perfectly. Writing some of the outputs is the problem. I wired an indicator to the data line going into Write DeviceNet I/O (which should show me the output buffer, right?), and I can only address the first 8 bytes. Outputs in the first 8 bytes work; byte 8 and onward are grayed out and do not respond to input. Writing to byte 42 puts the data into byte 4 for some reason.
I played with it for a while, tweaking different things. At one point I was able to address the first 14 bytes, but the offset did not do anything - it simply placed the I16 input data sequentially into the output buffer. I tried using different data types to define the offset and reloading the NI-DNET drivers; no luck.
I am building the EXE on a PC running LabVIEW 8.6 (no DNET card) and running it on another PC with the DNET card and hardware installed, in case that makes a difference.
Any ideas?

Solved it by sending an empty array of 48 bytes to the first Convert For DeviceNet Write. 
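
For anyone hitting the same thing: the point is that the output buffer has to exist at its full polled-output size (48 bytes here) before you write fields at byte offsets. Since the diagram itself is LabVIEW, here is only a rough C illustration of that idea, not NI-DNET API code; the 48-byte size comes from this thread, while the pack_i16_le helper and the little-endian byte order are assumptions made for the sketch.

/* Hypothetical illustration (not NI-DNET API code): why the output image must be
 * pre-sized to the full 48-byte polled-output length before writing at offsets. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define POLLED_OUTPUT_BYTES 48   /* Mini8 polled output size from the thread */

/* Write a 16-bit value into the output image at a byte offset (little-endian assumed). */
static int pack_i16_le(uint8_t *image, size_t image_len, size_t offset, int16_t value)
{
    if (offset + sizeof(int16_t) > image_len)
        return -1;                       /* offset falls past the end of the buffer */
    image[offset]     = (uint8_t)(value & 0xFF);
    image[offset + 1] = (uint8_t)((value >> 8) & 0xFF);
    return 0;
}

int main(void)
{
    /* Equivalent of wiring the empty 48-byte array into the first
     * Convert For DeviceNet Write: the whole image exists up front. */
    uint8_t output_image[POLLED_OUTPUT_BYTES];
    memset(output_image, 0, sizeof output_image);

    /* Now an offset of 42 lands where it should, instead of being clipped
     * to whatever shorter buffer happened to exist. */
    if (pack_i16_le(output_image, sizeof output_image, 42, 1234) != 0)
        fprintf(stderr, "offset out of range\n");

    return 0;
}

In the LabVIEW diagram the equivalent is simply the fix described above: wiring the empty 48-byte array into the first Convert For DeviceNet Write so later offsets have a full-size buffer to land in.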

Similar Messages

  • Problem with ffmpeg: "lame: output buffer too small"

    Seems stream 0 codec frame rate differs from container frame rate: 1000.00 (1000/1) -> 15.00 (15/1)
    Input #0, flv, from 'cEoVBVBCkGI.flv':
    Duration: 00:09:31.06, start: 0.000000, bitrate: 305 kb/s
    Stream #0.0: Video: flv, yuv420p, 320x240, 241 kb/s, 15 tbr, 1k tbn, 1k tbc
    Stream #0.1: Audio: mp3, 22050 Hz, mono, s16, 64 kb/s
    File 'pluxus.mp3' already exists. Overwrite ? [y/N] y
    Output #0, mp3, to 'pluxus.mp3':
    Stream #0.0: Audio: libmp3lame, 22050 Hz, mono, s16, 64 kb/s
    Stream mapping:
    Stream #0.1 -> #0.0
    Press [q] to stop encoding
    [mp3 @ 0x814b270]mdb:255, lastbuf:0 skipping granule 0
    [mp3 @ 0x814b270]mdb:255, lastbuf:196 skipping granule 0
    [libmp3lame @ 0x814c760]lame: output buffer too small (buffer index: 9404, free bytes: 388)
    Audio encoding failed
    [n00b@asrock Music]$
    Any ideas?

    Hi to all!
    I've uploaded the lame-3.97-1 package to AUR. You will have to remove gstreamer0.10-ugly-plugins because it requires lame-3.98.2.
    http://aur.archlinux.org/packages.php?ID=366

  • I/O Error ioe: Output buffer too small

    I am currently running an FTP process from within the database using a Java stored procedure, attempting to send about 199 rows. I am receiving the error message "i/o error ioe: Output buffer too small".
    If I reduce the number of rows to below 83 it completes successfully; with any number over 83, the problem occurs.
    Is there an Oracle parameter in the init.ora file that I can modify to increase the amount of data sent... (BTW, 82 rows is approximately 8K of data)
    Thanks,
    wn


  • Rman input/output buffer

    Hi,
    We take RMAN backups on 10.2 and 11.2. The database size is 3 TB, and about 3 TB of redo is generated every day.
    Backing up to tape (9 to 12 hours) is much slower than backing up to disk (3 to 4 hours).
    I read the document below:
    http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmtunin.htm
    It talks about the RMAN input and output buffers.
    Is it possible to change the default input/output buffer sizes for RMAN?
    Br,

    Hello;
    There's a white paper which I believe is worth a look:
    http://www.oracle.com/technetwork/database/focus-areas/availability/rman-perf-tuning-bp-452204.pdf
    Best Regards
    mseberg

  • Underruns and output buffer failures

    Hi
    I have two Catalyst 3550s connecting two datacenters, and I'm receiving errors on the connecting switchports. The errors are output buffer failures and underruns. Is it just that the switches are running out of resources, or something else? We are using servers in the datacenters with NFS, and I've heard that this could cause these errors. I'd really appreciate an opinion from a network professional.
    Here are the port statistics. Both switchports are identical.
    FastEthernet0/3 is up, line protocol is up
    Hardware is Fast Ethernet, address is 000c.301f.c503 (bia 000c.301f.c503)
    MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
    reliability 255/255, txload 3/255, rxload 6/255
    Encapsulation ARPA, loopback not set
    Keepalive set (10 sec)
    Full-duplex, 100Mb/s
    input flow-control is off, output flow-control is off
    ARP type: ARPA, ARP Timeout 04:00:00
    Last input 00:00:00, output 00:00:00, output hang never
    Last clearing of "show interface" counters 21:54:22
    Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
    Queueing strategy: fifo
    Output queue :0/40 (size/max)
    1 minute input rate 2676000 bits/sec, 531 packets/sec
    1 minute output rate 1432000 bits/sec, 528 packets/sec
    123525330 packets input, 166317380 bytes, 0 no buffer
    Received 887720 broadcasts, 0 runts, 0 giants, 0 throttles
    0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
    0 watchdog, 671127 multicast, 0 pause input
    0 input packets with dribble condition detected
    108650060 packets output, 322356071 bytes, 11269 underruns
    0 output errors, 0 collisions, 0 interface resets
    0 babbles, 0 late collision, 0 deferred
    0 lost carrier, 0 no carrier, 0 PAUSE output
    11269 output buffer failures, 0 output buffers swapped out
    Thanks

    Hello,
    most likely, the errors are caused by your link being saturated.
    I found a previous post which explains the error as follows:
    'The frames are being switched in hardware, since there are no output queue drops. But the hardware is attempting to put frames on the wire when it's pretty close to being saturated. Although your 1 minute average is only around 1,5 mbit, it only takes a couple millisecond long burst to fill up the hardware interface buffers and cause this error. It's probably time to add another link (forming an etherchannel) or jump to a gig interface. I don't think you can tune these hardware buffers at all to buy any additional time.'
    Conclusion: your link is getting overloaded...
    Regards,
    GP

  • Looping part of 6534 output buffer

    I need to loop a 16 bit output pattern that is only 1000 samples long. I want to loop this continuously but do not want to have to fill the buffer completely. Is it possible to loop only these 1000 samples in the buffer?

    Coxy,
    Yes, this is definitely possible. You don't need to fill the board's memory to perform onboard looping. Below, I have included links to a couple of example programs that you may want to examine:
    LabVIEW Example
    Continuously Generating Repeat Data from Digital Output Channels (Loop From Onboard Memory)
    NI-DAQ Function Calls Example
    Pattern Generation with Onboard Looping for the NI 6534 and C++
    Good luck with your application.
    Spencer S.

  • Set output buffer position

    Hello,
    I have a configuration problem with my USB-4431 pod: I want to measure a transfer function between an input signal (the output task) and a sensor output.
    For my input signal, I continuously generate a chirp of 8192 samples (output task).
    For my sensor signal, I acquire (synchronized with the output) 5 x 8192 samples.
    The first measurement: everything is perfect...
    The following measurements: no longer correct...
    The problem I have identified: my DAQmxStopTask call at the end of the first measurement is not aligned with the end of the buffer => when I call DAQmxStartTask again, the task restarts at a sample that is not 0 => my input signal has changed...
    I tried DAQmxSetWriteRelativeTo followed by DAQmxSetWriteOffset, without success...
    The only solution that seems to fix the problem is DAQmxResetDevice followed by a full reconfiguration, but that is too slow for my application.
    Does anyone have an idea?
    Thanks in advance
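
    For reference, a minimal DAQmx C sketch of the rewind the poster describes trying: stop the task, point the write position back at the first sample, rewrite the chirp, and restart so generation begins at sample 0 again. The task handle, chirp buffer, and timeout are placeholders, and the thread reports this did not solve the problem on the USB-4431, so treat it purely as an illustration of the calls mentioned above.

    /* Sketch only: illustrates DAQmxSetWriteRelativeTo / DAQmxSetWriteOffset as named
     * in the question. Channel setup, error handling and the chirp data are assumed. */
    #include <NIDAQmx.h>

    #define CHIRP_LEN 8192

    int rearm_output(TaskHandle aoTask, const float64 chirp[CHIRP_LEN])
    {
        int32 written = 0;

        DAQmxStopTask(aoTask);

        /* Point the write position back at the start of the output buffer ... */
        DAQmxSetWriteRelativeTo(aoTask, DAQmx_Val_FirstSample);
        DAQmxSetWriteOffset(aoTask, 0);

        /* ... and rewrite the chirp so the next DAQmxStartTask begins at sample 0. */
        DAQmxWriteAnalogF64(aoTask, CHIRP_LEN, 0, 10.0,
                            DAQmx_Val_GroupByChannel, chirp, &written, NULL);

        return DAQmxStartTask(aoTask);
    }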

    Could you post your VI online so that we can find a solution to your problem more easily? Also, why do you want to acquire 5 x 8192 samples?
    Regards
    L.MICOU

  • 6534 scarab output buffer use

    We use the 6534 for continuous double-buffered output of a changing 32-bit digital sequence. The large onboard Scarab buffer memory causes a delay when the sequence is changed. We can use set_daq_device_info to set ND_FIFO_TRANSFER_COUNT to ND_NONE, but what we need is a small buffer, bigger than ND_NONE but smaller than 32 MBytes. Is there a way of finding the actual onboard Scarab buffer fill level? We could use that to avoid filling the Scarab memory, so that we can limit the delay and still use the buffer.
    thanks, Alex

    Alex,
    I would say there is an important thing to watch for here. The board/driver behaves differently depending on whether you have enabled regeneration or not.
    For Regenerative output operations the driver will fill up the entire on-board memory (Scarab) with the pattern in the host buffer.
    Non-Regenerative operations are limited to using only as much on-board memory as the host buffer size. A 50k Sample buffer in the host memory would mean that the board will only use 50KSamples worth of its on-board memory.
    As you can see you can limit the amount of on-board memory used by disabling the regeneration of data. This of course makes your application vulnerable to speed issues but guarantees that you'll be able to update your output with much less delay.
    KB Online - 2QGEIN85 talks about this.
    (search for: +6534 +generated +data)
    You will also find KB Online - 2MOESVN5 very useful. It mentions a function call you can use to keep track of how much data has actually been generated by the board, which is related to the on-board memory status (empty/full) for non-regenerative operations.
    I hope this helps,
    Alejandro Asenjo
    National Instruments
    Applications Engineer

  • Constraints/Output Buffer for FPGA Pins (e.g. PXI_LBLSTAR0)

    Hi everybody,
    is there a way to change the output options for the pins provided in the FPGA files?
    For example, in the UCF file for the 8711R FPGA board there are constraints set for nearly all of the FPGA lines.
    <snip>
    NET "PXI_LBLSTAR0" IOSTANDARD = LVTTL;
    NET "PXI_LBLSTAR0" DRIVE = 6;
    NET "PXI_LBLSTAR0" SLOW;
    <snip>
    What if I want to change the output driver to high speed?
    Is there a way to do this in the LabVIEW project/VI?
    Thanks in advance for your time.
    /Tilman

    Hi Tilman,
    I am sorry. I asked my colleagues, but changing these UCF files is not supported.
    There are several risks involved:
    1.  Using this workaround will change the setting for any and all VIs compiled from this computer. This will affect any Reconfigurable I/O targets that you compile the VI for. In other words, it could cause problems if you are targeting devices other than the 7811R.
    2.  There is no indication on the VI that the drive strength has been changed, so you must keep track of these settings yourself.
    3.  When changing the driver type to a setting other than the default, you increase the likelihood of crosstalk between digital lines.
    4.  Changing other options in toplevel_gen.ucf could cause undesirable effects.
    I hope you will get along with the default settings.
    Kind Regards,
    Vanessa

  • Output errors, Transmit discards and big buffer errors on 1121 AP

    I have an AIR-AP1121G-A-K9 running c1100-k9w7-tar.123-7.JA2 (autonomous).
    We have monitoring set up with Orion NPM, and we consistently see output errors, transmit discards and big buffer errors.
    The users at the site have not reported any issues, but I was wondering how to prevent these, or whether they are normal.
    What causes the output errors on the wireless radio? How do I troubleshoot further?
    Radio0-802.11G
    Total Output Errors         0              47749
    Small Buffer Misses
    4 misses
    139 misses
    Medium Buffer Misses
    117 misses
    249 misses
    Big Buffer Misses
    62 misses
    8982 misses
    Dot11Radio0 is up, line protocol is up
    MTU 1500 bytes, BW 54000 Kbit, DLY 1000 usec,
         reliability 255/255, txload 1/255, rxload 1/255
    Encapsulation ARPA, loopback not set
    ARP type: ARPA, ARP Timeout 04:00:00
    Last input 00:00:00, output 00:00:00, output hang never
    Last clearing of "show interface" counters never
    Input queue: 0/75/479/0 (size/max/drops/flushes); Total output drops: 245980
    Queueing strategy: fifo
    Output queue: 0/30 (size/max)
    5 minute input rate 48000 bits/sec, 25 packets/sec
    5 minute output rate 34000 bits/sec, 22 packets/sec
         32482389 packets input, 2056095954 bytes, 0 no buffer
         Received 1622227 broadcasts, 0 runts, 0 giants, 0 throttles
         0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
         0 input packets with dribble condition detected
         44289160 packets output, 1268314927 bytes, 0 underruns
         47752 output errors, 0 collisions, 1 interface resets
         0 babbles, 0 late collision, 0 deferred
         0 lost carrier, 0 no carrier
         0 output buffer failures, 0 output buffers swapped out
    Thanks

    This is normal.
    Remember that a wireless network is like a hub: one station talks and everyone else stops to listen and waits for their turn.

  • Is there a way to force the Tag Engine to dump its input buffer to the database?

    I have an application where I start a process and log the data, and then call a subVI that uses the Read Historical Trend VIs to get all of the data from when the process started until now. The problem is that the Historical Trend VIs only read from the database on disk, and the Tag Engine's buffer doesn't write to disk until it's full (or possibly times out; I'm not sure about that, though). Is there a way to force the Tag Engine to write to disk, so that the Historical Trend VIs will return the most recent data?
    Shrinking the buffer will help a little, but that will only result in missing less of the most recent data. One possible hack is to have a dummy tag that I simply write enough data to that it forces the buffer to be written to the database. I was hoping for something more elegant, though.

    That's a good question.
    Control of the data logging and the DSC Engine is all handled (more or less) automatically - you can feel the NI ease-of-use idea.
    That means the Citadel service (one of the NI services installed by LabVIEW DSC) is responsible for the data handling (writing to and reading from the database files, including caching some data, e.g. index files and frequently used data).
    The DSC Engine makes a request to the Citadel service that the data has to be logged. Everything else is handled by the Citadel service. Internally, there are two kinds of logging periods handled by the Citadel service: one for traces being viewed (a small period: 200 ms) and one for traces not being viewed (a slow (big) log period: 20000 ms). That means that if Citadel gets a request to store a value, it will buffer it and store it as soon as possible, depending on other circumstances. One of these is whether the trace data is currently being viewed (e.g. with Read Historical Trend.vi). If you request/read a trace to view it, you should pretty much see the current values, because Citadel should use the fast log period.
    The Citadel service also takes care of setting priorities, e.g. the writes before the reads (we don't want to lose data - right?). That means that if you really stuff the system by writing a lot of data, the CPU might get overloaded and the reads will happen less often.
    If you really want to see "real-time" data, I would recommend using "Trend Tags.vi". With this approach you avoid the chain DSC Engine - Output Buffer - Citadel Service - Input Buffer - File - HD... and back.
    I hope this info helps.
    Roland
    PS: I've attached a simple VI that has a tip (workaround) in it which might do what you are looking for... However, National Instruments cannot support this officially, because the VIs being used are internal DSC VIs that will certainly change in the next version of LV DSC, and you would therefore need to re-factor your application.
    Attachments:
    BenchReadHistTrend.llb ‏104 KB

  • How do I add column headings to an output file?

    Hi,
    I have an internal table that is created in my program, and I send that table out as a data file attachment on an email.
    I have a request to include column headings on the data file going out.
    Does anyone have any ideas as to how I can include a heading line as the first line of the output file?
    I'm an ABAP newbie and I don't know the best way to accomplish this.
    Thanks for your help!
    Andy

    Hi,
    While building the attachment, just add the field descriptions as the first line; refer to the following code.
    * Append header line to download data
      CONCATENATE 'Company Code'(004)
                  'State'(007)
                  'Store'(010)
                  'Tax Type'(013)
                  'Purchase'(015)
                  'Tax Rate'(017)
                  'Gross Tax Due'(021)
                  'Discount'(023)
                  'Net Tax Due'(025)
                  INTO ls_download-data
                  SEPARATED BY lc_tab.
      APPEND ls_download TO gt_download.
      CLEAR : ls_download.
    LOOP AT gt_error_log INTO ls_error_log.
        CONCATENATE ls_error_log-bukrs
                    ls_error_log-budat
                    ls_error_log-monat
                    ls_error_log-gjahr
                    ls_error_log-xblnr
                    ls_error_log-bschl
                    ls_error_log-waers
                    ls_error_log-hkont
                    ls_error_log-wrbtr
                    ls_error_log-prctr
                    ls_error_log-kostl
                    ls_error_log-message
                    INTO ls_attach-line SEPARATED BY lc_tab.
        CONCATENATE lc_cret ls_attach-line  INTO ls_attach-line.
    * Append error log data to attachment table
        APPEND ls_attach TO lt_objbin.
      ENDLOOP.
    * Call the function module to convert the data into hex
      CALL FUNCTION 'SO_RAW_TO_RTF'
        TABLES
          objcont_old = lt_objbin
          objcont_new = lt_objbin.
    * Append converted hex data to attachment table
      LOOP AT lt_objbin INTO lv_line.
        ls_conv = cl_abap_conv_out_ce=>create( encoding = 'UTF-8' endian = 'B').
      * Call the write method to add the data to the output buffer sequentially
        CALL METHOD ls_conv->write( data = lv_line ).
        lv_buffer = ls_conv->get_buffer( ).
        MOVE lv_buffer TO lv_hexa.
        MOVE lv_hexa TO ls_hex-line.
      * Append converted hex data to attachment table
        APPEND ls_hex TO gt_attach.
      ENDLOOP.
    Regards,
    Prashant

  • 6534 in Pattern Generation Input and Output

    Hello All
    I have a PCI 6534 High Speed Digital I/O card that I am trying to use to generate a pattern output from port A and acquire some data from port C. I have connected Port C, bit 0 to high and all the others low and I have also connected the two REQ pins together. This is to allow for the REQ pin from the pattern generation output to drive the input as an external clock. I have set the timebase as 1uS and the request interval as 10, to give a REQ pulse every 10uS. The idea being that the pattern generation output will generate a REQ pulse every 10uS and this would cause an input read to occur. My code can be found in the attached word file.
    Now initially I placed the DIG_Block_In command before the DIG_Block_Out and set the two counts to 100. For a single run of the application this filled my input buffer array with 50 elements of 257.... which I think is what I would expect as for a count of 100 it takes 2 to fill both the upper and lower 8 bits of the input array. OK....
    Now if I change the In Count to 200 and leave the Out Count at 100 I only fill 48 elements. I have no idea why this would happen. In my finall application I hope to increase the count to nearer 2000 and this loss of elements becomes significant.
    If I swap the DIG_Block_Out command to go before the DIG_Block_In then with both counts set to 100 I get no data acquired at all. If I increase both counts to say 2000, I actually acquire 944 elements, 56 less than I would expect? Why is this....?? Is it because the DIG_Block_Out command has already started the process before the DIG_Block_In command is initiated?
    Does anybody know what is going on here? I have had the same problem with a PCMCIA 6533 card (worse) and thought it would be solved with the PCI 6534. Does anybody know how I can ensure the correct numbers of data are acquired every time I run this operation? I need to be sure that all the desired data is being acquired as my final application is very dependent on this.
    Any help would be gratefully appreciated.
    Jamie
    Attachments:
    6534_Timing_Issues.doc ‏27 KB

    Hi Jamie,
    Whenever you are communicating from one group to the other on the 653x boards, you should have the input clocked on an external signal and you should have the output start AFTER the input process has started.
    As for why your input buffer is half your output buffer, it could be that the intermediate PC memory buffer is in terms of bytes and your application buffer was casting this data to a 2 byte word.
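
    To make the start ordering concrete, here is a rough Traditional NI-DAQ C sketch (my own, not taken from Jamie's attachment) of arming the externally clocked input before the pattern-generation output. The group numbers, counts, header name, and the omitted group/handshake configuration are assumptions, so check the linked examples for the full setup.

    /* Sketch of the start order only: group configuration, REQ routing and
     * completion checks (DIG_Block_Check / DIG_Block_Clear) are omitted. */
    #include "nidaq.h"   /* Traditional NI-DAQ header; adjust to your installation */

    #define OUT_COUNT 100u
    #define IN_COUNT  100u

    static i16 outPattern[OUT_COUNT];
    static i16 inBuffer[IN_COUNT];

    void run_groups(i16 device)
    {
        /* Arm the input first, so it is waiting on the external REQ clock ... */
        DIG_Block_In(device, 2 /* group assumed: port C */, inBuffer, IN_COUNT);

        /* ... then start the pattern generation that produces the REQ pulses. */
        DIG_Block_Out(device, 1 /* group assumed: port A */, outPattern, OUT_COUNT);
    }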
    I linked two example programs on one of your other posts. Adapting those might suit your needs best. Have a good day.
    http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101&RPAGEID=135&HFORCEKWTID=75069:5&HOID=5065000000080000008BA10000&HExpertOnly=&UCATEGORY_0=_31_%24_12_&UCATEGORY_S=0
    Ron

  • How to Immediately Change Counter Output Rate?

    I have a piece of code that largely works like this example: http://zone.ni.com/devzone/cda/epd/p/id/5493
    In other words, I set up the Counter Output with some initial frequency and duty cycle, but then during the main loop of my program I continuously change the frequency to a new value based on other criteria.
    I'm using an M-series PXI card and LabVIEW RT.
    The problem I'm having is that the card waits for the next edge before changing the counter output rate. For instance, let's say it is running at a low frequency and I am changing to a high frequency. If the command arrives in the middle of the current pulse, it will wait to complete the low-rate pulse before starting the high-frequency output. Is there a way to make it interrupt the current count and immediately start counting at the new rate?
    Thanks,
    Isaac
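
    For context, a bare DAQmx C sketch of the usual on-the-fly update (the counter name, rates, and timing values are placeholders, not from this thread): DAQmxWriteCtrFreqScalar pushes a new frequency into the running pulse train, and, as described above, the hardware applies it at the next period boundary rather than cutting the current pulse short.

    /* Sketch only: standard on-the-fly counter frequency update. */
    #include <NIDAQmx.h>

    int run_counter(void)
    {
        TaskHandle co = 0;

        DAQmxCreateTask("", &co);
        DAQmxCreateCOPulseChanFreq(co, "PXI1Slot2/ctr0", "",
                                   DAQmx_Val_Hz, DAQmx_Val_Low, 0.0,
                                   100.0, 0.5);               /* initial 100 Hz, 50% duty */
        DAQmxCfgImplicitTiming(co, DAQmx_Val_ContSamps, 1000);
        DAQmxStartTask(co);

        /* Later, in the main loop: push a new rate; it is latched at the next period. */
        DAQmxWriteCtrFreqScalar(co, 0, 10.0, 5000.0, 0.5, NULL);

        DAQmxStopTask(co);
        DAQmxClearTask(co);
        return 0;
    }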

    Hi Isaac,
    I posted the code in LV 8.2 so you should be able to open it now (it sometimes takes several minutes to upload).
    There are a few limitations to using the digital lines instead of the counters:
    1.  The digital lines are updated off of a sample clock, which will be much slower than a timebase.  For example, on the 6221 the maximum update rate is 1 MHz, while the counter output has a maximum timebase of 80 MHz. As a result, the set of frequencies you can generate is going to be more restricted (divide down from 1 MHz vs. 80 MHz).
    2.  On M series devices, the digital lines must be clocked from an external source.  This could be generated from a counter.
    3.  You have to build the digital waveform, which is a bit tricky (I think the example code should help out with that but I haven't had time to thoroughly test it).
    4.  If you are generating digital lines at fast rates, you will need to write quite a few samples at a time to the output buffer to ensure the data does not underflow.  If the buffer includes multiple periods of the digital signal, you would have the case that using the counter output would still update more immediately.
    Again, to determine the best course of action it would be useful to know what frequencies you want to generate and which exact hardware you are using. I just mentioned the digital lines as an alternative to the counters, but it might not be ideal for your situation.
    -John
    John Passiak

  • Producer consumer with analog and digital inputs and outputs

    Hi everyone,
    I am working on a control system program for some practical test work. Currently I am working on the data acquisition component of the LabVIEW program. My architecture is producer-consumer loops with a queue. My system will have analog inputs, analog outputs, digital inputs and digital outputs. It's not a time-critical system, but I would like all of the data acquisition to be synchronised. I have attached my program as it is at the moment. I am having trouble getting all of the data into the queue since I have two data types. Also, I'm not sure if I've synchronised the four read/write sequences correctly. I would greatly appreciate it if somebody could take a look at my program and give me some advice. Thanks in advance.
    Attachments:
    control_v2_DAQ loop.vi ‏46 KB

    Robert, the specific error that I get is:
    Error -200462 occurred at DAQmx Start Task.vi:6
    Possible reason(s):
    Generation cannot be started because the output buffer is empty. 
    Write data before starting a buffered generation. The following actions can empty the buffer: changing the size of the buffer, unreserving a task, setting the Regeneration Mode property, changing the Sample Mode, or configuring retriggering.
    Task Name: Heater testing lab digital outputs
    This error occurs at the 'DAQmx Write.vi' function. I just want to send one sample per second for each channel. I would like the producer and consumer loops to each run once every second.
    I have attached part of my code with just the data acquisition and writing. Any help would be greatly appreciated.
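
    The error text already contains the fix: a buffered generation needs data in its buffer before the task starts. Here is a minimal DAQmx C sketch of that ordering (the line names, rate, and data pattern are placeholders); in the LabVIEW diagram the same rule means wiring DAQmx Write before DAQmx Start Task, or letting DAQmx Write auto-start the task.

    /* Sketch only: avoid error -200462 by filling the output buffer before starting. */
    #include <NIDAQmx.h>

    int start_do(void)
    {
        TaskHandle doTask = 0;
        uInt8 pattern[8] = {1, 0, 1, 0, 1, 0, 1, 0};   /* 2 samples x 4 lines, for example */
        int32 written = 0;

        DAQmxCreateTask("", &doTask);
        DAQmxCreateDOChan(doTask, "Dev1/port0/line0:3", "", DAQmx_Val_ChanForAllLines);
        /* Sample clock left at the default source here; some devices need an
         * external clock for buffered digital output. */
        DAQmxCfgSampClkTiming(doTask, "", 1.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 2);

        /* Write BEFORE starting; otherwise the buffered generation has nothing
         * to output and DAQmx Start Task returns -200462. */
        DAQmxWriteDigitalLines(doTask, 2, 0, 10.0, DAQmx_Val_GroupByChannel,
                               pattern, &written, NULL);

        return DAQmxStartTask(doTask);
    }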
    Attachments:
    control_v2_ML_simple.vi ‏83 KB
