TDMS write buffer performance

I am writing log data to a TDMS file.  We log one sample for each of 16 channels at the beginning and end of each step in a test profile and every minute in between.
The profile has 640 steps, most shorter than one minute. This results in approximately 400,000 writes to the TDMS file, each with only one data point per channel. As a result, the file is very fragmented and takes a long time to open or defragment.
In this post Brad Turpin mentions a fix that works well but greatly diminishes TDMS write performance:
http://forums.ni.com/ni/board/message?board.id=170&message.id=403179&query.id=7209265#M403179
I also found that it takes about 40 seconds to set the NI_MinimumBufferSize attribute on all 10,240 channels (640 groups * 16 channels).
I did test this and it works very well, but it took hours to generate a file of dummy data using this method. Generating the dummy data file with the same number of writes, but without the buffer size attribute, took seconds.
In the same post, Brad also mentioned that LV 2009 contains TDMS VIs with an updated option to create a single binary header for the entire file.
I have not been able to find any more references to this, nor have I found the attribute that enables this functionality.
Does anybody know how to set this attribute or have any suggestions on how to better deal with my file structure?
Thanks,
Dave 

Are you writing one value per channel for all 16 channels with a single call to TDMS Write, or are you calling TDMS Write 16 times for that? Consolidating this into one call to TDMS Write should improve performance and fragmentation quite a bit. In addition to that, you could set NI_MinimumBufferSize to the maximum number of values a channel in a step can have. After each step you could call TDMS Flush so that these buffers are flushed to disk. That should further reduce fragmentation.
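As a rough illustration of the one-write-per-step idea, here is a Python sketch using the nptdms package rather than the LabVIEW TDMS VIs discussed in this thread; the group/channel names and the dummy data are placeholders, not part of the original application:

import numpy as np
from nptdms import ChannelObject, TdmsWriter

def log_step(writer, step_index, samples_by_channel):
    # One value per channel for a whole step, written as a single TDMS segment
    # (one header) instead of 16 separate writes.
    objects = [
        ChannelObject(f"Step_{step_index:04d}", f"Channel_{ch:02d}", np.array([value]))
        for ch, value in enumerate(samples_by_channel)
    ]
    writer.write_segment(objects)

with TdmsWriter("profile_log.tdms") as writer:
    for step in range(640):
        log_step(writer, step, samples_by_channel=[0.0] * 16)  # dummy data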
The feature Brad mentioned is used automatically in 2009; you don't need to enable it. Unfortunately, it won't do much for your application, because you create new channels for every step, which results in a new binary header for every step. The 2009 improvements would only kick in if you used the same channels all the way through.
We are currently working on some improvements to the TDMS API that will help make your use case a lot more efficient. These are not yet available to customers, though, so I'll describe some ways of working around the issue. It's not going to be pretty, but it'll work.
1) The TDM Streaming API is built for high performance, but even more than that, it is built so that whatever you do with it, it will always create a valid TDMS file. These safety measures come at a cost, especially if you have a large number of channels and/or properties versus a rather small number of values per channel. To better address use cases like that for the time being, we published a set of VIs a while ago that write TDMS files based on LabVIEW File I/O functions. This API does a lot less data processing in the background than the built-in API and is therefore a lot more efficient at tackling use cases with thousands of channels.
2) Another way of improving performance during your test steps is to push some of the expensive work out into a post-processing step. You can merge multiple TDMS files by concatenating them on a file system level. A possible workaround would be to write one TDMS file per step, or maybe one every 10 or 100 steps, and merge them after the test is done, as in the sketch below. An example VI for concatenating TDMS files can be found here.
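The file-level merge can also be sketched with plain file I/O in Python: since a TDMS file is a sequence of self-contained segments, per-step files can be joined by straight binary concatenation. The file names and per-step naming scheme below are assumptions:

import glob
import shutil

def merge_tdms(part_pattern, merged_path):
    # Concatenate the per-step TDMS files in name order into one file.
    with open(merged_path, "wb") as merged:
        for part in sorted(glob.glob(part_pattern)):
            with open(part, "rb") as src:
                shutil.copyfileobj(src, merged)

# e.g. one file per step, written as step_0000.tdms, step_0001.tdms, ...
merge_tdms("step_*.tdms", "full_test.tdms")
# Any .tdms_index files belonging to the parts no longer apply to the merged
# file; an index for full_test.tdms can be regenerated by the reading tool.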
Hope that helps,
Herbert
Message Edited by Herbert Engels on 05-12-2010 11:33 AM

Similar Messages

  • Memory growth in TDMS write

    I have tried all the workarounds I could find on the NI website, such as:
    1) Setting NI_MinimumBufferSize to 10,000, or
    2) Using the TDMS Advanced Synchronous Write along with other Advanced TDMS functions instead of standard TDMS write
    However, the RT free memory still diminishes gradually while a TDMS file is written. The drive space available on /c/ is reduced as well. Due to the reduced free RT memory, an FPGA Memory Full error occurs after recording 4-5 large TDMS files (100 to 200 MB). Can anyone help with this issue?
    More details on the system and the program I am working on:
    1) LabVIEW 2014 Professional Development
    2) cRIO 9068 + 2 NI 9234 modules for triggered Sound & Vibration measurement
    3) 8 Channels at 51200 S/s
    4) FPGA buffer size 8191
    5) RT buffer depth 5*8*51200
    6) LabVIEW code is based on the LabVIEW sample project: LabVIEW FPGA Waveform Acquisition and Logging on CompactRIO
    Regards,
    JJDFRD

    Hi Kyle,
    Thank you for the work you've done on this.
    I'm not 100% sure where we should be putting the TDMS Flush function. I've attached a modified Waveform Acquisition and Logging on CompactRIO sample project based on a cRIO 9068 and NI 9234, as JJDFRD is using, and I still see the physical memory declining with each TDMS write until it gets down to about 4M (reserved for the system). Funnily enough, it seems to work fine and continues to log and write even after it reaches this bottom threshold. If I change the Free Space constant to delete files when the disk space reaches, say, 30% rather than the default value of 70%, I can see the physical memory bounces back after a TDMS write and older TDMS files are deleted.
    If you could please confirm where we should put the TDMS Flush function, that would be very helpful, and JJDFRD can try this out to see if it fixes his problem.
    Kind Regards,
    Stuart Little
    Applications Engineering

  • How to send more than one command at the same time to write buffer in VISA READ?

    Hi,
    I'm using the LabVIEW serial VI with a small modification for my serial communication with a shutter control unit. At present, I can send one command, CTSO1, to the write buffer. My instrument can read up to 5 commands. I want to send CTSO1 and CTSO2 simultaneously so that I can open two shutters at the same time. Note that the commands have carriage return constants as terminating characters. I'm enclosing the VI, which works for one command.
    Any help in this regard is greatly appreciated!
    Regards,
    Rajesh
    Attachments:
    VI_SHUTTER_JUNE30.vi ‏50 KB

    You cannot do two different simultaneous VISA writes over a serial bus. There is a single rx line on the PC's com port and a single rx line on the instrument. Your only hope is if the instrument allows you to chain commands so that the reception of the carriage return triggers the instrument to execute both at the same time. Maybe you can send CTS01 and CTS02 separated by a space or comma, as in the sketch below. The manual should tell you if that's possible, or maybe you need to ask the manufacturer.
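    If the instrument does accept chained commands, the idea looks roughly like this in Python with the pyvisa package; the resource name, baud rate, and the space separator are assumptions, and the attached LabVIEW VI would do the equivalent with a single VISA Write:
    import pyvisa

    rm = pyvisa.ResourceManager()
    inst = rm.open_resource("ASRL1::INSTR")   # hypothetical serial resource name
    inst.baud_rate = 9600                     # match the shutter controller's settings
    inst.write_termination = "\r"             # carriage return terminator, as in the VI

    # One write carrying both commands; the instrument parses them together.
    inst.write("CTS01 CTS02")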

  • How to write a perform in Sap Script

    Hi Guys,
    Can anyone let me know how to write a PERFORM statement in SAPscript?
    Thanks,
    Ramesh

    I just took this example from SAP Help
    =======================================
    Syntax in a form window:
    /: PERFORM <form> IN PROGRAM <prog>
    /: USING &INVAR1&
    /: USING &INVAR2&
    /: CHANGING &OUTVAR1&
    /: CHANGING &OUTVAR2&
    /: ENDPERFORM
    INVAR1 and INVAR2 are variable symbols and may be of any of the four SAPscript symbol types.
    OUTVAR1 and OUTVAR2 are local text symbols and must therefore be character strings.
    The ABAP subroutine called via the command line stated above must be defined in the ABAP report prog as follows:
    FORM <form> TABLES IN_TAB STRUCTURE ITCSY
                       OUT_TAB STRUCTURE ITCSY.
    ENDFORM.

  • How to Write BUFFER & Read TEXT with Encrypt file ?

    I'm using Windows Phone 8.1 RT.
    I have an issue:
    - I write an encrypted BUFFER to a file. Afterwards, I read the file as TEXT. It throws an exception: No mapping for the Unicode character exists in the target multi-byte code page. (//ERROR 2)
    - I write encrypted TEXT to a file. Afterwards, I read the file as a BUFFER. It throws an exception: The supplied user buffer is not valid for the requested operation. (//ERROR 1)
    Code: Write Buffer & Read Text.
    //Write Text
    string msg = EncryptText.Text;
    //ERROR 1 - Use 1 or 2
    await WriteTextAsync(this.file, msg);
    //ERROR 1

    //Read Buffer
    string msg;
    //ERROR 1 - Use 1 or 2
    IBuffer buffer = await ReadBufferAsync(this.file);
    StreamReader stream = new StreamReader(buffer.AsStream());
    msg = stream.ReadToEnd();
    //ERROR 1
    Code: Encrypt / Decrypt.
    public static string EncryptString(string msg)
    {
        var bufferMsg = CryptographicBuffer.ConvertStringToBinary(msg, BinaryStringEncoding.Utf8);
        var bufferMsgEncrypted = Encrypt(bufferMsg);
        var msgEncrypted = CryptographicBuffer.EncodeToBase64String(bufferMsgEncrypted);
        return msgEncrypted;
    }

    public static IAsyncAction WriteTextAsync(IStorageFile file, string msg)
    {
        return FileIO.WriteTextAsync(file, EncryptString(msg));
    }

    public static IBuffer Decrypt(IBuffer bufferMsg)
    {
        var key = CreateKey(KEY);
        var aes = SymmetricKeyAlgorithmProvider.OpenAlgorithm(SymmetricAlgorithmNames.AesCbcPkcs7);
        var symmetricKey = aes.CreateSymmetricKey(key);
        var bufferMsgDecrypted = CryptographicEngine.Decrypt(symmetricKey, bufferMsg, null);
        return bufferMsgDecrypted;
    }

    public static IAsyncOperation<IBuffer> ReadBufferAsync(IStorageFile file)
    {
        var buffer = FileIO.ReadBufferAsync(file);
        Task<IBuffer> result = Task.Run<IBuffer>(async () =>
        {
            var readBuffer = await buffer;
            return Decrypt(readBuffer);
        });
        return result.AsAsyncOperation();
    }
    Link demo code :
    https://drive.google.com/file/d/0B_cS3IYO936_akE0cmI4bExJMjh0RU9qR3RvWDBWWElZWC1z/view?usp=sharing

    Please provide a working app so this can be tested. You can upload to OneDrive and share a link here.
    Matt Small - Microsoft Escalation Engineer - Forum Moderator
    If my reply answers your question, please mark this post as answered.
    NOTE: If I ask for code, please provide something that I can drop directly into a project and run (including XAML), or an actual application project. I'm trying to help a lot of people, so I don't have time to figure out weird snippets with undefined objects and unknown namespaces.

  • "Write Buffer is Empty" Error Message

    Hello All,
    I am using an MC USB-2416-4A0 controller and DASYLab V11. I am inputting 14 analog temperatures (TC Type T) and 6 voltages. I have two PID-controlled analog outputs. I use a variety of displays.
    When I run the program, it launches and presents the first set of values on the displays. As soon as the displays change to real values, the program reports an error, "Write Buffer is Empty".
    I have searched both the MC and DASYLab libraries and this forum's historical posts with no solution. Any suggestions as to what I should do to eliminate the error?
    Also, if it is not asking too much... How do you get the clock display to display in 12hr format. All we can get is military time format.
    I have attached a copy of the program.
    Thanks,
    Bill...
    Head Cook
    Southern Smoke BBQ Team
    Auburn, Alabama.
    Attachments:
    Smoker 3.DSB ‏102 KB

    Hello Smoker,
    Do you get the error message "Write Buffer is Empty" from the MCC Analog Output module "Dev1-Ao0"? If so, please check whether you are using a timer-based analog output mode (fixed output rate) for your device. This type of output does not make sense for PID control. You should use a software-polled analog output mode with a block size of 1.
    "... Also, if it is not asking too much... How do you get the clock display to display in 12hr format. All we can get is military time format. ..." - Do you mean the clock example worksheet for the 12-hour format?
    Best regards,
    MHa

  • Is it possible we can write a perform statement in BADI?

    Hi ,
    Is it possible to write a PERFORM statement in a BAdI? Can anyone please let me know.
    Thanks.

    Sure, it is possible; you just have to reference the program in which the FORM exists. Since the call is in a method, where do you put the actual FORM? You need to put it in a separate program or subroutine pool; then you can call it from within the method, for example:
    perform some_form in program zsubroutine_pool.
    Regards,
    Rich Heilman

  • TDMS Write duplicates values after every run and works fine only if the probe tool is on

    Hello Everyone!,
    I am facing a very weird problem with my VI. The TDMS Write in my "Processing" case/state records the processed data for each channel of every run. However, instead of having a different set of data for every run, I get the same set of data in all rows. It basically just duplicates the first set of data in every row for each run. The weird thing is that if I use the probe tool on the wires going into the TDMS Write in that case, it records the actual data in the respective rows. So with the probe tool it looks like the right data is going in and the correct data is recorded; without it, it just duplicates the first row. Can anyone please figure out what is going on?
    I have attached the VI. While at it, please feel free to provide feedback/suggestions on my design of the VI, so that I can work on making it better.
    I appreciate all the help in advance!
    -Nilesh
    Kudos are (always) welcome for the good post. :-)
    Attachments:
    Glucose Sensor Developmental VI.zip ‏77 KB

    I figured it out. In the processing case, I was reading from the same group every time, so all the processed data was the same. All I had to do was get the current group value from the property node and wire it to the TDMS Read function.
    Kudos are (always) welcome for the good post. :-)

  • Java NIO - copy-on-write buffer

    This is regarding peculiar behavior of a Java program on Linux-IA64 platform.
    I have a Java program that uses MappedByteBuffer. For the same file, there are three buffers - read only, read/write and copy-on-write (cow)
    First, change contents of cow buffer and print all buffers. The change is reflected only in cow buffer (not in ro or rw) - as expected.
    Then, change contents of r/w buffer and print all buffers. The change is NOT observed in cow buffer, but reflected in ro and rw buffers (for Linux-IA64). I tried the same program in Linux-X86 and the change is reflected in all buffers (including cow buffer).
    I used JDK 1.4.2_03 for compiling and running. The doc for MappedByteBuffer says
    "The content of a mapped byte buffer can change at any time, for example if the content of the corresponding region of the mapped file is changed by this program or another. Whether or not such changes occur, and when they occur, is operating-system dependent and therefore unspecified."
    Is this behaviour acceptable? Is this observed in any other platform?
    I think Java NIO leaves it to the O/S and JVM is not doing anything here (as different from java.io). This could probably result in inconsistent behaviour across platforms (though it improves performance)
    Please let me know your thoughts.

    My thought is that you have posted this to the wrong forum.

  • USB 6009 and TDMS write

    Dear all!
    I'm using a USB-6009 device to acquire data from a physical channel. In the following VI, the number of samples to be acquired is set to 1000, with a 100 Hz rate. Therefore, as I understand it, the number of samples read inside the while loop where DAQmx Read is placed should be 10 for each iteration. However, I've tried to store this information in a TDMS file and I'm getting fewer samples than expected. I wouldn't like to lose any data.
    I've read elsewhere that the USB-6009 is a software-timed device, so can this be the reason for this anomaly? Moreover, if I analyze the acquired samples using DIAdem software, I see some samples are being repeated. In this case, am I overwriting some data and reading past values?
    If I need to replace my data acquisition device with a hardware-timed one, which one should I get?
    Thank you very much in advance,
    Miren  
    Attachments:
    USB6009_TDMS.vi ‏25 KB

    When you run the sample clock in "Continuous Samples" mode, the # of samples only determines the buffer size. This is outlined in the help for that function.
    As a result, you have nothing pacing the internal loop: if there are samples available when it reaches the DAQmx Read, it passes out as many as are available in the buffer. You might get 1, 2, or 100; it depends on how long the other operations in the loop take.
    In general, DAQmx loops like this can be paced either by time or by # of samples. It looks like you want sample pacing: wire the number of samples you want to collect to the "# of samples per channel" input of the DAQmx Read. Then the loop will wait at the DAQmx Read until that many samples are ready, and then it will proceed.
    -Barrett
    CLD
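    A rough Python analogue of this sample-paced loop, using the nidaqmx package rather than the LabVIEW DAQmx VIs: the device name, channel, and iteration count are assumptions, and the TDMS logging step is left as a comment.
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    RATE = 100.0            # Hz, as in the original VI
    SAMPLES_PER_READ = 10   # read exactly 10 samples per loop iteration

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")      # hypothetical device/channel
        task.timing.cfg_samp_clk_timing(RATE,
                                        sample_mode=AcquisitionType.CONTINUOUS,
                                        samps_per_chan=1000)  # buffer size hint only
        for _ in range(100):  # stands in for the VI's while loop and stop button
            # Blocks until SAMPLES_PER_READ samples are available, so the loop is
            # paced by sample count and no data is skipped or read twice.
            data = task.read(number_of_samples_per_channel=SAMPLES_PER_READ)
            # ... append `data` to the TDMS file here ...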

  • How to write a perform with dynamic internal table

    Hi to all experts,
    I have to read infotypes 2001, 2002, and 2003 with the same PERNR, begin date, and end date. I am calling HR_READ_INFOTYPE three times.
    Can I write a single PERFORM and call it three times? How do I pass the different tables (2001, 2002, 2003)?

    try the below code...
      DATA: w_subrc TYPE sy-subrc.
      DATA: w_infty(5) TYPE  c.
      data: w_string type string.
      FIELD-SYMBOLS: <f1> TYPE table.
      FIELD-SYMBOLS: <f1_wa> TYPE ANY.
      DATA: ref_tab TYPE REF TO data.
      CONCATENATE 'P' infty INTO w_infty.
      CREATE DATA ref_tab TYPE STANDARD TABLE OF (w_infty).
      ASSIGN ref_tab->* TO <f1>.
    * Create dynamic work area
      CREATE DATA ref_tab TYPE (w_infty).
      ASSIGN ref_tab->* TO <f1_wa>.
      IF begda IS INITIAL.
        begda = '18000101'.
      ENDIF.
      IF endda IS INITIAL.
        endda = '99991231'.
      ENDIF.
      CALL FUNCTION 'HR_READ_INFOTYPE'
        EXPORTING
          pernr           = pernr
          infty           = infty
          begda           = begda
          endda           = endda
        IMPORTING
          subrc           = w_subrc
        TABLES
          infty_tab       = <f1>
        EXCEPTIONS
          infty_not_found = 1
          OTHERS          = 2.
      IF sy-subrc <> 0.
        subrc = w_subrc.
      ENDIF.

  • Export to shared buffer performance Implications

    All,
    I am using the following statement in one of the programs to export an internal table
    EXPORT it_dtab TO SHARED BUFFER indx(st) ID 'ZMME'.
    I would like to know the performance implications of using this.

    Hello,
    You can check the current memory limit set for the SAP shared buffer in transaction RZ11.
    Enter the profile parameter rsdb/obj/buffersize and check the field 'Current Value'.
    You could discuss with your Basis counterpart whether there is a need to increase the shared memory space, which is generally not favourable, as the performance of other SAP applications can suffer as a result.
    Regards
    Dedeepya C

  • What is the status of fixing nio Buffer performance?

    The performance of nio Buffers is very slow in comparison to primitive arrays. Does anyone know the status of getting this fixed? For performance-sensitive code, Buffers are currently unacceptable.
    Several bugs were opened regarding this issue, but they have all mysteriously been closed without resolution.
    (Bug ID: 4411600, http://developer.java.sun.com/developer/bugParade/bugs/4411600.html)
    Any insight would be appreciated.

    Here are numbers using the code from the Bug reference I posted above.
    (Bug ID: 4411600, http://developer.java.sun.com/developer/bugParade/bugs/4411600.html)
    The only change I made was to the main() function so I could run tests multiple times to see if Hotspot would kick in at some point.
    Here is the change I made. Replaced the original main() function with the following:
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    public static void main(String[] args) {
        for (int x = 0; x < 5; x++) {
            System.gc();
            System.out.println();
            System.out.println(">>>> run #" + x);
            run();
        }
    }

    public static void run() {
        timeArray();
        time("heap", ByteBuffer.allocate(SIZE));
        time("direct", ByteBuffer.allocateDirect(SIZE));
    }
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    Btw, I am running this under Win2k.
    The results follow.
    D:\dev\parse>java -server Bench3
    run #0
    array 30
    array rev 80
    heap 51
    heap rev 90
    direct 601
    direct rev 401
    run #1
    array 40
    array rev 40
    heap 571
    heap rev 661
    direct 671
    direct rev 290
    run #2
    array 41
    array rev 40
    heap 550
    heap rev 671
    direct 591
    direct rev 291
    run #3
    array 40
    array rev 50
    heap 551
    heap rev 661
    direct 590
    direct rev 300
    run #4
    array 50
    array rev 51
    heap 561
    heap rev 651
    direct 581
    direct rev 291
    D:\dev\parse>java -client Bench3
    run #0
    array 181
    array rev 190
    heap 380
    heap rev 390
    direct 791
    direct rev 701
    run #1
    array 180
    array rev 180
    heap 861
    heap rev 802
    direct 731
    direct rev 711
    run #2
    array 180
    array rev 180
    heap 1202
    heap rev 721
    direct 721
    direct rev 971
    run #3
    array 200
    array rev 180
    heap 741
    heap rev 731
    direct 861
    direct rev 821
    run #4
    array 180
    array rev 210
    heap 911
    heap rev 721
    direct 741
    direct rev 741
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    But after taking a closer look at the code I realized that both the heap and direct buffers were being run through the same sum() and reverse() methods, so maybe Hotspot's inlining policy avoids keeping more than one specialized version of a function cached?
    So I re-ran the benchmark, but only running time("heap"...) and time("direct",...) one at a time. I will not bother posting the numbers, but this time array, heap, and direct were all roughly equivalent.
    So depending on the scenario, maybe it is not as bad as I thought?
    To try another variant, I replaced all calls to bb.get(j) with bb.get(), thinking this may be an opportunity for Buffer to be faster than array, since no start-of-buffer bounds check should be necessary (can only iterate forward).
    But even running heap and direct separately, the test yielded slow numbers that suggest no inlining was performed again?
    So I'm not sure what to think. I would like to use Buffers over primitive arrays, but I have no certainty about the performance characteristics of Buffers. They may run fast, or they may run very slowly, depending on the scenario and which functions are used.
    Here are the numbers after replacing calls bb.get(j) with bb.get().
    D:\dev\parse>java -server Bench3
    run #0
    array 41
    array rev 80
    heap 271
    heap rev 381
    run #1
    array 40
    array rev 50
    heap 231
    heap rev 321
    run #2
    array 40
    array rev 40
    heap 241
    heap rev 321
    run #3
    array 40
    array rev 50
    heap 231
    heap rev 321
    run #4
    array 40
    array rev 40
    heap 231
    heap rev 320

  • Code Analysis rule C6386 seems to report many false Write Buffer Overrun messages

    Hi,
    Why does Code Analysis complain about the following piece of code?
    We have many loops where the same issue is reported, where we loop on (int i = 0; i < size; i++):
    _TCHAR * Name = new _TCHAR[String.GetLength() + 2];
    Name[0] = LanguageKey;
    for (i = 0; i < String.GetLength(); i++)
        Name[i + 1] = String.GetAt(i);
    Name[i + 1] = 0x00;
    I get the following details:
    C6386 Write overrun
    Buffer overrun while writing to 'Name': the writable size is '(String.public: int __cdecl ATL::CSimpleStringT<char,0>::GetLength(void)const ()+2)*1' bytes, but '4' bytes might be written.
    WinRoute, gr_configuration.cpp, line 453
    'Name' is an array of 3 elements (3 bytes) (line 446)
    Enter this loop, (assume 'i<String.GetLength()') (line 450)
    Continue this loop, (assume 'i<String.GetLength()') (line 450)
    'i' may equal 2 (line 450)
    Exit this loop, ('i<String.GetLength()' is false) (line 450)
    Invalid write to 'Name[3]', (writable range is 0 to 2) (line 453)
    I think that if 'i' may be 2, then String.GetLength() is at least 3 (because i < String.GetLength() in the loop condition). So the array has a length of at least (3 + 2) * sizeof(_TCHAR), making it writable at least up to Name[3] (and even Name[4]).
    Thanks, 

    It seems to me Code Analysis is wrong.
    You can also try single stepping using VS debugger, and check that index (i+1) at the end of the loop is just fine and in-bounds.
    Anyway, I wonder why you don't just use string concatenation operations (i.e. operator+=) instead of writing manual concatenation code?
    If you need a raw character pointer from the built string (e.g. for some legacy C API interop), you can always call CString::GetString() to get it (or use implicit CString conversion operator).
    Giovanni

  • DVD writer buffer underrun

    My DVD drive in my G5 tower packed in recently, so I bought a Pioneer DVR-115D to replace it. It writes CDs, but won't write a dvd. Every time I try, the system returns a buffer underrun error. I have tried four different media brands, +R and -R, have checked the jumper setting and am about to chuck it out of a window...
    Using Toast 8, and the firmware of the drive is 1.12.
    Any ideas, anyone?
    Thanks in anticipation....

    With modern media, 4X is probably going to be the slowest you can burn at. Even at a slow speed of 4X, the drive is burning over 5 MB per second. The DVR-115D has a buffer of 2 MB, so the data stream to your burner from the source location can only be interrupted for less than half a second before the buffer runs out of data and you get the buffer underrun error.
    I don't think Toast 9 would provide any assistance in this area. This issue is really a hardware problem.
    Where is the data that you're trying to burn stored? Local hard drive, shared folder on a network, etc? What else are you working on while burning the disc? What other programs are running?
    You shouldn't be getting a buffer underrun error unless your source is very slow, prone to hiccups, or being used heavily during the burn process.
    There is a newer firmware version for that drive, version 1.18. I'm not sure it would help, but it's certainly possible.
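    For reference, a quick check of those numbers in Python (the 1x figure is the nominal DVD data rate, an assumption; the buffer size is from the post above):
    dvd_1x_mb_per_s = 1.385             # nominal 1x DVD rate in MB/s
    burn_rate = 4 * dvd_1x_mb_per_s     # ~5.5 MB/s at 4x, i.e. "over 5 MB per second"
    buffer_mb = 2.0                     # DVR-115D buffer size
    print(buffer_mb / burn_rate)        # ~0.36 s before the 2 MB buffer drains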
