ARGB to RGB, array processing performance

I'm trying to use a 2D picture control to display rapidly changing data - I need a decent frame rate.  I'm using the Cairo graphics library to generate the bitmap I want to display and then using a dll call to LabVIEW:MoveBlock to get it to display.  The bitmap is fairly large (720x1366) but Cairo and the MoveBlock seem pretty efficient and their performance is fine.
The problem is that I need to convert the frame (a 1D array of U32, representing ARGB) to an array of U8 representing R then G then B. This is then converted to a string and sent to the 2D picture control to display.
I'm using this VI to do the conversion.
The ARGB and RGB arrays are preallocated.  However, the performance still sucks.  At the moment I'm getting about 16 frames a second and this section of my main loop is taking four times longer than everything else put together.  All I'm trying to do is strip out the alpha channel!  I've tried using masking and bitshifts rather than the split number blocks - no benefit.  I've tried pulling back the original frame as an array of U8 (A then R then G then B) and then using the decimate array and interleave array blocks to filter out the A bytes.  It was slower.
Any ideas?
Luke

The array going into the shift register is already preallocated to the right size (number of pixels * 3).  The ARGB array is sized to the number of pixels.  I've tried various splitting solutions, but it seems the cost of resizing the array into a linear array so I can convert it into a string to send it to the picture control is too great. 
Here's the code in context:  The ARGB array is grabbed using MoveBlock, processed to just get the RGB values and then formatted to be sent to the picture control. The picture control is updated after the frame blank is detected, which just about gives flicker-free updating. The slow bits seem to be processing the ARGB array and passing the array to the picture control. I've tried grabbing the ARGB as an array of A, R, G and B (i.e. bytes) but this doesn't yield any speed-up. I've also tried not processing it at all and telling the picture control to expect a color depth of 32 rather than 24, and whilst this works (not documented... Grr NI Grr) it isn't any faster and gets the colours wrong...
All I want to do is put an ARGB array on the screen quickly... how hard can it be??
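For anyone wanting the per-pixel work spelled out, here's a minimal sketch (written in Java purely as illustration, not the LabVIEW diagram itself), assuming each U32 is packed as 0xAARRGGBB and the output is the flat R,G,B byte array described above:
    // Strip the alpha byte from each packed ARGB pixel into a preallocated RGB buffer.
    static byte[] argbToRgb(int[] argb, byte[] rgb /* preallocated to argb.length * 3 */) {
        int j = 0;
        for (int px : argb) {
            rgb[j++] = (byte) (px >>> 16); // R
            rgb[j++] = (byte) (px >>> 8);  // G
            rgb[j++] = (byte) px;          // B
        }
        return rgb;
    }
The arithmetic itself is just shifts and casts, which matches the observation above that the real cost is in the array resizing and the conversion to a string for the picture control rather than in splitting the numbers.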

Similar Messages

  • RGB array to Image

    I have a MIDlet that processes an image into an RGB array; here is the code:
         int[] imgRgbData = new int[width * height];
         img.getRGB( imgRgbData, 0, width, 0, 0, width, height );
         ByteArrayOutputStream baos = new ByteArrayOutputStream();
         DataOutputStream dos = new DataOutputStream( baos );
         for( int i = 0; i < imgRgbData.length; i++ )
                   dos.writeInt( imgRgbData[i] );
         byte[] imageData = baos.toByteArray();
    After that I send the RGB array to a servlet from the MIDlet; here is the code:
         con =(HttpConnection)Connector.open("http://" + server + "/" + project + "/addReport");
         con.setRequestMethod(HttpConnection.POST);
         //con.setRequestProperty("User-agent", "Profile/MIDP-2.0 Configuration/CLDC-1.1");
         con.setRequestProperty("Content-Language", "es-ES");
         //con.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
         con.setRequestProperty("Accept", "application/octet-stream");
         ByteArrayOutputStream baos = new ByteArrayOutputStream();
         DataOutputStream dos = new DataOutputStream( baos );
         dos.writeUTF(tipoReporte);
         dos.writeUTF(usuario);
         dos.writeUTF(comentario);
         dos.writeUTF(direccion);
         dos.writeInt(bin_image.length);
         dos.write(bin_image, 0, bin_image.length);
         byte[] datos = baos.toByteArray();
         con.setRequestProperty("Content-Length", "" + datos.length);
         os = con.openOutputStream();
         os.write(datos);
         os.close();
         dos.flush();
         dos.close();
    My problem is that I want to save this array to a SQL Server 2005 database as an image type using a stored procedure. There is a web application in .NET that shows this image as a JPEG. I don't know how to pass the array to the stored procedure.
    I tried setBytes(byte[]) but the web application does not display the image. Then I tried to pass an existing image from my local drive using this:
         File file = new File("c:\\image.jpg");
         FileInputStream fis = new FileInputStream(file);
    and then use:
         cstmt.setBinaryStream(5, fis, (int)file.length());
    That works fine, and the .NET web application displays the image (JPEG) properly, but I don't know how to process the RGB array and then pass it to the stored procedure.
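    One possible direction (a sketch under assumptions, not a confirmed fix): the .NET viewer expects JPEG bytes, so storing the raw writeInt() pixel dump will not display. On the servlet side the received ints could be rebuilt into a BufferedImage and encoded to JPEG before being handed to the stored procedure:
         import javax.imageio.ImageIO;
         import java.awt.image.BufferedImage;
         import java.io.ByteArrayOutputStream;
         import java.io.IOException;
         // Sketch: turn the raw RGB ints sent by the MIDlet into JPEG bytes that the
         // existing setBytes()/setBinaryStream() call and the .NET page can handle.
         static byte[] rgbIntsToJpeg(int[] pixels, int width, int height) throws IOException {
             BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
             img.setRGB(0, 0, width, height, pixels, 0, width); // same layout getRGB() produced
             ByteArrayOutputStream out = new ByteArrayOutputStream();
             ImageIO.write(img, "jpg", out);
             return out.toByteArray();
         }
    The resulting byte[] can then be passed to the stored procedure exactly like the file-based test that already works.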


  • BPM - Process Performance Indicators and Process Monitoring

    Hi,
    In SAP BPM, is there any way to get reports or a dashboard with key PPIs (Process Performance Indicators)? What we are really interested in is knowing how many times a month a process or a specific task of a process has been run, how much time it took to complete the task, whether the task was completed on time, where we are in the process right now, etc.
    We have just started to look into SAP BPM, but my first impression is that it is more a modeling tool along with some task coordination. I feel like it's missing the key analytics to really drive innovation in our processes.
    Probably I'm wrong and I've missed something. Please let me know if there is a way to do that, if there are any workarounds, or if there are any SAP partners that offer a solution we could use along with SAP BPM.
    Thanks a lot!
    Martin

    Thanks for your feedback.
    We want to implement a SAP BPM scenario in a finance process for VAT tax reporting. So basically, our accountant needs to run some SAP transactions along with some manual outside steps. Then the supervisor will perform some checks and finally the tax manager needs to approve it.
    There are some interactions with SAP, but only for a small part of the process. What we want to achieve is to be able to see the key performance indicators for our process, but at the moment this is not delivered with SAP BPM. I've heard that this may come in the next release at the end of 2009, but in the meantime I'm wondering if other people have been able to implement some customization in NetWeaver or have found other alternatives to monitor their processes adequately.
    Thanks
    Martin

  • Array processing question

    Hello,
    I am somewhat new to JNI technology and am getting a bit confused about array processing.
    I am developing a C program that is called by a third party application. The third party application expects the C function to have a prototype similar to the following:
    int read_content(char *buffer, int length)
    It is expecting that I read "length" bytes of data from a device and return it to the caller in the buffer provided. I want to implement the device read operation in Java and return the data back to the C 'stub' so that it, in turn, can return the data to the caller. What is the most efficient way to get the data from my Java "read_content" method into the caller's buffer? I am getting somewhat confused by the Get<Type>ArrayElements routines and the concepts of pinning, copying and releasing with the Release<Type>ArrayElements routine. Any advice would be helpful... particularly good advice :)
    Thanks

    It seems to me that you probably want to call the Java read_content method using JNI and have it return an array of bytes.
    Since you already have a C buffer provided, you need to copy the data out of the Java array into this C buffer. GetPrimitiveArrayCritical is probably what you want:
    char *myBuf = (char *)((*env)->GetPrimitiveArrayCritical(env, javaDataArray, NULL));
    memcpy(buffer, myBuf, length);
    (*env)->ReleasePrimitiveArrayCritical(env, javaDataArray, myBuf, JNI_ABORT);
    Given a Java array of bytes javaDataArray, this will copy length bytes out of it and into the C array named buffer. I used JNI_ABORT as the last argument to ReleasePrimitiveArrayCritical because you do not change the array. But you could pass a 0 there also, there would be no observable difference in the behavior.
    Think of it as requesting and releasing a lock on the Java array. When you have no lock, the VM is free to move the array around in memory (and it will do so from time to time). When you use one of the various GetArray functions, this will lock the array so that the VM cannot move it. When you are done with it, you need to release it so the VM can resume normal operations on it. Releasing it also propagates any changes made back to the array. You cannot know whether changes made in native code will be reflected in the Java array until you commit the changes by releasing the array. But since you are not modifying the array, it doesn't make any difference.
    What is most important is that you get the array before you read its elements and that you release it when you are done. That's all that you really need to know :}
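    For completeness, a rough sketch of what the Java side being called might look like; the class name, method name and device API below are assumptions for illustration only:
         import java.io.IOException;
         import java.io.InputStream;

         public class DeviceReader {
             // Hypothetical method the native read_content stub would call via JNI;
             // it returns a byte[] whose contents are then copied into the C buffer.
             public static byte[] readContent(int length) throws IOException {
                 byte[] buf = new byte[length];
                 int off = 0;
                 try (InputStream in = openDevice()) {      // openDevice() is a placeholder
                     while (off < length) {
                         int n = in.read(buf, off, length - off);
                         if (n < 0) break;                  // device reported end of data
                         off += n;
                     }
                 }
                 return buf;
             }

             private static InputStream openDevice() throws IOException {
                 throw new IOException("device-specific; replace with the real device stream");
             }
         }
    The native stub would typically invoke this with CallStaticObjectMethod and then copy the returned jbyteArray into the caller's buffer with GetPrimitiveArrayCritical/memcpy as shown above.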

  • Increase Apply Process Performance

    Dear All,
    I want to know how I can increase Apply process performance in an Oracle Streams setup.
    I use Windows 2003 and Oracle 10g R2

    Check metalink Note:335516.1
    HTH...

  • Improving ODM Process Performance

    Hi Everyone,
    I'm running several workflows in the SQL Developer Data Miner tool to create my model. My data is around 3 million rows; to monitor the process I look at Oracle Enterprise Manager.
    From what I've seen in Oracle Enterprise Manager, most of the ODM processes from my modelling do not run in parallel, and sometimes my process does not finish in more than a day.
    Any tips/suggestions on how we can improve ODM process performance? By enabling parallelism on each process/query, maybe?
    Thanks

    Ensure that any input table used in modeling or scoring has its PARALLEL attribute set properly. Since mining algorithms are usually CPU bound, try to utilize whatever CPU power you have. The following might be a good starting point:
    ALTER TABLE myminingtable PARALLEL <Number of Physical Cores on your Hardware>;

  • GL CONSOLIDATION PROCESS PERFORMANCE

    Product: FIN_GL
    Date written: 2002-11-07
    GL CONSOLIDATION PROCESS PERFORMANCE
    ===================================
    Performance problems that can occur during GL Consolidation runs
    PURPOSE
    Slow GL Consolidation runs
    Explanation
    Consolidation mapping rules can be defined in two main ways.
    1. Segment Mapping Rule
    A rule stating that a particular segment value should be converted to a given account.
    Because the work is done at the segment level it has little impact on performance, so it is recommended to run the consolidation with segment rules wherever possible.
    2. Account Mapping Rule
    If a particular range of accounts must be mapped to accounts that cannot be expressed with a segment rule, use an account rule to convert that range of accounts.
    Note: when using account mapping rules, keep the rule ranges as small as possible.
    In particular, Oracle recommends not using segment mapping rules and account mapping rules at the same time.
    If timing the GL consolidation run with GL concurrent debugging (Bulletin #17744) shows that most of the time is spent in the account mapping step, make sure the GL_INTERFACE_N2 index on the GL_INTERFACE table is defined on exactly the following columns:
    1. request_id
    2. je_header_id
    3. status
    4. code_combination_id
    Example
    Reference Documents
    Bug No: 2632310

    Thanks for the reply Roger. I have a solution... and it's very quick (and I'm in a hurry, so apologies if this doesn't read well).
    Extra info first. My FLEX_VALUE column == SEGMENT2 values.
    In GL_SECURITY_PKG there is a query on a table called GL_BIS_SEGVAL_INT. This contains the segment_column_name, segment_value and parent_segment that my apps user can see.
    So, I'm looking for SEGMENT2 values from this table...
    select * from GL_BIS_SEGVAL_INT where SEGMENT_COLUMN_NAME = 'SEGMENT2'
    I join my original query to this query and... bingo, I have what I need!
    SELECT
        MY_TABLE.FLEX_VALUE,
        GL_BIS_SEGVAL_INT.SEGMENT_COLUMN_NAME,
        GL_BIS_SEGVAL_INT.SEGMENT_VALUE,
        GL_BIS_SEGVAL_INT.PARENT_SEGMENT
    FROM
        MY_TABLE MY_TABLE,
        GL_BIS_SEGVAL_INT GL_BIS_SEGVAL_INT
    WHERE
        GL_BIS_SEGVAL_INT.SEGMENT_VALUE = MY_TABLE.FLEX_VALUE
    AND GL_BIS_SEGVAL_INT.SEGMENT_COLUMN_NAME = 'SEGMENT2'
    returns only the flex_value/segment2 values my apps user has access to.
    regards,
    Joss.

  • Difference between ARIS Process Performance Manager and SAP BI

    Hi All,
    I am searching for an answer to the following question: when the business purpose is to measure the performance of, for example, a call center process, can the Process Performance Manager from ARIS replace SAP BI?
    Regards,
    Marcel

    Hi Marcel,
    If I can add to the comment of Ajay,
    I think that when your goal is to measure the performance of a process and analyze the root cause of performance problems, Process Performance Manager is best suited.
    Even if SAP BI could do it as well, I think that SAP BI is better suited for data analysis like financial reports, market studies, and so on.
    I think the baseline is that SAP BI is a broader BI solution, but ARIS PPM is better suited for process performance than SAP BI.

  • Can an EDQ Process perform looping?

    Hello,
    I want to create a looping process in EDQ. Here is the scenario..
    Reader based on a snapshot/data store.
    1. Process reads one record at a time from Reader.
    2. For each record,
          a. Call an external web-service (inserts the record in CRMOD, and returns Row-Id)**,
          b. Writes the record + CRMOD.Row_ID to a Writer (CSV file),
          c. Return to Reader and pick up the next record,
        Until No more records.
    **I have already created the custom script processor for the CRMOD web-service and it works.
    Is this type of flow even possible in EDQ ?
    Thanks in advance.
    Deepak Gopal.
    eVerge, LLC.

    Hi Nick,
    Thanks for your response. Yes I have written a process. The process has two processors. I have attached the dxi.
    1. Reader - Reads 100 records from snapshot (based on csv file)
    2. Script Processor - takes input (four fields), calls CRMOD query web service and performs query on the four fields, returns CRMOD data.
    The problem is that EDQ is flooding CRMOD and maxing out the concurrent session limit: the 100 records reach CRMOD so quickly that they end up creating too many concurrent sessions in CRMOD, which has a limit of 5 concurrent sessions for this particular account.
    So my thought was: don't process the next record in EDQ until the current transaction has completed (so we maintain only one concurrent session in CRMOD).
    Here is my script..
    addLibrary("http");
    function GetValue(content, name) {
      var value = "";
      var startPos = content.indexOf("<" + name + ">");
      if (startPos > -1) {
        var endPos = content.indexOf("</" + name + ">", startPos);
        value = content.substring(startPos + name.length + 2, endPos);
      }
      return value;
    }
    try {
      var result = new Array();
      var inputFN = input1[0];
      var inputMN = input1[1];
      var inputLN = input1[2];
      var inputEmail = input1[3];
      var inputTitle = input1[4];
      var url = "https://secure-slsomxuda.crmondemand.com/Services/Integration";
      var xmlHttp = new XMLHttpRequest();
      xmlHttp.open("POST", url, false); // false = synchronous session
      var request = "<?xml version='1.0' encoding='UTF-8' ?>" +
        " <soapenv:Envelope xmlns:soapenv='http://schemas.xmlsoap.org/soap/envelope/' xmlns:con='urn:crmondemand/ws/contact/' xmlns:con1='urn:/crmondemand/xml/contact'>" +
        " <soapenv:Header>" +
        "  <wsse:Security xmlns:wsse='http://schemas.xmlsoap.org/ws/2002/04/secext'>" +
        "   <wsse:UsernameToken>" +
        "    <wsse:Username>xxx</wsse:Username>" +
        "    <wsse:Password Type='wsse:PasswordText'>xxx</wsse:Password>" +
        "   </wsse:UsernameToken>" +
        "  </wsse:Security>" +
        " </soapenv:Header>" +
        " <soapenv:Body>" +
        "  <con:ContactWS_ContactQueryPage_Input>" +
        "   <con1:ListOfContact>" +
        "    <con1:Contact>" +
        "     <con1:ContactId></con1:ContactId>" +
        "     <con1:ContactEmail>='" + inputEmail + "'</con1:ContactEmail>" +
        "     <con1:ContactFirstName>='" + inputFN + "'</con1:ContactFirstName>" +
        "     <con1:JobTitle>='" + inputTitle + "'</con1:JobTitle>" +
        "     <con1:ContactLastName>='" + inputLN + "'</con1:ContactLastName>" +
        "     <con1:MiddleName>='" + inputMN + "'</con1:MiddleName>" +
        "    </con1:Contact>" +
        "   </con1:ListOfContact>" +
        "  </con:ContactWS_ContactQueryPage_Input>" +
        " </soapenv:Body>" +
        "</soapenv:Envelope>";
      xmlHttp.setRequestHeader("SOAPAction", "\"document/urn:crmondemand/ws/contact/:ContactQueryPage\"");
      xmlHttp.send(request);
      var response = "" + xmlHttp.responseXML;
      result = "";
      var startPos = response.indexOf("<Contact>");
      while (startPos > -1) {
        var endPos = response.indexOf("</Contact>", startPos);
        var record = response.substring(startPos, endPos);
        var conid = GetValue(record, "ContactId");
        result = conid;
        startPos = response.indexOf("<Contact>", endPos);
      }
      if (response.indexOf("SBL-ODU-01003") > -1) {
        result = "Session Limit Reached";
      }
    } catch (e) {
      result = "Error: " + e.toString();
    }
    output1 = result;
    Thanks,
    Deepak.

  • BufferedImage to RGB Array

    I realize there's an advanced imaging forum, but this doesn't seem like it should be an advanced topic. Plus this forum has more activity =)
    I have a 2000x1312 pixel JPG that I load into a BufferedImage. I want to have three arrays: R[2000][1312], G[2000][1312], B[2000][1312] that hold values from 0-255 corresponding to the image in RGB space. My program used to do this in about 4 seconds. I didn't change any code involving this process, but for some reason, it now takes 100+ seconds to run.
    The slow line of code is:
    pixels = originalImage.getRGB (0, 0, 2000, 1312, null, 0, 2000);
    pixels is an int[2000*1312], originalImage is a BufferedImage.
    I don't have the dimensions hardcoded like this on my end, but I think you get the idea.

    A BufferedImage is basically a ColorModel plus a WritableRaster.
    A WriteableRaster is a SampleModel plus a DataBuffer.
    DataBuffer is an abstract class with concrete implementations like DataBufferByte and DataBufferInt.
    These buffer classes store data in arrays of primitive component type.
    With that in mind, try this code:
    int[] accessData(BufferedImage bi) {
        WritableRaster r = bi.getRaster();
        DataBuffer db = r.getDataBuffer();
        if (db instanceof DataBufferInt) {
            DataBufferInt dbi = (DataBufferInt) db;
            return dbi.getData(); // direct reference to the backing int[], no copy
        } else {
            System.err.println("db is of type " + db.getClass());
            return null;
        }
    }
    This code does no copying of pixel data, so it should be very fast.
    Your data may be in a byte[], so you may need to adjust this code.
    Also, examine the size of the array. There may be some padding
    involved and so it may be longer than WxH.
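    To get from that packed int[] to the three per-channel arrays the question describes, a simple illustrative split looks like this (assuming 0xAARRGGBB/0x00RRGGBB packing and arrays indexed as R[x][y], as in the original post):
         // Split packed pixels (from getRGB or DataBufferInt.getData) into R/G/B planes.
         static void splitChannels(int[] pixels, int width, int height,
                                   int[][] r, int[][] g, int[][] b) {
             for (int y = 0; y < height; y++) {
                 for (int x = 0; x < width; x++) {
                     int px = pixels[y * width + x];
                     r[x][y] = (px >>> 16) & 0xFF;
                     g[x][y] = (px >>> 8) & 0xFF;
                     b[x][y] = px & 0xFF;
                 }
             }
         }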

  • Image Processing Performance Issue | JAI

    I am processing TIFF images to generate several JPG files from each one after applying image processing to it.
    Following are the transformations applied:
    1. Read the TIFF image from disk. The TIFF is available in the form of a PlanarImage object
    2. Scaling
         /* Following is the code snippet */
         PlanarImage origImg;
         ParameterBlock pb = new ParameterBlock();
         pb.addSource(origImg);
         pb.add(scaleX);
         pb.add(scaleY);
         pb.add(0.0f);
         pb.add(0.0f);
         pb.add(Interpolation.getInstance(Interpolation.INTERP_BILINEAR));
         PlanarImage scaledImage = JAI.create("scale", pb);
    3. Conversion of the planar image to a buffered image. This operation is done because we need a buffered image.
         /* Following is the code snippet used */
         bufferedImage = planarImage.getAsBufferedImage();
    4. Cropping
         /* Following is the code snippet used */
         bufferedImage = bufferedImage.getSubimage(artcleX, artcleY, 302, 70);
    The performance bottleneck in the above algorithm is step 3, where we convert the planar image to a buffered image before carrying out the cropping.
    The operation typically takes about 1120 ms to complete, and considering the data set I am dealing with this is a very expensive operation. Is there an alternative to the above-mentioned approach?
    I presume that if I can carry out the operation mentioned under step 4 above on a planar image object instead of a buffered image, I will be able to save considerable processing time, since step 3 (which seems to be the bottleneck) would no longer be required. I have also noticed that the processing time of the operation in step 3 is proportional to the size of the planar image object.
    Any pointers around this would be appreciated.
    Thanks,
    Anurag
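    A possible alternative worth trying (a sketch under assumptions, not a verified fix for this pipeline): JAI has a "crop" operation that works directly on a PlanarImage, so the full-frame getAsBufferedImage() in step 3 can be postponed until after the 302x70 region has been cut out:
         import java.awt.image.BufferedImage;
         import java.awt.image.renderable.ParameterBlock;
         import javax.media.jai.JAI;
         import javax.media.jai.PlanarImage;

         // Crop on the PlanarImage itself; variable names follow the snippets above.
         static BufferedImage cropWithJai(PlanarImage scaledImage, int artcleX, int artcleY) {
             ParameterBlock pb = new ParameterBlock();
             pb.addSource(scaledImage);
             pb.add((float) artcleX); // x origin of the region to keep
             pb.add((float) artcleY); // y origin
             pb.add(302f);            // width
             pb.add(70f);             // height
             PlanarImage cropped = JAI.create("crop", pb);
             // Only the small cropped region is materialised as a BufferedImage,
             // so the cost should scale with the crop size, not the full scaled image.
             return cropped.getAsBufferedImage();
         }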

    It depends on whether you want to display the data or not.
    PlanarImage (the superclass of RenderedOp) has a method that returns a Graphics object you can use to draw on the image. This allows you to do things like write on an image.
    PlanarImage also has a getAsBufferedImage that will return a copy of the data in a format that can be used to write to Graphics objects. This is used for simply drawing processed images to a display.
    There is also a widget called ImageCanvas (and ScrollingImagePanel) shipped with JAI (although it is not a committed part of the API). These derive from awt.Canvas/Panel and know how to render RenderedImage instances. This may use less copying/memory than getting the data as a BufferedImage and drawing it via a Graphics object. I can't say for sure though as I have never used them.
    Another way may be to extend JComponent (or another class) and customize it to use calls to PlanarImage/RenderedOp instances directly. This can help with large tiled images when you only want to display a small portion.
    matfud

  • Question on BPEL Process Performance

    Hello,
    We have a BPEL process reading the datafile through file adapter and upserting into DB using DB Adapter. Our requirement is
    If there are 10 records to process and two records (records 5 and 9) fail while inserting/updating for some reason (e.g. data type mismatch, column length mismatch, etc.), then at the end of the process you should see 8 records in the destination table and two records in the error table.
    I know there are solutions of this :
    *1) Multiple calls to DB:* Use a While loop in a BPEL process and Invoke DB adapter for each record and use exception handling(Catch all block).
    *2) Invoke Store Procedure:* to prevent multiple calls to DB, create a stored proc on DB side to iterate and insert the records and the stored proc should also return the IDs of failed records back as error response so that you can insert those failed records to a log table or in log files.
    Can you suggest which solution is best in terms of performance and why ??
    Also, we need to perform some business validation (e.g. NOT NULL check, date format check, etc.). Where should we perform this, at the DB level or the BPEL process level, and why?
    Thanks,
    Buddhi

    BPEL is a slow performer.
    Always call a stored procedure to do complex data processing.
    Hence go with the second approach.
    Error records:
    If you're going to log errors in the same database, insert the error details directly into the error table. Don't go back to BPEL.
    Application-specific validations should be handled in the application itself.
    --Prasanna

  • [Error ORABPEL-10039]: invalid xpath expression  - array processing

    hi,
    I am trying to process multiple xml elements
    <assign name="setinsideattributes">
    <copy>
    <from expression="ora:getElement('Receive_1_Read_InputVariable','BILL','/ns2:BILL/ns2:CMS1500['bpws:getVariableData('iterator')']/ns2:HEADER/ns2:SSN')"/>
    <to variable="ssn"/>
    </copy>
    </assign>
    where iterator is an index variable.
    I am getting this error:
    Error(48):
    [Error ORABPEL-10039]: invalid xpath expression
    [Description]: in line 48 of "D:\OraBPELPM_1\integration\jdev\jdev\mywork\may10-workspace\multixm-catch\multixm-catch.bpel", xpath expression "ora:getElement('Receive_1_Read_InputVariable','BILL','/ns2:BILL/ns2:CMS1500['bpws:getVariableData('iterator')']/ns2:HEADER/ns2:SSN')" specified in <from> is not valid, because XPath query syntax error.
    Syntax error while parsing xpath expression "ora:getElement('Receive_1_Read_InputVariable','BILL','/ns2:BILL/ns2:CMS1500['bpws:getVariableData('iterator')']/ns2:HEADER/ns2:SSN')", at position "77" the exception is Expected: ).
    Please verify the xpath query "ora:getElement('Receive_1_Read_InputVariable','BILL','/ns2:BILL/ns2:CMS1500['bpws:getVariableData('iterator')']/ns2:HEADER/ns2:SSN')" which is defined in BPEL process.
    [Potential fix]: Please make sure the expression is valid.
    Any information on how to fix this?
    thanks in advance

    check out this note here
    http://clemensblog.blogspot.com/2006/03/bpel-looping-over-arrays-collections.html
    hth clemens
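    For what it's worth, the parser error at position 77 points at the nested single quotes: the third argument is itself a single-quoted string, so the quotes around the bpws:getVariableData call terminate it early. A commonly suggested fix (an assumption here, not taken from the reply above) is to build the path with concat() instead of nesting quotes, e.g. ora:getElement('Receive_1_Read_InputVariable', 'BILL', concat('/ns2:BILL/ns2:CMS1500[', bpws:getVariableData('iterator'), ']/ns2:HEADER/ns2:SSN')), which is the array-indexing pattern the linked post discusses.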

  • How to make use of byte array to perform credit and debit in Wallet Applet

    Hi,
    Can anyone please explain to me how to use byte arrays to perform credit and debit operations on a 6-byte amount? Say, for example, the amount contains a 3-byte whole number, a 1-byte decimal dot and a 2-byte fractional part.
    It would be helpful if anyone could post a code snippet; that way I can understand it easily.
    Thanks in Advance
    Nara

    Hello
    As explained in the other topic, remember your high school years and how you learnt to add numbers.
    Then reproduce these algorithms for base-256 numbers.
    Fractional? Just google fixed-point math and floating-point math. The first one is easier, as the values are managed as integers.
    For example, you want to store 1,42€. Then you decide the real number to deal with is N*100, and you store 142. You get 2 decimal places of precision.
    In the computer world it's easier to store N*256 as it's just a byte shift.
    Then the addition operations are the same, and you just interpret the values as fractional numbers.
    regards
    sebastien
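    As a concrete illustration of the schoolbook approach described above, here is a small sketch (plain Java rather than Java Card, and the fixed-point choice of storing the amount as N*100 is an assumption) that adds two equal-length big-endian byte arrays with a carry:
         // Add `credit` into `balance`; both are big-endian base-256 numbers of equal length.
         static void addAmount(byte[] balance, byte[] credit) {
             int carry = 0;
             for (int i = balance.length - 1; i >= 0; i--) {     // least significant byte last
                 int sum = (balance[i] & 0xFF) + (credit[i] & 0xFF) + carry;
                 balance[i] = (byte) sum;                        // keep the low 8 bits
                 carry = sum >>> 8;                              // propagate the carry
             }
             if (carry != 0) {
                 throw new ArithmeticException("balance overflow"); // on-card this would be an ISOException
             }
         }
    Subtraction for the debit works the same way with a borrow instead of a carry, and the stored integer is only reinterpreted as a decimal amount when it is displayed.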

  • Array Processing in ABAP

    Hi all,
    I am a total newbie in ABAP.
    I need to process an array in ABAP.
    If it were in .NET C#, it would be like this:
    String[] strArr = new String[] { "A", "B", "C", "D", "E" };
    int index = -1;
    for (int i = 0; i < strArr.length; i++) {
       if (myData.equals(strArr[i])) {
            index = i;
            break;
       }
    }
    Can someone please convert the above code into ABAP?
    Urgent. Please help. <REMOVED BY MODERATOR>
    THank you.

    Hi,
    There is no concept of arrays in ABAP; we use internal tables to hold application data. You can access internal tables by index:
    READ TABLE itab INDEX l_index is the same as accessing an array element a(l_index).
    Instead use an Internal table to
    populate the values and read them ..
    say your Internal table has a field F1 ..
    itab-f1 = 10.
    append itab.
    clear itab.
    itab-f1 = 20.
    append itab.
    clear itab.
    itab-f1 = 30.
    append itab.
    clear itab.
    Now internal table has 3 values ...
    use loop ... endloop to get the values ..
    loop at itab.
    write :/ itab-f1.
    endloop.
    Also you can do this way:
    It is possible to declare a one-dimensional array in ABAP by using internal tables.
    EX.)
    DATA: BEGIN OF IT1 OCCURS 10,
    VAR1 TYPE I,
    END OF IT1.
    IT1-VAR1 = 10.
    APPEND IT1.
    IT1-VAR1 = 20.
    APPEND IT1.
    IT1-VAR1 = 30.
    APPEND IT1.
    LOOP AT IT1.
    WRITE:/ IT1-VAR1 COLOR 5.
    ENDLOOP.
    <REMOVED BY MODERATOR>
    Cheers,
    Chandra Sekhar.
