More-efficient keyboard

Anyone have a way to get a better onscreen keyboard going than the stock QWERTY one on the iPhone? I love the FITALY (fitaly.com) keyboard on my old Palm, but that company says Apple won't let them do a system keyboard. One would like to think that a Dvorak or some other non-19th-century layout would be available among the international keyboards, but I don't find one.
And yes, I've already tried the alternatives, e.g., TikiNotes, which takes me three times as long to type on as the QWERTY.
If not, this would be a great feature suggestion for Apple to incorporate.

Just to let everybody know, I am now 28 years old. I first learned how to type in typing class in junior high, using the QWERTY layout. The only nice thing I can say about QWERTY is that it's available on any computer you want to use, without any configuration.
Then I looked online for a better, more efficient way to type. That's when I learned about the Dvorak keyboard layout, about four years ago. I stuck with it for about two years, but I felt my right hand was doing a lot more typing than my left. It felt too lopsided for me. But that's just my opinion. I went on the hunt for something better than Dvorak, and I found the glorious Colemak keyboard layout.
I have been typing with it ever since. My hands are a lot more comfortable and I can type faster now. It took me a month to actually get comfortable with the layout. There is a Java applet on Colemak's website:
www.colemak.com/Compare
You can copy and paste a body of text and click Calculate, and it will analyze the typing and compare the three different keyboard layouts. I just hope Colemak becomes an ANSI standard the way Dvorak did. I hope that happens in the future.
I just want everybody to know there is a third option out there, and it's great. If Colemak ever goes away, I will go back to Dvorak. I will never go back to the QWERTY keyboard layout ever again.
Just wanted to give my two cents' worth.

Similar Messages

  • Implicit Join or Explicit Join...which is more efficient???

    Which is more efficient?
    An IMPLICIT JOIN:
    SELECT TableA.ColumnA1,
           TableB.ColumnB2
      FROM TableA,
           TableB
     WHERE TableA.ColumnA1 = TableB.ColumnB1
    Or... an EXPLICIT JOIN:
    SELECT TableA.ColumnA1,
           TableB.ColumnB2
      FROM TableA
     INNER JOIN TableB
        ON TableA.ColumnA1 = TableB.ColumnB1
    I have to write a pretty extensive query with many parts, and I just want to make sure it is as efficient as possible. Can I EXPLAIN this in SQL Navigator as well, to find out?
    Thanks in advance for your review; hoping for a reply.
    PSULionRP

    Alex Nuijten wrote:
    "The Partition Outer Join is very handy, but it's an Oracle-ism - Not ANSI ..."
    Ooh, "New thing learnt today" - check.
    "... but then again who cares? ;)"
    Oracle roolz! *{;-D
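    For what it's worth, the optimizer treats the two forms identically on any recent Oracle release, so the choice is about readability. A quick sketch of how to confirm that yourself with EXPLAIN PLAN (the same check SQL Navigator's Explain tool runs for you; TableA/TableB are the placeholder names from the post):
    EXPLAIN PLAN FOR
    SELECT TableA.ColumnA1, TableB.ColumnB2
      FROM TableA
     INNER JOIN TableB
        ON TableA.ColumnA1 = TableB.ColumnB1;
    -- then display the plan the optimizer actually chose
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    Run it once with each form of the join; the two plans should come out the same.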

  • Linking from one PDF to another: Is there a more efficient way?

    Some background first:
    We make a large catalog (400 pages) in InDesign, and it's updated every year. We are a wholesale distributor and our pricing changes, so we also make a price list with price ref #s that correspond with the #s printed in the main catalog. Last year we also made the catalog interactive, so that a PDF of it could be browsed using links and bookmarks. This is not too difficult using InDesign and making any adjustments in the exported PDF. Here is the part that becomes tedious, and is especially so this year:
    We also set up links in the main catalog that go to the price list PDF, opening the page with the item's price ref # and prices. Here's my biggest issue: I have not found any way to do this except making links one at a time in Acrobat Pro (and setting various specifications like focus and action and which page in the price list to open). Last year this wasn't too bad because we used only one price list; it still took some time to go through and set up 400-500 links individually.
    This year we've simplified our linking a little by putting only one link per page, but that is still 400 links, and this year I have 6 different price lists (price tiers...) to link to the main catalog PDF. That's in the neighborhood of 1200-1500 rounds of double-clicking the link (button) to open Button Properties, clicking the Actions tab, clicking Add... "Go to page view", setting the link to the other PDF's page, clicking Edit, changing Open in to "New Window", and setting Zoom. This isn't a big deal if you only have a few Next/Previous/Home kind of buttons, but it's huge when you have hundreds of links. Surely there's a better way?
    Is there any way in Acrobat or InDesign to more efficiently create and edit hundreds of links from one PDF to another?
    If anything is unclear and my question doesn't make sense, please ask. I will do my best to help you answer my questions.
    Thanks

    George, I looked at the article about FDF files and it sounds interesting. I've gathered that I could manipulate the PDF's links by making an FDF file and importing it into the PDF, correct?
    Now I wondered: can I export an FDF from the current PDF, change what's in there, and import it back into the PDF? I've tried this (Forms > More Form Options > Manage Form Data > Export Data) and then opened the FDF in a text editor, but I see nothing related to the document's links... I assume this is because that export only covers 'form' data to begin with - but is there a way to export something with link data like that described in the article you linked?
    Thanks
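    For completeness, one route not discussed above: Acrobat Pro can also create links programmatically from its JavaScript console, which may beat clicking through Button Properties hundreds of times. This is only a sketch under loud assumptions - the link rectangle, the loop over every page, and the target path are all invented, and mapping each catalog page to the right price-list page would still have to come from your own data:
    // Hypothetical batch-linking sketch for the Acrobat JavaScript console.
    for (var p = 0; p < this.numPages; p++) {
        var box = this.getPageBox("Crop", p);              // [left, top, right, bottom]
        // one link strip, 20 pt tall, across the top of each page
        var lnk = this.addLink(p, [box[0], box[1], box[2], box[1] - 20]);
        lnk.setAction("app.openDoc('/c/pricelists/tier1.pdf');");
    }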

  • A more efficient way to assure that a string value contains only numbers?

    Hi ,
    I'm using Oracle 9.2.0.6.
    I was curious to know if there was any way I could write a more efficient query to determine if a string value contains only numbers.
    Here's my current query. This SQL is from a subquery in a JOIN clause.
    select distinct cta.CUSTOMER_TRX_ID, to_number(cta.SALES_ORDER) SALES_ORDER
      from ra_customer_trx_lines_all cta
     where length(cta.SALES_ORDER) = 6
       and cta.SALES_ORDER is not null
       and substr(cta.SALES_ORDER,1,1) in ('1','2','3','4','5','6','7','8','9','0')
       and substr(cta.SALES_ORDER,2,1) in ('1','2','3','4','5','6','7','8','9','0')
       and substr(cta.SALES_ORDER,3,1) in ('1','2','3','4','5','6','7','8','9','0')
       and substr(cta.SALES_ORDER,4,1) in ('1','2','3','4','5','6','7','8','9','0')
       and substr(cta.SALES_ORDER,5,1) in ('1','2','3','4','5','6','7','8','9','0')
       and substr(cta.SALES_ORDER,6,1) in ('1','2','3','4','5','6','7','8','9','0')
    This is a column where I'm finding A-Z and a-z characters, plus '/' and '-' characters, in all 6 positions, and there are also values longer than 6 characters; that's what the length(cta.SALES_ORDER) = 6 is for. Also, of course, some cells are NULL.
    So the question is: is there a more efficient way to keep only the values in this field that are 6-character numbers, or is what I have the best I can do?
    Thanks,

    I appreciate all of your very helpful workarounds. The cost is a little better in all cases than with my original WHERE clause.
    To address the design discussion that's popped up from this question, I can say a few things that should clear up my situation, at least.
    First of all, this custom quoting, purchase order, and sales order entry system WAS written by a bunch of 'bad' coders who didn't document their work and then left. We don't even have an ER diagram.
    The whole project that I'm only a small part of is literally trying to put Humpty Dumpty together again, and then move it from a bad custom solution into Oracle Applications.
    We're rebuilding, documenting, and doing ETL. This is one of your prototypical projects from hell.
    It's a huge database project, so we're taking small bites at a time. Hopefully, somewhere right before Armageddon hits, this thing will be complete.
    But until then,..., well,..., you know the drill.
    Thanks Again.
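    For readers on the same release: 9.2 predates the REGEXP_* functions (those arrived in 10g), so one classic workaround - my suggestion, not quoted from the replies above - is a TRANSLATE predicate that deletes every digit and checks that nothing is left over:
    select distinct cta.CUSTOMER_TRX_ID,
           to_number(cta.SALES_ORDER) SALES_ORDER
      from ra_customer_trx_lines_all cta
     where length(cta.SALES_ORDER) = 6
       -- 'x' maps to 'x', the digits map to nothing; an all-digit
       -- string therefore translates to '' which Oracle treats as NULL
       and translate(cta.SALES_ORDER, 'x0123456789', 'x') is null;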

  • I need a more efficient method of transferring data from RT in a FP2010 to the host.

    I am currently using LV6.1.
    My host program currently uses DataSocket to read and write data to and from a FieldPoint 2010 system; my controls and indicators are defined as DataSocket items. On the FieldPoint, an RT loop talks to a communication loop using RT FIFOs, and the communication loop uses Publish to send and receive via the DataSocket indicators and controls in the host program. I am running out of bandwidth getting data to and from the host, and there is not very much data: the RT program includes 2 PIDs and 2 filters, with 10 floats going to the host and 10 floats coming back. The desired time-critical loop time is 20 ms; the actual loop time is about 14 ms. Data moves back and forth between host and FieldPoint several times a second without regularity (not a problem). If I add a couple more floats in each direction, the communication drops to once every several seconds (too slow).
    Is there a more efficient method of transferring data back and forth between the host and the FieldPoint system?
    Will LV8 provide faster communications between the host and the FieldPoint system? I may have the option of moving up.
    Thanks,
    Chris

    Chris, 
    Sounds like you might be maxing out the CPU on the FieldPoint.
    DataSocket is considered a pretty slow method of moving data between hosts and targets, as it has quite a bit of overhead associated with it. There are several things you could do. One: instead of using a DataSocket item for each float you want to transfer (which I assume you are doing), try using an array of floats and one DataSocket transfer for the whole array. This is often quite a bit faster than calling a Publish VI for many different variables.
    Also, as Xu mentioned, a raw TCP connection would be the fastest way to move data. I would recommend taking a look at the TCP examples that ship with LabVIEW to see how to use these effectively.
    LabVIEW 8 introduced the shared variable, which, when network-enabled, makes data transfer very simple and is quite a bit faster than a comparable DataSocket transfer. While faster than DataSocket, shared variables are still slower than a raw TCP connection outright, but they are much more flexible. They can also function in the RT FIFO capacity and clean up your diagram quite a bit (while maintaining the RT FIFO functionality).
    Hope this helps.
    --Paul Mandeltort
    Automotive and Industrial Communications Product Marketing

  • More efficient way to extract number from string

    Hello guys,
    I am using this REGEXP_REPLACE to extract the numbers from a string:
    SELECT regexp_replace(regexp_replace(regexp_replace('  !@#$%^&*()_+= '' + 00 SDFKA 324 000 8702 234 |  " ', '[[:punct:]]', ''), '[[:space:]]', ''), '[[:alpha:]]', '')
      FROM dual;
    Is there a more efficient way to get this done?
    Regards,
    Fateh

    Or, with less writing, using the Perl-style \D (non-digit) class:
    SELECT regexp_replace('  !@#$%^&*()_+= '' + 00 SDFKA 324 000 8702 234 |  " ', '\D')
      FROM dual;
    REGEXP_REPLACE(
    ---------------
    003240008702234
    SQL>
    SY.
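    A side note, not from the thread: \D is one of the Perl-influenced shorthands added later than the base REGEXP support, so on releases where it isn't available the plain character class does the same job, still in a single pass:
    SELECT regexp_replace('  !@#$%^&*()_+= '' + 00 SDFKA 324 000 8702 234 |  " ', '[^0-9]')
      FROM dual;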

  • Are multiple processes more efficient in Solaris 2.6 ?

    I am running performance measurement tests in a Solaris 2.6 environment. The tests run one message at a time through my application and measure the response time. I found that my app will process the test message more quickly when multiple instances of the (single-threaded) app are running.
    This runs contrary to my intuition: since only one message is available in the message queue for processing at any one time, the performance should be the same whether one or several instances are running.
    Does Solaris 2.6 run programs more efficiently when multiple instances exist?
    I ran sar with various options, and some of the results vary, but I'm not sure which results are significant.
    Thanks -

    Your thinking is a little off.
    What you are seeing is an attribute of a multi-processing operating system. I'm going to explain this wrong, but hopefully the theme is correct. One characteristic is that a single process runs on the CPU for a specific time quantum. If the process uses all of its time quantum, it gets forced off the CPU and its priority gets raised - meaning more processes could potentially knock it off the CPU because their priority is better. If it voluntarily releases the CPU, its priority gets lowered - meaning it may get the CPU more often if and when it needs it.
    So maybe your CPU can physically handle 10 operations/sec. Because of scheduling topics (time quantums, interrupts, etc.) and the nature of single-threaded apps, that single-threaded process of yours may only get 2 operations/sec all by itself. By increasing the number of processes you run to, say, 5, you'll get your 10 operations/sec. Increasing it to 6 processes, you might get 10.5 operations/sec, but you've reached the point of diminishing returns and may start slowing things down.
    There are several good books on performance and high-level queueing theory which can explain this much better than I just did.
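    As for which sar results are significant here, these two views are the usual suspects for this symptom (a sketch; the sampling interval and count are arbitrary):
    sar -u 5 12    # CPU utilization: %usr/%sys/%wio/%idle over twelve 5-second samples
    sar -q 5 12    # run queue: runq-sz shows how many runnable processes are waiting
    If %idle stays near zero while runq-sz grows, the box is CPU-bound and adding instances will stop helping.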

  • Can we replace this SELECT query by more efficient code

    Can we replace this SELECT query with more efficient code?
    SELECT * FROM zv7_custord
         INTO TABLE G_T_ZV7_CUSTORD
         WHERE ( SENDER in S_SENDER and
                 ORDNUM in S_ORDER  and
                 ZDATE   in S_DATE ) OR
               ( SENDER in S_SENDER AND
                 STATUS = SPACE )
         ORDER BY IDOCNUM.

    Hi
    You can drop the ORDER BY option, sort the table yourself, and try splitting the query:
    SELECT * FROM zv7_custord
         INTO TABLE G_T_ZV7_CUSTORD
         WHERE SENDER in S_SENDER and
               ORDNUM in S_ORDER  and
               ZDATE  in S_DATE.
    SELECT * FROM zv7_custord
         APPENDING TABLE G_T_ZV7_CUSTORD
         WHERE SENDER in S_SENDER and
               STATUS = SPACE     and
               " NOT ( ... ) keeps this select disjoint from the first one
               NOT ( ORDNUM in S_ORDER and
                     ZDATE  in S_DATE ).
    or
    SELECT * FROM zv7_custord
         INTO TABLE G_T_ZV7_CUSTORD
         WHERE SENDER in S_SENDER and
               ORDNUM in S_ORDER  and
               ZDATE  in S_DATE.
    SELECT * FROM zv7_custord
         APPENDING TABLE G_T_ZV7_CUSTORD
         WHERE SENDER in S_SENDER and
               STATUS = SPACE.
    * Sort the table by its key fields and drop the duplicates
    SORT G_T_ZV7_CUSTORD BY <KEY1> <KEY2> .....
    DELETE ADJACENT DUPLICATES FROM G_T_ZV7_CUSTORD COMPARING <KEY1> .....
    Max

  • Make Code More Efficient

    I got this code to play some audio clips, and it works all right. The only issue is that when I call the play method, it lags the rest of my game pretty badly. Is there anything in the play method you think could be moved to the constructor to make it more efficient?
    package main;
    import java.io.*;
    import javax.sound.sampled.*;
    public class Sound {
        private AudioFormat format;
        private byte[] samples;
        private String name;
        public Sound(String filename) {
            name = filename;
            try {
                AudioInputStream stream = AudioSystem.getAudioInputStream(new File("sounds/" + filename));
                format = stream.getFormat();
                samples = getSamples(stream);
            } catch (Exception e) { System.out.println(e); }
        }
        public byte[] getSamples() {
            return samples;
        }
        private byte[] getSamples(AudioInputStream audioStream) {
            int length = (int) (audioStream.getFrameLength() * format.getFrameSize());
            byte[] samples = new byte[length];
            DataInputStream is = new DataInputStream(audioStream);
            try {
                is.readFully(samples);
            } catch (Exception e) { System.out.println(e); }
            return samples;
        }
        public void play() {
            InputStream stream = new ByteArrayInputStream(getSamples());
            int bufferSize = format.getFrameSize() * Math.round(format.getSampleRate() / 10);
            byte[] buffer = new byte[bufferSize];
            SourceDataLine line;
            try {
                DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
                line = (SourceDataLine) AudioSystem.getLine(info);
                line.open(format, bufferSize);
            } catch (Exception e) { System.out.println(e); return; }
            line.start();
            try {
                int numBytesRead = 0;
                while (numBytesRead != -1) {
                    numBytesRead = stream.read(buffer, 0, buffer.length);
                    if (numBytesRead != -1) {
                        line.write(buffer, 0, numBytesRead);
                    }
                }
            } catch (Exception e) { System.out.println(e); }
            line.drain();
            line.close();
        }
        public String getName() {
            return name;
        }
    }
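    For what it's worth, the lag probably isn't about what can move to the constructor: line.write() blocks until the mixer consumes each buffer, so calling play() from the game loop stalls the game for the whole clip. A minimal sketch of the usual fix (playAsync is a hypothetical addition, not part of the original class) is to run that blocking loop on a background thread:
    // Hypothetical helper: run the blocking playback loop off the game thread.
    public void playAsync() {
        Thread t = new Thread(new Runnable() {
            public void run() {
                play();        // the blocking write loop now runs here instead
            }
        });
        t.setDaemon(true);     // don't keep the JVM alive just for audio
        t.start();
    }
    Moving the DataLine.Info lookup and line.open() into the constructor would trim some per-call setup, but the blocking loop is the real source of the lag.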

    I don't know much about the guts of Flex, but I assume it's based on Java's design, etc.
    Storing event.target.selectedItem in an Object should not be any more efficient than calling event.target.selectedItem. The object will simply be a pointer of sorts to event.target.selectedItem. At no point in the event.target.selectedItem call are you doing a search or anything, so storing the result will not produce any big savings.
    Now, if you were doing something like array.findItem(something) 4 times, then yes, it would be to your advantage to store the data.
    Keep in mind that storing event.target.selectedItem in an object will probably break bindings... that may or may not be a problem. Object doesn't support binding; there is a subclass of Object that does, but I forget which.
    Just a suggestion based on my knowledge of how data is stored in an object-oriented language... this may not be the case in Flex.

  • Would PHP or Python work more efficiently to grab from an MSSQL database in AS3?

    Would PHP or Python work more efficiently to grab information from an MSSQL database in AS3? How would it work? Thanks!

    That's not a Flash issue.
    But here's a comparison: http://benchmarksgame.alioth.debian.org/u32/benchmark.php?test=all&lang=python&lang2=php
    It looks like Python would be faster at data handling and slower at arithmetic operations.
    You would call both server-side scripts using the URLLoader class from Flash.
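    Whichever language you pick on the server, the Flash side looks the same. A minimal sketch of the URLLoader call (the URL and orders.php are invented for illustration; the real script would query MSSQL and echo the rows):
    import flash.net.URLLoader;
    import flash.net.URLRequest;
    import flash.events.Event;

    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, onComplete);
    loader.load(new URLRequest("http://example.com/orders.php?id=42"));

    function onComplete(e:Event):void {
        trace(URLLoader(e.target).data);   // raw text the script returned
    }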

  • Creating a time channel in the data portal and filling it with data - Is there a more efficient way than this?

    I currently have a requirement to create a time channel in the data portal and subsequently fill it with data. I've shown below how I am currently doing it:
    Time_Ch = ChnAlloc("Time channel", 271214, 1, , "Time", 1, 1)    ' Allocate the time channel
    For intLoop = 1 to 271214
      ChD(intLoop, Time_Ch(0)) = CurrDateTimeReal    ' Create time value
    Next
    I understand that the function to create and allocate memory for the time channel is extremely quick. However, the time to store data in the channel afterwards is going to be highly dependent on the length I have assigned to Time_Ch. In my application the length of Time_Ch is variable but could easily be on the order of 271214 or higher. Under such circumstances the time taken to fill Time_Ch is quite considerable. I am wondering whether this is the most appropriate way of doing things, or whether there is a more efficient way of creating a time channel and filling it.
    Thanks very much for any help.
    Regards
    Matthew

    Hi Matthew,
    You are correct that there is a more efficient way to do this.  I'm a little confused about your "CurrDateTimeReal" assignment-- is this a constant?  Most people want a Time channel that counts up linearly in seconds or fractions of a second over the duration of the measurement.  But that looks like you would assign the same time value to all the rows of the new Time channel.
    If you want to create a "normal" Time channel that increases at a constant rate, you can use the ChnGenTime() function:
    ReturnValue = ChnGenTime(TimeChannel, GenTimeUnit, GenTimeXBeg, GenTimeXEnd, GenTimeStep, GenTimeMode, GenTimeNo)
    If you really do want a Time channel filled with all the same values, you can use the ChnLinGen() function and simply set the GenXBegin and GenXEnd parameters to be the same value:
    ReturnValue = ChnLinGen(TimeChannel, GenXBegin, GenXEnd, XNo, [GenXUnitPreset])
    In both cases you can reuse the Time channel you've already created (which, as you say, executes quickly): point the output of these functions at it by passing the Group/Channel syntax of that channel as the first TimeChannel parameter of either function above.
    Brad Turpin
    DIAdem Product Support Engineer
    National Instruments

  • Suggestions for a more efficient query?

    I have a client (customer) that uses a 3rd-party software package to display graphs of their systems. The clients are constantly asking me (the DBA consultant) to fix the database so it runs faster. I've done as much tuning as I can on the database side; it's now time to address the application issues. The good news is my client is the 4th-largest customer of this 3rd-party software, and the software company has listened and responded to suggestions in the past.
    All of the tables are set up the same, with the first column being a DATE datatype and the remaining columns being values for different data points (data_col1, data_col2, etc.). Oh, and that first date column is always named "timestamp" in lower case, so you have to use double quotes around that column name all of the time. Each table collects one record per minute, every day, all year. There are 4 database systems, about 150 tables per system, averaging 20 data columns per table. I did partition each table by month and added a local index on the "timestamp" column. That brought the full table scans down to full partition index scans.
    All of the SELECT queries look like the following, with changes in the column name, table name and date ranges. (Yes, we will be addressing the issue of incorporating bind variables for the dates with the software provider.)
    Can anyone suggest a more efficient query? I've been trying some analytic function queries but haven't come up with the correct results yet.
    SELECT "timestamp" AS "timestamp", "DATA_COL1" AS "DATA_COL1"
      FROM "T_TABLE"
     WHERE "timestamp" >=
           (SELECT MIN("tb"."timestamp") AS "timestamp"
              FROM (SELECT MAX("timestamp") AS "timestamp"
                      FROM "T_TABLE"
                     WHERE "timestamp" < TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS')
                    UNION
                    SELECT MIN("timestamp")
                      FROM "T_TABLE"
                     WHERE "timestamp" >= TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS')) "tb"
             WHERE NOT "timestamp" IS NULL)
       AND "timestamp" <=
           (SELECT MAX("tb"."timestamp") AS "timestamp"
              FROM (SELECT MIN("timestamp") AS "timestamp"
                      FROM "T_TABLE"
                     WHERE "timestamp" > TO_DATE('2006-01-21 12:12:39', 'YYYY-MM-DD HH24:MI:SS')
                    UNION
                    SELECT MAX("timestamp")
                      FROM "T_TABLE"
                     WHERE "timestamp" <= TO_DATE('2006-01-21 12:12:39', 'YYYY-MM-DD HH24:MI:SS')) "tb"
             WHERE NOT "timestamp" IS NULL)
     ORDER BY "timestamp"
    Here are the queries for a sample table to test with:
    CREATE TABLE T_TABLE
    ( "timestamp" DATE,
      DATA_COL1 NUMBER );
    INSERT INTO T_TABLE
    (SELECT TO_DATE('01/20/2006', 'MM/DD/YYYY') + (LEVEL-1) * 1/1440,
            LEVEL * 0.1
       FROM dual CONNECT BY 1=1
        AND LEVEL <= (TO_DATE('01/25/2006','MM/DD/YYYY') - TO_DATE('01/20/2006', 'MM/DD/YYYY'))*1440);
    Thanks.

    No need for analytic functions here (they'll likely be slower).
    1. No need for UNION ... use UNION ALL.
    2. No need for <quote>WHERE NOT "timestamp" IS NULL</quote> ... the MIN and MAX will take care of nulls.
    3. Ask if they really need the data sorted ... the s/w with the graphs may do its own sorting, in which case take the ORDER BY out too.
    4. Make sure to have indexes on "timestamp".
    What you want to see for those innermost MAX/MIN subqueries are executions like:
    What you want to see for those innermost MAX/MIN subqueries are executions like:
    03:19:12 session_148> SELECT MAX(ts) AS ts
    03:19:14   2  FROM "T_TABLE"
    03:19:14   3  WHERE ts < TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS');
    TS
    21-jan-2006 00:12:00
    Execution Plan
       0   SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2.0013301108 Card=1 Bytes=9)
       1    0   SORT (AGGREGATE)
       2    1     FIRST ROW (Cost=2.0013301108 Card=1453 Bytes=13077)
       3    2       INDEX (RANGE SCAN (MIN/MAX)) OF 'T_IDX' (INDEX) (Cost=2.0013301108 Card=1453 Bytes=13077)
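    Putting suggestions 1 and 2 together, each bound subquery in the original statement collapses to something like this sketch (same table and literals as the post):
    SELECT MIN("tb"."timestamp")
      FROM (SELECT MAX("timestamp") AS "timestamp"
              FROM "T_TABLE"
             WHERE "timestamp" <  TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS')
            UNION ALL
            SELECT MIN("timestamp")
              FROM "T_TABLE"
             WHERE "timestamp" >= TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS')) "tb";
    MIN and MAX already ignore NULLs, which is why the NOT ... IS NULL filter can simply go.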

  • Lib in different drives, is it more efficient?

    How much more efficient is it for the CPU to have your library spread out across different external drives? E.g., strings on one drive, perc on another drive, etc.
    Thanks,
    Guy

    Thanks for the reply, but I wanted to go one step further. Since I often write in a symphonic texture,
    having each section (strings, WW, brass, perc, synth, etc.) on a different drive - will that make my system more efficient than putting all the samples on one drive? I'm concerned because it involves some investment, but if it's confirmed that it will make a difference, then I don't mind investing in a couple more drives.

  • Updating table contents more efficiently

    Dear forumers,
    I have a situation as elaborated below. Is there a more efficient way of updating the table contents (itab D)?
    loop at itab A.
      execute FM using fields from itab A.
      FM returns itab B.
      loop at itab B.
        populate itab C containing fields from itabs A and B.
      endloop.
    endloop.
    copy contents from itab D to itab D_temp.   " ***1
    clear itab D.
    loop at itab C.
      populate itab D containing fields from itabs C and D_temp.   " ***2
    endloop.
    Reason this code line (***2) is implemented:
    The key fields in the initial itab D (***1) are different from the key fields in the end result of itab D (***2).
    ***1
    Key fields: AUFNR, OBKNR, WERKS, EQUNR, SERNR, MATNR
    ***2
    Key fields: DOCUMENTTYPE, DOCUMENTNUMBER, DOCUMENTVERSION, DOCUMENTPART, EQUNR, SERNR, MATNR
    Structure in itab C (key fields in ***2):
      equnr           TYPE viaufkst-equnr
      sernr           TYPE objk-sernr
      matnr           TYPE objk-matnr
      documenttype    TYPE zzdoc_data-documenttype
      documentnumber  TYPE zzdoc_data-documentnumber
      documentversion TYPE zzdoc_data-documentversion
      documentpart    TYPE zzdoc_data-documentpart
      description     TYPE zzdoc_data-description
    Structure in itab D:
      aufnr           TYPE viaufkst-aufnr
      equnr           TYPE viaufkst-equnr
      obknr           TYPE viaufkst-obknr
      qmnum           TYPE viaufkst-qmnum
      auart           TYPE viaufkst-auart
      werks           TYPE viaufkst-werks
      sernr           TYPE objk-sernr
      matnr           TYPE objk-matnr
      documenttype    TYPE zzdoc_data-documenttype
      documentnumber  TYPE zzdoc_data-documentnumber
      documentversion TYPE zzdoc_data-documentversion
      documentpart    TYPE zzdoc_data-documentpart
      description     TYPE zzdoc_data-description
    Many thanks for any inputs here at all.

    Try this:
    CLEAR li_ord_sln_tmp.
    li_ord_sln_tmp = i_ord_sln.
    SORT li_ord_sln_tmp BY equnr sernr matnr.
    DELETE ADJACENT DUPLICATES FROM li_ord_sln_tmp
      COMPARING equnr sernr matnr.
    CLEAR i_ord_sln.
    IF ( li_ord_sln_tmp[] IS NOT INITIAL AND
         i_docs[] IS NOT INITIAL ).
      SORT i_docs[] BY equnr sernr matnr.
    ENDIF.
    LOOP AT i_outgoing INTO w_outgoing.
      CALL FUNCTION 'ZZ_GET_ACTIVE_DMS'
        EXPORTING
          matnr      = w_outgoing-matnr
          werks      = w_outgoing-werks
          sernr      = w_outgoing-sernr
        TABLES
          i_doc_list = li_doclist
        EXCEPTIONS
          OTHERS     = 1.
      IF sy-subrc = 0.
        " w_docs below comes from code not shown in this post
        LOOP AT li_doclist INTO lw_doclist.
          CLEAR w_ord_sln.
          MOVE-CORRESPONDING w_outgoing TO w_ord_sln.
          READ TABLE li_ord_sln_tmp
            INTO lw_ord_sln_tmp
            WITH KEY equnr = w_docs-equnr
                     sernr = w_docs-sernr
                     matnr = w_docs-matnr
            BINARY SEARCH.
          IF sy-subrc = 0.
            w_ord_sln-werks = w_docs-werks.
            w_ord_sln-aufnr = lw_ord_sln_tmp-aufnr.
            w_ord_sln-obknr = lw_ord_sln_tmp-obknr.
            w_ord_sln-qmnum = lw_ord_sln_tmp-qmnum.
            w_ord_sln-auart = lw_ord_sln_tmp-auart.
            APPEND w_ord_sln TO i_ord_sln.     " (i.e. this is itab D)
          ENDIF.
        ENDLOOP.
      ENDIF.
    ENDLOOP.
    Regards
    Sajid

  • Which is more efficient for includes/imports?

    Which is more efficient?
    1) <jsp:include
    2) <%@ include file
    3) <c:import
    4) A custom tag version (Tomcat 5):
    <me:header/>
    where header.tag contains the navigation HTML stuff.
    Thanks,
    Karmen

    Depends...
    1) <jsp:include -- this is a runtime include. It embeds the included file's output at runtime.
    2) <%@ include -- this is a compile-time include. It copies the file's contents into the JSP page when the page is compiled into a servlet. Also, typically the main page will not be recompiled if the include file changes, whereas with #1 above the change is picked up.
    3) <c:import -- ...well, I don't honestly know what this does.
    4) Custom tag -- what this does clearly depends on what the tag is written to do. But most of the time, this is just going to write some simple HTML out. You could do it that way, but using 1 or 2 to include some HTML or JSP fragment files would probably be better, since it would be easier to maintain.
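    To make the first two concrete, a minimal sketch (header.jspf and header.jsp are hypothetical fragment names):
    <%-- compile-time: the fragment's source is pasted into this page before it is compiled --%>
    <%@ include file="header.jspf" %>
    <%-- runtime: the request is dispatched to the resource on every request, so edits show up immediately --%>
    <jsp:include page="header.jsp" />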
