Fastest Way To Add Data Through DI

For example, when there are 5 or more rows in the Matrix and I press the Add button, it takes too long, about 10 seconds or more. I don't know whether there is a faster way. I loop through the Matrix rows from row 1 to Matrix.RowCount, calling oDocument_Lines.Add for each row, and after all the data is added I call oDocument.Add.
I know that many developers complain about DI performance, but is there any way to enhance it? Thanks

Hi Hamdi,
There is a workaround. If you bind the Matrix to a user-defined table (it can be a blank table) and then use oMatrix.LoadFromDataSource and oMatrix.FlushToDataSource, you can manipulate/collect/read the data through the DBDataSource object.
This is quite fast; I have had no problem with it, including using a Documents object to do a goods receipt.
To recap: bind a DBDataSource (blank if needed) to a user table, use FlushToDataSource and LoadFromDataSource, and use the DBDataSource.Offset method.
I attach some sample code.
' Populate your Matrix first, then flush its contents into the bound DBDataSource
oMatrix.FlushToDataSource()

' Scanindb is the DBDataSource bound to the user-defined table;
' GetValue(column, row) reads one cell from it in memory
For i = 0 To Scanindb.Size - 1
    If Trim(Scanindb.GetValue(4, i)) = "No" Then
        ' Right, we need to add the item
        OITMObject.ItemCode = Trim(Scanindb.GetValue(0, i))
        OITMObject.Manufacturer = "-1"
        OITMObject.ItemsGroupCode = "101"
        OITMObject.UserFields.Fields.Item(4).Value = Scanindb.GetValue(3, i)
        OITMObject.UserFields.Fields.Item(5).Value = Scanindb.GetValue(2, i)
        ret = OITMObject.Add()
        SBO_Application.MessageBox(CStr(ret))
    End If
Next
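
To tie this back to the original question: reading row values from the flushed DBDataSource is much faster than reading each Matrix cell through the UI API, which is usually where the 10 seconds goes. Below is a minimal, hedged sketch of that pattern for a goods receipt; the table name "@MYITEMS", the column names, the vendor code and the oCompany/oForm objects are assumptions for illustration, not part of the original sample.

    Dim oDoc As SAPbobsCOM.Documents = _
        CType(oCompany.GetBusinessObject(SAPbobsCOM.BoObjectTypes.oPurchaseDeliveryNotes), SAPbobsCOM.Documents)
    Dim oSource As SAPbouiCOM.DBDataSource = oForm.DataSources.DBDataSources.Item("@MYITEMS")

    oMatrix.FlushToDataSource()            ' one UI API call copies all matrix cells
    oDoc.CardCode = "V10000"               ' assumed vendor code

    For i As Integer = 0 To oSource.Size - 1
        If i > 0 Then oDoc.Lines.Add()     ' line 0 already exists on a new document
        oDoc.Lines.SetCurrentLine(i)
        oDoc.Lines.ItemCode = Trim(oSource.GetValue("U_ItemCode", i))
        oDoc.Lines.Quantity = CDbl(Trim(oSource.GetValue("U_Quantity", i)))
    Next

    ' a single Add posts the whole document in one round trip
    If oDoc.Add() <> 0 Then
        SBO_Application.MessageBox(oCompany.GetLastErrorDescription())
    End If

Reading cells via oMatrix.Columns...Cells...Specific crosses the UI API boundary once per cell; GetValue on the flushed datasource is an in-memory read, which is what makes this approach feel fast.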

Similar Messages

  • Fastest way to transfer data from cRIO

    Does anybody know the fastest way to transfer data from a cRIO? I tested shared variables, but they're not fast enough. What is the fastest speed we could achieve with shared variables on a cRIO? How can I transfer 50,000 32-bit words per second from the cRIO to my PC? This should run 24 h/day.
    Thanks
    B.
    Benoit Séguin
    Software Designer

    Hi Benoit,
    Thanks for your post and I hope you're well. I noticed you've not received a reply, and I would like to offer my advice.
    Shared variables are one way to communicate over Ethernet. You can also use UDP, TCP and VI Server.
    UDP is the fastest network protocol, but it includes little error handling and can lose data. TCP is commonly used for large systems; it takes more programming effort than shared variables, however it provides a good mix of speed and reliability. You can use VI Server to run VIs and/or access and set the value of controls and indicators, but it is slower and not recommended for larger data sets.
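    (LabVIEW itself is graphical, so purely as a text illustration of the UDP trade-off, here is a rough .NET-style sketch; the host, port and chunk size are made-up values. UDP is fast precisely because nothing retransmits for you, so a dropped datagram is simply lost.)

        Imports System.Net.Sockets

        Module UdpStreamSketch
            Sub Main()
                ' 50,000 x 32-bit words per second is only ~200 KB/s, modest for Ethernet
                Dim samples(49999) As Integer
                Dim payload(samples.Length * 4 - 1) As Byte
                Buffer.BlockCopy(samples, 0, payload, 0, payload.Length)
                Using client As New UdpClient()
                    Const chunk As Integer = 4096   ' stay well under the ~64 KB datagram limit
                    For offset As Integer = 0 To payload.Length - 1 Step chunk
                        Dim size As Integer = Math.Min(chunk, payload.Length - offset)
                        Dim datagram(size - 1) As Byte
                        Buffer.BlockCopy(payload, offset, datagram, 0, size)
                        client.Send(datagram, size, "192.168.1.10", 61557)
                    Next
                End Using
            End Sub
        End Module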
    Please let me know what you think,
    Kind Regards,
    James.
    Kind Regards
    James Hillman
    Applications Engineer 2008 to 2009 National Instruments UK & Ireland
    Loughborough University UK - 2006 to 2011
    Remember Kudos those who help!

  • HT4571 I have been trying to add data to my iPad, and when I add it, it goes to a page that says error. Is there another way to add data?

    I have been trying to add data to my iPad, and when I add it, it goes to a page that says error. Is there another way to add data?

    The installer is over 5 GB and could take many hours to download.
    Check the download status in the App Store application.

  • What is the fastest way of getting data?

    With a scanning electron microscope, I need to scan a 512*512 pixel area with a pixel repetition of 15000 (two channels), meaning averaging over 15000 measurements. Simultaneously I have to adjust the voltage output for every pixel.
    I am using a 6111E Multifunction I/O board in an 800 MHz P3. The whole task has to be done as fast as possible (not more than 20 minutes altogether).
    What is the fastest way to get this huge amount of data with averaging and output in between? (E.g. do I use buffered read with hardware triggering or is there a faster way?)

    Using the NI-DAQ API (not LabVIEW) will give you significantly more control over what happens and when in the data stream, which translates to a more efficient program. But you need to program in C/C++ or Delphi then. Measurement Studio provides ActiveX controls for C/C++ that are like the LabVIEW ones (they're slow like the LabVIEW ones though; not a lot you can do about the Windows GDI).
    What are you trying to sample 15000 times? The 512*512 pixel field?
    That's almost 15 gigs of data! And it means you need to process data at 12.8 MB/s to finish it in 20 minutes. I hope you know C, x86 assembly and MMX.
    I would set up a huge circular buffer (NI-DAQ calls them "double buffers"), about 30 seconds' worth or so, to use with SCAN_Start. Then I would process the actual buffer the card is DMA'ing the data into with a high-priority thread. Progressively sum the scan values from the 16-bit buffer (the samples are only 12 bit, but the buffer should still be 16 bits wide) into a secondary buffer of DWORDs the size of the screen (512*512); you'll need two of those, one for each channel. Once the 15000 scans are complete, convert each entry into a float, divide by 15000.0f, and store it in a third buffer of floats.
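    (To make the accumulation arithmetic concrete, a hedged .NET-style sketch of the progressive summing described above; the sizes come from the question, everything else is illustrative:)

        Module AveragingSketch
            Const Pixels As Integer = 512 * 512
            Const Scans As Integer = 15000
            ' one 32-bit accumulator per pixel: max sum = 15000 * 4095, which fits easily
            Dim sums(Pixels - 1) As Integer
            Dim averaged(Pixels - 1) As Single

            ' called once per incoming scan of 12-bit samples carried in 16-bit words
            Sub AccumulateScan(scan As Short())
                For i As Integer = 0 To Pixels - 1
                    sums(i) += scan(i)
                Next
            End Sub

            ' called once after all scans: a single divide per pixel
            Sub FinishAverage()
                For i As Integer = 0 To Pixels - 1
                    averaged(i) = CSng(sums(i)) / Scans
                Next
            End Sub
        End Module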
    If you wish to contract this out, send me an email at [email protected]

  • Fastest way to add multiple images, one after the other, in DW5.5

    There has to be a faster way to add multiple images, one after the other, to a web page rather than just dragging each from the Files tab within DW 5.5.
    I was a little surprised I couldn't select multiple images, just drag a block, and have DW insert an img tag with empty alts and the source path, but maybe there are some options I just haven't found?
    I tried using Bridge to generate a quick HTML slideshow/gallery, but it creates scads of divs and such... I just have to update an older site and was looking for an efficient way to add a bunch of images (random names, ugh, so just copying-pasting one and changing names is still more manual) that lie next to each other, no breaks in between... seems an ideal automated situation.
    I'd settle for an external script or app or something, because they come in blocks of 25 approx., and I have all my actions in PS set up to size appropriately, which is nice, so I just wondered if there was something good for this low-level production task.
    Thanks

    Thanks. Yes, I started writing a script, but I just wondered if there were hidden gems in DW... so many programs do more than they are intended to do nowadays, I had to ask people who use it more!
    Actually, the fastest way was to just dupe one line of code, paste it x times, and then just use the pick-whip to point and drag. That at least cut the menu out of the equation.
    I thought I remembered doing a batch insert years ago... maybe in GoLive, or before Adobe had DW? DW and GoLive are the only GUI HTML editors I've ever used... but maybe it was just something out of an earlier Photoshop... they've taken good stuff out and put good stuff in so many times that I can only remember so much!
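    (Since an external script would do, here is a hypothetical throwaway generator; the folder name and extension filter are assumptions. It prints one img tag with an empty alt per file, ready to paste into the page with no breaks in between.)

        Imports System.IO

        Module ImgTagGenerator
            Sub Main()
                ' one <img> tag per file, no markup in between
                For Each file As String In Directory.GetFiles("images", "*.jpg")
                    Console.WriteLine("<img src=""images/{0}"" alt="""">", Path.GetFileName(file))
                Next
            End Sub
        End Module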

  • Fastest way to access data between String and char[]

    Hi all,
    I've been programming a small String generator and was curious about the best and fastest way to do so.
    I've done it in two ways and now hope you can tell me if there is a "more Java" version or a faster way between those two, or if I'm totally wrong because some class in the API already does that.
    Here are the codes:
    version 1:
            // describe the alphabet
            String alphabet = "abcdefghijklmnopqrstuvwxyz";
            // the "modifiable String"
            StringBuffer stringBuffer = new StringBuffer();
            // keep the temporary int declaration outside the loop for performance reasons
            int generatedNumber;
            // do a "for" loop to get one letter at a time and append it;
            // let's say we need an 8-letter word to be generated
            for (int i = 0; i < 8; i++) {
                generatedNumber = (int) (Math.random() * 26);
                stringBuffer.append(alphabet.charAt(generatedNumber));
            }
            System.out.println(stringBuffer.toString());
            stringBuffer = null;
    version 2:
            // describe the alphabet
            char[] alphabetArray = {'a', 'b', 'c', 'd', 'e', 'f',
                                    'g', 'h', 'i', 'j', 'k', 'l',
                                    'm', 'n', 'o', 'p', 'q', 'r',
                                    's', 't', 'u', 'v', 'w', 'x',
                                    'y', 'z'};
            // the "modifiable String"
            StringBuffer stringBuffer = new StringBuffer();
            // keep the temporary int declaration outside the loop for performance reasons
            int generatedNumber;
            // do a "for" loop to get one letter at a time and append it;
            // let's say we need an 8-letter word to be generated
            for (int i = 0; i < 8; i++) {
                generatedNumber = (int) (Math.random() * 26);
                stringBuffer.append(alphabetArray[generatedNumber]);
            }
            System.out.println(stringBuffer.toString());
            stringBuffer = null;
    Basically, the question is: what is the safest, fastest and most "by the rules" way to access a char in a sequence?
    Thanks in advance.
    Edited by: airchtit on Jan 22, 2008 6:02 AM

    paul.miner wrote:
    Better, use a char[] instead of a StringBuffer/StringBuilder, since you seem to know the size of the array in advance.
    Although I imagine making "alphabet" a char[] has slightly less overhead than making it a String
    1. It's a lot clearer to write it as a String.
    2. You can just call toCharArray() on the String to get a char[], instead of writing out each char individually.
    3. Or if you're going to be using a plain alphabet, just use (randomNumber + 'a')
    And use Random.nextInt().
    Hello and thanks for the answers,
    I know I shouldn't put constants in my code, it was just a piece of code done in 1 minute to help a colleague.
    Even if it was just a one minute piece of code, I was wondering about the performance problem on large scale calculating, I mean something like a 25 characters word for billions of customers but anyway, once again, the impact should be minimal.
    By the way, I didn't know the Random class (shame on me, I still don't know the whole API), and I don't understand why I should use it rather than Math.random(), which is static and thus takes a bit less memory and is easier to call.
    Is it because of the distribution factor?
    According to the API, Math.random() gives an (almost) uniform distribution, whereas Random.nextInt() gives a "more random" int.
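    (A hedged sketch combining the reply's suggestions, a plain char[], the 'a' offset, and Random.nextInt(); the class name is made up:)

        import java.util.Random;

        public class RandomWord {
            public static void main(String[] args) {
                Random rng = new Random();
                char[] word = new char[8];
                for (int i = 0; i < word.length; i++) {
                    // nextInt(26) is uniform over 0..25; adding 'a' maps it to a letter
                    word[i] = (char) ('a' + rng.nextInt(26));
                }
                System.out.println(new String(word));
            }
        }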

  • How to add data through a matrix from sales order row level to a user-defined document type table

    I created a matrix on a user-defined form and placed one edit text box on that form. When I enter the DocNum of a sales order, the row-level data of that sales order should be loaded into the matrix; after filling in some data in the matrix, that data should be added to the user-defined document-type table.
    If anyone has code, please post it.
    Thanks

    Hi Rajeshwar,
    Here is a sample function related to your scenario; just check it out and use the concepts.
    Here I have used a CFL (ChooseFromList) to get the item code, and I have used a query to add data to the matrix.
    This is the function used:
    Private Sub AddValuesInMatrix(ByRef name As String)
            Try
                'Dim quantemp As Double
                oForm = SBO_Application.Forms.Item("itemdts")
                oMatrix = oForm.Items.Item("matrix").Specific
                Dim rs As SAPbobsCOM.Recordset = ocompany.GetBusinessObject(SAPbobsCOM.BoObjectTypes.BoRecordset)
                '//gitemdesc = "SELECT T0.[ItemName] FROM OITM T0 WHERE T0.[ItemCode] ='" & name & "'"
                oMatrix.Clear()
                rs.DoQuery("SELECT T0.[DocEntry], T0.[ItemCode], T0.[Dscription], T0.[Quantity], T0.[Price], T0.[TaxCode] FROM dbo.[POR1] T0 WHERE T0.[ItemCode] ='" & name & "'")
                rscount = rs.RecordCount
                If (rscount < 1) Then
                    SBO_Application.StatusBar.SetText("No Items Found", SAPbouiCOM.BoMessageTime.bmt_Short)
                Else
                    oForm.Freeze(True)
                    ITE = True
                    For i As Integer = 1 To rs.RecordCount
                        oMatrix.AddRow()
                        oMatrix.Columns.Item("V_5").Cells.Item(i).Specific.Value = rs.Fields.Item("DocEntry").Value
                        oMatrix.Columns.Item("V_4").Cells.Item(i).Specific.Value = rs.Fields.Item("ItemCode").Value
                        oMatrix.Columns.Item("V_3").Cells.Item(i).Specific.Value = rs.Fields.Item("Dscription").Value
                        oMatrix.Columns.Item("V_2").Cells.Item(i).Specific.Value = rs.Fields.Item("Quantity").Value
                        'quansum = quansum + rs.Fields.Item("Quantity").Value
                        oMatrix.Columns.Item("V_1").Cells.Item(i).Specific.Value = rs.Fields.Item("Price").Value
                        'pricesum = pricesum + rs.Fields.Item("Price").Value
                        oMatrix.Columns.Item("V_0").Cells.Item(i).Specific.Value = rs.Fields.Item("TaxCode").Value
                        SBO_Application.SetStatusBarMessage("Data Loading In Progress Please Wait.....>>> " & i & " / " & rs.RecordCount, SAPbouiCOM.BoMessageTime.bmt_Short, False)
                        rs.MoveNext()
                    Next
                    ITE = False
                    oMatrix.AutoResizeColumns()
                    SBO_Application.StatusBar.SetText("Data Loading Completed", SAPbouiCOM.BoMessageTime.bmt_Short, SAPbouiCOM.BoStatusBarMessageType.smt_Success)
                    oForm.Freeze(False)
                    oForm.Refresh()
                End If
            Catch ex As Exception
                SBO_Application.MessageBox("Matrix Load Function : " & ex.Message)
                ITE = False
                oForm.Freeze(False) ' make sure the form is unfrozen even when the load fails
            End Try
        End Sub
    -Anto
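    For context, a hypothetical sketch (not part of Anto's post) of how AddValuesInMatrix might be called from the form's ChooseFromList event; the event-wiring names are assumptions:

        ' inside the ItemEvent handler, once the CFL returns a selection
        Dim oCFLEvent As SAPbouiCOM.IChooseFromListEvent = CType(pVal, SAPbouiCOM.IChooseFromListEvent)
        Dim oDataTable As SAPbouiCOM.DataTable = oCFLEvent.SelectedObjects
        If oDataTable IsNot Nothing Then
            AddValuesInMatrix(CStr(oDataTable.GetValue("ItemCode", 0)))
        End If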

  • What is the fastest way to pass data between parallel threads?

    I have a top-level VI running with 6 parallel threads. I need to pass some data from digital I/O to several of the threads. What is the fastest-responding way to do this? I am controlling a machine that has quite a few sensed events happening at very close intervals, some as close together as 1 to 2 milliseconds, and I seem to be randomly missing the signals from these sensors. How can I distribute the I/O to the different threads and not miss any inputs?

    I usually use a Queue to pass data from one loop to another. Other
    choices are Functional Globals or Notifiers. It kind of depends on what
    you need to do as to which one is best, so it's a bit hard to recommend
    one over the others without knowing more about your application.
    Both Queues and the Functional Globals (if written correctly) can
    buffer data so you're less likely to lose data if one loop gets behind
    the others.
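    (As a rough text-code analogy to that queue pattern, illustration only, not from Ed's reply: the producer loop keeps acquiring while the consumer drains at its own pace, so a brief consumer stall doesn't drop data.)

        Imports System.Collections.Concurrent
        Imports System.Threading.Tasks

        Module QueueSketch
            Sub Main()
                Dim queue As New BlockingCollection(Of Integer)()
                ' acquisition loop: never waits on the processing loop
                Dim producer As Task = Task.Run(Sub()
                                                    For i As Integer = 1 To 1000
                                                        queue.Add(i)
                                                    Next
                                                    queue.CompleteAdding()
                                                End Sub)
                ' processing loop: blocks until data arrives; the queue buffers bursts
                For Each sample As Integer In queue.GetConsumingEnumerable()
                    ' handle sample here
                Next
                producer.Wait()
            End Sub
        End Module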
    Ed
    Ed Dickens - Certified LabVIEW Architect - DISTek Integration, Inc. - NI Certified Alliance Partner
    Using the Abort button to stop your VI is like using a tree to stop your car. It works, but there may be consequences.

  • Fastest way to load data From MS Access to Oracle DB

    Hi All,
    We get an Access DB every week, which we need to load into an Oracle DB. Currently we have been using SQL*Loader, but the process is slightly painful and horribly boring, so I'm trying to do some automation. I was thinking of doing the whole thing with a Java application and then scheduling it to run at some pre-decided time. The approach I took was to read the Access file and then load it into the Oracle DB using PreparedStatements. But going through various threads in the forum, I found that it's going to be a bit too slow (and the record count in my tables is around 600,000). Can there be a better way to do this? Has anyone done something similar before?

    Well, the only reason I want to use Java (I may go for C#) is that I don't want to spend time manually creating those CSV files from Access. Can that be done using something else?
    So use Java to create the CSV files.
    And have you actually tried going straight to Oracle? What exactly is your time constraint?
    And another issue is that I sometimes have to make some adjustments (rounding off) to the data, which is usually done through a query, but usually after the data has been loaded into the DB.
    Which would make it irrelevant to actually moving the data, then. If you are cleaning the data with SQL already, then it is simple to wrap that in a proc and do it that way. Presumably you are loading to temp (non-production in-lined) tables first, then cleaning and moving.
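    (To make the "use code to create the CSV files" suggestion concrete, a hedged sketch using ADO.NET's OleDb provider; the file name, table name and connection string are assumptions, and no quoting/escaping of field values is handled:)

        Imports System.Data.OleDb
        Imports System.IO

        Module AccessToCsv
            Sub Main()
                Dim connStr As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=weekly.mdb"
                Using conn As New OleDbConnection(connStr), _
                      writer As New StreamWriter("weekly.csv")
                    conn.Open()
                    Dim cmd As New OleDbCommand("SELECT * FROM Orders", conn)
                    Using reader As OleDbDataReader = cmd.ExecuteReader()
                        While reader.Read()
                            ' write one comma-separated line per record
                            Dim fields(reader.FieldCount - 1) As String
                            For i As Integer = 0 To reader.FieldCount - 1
                                fields(i) = Convert.ToString(reader.GetValue(i))
                            Next
                            writer.WriteLine(String.Join(",", fields))
                        End While
                    End Using
                End Using
            End Sub
        End Module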

  • Fastest way to extract data out of XML with the following constraints

    10.2 on Linux
    XML files are being dropped off into a queue. In the queue the documents must be stored as CLOBs so that control can be given back to the client as soon as possible.
    Once in the queue, we would like to extract all the data from the XML and place it in relational staging tables. The data is then moved from these tables into production.
    The only thing that can change is what happens between the queue and the staging tables. Currently I am just using extract statements to pull the data out of the CLOB.
    The files are around 20 MB and currently take over 20 minutes to process, which is way too long.
    I looked at DBMS_XMLSTORE, but we cannot alter the XML format.
    I looked at Oracle Text, but if I understand it correctly, we would have to rebuild the entire index after every new queue item.
    I have very little experience with XML, so I want to make sure I know all my options.
    From what I can tell, my only option is to take the CLOB and let XML DB parse it into O-R tables... but even that seems like a horrible waste.
    Is there anything else I can do? Any pointers?
    Thanks for any help!
    By the way... this forum has been of great help. My only problem is that I don't seem to ask the right questions at the right time.

    Chris
    Most people seem to find that allowing XML DB to persist the XML using object-based storage and nested tables, and then using insert-as-select operations, is the most effective way to do what you want. There are a number of threads on how best to do this.
    The question to ask is: do you really need the relational staging tables? If you read through the forum you'll see that once the XML has been persisted as objects, and the XML objects have been stored using a nested-table storage model, you can easily create relational views to represent the staging tables.
    This process will work very well if there are no updates to the staging tables. Effectively you will process the XML once, when you insert into the schema-based tables, and then use the relational views as the source for the migration from staging to production.
    If you haven't already done so, reading the following posts will help you with this
    XMLType column based on XML Schema: several questions
    http://forums.oracle.com/forums/thread.jspa?threadID=347820&tstart=0
    problem with sql/xml
    XML Query Performance on Nested Tables
    Basically you'll need an XML Schema that describes your XML, and you'll need to set up nested-table storage for each of the collections in your XML Schema in order to get the required performance when using the views.
    The easiest way will be to use the default table that is created when registering the XML Schema with the annotation xdb:storeVarrayAsTable="true", and then ensure that you sequence each collection correctly.
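    For reference, the xdb:storeVarrayAsTable annotation sits on the root element of the schema you register; the element names in this fragment are made up:

        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   xmlns:xdb="http://xmlns.oracle.com/xdb"
                   xdb:storeVarrayAsTable="true">
          <!-- with this annotation, collections (maxOccurs="unbounded")
               are stored as nested tables instead of serialized VARRAYs -->
          <xs:element name="PurchaseOrder" type="PurchaseOrderType"/>
        </xs:schema>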

  • Fastest way to get data from Multiple lists across multiple site collections

    Hi
    I need to get data from multiple lists spread across 20 site collections and show it as a list view.
    I have searched the internet about this and found some options, such as using the Search core APIs or BCS. I can't use Search because I want real-time data. Not sure of any other ways.
    If anybody can provide ideas, it would help.

    Might LINQ be an option for you? Using LINQPad and the SharePoint Connector, you should be able to write a query that'll retrieve this data, from which you can tabulate it. I'm not sure how you'd be able to automate this any further so that it's then imported in as a list.
    For something more specific, I used a third-party tool called the Lightning Tools Lightning Conductor, which is in essence a powerful content roll-up tool. In one of my solutions, I created a calculated column that gave an order/ranking on each item, so that when lists were combined, they'd still have some form of order. The web part is also fairly customisable and has always proven a useful tool.
    Hope that helps.
    Steven Andrews
    SharePoint Business Analyst: LiveNation Entertainment
    Blog: baron72.wordpress.com
    Twitter: Follow @backpackerd00d
    My Wiki Articles:
    CodePlex Corner Series
    Please remember to mark your question as "answered" if this solves (or helps) your problem.

  • What is the fastest way to transfer data from old macbook pro to new macbook pro using the migration assistant

    I'm trying to transfer my data (applications, files, etc.) from my older MacBook Pro to my new MacBook Pro. I've started the migration wizard, and it says it's going to take 16 hours. Is there a faster way? FireWire? If so, what cables would I need, and is it OK to stop this since it's already started?

    Ethernet or Firewire:
    http://support.apple.com/kb/HT4889
    Sounds like you're using WiFi...that will take forever & can produce errors.

  • Cursor to add data through a wizard

    Hi All,
    Thanks in advance for any help.
    I have several insert statements using cursors but am getting caught up on a few problems.
    For the first portion, I was trying to insert the ID and description of a variable at the same time, with no luck. Below is the code I am using; however, I get an ORA-01422: exact fetch returns more than requested number of rows error.
    I am not sure if a SELECT INTO statement is the right thing to do here; any help is appreciated.
    FOR i IN 1..v_wi_phase.count
      Loop
      select description into v_description
      from trs_work_items
      where pha_id = v_wi_phase(i);
    -- Insert data in the TRS_WORK_ITEMS table
        INSERT INTO TRS_WORK_ITEMS
        (ACT_ID
        ,PHA_ID
        ,IS_COMMENT_REQUIRED
        ,IS_OVERTIME_ALLOWED
        ,IS_BILLABLE
        ,OUT_ID
        ,START_DATE
        ,END_DATE
        ,ORG_ID
    ,DESCRIPTION
    )
    VALUES
        (v_activity_id
        ,v_wi_phase(i)
        ,v_wi_is_comment_required
        ,v_wi_is_overtime_allowed
        ,v_wi_is_billable
        ,v_wi_out_id
        ,v_wi_start_date
        ,v_wi_end_date
        ,v_org_id
        ,v_wi_description || '-' || v_description
        )RETURNING ID INTO i_work_item_id;
End Loop;
My second problem is that I need to return multiple IDs from an insert statement but am not sure how to do this.
    below is my attempt:
    FOR i IN 1..v_work_item_id.count
        Loop
         FOR x IN 1..v_pha_id.count
         Loop
          If v_pha_id(x) = v_work_item_id(i)
        then
          INSERT INTO TRS_ACTIVITY_ALLOCATIONS
          (ACT_ID
          ,EMP_ID
          ,SEC_ID
          ,WIT_ID
          ,START_DATE
      ,END_DATE
      )
      VALUES
          (v_activity_id
          ,null
          ,v_allocation_sec_id
          ,v_work_item_id(i)
          ,v_allocation_start_date
      ,v_allocation_end_date
      );
end if;
        End Loop;
          End Loop;

    Assuming that V_WORK_ITEM_ID is defined as some sort of collection rather than as a scalar, you could replace your LOOP and do something like this (assuming that all the other local variables are initialized outside your loop):
    -- Insert data in the TRS_WORK_ITEMS table
        INSERT INTO TRS_WORK_ITEMS
        (ACT_ID
        ,PHA_ID
        ,IS_COMMENT_REQUIRED
        ,IS_OVERTIME_ALLOWED
        ,IS_BILLABLE
        ,OUT_ID
        ,START_DATE
        ,END_DATE
        ,ORG_ID
    ,DESCRIPTION
    )
    SELECT
         v_activity_id
        ,wi_phases.column_value
        ,v_wi_is_comment_required
        ,v_wi_is_overtime_allowed
        ,v_wi_is_billable
        ,v_wi_out_id
        ,v_wi_start_date
        ,v_wi_end_date
        ,v_org_id
        ,v_wi_description || '-' || trs_phases.description
    FROM table( v_wi_phase ) wi_phases,
         trs_phases
   WHERE trs_phases.id = wi_phases.column_value
RETURNING ID BULK COLLECT INTO v_work_item_id;
This doesn't seem to smell right, for lack of a better term, though.
    1) If V_WORK_ITEM_ID is a collection, the name ought to imply that it is a collection, not that it is a scalar number
    1a) V_WI_PHASE is, similarly, not a particularly apt name for a collection of phases
    2) It would seem odd that all the other local variables are initialized outside the loop. Maybe it makes sense that all these attributes are independent of the data in the V_WI_PHASE collection but the names seem to imply that they have something to do with phases. The fact that there is a TRS_PHASES table and a V_WI_PHASE collection also seems confusing at least. It would seem like the other columns in the SELECT ought to be populated with data from one of these two.
    Justin

  • Fastest way to load data

    Option 1:
    Insert statement with:
    table mode: NOLOGGING    
    insert mode: APPEND        
    archivelog mode: noarchive log mode  
    Option 2:
    CTAS with NOLOGGING mode
    Both options above would generate no redo log. Which one is better for performance? I'm loading a large volume of rows (a few million) on a daily basis, and this is a staging table, so there is no problem reprocessing in case of failure.

    Jonathan,
    > Insert /*+ append */ can optimise for indexes by capturing the necessary column values as the data is loaded and then creating the indexes without needing to re-read the table
    How did you come to this conclusion?
    I did a simple test (t2 has a single column index) and got the following trace files
    1- Direct path load
    insert /*+ append */ into t2
    select * from t1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.01          0          1          0           0
    Execute      1      0.05       0.08          3        140         87        1000
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.05       0.09          3        141         87        1000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 39
    Rows     Row Source Operation
          1  LOAD AS SELECT  (cr=140 pr=3 pw=3 time=84813 us)
       1000   TABLE ACCESS FULL T1 (cr=5 pr=0 pw=0 time=92 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      control file sequential read                    8        0.00          0.00
      db file sequential read                         2        0.00          0.00
      direct path write                               1        0.02          0.02
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        3.49          3.49
    2- Conventional load
    insert
    into t2
    select * from t1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          1          0           0
    Execute      1      0.02       0.00          1         22        275        1000
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.03       0.01          1         23        275        1000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 39
    Rows     Row Source Operation
       1000  TABLE ACCESS FULL T1 (cr=5 pr=0 pw=0 time=31 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         1        0.00          0.00
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        1.21          1.21
    The trace file of the append hint contains direct path write and control file sequential read wait events, confirming the direct-path insert. But I still have not found in the trace file any indication of how the index is maintained separately from the table during a direct-path load. Additionally, I see a TABLE ACCESS FULL T1 in both trace files.
    What I used to know is that during a direct-path insert, indexes are maintained differently from their table: mini-indexes are built on the incoming data and are finally merged with the physical index. But I don't see this in the trace file either.
    However, in the append trace file there is the following select (occurring before the insert statement) that does not exist in the normal insert:
    select pos#,intcol#,col#,spare1,bo#,spare2
    from
    icol$ where obj#=:1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute      2      0.00       0.01          0          0          0           0
    Fetch        4      0.00       0.00          0          7          0           2
    total        8      0.00       0.01          0          7          0           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          1  TABLE ACCESS BY INDEX ROWID ICOL$ (cr=3 pr=0 pw=0 time=25 us)
          1   INDEX RANGE SCAN I_ICOL1 (cr=2 pr=0 pw=0 time=21 us)(object id 40)
    I am not sure if this is related to the mini-index pre-built data to be merged with the physical index.
    That is my question: where can I see this in the trace file?
    Thanks
    Mohamed Houri

  • Fastest way to add spaces to a String?

    I need to have strings with a fixed width containing variable length text. The following (working) code does this with stupid if() and for() statements:
            StringBuffer levelName = new StringBuffer(record.getLevel().getName());
            if (levelName.length() < LEVEL_WIDTH) {
                int g = LEVEL_WIDTH - levelName.length();
                for (int i = 0; i < g; i++) {
                    levelName.append(' ');
                }
            }
            sb.append(levelName);

    As this code is called a lot, I'm looking for a more efficient way to expand strings to a fixed width (by adding spaces). Is there a better-performing solution?

    In this simple padding situation, there's no real advantage to using a StringBuffer over just a char array:
//(1) do this once:
char[] spaces = new char[LEVEL_WIDTH];
char[] buffer = new char[LEVEL_WIDTH];
java.util.Arrays.fill(spaces, ' ');
//(2) do this whenever you pad:
int len = yourString.length();
if (len < LEVEL_WIDTH) {
    yourString.getChars(0, len, buffer, 0);  // copy the string's chars into the buffer
    System.arraycopy(spaces, 0, buffer, len, LEVEL_WIDTH - len);  // then the padding
    yourString = new String(buffer); //(*)
}

    Notes:
    1. First off, it should be noted that trying to optimize before profiling could be a waste of time: what if the next operation is a database query that takes 100,000 times are long as this?
    2. arraycopy is a native op, so it should run faster than a loop. Arrays.fill calls arraycopy, btw.
    3. For efficiency, I reuse arrays buffer and spaces.
    4. The char array is copied in line (*). It would be nice to avoid that copying: to copy the original string and append padding directly into the string's buffer, but I don't see how that's possible. Using a StringBuffer involves at least as much copying, as far as I can tell (he says, staring at the source code)...
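    (One more option, not from the original reply: on Java 5 and later the formatter can do the padding in a single call; LEVEL_WIDTH is assumed to be a constant int.)

        // "%-Ns" left-justifies the value in N columns, padding with spaces
        String padded = String.format("%-" + LEVEL_WIDTH + "s", yourString);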
