Simple performance verification

I wish to do a simple performance verification on 6035E and 6034E cards as follows: Input a 2Vpp 1kHz square wave into each analog input on the BNC-2110. Can anyone tell me what tolerances I might allow when viewing the results on the monitor using the scope function?
Thank you,
Jim

Hello Jared, and thank you for replying.
The VIs that we use in the scope program to make measurements are:
AI Acquire Waveform.vi
Amplitude and Levels.vi
Extract Single Tone.vi
We have also selected a +5 to -5 Volt range.
I must admit to being confused by NI's published specs on these boards, which is why I am hoping for a more familiar spec like 2 Vpp +/- xx and 1 kHz +/- xx.
Thank you again,
Jim
Hello,
I am unfamiliar with the Scope function you mentioned. Maybe if you could shed a little more light on this I might be able to help better.
When you say tolerance of a measurement, I take it you are looking for specifications for your device. You can find some of the published accuracy numbers for your devices in the E Series User Help file.
If I'm totally missing on what you really want to know, post back and I'll have a look.
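
For reference, NI's E Series absolute-accuracy tables are usually combined into a single number roughly as reading x gain error + range x offset error + noise uncertainty. Below is a minimal sketch of that arithmetic in Java with made-up, illustrative values; substitute the actual 6034E/6035E figures for the +/-5 V range from the published tables before relying on the result. The frequency tolerance would come separately from the board's timebase accuracy spec.

// Sketch only: the gain/offset/noise values below are ILLUSTRATIVE, not the real
// 6034E/6035E specs. Replace them with the numbers from the E Series accuracy table.
public class DaqAccuracySketch {
    public static void main(String[] args) {
        double reading = 1.0;              // V: one level of a 2 Vpp square wave
        double range = 5.0;                // V: +/-5 V input range selected
        double gainErrorPpm = 1000.0;      // ppm of reading (placeholder)
        double offsetErrorPpm = 500.0;     // ppm of range (placeholder)
        double noiseUncertaintyV = 0.0005; // V (placeholder)

        double absAccuracyV = reading * gainErrorPpm * 1e-6
                            + range * offsetErrorPpm * 1e-6
                            + noiseUncertaintyV;
        System.out.printf("Each 1 V level: +/-%.4f V%n", absAccuracyV);
        // Worst case, the errors at the two levels add, so:
        System.out.printf("2 Vpp could read as 2 Vpp +/-%.4f V%n", 2 * absAccuracyV);
    }
}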

Similar Messages

  • Live discussion on "EJB Performance Verification"

    Empirix, the maker of the Bean-test EJB testing tool, is offering a
    free one-hour web event presentation called "Enterprise JavaBeans
    Performance Verification". This event will be held this Friday,
    November 16th, at 2pm EST.
    To register for this web event or to learn about other web events being
    offered by Empirix, go to: http://webevents.empirix.com/Q42001/
    Enterprise JavaBeans Performance Verification
    Enterprise Java Beans (EJBs) are the central components in a
    Java-based enterprise architecture solution. They contain the business
    logic for an enterprise system and implement the communication between
    the Web-tier and database tiers. EJBs typically are architected and
    implemented for months before integrating with the Web-tier hardware
    and software. Waiting until late in the software project to verify the
    scalability of the overall EJB design and the efficiency of the
    implementation with a Web test tool is risky and may cause the entire
    software project to fail.
    Due to the important role EJBs play, performance testing of the EJB
    architecture and implementation is critical during the entire design
    cycle, the test cycle, application server tuning, and with any
    hardware and software environment changes. Manual vs. automated EJB
    component verification strategies will be discussed. During the
    presentation, we will show how an example EJB will be scalability
    tested using Bean-test, Empirix's automated EJB component test tool.
    We will show how Bean-test automatically creates a test harness,
    exercises an EJB under load, isolates a scalability problem, and
    confirms an implemented correction to the scalability problem is
    successful.

    Note, there is a replay of this free web event on Monday, December
    10th at 2pm EST for those who could not make it the first time.
    Steve

  • Simple performance question

    Simple performance question, put the simplest way possible: assume I have an int[][][][][] matrix and a boolean add. The array is several dimensions long.
    When add is true, I must add a constant value to each element in the array.
    When add is false, I must subtract a constant value from each element in the array.
    Assume this is very hot code, i.e. it is called very often. How expensive is the condition checking? I present the two scenarios:
    private void process() {
        for (int i = 0; i < dimension1; i++)
            for (int ii = 0; ii < dimension1; ii++)
                for (int iii = 0; iii < dimension1; iii++)
                    for (int iiii = 0; iiii < dimension1; iiii++)
                        if (add)
                            matrix[i][ii][iii][...] += constant;
                        else
                            matrix[i][ii][iii][...] -= constant;
    }
    private void process() {
        if (add)
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][...] += constant;
        else
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][...] -= constant;
    }
    Is the second scenario worth a significant performance boost? Without understanding how the compiler generates executable code, it seems that in the first case n^d conditions are checked, whereas in the second only 1. It is, however, less elegant, but I am willing to do it for a significant improvement.

    erjoalgo wrote:
    I guess my real question is, will the compiler optimize the condition check out when it realizes the boolean value will not change through these iterations, and if it does not, is it worth doing that micro optimization?
    Almost certainly not; the main reason being that
    matrix[i][ii][iii][...] +/-= constant
    is liable to take many times longer than the condition check, and you can't avoid it. That said, Mel's suggestion is probably the best.
    but I will follow amickr's advice and not worry about it.
    Good idea. Saves you getting flamed with all the quotes about premature optimization.
    Winston
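
    For what it's worth, a third variant avoids the per-element branch without duplicating the loop nest, by folding the boolean into a signed delta once up front (a sketch, not necessarily what Mel suggested; the matrix is shown with four indices for brevity):

    // Sketch: hoist the add/subtract decision out of the hot loops.
    private void process() {
        final int delta = add ? constant : -constant; // decided once per call
        for (int i = 0; i < dimension1; i++)
            for (int ii = 0; ii < dimension1; ii++)
                for (int iii = 0; iii < dimension1; iii++)
                    for (int iiii = 0; iiii < dimension1; iiii++)
                        matrix[i][ii][iii][iiii] += delta; // adding a negative delta subtracts
    }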

  • Performance verification PXI 6608

    Hi,
    I have a question about the performance verification of the PXI 6608.
    This is what I was able to gather from the manual.
    We have two modules:
    CTR0: referenced to an external clock that has a minimum uncertainty of 0.75 ppb, and generates a 400 s gate pulse to CTR1.
    CTR1: referenced to the internal 10 MHz OCXO; it counts the number of cycles in that 400 s window: 400 s * 10 MHz = 4*10^9 cycles. Any result different from this means an error.
    Now, to my understanding, the performance verification is comparing the 10 MHz OCXO with the external clock. So, looking at the manual, we have an uncertainty of 75 ppb when the 10 MHz OCXO is in slot 2: 75 ppb * 10 MHz = +/-0.75 Hz. BUT the performance verification assumes a tolerance of 0.1 Hz (9,999,999.9 Hz < f < 10,000,000.1 Hz).
    The calibration executive takes the same approach as the performance verification.
    Why does the PXI 6608 have a tolerance different from the manual?
    Are we not testing the 10 MHz OCXO?
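
    Working through the numbers quoted above (a small sketch of the arithmetic only, using the question's own figures):

    public class Pxi6608ToleranceSketch {
        public static void main(String[] args) {
            double nominalHz = 10e6;      // 10 MHz OCXO under test
            double gateSeconds = 400.0;   // gate pulse generated by CTR0 from the external reference

            double expectedCounts = nominalHz * gateSeconds;   // 4e9 counts
            double specToleranceHz = nominalHz * 75e-9;        // 75 ppb -> +/-0.75 Hz
            double verifyToleranceHz = 0.1;                    // limit used by the verification procedure

            System.out.printf("Expected counts: %.0f%n", expectedCounts);
            System.out.printf("Spec tolerance:  +/-%.2f Hz (+/-%.0f counts in the gate)%n",
                    specToleranceHz, specToleranceHz * gateSeconds);
            System.out.printf("Test limit:      +/-%.2f Hz (+/-%.0f counts in the gate)%n",
                    verifyToleranceHz, verifyToleranceHz * gateSeconds);
        }
    }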

    Hi,
    Thanks for your suggestion, but my question is a little bit different.
    1) We have an equipment master in the PM module.
    2) The calibration scenario is already mapped in PM with a maintenance order and inspection type "14".
    3) This is related to performance verification, which is different from the calibration scenario.
    4) I don't understand what we have to do in QA05.
    5) I want another inspection lot for "Performance verification", separate from "14".
    6) We have defined frequencies with test parameters, e.g. HPLC has a PV frequency of 3 months, 6 months & 12 months with different test parameters.
    7) Now I want, at each time point, automatic generation of an inspection lot with the defined inspection plan in QA32. After testing is completed, we will give the UD and declare the equipment as qualified.
    It is different from the calibration scenario.
    Calibration activity is done by a PM member, but the performance activity is monitored by a QA (QM) person.
    Please help me to design this scenario.
    Thanks & Regards,
    Ms. Kruti Shah

  • Oracle: Please implement simple performance improvement in JDBC (Thin) Driver

    Oracle should put dynamic prefetch into their (thin) JDBC driver. I don't use the OCI driver so I don't know how this applies.
    Some of you may be aware that tweaking a statement's row prefetch size can improve performance quite a lot. IIRC, the default prefetch is about 30 and that's pretty useless for queries returning large (# of rows) resultsets, but it's pretty good for queries returning only a small number of records. Just as an example, when running a simple SELECT * FROM foo WHERE ROWNUM <= somenumber query, here's what I got:
    Prefetch = 1: 10000 rows = 15 secs, 1000 rows = 1.5 secs, 10 rows = 30 ms
    Prefetch = 500: 10000 rows = 2.5 secs, 1000 rows = 280 ms, 10 rows = 80 ms
    Prefetch = 2000: 10000 rows = 2 secs, 1000 rows = 700 ms, 10 rows = 460 ms
    From our experience, the default of 30 (?) is too low for most applications, 500 to 1000 would be a better default. In the end, though, the only way to get best performance is to adjust the prefetch size to the expected number of rows for every query. While that sounds like a reasonable effort for developers of a simple client/server application, in a 3-tier system that deals with connection pools in an application server, this just won't work, so here's my suggestion on how Oracle should address this:
    Instead of having just a single prefetch setting for the statement (or connection), there should be an 'initial' prefetch value (with a default of somewhere between 1 and 50) and a maximum prefetch value (with a default of somewhere between 500 and 5000). When the driver pulls the first batch of records from the server it should use the initial prefetch. If there are more records to fetch, it should fetch them using the maximum prefetch. This would allow the driver to perform much better for small AND large resultsets while, at the same time, making it transparent to the application (and application developer).
    [email protected]
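
    In the meantime, the prefetch can at least be tuned per statement through the standard JDBC API; a minimal sketch (connection string, credentials and the row-count estimate are placeholders):

    import java.sql.*;

    public class FetchSizeSketch {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger"); // placeholders
                 PreparedStatement ps = conn.prepareStatement(
                    "SELECT * FROM foo WHERE ROWNUM <= ?")) {
                ps.setInt(1, 10000);
                ps.setFetchSize(500); // rough estimate of expected rows; the driver default is far smaller
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // process row
                    }
                }
            }
        }
    }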

    I have exactly the same problem. I tried to find out what is going on and changed several JDBC drivers on AIX, but no luck. I also ran the process on my laptop, which produced better and faster performance.
    Therefore I made a special workaround (not practical) by creating flat files and defining the data as an external table; Oracle will read the data in those files as if it were data inside a table. This gave me very fast insertion into the database, but I am still looking for an answer to your question here. Using Oracle on an AIX machine is a normal business setup followed by a lot of companies, and there must be a solution for this.

  • Simple signature verification

    Hi everyone,
    I'm trying to get a simple working example of public key signature verification with openssl/java.security, but so far I haven't been able to get it to verify. Can someone please spot what I might have done wrong?
    openssl commands:
    openssl genrsa -out private_key.pem -3 768
    openssl rsa -in private_key.pem -pubout -out public_key.pem
    openssl dgst -md5 -sign private_key.pem -out sign.file test.file
    openssl dgst -md5 -verify public_key.pem -signature sign.file test.file      // verifies OK
    The file "test.file" only contains the text "message".
    So on to the Java side of things. Since openssl encodes the public key as base64, I used a small utility (http://www.fourmilab.ch/webtools/base64/) to decode it so I could read it in as a byte[]. It says that "-" is an invalid character, so I removed the header (-----BEGIN PUBLIC KEY-----) and footer (-----END PUBLIC KEY-----) before I decoded it.
    The following is my code to try to verify the signature on the message.
         public static void main(String[] args) {
              byte[] pkbytes = getFileBytes("/home/me/RSA/keys/pubkeybytes.pem"); // base64-decoded public key
              System.out.println("pkbytes: " + new String(pkbytes));
              byte[] sigbytes = getFileBytes("/home/me/RSA/keys/sign.file");
              System.out.println("sigbytes: " + new String(sigbytes));
              try {
                   KeyFactory keyFactory = KeyFactory.getInstance("RSA");
                   EncodedKeySpec publicKeySpec = new X509EncodedKeySpec(pkbytes);
                   PublicKey publicKey = keyFactory.generatePublic(publicKeySpec);
                   // set up verification
                   Signature sig = Signature.getInstance("MD5withRSA");
                   sig.initVerify(publicKey);
                   // read in the message
                   FileInputStream fis = new FileInputStream("/home/me/RSA/keys/test.file"); // just contains "message"
                   byte[] dataBytes = new byte[8192];
                   int nread = fis.read(dataBytes);
                   while (nread > 0) {
                        sig.update(dataBytes, 0, nread);
                        nread = fis.read(dataBytes);
                   }
                   // verify
                   System.out.println("Verification: " + sig.verify(sigbytes));
              } catch (Exception e) {
                   System.out.println(e.toString());
                   e.printStackTrace();
              }
         }
         public static byte[] getFileBytes(String filename) {
              // NOTE: returns the whole 8192-byte buffer even when the file is shorter,
              // so the caller sees trailing zero bytes.
              byte[] sigBytes = null;
              try {
                   FileInputStream in = new FileInputStream(filename);
                   sigBytes = new byte[8192];
                   int count = in.read(sigBytes);
                   in.close();
              } catch (Exception e) {
                   System.out.println(e.toString());
                   e.printStackTrace();
              }
              return sigBytes;
         }
    I'm really not sure what is wrong, but it is probably something obvious since I'm fairly new at this.
    Any help is really appreciated,
    Thanks.

    You didn't mention what the output was; did it throw exceptions? Instead of using some ad-hoc base64 decoder, just output the public key in the correct form directly from openssl, like the following:
    openssl rsa -in private_key.pem -pubout -outform DER -out public_key.der
    NOTE: If you are not going to do something useful with an exception, then DO NOT catch it.
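
    If you do stay with the PEM output, the header and footer can also be stripped and the body decoded inside Java itself (a sketch using the standard java.util.Base64 MIME decoder, which tolerates the line breaks):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.KeyFactory;
    import java.security.PublicKey;
    import java.security.spec.X509EncodedKeySpec;
    import java.util.Base64;

    public class PemPublicKeySketch {
        public static PublicKey loadPem(String path) throws Exception {
            String pem = new String(Files.readAllBytes(Paths.get(path)))
                    .replace("-----BEGIN PUBLIC KEY-----", "")
                    .replace("-----END PUBLIC KEY-----", "");
            byte[] der = Base64.getMimeDecoder().decode(pem); // ignores whitespace/newlines
            return KeyFactory.getInstance("RSA").generatePublic(new X509EncodedKeySpec(der));
        }
    }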

  • What is the recommended way to perform tape verification?

    I currently have 12 protection groups with a total of about 30 protected members.  I have the "Check backup for data integrity (time consuming operation)" option enabled for all jobs.  The problem is with the way that DPM 2012R2 performs
    verification.  Here is the chronology of backing up an SCCM server's SQL databases that I just witnessed:
    The summary of what DPM did is as follows:
    write and verify approximately 5.6 GB of data
    unload and load the same tape 8 times
    elapsed time: 43 minutes, 6 seconds
    average data rate: 2.2 megabytes per second
    When doing verification DPM unloads and loads the same tape once for each protected member.  Obviously this doesn't scale.  Furthermore, these unnecessary cycles of the tape loading mechanism will reduce the life of the tape library because
    the mechanism has a mean time before failure measured in tape load cycles.  So the question is, what is the currently recommended practice for achieving tape verification with DPM 2012R2?
    I have
    read here that "Tape verify jobs should be scheduled to start after all the tape backups jobs finish."  Is the corollary to this statement to "disable Check backup for data integrity" on all protection groups?  Also, if
    this is indeed the recommended practice, then how, exactly, do you "schedule a tape verify job to start after all the tape backup jobs finish"?
    Thanks for your help,
    Alex

    Ugh.  This is looking pretty awkward.  Here are the facts as I see them.  If you want to verify that the entire contents of a tape were correctly written, you have three options:
    enable "Check backup for data integrity (time consuming operation)" for each PG that is written to the tape
    run Test-DPMTapeData for each recovery point on the tape
    recover each recovery point on the tape
    Each and every one of these options results in a minimum of one tape load and unload cycle per recovery point.  I am using a Dell TL2000 tape library and DPM 2012R2.  Based on the example in my original post, DPM was able to load, verify, and unload
    6 recovery points in 32 minutes for a rate of approximately 11 recovery points per hour.  If you are doing daily tape backups, then the absolute highest number of recovery points you can ever expect to verify is 11*24=264 recovery points.  This is
    so even if all of those recovery points are on a single tape and is a best-case scenario assuming 100% of the duty cycle of the tape library is dedicated to verification, which of course would never be the case in real life.
    If I have made a factual error here please correct me.  Assuming I have these facts correct, we can conclude the following:
    Using DPM 2012R2 there is no possible method to comprehensively verify the contents of daily tape backups if there are more than approximately 250 recovery points per day.
    Above that limit, the most verification you could hope for is spot-checking.  Furthermore, the life expectancy of a tape library is likely to be reduced to months from years if it is performing 250 tape load cycles every day.  This is rather an
    unacceptable result for an enterprise-class backup system.  The solution is straightforward: DPM should provide a means of verifying, copying, or recovering all recovery points on a single tape in a single load/read/unload cycle.
    Am I missing something here?  I just don't see how any form of substantial tape backup verification can work using DPM in its current form at scale.
    Alex

  • Performance problem in select data from data base

    hello all,
    could you please suggest which SELECT statement is good for fetching data from the database if the database contains more than 10 lakh (1,000,000) records?
    I am using the SELECT ... PACKAGE SIZE n statement, but it's taking a lot of time.
    with best regards
    srinivas rathod

    Hi Srinivas,
    if you are selecting a huge amount of data, you can reduce the time a little by using better techniques.
    I do not think SELECT ... PACKAGE SIZE alone will give good performance.
    See the examples below:
    ABAP Code Samples for Simple Performance Tuning Techniques
    1. Query including select and sorting functionality
    Code A
    tables: mara, mast.
    data: begin of itab_new occurs 0,
            matnr like mara-matnr,
            ernam like mara-ernam,
            mtart like mara-mtart,
            matkl like mara-matkl,
            werks like mast-werks,
            aenam like mast-aenam,
            stlal like mast-stlal,
          end of itab_new.
    select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
    into table itab_new from mara as f inner join mast as g on
    f~matnr = g~matnr where g~stlal = '01' order by f~ernam.
    Code B
    tables: mara, mast.
    data: begin of itab_new occurs 0,
          matnr like mara-matnr,
          ernam like mara-ernam,
          mtart like mara-mtart,
          matkl like mara-matkl,
          werks like mast-werks,
          aenam like mast-aenam,
          stlal like mast-stlal,
    end of itab_new.
    select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
    into table itab_new from mara as f inner join mast as g on f~matnr =
    g~matnr where g~stlal = '01'.
    sort itab_new by ernam.
    Both the above codes essentially do the same function, but the execution time of Code B is considerably less than that of Code A. Reason: the ORDER BY clause associated with a select statement increases the execution time of the statement, so it is preferable to sort the internal table once after selecting the data.
    2. Performance Improvement Due to Identical Statements – Execution Plan
    Consider the queries below and their levels of efficiency in saving execution time.
    Code C
    tables: mara, mast.
    data: begin of itab_new occurs 0,
          matnr like mara-matnr,
          ernam like mara-ernam,
          mtart like mara-mtart,
          matkl like mara-matkl,
          werks like mast-werks,
          aenam like mast-aenam,
          stlal like mast-stlal,
    end of itab_new.
    select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
    into table itab_new from mara as f inner join mast as g on f~matnr =
    g~matnr where g~stlal = '01' .
    sort itab_new.
    select f~matnr f~ernam
    f~mtart f~matkl g~werks g~aenam g~stlal
    into table itab_new from mara as
    f inner join mast as g on f~matnr =
    g~matnr where g~stlal
    = '01' .
    Code D (Identical Select Statements)
    tables: mara, mast.
    data: begin of itab_new occurs 0,
          matnr like mara-matnr,
          ernam like mara-ernam,
          mtart like mara-mtart,
          matkl like mara-matkl,
          werks like mast-werks,
          aenam like mast-aenam,
          stlal like mast-stlal,
    end of itab_new.
    select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
    into table itab_new from mara as f inner join mast as g on f~matnr =
    g~matnr where g~stlal = '01' .
    sort itab_new.
    select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
    into table itab_new from mara as f inner join mast as g on f~matnr =
    g~matnr where g~stlal = '01' .
    Both the above codes essentially do the same function, but the execution time of Code D is considerably less than that of Code C. Reason: each SQL statement is converted into a series of database operation phases during execution. In the second phase (the prepare phase), an "execution plan" is determined for the current SQL statement and stored; if an identical select statement is used anywhere in the program, the same execution plan is reused to save time. So keep the structure of the select statement the same when it is used more than once in the program.
    3. Reducing Parse Time Using Aliasing
    A statement that does not have a cached execution plan must be parsed before execution, and this parsing phase is highly time- and resource-consuming; to reduce parsing time, every SQL query should use alias names, for the following reasons:
    1.     Providing an alias name enables the query engine to resolve the tables to which the specified fields belong.
    2.     Providing a short alias name (a single-character alias) is more efficient than providing a long alias name.
    Code E
    select j~matnr j~ernam j~mtart j~matkl
    g~werks g~aenam g~stlal into table itab_new from mara as
    j inner join mast as g on j~matnr = g~matnr where
                g~stlal = '01' .
    In the above code the alias name used is ‘ j ‘.
    4. Performance Tuning Using Order by Clause
    If, after selecting data, you are going to read a particular record based on some key values, then the read can be optimized by ordering the data in the same order in which the keys are used in the READ statement.
    Code F
    tables: mara, mast.
    data: begin of itab_new occurs 0,
          matnr like mara-matnr,
          ernam like mara-ernam,
          mtart like mara-mtart,
          matkl like mara-matkl,
          end of itab_new.
    select MATNR ERNAM MTART MATKL from mara into table itab_new where
    MTART = 'HAWA' ORDER BY  MATNR ERNAM  MTART MATKL.
    read table itab_new with key MATNR = 'PAINT1'   ERNAM = 'RAMANUM'
    MTART = 'HAWA'   MATKL = 'OFFICE'.
    Code G
    tables: mara, mast.
    data: begin of itab_new occurs 0,
          matnr like mara-matnr,
          ernam like mara-ernam,
          mtart like mara-mtart,
          matkl like mara-matkl,
          end of itab_new.
    select MATNR ERNAM MTART MATKL from mara into table itab_new where
    MTART = 'HAWA' ORDER BY  ERNAM MATKL MATNR MTART.
    read table itab_new with key MATNR = 'PAINT1'   ERNAM = 'RAMANUM'
    MTART = 'HAWA'   MATKL = 'OFFICE'.
    In Code F above, the READ statement following the SELECT uses the keys in the order MATNR, ERNAM, MTART, MATKL, so the read is less time-intensive when the internal table is ordered in the same order as the keys in the READ statement.
    5. Performance Tuning Using Binary Search
    A very simple but useful method of fine-tuning the performance of a READ statement is to add BINARY SEARCH to it. If the internal table contains more than about 20 entries, the traditional linear search proves to be more time-intensive.
    Code H
    select * from mara into corresponding fields of table intab.
    sort intab.     
    read table intab with key matnr = '11530' binary search.
    Code I
    select * from mara into corresponding fields of table intab.
    sort intab.     
    read table intab with key matnr = '11530'.
    Thanks
    Seshu

  • Creating a performance report based upon a custom group

    I am trying to create a simple performance report based on a SCOM group that I created, however when I run the report the relevant data cannot be found.  When I look at the group membership I see a list of Windows servers.  I then go into a generic
    performance report, add a single chart, and line series, and select "Add group" and then search and select the SCOM group I created.  I then add % processor time for 2008 systems as my rule.  However when the report is run, no relevant
    data is found. Performance reports run fine when  selecting "Add group" and selecting the members of the group themselves.
    My suspicion is that it is trying to run the performance report based on the group object and not the members of the group. Is there any way that I can accomplish this?  Perhaps via XML?
    Keith

    Hi,
    For your reference:
    Creating Useful Custom Reports in OpsMgr: How to create a custom performance counter report for a group of servers
    http://www.systemcentercentral.com/creating-useful-custom-reports-in-opsmgr-how-to-create-a-custom-performance-counter-report-for-a-group-of-servers/
    SCOM reports on performance counters for large groups of servers
    http://www.bictt.com/blogs/bictt.php/2010/11/28/scom-reports-on-performance-counters-for-large-groups-of-servers
    Regards,
    Yan Li
    Regards, Yan Li

  • Perform and Form statements

    Hello,
    Can anyone give examples of using the PERFORM and FORM statements? What do these statements actually do?
    thanks.

    See this sample for PERFORM ... USING...CHANGING
    DATA : c1 TYPE i, c2 TYPE i, res TYPE i.
    c1 = 1.
    c2 = 2.
    PERFORM sum USING c1 c2 CHANGING res.
    WRITE:/ res.
    *&      Form  sum
    *       text
    form sum using p_c1 p_c2 changing value(p_res).
    p_res = p_c1 + p_c2.
    endform. " sum
    Note the difference between the above and below perform.
    DATA : c1 TYPE i, c2 TYPE i, res TYPE i.
    c1 = 1.
    c2 = 2.
    data: subroutinename(3) VALUE 'SUM'.
    PERFORM (subroutinename) IN PROGRAM Y_JJTEST1 USING c1 c2 CHANGING res.
    WRITE:/ res.
    *&      Form  sum
    *       text
    form sum using p_c1 p_c2 changing value(p_res).
    p_res = p_c1 + p_c2.
    endform. " sum
    Another sample of a simple PERFORM:
    PERFORM HELP_ME.
    FORM HELP_ME.
    ENDFORM.
    ... TABLES itab1 itab2 ...
    TYPES: BEGIN OF ITAB_TYPE,
             TEXT(50),
             NUMBER TYPE I,
           END OF ITAB_TYPE.
    DATA:  ITAB TYPE STANDARD TABLE OF ITAB_TYPE WITH
                     NON-UNIQUE DEFAULT KEY INITIAL SIZE 100,
           BEGIN OF ITAB_LINE,
             TEXT(50),
             NUMBER TYPE I,
           END OF ITAB_LINE,
           STRUC like T005T.
    PERFORM DISPLAY TABLES ITAB
                    USING  STRUC.
    FORM DISPLAY TABLES PAR_ITAB STRUCTURE ITAB_LINE
                 USING  PAR      like      T005T.
      DATA: LOC_COMPARE LIKE PAR_ITAB-TEXT.
      WRITE: / PAR-LAND1, PAR-LANDX.
      LOOP AT PAR_ITAB WHERE TEXT = LOC_COMPARE.
      ENDLOOP.
    ENDFORM.
    Hope this helps.
    Reward points if this helps you.

  • Urgent - Oracle Applications Performance - Please Help me!

    Hi folks!
    I'm having a lot of performance trouble in Oracle Applications, and the DBA and the network analyst can't help me. I've had this trouble for two weeks, and nobody can explain why. The system is too slow, and if I try a simple select in the database it takes a lot of time; I think (I think, ok?) there are some database troubles...
    I saw some docs
    1. System Mgmt White Paper http://www.oracle.com/appsnet/technology/managing/collateral/wp_managing11i.pdf
    2. System Mgmt PPT http://www.oracle.com/pls/oow/oow_user.download?p_event_id=15&p_file=P39948.zip
    3. Reducing 11i Downtime PPT http://www.oracle.com/pls/oow/oow_user.download?p_event_id=15&p_file=P39947.zip
    4. Performance and Scalability site : There are a couple of excellent presentations and white papers which will give you the right way to do performance tuning. http://www.oracle.com/appsnet/technology/performance/content.html
    but I need something more specific, like
    simple performance tests.
    Best Regards!
    Filipe
    [email protected]
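
    For a first, crude client-side check of the kind requested, timing a trivial query over JDBC can at least separate connection/network overhead from real database work (a sketch; the connection details are placeholders, not this environment):

    import java.sql.*;

    public class SimpleDbTimingSketch {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//appshost:1521/PROD", "user", "password"); // placeholders
                 Statement st = conn.createStatement()) {
                long start = System.currentTimeMillis();
                try (ResultSet rs = st.executeQuery("SELECT 1 FROM dual")) {
                    rs.next();
                }
                System.out.println("Trivial round trip: "
                        + (System.currentTimeMillis() - start) + " ms");
            }
        }
    }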

    Hi
    Check this one.
    http://www.appsworld2004.com/scps/controller/catalog
    Search for item 1066. This is a presentation on "Performance Tuning Users Tips and Techniques" by Ahmed Alomari, Applications Performance Group, Oracle Corporation. You may need to register and then login.
    If you can't access this presentation, let me know; I can mail it to you as well.
    There was a similar presentation at AppsWorld 2002 as well. I am not able to locate the link yet.
    Best Wishes
    Vinod Subramanian

  • Spatial Insert Performance

    I'm running 9.2.0.3EE on W2K.
    Ran some simple performance tests...
    With a simple non-spatial table (id, lat, lon), I can get inserts up around 12,000 records per second.
    I setup a similar table for use with spatial:
    CREATE TABLE test2 (
    id number not null,
    location MDSYS.SDO_GEOMETRY not null,
    constraint pk_test2 primary key (id)
    );
    When there is no spatial index, I can get about 10,000 inserts per second, similar to the non-spatial table.
    After adding a spatial index, performance drops to 135 inserts/second. That's about two orders of magnitude different. Am I doing something radically wrong here, or is this typical with this product?
    Here is the index setup (RTREE Geodetic):
    INSERT INTO USER_SDO_GEOM_METADATA
    VALUES (
    'test2',
    'location',
    MDSYS.SDO_DIM_ARRAY(
    MDSYS.SDO_DIM_ELEMENT('Longitude', -180, 180, 10),
    MDSYS.SDO_DIM_ELEMENT('Latitude', -90, 90, 10)
    ),
    8307 -- SRID for the Lon/Lat WGS84 coordinate system
    );
    commit;
    CREATE INDEX test2_spatial_idx
    ON test2(location)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX
    PARAMETERS('LAYER_GTYPE=POINT');
    Any pointers are appreciated!
    thanks,
    --Peter
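
    Not an answer to the index overhead itself, but worth noting when measuring insert rates from a client: batching the point inserts usually matters as much as anything else. A sketch over JDBC, reusing the test2 table defined above (connection details and coordinate values are placeholders):

    import java.sql.*;

    public class SpatialInsertSketch {
        public static void main(String[] args) throws SQLException {
            String sql = "INSERT INTO test2 (id, location) VALUES (?, "
                       + "MDSYS.SDO_GEOMETRY(2001, 8307, "
                       + "MDSYS.SDO_POINT_TYPE(?, ?, NULL), NULL, NULL))";
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger"); // placeholders
                 PreparedStatement ps = conn.prepareStatement(sql)) {
                conn.setAutoCommit(false);
                for (int id = 0; id < 10000; id++) {
                    ps.setInt(1, id);
                    ps.setDouble(2, -122.0 + id * 1e-4); // longitude (illustrative)
                    ps.setDouble(3, 37.0 + id * 1e-4);   // latitude (illustrative)
                    ps.addBatch();
                    if (id % 1000 == 999) ps.executeBatch(); // send in chunks
                }
                ps.executeBatch();
                conn.commit();
            }
        }
    }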

    Hi,
    Recent testing of 10g on HP 4640 hardware (Linux Itanium, 1.5 GHz processors, good disks) yielded insert rates of over 1300 points per second (single-process insert rate).
    Features were put into 10g to enable this increase in performance. On other hardware (testing 9iR2 vs. 10g), 10g was more than twice as fast as 9iR2. I didn't have an older version of Oracle on this machine, so I couldn't compare insert speeds.

  • Workflow verification test failed

    A workflow I created last week was still working properly, but today when I ran it, it failed at the decision step. Therefore I ran the simple workflow verification test; a work item was created and sent to the inbox, then I executed the work item and it failed immediately. Looking at the log, the failing point is at the decision step again. That's really weird. The configuration seems to be all fine in SWU3. My suspicion is that someone changed settings in the system, but it's nearly impossible for me to trace what kind of change was made. Any suggestions?
    Thanks.

    What error message is appearing? Please analyse the workflow log in SWIA. It is impossible to help you with so little information.
    Thanks
    Arghadip

  • Link Verification program giving wrong results?

    Hey all,
    I wrote a simple link verification program which verifies dynamic links pulled from a database, but the results are not correct. Some valid links turn out to be marked as error links, especially links to some of the PDF files. I am wondering why. Did anyone have similar problems?
    Thank you
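
    Without seeing the program it is hard to say, but one common source of false "error" results is how the check itself is made (for example, HEAD requests that some servers reject for PDFs, or redirects treated as failures). A minimal sketch of a status-code check for comparison - this is an assumption about how such a verifier might work, not the original code:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class LinkCheckSketch {
        // Returns true if the URL answers with a 2xx or 3xx status code.
        static boolean linkOk(String link) {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(link).openConnection();
                conn.setRequestMethod("HEAD");          // cheap check, but some servers reject HEAD...
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(5000);
                int code = conn.getResponseCode();
                if (code == HttpURLConnection.HTTP_BAD_METHOD
                        || code == HttpURLConnection.HTTP_NOT_IMPLEMENTED) {
                    conn = (HttpURLConnection) new URL(link).openConnection(); // ...so retry with GET
                    conn.setRequestMethod("GET");
                    code = conn.getResponseCode();
                }
                return code >= 200 && code < 400;
            } catch (Exception e) {
                return false; // unreachable, malformed, timed out, etc.
            }
        }

        public static void main(String[] args) {
            System.out.println(linkOk("http://example.com/some.pdf")); // illustrative URL
        }
    }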

    Could be something wrong with your program. Or not. That's about all that can be said based on the information you gave.

  • Simple Authentication with SMP 10.1 and FMS 3.5

    Good day all,
    I am looking to add simple authentication to the SMP player for use with FMS 3.5. I recently came across a technical paper published by Adobe titled, "Video content protection measures enabled by Adobe Flash Media Interactive Server 3.5". Within this document are three examples of user authentication with code samples. I am starting with the "simple" client verification using a unique token authentication key method first.
    I've noticed that SMP doesn't have any FMS security mechanisms built in, at least none that I've been able to identify in the documentation or feature specs. Did I miss something? I am looking for assistance in getting started with adding this feature to SMP. So my question is: where could I add the client-side ActionScript within the SMP structure?
    I'd very much like to hear about others' experiences with adding security mechanisms to SMP used with FMS.
    Thank you.

    Andrian - Thank you for the quick reply. I'm glad SMP has support for the playback of protected content. Is there more documentation than this demo on this topic?
    I'll explain what I'm doing. I am implementing SMP as the default video player application used in online courses at the Savannah College of Art and Design. Identifying the player and implementing its use in our production workflow is the first step in a strategy to deliver a better video experience and leverage the scalability and flexibility of SMP. On the back-end integration with our FMS, I have been asked to implement some user authentication. We don't need to re-auth the students, as they have already been authenticated through our LMS. What is desired is that each player instance authenticates with our server to prevent stream ripping.
    The simple user token authentication key example from the linked document seems to best suit this initial need.
