Difference between oraext:lookup-table & DB Adapter

Hi,
Please help me understand the performance difference between the oraext:lookup-table database XPath function and a DB Adapter operation on a table.
Which one is better for fetching a single value from a table?
The table resides in another application's database.
Thanks
Renu

If you have to fetch a single value, oraext:lookup-table is the better approach performance-wise.
Creating a DB Adapter gives you more control over the database query through the JCA file.
So if you have simple data to retrieve, you can go ahead with the function.
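For illustration, here is the single-value lookup both approaches end up issuing, as a minimal sketch; the table, column, key, and data source names are hypothetical, and the oraext:lookup-table argument order shown in the comment should be verified against your SOA Suite documentation.

-- What a single-value fetch boils down to, whichever mechanism issues it
SELECT email                -- output column
FROM   employees            -- table in the other application's database (hypothetical)
WHERE  employee_id = 101;   -- key column and key value

-- Roughly the equivalent XPath call (hypothetical names; check the exact signature for your release):
-- oraext:lookup-table('EMPLOYEES', 'EMPLOYEE_ID', '101', 'EMAIL', 'jdbc/remoteDS')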

Similar Messages

  • Using a dynamic lookup table

    Hi,
    Simple question for the ODI gurus: for an interface that we use, there is a requirement to dynamically look up values in a lookup table and then use those values from the lookup table to write to a target table.
    This is instead of hard-coding string values in the target table of the interface.
    ODI (11g) gives the option to use a lookup table, but it assumes a relationship between the lookup table and the source table - i.e. it nicely lets you specify either an outer join or a subquery in the select statement.
    In this case there is no such relationship between the lookup and source columns. So how would you do this?
    An outer join could be used, but it requires specifying a join on columns of the source and lookup tables. You could use dummy columns, but that is not ideal as columns might actually match. Any ideas?
    Cheers

    If there is no relationship between the values in your source table and those in the lookup table, I can't see how you expect to perform a lookup - without a relationship it would be guesswork. For your requirement you would have to define a link table which holds the associations between the two tables, as sketched below.
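    As a minimal sketch of that idea (all table and column names are hypothetical):

    -- Association (link) table that records which lookup row belongs to which source row
    CREATE TABLE src_lookup_link (
      src_key    NUMBER,   -- key of the source row
      lookup_key NUMBER    -- key of the lookup row it should resolve to
    );

    -- The interface can then join through the link table instead of guessing a relationship
    SELECT s.*, l.lookup_value
    FROM   source_table s
    JOIN   src_lookup_link k ON k.src_key    = s.src_key
    JOIN   lookup_table    l ON l.lookup_key = k.lookup_key;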

  • How to import a lookup table

    Hi all,
    I am trying to import a simple XML file using import manager with the following information:
    - NAME: STRING
    - LAST NAME: STRING
    - CITY: STRING
    I have the following information in my repository:
    - NAME: TEXT.
    - LAST NAME: TEXT.
    - CITY: LOOKUP TABLE (CITY TABLE)
    CITY TABLE is a lookup table with just one field called 'NAME: TEXT'.
    With Import Manager I select the XML field CITY:STRING and clone it. After that, I select the lookup table called TABLE CITY and make the following map:
    - Remote key (repository) maps to City clone (XML file)
    - name (repository) maps to city (XML file).
    Finally, I select the Products table from the repository and make a new map between CITY: LOOKUP TABLE (from the repository) and city (from the XML file).
    The import status fails. What is wrong?
    Thanks in advance,
    Marta

    Hi Marta,
    You have to import the lookup table before importing the main table data, using an extra import map.
    But if I understand your description correctly, you only have one field in your city repository, which is the display field. This way, you can also load into the city lookup table using only the main table import.
    When you select 'city' in the field mapping (destination table is the main table), you will see the distinct values in the value mapping area below.
    In the destination pane of the value mapping, there shouldn't be any values for the initial import. So you have to select all your source values and 'ADD' them to your repository.
    If this still doesn't work, try the two-map approach and import the cities prior to the main table records; then you can automap them later with the main table import.
    Please tell me if this still doesn't work.
    Regards,
    Christiane

  • What is the difference between Rowid and primary key?

    Dear all,
    My question is about the materialized view creation parameters (WITH ROWID and
    WITH PRIMARY KEY).
    My master table contains a primary key,
    and I created my materialized view as follows:
    CREATE MATERIALIZED VIEW LV_BULLETIN_MV
    TABLESPACE USERS
    NOCACHE
    LOGGING
    NOCOMPRESS
    NOPARALLEL
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    AS
    SELECT
    BCODE ID, BTYPE BTYPE_ID,
    BDATE THE_DATE,SYMBOL_CODE STOCK_CODE,
    BHEAD DESC_E, BHEADARB DESC_A,
    BMSG TEXT_E, BMSGARB TEXT_A,
    BURL URL, BTIME THE_TIME
    FROM BULLETIN@egid_sefit;
    I need to know: is there a difference between using WITH ROWID and WITH PRIMARY KEY for the performance of the query?

    Hi again,
    fast refreshing complex views based on rowids, as in the previous subject (your example shows that), is not possible.
    Complex remote (replication) snapshots cannot be based on ROWID either.
    for 10.1
    http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10759/statements_6002.htm#sthref5054
    for 10.2
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_6002.htm#sthref6873
    So I guess (didn't check it) that this applies ONLY to replication snapshots.
    This is not documented clearly though (documentation bug ?!)
    Documentation states that the following is generally not possible with Rowid MVIEWS:
    Distinct or aggregate functions
    GROUP BY or CONNECT BY clauses
    Subqueries
    Joins
    Set operations
    Rowid materialized views are not eligible for fast refresh after a master table reorganization until a complete refresh has been performed.
    The main purpose of my statements was to give a few tips on how to avoid common problems with this complex subject. For example: being able to CREATE an MVIEW with the fast refresh clause does not really guarantee that it will refresh fast in the long run (reorganisation, partition changes) if it is ROWID based; further, rowid mviews have the limitations listed in the documentation (no group by, no connect by, link see above); fast refresh can only use the filter columns of the mview logs; and for aggregates you need additional count(*) pseudo columns.
    kind regards
    Karsten
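    To make the distinction concrete, a minimal sketch of the two variants and the materialized view log each one needs for fast refresh; only the snapshot-site names come from the thread, the log DDL on the master site is an assumption:

    -- On the master site: the MV log type must match the refresh method
    CREATE MATERIALIZED VIEW LOG ON bulletin WITH PRIMARY KEY;   -- for WITH PRIMARY KEY MVs
    -- CREATE MATERIALIZED VIEW LOG ON bulletin WITH ROWID;      -- for WITH ROWID MVs

    -- Snapshot site, primary-key based: survives a reorganisation of the master table
    CREATE MATERIALIZED VIEW lv_bulletin_mv
      REFRESH FAST ON DEMAND WITH PRIMARY KEY
      AS SELECT * FROM bulletin@egid_sefit;

    -- Snapshot site, rowid based: needs a complete refresh after any master reorganisation
    -- CREATE MATERIALIZED VIEW lv_bulletin_mv_rowid
    --   REFRESH FAST ON DEMAND WITH ROWID
    --   AS SELECT * FROM bulletin@egid_sefit;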

  • What is the difference between ASCII and BIN mode

    Hello All,
    What is the difference between ASCII and BIN mode?
    Regards,
    Lisa.

    'ASC' :
    ASCII format. The table is transferred as text. The conversion exits are
    carried out. The output format additionally depends on the parameters
    CODEPAGE, TRUNC_TRAILING_BLANKS, and TRUNC_TRAILING_BLANKS_EOL.
    'IBM' :
    ASCII format with IBM codepage conversion (DOS). This format corresponds
    to the 'ASC' format when using target codepage 1103. This codepage is
    often used for data exchange by disk.
    'DAT' :
    Column-by-column transfer. With this format, the data is transferred as
    with ASC text. However, no conversion exits are carried out and the
    columns are separated by tab characters. This format creates files that
    can be uploaded again with gui_upload or ws_upload.
    'DBF' :
    The data is downloaded in dBase format. Because this format includes the
    types of the individual columns, import problems, for example into
    Microsoft Excel, can be avoided, especially when interpreting numeric
    values.
    'WK1' :
    The data is downloaded in Lotus 1-2-3 format.
    'BIN' :
    Binary format. The data is transferred in binary format. There is no
    formatting and no codepage conversion. The data is interpreted row by
    row and not formatted in columns. Specify the length of the data in
    parameter BIN_FILESIZE. The table should consist of a column of type X,
    because especially in Unicode systems the conversion of structured data
    into binary data leads to errors.

  • How do I handle values in source that are not in "lookup" table?

    hi there,
    I have 3 tables (all Oracle technology):
    1) Source table: CALLS with columns MSISDN, TRANS_DATE, TYPE, COST, DURATION
    2) Lookup table: SUBSCRIBERS with columns SUBSCRIBERID, MSISDN, IMSI
    3) Target table: FACT_CALLS with columns SUBSCRIBERID (not null), CALLDATE, CALLTYPE, CALLDURATION, CALLCHARGE.
    Join between source and lookup table:
    NVL(CALLS.MSISDN, 0) = SUBSCRIBERS.MSISDN
    Mappings on target:
    FACT_CALLS.SUBSCRIBERID --> SUBSCRIBERS.SUBSCRIBERID
    FACT_CALLS.CALLDATE --> CALLS.TRANS_DATE
    FACT_CALLS.CALLTYPE --> CALLS.TYPE
    FACT_CALLS.CALLDURATION --> CALLS.DURATION
    FACT_CALLS.CALLCHARGE --> CALLS.CHARGE
    I have a dummy value in SUBSCRIBERS with values MSISDN = 0, SUBSCRIBERID = 0 and IMSI = 0, to be used if MSISDN in the source table is null or does not exist in the lookup table.
    The NVL on the join takes care of the case when source MSISDN is null and this is working fine i.e. returns 0 for SUBSCRIBERID.
    The problem occurs when the source MSISDN does have a value but that value does not exist in the lookup table; such records are rejected.
    How do I implement a solution for this?

    hi Guru,
    Yes I have 2 source tables and a target.
    1) I created a join by dragging MSISDN on CALLS to MSISDN on SUBSCRIBERS, then added the NVL part to get NVL(CALLS.MSISDN, 0) = SUBSCRIBERS.MSISDN
    2) the target does not have MSISDN. Using the join the target SUBSCRIBERID column gets populated with the correct value from the lookup table.
    i.e. FACT_CALLS.SUBSCRIBERID = (select SUBSCRIBERID from SUBSCRIBERS where SUBSCRIBERS.MSISDN = CALLS.MSISDN)
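    One way to express the missing-value fallback is a sketch like the following, with an outer join to the lookup table and the NVL moved to the returned key; the ANSI join syntax is my choice here, not necessarily what ODI generates:

    SELECT NVL(s.subscriberid, 0) AS subscriberid,   -- 0 = dummy subscriber when there is no match
           c.trans_date           AS calldate,
           c.type                 AS calltype,
           c.duration             AS callduration,
           c.cost                 AS callcharge
    FROM   calls c
    LEFT OUTER JOIN subscribers s
           ON s.msisdn = NVL(c.msisdn, 0);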

  • What's the difference between these two JKMs and when to use which one

    Please tell me the difference between these two:
    JKM Oracle Consistent or JKM Oracle Consistent (Update Date)

    The basic difference is in the requirements.
    JKM Oracle Consistent (Update Date) wants a DATE/TIMESTAMP column in the source table(s) which gets inserted/updated by the source application. ODI will capture the changed data based on this column value. Only the DELETE portion requires a trigger to keep track of deletions happening on the source table(s).
    Whereas JKM Oracle Consistent does not require any DATE/TIMESTAMP column in the source table(s); it uses triggers to keep track of the INSERTs/UPDATEs/DELETEs happening on the source.
    In terms of performance, JKM Oracle Consistent (Update Date) is better as it creates little overhead on your source system. So if you have a DATE/TIMESTAMP column in the source table(s) which gets inserted/updated by the source application, go for JKM Oracle Consistent (Update Date), as sketched below.
    Otherwise look at JKM Oracle Consistent.
    Thanks,
    Sutirtha
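    A rough sketch of how the two JKMs see changes (table, column, and variable names are hypothetical):

    -- JKM Oracle Consistent (Update Date): inserts/updates are detected from a timestamp column
    SELECT *
    FROM   src_orders
    WHERE  last_update_date > :last_journalizing_date;   -- deletes still need a trigger

    -- JKM Oracle Consistent: a trigger journalizes every INSERT/UPDATE/DELETE on the source
    -- CREATE TRIGGER trg_src_orders_jrn
    --   AFTER INSERT OR UPDATE OR DELETE ON src_orders ...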

  • Creating a lookup table in LabVIEW 7.1

    For a particular input voltage, the output is a distance in cm. For example, if the voltage is 6.2mV the output should be 2.5cm; for 6mV the output is 6cm. Whenever an input voltage between any of these values is given, the output should be interpolated and displayed in cm. I would like to maintain a lookup table for this purpose in LabVIEW 7.1. The graph between input and output is linear. I would like to know how to create a lookup table and configure it in LabVIEW 7.1.
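    For reference, the linear interpolation the lookup needs to perform between two neighbouring table entries (x1, y1) and (x2, y2) is y = y1 + (x - x1) * (y2 - y1) / (x2 - x1); with the values above, an input of 6.1mV would give 6 + (6.1 - 6.0) * (2.5 - 6) / (6.2 - 6.0) = 4.25cm (my worked example, not from the thread).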

    Well, since it is still AE Week, here is an AE implementation of a LUT.
    Initialize it with an array of raw values along with an array of the translated values.
    Use the Lookup action to translate raw to translated.
    It uses the Threshold and Interpolate VIs to do the translation.
    Ben
    Ben Rayner
    Attachments:
    LUT.JPG 42 KB
    Look-up_Table.vi 38 KB
    Actions.ctl 7 KB

  • How to map lookup table

    Hi friends,
    I built this simple report in OBIEE 10g:
    "NATIONALITY COUNT IN DEPARTMENT WISE"
    For that i used the following tables:
    per_all_assignments_f----->fact table
    hr_all_organization_units----->dim table(containing departments)
    per_all_people_f---------------->dim table(containing nationality)
    I made all the mappings in the physical diagram and also viewed my report in BI Answers.
    It shows the following results:
    NATIONALITY     COUNT(NATIONALITY)
    AUS             24
    AFR             25
    PHQ_VB          40
    SH_VT           4
    The problem is that the nationality column in these results contains the various country codes.
    I don't want the nationality code displayed in the results; I need the meaning of each nationality,
    like:
    AUS        Australian
    AFR        African
    PHQ_VB     Germanian (assigned)
    I know that the meaning of each nationality is available in FND_LOOKUP_VALUES.
    I can import the FND_LOOKUP_VALUES table into the physical layer, but how can I map it to the fact table in my physical diagram?
    In my report the fact table is per_all_assignments_f.
    As my fact table doesn't contain any matching column corresponding to the dimension table FND_LOOKUP_VALUES,
    how can I map it to the fact column so that the full meaning of the nationality shows in my report?
    Help me friends...
    All izz Well
    GTA...

    Hi Kranthi,
    Thanks for your reply....
    For the meaning to appear for each nationality, I imported the HR_LOOKUPS table and joined it to per_all_people_f, which is a dimension table.
    This is the query that I executed in TOAD to get the meaning of each nationality:
    select distinct h15.meaning, h15.lookup_code
    from hr_lookups h15, per_all_people_f papf, per_all_assignments_f paaf
    where h15.lookup_type(+) = 'NATIONALITY'
    and h15.lookup_code(+) = papf.nationality
    and h15.meaning is not null
    and papf.person_id = paaf.person_id
    I implemented the same thing in OBIEE: in the physical diagram I gave the join between hr_lookups and per_all_people_f,
    i.e. between the lookup_code column in the hr_lookups table and the nationality column in the per_all_people_f table.
    I obtained results in BI Answers, but I didn't get an accurate result.
    To get the accurate result I need to give one more join condition between the hr_lookups table and the per_all_people_f table.
    This is the join condition I need, already mentioned in the query above:
    > h15.lookup_type(+) = 'NATIONALITY'
    Since OBIEE does not allow me to give a second join from the lookup table to the people table, how can I obtain an accurate result in BI Answers?
    Is there any other way to give a second join between the same two tables, in my case between hr_lookups and per_all_people_f?
    Please help me with this.
    Thanks for your support.....
    All izz Well
    GTA...
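    One common workaround (a sketch, not taken from this thread) is to push the constant lookup_type condition into the lookup source itself, for example by bringing the lookup into the physical layer as a select view, so that only the single equi-join on lookup_code remains:

    -- Hypothetical physical-layer select view over the lookup table
    SELECT lookup_code,
           meaning
    FROM   hr_lookups
    WHERE  lookup_type = 'NATIONALITY'
      AND  meaning IS NOT NULL;
    -- then join this view to per_all_people_f on nationality = lookup_code only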

  • Difference between TDS report and actual TDS

    A report is required which will show the difference between the TDS report and the actual TDS, to get the TDS status (liability/receivable) or to compare them.
    So, how do I get this in the system?
    I want information/suggestions for both standard and Z development.
    Waiting...
    Regards,
    Sachin

    Below is a field list for the TDS report.
    Just copy it and provide it to your ABAPer and ask him to develop the Z report.
    Technical Field   Description                   Table Name   Selection screen   Output file
    BUKRS             Company code                  BKPF         Y                  Y
    BELNR             Accounting doc number         BKPF         Y                  Y
    XBLNR             Reference (Voucher Number)    BKPF                            Y
    GJAHR             Fiscal year                   BKPF         Y                  Y
    WITHT             WHT Type                      WITH_ITEM                       Y
    WT_WITHCD         WHT Code                      WITH_ITEM                       Y
    WT_QSSHH          WHT base amount in LC         WITH_ITEM                       Y
    WT_QBSHH          WHT Amount in LC              WITH_ITEM                       Y
    QSREC             Recipient type                WITH_ITEM                       Y
    BUDAT             Posting date                  BKPF         Y                  Y
    BLDAT             Document date                 BKPF         Y                  Y
    PRCTR             Profit Center                 BSEG         Y                  Y
    BUPLA             Business Place                BSIS         Y                  Y
    QSATZ             WHT Tax Rate                  WITH_ITEM                       Y
    LIFNR             Account no of the Vendor      J_1IMOVEND   Y                  Y
    J_1IPANNO         PAN Number                    J_1IMOVEND   Y
    NAME1             Vendor name                   LFA1                            Y
    QSCOD             Off WHT key                   T059Z        Y                  Y
    PSWBT             G/L Amount                    BSEG                            Y
    Hope it helps
    Thanks
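    As a rough illustration of how those tables hang together, here is a sketch of the core join such a Z report would perform; the join on the accounting-document keys is an assumption to be verified against your system, and the vendor, profit centre, and G/L fields would be read from the remaining tables per line item:

    SELECT bkpf.bukrs, bkpf.belnr, bkpf.gjahr, bkpf.xblnr, bkpf.budat, bkpf.bldat,
           wi.witht, wi.wt_withcd, wi.wt_qsshh, wi.wt_qbshh, wi.qsatz, wi.qsrec
    FROM   bkpf
    JOIN   with_item wi ON  wi.bukrs = bkpf.bukrs
                        AND wi.belnr = bkpf.belnr
                        AND wi.gjahr = bkpf.gjahr;
    -- LIFNR/PAN from J_1IMOVEND, NAME1 from LFA1, PRCTR/PSWBT from BSEG, BUPLA from BSIS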

  • Difference between CPU and elapsed time in tkprof

    Hi All,
    I found a huge difference between CPU and elapsed time in tkprof. Can you please advise me on this issue?
    call     count       cpu    elapsed       disk      query    current        rows
    ================================================================================
    Parse        1      0.12       1.36          2         11          0           0
    Execute      1     14.30     720.20      46548     190520        205         100
    Fetch        0      0.00       0.00          0          0          0           0
    ================================================================================
    total        2     14.42     721.56      46550     190531        205         100
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 173 (recursive depth: 1)
    Elapsed times include waiting on following events:
    Event waited on                      Times waited   Max. Wait  Total Waited
    ===========================================================================
    db file sequential read                     46544        0.49        632.12
    db file scattered read                          1        0.00          0.00
    My select statement:
    SELECT cst.customer_id
          ,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.trx_date) / COUNT(cr.deposit_date))) avgdays
          ,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.due_date) / COUNT(cr.deposit_date))) avgdayslate
          ,NVL(SUM(DECODE(SIGN(cr.deposit_date - ps.due_date),1, 1, 0)), 0) newlate
          ,NVL(SUM(DECODE(SIGN(cr.deposit_date - ps.due_date),1, 0, 1)), 0) newontime
    FROM ar_receivable_applications_all ra
        ,ar_cash_receipts_all cr
        ,ar_payment_schedules_all ps
        ,zz_ar_customer_summary_all cst
    WHERE ra.cash_receipt_id = cr.cash_receipt_id
    AND ra.apply_date BETWEEN ADD_MONTHS(SYSDATE, -12) AND SYSDATE
    AND ra.status = 'APP'
    AND ra.display = 'Y'
    AND ra.applied_payment_schedule_id = ps.payment_schedule_id
    AND ps.customer_id = cst.customer_id
    AND NVL(ps.receipt_confirmed_flag,'Y') = 'Y'
    GROUP BY cst.customer_id;
    Thanks,
    Anu

    user653066 wrote:
    Hi All
    i found huge diffrence between cpu and elapsed time in tkprof. can you please advice me on this issue.
    call     count       cpu    elapsed       disk      query    current        rows
    ================================================================================
    Parse        1      0.12       1.36          2         11          0           0
    Execute      1     14.30     720.20      46548     190520        205         100
    Fetch        0      0.00       0.00          0          0          0           0
    ================================================================================
    total        2     14.42     721.56      46550     190531        205         100
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 173     (recursive depth: 1)
    Elapsed times include waiting on following events:
    Event waited on                      Times waited   Max. Wait  Total Waited
    ===========================================================================
    db file sequential read                     46544        0.49        632.12
    db file scattered read                          1        0.00          0.00
    SELECT  cst.customer_id
             ,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.trx_date) / COUNT(cr.deposit_date))) avgdays
             ,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.due_date) / COUNT(cr.deposit_date))) avgdayslate
             ,NVL(SUM(DECODE(SIGN(cr.deposit_date - ps.due_date),1, 1, 0)), 0)  newlate
             ,NVL(SUM( DECODE(SIGN(cr.deposit_date - ps.due_date),1, 0, 1)), 0) newontime
              FROM ar_receivable_applications_all ra
                  ,ar_cash_receipts_all           cr
                  ,ar_payment_schedules_all       ps
                  ,zz_ar_customer_summary_all cst
              WHERE ra.cash_receipt_id                 = cr.cash_receipt_id
              AND   ra.apply_date                BETWEEN ADD_MONTHS(SYSDATE, -12) AND SYSDATE
              AND   ra.status                          = 'APP'
              AND   ra.display                         = 'Y'
              AND   ra.applied_payment_schedule_id     = ps.payment_schedule_id
              AND   ps.customer_id                     = cst.customer_id          
              AND   NVL(ps.receipt_confirmed_flag,'Y') = 'Y'
          group by cst.customer_id;
    Toon Koppelaars seems to have pinpointed the problem. Where are the 74 unaccounted-for seconds (I might have calculated it incorrectly, but I arrived at 88.08 seconds of unaccounted-for time: 721.56 total - 1.36 parse - 632.12 db file sequential reads)?
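    For the record, that arithmetic worked through with the numbers in the TKPROF output above (my reconciliation, not from the thread): 721.56 total elapsed - 1.36 parse elapsed - 632.12 db file sequential read - 0.00 db file scattered read = 88.08 seconds unaccounted for; counting only the execute line, 720.20 elapsed - 14.30 CPU - 632.12 single-block read waits = 73.78 seconds, which is presumably where the roughly 74 lost seconds come from.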
    It is interesting that the maximum wait for a single block read reported by TKPROF is 0.49 seconds - this might be an indication of excessive competition for the server's CPU - processes are waiting in the CPU run queue, and therefore not on the CPU. As Toon indicated, 632.12 of the 721.56 seconds were spent waiting for single block reads to complete with 46,544 blocks read. Note also that the query executed at dep=1, and TKPROF may be providing misleading information about what actually happened during those executions. An example of misleading information:
    CREATE TABLE T11 (
      C1 NUMBER,
      C2 VARCHAR2(30));
    CREATE TABLE T12 (
      C1 NUMBER,
      C2 VARCHAR2(30));
    CREATE TABLE T13 (
      C1 NUMBER,
      C2 VARCHAR2(30));
    CREATE TABLE T14 (
      C1 NUMBER,
      C2 VARCHAR2(30));
    CREATE OR REPLACE TRIGGER HPM_T11 AFTER
    INSERT OR DELETE OR UPDATE OF C1 ON T11
    REFERENCING OLD AS OLDDATA NEW AS NEWDATA FOR EACH ROW
    BEGIN
      IF INSERTING THEN
        INSERT INTO T12
        SELECT
          ROWNUM,
          DBMS_RANDOM.STRING('A',25)
        FROM
          DUAL
        CONNECT BY
          LEVEL <= 100;
      END IF;
    END;
    CREATE OR REPLACE TRIGGER HPM_T12 AFTER
    INSERT OR DELETE OR UPDATE OF C1 ON T12
    REFERENCING OLD AS OLDDATA NEW AS NEWDATA FOR EACH ROW
    BEGIN
      IF INSERTING THEN
        INSERT INTO T13
        SELECT
          ROWNUM,
          DBMS_RANDOM.STRING('A',25)
        FROM
          DUAL
        CONNECT BY
          LEVEL <= 100;
      END IF;
    END;
    CREATE OR REPLACE TRIGGER HPM_T13 AFTER
    INSERT OR DELETE OR UPDATE OF C1 ON T13
    REFERENCING OLD AS OLDDATA NEW AS NEWDATA FOR EACH ROW
    BEGIN
      IF INSERTING THEN
        INSERT INTO T14
        SELECT
          ROWNUM,
          DBMS_RANDOM.STRING('A',25)
        FROM
          DUAL
        CONNECT BY
          LEVEL <= 100;
      END IF;
    END;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_TEST_FIND_ME2';
    ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
    SET TIMING ON
    INSERT INTO T11 VALUES (1,'MY LITTLE TEST CASE');
    ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
    The partial TKPROF output:
    INSERT INTO T11
    VALUES
    (1,'MY LITTLE TEST CASE')
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          8          0           0
    Execute      1      0.00       0.00          0       9788         29           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.00          0       9796         29           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56 
    Rows     Row Source Operation
          0  LOAD TABLE CONVENTIONAL  (cr=9788 pr=7 pw=0 time=0 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    SQL ID : 6asaf110fgaqg
    INSERT INTO T12 SELECT ROWNUM, DBMS_RANDOM.STRING('A',25) FROM DUAL CONNECT
      BY LEVEL <= 100
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.04       0.09          0          2        130         100
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.04       0.09          0          2        130         100
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 1)
    Rows     Row Source Operation
          0  LOAD TABLE CONVENTIONAL  (cr=9754 pr=7 pw=0 time=0 us)
        100   COUNT  (cr=0 pr=0 pw=0 time=0 us)
        100    CONNECT BY WITHOUT FILTERING (cr=0 pr=0 pw=0 time=0 us)
          1     FAST DUAL  (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)
    SQL ID : db46bkvy509w4
    INSERT INTO T13 SELECT ROWNUM, DBMS_RANDOM.STRING('A',25) FROM DUAL CONNECT
      BY LEVEL <= 100
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute    100      1.31       1.27          0         93      10634       10000
    Fetch        0      0.00       0.00          0          0          0           0
    total      101      1.31       1.27          0         93      10634       10000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 2)
    Rows     Row Source Operation
          0  LOAD TABLE CONVENTIONAL  (cr=164 pr=0 pw=0 time=0 us)
        100   COUNT  (cr=0 pr=0 pw=0 time=0 us)
        100    CONNECT BY WITHOUT FILTERING (cr=0 pr=0 pw=0 time=0 us)
          1     FAST DUAL  (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)
    SQL ID : 6542yyk084rpu
    INSERT INTO T14 SELECT ROWNUM, DBMS_RANDOM.STRING('A',25) FROM DUAL CONNECT
      BY LEVEL <= 100
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute  10001     41.60      41.84          0       8961      52859     1000000
    Fetch        0      0.00       0.00          0          0          0           0
    total    10003     41.60      41.84          0       8961      52859     1000000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 3)
    Rows     Row Source Operation
          0  LOAD TABLE CONVENTIONAL  (cr=2 pr=0 pw=0 time=0 us)
        100   COUNT  (cr=0 pr=0 pw=0 time=0 us)
        100    CONNECT BY WITHOUT FILTERING (cr=0 pr=0 pw=0 time=0 us)
          1     FAST DUAL  (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      log file switch completion                      2        0.07          0.07
    ********************************************************************************
    In the above, note that the "INSERT INTO T11" is reported as completing in 0 seconds, but it actually required roughly 42 seconds - and that would be visible by manually reviewing the resulting trace file. Also note that the log file switch completion wait was not reported for the "INSERT INTO T11" even though it impacted the execution time.
    Back to the possibility of CPU starvation causing lost time. Another test with an otherwise idle server, followed by a second test with the same server having 240 other processes fighting for CPU resources (a simulated load).
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_TEST_QUERY_NO_LOAD';
    ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
    SET TIMING ON
    SELECT
      COUNT(*)
    FROM
      T14;
    SELECT
      SYSDATE
    FROM
      DUAL;
    SQL> SELECT
      2    COUNT(*)
      3  FROM
      4    T14;
      COUNT(*)
       1000000
    Elapsed: 00:00:01.37
    With no load the COUNT(*) completed in 1.37 seconds. The TKPROF output looks like this:
    SQL ID : gy8nw9xzyg3bj
    SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
      NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
      NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_0"),
      NVL(SUM(C2),:"SYS_B_1")
    FROM
    (SELECT /*+ NO_PARALLEL("T14") FULL("T14") NO_PARALLEL_INDEX("T14") */
      :"SYS_B_2" AS C1, :"SYS_B_3" AS C2 FROM "T14" SAMPLE BLOCK (:"SYS_B_4" ,
      :"SYS_B_5") SEED (:"SYS_B_6") "T14") SAMPLESUB
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.01       0.84        523        172          1           1
    total        3      0.01       0.84        523        172          1           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=172 pr=523 pw=0 time=0 us)
       8733   TABLE ACCESS SAMPLE T14 (cr=172 pr=523 pw=0 time=0 us cost=2 size=12 card=1)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         3        0.02          0.04
      db file parallel read                           1        0.31          0.31
      db file scattered read                         52        0.03          0.47
    SQL ID : 96g93hntrzjtr
    select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#,
      sample_size, minimum, maximum, distcnt, lowval, hival, density, col#,
      spare1, spare2, avgcln
    from
    hist_head$ where obj#=:1 and intcol#=:2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.06          2          2          0           0
    total        3      0.00       0.06          2          2          0           0
    Misses in library cache during parse: 0
    Optimizer mode: RULE
    Parsing user id: SYS   (recursive depth: 2)
    Rows     Row Source Operation
          0  TABLE ACCESS BY INDEX ROWID HIST_HEAD$ (cr=2 pr=2 pw=0 time=0 us)
          0   INDEX RANGE SCAN I_HH_OBJ#_INTCOL# (cr=2 pr=2 pw=0 time=0 us)(object id 413)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         2        0.02          0.04
    SELECT
      COUNT(*)
    FROM
      T14
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          1          1          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      0.03       0.43       6558       6983          0           1
    total        4      0.03       0.44       6559       6984          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56 
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=6983 pr=6558 pw=0 time=0 us)
    1000000   TABLE ACCESS FULL T14 (cr=6983 pr=6558 pw=0 time=0 us cost=1916 size=0 card=976987)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         1        0.02          0.02
      SQL*Net message to client                       2        0.00          0.00
      db file scattered read                        111        0.02          0.38
      SQL*Net message from client                     2        0.00          0.00
    Note that TKPROF reported that it only required 0.44 seconds for the query to execute, while the SQL*Plus timing indicates that it required 1.37 seconds for the SQL statement to execute. The SQL optimization (parse) with the dynamic sampling query accounted for the remaining time, yet TKPROF provided no indication that this was the case.
    Now the query with 240 other processes competing for CPU time:
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_TEST_QUERY_WITH_LOAD';
    SELECT COUNT(*) FROM T14;
    SELECT
      SYSDATE
    FROM
      DUAL;
    SQL> SELECT COUNT(*) FROM T14;
      COUNT(*)
       1000000
    Elapsed: 00:00:59.03
    The query this time required just over 59 seconds. The TKPROF output:
    SQL ID : gy8nw9xzyg3bj
    SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
      NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
      NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_0"),
      NVL(SUM(C2),:"SYS_B_1")
    FROM
    (SELECT /*+ NO_PARALLEL("T14") FULL("T14") NO_PARALLEL_INDEX("T14") */
      :"SYS_B_2" AS C1, :"SYS_B_3" AS C2 FROM "T14" SAMPLE BLOCK (:"SYS_B_4" ,
      :"SYS_B_5") SEED (:"SYS_B_6") "T14") SAMPLESUB
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.28        423         69          0           1
    total        3      0.00       0.28        423         69          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=69 pr=423 pw=0 time=0 us)
       8733   TABLE ACCESS SAMPLE T14 (cr=69 pr=423 pw=0 time=0 us cost=2 size=12 card=1)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                         54        0.01          0.27
      db file sequential read                         2        0.00          0.00
    SQL ID : 7h04kxpa13w1x
    SELECT COUNT(*)
    FROM
    T14
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.03          1          1          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      0.06      58.71       6551       6983          0           1
    total        4      0.06      58.74       6552       6984          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56 
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=6983 pr=6551 pw=0 time=0 us)
    1000000   TABLE ACCESS FULL T14 (cr=6983 pr=6551 pw=0 time=0 us cost=1916 size=0 card=976987)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         1        0.02          0.02
      SQL*Net message to client                       2        0.00          0.00
      db file scattered read                        110        1.54         58.59
      SQL*Net message from client                     1        0.00          0.00
    Note in the above that the max wait for the db file scattered read is 1.54 seconds due to the extra CPU competition - about 3 times longer than your max wait for a single block read. On your database platform with single block reads, it might be possible that the time in the CPU run queue is not always counted in the db file sequential read wait time or the CPU wait time - what if your operating system is slow at returning timing information to the database instance due to CPU saturation? This might explain the 74 (or 88) lost seconds.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Exact difference between Fuzzy Lookup and Fuzzy Grouping

    Hi all,
    Can you please explain the difference between Fuzzy Lookup and Fuzzy Grouping in simple words?
    Thanks
    Selva

    Hi Selva,
    In brief, the Fuzzy Grouping Transformation groups similar rows in the source dataset and identifies rows that are likely to be duplicates, while the Fuzzy Lookup Transformation matches records between the source table and a reference table that are similar, but not identical, to the lookup key.
    Here are good examples about the two transformations:
    http://ssis-tutorial-online.blogspot.com/2013/04/fuzzy-grouping-transformation.html 
    http://www.codeproject.com/Tips/528243/SSIS-Fuzzy-lookup-for-cleaning-dirty-data 
    Regards,
    Mike Yin
    TechNet Community Support

  • When to use qualified lookup table?

    Hi all,
    I am confused about qualified lookup tables versus lookup tables. For the situation where a company has more than one contact person, I created a table "contact person" with the following fields: first name, last name, phone number, email address.
    Questions:
    1. Should I set "contact person" as a lookup table or a qualified lookup table?
    2. If it is to be set as a qualified lookup table, which field should be the qualifier field? What is the difference between a qualifier field and a non-qualifier field?
    Your reply will be much appreciated.
    Bin

    At times data is stored in such a way that duplication is unavoidable due to the storing mechanism and other factors. It may also happen that the data is sparse. The efficient way of storing data in such scenarios is the use of Qualified tables as it reduces the size of the main table and removes the unnecessarily created duplicates.
    Check the foll links
    /people/pooja.khandelwal2/blog/2006/03/29/taming-the-animal--qualified-tables
    /people/avi.rokach/blog/2006/11/14/using-mdm-55-for-data-quality-analysis
    MDM data modelling guide
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5d4211fa-0301-0010-9fb1-ef1fd91719b6
    How to import Qualified tables.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/abe914fa-0301-0010-7bb1-d25c2a4bb655
    Also this one.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/792f57b7-0a01-0010-f3b6-881269136a83
    Please reward for the same.

  • Range based Flex field validation based on a Lookup table

    Hi all,
    I am trying to create a validation on one of the flex fields under the HRMS application. Table: PAY_ELEMENT_ENTRIES_F
    The validation is pretty simple - but I am struggling to implement it.
    Assume there exist two flex fields;
    Field1 contains State Information
    Field2 contains Percentage Information
    The above two values would be entered by the user.
    The validation should be like this:
    Field1 - State code entered by the user
    Field2 - We have a separate lookup where we have set up lots of state-specific information. Assume ATTRIBUTE3 and ATTRIBUTE4 define the Min and Max range, which is configured during setup. The user should enter a percentage value between ATTRIBUTE3 and ATTRIBUTE4.
    I have created a table validation with 'select 1 from dual' and the following where clause:
    exists ( select null from fnd_common_lookups l, fnd_sessions sess
             where l.lookup_type like 'CUSTOM_US_STATE_RULES'
             and sess.session_id = userenv('sessionid')
             and sess.effective_date between l.start_date_active
                 and NVL(l.end_date_active, sess.effective_date)
             and l.attribute1 = '01' -- consider the '01' state alone for the time being
             and :$FLEX$.ENTRY_INFORMATION4 between to_number(l.attribute4) and to_number(l.attribute5) )
    When I compile the flexfield, it errors out stating an invalid reference to ENTRY_INFORMATION4.
    ENTRY_INFORMATION4 is the field where I am going to attach this validation.
    How do I validate a value of the flexfield against the range of values available in another table (in this case a lookup table)?
    Any ideas on how to implement this?

    Thanks so much for the reply. Apparently the solution you suggested may not work 100%, as the data entry also happens via an API. It's mentioned in the doc that the special validations happen only via Forms.
    But I implemented it in a crude way, and it works!
    Based on the value which I enter in the first field (Field1), in the value set which I am using on the second field (Field2) - a table-based value set -
    I generated all possible percentages and display them.
    My table is: (select trim(to_char(rownum/100,'990D99')) pct from fnd_columns a where rownum<=10000) a, fnd_common_lookups l
    My Where clause:
    where l.lookup_type='CUSTOM_US_STATE_RULES'
    and l.attribute1=substr(:ENTRY.USER_ENTRY4,1,2)
    and a.pct between l.attribute4 and l.attribute5
    I generated the set of sequence numbers with the help of rownum from a table which definitely contains more than 10000 rows.
    I was so glad that FND allowed me to use an INLINE view in the validation table, not restricting me to the tables available for that application alone.
    Thanks again.
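    Put together, the responder's value-set query would look roughly like this (a sketch assembled from the fragments above; the 10000-row limit and the lookup type come from the post, everything else should be checked against the actual value set definition):

    SELECT a.pct
    FROM   (select trim(to_char(rownum/100,'990D99')) pct
            from   fnd_columns
            where  rownum <= 10000) a,
           fnd_common_lookups l
    WHERE  l.lookup_type = 'CUSTOM_US_STATE_RULES'
    AND    l.attribute1  = substr(:ENTRY.USER_ENTRY4, 1, 2)
    AND    a.pct BETWEEN l.attribute4 AND l.attribute5;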

  • Insert Matching Records from Lookup Table to Main Table

    First off, I want to say many thanks for all the help that I've been provided on here with my other posts. I really feel as though my SQL knowledge is much better than it was even a few short weeks ago, largely in part to this forum.
    I ran into a snag, which I'm hoping someone can provide me some guidance on. I have 2 tables: an import table and a lookup table. What I need to happen is that anytime there are matches between the "Types" in the 2 tables, I need a single instance
    of the "Type" and all corresponding fields from the lookup table appended to the import table. There will only be a single instance of each Type in the lookup table. Below is an example of how the data might look and the results that
    I would need appended.
    tblLookup
    Type Name Address City
    A Dummy1 DummyAddress No City
    B Dummy2 DummyAddress No City
    C Dummy3 DummyAddress No City
    tblImport
    Type Name Address City
    A John Maple Miami
    A Mary Main Chicago
    A Ben Pacific Eugene
    B Frank Dove Boston
    Data that would be appended to tblImport
    Type Name Address City
    A Dummy1 DummyAddress No City
    B Dummy2 DummyAddress No City
    As you can see, only a single instance will be inserted even though there may be multiple instances in the import table. This is the part that I'm struggling with. Any assistance would be appreciated.

    I'm not really sure how else to explain it. With my example, the join would be on "Type". As you can see, there are 2 matching records between the tables (A and B). I would need a single instance of A and B to be inserted into the import table.
    Below is a SQL statement, which I guess is what you're asking for, but it will not do what I need it to do. With the example below, it would insert multiple instances of type "A" into the import table.
    INSERT INTO tblImport (Type, Name, Address, City)
    Select tblLookup.Type, tblLookup.Name,
    tblLookup.Address, tblLookup.City
    From tblLookup
    Join tblImport on tblLookup.Type = tblImport.Type
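    One way to get exactly one appended row per matching Type (a sketch, not from the thread; adjust the syntax for your SQL dialect) is to drive the insert from the lookup table and only keep types that exist in the import table:

    INSERT INTO tblImport (Type, Name, Address, City)
    SELECT l.Type, l.Name, l.Address, l.City
    FROM   tblLookup l
    WHERE  EXISTS (SELECT 1
                   FROM   tblImport i
                   WHERE  i.Type = l.Type);
    -- one row per Type, because tblLookup holds a single instance of each Type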
