Snapshot query based on local table...

Can anyone help me determine the proper syntax for a snapshot query that will pull records based on a previously synced table? In other words, I want to get a subset of data into table2 based on what was just synced into table1, where the table1 snapshot contained a where clause with a :parameter.
I thought the query would look something like this:
select * from SERVER_SCHEMA.table2
where
column1 In (select column1 from &WTG_SCHEMA..table1);
but I get an error when I try to sync. Table1 has a weight of 10 and table2 has a weight of 20.
TIA,
Scott

Use the snapshot SQL you used for the first table inside the snapshot SQL for the second table, i.e.:
1st TABLE snapshot
select * from employees e where e.salary > 1
2nd table snapshot
select * from departments d where d.dept_id IN (
select distinct e.dept_id from employees e where e.salary > 1)
OR
select d.*
from departments d, employees e
where d.dept_id = e.dept_id
and e.salary > 1
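Applied to the original question, the second-table snapshot would then repeat table1's predicate instead of referencing the client-side copy of table1. A sketch (columnX stands in for whatever column table1's snapshot actually filters on):
select * from SERVER_SCHEMA.table2
where column1 in (
  select column1 from SERVER_SCHEMA.table1
  where columnX = :parameter);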

Similar Messages

  • Query based on main table and audit table

    Hi,
    I have created auditing on some tables. Values may or may not change; when they change, the change should be stored in the audit table.
    I want to get the values in table a on a real-time basis, like dimensions in a data warehouse.
    I am trying to write a query based on table a and aud_a to get point-in-time values, or the values at any time in the past.
    Something like
    SELECT *
    FROM a (table_name)
    WHERE effective_from >= $DATE_TO_QUERY
    AND effective_to < $DATE_TO_QUERY
    How do I get this kind of query?
    Please help. (The table structure for table a, the audit table aud_a, and the trigger aud_tg_a are given below.)
    The code is as follows.
    main table a
    create table a
    ( val1 number,
    val2 number,
    updated_by varchar2(30),
    date_updated date);
    create audit table aud_a
    create table aud_a
    ( "AUDIT_SEQ" NUMBER,
    "AUDIT_TRAN_ID" NUMBER,
    "AUDIT_PROG_ID" VARCHAR2(30 BYTE),
    "AUDIT_TERMINAL" VARCHAR2(16 BYTE),
    "AUDIT_REASON" VARCHAR2(30 BYTE),
    "AUDIT_ACTION" CHAR(1 BYTE),
    "AUDIT_ACTION_BY" VARCHAR2(20 BYTE),
    "AUDIT_ACTION_DT" DATE,
    val1 number,
    val2 number,
    updated_by varchar2(30),
    date_updated date);
    trigger on table a to populate aud_a
    CREATE OR REPLACE TRIGGER aud_tg_a
    AFTER INSERT OR DELETE OR UPDATE ON a
    FOR EACH ROW
    DECLARE
      v_time_now DATE;
      v_terminal VARCHAR2(16);
      v_tran_id  NUMBER;
      v_prog_id  VARCHAR2(30);
      v_reason   VARCHAR2(30);
    BEGIN
      v_time_now := SYSDATE;
      v_terminal := USERENV('TERMINAL');
      v_tran_id  := 1;
      v_prog_id  := 'test';
      v_reason   := 'AUDIT';
      IF inserting THEN
        INSERT INTO aud_a
          (audit_seq, audit_tran_id, audit_prog_id, audit_reason,
           audit_terminal, audit_action_by, audit_action_dt, audit_action,
           val1, val2, updated_by, date_updated)
        VALUES
          (s_audit_no.nextval, v_tran_id, v_prog_id, v_reason,
           v_terminal, USER, v_time_now, 'I',
           :new.val1, :new.val2, :new.updated_by, :new.date_updated);
      ELSIF deleting THEN
        INSERT INTO aud_a
          (audit_seq, audit_tran_id, audit_prog_id, audit_reason,
           audit_terminal, audit_action_by, audit_action_dt, audit_action,
           val1, val2, updated_by, date_updated)
        VALUES
          (s_audit_no.nextval, v_tran_id, v_prog_id, v_reason,
           v_terminal, USER, v_time_now, 'D',
           :old.val1, :old.val2, :old.updated_by, :old.date_updated);
      ELSIF updating THEN
        INSERT INTO aud_a
          (audit_seq, audit_tran_id, audit_prog_id, audit_reason,
           audit_terminal, audit_action_by, audit_action_dt, audit_action,
           val1, val2, updated_by, date_updated)
        VALUES
          (s_audit_no.nextval, v_tran_id, v_prog_id, v_reason,
           v_terminal, USER, v_time_now, 'U',
           :new.val1, :new.val2, :new.updated_by, :new.date_updated);
      END IF;
    END;
    -- assumes the sequence s_audit_no exists
    -------------------------
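    A point-in-time query over aud_a could then look like the following sketch. It assumes a key column (here called id) is added to both tables, since without a key there is no way to pair up versions of the same row:
    SELECT val1, val2, updated_by, date_updated
    FROM (SELECT t.*,
                 ROW_NUMBER() OVER (PARTITION BY t.id
                                    ORDER BY t.audit_action_dt DESC) rn
          FROM aud_a t
          WHERE t.audit_action_dt <= :date_to_query)
    WHERE rn = 1
    AND audit_action <> 'D';  -- drop rows whose latest action was a delete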

    Hi hoek,
    I am not able to use Oracle's audit functionality because I need to trap some changes in particular tables and then rebuild the query if required.
    Thanks for your suggestion though.
    Regards,
    Milind

  • PowerPivot filter query based on another table's visible results?

    Excel 2010 x32 on Win 7 x64
    I have multiple tables coming into Powerpivot via SQL connection. They have some relationships pre-defined from the source.
    I need to reduce the amount of data I'm bringing in for my testing. One of the tables has great granularity, containing every event in the database. One field in this data is "Event Type".
    A separate table has a short list of the event types of interest.
    I'd like to filter the first table's data pull (SQL refresh) to only include the event types that are listed (and visible) in the second table, in addition to an existing date range filter that is already in place. Ultimately my goal is to widen
    the date range I can pull in before hitting Excel's memory limits, by eliminating the events I don't care about.
    Currently I'm using a SQL query to pull in the granular data:
    SELECT
      [Fact RawData].*
    FROM
      [Fact RawData]
    WHERE
      [Fact RawData].[Event Date] >= N'2014-06-01T00:00:00'  
    How would I adjust this to also say "only where [Fact RawData].[Event Type] IN {a column in a data pull that is already in powerpivot}"
    and how will that work under a "refresh all" scenario, where I would need the event table to update before this SQL is executed each time?
    Many thanks!
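    If the event-type list also exists as a table in the same SQL source, one option is a plain subquery; [Dim EventTypes] below is a hypothetical name for that table:
    SELECT
      [Fact RawData].*
    FROM
      [Fact RawData]
    WHERE
      [Fact RawData].[Event Date] >= N'2014-06-01T00:00:00'
      AND [Fact RawData].[Event Type] IN
          (SELECT [Event Type] FROM [Dim EventTypes])
    This only helps if the list lives in the source database rather than only inside PowerPivot, which is what the reply below turns on.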

    If I understand correctly, these articles indicate that I can apply filters during
    data import, but I'm not clear how rows can be filtered during import based on
    another PowerPivot table's results.
    From the first link:
    "For data feeds, you can only change the columns that are imported. You can't filter
    rows by values unless the source of the data feed is a report, and the report
    is parameterized."
    So I guess the clarification of my original question is: How do I create a parameterized
    report, based on the data in another PowerPivot table, and also ensure that the
    parameterized report is executed /after/ the source PowerPivot table is
    refreshed, so that the proper row filtering is applied?
    Simplified example:
    Table 1 = list of all physicians who have ever had a patient in a large hospital system.
    A filter applied when bringing results into PowerPivot limits results to physicians from
    a target physician group, clinical specialty, or other filter based on reporting needs.
    Table 2 = anonymized records for all patients, with the physician listed in each record, filtered
    by time period when bringing into PowerPivot.
    I could bring back the whole patient table, but it is so large that Excel runs out of
    resources unless my time period is tiny. If I can limit the returned rows from
    Table 2 based on the current list of physicians shown in Table 1, then I will
    have a much smaller data set, can expand the time period filter to be more
    meaningful, and can make sure all the target records are brought back, without
    having to run multiple subsets of physicians or time and then
    merge/remove duplicate records.
    Thank you for any advice/URLs/etc.

  • Put the query result of "EXECUTE IMMEDIATE" command in a local table

    Hi all.
    Is it possible to put the output of the "EXECUTE IMMEDIATE" command in a local table so that the output can be accessed through other procedures?
    Regards,
    Andila

    Hi Andila, well, you could just make your dynamic SQL statement an insert statement based on your select. See the example below:
    create column table test_table_1 (
      "COL1" nvarchar(10),
      "COL2" nvarchar(10)
    );
    CREATE PROCEDURE INSERT_P()
    LANGUAGE SQLSCRIPT AS
      sql_string NVARCHAR(2000) := '';
    BEGIN
      -- build the INSERT ... SELECT and run it dynamically
      sql_string := 'insert into test_table_1 (select ''val1'', ''val2'' from dummy)';
      EXECUTE IMMEDIATE (:sql_string);
    END;
    call insert_p();
    select * from test_table_1;
    However, you may want to investigate other options instead of using dynamic SQL, as it is not a recommended approach and is less optimized than standard SQL.
    Peter
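    As a side note: since the statement above is known at design time, the same insert also works as static SQL, with no EXECUTE IMMEDIATE at all:
    insert into test_table_1 (select 'val1', 'val2' from dummy);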

  • ORA-08180: no snapshot found based on specified time

    Hi,
    on 10g R2, why can I not use a flashback version query even when I use a large time interval:
    SQL> SELECT versions_startscn, versions_starttime,
      2         versions_endscn, versions_endtime,
      3         versions_xid, versions_operation,
      4  ename from  scott.EMP
      5  VERSIONS BETWEEN TIMESTAMP
      6        TO_TIMESTAMP('2003-07-18 14:00:00', 'YYYY-MM-DD HH24:MI:SS')
      7    AND TO_TIMESTAMP('2010-07-18 17:00:00', 'YYYY-MM-DD HH24:MI:SS')
      8  ;
    ename from  scott.EMP
    ERROR at line 4:
    ORA-08180: no snapshot found based on specified time
    SQL> select ename, sal from scott.emp;
    ENAME             SAL
    SMITH             800
    SQL> update scott.emp set SAL=SAL*2 where ename='SMITH';
    1 row updated.
    SQL> select ename, sal from scott.emp;
    ENAME             SAL
    SMITH            1600
    SQL> SELECT versions_startscn, versions_starttime,
           versions_endscn, versions_endtime,
           versions_xid, versions_operation,
    ename from  scott.EMP
    VERSIONS BETWEEN TIMESTAMP
          TO_TIMESTAMP('2003-07-18 14:00:00', 'YYYY-MM-DD HH24:MI:SS')
      AND TO_TIMESTAMP('2010-07-18 17:00:00', 'YYYY-MM-DD HH24:MI:SS')
    ename from  scott.EMP
    ERROR at line 4:
    ORA-08180: no snapshot found based on specified time
    SQL> commit;
    Commit complete.
    SQL> SELECT versions_startscn, versions_starttime,
      2         versions_endscn, versions_endtime,
      3         versions_xid, versions_operation,
      4  ename from  scott.EMP
      5  VERSIONS BETWEEN TIMESTAMP
      6        TO_TIMESTAMP('2003-07-18 14:00:00', 'YYYY-MM-DD HH24:MI:SS')
      7    AND TO_TIMESTAMP('2010-07-18 17:00:00', 'YYYY-MM-DD HH24:MI:SS')
      8  ;
    ename from  scott.EMP
    ERROR at line 4:
    ORA-08180: no snapshot found based on specified time
    Thank you.
    PS :
    ORA-08180: no snapshot found based on specified time
    Cause: Could not match the time to an SCN from the mapping table.
    Action: try using a larger time.
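    A side note (a sketch, assuming automatic undo management and the ALTER SYSTEM privilege): a version query can only reach as far back as the available undo, so with the default undo_retention of 900 seconds only roughly the last 15 minutes are reachable, no matter how wide the BETWEEN timestamps are. Widening the window:
    -- ask Oracle to keep about an hour of undo instead of 15 minutes
    ALTER SYSTEM SET undo_retention = 3600;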

    Thank you Centinul,
    SQL> show parameter undo
    NAME                                 TYPE        VALUE
    undo_management                      string      AUTO
    undo_retention                       integer     900
    SQL> SELECT versions_startscn,
      2             versions_endscn,
      3             versions_xid, versions_operation,
      4      ename,sal from  scott.EMP
      5      VERSIONS BETWEEN TIMESTAMP
      6            TO_TIMESTAMP('2010-04-02 15:08:00', 'YYYY-MM-DD HH24:MI:SS')
      7       AND TO_TIMESTAMP('2010-04-02 15:12:00', 'YYYY-MM-DD HH24:MI:SS')
      8     ;
    VERSIONS_STARTSCN VERSIONS_ENDSCN VERSIONS_XID     V ENAME             SAL
                                                         SMITH            1600
                                                         ALLEN            1600
                                                         WARD             1250
                                                         JONES            2975
                                                         MARTIN           1250
                                                         BLAKE            2850
                                                         CLARK            2450
                                                         SCOTT            3000
                                                         KING             5000
                                                         TURNER           1500
                                                         ADAMS            1100
    VERSIONS_STARTSCN VERSIONS_ENDSCN VERSIONS_XID     V ENAME             SAL
                                                         JAMES             950
                                                         FORD             3000
                                                         MILLER           1300
    14 rows selected.
    Then why are the versions_startscn and versions_endscn columns not filled?
    Edited by: user522961 on Apr 2, 2010 6:16 AM

  • Using case when statement in the select query to create physical table

    Hello,
    I have a requirement wherein I have to execute a case-when statement with a session variable while creating a physical table using a select query. Let me explain with an example.
    I have a physical table based on a select query with one column.
    SELECT 'VALUEOF(NQ_SESSION.NAME_PARAMETER)' AS NAME_PARAMETER FROM DUAL. Let me call this table as the NAME_PARAMETER table.
    I also have a customer table.
    In my dashboard, which has two pages, Page 1 contains a table based on the customer table, with column navigation to my second dashboard page.
    On my second dashboard page I created a dashboard report based on the NAME_PARAMETER table and a prompt based on the customer table that sets the NAME_PARAMETER request variable.
    EXECUTION
    When I click on a particular customer, the prompt sets the variable NAME_PARAMETER and the NAME_PARAMETER table shows the appropriate customer.
    Everything works as expected. Yes!!
    Now I created another table called NAME_PARAMETER1 with a small modification to the earlier table. The query is as follows:
    SELECT CASE WHEN 'VALUEOF(NQ_SESSION.NAME_PARAMETER)'='Customer 1' THEN 'TEST_MART1' ELSE TEST_MART2' END AS NAME_PARAMETER
    FROM DUAL
    Now I pull this table into the second dashboard page along with the NAME_PARAMETER table report.
    Surprisingly, the NAME_PARAMETER table report executes as is, but the other report, based on the NAME_PARAMETER1 table, fails with the following error:
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 16001] ODBC error state: S1000 code: 1756 message: [Oracle][ODBC][Ora]ORA-01756: quoted string not properly terminated. [nQSError: 16014] SQL statement preparation failed. (HY000)
    SQL Issued: SET VARIABLE NAME_PARAMETER='Novartis';SELECT NAME_PARAMETER.NAME_PARAMETER saw_0 FROM POC_ONE_DOT_TWO ORDER BY saw_0
    If anyone has any explanation to this error and how we can achieve the same, please help.
    Thanks.

    Hello,
    Update :) sorry, the error was a trivial one. I resolved it and got stuck at my next step.
    I am creating a physical table using a select query. But I am trying to obtain the name of the table dynamically.
    Here is what I am trying to do. The select query of the physical table is as follows:
    SELECT CUSTOMER_ID AS CUSTOMER_ID, CUSTOMER_NAME AS CUSTOMER_NAME FROM 'VALUEOF(NQ_SESSION.SCHEMA_NAME)'.CUSTOMER.
    The idea behind this is to obtain the data from the same table in different schemas dynamically, based on a session variable. Please let me know if there is a way to achieve this; if not, please let me know whether this can be achieved by any other method in OBIEE.
    Thanks.

  • BEx query based on virtual cube doesn't display a valid List of Values (LOV)

    Hello
    I have a problem with an invalid LOV. The scenario is the following: there's a BEx query based on a virtual cube. The query has an exit variable on a characteristic that is based on 0CALMONTH.
    At Universe Designer I simply create a connection, a universe based on this query and export.
    At Web Intelligence (also at Live Office), when I try to execute the query, the prompt to fill my exit variable displays a list of values that doesn't match the values of the characteristic in the cube.
    Actually, the list at the prompt starts with 01.0000 and finishes with 05.0968.
    In Universe Designer, the option to edit the list of values is not available. But I think that editing the LOV is not the correct way anyway.
    I've tried creating a new query based on the DSO that is the source of the virtual cube. In this case, I had a valid list. Unfortunately, I can't use this DSO.
    Has anyone had this problem before?

    Hi James,
    can you explain what you mean by "input length for that field"?
    The field in the table is varchar2(120). I couldn't find options for the list of values.
    Thanks for your response
    Carsten

  • Purchase register query based on down payment invoice

    Hi All,
    I am creating an invoice based on a purchase order. The VAT tax is being calculated in the down payment. I want the VAT to be displayed in the down payment invoice as well as the A/P invoice, but I am not getting it in the A/P invoice. Please guide me with the linking of tables to get the query working. I have formatted the required fields but could not link the tables. Please guide me with that.
    Thanks & Regards,
    Neela

    Hi Neela,
    Check the thread.
    Re: Purchase register query based on down payment invoice
    SELECT T1.DocNum, T0.ItemCode, T0.VatSum  -- example column list; adjust to the fields you need
    FROM PCH1 T0
    INNER JOIN OPCH T1 ON T0.DocEntry = T1.DocEntry
    INNER JOIN OSLP T2 ON T0.SlpCode = T2.SlpCode
    LEFT OUTER JOIN PCH12 T3 ON T1.DocEntry = T3.DocEntry
    LEFT JOIN ODPI T4 ON T1.CardCode = T4.CardCode
    INNER JOIN DPO1 T5 ON T0.ItemCode = T5.ItemCode
    Close the thread, if issue solved.
    Regards,
    Madhan.

  • Select query based on secondary index

    Hi all,
    Can anyone tell me how to write a select query based on a secondary index, and in what way it improves performance?
    I know when creating a secondary index I need to give an index number - it can be any number, right?
    I have to list all primary keys first and then the field for which I am creating the secondary index, right?
    Let's say I have 2 primary keys and I want to create a secondary index for 2 fields. Do I need to create a separate secondary index for each one of those fields, or is one enough?
    Please let me know if I am wrong.

    Hi,
    If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clauses, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
    You create secondary indexes using the ABAP Dictionary. There you specify its columns and can define it as UNIQUE. However, you should not create secondary indexes to cover all possible combinations of fields.
    Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
    What to Keep in Mind for Secondary Indexes:
    http://help.sap.com/saphelp_nw04s/helpdata/en/cf/21eb2d446011d189700000e8322d00/content.htm
    http://www.sap-img.com/abap/quick-note-on-design-of-secondary-database-indexes-and-logical-databases.htm
    Regards
    Sudheer
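    In plain SQL terms (a sketch with illustrative names, not ABAP Dictionary syntax), the point is that a secondary index only helps when the WHERE clause covers its fields:
    -- secondary index on a non-key field
    CREATE INDEX zorders_i1 ON zorders (customer_id);
    -- this predicate can use zorders_i1; filtering only on fields that
    -- appear in no index would fall back to a full table scan
    SELECT * FROM zorders WHERE customer_id = '4711';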

  • Select query failing on a table that has heavy per-second insertions

    Hi
    Problem statement:
    1- We are using 11g as the database.
    2- We have a table that is range-partitioned on the date.
    3- The rate of insertion is very high, i.e. several hundred records per second into the current partition.
    4- The data goes continuously into the current partition, as and when the buffer is full or the per-second timer expires.
    5- We also have to run a select query on the same table, against the current partition, say for the latest 500 records.
    6- Efficient indexes have also been created on the table.
    Solutions Tried.
    1- After analyzing with tkprof, it is observed that the parse and execute phases work fine, but the fetch takes too much time to produce the output. Say it takes 1 hour.
    2- Using the 11g SQL advisor and SPM, several baselines were created, but their success rate was also too low.
    Please suggest any solution to this issue:
    1- e.g. redesign of the table.
    2- Any better way to query, to fix the fetch issue.
    3- Any Oracle settings or parameter changes to fix the fetch issue.
    Thanks in advance.
    Regards
    Vishal Sharma

    I am uploading the latest stats; please let me know how I can improve this, as it is taking 25 minutes.
    ####TKPROF output#########
    SQL ID : 2j5w6bv437cak
    select almevttbl.AlmEvtId, almevttbl.AlmType, almevttbl.ComponentId,
      almevttbl.TimeStamp, almevttbl.Severity, almevttbl.State,
      almevttbl.Category, almevttbl.CauseCode, almevttbl.UnitType,
      almevttbl.UnitId, almevttbl.UnitName, almevttbl.ServerName,
      almevttbl.StrParam, almevttbl.ExtraStrParam, almevttbl.ExtraStrParam2,
      almevttbl.ExtraStrParam3, almevttbl.ParentCustId, almevttbl.ExtraParam1,
      almevttbl.ExtraParam2, almevttbl.ExtraParam3,almevttbl.ExtraParam4,
      almevttbl.ExtraParam5, almevttbl.SRCIPADDRFAMILY,almevttbl.SrcIPAddress11,
      almevttbl.SrcIPAddress12,almevttbl.SrcIPAddress13,almevttbl.SrcIPAddress14,
      almevttbl.DESTIPADDRFAMILY,almevttbl.DestIPAddress11,
      almevttbl.DestIPAddress12,almevttbl.DestIPAddress13,
      almevttbl.DestIPAddress14,  almevttbl.DestPort, almevttbl.SrcPort,
      almevttbl.SessionDir, almevttbl.CustomerId, almevttbl.ProfileId,
      almevttbl.ParentProfileId, almevttbl.CustomerName, almevttbl.AttkDir,
      almevttbl.SubCategory, almevttbl.RiskCategory, almevttbl.AssetValue,
      almevttbl.IPSAction, almevttbl.l4Protocol,almevttbl.ExtraStrParam4 ,
      almevttbl.ExtraStrParam5,almevttbl.username,almevttbl.ExtraStrParam6,
      IpAddrFamily1,IPAddrValue11,IPAddrValue12,IPAddrValue13,IPAddrValue14,
      IpAddrFamily2,IPAddrValue21,IPAddrValue22,IPAddrValue23,IPAddrValue24
    FROM
           AlmEvtTbl PARTITION(ALMEVTTBLP20100323) WHERE AlmEvtId IN ( SELECT  * FROM
      ( SELECT /*+ FIRST_ROWS(1000) INDEX (AlmEvtTbl AlmEvtTbl_Index) */AlmEvtId
      FROM AlmEvtTbl PARTITION(ALMEVTTBLP20100323) where       ((AlmEvtTbl.Customerid
      = 0 or AlmEvtTbl.ParentCustId = 0))  ORDER BY AlmEvtTbl.TIMESTAMP DESC) 
      WHERE ROWNUM  <  602) order by timestamp desc
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.10       0.17          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch       42   1348.25    1521.24       1956   39029545          0         601
    total       44   1348.35    1521.41       1956   39029545          0         601
    Misses in library cache during parse: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 82 
    Rows     Row Source Operation
        601  PARTITION RANGE SINGLE PARTITION: 24 24 (cr=39029545 pr=1956 pw=1956 time=11043 us cost=0 size=7426 card=1)
        601   TABLE ACCESS BY LOCAL INDEX ROWID ALMEVTTBL PARTITION: 24 24 (cr=39029545 pr=1956 pw=1956 time=11030 us cost=0 size=7426 card=1)
        601    INDEX FULL SCAN ALMEVTTBL_INDEX PARTITION: 24 24 (cr=39029377 pr=1956 pw=1956 time=11183 us cost=0 size=0 card=1)(object id 72557)
        601     FILTER  (cr=39027139 pr=0 pw=0 time=0 us)
    169965204      COUNT STOPKEY (cr=39027139 pr=0 pw=0 time=24859073 us)
    169965204       VIEW  (cr=39027139 pr=0 pw=0 time=17070717 us cost=0 size=13 card=1)
    169965204        PARTITION RANGE SINGLE PARTITION: 24 24 (cr=39027139 pr=0 pw=0 time=13527031 us cost=0 size=48 card=1)
    169965204         TABLE ACCESS BY LOCAL INDEX ROWID ALMEVTTBL PARTITION: 24 24 (cr=39027139 pr=0 pw=0 time=10299895 us cost=0 size=48 card=1)
    169965204          INDEX FULL SCAN ALMEVTTBL_INDEX PARTITION: 24 24 (cr=1131414 pr=0 pw=0 time=3222624 us cost=0 size=0 card=1)(object id 72557)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                      42        0.00          0.00
      SQL*Net message from client                    42       11.54        133.54
      db file sequential read                      1956        0.20         28.00
      latch free                                     21        0.00          0.01
      latch: cache buffers chains                     9        0.01          0.02
    SQL ID : 0ushr863b7z39
    SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
      NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
      NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0)
    FROM
    (SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("PLAN_TABLE") FULL("PLAN_TABLE")
      NO_PARALLEL_INDEX("PLAN_TABLE") */ 1 AS C1, CASE WHEN
      "PLAN_TABLE"."STATEMENT_ID"=:B1 THEN 1 ELSE 0 END AS C2 FROM
      "SYS"."PLAN_TABLE$" "PLAN_TABLE") SAMPLESUB
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.01          1          3          0           1
    total        3      0.00       0.01          1          3          0           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 82     (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=3 pr=1 pw=1 time=0 us)
          0   TABLE ACCESS FULL PLAN_TABLE$ (cr=3 pr=1 pw=1 time=0 us cost=29 size=138856 card=8168)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         1        0.01          0.01
    SQL ID : bjkdb51at8dnb
    EXPLAIN PLAN SET STATEMENT_ID='PLUS30350011' FOR select almevttbl.AlmEvtId,
      almevttbl.AlmType, almevttbl.ComponentId, almevttbl.TimeStamp,
      almevttbl.Severity, almevttbl.State, almevttbl.Category,
      almevttbl.CauseCode, almevttbl.UnitType, almevttbl.UnitId,
      almevttbl.UnitName, almevttbl.ServerName, almevttbl.StrParam,
      almevttbl.ExtraStrParam, almevttbl.ExtraStrParam2, almevttbl.ExtraStrParam3,
       almevttbl.ParentCustId, almevttbl.ExtraParam1, almevttbl.ExtraParam2,
      almevttbl.ExtraParam3,almevttbl.ExtraParam4,almevttbl.ExtraParam5,
      almevttbl.SRCIPADDRFAMILY,almevttbl.SrcIPAddress11,almevttbl.SrcIPAddress12,
      almevttbl.SrcIPAddress13,almevttbl.SrcIPAddress14,
      almevttbl.DESTIPADDRFAMILY,almevttbl.DestIPAddress11,
      almevttbl.DestIPAddress12,almevttbl.DestIPAddress13,
      almevttbl.DestIPAddress14,  almevttbl.DestPort, almevttbl.SrcPort,
      almevttbl.SessionDir, almevttbl.CustomerId, almevttbl.ProfileId,
      almevttbl.ParentProfileId, almevttbl.CustomerName, almevttbl.AttkDir,
      almevttbl.SubCategory, almevttbl.RiskCategory, almevttbl.AssetValue,
      almevttbl.IPSAction, almevttbl.l4Protocol,almevttbl.ExtraStrParam4 ,
      almevttbl.ExtraStrParam5,almevttbl.username,almevttbl.ExtraStrParam6,
      IpAddrFamily1,IPAddrValue11,IPAddrValue12,IPAddrValue13,IPAddrValue14,
      IpAddrFamily2,IPAddrValue21,IPAddrValue22,IPAddrValue23,IPAddrValue24 FROM 
           AlmEvtTbl PARTITION(ALMEVTTBLP20100323) WHERE AlmEvtId IN ( SELECT  * FROM
      ( SELECT /*+ FIRST_ROWS(1000) INDEX (AlmEvtTbl AlmEvtTbl_Index) */AlmEvtId
      FROM AlmEvtTbl PARTITION(ALMEVTTBLP20100323) where       ((AlmEvtTbl.Customerid
      = 0 or AlmEvtTbl.ParentCustId = 0))  ORDER BY AlmEvtTbl.TIMESTAMP DESC) 
      WHERE ROWNUM  <  602) order by timestamp desc
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.28       0.26          0          0          0           0
    Execute      1      0.01       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.29       0.27          0          0          0           0
    Misses in library cache during parse: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 82 
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse       13      0.71       0.96          3         10          0           0
    Execute     14      0.20       0.29          4        304         26          21
    Fetch       92   2402.17    2714.85       3819   70033708          0        1255
    total      119   2403.09    2716.10       3826   70034022         26        1276
    Misses in library cache during parse: 10
    Misses in library cache during execute: 6
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                      49        0.00          0.00
      SQL*Net message from client                    48       29.88        163.43
      db file sequential read                      1966        0.20         28.10
      latch free                                     21        0.00          0.01
      latch: cache buffers chains                     9        0.01          0.02
      latch: session allocation                       1        0.00          0.00
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      940      0.51       0.73          1          2         38           0
    Execute   3263      1.93       2.62          7       1998         43          23
    Fetch     6049      1.32       4.41        214      12858         36       13724
    total    10252      3.78       7.77        222      14858        117       13747
    Misses in library cache during parse: 172
    Misses in library cache during execute: 168
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                        88        0.04          0.62
      latch: shared pool                              8        0.00          0.00
      latch: row cache objects                        2        0.00          0.00
      latch free                                      1        0.00          0.00
      latch: session allocation                       1        0.00          0.00
       34  user  SQL statements in session.
    3125  internal SQL statements in session.
    3159  SQL statements in session.
    Trace file: ora11g_ora_2064.trc
    Trace file compatibility: 11.01.00
    Sort options: default
           6  sessions in tracefile.
          98  user  SQL statements in trace file.
        9111  internal SQL statements in trace file.
        3159  SQL statements in trace file.
          89  unique SQL statements in trace file.
       30341  lines in trace file.
        6810  elapsed seconds in trace file.
    ###################################### AutoTrace Output#################  
    Statistics
           3901  recursive calls
              0  db block gets
       39030275  consistent gets
           1970  physical reads
            140  redo size
         148739  bytes sent via SQL*Net to client
            860  bytes received via SQL*Net from client
             42  SQL*Net roundtrips to/from client
             73  sorts (memory)
              0  sorts (disk)
            601  rows processed
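    One rewrite worth testing for this pattern (a sketch; it assumes indexes on (CustomerId, TimeStamp) and (ParentCustId, TimeStamp) and that CustomerId is NOT NULL): split the OR into a UNION ALL so each branch can use an index range scan instead of the INDEX FULL SCAN over 169 million rows shown above:
    SELECT *
    FROM  (SELECT *
           FROM  (SELECT * FROM AlmEvtTbl PARTITION (ALMEVTTBLP20100323)
                  WHERE  CustomerId = 0
                  UNION ALL
                  SELECT * FROM AlmEvtTbl PARTITION (ALMEVTTBLP20100323)
                  WHERE  ParentCustId = 0
                  AND    CustomerId <> 0)  -- avoid returning rows twice
           ORDER  BY TimeStamp DESC)
    WHERE  ROWNUM < 602;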

  • Best way to remove duplicates based on multiple tables

    Hi,
    I have a mechanism which loads flat files into multiple tables (can be up to 6 different tables) using external tables.
    Whenever a new file arrives, I need to insert duplicate rows into a side table, where duplicates are to be searched for across all 6 tables according to a given set of columns which exists in all of them.
    In the SQL Server version of the same mechanism (which I'm migrating to Oracle), an additional "UNIQUE" table with only 2 columns (Checksum1, Checksum2) holds the checksum values of 2 different sets of columns per inserted record. When a new file arrives, it computes these 2 checksums for every record and looks them up in the unique table to avoid searching all the different tables.
    We know that working with checksums is not bulletproof but with those sets of fields it seems to work.
    My questions are:
    should I use the same checksums mechanism? if so, should I use the owa_opt_lock.checksum function to calculate the checksums?
    Or should I look for duplicates in all tables one after the other (indexing some of the columns we check for duplicates with)?
    Note:
    These tables are partitioned with day partitions and can be very large.
    Any advice would be welcome.
    Thanks.
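    If you do stay with checksums, ORA_HASH is a built-in alternative to owa_opt_lock.checksum. A sketch with hypothetical names (ext_t is the external table, dup_side the side table, chk_unique(checksum1, checksum2) the lookup table, c1..c6 the shared columns, and '|' a separator to reduce concatenation collisions):
    INSERT INTO dup_side
    SELECT t.*
    FROM   ext_t t
    WHERE  EXISTS
           (SELECT 1
            FROM   chk_unique u
            WHERE  u.checksum1 = ORA_HASH(t.c1 || '|' || t.c2 || '|' || t.c3)
            AND    u.checksum2 = ORA_HASH(t.c4 || '|' || t.c5 || '|' || t.c6));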

    >
    I need to keep duplicate rows in a side table and not load them into table1...table6
    >
    Does that mean that you don't want ANY row if it has a duplicate on your 6 columns?
    Let's say I have six records that have identical values for your 6 columns. One record meets the condition for table1, one for table2 and so on.
    Do you want to keep one of these records and put the other 5 in the side table? If so, which one should be kept?
    Or do you want all 6 records put in the side table?
    You could delete the duplicates from the temp table as the first step. Or better:
    1. add a new column WHICH_TABLE NUMBER to the temp table
    2. update the new column to -1 for records that are dups
    3. update the new column (might be done with one query) to set the table number based on the conditions for each table
    4. INSERT INTO TABLE1 SELECT * FROM TEMP_TABLE WHERE WHICH_TABLE = 1
       ...
       INSERT INTO TABLE6 SELECT * FROM TEMP_TABLE WHERE WHICH_TABLE = 6
    When you are done the WHICH_TABLE will be flagged with
    1. NULL if a record was not a DUP but was not inserted into any of your tables - possible error record to examine
    2. -1 if a record was a DUP
    3. 1 - if the record went to table 1 (2 for table 2 and so on)
    This 'flag and then select' approach is more performant than deleting records after each select. Especially if the flagging can be done in one pass (full table scan).
    See this other thread (or many, many others on the net) from today for how to find and remove duplicates
    Best way of removing duplicates
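    A sketch of step 2 above (flagging the dups), with hypothetical column names c1..c6 for the shared key:
    UPDATE temp_table t
    SET    t.which_table = -1
    WHERE  (t.c1, t.c2, t.c3, t.c4, t.c5, t.c6) IN
           (SELECT c1, c2, c3, c4, c5, c6
            FROM   temp_table
            GROUP  BY c1, c2, c3, c4, c5, c6
            HAVING COUNT(*) > 1);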

  • Open Form Based On A Table in same window

    Hi All,
    First, to make things clearer, I'll explain what I CAN do:
    Create a page which queries a session variable at the start and then
    depending on its value outputs different HTML, but always in the same
    format and more importantly the same window, to keep the look and feel etc...
    I have a link in a page which when clicked opens a form using wwa_app_module.link
    so it auto queries the form. This works fine.
    What I CAN NOT do is:
    The form was created using the "form based on a table wizard" and always opens
    in a new window.
    Can I make the form open in the same window that contains my wwa_app_module.link?
    Is this possible in a newer version than the one I have (I've got Release 1)?
    Any suggestions?
    Cheers,
    Barry

    Firstly, thanks Rahul Dubey for responding.
    What I mean by "contains my wwa_app_module.link" is:
    I have a form which contains a link similar to the one below:
    http://xxx.co.uk:8015/pls/pod130/PORTAL30.wwa_app_module.link?p_arg_names=_moduleid&p_arg_values=1389245486&p_arg_names=EMPNO&
    p_arg_values=7654&p_arg_names=_empno_cond&p_arg_values=%3D%3E
    When I click on this link it opens the form and runs a query automatically.
    The problem is I want to click on the link and have the form appear in the
    same window, not a new one.
    Cheers,
    Barry

  • Creating Infoset query based on ABAP program

    Hello
    I have 3 tables FEBEP, BKPF and BSEG and I need to join the 3 tables based on:
    FEBEP-MANDT = BKPF-MANDT = BSEG-MANDT
    FEBEP-NBBLN = BKPF-BELNR = BSEG-AUGBL
    FEBEP-GJAHR = BKPF-GJAHR = BSEG-GJAHR
    Then I have a few view fields from all 3 tables. After this I can build an infoset query based on a structure + ABAP program, and a generic datasource on top of it.
    Can someone give me the ABAP code to be written in SE38? Also, should I select integrated program or external program in the infoset query?
    Thanks,
    Srini.
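    In plain SQL terms the requested join is sketched below (the selected fields are illustrative). Note, as the reply below points out, that BSEG is a cluster table in classic systems, so an open SQL join on it is not possible there:
    SELECT f.nbbln, b.belnr, s.augbl
    FROM   febep f
    JOIN   bkpf  b ON  b.mandt = f.mandt
                   AND b.belnr = f.nbbln
                   AND b.gjahr = f.gjahr
    JOIN   bseg  s ON  s.mandt = b.mandt
                   AND s.augbl = b.belnr
                   AND s.gjahr = b.gjahr;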

    Hi,
    Even if you create an ABAP program for the infoset, you will be writing a SELECT statement on the BSEG table, which is quite huge.
    And you will be joining it with other tables.
    Performance-wise this is not advisable.
    Why not try the following tables instead and check whether the fields you need are available?
    You can't join BSEG as it is a cluster table. In place of BSEG you can use:
    For accounts receivable data, use the BSID and BSAD tables
    For G/L account related data, use the BSIS and BSAS tables
    For accounts payable data, use the BSIK and BSAK tables
    Thanks.

  • Child table child column count based on parent table

    Hi,
    I have a requirement to generate a report.
    Based on a parent table, I want to find out the child table and the child key counts.
    In the query below, I give the parent table name and it returns the child table details and child key details:
    "SELECT b.table_name as table_name , d.column_name, b.R_CONSTRAINT_NAME
    FROM user_constraints a, user_constraints b, user_ind_columns c, user_cons_columns d
    WHERE a.constraint_type = 'P' AND
    a.CONSTRAINT_NAME = b.R_CONSTRAINT_NAME AND
    b.CONSTRAINT_TYPE = 'R' AND
    a.table_name = c.table_name AND
    a.constraint_name = c.index_name AND
    b.CONSTRAINT_NAME = d.constraint_name AND
    a.table_name = 'TABLENAME' "
    e.g., here I give the DEPT table name and I want the EMP table details.
    Example output:
    Childtable  Childkey  Count
    EMP         10        5
    EMP         20        10
    EMP         30        5
    ...etc.
    Please any body has solution for my requirement please help me .
    Thanks
    Edited by: tmadugula on Oct 26, 2012 6:25 AM

    Is what you are really asking how many FKs point to a specific table? If so, then you do not need the joins to user_ind_columns or user_cons_columns. You just join user_constraints to itself on a.r_constraint_name = b.constraint_name and b.table_name = target_table.
    A FK has to point to the PK or UK of the referenced table, so the number of columns pointed to will equal the number of columns in the constraint; I see no need to count the individual column references, as it will equal the number of FKs to the PK or UK constraint.
    HTH -- Mark D Powell --
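    A sketch of that self-join ('DEPT' is an illustrative target table):
    SELECT b.table_name      AS child_table,
           b.constraint_name AS fk_name
    FROM   user_constraints a
    JOIN   user_constraints b
           ON b.r_constraint_name = a.constraint_name
    WHERE  a.table_name      = 'DEPT'
    AND    a.constraint_type IN ('P', 'U')
    AND    b.constraint_type = 'R';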

  • Query related to database tables

    Hi,
    I have a requirement wherein I would like to create one Z table, whose purpose is only to supply the fields, not the values. With the help of these fields in the Z table I am developing some logic, and it should be dynamic based on the number of fields appended to the Z table.
    To be clear: can I write a select query only to get the fields from the table but not the values (there is no way I will get values, because I won't insert any records)?
    I think I am clear from my end.
    Thanks
    rohith

    To get the fields, you can write the query like this too:
    tables: dd03l.
    data: begin of itab occurs 0,
            fieldname like dd03l-fieldname,
          end of itab.
    *parameters: p_tab like dd03l-tabname.
    * DD03L holds the field definitions of every table; read just the
    * field names for the table you are interested in (VBAK here).
    select fieldname from dd03l into table itab where tabname = 'VBAK'.
    LOOP AT ITAB.
      WRITE:/ itab-fieldname.
    ENDLOOP.
