Why Is Query Against XMLTYPE Table ACME_CUST Doing A Full Table Scan?

On our Oracle Database 11g Enterprise Edition Release 11.2.0.1.0, we have a query against a 25,214-row XMLTYPE table, ACME_CUST.
SELECT rownum   AS seq,
      EID  AS eid,
      SUBSTR(CUST_ID, 1, INSTR(CUST_ID, '|')-1) AS tgt_acme_customer_id,
      SUBSTR(CUST_ID, INSTR(CUST_ID, '|')   +1) AS src_acme_customer_id_list
    FROM
      (SELECT ac.eid EID,
        listagg(ac.acme_cust_id, '|') WITHIN GROUP (
      ORDER BY ac.acme_cust_id, ac.acme_cust_id) CUST_ID
      FROM ACME_CUST ac
      GROUP BY ac.eid
      HAVING COUNT(ac.acme_cust_id)>1)
The explain plan shows:
Select Statement
Count
VIEW
FILTER
Filter Predicates
COUNT(*) > 1
SORT GROUP BY
TABLE ACCESS ACME_CUST FULL
The ACME_CUST table has a virtual column, ACME_CUST_ID, with a corresponding index. This field is also defined as the primary key.
Here is the table definition and the associated statements:
CREATE
  TABLE "N98991"."ACME_CUST" OF XMLTYPE
    CONSTRAINT "ACME_CUST_ID_PK" PRIMARY KEY ("ACME_CUST_ID") USING INDEX
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536
    NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
    FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
    DEFAULT) TABLESPACE "ACME_DEV" ENABLE
  XMLTYPE STORE AS SECUREFILE BINARY XML
    TABLESPACE "ACME_DEV" ENABLE STORAGE IN ROW CHUNK 8192 CACHE READS LOGGING
    NOCOMPRESS KEEP_DUPLICATES STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1
    MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
    CELL_FLASH_CACHE DEFAULT)
  ALLOW NONSCHEMA ALLOW ANYSCHEMA VIRTUAL COLUMNS
    "EID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
    'declare default element namespace "http://www.cigna.com/acme/domains/customer/customerprofile/2011/11"; (::)                              
/customerProfile/@eid'
    PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
    16777216,0),50,1,2) AS VARCHAR2(15))),
    "ACME_CUST_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
    'declare default element namespace "http://www.cigna.com/acme/domains/customer/customerprofile/2011/11"; (::)                              
/customerProfile/@id'
    PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
    16777216,0),50,1,2) AS VARCHAR2(50))),
    "CRET_DT" AS (SYS_EXTRACT_UTC(CAST(TO_TIMESTAMP_TZ(SYS_XQ_UPKXML2SQL(
    SYS_XQEXVAL(XMLQUERY(
    'declare default element namespace "http://www.cigna.com/acme/domains/customer/customerprofile/2011/11"; (::)                                                                                                      
/customerProfile/@create_dt'
    PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
    16777216,0),50,1,2),'SYYYY-MM-DD"T"HH24:MI:SS.FFTZH:TZM') AS TIMESTAMP
WITH
  TIME ZONE)))
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
    INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
    FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
    CELL_FLASH_CACHE DEFAULT
  TABLESPACE "ACME_DEV" ;
CREATE
  INDEX "N98991"."ACME_CST_CRET_DT_IDX" ON "N98991"."ACME_CUST"
    "CRET_DT"
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE
    INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
    FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
    CELL_FLASH_CACHE DEFAULT
  TABLESPACE "ACME_DEV" ;
CREATE
  INDEX "N98991"."ACME_CST_EID_IDX" ON "N98991"."ACME_CUST"
    "EID"
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE
    INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
    FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
    CELL_FLASH_CACHE DEFAULT
  TABLESPACE "ACME_DEV" ;
CREATE UNIQUE INDEX "N98991"."ACME_CUST_ID_PK" ON "N98991"."ACME_CUST"
    ("ACME_CUST_ID")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE
    INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
    FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
    CELL_FLASH_CACHE DEFAULT
  TABLESPACE "ACME_DEV" ;
  CREATE
    INDEX "N98991"."ACME_CUST_XMLINDEX_IX" ON "N98991"."ACME_CUST"
      OBJECT_VALUE
    INDEXTYPE IS "XDB"."XMLINDEX" PARAMETERS
      'XMLTABLE ACME_CUST_IDX_TAB XMLNamespaces (''http://www.cigna.com/acme/domains/commoncontact/2011/11'' as "cm",  default ''http://www.cigna.com/acme/domains/customer/customerprofile/2011/11''),      
''/customerProfile''       
columns      
DOB date  PATH ''personInformation/cm:birthDate'',      
FIRSTNAME varchar2(40)    PATH ''name/cm:givenName'',      
LASTNAME varchar2(40)    PATH ''name/cm:surName'',      
SSN varchar2(30)    PATH ''identifiers/ssn'',      
MEMBERINFOS XMLType path ''memberInfos/memberInfo'' VIRTUAL       
XMLTable acme_cust_lev2_idx_tab XMLNAMESPACES(default ''http://www.cigna.com/acme/domains/customer/customerprofile/2011/11''),      
''/memberInfo'' passing MEMBERINFOS         
columns         
ami varchar2(40) PATH ''ami'',        
subscId varchar2(50) PATH ''clientRelationship/subscriberInformation/subscriberId'',        
employeeId varchar2(50) PATH ''systemKeys/employeeId'',        
clientId varchar2(50) PATH ''clientRelationship/clientId''
' ) ;
CREATE UNIQUE INDEX "N98991"."SYS_C00384339" ON "N98991"."ACME_CUST"
    "SYS_NC_OID$"
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE
    INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
    FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
    CELL_FLASH_CACHE DEFAULT
  TABLESPACE "ACME_DEV" ;
CREATE UNIQUE INDEX "N98991"."SYS_IL0000649948C00003$$" ON "N98991"."ACME_CUST"
    PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576
    MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST
    GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "ACME_DEV" PARALLEL (DEGREE 0 INSTANCES 0) ;Why isn't the unique index ACME_CUST_ID_PK on the virtual column ACME_CUST_ID being used in the explain plan?
Any input would be much appreciated, as I'm really stumped here.
Regards,
Rick

Hi Richard,
"The 10053 event appears overkill for this situation."
What's the big deal?
Set the event, run the query, unset the event, check the trace file, that's all.
It's not overkill if it helps you understand what happens and why an index is of no use in this situation.
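For reference, here is roughly what that looks like in SQL*Plus; a sketch only, with the tracefile identifier purely illustrative:
ALTER SESSION SET tracefile_identifier = 'acme_10053';
ALTER SESSION SET events '10053 trace name context forever, level 1';
-- hard-parse and run the query here (e.g. change the text slightly so it is not reused from the shared pool)
ALTER SESSION SET events '10053 trace name context off';
-- then read the trace file from the diagnostic trace directory (user_dump_dest / V$DIAG_INFO)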
"Tried the /*+ INDEX_FFS(ACME_CUST_ID_PK) */ hint in the 'nested' query."
Not sure what nested query you're referring to, so if I misunderstood what you mean, just ignore the following comment.
From what you posted earlier, it looks like you're talking about this part :
listagg(ac.acme_cust_id,'|') WITHIN GROUP (
ORDER BY ac.acme_cust_id,ac.acme_cust_id) CUST_ID
That's not a nested query, it's a projection. All the main work (retrieving rows) has already been done when it comes to this part.
"May just have to accept the query performance as it is..."
Maybe you can try something else.
See the document Oracle XML DB: Best Practices, page 15, example 8:
When there are multiple scalar values that need to be grouped or ordered, it is better to write it
with XMLTable construct that projects out all columns to be ordered or grouped as shown
below.
Here's an example close to your actual requirement:
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
SQL> create table xtab_cols of xmltype
  2  xmltype store as securefile binary xml;
Table created.
SQL> insert /*+ append */ into xtab_cols
  2  select xmlelement("ROW",
  3           xmlforest(
  4            TABLE_NAME, COLUMN_NAME, DATA_TYPE, DATA_TYPE_MOD, DATA_TYPE_OWNER,
  5            DATA_LENGTH, DATA_PRECISION, DATA_SCALE, NULLABLE, COLUMN_ID,
  6            DEFAULT_LENGTH, NUM_DISTINCT, LOW_VALUE, HIGH_VALUE,
  7            DENSITY, NUM_NULLS, NUM_BUCKETS, LAST_ANALYZED, SAMPLE_SIZE,
  8            CHARACTER_SET_NAME, CHAR_COL_DECL_LENGTH,
  9            GLOBAL_STATS, USER_STATS, AVG_COL_LEN, CHAR_LENGTH, CHAR_USED,
10            V80_FMT_IMAGE, DATA_UPGRADED, HISTOGRAM
11           )
12         )
13  from dba_tab_cols
14  where owner = 'SYS'
15  ;
57079 rows created.
SQL> commit;
Commit complete.
SQL> set long 1000
SQL> set pages 100
SQL> select xmlserialize(document object_value) from xtab_cols where rownum = 1;
XMLSERIALIZE(DOCUMENTOBJECT_VALUE)
<ROW>
  <TABLE_NAME>ACCESS$</TABLE_NAME>
  <COLUMN_NAME>D_OBJ#</COLUMN_NAME>
  <DATA_TYPE>NUMBER</DATA_TYPE>
  <DATA_LENGTH>22</DATA_LENGTH>
  <NULLABLE>N</NULLABLE>
  <COLUMN_ID>1</COLUMN_ID>
  <NUM_DISTINCT>7454</NUM_DISTINCT>
  <LOW_VALUE>C2083A</LOW_VALUE>
  <HIGH_VALUE>C3031D18</HIGH_VALUE>
  <DENSITY>,000134156157767642</DENSITY>
  <NUM_NULLS>0</NUM_NULLS>
  <NUM_BUCKETS>1</NUM_BUCKETS>
  <LAST_ANALYZED>2012-01-28</LAST_ANALYZED>
  <SAMPLE_SIZE>34794</SAMPLE_SIZE>
  <GLOBAL_STATS>YES</GLOBAL_STATS>
  <USER_STATS>NO</USER_STATS>
  <AVG_COL_LEN>5</AVG_COL_LEN>
  <CHAR_LENGTH>0</CHAR_LENGTH>
  <V80_FMT_IMAGE>NO</V80_FMT_IMAGE>
  <DATA_UPGRADED>YES</DATA_UPGRADED>
  <HISTOGRAM>NONE</HISTOGRAM>
</ROW>
SQL> exec dbms_stats.gather_table_stats(user, 'XTAB_COLS');
PL/SQL procedure successfully completed.
SQL> set autotrace traceonly
SQL> set timing on
SQL> set lines 120
SQL> select x.table_name
  2       , listagg(x.column_name, ',') within group (order by column_id)
  3  from xtab_cols t
  4     , xmltable('/ROW' passing t.object_value
  5        columns table_name  varchar2(30) path 'TABLE_NAME'
  6              , column_name varchar2(30) path 'COLUMN_NAME'
  7              , column_id   number       path 'COLUMN_ID'
  8       ) x
  9  group by x.table_name
10  ;
4714 rows selected.
Elapsed: 00:00:08.25
Execution Plan
Plan hash value: 602782846
| Id  | Operation           | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT    |           |   466M|   101G|  1580K  (3)| 05:16:04 |
|   1 |  SORT GROUP BY      |           |   466M|   101G|  1580K  (3)| 05:16:04 |
|   2 |   NESTED LOOPS      |           |   466M|   101G|  1552K  (1)| 05:10:32 |
|   3 |    TABLE ACCESS FULL| XTAB_COLS | 57079 |    12M|   408   (1)| 00:00:05 |
|   4 |    XPATH EVALUATION |           |       |       |            |          |
Statistics
          9  recursive calls
          1  db block gets
       1713  consistent gets
          0  physical reads
         96  redo size
     773516  bytes sent via SQL*Net to client
       3873  bytes received via SQL*Net from client
        316  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
        4714  rows processed
And of course, even better after adding a structured XML index (4714 rows fetched in 1s):
SQL> CREATE INDEX xtab_cols_sxi ON xtab_cols (OBJECT_VALUE) INDEXTYPE IS XDB.XMLIndex
  2  PARAMETERS (
  3  q'#XMLTable my_xtab
  4  '/ROW'
  5  columns table_name varchar2(30) path 'TABLE_NAME'
  6        , column_name varchar2(30) path 'COLUMN_NAME'
  7        , column_id number path 'COLUMN_ID' #');
Index created.
Elapsed: 00:00:13.42
SQL> select x.table_name
  2       , listagg(x.column_name, ',') within group (order by column_id)
  3  from xtab_cols t
  4     , xmltable('/ROW' passing t.object_value
  5        columns table_name  varchar2(30) path 'TABLE_NAME'
  6              , column_name varchar2(30) path 'COLUMN_NAME'
  7              , column_id   number       path 'COLUMN_ID'
  8       ) x
  9  group by x.table_name
10  ;
4714 rows selected.
Elapsed: 00:00:01.00
Execution Plan
Plan hash value: 3303494605
| Id  | Operation          | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |         | 57520 |  3201K|   174   (3)| 00:00:03 |
|   1 |  SORT GROUP BY     |         | 57520 |  3201K|   174   (3)| 00:00:03 |
|   2 |   TABLE ACCESS FULL| MY_XTAB | 57520 |  3201K|   171   (1)| 00:00:03 |
Note
   - dynamic sampling used for this statement (level=2)
Statistics
        297  recursive calls
          1  db block gets
        989  consistent gets
          0  physical reads
        176  redo size
     773516  bytes sent via SQL*Net to client
       3873  bytes received via SQL*Net from client
        316  SQL*Net roundtrips to/from client
         21  sorts (memory)
          0  sorts (disk)
       4714  rows processed
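Applied to the ACME_CUST case, the same XMLTable pattern would look roughly like the sketch below. This is only a sketch, assuming the eid and id attributes sit on /customerProfile exactly as in the virtual column definitions above, and it is untested against your schema:
SELECT rownum AS seq,
       eid,
       SUBSTR(cust_id, 1, INSTR(cust_id, '|') - 1) AS tgt_acme_customer_id,
       SUBSTR(cust_id, INSTR(cust_id, '|') + 1)    AS src_acme_customer_id_list
FROM  (SELECT x.eid,
              LISTAGG(x.acme_cust_id, '|') WITHIN GROUP (ORDER BY x.acme_cust_id) AS cust_id
       FROM   acme_cust t,
              XMLTABLE(
                XMLNAMESPACES (DEFAULT 'http://www.cigna.com/acme/domains/customer/customerprofile/2011/11'),
                '/customerProfile'
                PASSING t.object_value
                COLUMNS eid          VARCHAR2(15) PATH '@eid',
                        acme_cust_id VARCHAR2(50) PATH '@id'
              ) x
       GROUP BY x.eid
       HAVING COUNT(x.acme_cust_id) > 1);
With a structured XMLIndex covering those paths, the optimizer can rewrite the XMLTABLE against the index content table instead of evaluating XPath row by row, which is what makes the second plan above so much cheaper.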

Similar Messages

  • How to use one query against multiple tables and receive one report?

    I have duplicate tables (except for their names, of course) with commodities prices. They have the same column headings, but the data is different, of course. I have a query that gives me a certain piece of information I am looking for, but now I need to run this query against every table. I will do this every day as well, to see if the buying criteria are met. There are a lot of tables though (256). Is there a way to run the query against all tables and return the results in one place? Thanks for your help.

    hey
    a. all 256 tables could be one big partitioned table
    b. you can use all_tables in order to write a select that will generate the report for you:
    SQL> set head off
    SQL> select 'select * from (' from dual
      2  union all
      3  select 'select count(*) from ' || table_name || ' union all ' from a
      4  where table_name like 'DB%' AND ROWNUM <= 3
      5  union all
      6  select ')' from dual;
    select * from (
    select count(*) from DBMS_LOCK_ALLOCATED union all
    select count(*) from DBMS_ALERT_INFO union all
    select count(*) from DBMS_UPG_LOG$ union all
    remove the last 'union all', and run the generated query -
    SQL> set head on
    SQL> select * from (
      2  select count(*) from DBMS_LOCK_ALLOCATED union all
      3  select count(*) from DBMS_ALERT_INFO union all
      4  select count(*) from DBMS_UPG_LOG$
      5  );
      COUNT(*)
             0
             0
             0
    Amiel

  • Query occasionally causes table scans (db file sequential read)

    Dear all,
    we periodically issue a query on a huge table via an oracle job.
    Whenever I invoke the query manually, the response time is good. When I start the periodic job, initially the response times are good as well. After some days, however, the query suddenly takes almost forever.
    My vague guess is that for some reason the query suddenly changes the execution plan from using the primary key index to a full table scan (or huge index range scan). Maybe because of some problems with the primary key index (fragmentation? Other?).
    - Could it be the case that the execution plan for a query changes (automatically) like this? For what reasons?
    - Do you have any hints where to look for further information for analysis? (logs, special event types, ...)?
    - Of course, the query was designed having involved indexes in mind. Also, I studied the execution plan and did not find hints for problematic table/range scans.
    - It is not a lock contention problem
    - When the query "takes forever", there is a "db file sequential read" event in v$session_event for the query with an ever increasing wait time. That's why I guess a (unreasonable) table scan is happening.
    Some characteristics of the table in question:
    - ~30 million rows
    - There are only insertions to the table, as well as updates on a single, indexed field. No deletes.
    - There is an integer primary key field with a B-tree index.
    Characteristics of the query:
    The main structure of the query is very simple and as follows: I select a range of about 100 rows via primary key "id", like:
    Select * from TheTable where id>11222300 and id <= 11222400
    There are several joins with rather small tables, which make the overall query more complicated.
    However, the few (100) rows of the huge table in question should always be fetched using the primary key index, shouldn't they?
    Please let me know if some relevant information about the problem is missing.
    Thanks!
    Best regards,
    Nang.

    user2560749 wrote:
    Dear all,
    we periodically issue a query on a huge table via an oracle job.
    Whenever I invoke the query manually, the response time is good. When I start the periodic job, initially the response times are good as well. After some days, however, the query suddenly takes almost forever.
    My vague guess is that for some reason the query suddenly changes the execution plan from using the primary key index to a full table scan (or huge index range scan). Maybe because of some problems with the primary key index (fragmentation? Other?).
    - Could it be the case that the execution plan for a query changes (automatically) like this? For what reasons?
    Yes, it's possible. One reason is that the stats of the table have changed, i.e. somebody ran dbms_stats. If you are worried about the execution plan changing, there are two options: 1) lock the stats, 2) use a hint in the query.
    - Do you have any hints where to look for further information for analysis? (logs, special event types, ...)?
    Have a 10053 trace enabled whenever the query plan changes and analyse it.
    - Of course, the query was designed having involved indexes in mind. Also, I studied the execution plan and did not find hints for problematic table/range scans.
    - It is not a lock contention problem
    - When the query "takes forever", there is a "db file sequential read" event in v$session_event for the query with an ever increasing wait time. That's why I guess a (unreasonable) table scan is happening.
    If it is db file sequential read then I see two possibilities: 1) it is doing an index range scan (not a full table scan), or 2) it is scanning the undo tablespace.
    Some characteristics of the table in question:
    - ~30 million rows
    - There are only insertions to the table, as well as updates on a single, indexed field. No deletes.
    - There is an integer primary key field with a B-tree index.
    Characteristics of the query:
    The main structure of the query is very simple and as follows: I select a range of about 100 rows via primary key "id", like:
    Select * from TheTable where id>11222300 and id <= 11222400
    There are several joins with rather small tables, which make the overall query more complicated.
    However, the few (100) rows of the huge table in question should always be fetched using the primary key index, shouldn't they?
    Yes, theoretically it should; in practice we can only tell by looking at the run-time execution plan (through a 10053 or 10046 trace).
    Please let me know if some relevant information about the problem is missing.
    Thanks!
    Best regards,
    Nang.
    I am still not sure in which direction you are looking for a solution.
    Is your query performing badly once in a fortnight, and the next day it is all the same again?
    I suggest you:
    1) Check if the query is scanning the undo tablespace. I see you mentioned there are a lot of inserts; it could be that Oracle is scanning the undo tablespace because of delayed block cleanout.
    2) Check if on that particular day the number of records is high compared to other days.
    Or, once it starts performing badly, is there then no change in response time for the next couple of days?
    1) Check if the explain plan has changed.
    And what action do you take to bring the response time back to normal?
    Regards
    Anurag
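    If the plan flip does turn out to be statistics-driven, locking the table statistics (option 1 above) is straightforward; a sketch, with the table name as a placeholder:
    EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => USER, tabname => 'THETABLE');
    -- and later, when you do want to refresh them:
    EXEC DBMS_STATS.UNLOCK_TABLE_STATS(ownname => USER, tabname => 'THETABLE');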

  • Query against XMLType

    I'm hung up on something that I suspect is very easy, but I have little experience working with XML in the database and think I've just stared at it too long.
    I have an XML document (fragment below) stored as an XMLType in a 10g R2 database table and I want to return the following result set from a query against the document:
    PAGE_ID PAGE_SUBMISSION_FIELD
    21 F39EF21
    21 F85FG3E
    21 F73EF58
    21 FA4939F
    22 FDE77A4
    22 FF3AD33
    Here is a fragment of the XML document:
    <root-interface>
    <page-group>
    <display-page validation-state="new">
    <id>21</id>
    <page-submission-fields>
    <page-submission-field>F39EF21</page-submission-field>
    <page-submission-field>F85FG3E</page-submission-field>
    <page-submission-field>F73EF58</page-submission-field>
    <page-submission-field>FA4939F</page-submission-field>
    </page-submission-fields>
    </display-page>
    <display-page validation-state="new">
    <id>22</id>
    <page-submission-fields>
    <page-submission-field>FDE77A4</page-submission-field>
    <page-submission-field>FF3AD33</page-submission-field>
    </page-submission-fields>
    </display-page>
    </page-group>
    </root-interface>
    Here is the table in which it is stored:
    desc cms_session_interfaces
    Name Null? Type
    SESSION_ID NOT NULL NUMBER
    INTERFACE_ID NOT NULL NUMBER
    CREATED NOT NULL DATE
    LAST_ACCESSED DATE
    INTERFACE_XML SYS.XMLTYPE
    Here is as close as I've come with a query:
    select extract(value(display_pages), '//id/text()').getStringVal() page_id,
    extract(value(display_pages), '//page-submission-fields/page-submission-field') page_submission_field
    from cms_session_interfaces csi,
    table(xmlsequence(extract(csi.interface_xml, '//root-interface/page-group/display-page'))) display_pages
    where csi.session_id = 41
    and csi.interface_id = 596
    (the specified session_id and interface_id are just for testing purposes)
    This returns two rows consisting of the PAGE_ID and an object of XMLType containing the page-submission fields. Almost there, but not quite; any suggestions would be appreciated.

    As Marco said, xmltable is very handy:
    SQL> with cms_session_interfaces as (
      2  select XMLType('
      3  <root-interface>
      4    <page-group>
      5      <display-page validation-state="new">
      6        <id>21</id>
      7        <page-submission-fields>
      8          <page-submission-field>F39EF21</page-submission-field>
      9          <page-submission-field>F85FG3E</page-submission-field>
    10          <page-submission-field>F73EF58</page-submission-field>
    11          <page-submission-field>FA4939F</page-submission-field>
    12        </page-submission-fields>
    13      </display-page>
    14      <display-page validation-state="new">
    15        <id>22</id>
    16        <page-submission-fields>
    17          <page-submission-field>FDE77A4</page-submission-field>
    18          <page-submission-field>FF3AD33</page-submission-field>
    19        </page-submission-fields>
    20      </display-page>
    21          </page-group>
    22  </root-interface>') interface_xml
    23  from dual)
    24  select page.id,fields.field
    25  from cms_session_interfaces csi,
    26  xmltable('//root-interface/page-group/display-page'
    27   passing interface_xml
    28   columns
    29    id varchar2(10) path 'id',
    30    page_fields xmltype path 'page-submission-fields') page,
    31  xmltable('/page-submission-fields/page-submission-field'
    32   passing page_fields
    33   columns
    34    field varchar2(25) path '.') fields
    35  /
    ID         FIELD
    21         F39EF21
    21         F85FG3E
    21         F73EF58
    21         FA4939F
    22         FDE77A4
    22         FF3AD33
    6 rows selected.
    Best regards
    Maxim

  • What does a "full surface scan" of a hard drive do?

    I've been advised to do a "hard format and full surface scan" of new internal hard drives I'm adding to my new mac pro to hopefully avoid encountering any problems down the line. (mostly HDV video editing purposes) If there are any problems I would just exchange the drives now rather than wait for the problem to pop up unexpectedly. I'm not sure if disk utility does a true surface scan. So, is there any included apple application that does a true "full surface scan"? If not, what other inexpensive software would do it? I've heard of disk warrior and similar software but they are $100 which I'd like to avoid spending on a preemptive surface scan. Thanks.

    You can look for a free utility. But you won't find a better disk catalogue maintenance program.
    You could buy TechTool Pro that does a good media surface scan but then fails or doesn't actually do anything to map a sector out.
    Windows will let you use the vendor's own tool, and it does an excellent job.
    SMART Utility sounds like it does a good job.
    Intech Speedtools has a suite of tools, but I found it did not do a good job when I noticed the side effects and behavior of weak and bad blocks. The ZoneBench/QuickBench set is only $29 and you can do a lot with those; it helps to create multiple partitions to force a write test to every block.
    Bottom line as I see it: no free lunches.
    I used to believe that a zero-all would catch them, but the errors have to be really bad for it to pick them up. And a 7-way write erase takes 7X longer and really strains things. So I stick to the WD Diagnostic Utility running in Windows, and I swear by the results and the job it does. Excellent.
    In theory, enterprise drives with 1.4 million MTBF hours have longer burn-in and therefore should be safer.
    You could torture a drive for a few days and load it with files and then erase with zero and/or 7-way.

  • XPath query against XMLType

    Hello,
    I am trying to reproduce the following XPath query using XDB functionality against an XMLType column:
    //AtomicPart[@MyID='190' or @MyID='495' or @MyID='1662']
    If I do the following I do get all AtomicParts:
    select X.xml.extract('//AtomicPart') FROM TEST X
    But I haven't figured out how to do the or operation. Is it possible, or does it require views?
    Thank you,
    Robert

    Robert
    Need to see the instance document in order to answer this.
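    Without the instance document this is only a sketch, but an or-predicate can usually be written directly inside the XPath passed to extract/existsNode (assuming the XMLType column is called xml, as in the posted query):
    SELECT x.xml.extract('//AtomicPart[@MyID="190" or @MyID="495" or @MyID="1662"]') atomic_parts
    FROM   test x
    WHERE  x.xml.existsNode('//AtomicPart[@MyID="190" or @MyID="495" or @MyID="1662"]') = 1;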

  • Question about how to create FOREIGN KEY constraints against an XMLTYPE table

    Hi,
    1. I have a table called SNPLEX_DESIGN which is created as XMLTYPE; the XMLTYPE refers to a registered XML schema. The XML schema has a data element called BATCH_ID, and I created a primary key for the SNPLEX_DESIGN table on BATCH_ID. The SQL statement is:
    ALTER TABLE SNPLEX.SNPLEX_DESIGN ADD (CONSTRAINT "BATCH_ID_PK" PRIMARY KEY(xmldata."BATCH_ID"))
    2. I have another table called SNPLEX_PROCESS which is a regular relational table with a column TOKENID. I would like to create a foreign key on TOKENID which refers to the SNPLEX_DESIGN table's BATCH_ID_PK primary key.
    But I got an error when I tried to alter the SNPLEX_PROCESS table.
    SQL> ALTER TABLE "SNPLEX"."SNPLEX_PROCESS" ADD (CONSTRAINT "BATCH_ID_FK" FOREIGN KEY("TOKENID") RE
    FERENCES "SNPLEX"."SNPLEX_DESIGN"(xmldata."BATCH_ID"));
    ERROR at line 1:
    ORA-02298: cannot validate (SNPLEX.BATCH_ID_FK) - parent keys not found
    3. Can someone help me with this? I have no problem creating a foreign key in SNPLEX_DESIGN that refers to a primary key in a relational table. But why can I not do it the other way around?
    Any assistance will be appreciated.
    Jinsen

    Hi Jinsen
    As mentioned in the error message, not all rows in SNPLEX_PROCESS have a corresponding value in SNPLEX_DESIGN.
    To find out which are missing, do some selects to compare your data, or use the EXCEPTIONS clause in the ALTER TABLE statement.
    e.g.: ALTER TABLE "SNPLEX"."SNPLEX_PROCESS" ADD (CONSTRAINT "BATCH_ID_FK" FOREIGN KEY("TOKENID") RE
    FERENCES "SNPLEX"."SNPLEX_DESIGN"(xmldata."BATCH_ID")) EXCEPTIONS INTO <exception table>
    Notice that you have to create the <exception table> with the script $ORACLE_HOME/rdbms/admin/utlexcpt.sql.
    Chris
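    To narrow it down, a quick orphan check is one option; a minimal sketch, where the XPath to BATCH_ID is only illustrative and must match your registered schema:
    SELECT p.tokenid
    FROM   snplex.snplex_process p
    WHERE  p.tokenid NOT IN (SELECT extractValue(value(d), '/snplexDesign/batchId')  -- illustrative path
                             FROM   snplex.snplex_design d
                             WHERE  extractValue(value(d), '/snplexDesign/batchId') IS NOT NULL);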

  • How to create index to speed up query on XMLTYPE table

    I have a table of XMLTYPE called gary_pass_xml. What kind of index can I create on the table to speed up this query?
    SELECT (Extract(Value(FareGroupNodes),'/FareGroup')) FareGroup
    FROM GARY_PASS_XML tx,
    TABLE(XMLSequence(Extract(Value(tx),'/FareSearchRS/FareGroup'))) FareGroupNodes
    WHERE existsnode(value(tx),'/FareSearchRS/FareGroup') = 1

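    One option worth trying on 11g is an XMLIndex restricted to the path the query touches; a sketch only (the index name is illustrative, and on releases before 11.1 the available options differ):
    CREATE INDEX gary_pass_xml_xix ON gary_pass_xml (OBJECT_VALUE)
      INDEXTYPE IS XDB.XMLIndex
      PARAMETERS ('PATHS (INCLUDE (/FareSearchRS/FareGroup))');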

  • (urgent) How to write a summarizing query against XMLType?

    Hello,
    I have an XML document like this:
    <List>
    <Item>
    <A>10</A>
    <B>554</B>
    <C>25.5.2005</C>
    </Item>
    <Item>
    <A>20</A>
    <B>49</B>
    <C>26.5.2005</C>
    </Item>
    <Item>
    <A>30</A>
    <B>184</B>
    <C>27.5.2005</C>
    </Item>
    </List>
    in an XMLType variable (not a table column), and I need to e.g. sum (or count, or something like that) the values at XPath /List/Item/B within one document.
    Is there some quick and elegant way to do this?
    Thanks for quick help.

    Maybe the below will help.
    Did you look at building a view over the XMLType, extracting the values into a relational-style view, then using the view to sum up the values?
    Jonathan Gennick has an article on the Oracle OTN website with the code below.
    CREATE VIEW cd_master (Title, Artist, Website, Description) AS
    SELECT extractValue(value(x),'/CD/Title'),
    extractValue(value(x),'/CD/Artist'),
    extractValue(value(x),'/CD/Website'),
    extractValue(value(x),'/CD/Description')
    FROM CD331_TAB x;
    CREATE INDEX by_artist ON CD331_TAB x (
    extractValue(value(x),'/CD/Artist'));
    ANALYZE TABLE cd331_tab COMPUTE STATISTICS FOR TABLE;
    ANALYZE INDEX by_artist COMPUTE STATISTICS;
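    Since the document is in an XMLType variable rather than a column, XMLTABLE (10.2 onwards) can also be used directly from PL/SQL; a minimal sketch, with hard-coded sample data standing in for your variable:
    DECLARE
      v_doc   XMLTYPE := XMLTYPE('<List><Item><A>10</A><B>554</B></Item>'
                              || '<Item><A>20</A><B>49</B></Item>'
                              || '<Item><A>30</A><B>184</B></Item></List>');
      v_total NUMBER;
    BEGIN
      SELECT SUM(x.b)
      INTO   v_total
      FROM   XMLTABLE('/List/Item' PASSING v_doc
                      COLUMNS b NUMBER PATH 'B') x;
      DBMS_OUTPUT.PUT_LINE('Sum of B = ' || v_total);  -- 787 for the sample data
    END;
    /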

  • Why does this query not show the result?

    Why does the query with the schema prefixed not show the result, while the query without the schema displays the correct results?
    SQL> select data_object_id,object_type from dba_objects where object_name='HR'.'JOBS';
    select data_object_id,object_type from dba_objects where object_name='HR'.'JOBS'
    ERROR at line 1:
    ORA-00933: SQL command not properly ended
    SQL> select data_object_id,object_type from dba_objects where object_name='HR.JOBS';
    no rows selected
    SQL> select data_object_id, OWNER, object_type from dba_objects where object_name='JOBS';
    DATA_OBJECT_ID     OWNER                          OBJECT_TYPE
    69662              HR                                 TABLE
                       OE                                 SYNONYM
    SQL> SELECT USER FROM DUAL;
    USER
    SYS

    Hi,
    the column object_name refers to the object name, which is 'JOBS'; the column owner refers to the owner, 'HR'. The values are not stored together, so you have to use the two columns separately. It is the same behaviour as for every other table/view. Have a look at the values in the view DBA_OBJECTS.
    Herald ten Dam
    Superconsult.nl
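    In other words, to get only HR's table you filter on both columns; for example:
    SELECT data_object_id, object_type
    FROM   dba_objects
    WHERE  owner = 'HR'
    AND    object_name = 'JOBS';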

  • Run a query on linked tables to create a new datasource

    Using Crystal XI
    I have a report that draws from two data sources.  They can't be joined at the server side, but they are linked in Crystal Database Expert. 
    I can't figure out if Crystal gives me a way to write an SQL Query to run an aggregate function referencing both tables.  The results of this query would be the datasource for a graph in the report.  I'm wondering if Crystal gives me a way, maybe through subreports, to write the query I need.
    More concretely:
    And I want to include in my datasource alarmId, Hour, and the maximum number of calls received in any one hour for any one station (this maximum is to provide scale for a graph).
    In one table named AlarmStartTimes I have data like
    Alarms
    AlarmID  Hour    Recipient
    Alarm1       8       Joe
    Alarm23  10      Mark
    Alarm60  7      Joe
    Alarm95  8      Linda
    In another I have data like
    EELocation
    Recipient   Location
    Joe         Station1
    Mark         Station2
    Linda          Station1
    So if I could just join my tables at the server side I'd use a query like:
    select *, max(select count(AlarmID) from Alarms, EELocation from Alarms Join EELocationo on Alarms.Recipient=EELocation.Recipient group by Hour, Location) AS  from Alarms
    Anyway, that's probably got a syntax error or 4 in it, but you get the idea.
    I can't group on the database side. Since Crystal is able to link the two tables and successfully group them by Location, it seems like there should be some way for me to run a query against the tables reflecting that existing link, but I can't see how to do it.
    The reason I can't link on database side is that the data is in two databases, and it's not known what the location of the databases is at report-writing time.  The location of the databases gets set via the Crystal API when the report is launched from an application.

    Since you have 2 data sources in the report you are limited in what you can use in Crystal; for example, Crystal will not allow you to use SQL expressions.

  • Why does query do a full table scan?

    I have a simple select query that filters on the last 10 or 11 days of data from a table. In the first case it executes in 1 second. In the second case it is taking 15+ minutes and still not done.
    I can tell that the second query (11 days) is doing a full table scan.
    - Why is this happening? ... I guess some sort of threshold calculation???
    - Is there a way to prevent this? ... or encourage Oracle to play nice.
    I find it confusing from a front end/query perspective to get vastly different performance.
    Jason
    Oracle 10g
    Quest Toad 10.6
    CREATE TABLE delme10 AS
    SELECT *
    FROM ed_visits
    WHERE first_contact_dt >= TRUNC(SYSDATE-10,'D');
    Plan hash value: 915912709
    | Id  | Operation                    | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | CREATE TABLE STATEMENT       |                   |  4799 |  5534K|  4951   (1)| 00:01:00 |
    |   1 |  LOAD AS SELECT              | DELME10           |       |       |            |          |
    |   2 |   TABLE ACCESS BY INDEX ROWID| ED_VISITS         |  4799 |  5534K|  4796   (1)| 00:00:58 |
    |*  3 |    INDEX RANGE SCAN          | NDX_ED_VISITS_020 |  4799 |       |    15   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - access("FIRST_CONTACT_DT">=TRUNC(SYSDATE@!-10,'fmd'))
    CREATE TABLE delme11 AS
    SELECT *
    FROM ed_visits
    WHERE first_contact_dt >= TRUNC(SYSDATE-11,'D');
    Plan hash value: 1113251513
    | Id  | Operation              | Name      | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | CREATE TABLE STATEMENT |           | 25157 |    28M| 14580   (1)| 00:02:55 |        |      |            |
    |   1 |  LOAD AS SELECT        | DELME11   |       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR       |           |       |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM) | :TQ10000  | 25157 |    28M| 14530   (1)| 00:02:55 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR  |           | 25157 |    28M| 14530   (1)| 00:02:55 |  Q1,00 | PCWC |            |
    |*  5 |      TABLE ACCESS FULL | ED_VISITS | 25157 |    28M| 14530   (1)| 00:02:55 |  Q1,00 | PCWP |            |
    Predicate Information (identified by operation id):
       5 - filter("FIRST_CONTACT_DT">=TRUNC(SYSDATE@!-11,'fmd'))

    Hi Jason,
    I think you're right about some kind of "threshold". You can verify the CBO costing with event 10053 enabled. There are many possible ways to change this behaviour. The most straightforward would probably be an INDEX hint, but you can also change some index-cost-related parameters, check histograms, decrease the degree of parallelism on the table, create a stored outline, etc.
    Lukasz
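    For the investigation itself, a hinted run (using the index and table names from the plans above) would look something like this sketch; as noted, hints like this are for testing rather than production code:
    CREATE TABLE delme11 AS
    SELECT /*+ INDEX(v NDX_ED_VISITS_020) NO_PARALLEL(v) */ *
    FROM   ed_visits v
    WHERE  first_contact_dt >= TRUNC(SYSDATE - 11, 'D');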

  • Why is DBXML doing a table scan on this query?

    I loaded a database with about 610 documents, each contains about 5000 elements of the form:
    <locations><location><id>100</id> ... </location> <location><id>200</id> ... </location> ... </locations>
    The size of my dbxml file is about 16G. I created this with all default settings, except that I set auto-indexing off and added 3 indexes, listed here:
    dbxml> listIndexes
    Index: unique-edge-element-equality-string for node {}:id
    Index: edge-element-presence-none for node {}:location
    Index: node-element-presence-none for node {}:locations
    Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
    4 indexes found.
    I am performing the following query:
    dbxml> query 'for $location in (collection("CitySearch.dbxml")/locations/location[id = 41400]) return $location'
    This has the following query plan:
    dbxml> queryPlan 'for $location in (collection("CitySearch.dbxml")/locations/location[id = 41400]) return $location'
    <XQuery>
    <Return>
    <ForTuple uri="" name="location">
    <ContextTuple/>
    <QueryPlanToAST>
    <ParentOfChildJoinQP>
    <ValueFilterQP comparison="eq" general="true">
    <PresenceQP container="CitySearch.dbxml" index="unique-edge-element-equality-string" operation="prefix" child="id"/>
    <NumericLiteral value="4.140E4" typeuri="http://www.w3.org/2001/XMLSchema" typename="integer"/>
    </ValueFilterQP>
    <ChildJoinQP>
    <NodePredicateFilterQP uri="" name="#tmp0">
    <PresenceQP container="CitySearch.dbxml" index="node-element-presence-none" operation="eq" child="locations"/>
    <LevelFilterQP>
    <VariableQP name="#tmp0"/>
    </LevelFilterQP>
    </NodePredicateFilterQP>
    <PresenceQP container="CitySearch.dbxml" index="edge-element-presence-none" operation="eq" parent="locations" child="location"/>
    </ChildJoinQP>
    </ParentOfChildJoinQP>
    </QueryPlanToAST>
    </ForTuple>
    <QueryPlanToAST>
    <VariableQP name="location"/>
    </QueryPlanToAST>
    </Return>
    </XQuery>
    When I run the query, it is very clearly performing a table scan: the query takes about 10 minutes to run (argh!!) and the disk is read for the length of the query. Why is this doing a table scan, and what can I do to make this a simple, direct node access?
    Andrew

    Hi George,
    I took a subset of my data set and left auto indexing on to see what the query plan would be, then I duplicated the index being used in my larger data set with auto indexing off. The problem with leaving auto indexing on for the entire data set was the apparent size of the file: with just the single index, the file was about 17G; with auto indexing on, it was climbing over 30G (with 40 indices; I didn't include all of the tags in my original post) when I killed it. Further data loads were taking forever; it is much faster with auto indexing off, adding the single index afterwards.

  • Why do I get ORA-03113 when doing a spatial query against a UNION ALL view?

    Hi, I created the following view
    CREATE OR REPLACE FORCE VIEW cola_markets_v
    AS
      (SELECT mkt_id, NAME, shape shape_a, NULL shape_b, NULL shape_c,
              NULL shape_d
         FROM COLA_MARKETS
        WHERE NAME = 'cola_a')
       UNION ALL
      (SELECT mkt_id, NAME, NULL shape_a, shape shape_b, NULL shape_c,
              NULL shape_d
         FROM COLA_MARKETS
        WHERE NAME = 'cola_b')
       UNION ALL
      (SELECT mkt_id, NAME, NULL shape_a, NULL shape_b, shape shape_c,
              NULL shape_d
         FROM COLA_MARKETS
        WHERE NAME = 'cola_c')
       UNION ALL
      (SELECT mkt_id, NAME, NULL shape_a, NULL shape_b, NULL shape_c,
              shape shape_d
         FROM COLA_MARKETS
        WHERE NAME = 'cola_d');
    I added the necessary entries in USER_SDO_GEOM_METADATA and created a spatial index on COLA_MARKETS (SHAPE). However, when I do a spatial query against this view, I get ORA-03113. A spatial query against the base table works fine. Any ideas why this happens? (This is Oracle 10.2.0.3.0)
    Thanks in advance, Markus
    PS: This is my spatial query
    SELECT *
      FROM cola_markets_v t
    WHERE sdo_filter (t.shape_a,
                             SDO_GEOMETRY (2003,
                                           NULL,
                                           NULL,
                                           sdo_elem_info_array (1, 1003, 3),
                                            sdo_ordinate_array (1, 1, 2, 2)),
                              'querytype=window'
                            ) = 'TRUE';

    Thank you for your reply. I have tried it with 11.1.0.6.0 today and it works. This might be an issue with 10.2.0.3.0.

  • Why does a query not go by the index but does a FULL TABLE SCAN?

    I have two tables:
    table 1 has 1400 rows and more than 30 columns, one of them named 'site_code'; an index was created on this column;
    table 2 has more than 150 rows and 20 columns; its primary key is also 'site_code'.
    The two tables were analysed with dbms_stats.gather_table_stats()...
    when I run the explain for the 2 SQLs below:
    select * from table1 where site_code='XXXXXXXXXX';
    select * from table2 where site_code='XXXXXXXXXX';
    the Oracle explain plan certainly shows an 'Index scan' for both,
    but the problem arises when I try to explain the SQL
    select *
    from table1, table2
    where table1.site_code = table2.site_code
    the explain plan reports:
    select .....
    FULL Table1 Scan
    FULL Table2 Scan
    why......

    Nikolay Ivankin  wrote:
    BluShadow wrote:
    Nikolay Ivankin  wrote:
    Try to use a hint, but I really doubt it will be faster.
    No, using hints should only be advised when investigating an issue, not recommended for production code, as it assumes that, as a developer, you know better than the Oracle Optimizer how the data is distributed in the data files, how the data is going to grow and change over time, and how best to access that data for performance etc.
    Yes, you are absolutely right. But aren't we performing such an investigation? ;-)
    The way you wrote it made it sound that a hint would be the solution, not just something for investigation.
    select * from .. always performs a full scan, so limit your query.
    No, select * will not always perform a full scan, that's just selecting all the columns. A select without a where clause, or with a where clause that has low selectivity, will result in full table scans.
    But this is what I meant.
    But not what you said.
