Definition of indexes

Hi,
On 10g R2, where can I find the definition of indexes?
I have an index, but when I try to find it in OEM (under the Administration tab, Indexes) it is not found, and in TOAD the Script tab shows nothing either.
Thank you.

Use the dbms_metadata.get_ddl package to find it. Not sure why you are looking for it in EM.
Are you using the correct schema name when searching for the index? That is the most common mistake people make when looking for an object.
HTH
Aman....
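A minimal sketch of the dbms_metadata approach from SQL*Plus, assuming schema SCOTT and index EMP_IX1 (both hypothetical):

SET LONG 100000 PAGESIZE 0          -- SQL*Plus settings so the returned CLOB is not truncated
SELECT DBMS_METADATA.GET_DDL('INDEX', 'EMP_IX1', 'SCOTT') FROM dual;

-- Or list what the data dictionary knows about the index (names are stored in upper case):
SELECT owner, index_name, table_name, index_type, status
FROM   dba_indexes
WHERE  index_name = 'EMP_IX1';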

Similar Messages

  • How to access the text definition of sql indexes

    I am able to access the text definition of stored procedures, views, triggers and functions from sys.sql_modules. I want to get the text definition of indexes.
    Can somebody help me get the text definition of indexes in SQL Server?
    Thanks,
    Puneet

    No, actually I am trying to compare the indexes of two databases programmatically.
    I am able to compare the stored procedures, views, triggers and functions by getting the text definition directly from 'sys.sql_modules'. Along similar lines, is there any way to get the text definition of indexes?
    I am not sure if there is any table which stores the index definition, but there is a view "sys.indexes" which stores index details.
    You may use the query below to get index information (it lists index details, not the definition):
    SELECT s.NAME 'Schema'
    ,t.NAME 'Table'
    ,i.NAME 'Index'
    ,c.NAME 'Column'
    FROM sys.tables t
    INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
    INNER JOIN sys.indexes i ON i.object_id = t.object_id
    INNER JOIN sys.index_columns ic ON ic.object_id = t.object_id
    AND ic.index_id = i.index_id -- also join on index_id, or columns of the table's other indexes get mixed in
    INNER JOIN sys.columns c ON c.object_id = t.object_id
    AND ic.column_id = c.column_id
    WHERE i.index_id > 0
    AND i.type IN (
    1
    ,2
    ) -- clustered & nonclustered only
    AND i.is_primary_key = 0 -- do not include PK indexes
    AND i.is_unique_constraint = 0 -- do not include UQ
    AND i.is_disabled = 0
    AND i.is_hypothetical = 0
    AND ic.key_ordinal > 0
    ORDER BY ic.key_ordinal
    - Also there is one more SP which you can create and then execute by specifying the two DB names. It will list out table column, index and constraint differences in one go:
    EXEC SP_Comparedb db1, db2
    TechNet Gallery: Compare two databases for objects differences
    Cheers,
    Vaibhav Chaudhari
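    If the goal is a text definition that two databases can be compared on, one option is to generate a CREATE INDEX statement from the catalog views. A hedged sketch, assuming SQL Server 2017+ for STRING_AGG (key columns only, ignoring INCLUDE columns and filtered indexes):
    -- Build one CREATE INDEX statement per clustered/nonclustered index
    SELECT s.name AS SchemaName, t.name AS TableName, i.name AS IndexName,
           'CREATE ' + CASE WHEN i.is_unique = 1 THEN 'UNIQUE ' ELSE '' END
         + CASE WHEN i.type = 1 THEN 'CLUSTERED ' ELSE 'NONCLUSTERED ' END
         + 'INDEX ' + QUOTENAME(i.name)
         + ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' ('
         + STRING_AGG(QUOTENAME(c.name)
                      + CASE WHEN ic.is_descending_key = 1 THEN ' DESC' ELSE '' END, ', ')
             WITHIN GROUP (ORDER BY ic.key_ordinal)
         + ')' AS IndexDefinition
    FROM sys.indexes i
    INNER JOIN sys.tables t ON t.object_id = i.object_id
    INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
    INNER JOIN sys.index_columns ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
    INNER JOIN sys.columns c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
    WHERE i.type IN (1, 2)              -- clustered & nonclustered only
      AND i.is_primary_key = 0
      AND i.is_unique_constraint = 0
      AND ic.key_ordinal > 0            -- key columns only
    GROUP BY s.name, t.name, i.name, i.is_unique, i.type;
    Running this against both databases and diffing the IndexDefinition column gives a rough index comparison.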

  • Rebuild Index VS Drop and Rebuild?

    Hey all,
    I am currently redesigning a weekly process (weekly because we pre-determined the rate of index fragmentation) for specific indexes that get massive updates. The old process has proved able to fix and maintain report performance.
    In this process we rebuild specific indexes using the below command:
    Alter index index_name rebuild online;
    This command takes around 10 min for selected indexes.
    Testing the below took 2 min for 6 or 7 indexes.
    Drop Index Index_Name;
    Create Index Index_Name on Table_name (Col1, col, ..);
    I know that indexes might not be used, and the application performance would be degraded with stale or non-existent stats. But our production and all our test DBs have procedures that daily gather stats on them.
    I tested the below script to make sure that execution plan does not change:
    SELECT ProductID, ProductName, MfrID FROM PRODUCT WHERE MFRID = 'Mfr1';
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 37 | 3737 | 13 (0)|
    | 1 | TABLE ACCESS BY INDEX ROWID| PRODUCT | 37 | 3737 | 13 (0)|
    | 2 | INDEX RANGE SCAN | PRODUCT_X1 | 37 | | 3 (0)|
    dropping PRODUCT_X1 and recreating it only changed the cost to 12.
    Gathering the stats again took the cost to 14.
    No performance issues were faced and the index was still used.
    My question is: is there any Oracle recommendation that requires rebuilding the index instead of dropping and recreating it?
    Is there any side effect to my approach that I did not consider?
    Thank you

    Charlov wrote:
    I am currently redesigning a weekly process (weekly coz we pre determined the rate of index fragmentation)
    Nice. Not only have you defined and located index fragmentation, but you have also measured the rate at which it occurs.
    Could you please share your definition of index fragmentation, how you detect it, and how you measure the rate of change of this fragmentation.
    I am curious about all this since it can be repeatedly shown that Oracle btree indexes are never fragmented.
    http://richardfoote.files.wordpress.com/2007/12/index-internals-rebuilding-the-truth-ii.pdf
    The old process has proved to be able to fix and maintain reports performance.
    Great, so you have traces and run-time statistics from before and after the rebuild that highlight this mysterious fragmentation, show how the fragmentation caused the reports to be slow, and detail what effects the rebuild had that made the reports perform better.
    Please share them, as these would be an interesting discussion point, since no one has previously been able to show how an index rebuild made a report run faster, or even show the fragmentation that made it slow in the first place.
    I mean, it would be a pity if the report was just slow because of an inefficient plan, and compressing an index or two that probably shouldn't be used in the first place appears to temporarily speed it up. Could you imagine rebuilding indexes every week because some developer put the wrong hint in a query? That would be pretty funny.
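    For reference, the usual way to quantify the "wasted space" people mean by index fragmentation is INDEX_STATS; a minimal sketch (PRODUCT_X1 taken from the plan above, and note that VALIDATE STRUCTURE locks the table while it runs):
    ANALYZE INDEX product_x1 VALIDATE STRUCTURE;
    SELECT name, height, lf_rows, del_lf_rows,
           ROUND(del_lf_rows * 100 / NULLIF(lf_rows, 0), 1) AS pct_deleted,
           pct_used
    FROM   index_stats;   -- populated only for the index just analyzed, in the current session
    If pct_deleted stays low week after week, the rebuild (or drop/create) is probably not buying anything.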

  • Why do we create indexes for DSOs and Cubes.What is the use of it?

    Hi All,
    Can you please tell me why indexes are created for DSOs and Cubes?
    What is the use of creating indexes?
    Thanks,
    Sravani

    HI ,
    An index is a copy of a database table that is reduced to certain fields. This copy is always in sorted form. Sorting provides faster access to the data records of the table, for example, when using a binary search.
    A table has a primary index and can have secondary indexes. The primary index consists of the key fields of the table and is automatically created in the database along with the table. You can also create further indexes on a table in the Java Dictionary; these are called secondary indexes. This is necessary if the table is frequently accessed in a way that does not take advantage of the primary index.
    Different indexes for the same table are distinguished from one another by a separate index name, which must be unique. Whether or not an index is used to access a particular table is decided by the database system optimizer. This means that an index might improve performance only with certain database systems; you specify in the index definition whether the index should be used on certain database systems. Indexes for a table are created when the table is created (provided that the table is not excluded for the database system in the index definition).
    If the index fields represent the primary keys of the table, that is, if they already uniquely identify each record of the table, the index is referred to as a unique index.
    Indexes are created on DSOs and cubes for performance purposes, and reports created on them will also be more efficient.
    Regards,
    shikha
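    A minimal, generic sketch of what a secondary index looks like at the database level (table and column names are hypothetical):
    -- a sorted copy of (customer_id, created_on) that the optimizer can use
    CREATE INDEX sales_doc_i1 ON sales_doc (customer_id, created_on);
    -- a query filtering on the leading index column can then avoid a full table scan
    SELECT doc_no, amount
    FROM   sales_doc
    WHERE  customer_id = '4711';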

  • Query not considering function based index in oracle 11g

    I have a query which used a function-based index when run on Oracle 9i, but when I run the same query
    without any changes on 11g, it does not use the index. Below is the query:
    SELECT distinct patient_role.domain_key, patient_role.patient_role_key,
    patient_role.emergency_contact_name,
    patient_role.emergency_contact_phone, patient_role.emergency_contact_note,
    patient_role.emergency_contact_relation_id,
    patient_role.financial_class_desc_id, no_known_allergies, patient_role.CREATED_BY,
    patient_role.CREATED_TIMESTAMP,
    patient_role.CREATED_TIMESTAMP_TZ, patient_role.UPDATED_BY, patient_role.UPDATED_TIMESTAMP,
    patient_role.UPDATED_TIMESTAMP_TZ,
    patient_role.discontinued_date
    FROM encounter, patient_role
    WHERE patient_role.patient_role_key = encounter.patient_role_key
    AND UPPER(TRIM(leading :SYS_B_0 from encounter.account_number)) = UPPER(TRIM(leading :SYS_B_1 from
    :SYS_B_2))
    AND patient_role.discontinued_date IS null
    AND encounter.discontinued_date IS null ;
    Index definition:
    CREATE INDEX "user1"."IX_TRIM_ACCOUNT_NUMBER" ON "user1."ENCOUNTER" (UPPER(TRIM(LEADING
    '0' FROM "ACCOUNT_NUMBER")), "PATIENT_ROLE_KEY", "DOMAIN_KEY", "DISCONTINUED_DATE")
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
    BUFFER_POOL DEFAULT)
    TABLESPACE "user1"
    Database : Oracle 11g (11.2.0.3)
    O/S : Linux 64 bit (the query does not consider index even on windows os)
    Any suggestions?
    -Onkar

    Onkar,
    I don't appreciate you posting this question in several forums at the same time.
    If I had known you also posted this on Ask Tom, I wouldn't even have bothered.
    As to your 'issue':
    First of all: somehow cursor_sharing MUST have been set to FORCE. Oracle is a predictable system, not a fruit machine.
    Your statement that the '0' is replaced by a bind variable anyway is simply false. If you really believe it is not false, submit an SR.
    But your real issue is not Oracle: it is your 'application', which is a mess anyway. Allowing alphanumeric account numbers is a really bad idea.
    Right now you are already putting workaround on top of workaround on top of workaround.
    The issue is the application: it is terminal, and you either need to kill it or replace it.
    Sybrand Bakker
    Senior Oracle DBA
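    A minimal sketch of what is being pointed at: with CURSOR_SHARING=FORCE the literal inside TRIM(LEADING '0' ...) becomes a bind (:SYS_B_n), so the query expression no longer matches the function-based index expression. Checking and, where the application allows it, reverting the parameter is the first step (session level shown; a system-level change needs more care):
    SHOW PARAMETER cursor_sharing          -- SQL*Plus; expect FORCE if the :SYS_B_n binds appear
    ALTER SESSION SET cursor_sharing = EXACT;
    -- re-run the query with the literal '0' and check the plan again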

  • RH11: Glossary and Index do not display in generated output

    I am having such a tough time with RoboHelp. My current issue is the glossary and index not appearing in the generated output. They used to. However, yesterday I thought I should go in and clean out all the junk in my project, i.e. I had about 5 glossaries and 5 indexes because I was testing things out. I made sure I didn't delete the populated glossary and index, but by the end of the day, they were both gone. So, I located these again from a backup and imported them back into my current project. Everything is there...all my terms and definitions, the index and the links...blah blah. NOTE: Everything is there, but I was required to block it out due to privacy issues.
    However, when I generate, the glossary and index still do not appear in the output (screenshot attached). Why won't this populate?
    Thanks,
    Pam

    Rick,
    I'm using IE 10, Version 10. And everything was working fine until I started deleting the extra glossaries and indexes I had in my project. I have been generating locally, then when I get to a point where I'm satisfied with my changes, I generate to the server. I wasn't having a problem with this until I deleted those files. Although I have imported the glossary and index back into the project, still nothing. Now, even if I pull up an older version of this same project, the glossary and index are now gone from that too. I'm assuming that everything must be linked, but I can't figure out what I'm missing. Generating to Chrome has the same issues. I had one of my developers look at it yesterday, and he thinks there is some file missing somewhere that RH11 needs to generate the glossary and index. What file would I be missing? The .glo and the HHK files are in the project. I really need to upload this for the client, so any help you can give me is much appreciated.
    I read on one of the forums that I should have this ticked in the Optimization Settings for Responsive HTML:  "Limit the scope of project styles only to topic contents". I did that and it caused all kinds of other issues.
    On Monday I updated to RH 11.0.1 - would there be an issue with that? Is it a dll issue?
    Thanks Rick,
    Pam

  • INDEX CREATION

    Hi There
    I have the following query to be optimized:
      select a~vbeln a~auart a~vkorg a~vtweg a~spart
             a~angdt a~bnddt a~guebg a~gueen a~vkgrp
             a~vkbur a~gsber a~kunnr a~erdat a~erzet
             a~waerk a~vbtyp a~autlf a~vsbed a~kvgr1
             a~kvgr2 a~kvgr3 a~kvgr4 a~kvgr5 a~abrvw
             a~abdis
        into table t_vbak
        from vbak as a inner join vbuk as b on b~vbeln = a~vbeln
        where
              ( ( a~erdat > pre_dat ) and
              ( a~erdat <= w_date ) ) and
              a~vbtyp in s_doccat and
              a~vbeln in s_ordno and
              a~vkorg in s_vkorg and
              a~vtweg in s_vtweg and
              a~spart in s_spart and
              ( ( a~lifsk in s_lifsk ) or
              ( a~lifsk = '  ' ) ) and
              b~abstk ne 'C'.
      select w~mandt
             w~vbeln  w~posnr  w~meins  w~matnr  w~werks  w~netwr
             w~kwmeng w~vrkme  w~matwa  w~charg  w~pstyv
             w~posar  w~prodh  w~grkor  w~antlf  w~kztlf  w~lprio
             w~vstel  w~route  w~umvkz  w~umvkn  w~abgru  w~untto
             w~awahr  w~erdat  w~erzet  w~fixmg  w~prctr  w~vpmat
             w~vpwrk  w~mvgr1  w~mvgr2  w~mvgr3  w~mvgr4  w~mvgr5
             w~bedae  w~cuobj  w~mtvfp
             x~etenr  x~wmeng  x~bmeng  x~ettyp  x~wepos  x~abart
             x~edatu
             x~tddat  x~mbdat  x~lddat  x~wadat  x~abruf  x~etart
             x~ezeit
        into table t_vbap
        from vbap as w
             inner join vbep as x on x~vbeln = w~vbeln and
                                     x~posnr = w~posnr and
                                     x~mandt = w~mandt
        " BEGIN OF Change for EUCHG352069
        for all entries in t_vbak
        " End of changes for EUCHG352069
        where
        " BEGIN OF Change for EUCHG352069
              w~vbeln in s_ordno and
              w~vbeln = t_vbak-vbeln and
              ( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
              ( ( ( erdat > pre_dat and erdat < p_syndt ) or
              ( erdat = p_syndt and erzet <= p_syntm ) ) ) and
        " End of changes for EUCHG352069
              w~matnr in s_matnr and
              w~pstyv in s_itmcat and
              w~lfrel in s_lfrel and
              w~abgru = '  ' and
              w~kwmeng > 0 and
              w~mtvfp in w_mtvfp and
              x~ettyp in w_ettyp and
              x~bdart in s_req_tp and
              x~plart in s_pln_tp and
              x~etart in s_etart and
              ( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
    Is it advisable to create an INDEX to improve this query's performance?
    If yes, on which field of which table can I create an index?
    Please suggest!

    Hi,
    An index is a copy of a database table that is reduced to certain fields. This copy is always in sorted form. Sorting provides faster access to the data records of the table, for example, when using a binary search.
    A table has a primary index and can have secondary indexes. The primary index consists of the key fields of the table and is automatically created in the database along with the table. Secondary indexes are necessary if the table is frequently accessed in a way that does not take advantage of the primary index.
    Different indexes for the same table are distinguished from one another by a separate index name, which must be unique. Whether or not an index is used to access a particular table is decided by the database system optimizer. This means that an index might improve performance only with certain database systems; you specify in the index definition whether the index should be used on certain database systems. Indexes for a table are created when the table is created (provided that the table is not excluded for the database system in the index definition).
    If the index fields represent the primary keys of the table, that is, if they already uniquely identify each record of the table, the index is referred to as a unique index.
    Procedure:
    1. Choose the Indexes tab.
    2. To create an index, choose New.
    3. Enter a name for the index. Index names, like table names, have a prefix followed by an underscore.
    If the name of an index was registered on the name server, it cannot be deleted.
    4. To select table fields, choose New.
    5. Specify whether the index is a unique index.
    6. Specify whether the index is to be used on all database systems or only on certain ones; choose the appropriate checkboxes.
    7. Choose File -> Save All Metadata.
    Primary index: the index which is automatically created for the PRIMARY KEY field(s) of the table.
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
    Secondary index: created as and when required, based on other field(s) of the table on which search criteria are used in SQL statements.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column's selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries; if this is not the case, it is not worth creating the index. You should also avoid creating indexes on fields that are not always filled, where the value is initial for most entries in the table.
    Regards,
    Shiva.
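    A hedged sketch only: on an SAP system a secondary index is defined in SE11 rather than with direct DDL, but the first SELECT above would roughly map to a database index like the following, leading with ERDAT because the creation-date range is the main restriction (the index name and column choice are assumptions to be validated against your own selectivity):
    CREATE INDEX "VBAK~Z01" ON vbak (mandt, erdat, vbtyp);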

  • When rebuild table and index

    Hi,
    how do we know whether we should rebuild a table and its indexes?
    Many thanks.

    Safi, I would be less sure than that. There have been many very long threads in the Database General forum about that subject. If you observed that once, it is not very often the case.
    @OP,
    Assuming the question was not about performance, but more about the AppDesigner side (because of "rebuild table"), what I can say is:
    1. PeopleSoft stores all the storage definitions for all the objects in its own metamodel tables.
    2. When you change a field definition (column definition) or an index definition, the new definition is stored in the PeopleSoft metamodel tables.
    3. On the table/index build, AppDesigner compares the PeopleSoft metamodel definition for that particular object with the definition of that same object in Oracle (or whatever other database you are using).
    4. If the definitions compared in step 3 are different, then AppDesigner fills a file with all the (re)build statements for the objects you are rebuilding (and that are different).
    5. Finally, you just have to run the generated script.
    Of course, you can bypass the comparison by forcing AppDesigner to (re)build the record/index in all cases; however, it is not always (I do not say never) recommended.
    Now, when do you have to rebuild a table (or record, in AppDesigner's words) or indexes? Basically, from the explanation above, you can understand this is when you change a field/table/index definition through AppDesigner.
    Nicolas.
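    A minimal sketch of checking the database side of that comparison with the Oracle data dictionary (PS_JOB is just an example record name):
    SELECT index_name, column_position, column_name
    FROM   user_ind_columns
    WHERE  table_name = 'PS_JOB'
    ORDER  BY index_name, column_position;
    If this differs from what AppDesigner's generated build script wants to create, that object is a candidate for a rebuild.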

  • Causes of invalid indexes

    Oracle 11.2.0.1
    Windows
    Can anyone please give me a link or a complete list of reasons which can cause an index to become invalid.
    I just want to know all the reasons why a valid index becomes invalid.
    Thanks.

    1.alter index <index_name> unusable;
    2.In the common definition, an index becomes unusable when the database recognizes the rowids in the index are no longer pointing to the rows in the table. This can occur when the table is moved, compressed, etc. while the index is inaccessible.
    3.SQL*Loader fails to update the index because the index runs out of space.
    4.The instance fails during the building of the index.
    5.A unique key has duplicate values.
    6.An index isn't in the same order as that specified by a sorted indexes clause.
    7.Truncating a table makes an unusable index usable again.
    8.The data is not in the order specified by the SORTED INDEXES clause.
    9.There are duplicate keys in a unique index.
    10.Data savepoints are being used, and the load fails or is terminated by a keyboard interrupt after a data savepoint occurred.
    11.Any table-level partition operation like split, move, import, exchange, merge, truncate or drop (except add partition); all non-partitioned and globally partitioned indexes become unusable, while for locally partitioned indexes only the affected partitions become unusable.
    12.Performing an online redefinition of a table, because this operation shifts the ROWID values, and that causes the index to become unusable.
    Source: the documentation (http://docs.oracle.com/cd/B19306_01/server.102/b14215/ldr_modes.htm#sthref1486), this forum, and page 127 of Expert Indexing in Oracle Database 11g by Darl Kuhn, Sam R. Alapati and Bill Padfield.
    This is not a complete list, but I guess these are the main causes of unusable indexes.
    Regards
    Girish Sharma
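    A minimal sketch for spotting and fixing unusable indexes once one of the causes above has hit (index and partition names are hypothetical):
    SELECT index_name, status FROM user_indexes WHERE status = 'UNUSABLE';
    SELECT index_name, partition_name, status FROM user_ind_partitions WHERE status = 'UNUSABLE';
    ALTER INDEX my_index REBUILD;
    ALTER INDEX my_part_index REBUILD PARTITION p2012_01;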

  • Adding Constraint add extra Index....Why?

    Hi,
    I am creating an index with a field that is function based:
    CREATE INDEX vicc_veh_info##code_year ON vicc_veh_info(UPPER(car_code), model_year DESC, PROGRESS_RECID);
    Now when I create the following constraint:
    ALTER TABLE BILLITEM ADD CONSTRAINT PK_BILLITEM PRIMARY KEY (BUSINESSOBJECTNUM,BUSOBJSEQUENCE,OBJECTNUM,ITEM_NUMBER,ITEMPOST_TYPE);
    It actually creates the constraint, and it also creates another index with the same name as the constraint. So when I look at the indexes I see one called PK_BILLITEM, and when I look at the table I also see that constraint there as well.
    Why is it doing this? Why is this constraint not just created on its own?
    Is this because of the function-based column in the index?

    Which index did you specify? The syntax, as per the examples, tells you that you can say
    USING INDEX {provide the index definition}
    USING INDEX {provide the name of an existing index}
    Copy/paste from the link:
    CREATE TABLE promotions_var3
        ( promo_id         NUMBER(6)
        , promo_name       VARCHAR2(20)
        , promo_category   VARCHAR2(15)
        , promo_cost       NUMBER(10,2)
        , promo_begin_date DATE
        , promo_end_date   DATE
        , CONSTRAINT promo_id_u UNIQUE (promo_id, promo_cost)
             USING INDEX (CREATE UNIQUE INDEX promo_ix1
                ON promotions_var3 (promo_id, promo_cost))
        , CONSTRAINT promo_id_u2 UNIQUE (promo_cost, promo_id)
             USING INDEX promo_ix1);
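    Tying the USING INDEX syntax back to the original PK_BILLITEM case, a hedged sketch that reuses an existing index (ix_billitem_keys is hypothetical) instead of letting Oracle create a new one:
    ALTER TABLE billitem ADD CONSTRAINT pk_billitem
      PRIMARY KEY (businessobjectnum, busobjsequence, objectnum, item_number, itempost_type)
      USING INDEX ix_billitem_keys;   -- the existing index must cover the key columns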

  • Linking Tables to Oracle Views

    I am not able to see the PKs in MS Access after creating a linked table to a view within Oracle. The views were created using SELECT * FROM the base table, which is a materialized view. There is no WHERE clause in the view.
    Also, why do I get an error when creating a linked table to a materialized view in Oracle? I am getting the following error when creating the linked table:
    "Invalid field definition 'M_ROW$$' in definition of index or relationship."
    Thanks,
    Todd Schaberg
    [email protected]

    This is a known problem. We're trying to work with the materialized views folks to get this resolved.
    As a workaround, you can create a view of the materialized view and link to that.
    Justin Cave
    ODBC Development
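    A minimal sketch of that workaround, with hypothetical names: wrap the materialized view in a plain view that lists its columns explicitly, so the linked table never sees M_ROW$$:
    CREATE VIEW product_mv_v AS
    SELECT product_id, product_name, mfr_id   -- list only the columns you need; omit M_ROW$$
    FROM   product_mv;
    Then link MS Access to PRODUCT_MV_V instead of the materialized view itself.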

  • Why Isn't xmlindex being used in slow query on binary xml table eval?

    I am running a slow simple query on Oracle database server 11.2.0.1 that is not using an xmlindex. Instead, a full table scan against the eval binary xml table occurs. Here is the query:
    select -- /*+ NO_XMLINDEX_REWRITE no_parallel(eval)*/
          defid from eval,
          XMLTable(XMLNAMESPACES(DEFAULT 'http://www.cigna.com/acme/domains/eval/2010/03',
          'http://www.cigna.com/acme/domains/derived/fact/2010/03' AS "ns7"),
          '$doc/eval/derivedFacts/ns7:derivedFact' passing eval.object_value as "doc" columns defid varchar2(100) path 'ns7:defId'
           ) eval_xml
    where eval_xml.defid in ('59543','55208');
    The predicate is not selective at all - the returned row count is the same as the table row count (325,550 xml documents in the eval table). When different values are used, bringing the row count down to ~33%, the xmlindex still isn't used - as would be expected in a purely relational, non-XML environment.
    My question is: why wouldn't the xmlindex be used in a fast full scan manner, versus a full table scan traversing the XML in each eval table document record?
    Would a FFS hint be applicable to an xmlindex domain-type index?
    Here is the xmlindex definition:
    CREATE INDEX "EVAL_XMLINDEX_IX" ON "EVAL" (OBJECT_VALUE)
      INDEXTYPE IS "XDB"."XMLINDEX" PARAMETERS
      ('XMLTable eval_idx_tab XMLNamespaces(DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03'',
      ''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7"),''/eval''
         COLUMNS defId VARCHAR2(100) path ''/derivedFacts/ns7:derivedFact/ns7:defId''');
    Here is the eval table definition:
    CREATE
      TABLE "N98991"."EVAL" OF XMLTYPE
        CONSTRAINT "EVAL_ID_PK" PRIMARY KEY ("EVAL_ID") USING INDEX PCTFREE 10
        INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT
        1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
        FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
        DEFAULT) TABLESPACE "ACME_DATA" ENABLE
      XMLTYPE STORE AS SECUREFILE BINARY XML
        TABLESPACE "ACME_DATA" ENABLE STORAGE IN ROW CHUNK 8192 CACHE NOCOMPRESS
        KEEP_DUPLICATES STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
        2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT)
      ALLOW NONSCHEMA ALLOW ANYSCHEMA VIRTUAL COLUMNS
        "EVAL_DT" AS (SYS_EXTRACT_UTC(CAST(TO_TIMESTAMP_TZ(SYS_XQ_UPKXML2SQL(
        SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03"; (::)
    /eval/@eval_dt'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2),'SYYYY-MM-DD"T"HH24:MI:SS.FFTZH:TZM') AS TIMESTAMP
    WITH
      TIME ZONE))),
        "EVAL_CAT" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@category'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50))),
        "ACME_MBR_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@acmeMemberId'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50))),
        "EVAL_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@evalId'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50)))
      PCTFREE 0 PCTUSED 80 INITRANS 4 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
        INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
        FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT
      TABLESPACE "ACME_DATA" ; Sample cleansed xml snippet:
    <?xml version = '1.0' encoding = 'UTF-8' standalone = 'yes'?><eval createdById="xxxx" hhhhMemberId="37e6f05a-88dc-41e9-a8df-2a2ac6d822c9" category="eeeeeeee" eval_dt="2012-02-11T23:47:02.645Z" evalId="12e007f5-b7c3-4da2-b8b8-4bf066675d1a" xmlns="http://www.xxxxx.com/vvvv/domains/eval/2010/03" xmlns:ns2="http://www.cigna.com/nnnn/domains/derived/fact/2010/03" xmlns:ns3="http://www.xxxxx.com/vvvv/domains/common/2010/03">
       <derivedFacts>
          <ns2:derivedFact>
             <ns2:defId>12345</ns2:defId>
             <ns2:defUrn>urn:mmmmrunner:Medical:Definition:DerivedFact:52657:1</ns2:defUrn>
             <ns2:factSource>tttt Member</ns2:factSource>
             <ns2:origInferred_dt>2012-02-11T23:47:02.645Z</ns2:origInferred_dt>
             <ns2:factValue>
                <ns2:type>boolean</ns2:type>
                <ns2:value>true</ns2:value>
             </ns2:factValue>
          </ns2:derivedFact>
          <ns2:derivedFact>
             <ns2:defId>52600</ns2:defId>
             <ns2:defUrn>urn:ddddrunner:Medical:Definition:DerivedFact:52600:2</ns2:defUrn>
             <ns2:factSource>cccc Member</ns2:factSource>
             <ns2:origInferred_dt>2012-02-11T23:47:02.645Z</ns2:origInferred_dt>
             <ns2:factValue>
                <ns2:type>string</ns2:type>
                <ns2:value>null</ns2:value>
             </ns2:factValue>
          </ns2:derivedFact>
          <ns2:derivedFact>
             <ns2:defId>59543</ns2:defId>
             <ns2:defUrn>urn:ddddunner:Medical:Definition:DerivedFact:52599:1</ns2:defUrn>
             <ns2:factSource>dddd Member</ns2:factSource>
             <ns2:origInferred_dt>2012-02-11T23:47:02.645Z</ns2:origInferred_dt>
             <ns2:factValue>
                <ns2:type>string</ns2:type>
                <ns2:value>INT</ns2:value>
             </ns2:factValue>
          </ns2:derivedFact>
    ... (the repeating <ns2:derivedFact> element continues under <derivedFacts>).
    The Oracle XML DB Developer's Guide 11g Release 2 isn't helping much...
    Any assistance is much appreciated.
    Regards,
    Rick Blanchard

    odie 63, et. al.;
    Attached are the reworked select query, xmlindex, and 2ndary indexes. Note: though namespaces are used, we're not registering any schema definitions.
    SELECT /*+ NO_USE_HASH(eval) */ --/*+ NO_QUERY_REWRITE no_parallel(eval)*/
    eval_xml.eval_catt, df.defid FROM eval,
    --df.defid FROM eval,
    XMLTable(XMLNamespaces( DEFAULT 'http://www.cigna.com/acme/domains/eval/2010/03',
                            'http://www.cigna.com/acme/domains/derived/fact/2010/03' AS "ns7" ),
            '/eval' passing eval.object_value
             COLUMNS
               eval_catt VARCHAR2(50) path '@category',
               derivedFact XMLTYPE path '/derivedFacts/ns7:derivedFact')eval_xml,
    XMLTable(XMLNamespaces('http://www.cigna.com/acme/domains/derived/fact/2010/03' AS "ns7",
                              DEFAULT 'http://www.cigna.com/acme/domains/eval/2010/03'),
            '/ns7:derivedFact' passing eval_xml.derivedFact
             COLUMNS
               defid VARCHAR2(100) path 'ns7:defId') df
    WHERE df.defid IN ('52657','52599') AND eval_xml.eval_catt LIKE 'external';
    --where df.defid = '52657';
    create index defid_2ndary_ix on eval_idx_tab_II (defID);
         eval_catt VARCHAR2(50) path ''@CATEGORY''');
    create index eval_catt_2ndary_ix on eval_idx_tab_I (eval_catt);
    The xmlindex is getting picked up, but there are a couple of problems:
    1. In the development environment, no xml source records for defid '52657' or '52599' are being displayed - just an empty output set occurs, in spite of these values being present and stored in the source xml.
    This really has me stumped, as I can query the eval table and see that the xml defid values '52657' and '52599' exist. Something appears off with the query - which didn't return records even before the corresponding xml index was created.
    2. The query still performs slowly, in spite of using the xmlindex. The execution plan shows a full table scan of eval occurring whether a HASH JOIN or a MERGE JOIN is used (the MERGE JOIN replaces the HASH JOIN when the NO_USE_HASH(eval) hint is given).
    3. Single-column 2ndary indexes created respectively for eval_catt and defid are not used in the execution plan - which may be expected upon further consideration.
    In the process of running stats at this moment, to see if performance improves....
    At this point, I'm really after why item '1.' is occurring.
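    One way to work on both items is to ask the optimizer directly whether the structured XMLIndex path tables (eval_idx_tab_I / eval_idx_tab_II from the index definition) are being used; a minimal sketch reusing the namespaces and a defid value from the post:
    EXPLAIN PLAN FOR
    SELECT df.defid
    FROM   eval,
           XMLTable(XMLNamespaces(DEFAULT 'http://www.cigna.com/acme/domains/eval/2010/03',
                    'http://www.cigna.com/acme/domains/derived/fact/2010/03' AS "ns7"),
                    '/eval/derivedFacts/ns7:derivedFact' PASSING eval.object_value
                    COLUMNS defid VARCHAR2(100) PATH 'ns7:defId') df
    WHERE  df.defid = '52657';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    If the plan still shows a full scan of EVAL rather than the path tables, the rewrite is not happening and the query shape (or the index definition) is the place to look.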

  • Server Side Includes for Apache and parsing text files

    I have an old website I used to run zipped up, and I decided to set it up on my personal webserver in OS X. I got Apache configured to allow server side includes (I edited httpd.conf to allow them):
    <Directory />
    Options FollowSymLinks Indexes MultiViews Includes
    AllowOverride None
    </Directory>
    But I can't get the pages to come up. See I have this .shtml page which loads fine and a part of it has this line:
    <!--#include file="news/news.txt" -->
    But it won't parse that txt file and show the HTML that it's formatted in.
    Anyone have any ideas on how to get it to parse that txt file? Even if I load just that txt file, it shows raw code, not the formatted output. Please help.

    Ignore that first reply. I thought I was dealing with Server.
    As usual, I forgot to make sure that Includes was in the Options directive for the DOCUMENT_ROOT or VirtualHost. After 10 years, you'd think I'd remember that. I just configured one of my Macs to do SSI. Here are the 3 lines that I changed:
    Line 399 (the DOCUMENT_ROOT definition): Options Indexes FollowSymLinks MultiViews Includes
    Line 887 (To use server parsed HTML): AddType text/html .shtml
    Line 888: AddHandler server-parsed .shtml
    apachectl restart
    and off it went!
    Roger

  • Some questions on the inner workings of Stellent IBPM 7.6

    Hello,
    I'm having some trouble figuring out a Stellent IBPM 7.6 installation at one of our customers' sites.
    I must admit that I am completely new to IBPM, but since our customer was unable to find anyone who still supported this software,
    they have asked me to find out whether it is possible to make a small change.
    They are using it to archive tiff files + metadata (in text files) exported by Kofax.
    I was unable to find any documentation on version 7.6, but I was able to find an administrator's guide of Oracle Imaging and Process Management 7.7, which I have been using so far.
    So here's the thing;
    I've largely been able to figure out how the whole Filer Server works, how it's configured, how applications are defined in the Application Definition editor,
    how meta data and image files get stored based on the defined indexes/fields and storage classes,
    how galleries are made, how users are linked to them and how searches can be built using the Search Builder.
    So far so good. I understand how the current setup is functioning.
    But what I haven't been able to find in the documentation that I possess,
    is how the system deals with any changes made to this setup.
    More specifically: changes made to the definition of indexes and fields.
    The main questions I have at this moment are:
    - How does the database, specified under "output" in the application definition editor, get constructed?
    Does this have to be done manually? And do you just have to name tables and columns exactly as you specify them in the Application Definition?
    Or will a new table automatically be created when I define a new application?
    I assume it will, because I noticed that the names of the tables in the database are <Application Name>+<Index Name>.
    But I haven't been able to find any piece of information on this, and I don't want to base any actions on assumptions.
    - Is it possible to add a field to an application that has already been online and filed?
    And also in this case; what happens to the output database? Do columns get added automatically or is this a manual step?
    I hope that despite the age of this software, someone will still be able to answer these question or point me to some documentation that I have missed.
    Kind regards,
    Wouter

    To answer the questions:
    #1   An interrupt service routine (ISR) running at a lower IRQL may be interrupted by another ISR running at a higher level.
    #2   Having never heard of a thread interrupt, all I can say here is that a thread may be pre-empted for a lot of reasons.
    #3   An ISR does not have to schedule a DPC unless it needs to do work that it cannot do at high IRQL.
    #4   Assuming that the Write and IOCTL are on separate queues, yes, you need a lock to protect shared resources.
    #5   As the name implies, kernel dispatcher objects allow scheduling (i.e. dispatching of another thread), while things like spinlocks do not.
    #6   Timers are just another dispatcher object, i.e. one can wait on them the same way one waits on a mutex or a semaphore. A WDF timer is basically a wrapper around a regular timer to take care of some of the housekeeping (particularly with respect to stopping the driver etc.).
    #7   If a routine is running at DISPATCH_LEVEL you are limited to spin locks for synchronization. You can, with a zero timeout, check the status of a kernel dispatcher object. In general, routines like this should be designed to only use spinlocks.
    #8   An arbitrary thread is just that: it may be a thread in any process. Basically, a currently running thread is grabbed and used to run the interrupt, so that the scheduler, which has overhead and is not designed to run at interrupt-level IRQLs, does not need to run.
    Don Burn Windows Filesystem and Driver Consulting Website: http://www.windrvr.com Blog: http://msmvps.com/blogs/WinDrvr

  • (V7.3) PARTITION VIEWS IN ORACLE7 RELEASE 7.3

    Product: ORACLE SERVER
    Date written: 2002-05-17
    PURPOSE
    Introduction
    Partition views are a new feature in Release 7.3, designed to provide
    better support for very large tables that are commonly found in data
    warehousing environments.
    The partition view feature offers two primary benefits:
    - improved manageability and availability of large tables
    - improved performance for table scans on certain queries of
    these large tables
    Explanation & Example
    What is a partition view?
    An example partition view is:
    create view sales as
    select * from jan_sales
    union all
    select * from feb_sales
    union all
    select * from dec_sales
    Each of the base tables (the monthly sales tables) must be identical
    in terms of column names, column datatypes, and indexes. Each table
    must also have a CHECK constraint on its partitioning column (thus,
    the jan_sales table must have a check constraint on the date column
    which constrains the date to fall between Jan 1 and Jan 31).
    All of these base tables, indexes and constraints, as well as the
    UNION-ALL view definition, are created by the DBA; a minimal sketch of one such base table follows.
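    A minimal sketch of one base table (hypothetical columns), showing the CHECK constraint on the partitioning column that partition elimination relies on:
    CREATE TABLE jan_sales (
      sales_date  DATE NOT NULL,
      revenues    NUMBER,
      CONSTRAINT jan_sales_ck CHECK (
        sales_date >= TO_DATE('01-JAN-1996','DD-MON-YYYY') AND
        sales_date <= TO_DATE('31-JAN-1996','DD-MON-YYYY'))
    );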
    Manageability and availability benefits
    A partition view greatly simplifies the administration of very large
    tables.
    Consider the example of a data warehouse containing a large
    'sales' table. Once per month, the DBA loads all of the new sales data
    into this table. Thus, the DBA would need to drop all of the indexes
    on the sales table, load the new data, and rebuild the indexes. Since the
    sales table is very large, this could be a lengthy operation.
    Moreover, the sales table is (for most practical DSS applications)
    not available while these load and index operations are occurring.
    Using the partition view feature, the DBA could load the new month's
    data into a separate partition and build indexes on this new partition
    without impacting the original partition view. Then, after the new
    partition is entirely built and indexed, the DBA could recreate the
    UNION-ALL view to include the new partition. The sales partition view
    is unavailable for a very short length of time ... only while the
    UNION-ALL view is being built. Moreover, because the indexes are much
    smaller, the length of time to load and index a new month's worth of
    data is dramatically decreased.
    Performance benefits of partition views
    Any UNION-ALL view (even in earlier releases of Oracle7) can reap the
    aforementioned manageability benefits; however, unless the UNION-ALL
    view offers reasonable query performance, the manageability benefits
    are useless.
    The enhancement in Release 7.3 is to ensure that the query performance
    of UNION-ALL views will be at least equivalent to (and, in many cases,
    much better than) single-table access. Note that these performance
    enhancements are only effective when all of the partitions have the
    appropriate CHECK constraints and when all of the partitions have
    identical column definitions and indexes.
    There are two basic performance enhancements for partition views:
    - partition elimination
    - parallel execution of UNION-ALL views
    Certain queries may not require data from all of the partitions of a
    partition view. For example, consider the following query:
    select sum(revenues) from sales
    where sales_date between '15-JAN-96' and '15-FEB-96';
    With 7.3's new support for partition views, Oracle will evaluate the
    above query using only the January partition and the February
    partition; the remaining ten partitions will not be accessed. This
    feature is commonly called 'partition elimination'.
    Partition elimination is only effective when querying based on the
    partitioning column; in this example, the partitioning column is the
    sales_date column. But the performance savings can be significant. In
    the previous example, the partition view feature results in ten of the
    twelve partitions being eliminated from query processing. This could
    represent a six-fold performance gain.
    An additional enhancement in 7.3 is the parallel execution of
    UNION-ALL views. All queries on UNION-ALL views can be executed in
    parallel (when using the Parallel Query Option). It is very
    important to note that the partitioning scheme is absolutely
    independent of the degree of parallelism (this starkly contrasts with
    many of our competitors' parallel query architectures, in which the
    physical data partitioning determines the degree of parallelism).
    Oracle will dynamically distribute the data of a UNION ALL view across
    all parallel query processes, and partition elimination will not
    impact the degree of parallelism.
    Limitations of Partition Views
    Partition views do not support DML operations. For this reason,
    partition views are most appropriate for read-only applications (such
    as data warehouses).
    Conclusion
    Partition views can be very effective for handling very large tables
    in data warehousing environments. The manageability of these large
    tables is vastly improved, with significant performance improvements
    for many queries.
    Reference Document
    ---------------------

    The installer might not be year 2000 compliant. Download a newer version or set your system date back into 1999 ;-)
