Regarding creating and assigning dimensions to characteristics

Hi,
I have some characteristics and I need to create dimensions, then assign the characteristics to those dimensions.
On what basis should I create the dimensions, and on what basis should I assign the characteristics to the dimensions?
Please tell me.
I'll assign the points.
Bye,
Rizwan

Hi Rizwan,
The modelling depends on your business requirements, but I have a suggestion regarding the dimensions.
Before modelling, please confirm this with experts.
Since you have close to 8 lakh (800,000) records each for the equipment master and the location master, I think you should create two separate dimensions and assign one of these characteristics to each.
You mentioned that the work order master data plus transaction data is around 135,000 records.
There is a concept known as a line item dimension, which you may be aware of:
"Characteristics can be defined as line items. In other words, aside from this characteristic, no other characteristics can be assigned to the dimension. This kind of dimension is called a line item dimension (degenerated dimension). This option is used when a characteristic has a large number of values (order number, for example), which, in combination with other characteristics, would lead to a large increase in the size of the dimension table relative to the fact table, detrimentally affecting query performance."
A line item dimension does not have a dimension table.
The SID table of the line item characteristic is connected directly to the fact table through foreign/primary key relationships.
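To picture the difference, here is a minimal relational sketch (the table and column names are hypothetical, not real BW-generated names): with a regular dimension the fact table reaches the characteristic's SID through a dimension table, while a line item dimension removes that middle table so the fact table carries the SID directly.

-- Regular dimension: fact table -> dimension table -> SID table (two joins).
SELECT f.amount, s.order_number
FROM   fact_sales   f
JOIN   dim_document d ON f.dim_document_key = d.dim_document_key
JOIN   sid_order    s ON d.sid_order        = s.sid_order;

-- Line item dimension: no dimension table; the fact table stores the
-- SID itself, so only one join is needed.
SELECT f.amount, s.order_number
FROM   fact_sales f
JOIN   sid_order  s ON f.sid_order = s.sid_order;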
Always load master data first before transaction data to avoid inconsistencies.
Hope this helps.
Regards
Hari

Similar Messages

  • SQL08 Need example of creating AND using a Fact (degenerate) dimension

    Can someone post a link to some examples of setting up and using a fact (degenerate) dimension? I've got the SSAS 2008 R2 AdventureWorks DW project set up and I see a few fact dimensions in there, but I'd like some descriptions to go along with them, or something similar.
    My scenario:
    Orders - attributes include :  Order#, OrderStartDate, SalesPerson, BusinessType, MarketType
    OrderTransactions - attributes include:  OrderKey , TransactionAmount, TransactionType, TransactionDate, AccountKey
    Description:
    It's more or less the typical Order > Order Detail scenario, except that Orders carry some additional attributes.
    So I want to be able to measure, for example:
    Order counts - broken down by BusinessType and MarketType, and then within a date range
    Revenue - which will be totals of transaction amounts, again grouped by BusinessType and MarketType, but also with the ability to drill down to see the related Order#

    Hi Shiftbit,
    According to your description, you need some examples of creating and using degenerate dimensions, right?
    Degenerate dimensions, also called fact dimensions, are standard dimensions that are constructed from attribute columns in fact tables instead of from attribute columns in dimension tables. Here are documents that describe how to create and use degenerate
    dimensions step by step; please refer to the links below.
    https://msdn.microsoft.com/en-us/library/ms167409(v=sql.100).aspx
    http://www.jamesserra.com/archive/2011/11/degenerate-dimensions/
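    As a rough illustration of the idea in your scenario (a sketch only; the table and column names below are hypothetical, not from the AdventureWorks sample): the order number simply stays in the fact table, and the fact dimension is built straight from that column, so you can aggregate by the regular dimensions and still drill down to the related Order#.

    -- The degenerate dimension column (OrderNumber) lives in the fact table.
    CREATE TABLE FactOrderTransactions (
      OrderKey          INT           NOT NULL,  -- FK to the Orders dimension
      OrderNumber       VARCHAR(20)   NOT NULL,  -- degenerate dimension
      AccountKey        INT           NOT NULL,
      TransactionDate   DATE          NOT NULL,
      TransactionAmount DECIMAL(18,2) NOT NULL
    );

    -- Revenue by BusinessType/MarketType, drilling down to each Order#.
    SELECT o.BusinessType, o.MarketType, f.OrderNumber,
           SUM(f.TransactionAmount) AS Revenue
    FROM   FactOrderTransactions f
    JOIN   DimOrder o ON f.OrderKey = o.OrderKey
    GROUP  BY o.BusinessType, o.MarketType, f.OrderNumber;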
    Regards,
    Charlie Liao
    TechNet Community Support

  • Index out of range when trying to create a new dimension (reporting type)

    I just created all my dimensions/characteristics and two cubes/InfoCubes (rate & ownership) in my new AppSet called 'CONSOLIDATION', and I can't seem to create a reporting-type cube.
    When I try to create a cube called TEST or CONSO... (generic, finance or conso), it says 'Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index'. The message pops up immediately, before I can even try to choose dimensions.
    Anybody who has a clue, I look forward to suggestions.
    Regards
    Nico

    Hi Sorin,
    Thanks for the reply. If I understand correctly, I cannot create any non-reporting-type application in my application set? We deleted the 'Rate' application that was delivered with APSHELL. Is there any way to overcome this? From your reply it seems not possible, as the system needs a reference application belonging to the same application type.
    Many Thanks for your reply.
    Santosh

  • Error after creating a new dimension in dev studio

    Hi,
    I have set up ATG 10.1.2 along with Endeca 3.1.1.
    For the CRS application the cartridges are shown properly.
    But once I create an autogen dimension from Developer Studio and run the baseline update, the navigation cartridges disappear with the error below.
    error=com.endeca.infront.navigation.NavigationException: com.endeca.navigation.ENEException: HTTP Error 404 - Navigation Engine not able to process request 'http://localhost:15000/graph?node=10098&refinement=dimvalid:10093+dynrank:0+exposed:1&refinement=dimvalid:10001+dynrank:0+exposed:1&refinement=dimvalid:10002+dynrank:0+exposed:1&refinement=dimvalid:1+dynrank:0+exposed:1&refinement=dimvalid:10011+dynrank:0+exposed:1&groupby=product.repositoryId&offset=0&nbins=0&allbins=1&autophrase=1&autophrasedwim=1&filter=AND%28product.priceListPair%3asalePrices_listPrices%2cOR%28product.siteId%3astoreSiteUS%29%29&irversion=640'., displayNameProperty=displayName_en, dimensionId=10001, buryRefinements=[]}, {showMoreLink=false, sort=default, @type=RefinementMenu, boostRefinements=[], maxNumRefinements=200, numRefinements=10, displayNamePropertyAlias=displayName, name=Size, moreLinkText=Show More Refinements..., dimensionName=clothing-sku.size
    I have also set the --back_compat flag to 640, but I still get the same error.
    Only a full redeployment of the application removes the error.
    Please advise if anyone has faced a similar issue.
    Regards,
    Varun

    Please see the suggested solutions in the following docs.
    Entity Maps Not Defined For Attachment Error When Selecting A Deliverable (Doc ID 358385.1)
    Corrupt Personalization - No Entities Found Entitymaps Not Defined For Attachment Item (Doc ID 1085011.1)
    R12:Supplier Page Unexpected Error: 'No Entities Found EntityMaps not Defined for Attachment Item' (Doc ID 1361320.1)
    Geography Hierarchy No Entities Found, EntityMaps Not Defined For Attachment Item (Doc ID 831088.1)
    Depot Repair Bulk Receiving Error: "No entities found, entityMaps not defined for attachment item" (Doc ID 1357977.1)
    Thanks,
    Hussein

  • Date and Time dimensions

    After reading the following article, I have decided to use SSAS dimension wizard for generating our Date dimension, which creates a DATETIME PK.
    http://www.made2mentor.com/2011/05/date-vs-integer-datatypes-as-primary-key-for-date-dimensions/ 
    I have also created a separate Time dimension as granularity of an hour is required.
    The Time dimension is very simple and only contains a surrogate key (INTEGER) and actual time in hours (VARCHAR).
    DimTime(TimeKey, TimeInHours)
    Our Fact table will now have a link to both the Date and Time dimension using the PK's.
    Our analysis is required by hour, day, week, month and year.
    My query is: will this current structure cause any problems when creating MDX scripts to analyse our data (i.e. drilldown and rollup queries) across Hour - Day - Week - Month - Year?

    Hi Darren,
    According to your description, there is day and hour granularity in your fact table, and you want a hierarchy like Hour - Day - Week - Month - Year, right?
    In your scenario, you created a time table that only contains a surrogate key (INTEGER) and the actual time in hours (VARCHAR). We cannot create an Hour - Day - Week - Month - Year hierarchy without any relationship between the date table and the time table. As per my understanding,
    you need to create a foreign key in the time table and join those tables in the data source view; then you can create such a hierarchy. Here are some links about creating a time dimension; please see:
    http://www.ssas-info.com/analysis-services-articles/59-time-dimension/1224-date-and-time-dimensions-template
    http://www.codeproject.com/Articles/25852/Creating-Time-Dimension-in-Microsoft-Analysis-Serv
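    As a minimal sketch of that idea (hypothetical table and column names): the fact table carries one key per dimension, and one common way to give the hierarchy a single home is a combined date/time view built as the cross product of days and hours:

    -- Fact table referencing both the Date and Time dimensions.
    CREATE TABLE FactActivity (
      DateKey INT           NOT NULL,  -- FK to DimDate (day grain)
      TimeKey INT           NOT NULL,  -- FK to DimTime (hour grain)
      Amount  DECIMAL(18,2) NOT NULL
    );

    -- Combined dimension: 24 hour rows per calendar day, giving the
    -- Hour - Day - Week - Month - Year hierarchy one table to live in.
    CREATE VIEW DimDateTime AS
    SELECT d.DateKey, t.TimeKey,
           d.CalendarYear, d.CalendarMonth, d.CalendarWeek, d.CalendarDate,
           t.TimeInHours
    FROM   DimDate d
    CROSS JOIN DimTime t;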
    Regards,
    Charlie Liao
    TechNet Community Support

  • Creating A new dimension for a characteristic versus adding in the same dim

    Hi Guys,
    I have a scenario where I have 0MATERIAL in a line item dimension in the cube.
    I have to add 0MAT_PLANT, which is compounded to 0PLANT, as we need MRP controller
    as one of the navigational attributes. 0PLANT is also available in the cube.
    There are two options for doing this:
    1) Either add it to the 0MATERIAL dimension, removing the line item property.
    2) Or create a new dimension for 0MAT_PLANT and make it a line item dimension,
       considering the large volume of material information.
    Which is the better option, and why?
    Please advise.
    Many Thanks and Regards,
    Kate

    Hi Kate,
    I'd recommend having a new dimension as a line item dimension for 0MAT_PLANT, purely for performance reasons (near-logarithmic access to the data instead of a full table scan).
    The plant segments in R/3 usually have far more records than the general material master (at most: number of plants * number of materials).
    Adding the object to the 0MATERIAL dimension means you have to remove the line item flag. That usually leads to increased load and query runtimes.
    hth
    cheers
    sven

  • BPC 10.0- Creating report using dimension property filter

    Hi,
    I created a new report in the EPM add-in for Excel using the 'Filter Members by Properties' feature in the EPM Member Selector window.
    My requirement is that, as I add new members with that property value to my dimension, the report should refresh and display the new members.
    The feature met my requirement and I saved the report. However, when I reopen the report, the filter I used to create it has been replaced in the EPM Member Selector window by a hard-coded list of the members that had those property values at save time.
    As a result, when I add new members with that property value to the dimension, the report does not recognise them, and my requirement is no longer met.
    E.g.: I create a report with the Entity dimension on the row axis and the Product dimension, filtered on Category = New, on the column axis,
    and save the report.
    When I reopen the report, I see in the EPM Member Selector window that the members with property value Category = New (Product A, Product B, Product E) are hard coded on the column axis, and the filter has been removed.
    So when I add a new member Product N with property Category = New to my Product dimension, the report does not recognise this member.
    Please help on how I can use this feature to meet my requirement.
    Regards,
    Sowmya

    Hi,
    It should work for newly created members that meet the requirement in the member selector.
    Member filtering by properties:
    Select the property, operator and value, and click 'Add Dynamic Filter'; in the right-hand window you should then see a row like this:
    Member name            Relationship
    <property>=<Value>     Property
    Hope it works,
    Thanks,
    Raju

  • When do we create a line item dimension - before implementation or in production?

    Hi BW experts ,
    I have a doubt regarding the creation of a line item dimension.
    Do we create the line item dimension before implementation or after implementation? Please give an exact answer.
    If you give the exact answer I will assign more points.
    Regards
    prakash.v
    [email protected]

    Hi Prakash,
                     You can create a line item dimension both before and after implementation. If you know the data volume, you can create it at the initial stage; otherwise, use transaction RSRV to check the dimension table size, and if the dimension table is getting as big as the fact table, create the line item dimension.
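    In plain SQL terms, that size check amounts to comparing cardinalities (a sketch with hypothetical table names; in BW you would normally read these numbers from RSRV or the SAP_INFOCUBE_DESIGNS report rather than query them by hand):

    -- Compare dimension table size with fact table size; a dimension whose
    -- row count approaches the fact table's is a line item candidate.
    SELECT (SELECT COUNT(*) FROM dim_document) AS dim_rows,
           (SELECT COUNT(*) FROM fact_sales)   AS fact_rows,
           ROUND(100 * (SELECT COUNT(*) FROM dim_document)
                     / (SELECT COUNT(*) FROM fact_sales), 1) AS dim_pct_of_fact
    FROM dual;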

  • Ranking and Date dimensions

    Hello everyone,
    I am trying to sort some values in descending order, together with additional time dimensions, so I can prompt on them in my dashboard.
    I can get it to rank correctly when I omit the date fields (Year, Qtr, Month); however, if these are present, the ranking is based on the smaller per-month values, because OBIEE breaks the amount down to the month,
    and each month has different costs associated with it.
    Example:
    Field1 | Value (desc) | Rank
    1001   | $3,654       | 1
    1400   | $1,520       | 2
    3501   | $1,511       | 3
    3508   | $1,200       | 4
    1601   | $958         | 5
    1401   | $608         | 6
    1602   | $200         | 7
    So when Month is present, for example, my amount is not $3,654 but rather two rows that sum to $3,654. I do not want that to happen, but I still want to have the fields available so I can filter.
    Field1 | Value (desc) | Rank | Month
    1001   | $1,830       | 1    | Jan-2009
    1001   | $1,824       | 2    | Feb-2009
    Does anyone know how to display the data as in the first table, with the additional date fields present but not affecting the row result?
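    If the underlying source is reachable with SQL, one common approach (a sketch only, with hypothetical table and column names; this is not OBIEE-specific syntax, where you would instead use a level-based measure or RANK with a BY clause) is to rank on a total computed across all months with a window function, so the rank ignores the month breakdown while the Month column stays available for filtering:

    SELECT field1,
           month_name,
           value,
           DENSE_RANK() OVER (ORDER BY field1_total DESC) AS total_rank
    FROM  (SELECT field1, month_name, value,
                  SUM(value) OVER (PARTITION BY field1) AS field1_total
           FROM   fact_costs) t;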


  • How to create a Time Dimension

    Hi
    This is regarding a new topic for which I could not find the answer in this forum.
    I want to create a time dimension to populate from my source data, which is in date/time format.
    Can anyone please reply regarding how to create a time dimension in ODI?
    regards
    Gourisankar

    Hi Gourisankar,
    I am not aware of a time dimension feature as such, but when I searched Metalink I found the note below. I am not sure whether it will help you or not, but still a small contribution. :)
    The note is as follows:
    To create new time dimensions, run the following SQL instructions:
    * Example to generate a calendar between 1999-01-01 and 2007-12-31, one row per day (inclusive of both endpoints):
    SELECT TO_DATE('1999-01-01','YYYY-MM-DD') + ROWNUM - 1 AS calendar_date
    FROM dual
    CONNECT BY
    ROWNUM <= TO_DATE('2007-12-31','YYYY-MM-DD') - TO_DATE('1999-01-01','YYYY-MM-DD') + 1;
    * Example for generating random numbers:
    SELECT DBMS_RANDOM.VALUE
    FROM dual
    CONNECT BY ROWNUM <= 1000000;
    Maybe you can create a VIEW out of this query, reverse-engineer it, and use it as a source in your interface.
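    For instance (a sketch; the view name is hypothetical):

    CREATE VIEW v_time_dimension AS
    SELECT TO_DATE('1999-01-01','YYYY-MM-DD') + ROWNUM - 1 AS calendar_date
    FROM   dual
    CONNECT BY ROWNUM <= TO_DATE('2007-12-31','YYYY-MM-DD') - TO_DATE('1999-01-01','YYYY-MM-DD') + 1;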
    Try this and let me know.
    Thanks,
    G
    Edited by: Gurusank on Dec 22, 2008 4:19 PM

  • Two fact and one dimension table

    Hi folks
    I am new to this field (3 months). My TL has given me a task that I have to finish today itself. Can anyone give me some idea how to implement
    the requirement? The requirement is:
    1. There are two fact tables and one dimension table. I have to create reports on a quarterly and monthly basis using the one dimension table.
    Can anyone tell me the steps I need to follow?
    2. What are standalone and integrated systems?
    Regards
    Reddy

    Hi
    If you have mapping keys for the quarter level and the monthly level, then create an alias for the dimension table. Use the original dimension table for the quarter-level mapping and the alias for the monthly level, as in the sketch below.
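    In plain SQL terms, an alias is just the same physical dimension table joined under a second name, so each grain can map to it independently (a sketch only; table and column names are hypothetical):

    SELECT q.period_name AS quarter_name,
           m.period_name AS month_name,
           SUM(f.amount) AS total_amount
    FROM   fact_sales f
    JOIN   dim_period q ON f.quarter_key = q.period_key  -- original table: quarter grain
    JOIN   dim_period m ON f.month_key   = m.period_key  -- alias: month grain
    GROUP  BY q.period_name, m.period_name;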
    Thanks
    Don

  • Degenerate and Junk dimensions

    I am new to SSAS. I want to understand degenerate and junk dimensions in detail. It would be good if you could provide practical examples, please.
    Regards,
    Ramu
    Ramu Gade

    Junk Dimensions
    There are certain scenarios where you will find that the source for a fact table contains a bunch of low-cardinality attributes that don't really relate to any of the other attributes describing these facts. Some of the more common examples are bit/character-based "flags" or "codes" which are useful to the end users for filtering and aggregating the facts. For example, imagine a user who wants to analyze orders from the order fact table that are flagged as "reprocessed": they can either filter for facts with the reprocessed flag if they are only interested in that subset, or they can group by the "reprocessed" flag to calculate things like the percentage of orders that are "reprocessed".
    Instead of building a separate dimension for each of these individual attributes, another option is to combine them and build what's known as a junk dimension based on the Cartesian product of each of these attributes and their corresponding ranges of values (see the sketch after this list).
    This technique does two important things:
    Saves disk space
    Consider a single 4-byte integer key linking to the junk dimension vs. a handful of 4-byte integer keys each linking to a separate dimension. It might not sound like a lot on a per-record basis, but once you extrapolate out over a 100-million-record fact table the savings really add up.
    Improves the end-user experience
    By keeping the total number of dimensions down to a manageable size, it will be easier for your end users to find the attributes they're looking for during ad-hoc analysis. Kimball recommends <= 26 dimensions per fact table - of course there are always a few edge-case exceptions.
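    For example, a junk dimension for flags like these can be materialized as the distinct observed combinations of the flag columns (a sketch with hypothetical names; you could equally generate the full Cartesian product of all possible values up front):

    -- Build the junk dimension from the distinct flag combinations in the
    -- source, assigning one surrogate key per combination.
    CREATE TABLE dim_order_junk AS
    SELECT ROW_NUMBER() OVER (ORDER BY reprocessed_flag, rush_flag, payment_code) AS junk_key,
           reprocessed_flag,
           rush_flag,
           payment_code
    FROM  (SELECT DISTINCT reprocessed_flag, rush_flag, payment_code
           FROM   stg_orders) c;

    The fact table then carries a single junk_key instead of three separate flag columns (or three separate dimension keys).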
    Degenerate Dimensions
    The degenerate dimension is another modeling technique for attributes found in the source transaction table. The main differences between these attributes and the ones that would fall into our junk dimension are as follows:
    Cardinality
    These are typically high-cardinality attributes - in some cases having a 1:1 relationship with the fact. They are likely to be the business keys of the fact table, such as Purchase Order Number, Work Order Number, etc. Another potential candidate for the degenerate dimension is free-form comment fields.
    Use case for end users
    These attributes are not going to be used for filtering/aggregating facts. Instead, they are the types of attributes typically used in drilldown or data-mining scenarios (e.g. market basket analysis). For example, imagine a user who is analyzing purchase orders in the "delayed" status. After drilling down on the delayed POs for a certain supplier in a certain time period, the next step might be to pick up the Purchase Order Number, which would allow this user to trace this small subset of POs back to the source system to find out why they are "delayed".
    Storage
    Despite the name, these attributes typically remain in the fact table. There really isn't much point in moving them out to an actual dimension table: because of the high cardinality there is likely to be zero space savings; in fact it would probably cost you space due to the additional surrogate keys. You'll also likely pay a heavy price on the join at query time.
    Analysis Services Implementation
    For junk dimensions, you create a new dimension at the project level, pointing it to the table (or view) in the data warehouse that materializes the distinct combinations of values for the various junk attributes. After configuring the dimension at the SSAS project level, it can be added to the cube(s) and linked up to the measure group(s) via regular relationships (where appropriate).
    For degenerate dimensions, the process is the same, except you base the project-level dimension on the fact table (or view). Once the project-level dimension is configured, it can be added to the cube(s) and linked up to the measure group(s) using "fact" relationships (where appropriate).
    Please mark as Answer if this helps!
    Rajasekhar.

  • Update and process dimension by BADI

    Dear Experts,
    We are working on SAP BPC NW 7.5 with BADIs and we want to know how to
    - create ID members in dimensions
    - write properties on members
    - delete ID members from dimensions
    To summarize, we want full control to update and process dimension members: to update master data by means of a BADI, uploading info from SAP ECC or wherever, and to remove some entries if needed.
    We have tried using methods like READ_MBR_DATA, WRITE_MBR_DATA and PROCESS_DIMENSION, without success.
    Can somebody explain how to do it?
    Can somebody give us an example, please?
    Thanks in advance,
    Albert

    Hi Gersh,
    Finally we've solved the problem of the transports and analyzed the implementations in detail, but... unfortunately deletion of master data is not implemented, just creation by means of UJA_API_PROCESS_DIMENSION (working with XML files), and we prefer to use the method WRITE_MBR_DATA (which belongs to the class/interface IF_UJA_DIM_DATA).
    Anyway... the point is that we want to know how to delete master data from a BPC dimension using a method... otherwise we will try to delete the master data directly from the corresponding InfoObject.
    By the way, it would be great if anybody could tell us how to delete records with signed data '0' from the application... the alternative technique is compressing the InfoCube... but we're looking for something more elegant.
    Summarizing... we have two questions:
    1. How to delete master data from a BPC dimension using a BPC method
    2. How to delete records with signed data '0' from the application using a BPC method
    Many thanks in advance!
    Regards,
    Albert

  • Why Doesn't XMLIndex Create and Populate Upon Scale-Up For Eval Table?

    Presently working with Oracle release 11.2.0.1, using XMLType SecureFile binary XML tables.
    In a quandary here, and hoping not to have to open an Oracle SR...
    I am able to create a working XMLIndex against an 'Acme Eval' table in our development environment (estimated ~5GB, containing 325,550 rows). Creation takes about 10 minutes. No partitioning is being used.
    When trying the exact same XMLIndex creation against our much more powerful PVS platform environment, containing 13,985,124 rows, the XMLIndex object shows up as existing in the data dictionary, but the session never stops running, even after at least 24 hours of runtime.
    The PVS hardware environment has: (1) 24 processors, (2) Solaris 64-bit OS, (3) 128GB memory.
    Two one-hour AWR reports for the PVS environment show a huge amount of logical reads/writes. The foreground wait event 'db file sequential read' dominates DB time at 92%. There are about 4.6GB of physical reads and 3.5GB of physical writes - not too large, relatively speaking - and the I/O subsystem has no problem handling the throughput. The top Time Model statistic, by far, is 'sql execute elapsed time' at 99%. User I/O is the main foreground wait class at 92%. These values are similar for both AWR reports, except that one report shows the 'CREATE XMLINDEX...' statement as the top SQL and the other shows 'INSERT INTO CROUTREACH.EVAL_IDX_TAB_I...' as the top SQL.
    It has been several days since this post. Hoping someone might be able to provide some insight or share their experiences with XMLIndexes scaling up to millions of records in the 5-10GB XMLType table range...
    Regards,
    Rick Blanchard
    The frustration here is that there is no obvious database configuration, physical CPU, memory, or I/O issue - other than the logical gets centered around the 'db file sequential read' wait event.
    Can't do much about adjusting the create index statement and the underlying attendant Oracle XML operations - the main frustration factor here...
    The xmlindex is still undergoing record insertions.
    Additionally, in the PVS environment, no DML is allowed on the XMLIndex, and the SELECT statement that works fine using the XMLIndex via the optimizer in the development environment doesn't pick up the XMLIndex in the PVS environment - as would be expected if the XMLIndex isn't completely populated.
    It appears the XMLIndex record population has stalled...
    In the PVS environment, when issuing 'alter index croutreach.eval_xmlindex_ix noparallel',
    I get this error - typical when an XMLIndex is being populated with records:
    ALTER INDEX croutreach.eval_xmlindex_ix NOPARALLEL
    Error report:
    SQL Error: ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
    00054. 00000 -  "resource busy and acquire with NOWAIT specified"
    *Cause:    Resource interested is busy.
    *Action:   Retry if necessary.
    The XMLIndex create statement used in both cases is as follows
    (the underlying eval table is also set to a DOP of 20):
    CREATE
      INDEX "EVAL_XMLINDEX_IX" ON "EVAL"
        OBJECT_VALUE
      INDEXTYPE IS "XDB"."XMLINDEX" PARAMETERS
        'XMLTable eval_idx_tab_I XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7",  
    DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/eval''      
    COLUMNS        
    eval_catt VARCHAR2(50) path ''@category'',
    acne_mbr_idd VARCHAR2(50) path ''@acmeMemberId'',
    eval_idd VARCHAR2(50) path ''@evalId'',
    eval_dtt TIMESTAMP WITH TIME ZONE path ''@eval_dt'',
    derivedFact XMLTYPE path ''derivedFacts/ns7:derivedFact'' virtual 
    XMLTable eval_idx_tab_II XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7", 
    DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/ns7:derivedFact'' passing derivedFact     
    COLUMNS         
    defId VARCHAR2(50) path ''ns7:defId'',
    factSource VARCHAR2(50) path ''ns7:factSource'',
    origInferred_dt TIMESTAMP WITH TIME ZONE path ''ns7:origInferred_dt'',
    typee VARCHAR2(20) path ''ns7:factValue/ns7:type'',
    valuee VARCHAR2(1000) path ''ns7:factValue/ns7:value'',
    defUrn VARCHAR2(100) path ''ns7:defUrn'''
      ) PARALLEL 20 ;
    The development environment eval table is:
    CREATE
      TABLE "N98991"."EVAL" OF XMLTYPE
        CONSTRAINT "EVAL_ID_PK" PRIMARY KEY ("EVAL_ID") USING INDEX PCTFREE 10
        INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT
        1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
        FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
        DEFAULT) TABLESPACE "ACME_DATA" ENABLE
      XMLTYPE STORE AS SECUREFILE BINARY XML
        TABLESPACE "ACME_DATA" ENABLE STORAGE IN ROW CHUNK 8192 CACHE NOCOMPRESS
        KEEP_DUPLICATES STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
        2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT)
      ALLOW NONSCHEMA ALLOW ANYSCHEMA VIRTUAL COLUMNS
        "EVAL_DT" AS (SYS_EXTRACT_UTC(CAST(TO_TIMESTAMP_TZ(SYS_XQ_UPKXML2SQL(
        SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03"; (::)
    /eval/@eval_dt'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2),'SYYYY-MM-DD"T"HH24:MI:SS.FFTZH:TZM') AS TIMESTAMP
    WITH
      TIME ZONE))),
        "EVAL_CAT" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@category'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50))),
        "ACME_MBR_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@acmeMemberId'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50))),
        "EVAL_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@evalId'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50)))
      PCTFREE 0 PCTUSED 80 INITRANS 4 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
        INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
        FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT
      TABLESPACE "ACME_DATA" PARALLEL 20 ;
    CREATE
      INDEX "N98991"."EVAL_XMLINDEX_IX" ON "N98991"."EVAL"
        OBJECT_VALUE
      INDEXTYPE IS "XDB"."XMLINDEX" PARAMETERS
        'XMLTable eval_idx_tab_I XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7",  
    DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/eval''      
    COLUMNS        
    eval_catt VARCHAR2(50) path ''@category'',
    acne_mbr_idd VARCHAR2(50) path ''@acmeMemberId'',
    eval_idd VARCHAR2(50) path ''@evalId'',
    eval_dtt TIMESTAMP WITH TIME ZONE path ''@eval_dt'',
    derivedFact XMLTYPE path ''derivedFacts/ns7:derivedFact'' virtual 
    XMLTable eval_idx_tab_II XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7", 
    DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/ns7:derivedFact'' passing derivedFact     
    COLUMNS         
    defId VARCHAR2(50) path ''ns7:defId'',
    factSource VARCHAR2(50) path ''ns7:factSource'',
    origInferred_dt TIMESTAMP WITH TIME ZONE path ''ns7:origInferred_dt'',
    typee VARCHAR2(20) path ''ns7:factValue/ns7:type'',
    valuee VARCHAR2(1000) path ''ns7:factValue/ns7:value'',
    defUrn VARCHAR2(100) path ''ns7:defUrn'''
      ) PARALLEL 20 ;
    CREATE UNIQUE INDEX "N98991"."SYS_C00415365" ON "N98991"."EVAL"
        "SYS_NC_OID$"
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE
        INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
        FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT
      TABLESPACE "ACME_DATA" ;
    CREATE UNIQUE INDEX "N98991"."SYS_IL0000688125C00003$$" ON "N98991"."EVAL"
        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576
        MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST
        GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
        TABLESPACE "ACME_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;
    CREATE UNIQUE INDEX "N98991"."EVAL_ID_PK" ON "N98991"."EVAL" ("EVAL_ID")
      PCTFREE 10 INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536
      NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
      FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
      DEFAULT) TABLESPACE "ACME_DATA" ;
    The PVS environment's eval table and XMLIndex definition is:
    CREATE
      TABLE "CROUTREACH"."EVAL" OF XMLTYPE
        CONSTRAINT "EVAL_ID_PK" PRIMARY KEY ("EVAL_ID") USING INDEX PCTFREE 10
        INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT
        1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
        FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
        DEFAULT) TABLESPACE "ACME_DATA" ENABLE
      XMLTYPE STORE AS SECUREFILE BINARY XML
        TABLESPACE "ACME_DATA" ENABLE STORAGE IN ROW CHUNK 8192 CACHE NOCOMPRESS
        KEEP_DUPLICATES STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
        2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT)
      ALLOW NONSCHEMA ALLOW ANYSCHEMA VIRTUAL COLUMNS
        "EVAL_DT" AS (SYS_EXTRACT_UTC(CAST(TO_TIMESTAMP_TZ(SYS_XQ_UPKXML2SQL(
        SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03"; (::)
    /eval/@eval_dt'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2),'SYYYY-MM-DD"T"HH24:MI:SS.FFTZH:TZM') AS TIMESTAMP
    WITH
      TIME ZONE))),
        "EVAL_CAT" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@category'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50))),
        "ACME_MBR_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@acmeMemberId'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50))),
        "EVAL_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@evalId'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50)))
      PCTFREE 0 PCTUSED 80 INITRANS 4 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
        INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
        FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT
      TABLESPACE "ACME_DATA" PARALLEL 20 ;
    CREATE
      INDEX "CROUTREACH"."EVAL_IDX_MBR_ID_EVAL_CAT" ON "CROUTREACH"."EVAL"
        "ACME_MBR_ID",
        "EVAL_CAT"
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE
        INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
        FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT
      TABLESPACE "ACME_DATA" PARALLEL 16 ;
    CREATE UNIQUE INDEX "CROUTREACH"."SYS_C0018448" ON "CROUTREACH"."EVAL"
        "SYS_NC_OID$"
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE
        INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
        FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT
      TABLESPACE "ACME_DATA" ;
    CREATE UNIQUE INDEX "CROUTREACH"."SYS_IL0000094844C00003$$" ON "CROUTREACH".
      "EVAL"
        PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING STORAGE(INITIAL 65536 NEXT
        1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
        FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
        DEFAULT) TABLESPACE "ACME_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;
    CREATE UNIQUE INDEX "CROUTREACH"."EVAL_ID_PK" ON "CROUTREACH"."EVAL" ("EVAL_ID"
      ) PCTFREE 10 INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536
      NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
      FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
      DEFAULT) TABLESPACE "ACME_DATA" PARALLEL 16 ;
      CREATE
        INDEX "CROUTREACH"."EVAL_XMLINDEX_IX" ON "CROUTREACH"."EVAL"
          OBJECT_VALUE
        INDEXTYPE IS "XDB"."XMLINDEX" PARAMETERS
          'XMLTable eval_idx_tab_I XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7",
    DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/eval''
    COLUMNS
    eval_catt VARCHAR2(50) path ''@category'',
    acne_mbr_idd VARCHAR2(50) path ''@acmeMemberId'',
    eval_idd VARCHAR2(50) path ''@evalId'',
    eval_dtt TIMESTAMP WITH TIME ZONE path ''@eval_dt'',
    derivedFact XMLTYPE path ''derivedFacts/ns7:derivedFact'' virtual
    XMLTable eval_idx_tab_II XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7",
    DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/ns7:derivedFact'' passing derivedFact
    COLUMNS
    defId VARCHAR2(50) path ''ns7:defId'',
    factSource VARCHAR2(50) path ''ns7:factSource'',
    origInferred_dt TIMESTAMP WITH TIME ZONE path ''ns7:origInferred_dt'',
    typee VARCHAR2(20) path ''ns7:factValue/ns7:type'',
    valuee VARCHAR2(1000) path ''ns7:factValue/ns7:value'',
    defUrn VARCHAR2(100) path ''ns7:defUrn'''
        PARALLEL 20 ;
    Wondering if anyone has run into XMLIndex creation and population problems similar to this when scaling up from thousands of records to millions of records.
    At this point, for my work to be useful, I must be able to get the XMLIndex to successfully create and populate at the 13.9 million records.
    Any suggestions, much appreciated.
    Regards,
    Rick Blanchard
    Edited by: RickBlanchardSRS on May 29, 2012 1:03 PM

    We didn't actually use "XMLDB XMLType partitioning", but something simple like:
    CREATE TABLE P_DATA
    (    "ID" NUMBER(15,0),
          "DOC" "SYS"."XMLTYPE"
    ) SEGMENT CREATION IMMEDIATE
    NOCOMPRESS NOLOGGING
    TABLESPACE "XML_DATA"
    XMLTYPE COLUMN "DOC" STORE AS SECUREFILE BINARY XML
    (TABLESPACE "XML_DATA"
      NOCOMPRESS  KEEP_DUPLICATES)
    XMLSCHEMA "http://www.xxxxx.com/schema_v3.0.xsd"
    ELEMENT "RECORD"
    DISALLOW NONSCHEMA
    PARTITION BY RANGE(ID)
    (PARTITION Q_DATA_PART_01 VALUES LESS THAN  (100000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_02 VALUES LESS THAN  (200000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_03 VALUES LESS THAN  (300000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_04 VALUES LESS THAN  (400000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_05 VALUES LESS THAN  (500000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_06 VALUES LESS THAN  (600000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_07 VALUES LESS THAN  (700000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_08 VALUES LESS THAN  (800000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_09 VALUES LESS THAN  (900000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_10 VALUES LESS THAN (1000000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_11 VALUES LESS THAN (1100000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_12 VALUES LESS THAN (1200000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_13 VALUES LESS THAN (1300000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_14 VALUES LESS THAN (1400000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_15 VALUES LESS THAN (1500000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_16 VALUES LESS THAN (1600000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_17 VALUES LESS THAN (1700000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_18 VALUES LESS THAN (1800000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_19 VALUES LESS THAN (1900000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_20 VALUES LESS THAN (2000000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_21 VALUES LESS THAN (2100000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_22 VALUES LESS THAN (2200000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_23 VALUES LESS THAN (2300000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_24 VALUES LESS THAN (2400000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_25 VALUES LESS THAN (2500000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_26 VALUES LESS THAN (2600000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_27 VALUES LESS THAN (2700000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_28 VALUES LESS THAN (2800000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_29 VALUES LESS THAN (2900000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_30 VALUES LESS THAN (3000000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_MAX VALUES LESS THAN  (MAXVALUE) TABLESPACE "XML_DATA" NOCOMPRESS
    );
    Could be mistaken, but if I remember correctly we ended up with 10-million-record ID ranges. We needed to use partitioning anyway; otherwise we would have reached the physical limit on the maximum number of records in a column (for our db_block_size).
    Edited by: Marco Gralike on May 29, 2012 10:02 PM

  • Regarding Header and Footer in Data Type Creation

    Hi All,
    Can anyone please send me a screenshot developed with a header and footer in data type creation?
    I want to know how to create them, and where, in DT creation.
    And why do we need both of these in DT creation?
    Regards
    Vamsi

    Hi,
    Will you please send one screenshot with these details, so that I can understand better?
    ID : [email protected]
    Please send
    Regards
    Vamsi
