Create and Populate a Hyperion Planning Cube using Hyperion Data Integration Management

Friends,
I am new to Essbase and have worked extensively in Informatica. Hyperion DIM (an OEM version of Informatica) has been chosen to create and populate a Hyperion Planning system (with an Essbase cube in the backend).
I am using Hyperion DIM 9.3.
Can someone let me know (or share a document) how I can do the following:
1) Create a Planning application with an Essbase cube in the backend using Hyperion Data Integration Management
2) Populate the Essbase outline and the actuals Essbase cube with data using DIM.
Thanks a lot for all help.

Hi,
You cannot create planning applications using DIM.
To load metadata have a look at :- http://www.oracle.com/technology/obe/hyp_fp/DIM_Planning/OBE_Dim_Planning.html
You can refresh the Planning database from DIM by enabling the Refresh Database property for a session:
In Workflow Manager, right-click the session and select Edit.
Click the Mapping tab.
Select a Planning target.
Check the Refresh Database box.
Ok?
Cheers
John
http://john-goodwin.blogspot.com/

Similar Messages

  • Why Doesn't XMLIndex Create and Populate Upon Scale-Up For Eval Table?

    Presently working with Oracle release 11.2.0.1 using xmltype securefile binary xml tables.
In a quandary here and hoping not to have to open an Oracle SR...
I was able to create a working xmlindex against an 'Acme Eval' table in our development environment (estimated ~5GB, containing 325,550 rows). Creation takes about 10 mins. No partitioning is being used.
When trying the exact same xmlindex creation against our much more powerful pvs platform environment, containing 13,985,124 rows, the xmlindex object shows up as existing in the data dictionary, but the session never stops running, even after at least 24 hrs of runtime.
The pvs hardware environment uses: (1.) 24 processors, (2.) Solaris 64-bit OS, (3.) 128GB memory.
Two 1-hr AWR reports for the pvs environment show a huge amount of logical reads/writes. The foreground wait event 'db file sequential read' dominates DB Time @ 92%. There are about 4.6GB physical reads / 3.5GB physical writes - not too large, relatively speaking. The I/O subsystem is having no problem handling the throughput. The top Time Model Statistic, by far, is 'sql execute elapsed time' @ 99%. User I/O is the main foreground wait class @ 92%. These values are similar for both AWR reports - except one report shows the 'CREATE XMLINDEX...' statement as the top SQL, and the other shows 'INSERT INTO CROUTREACH.EVAL_IDX_TAB_I...' as the top SQL.
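While the build runs, a query along these lines (standard V$SESSION columns; CROUTREACH is the schema from this environment) can show what the long-running session is actually waiting on:
SELECT sid, sql_id, event, wait_class, seconds_in_wait
FROM v$session
WHERE status = 'ACTIVE' AND username = 'CROUTREACH';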
    Been several days since this post. Hoping someone might be able to provide some insight or share their experiences on xmlindexes scaling up to millions of records in the 5 - 10 gb xmltype table range...
    Regards,
    Rick Blanchard
The frustration here is that there is no obvious database configuration, physical CPU, memory, or I/O issue - other than the logical gets centered around the 'db file sequential read' wait event.
I can't do much as far as adjusting the create index statement and the underlying attendant Oracle XML operations - the main frustration factor here...
The xmlindex is still undergoing record insertions.
Additionally, in the pvs environment no DML is allowed on the xmlindex, and the select statement that uses the xmlindex via the optimizer in the development environment doesn't pick up the xmlindex in the pvs environment - as would be expected if the xmlindex isn't completely populated.
It appears the xmlindex record population is stalled...
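Since a structured xmlindex is populated by inserting into the path tables named in the PARAMETERS clause (the AWR report above shows INSERT INTO CROUTREACH.EVAL_IDX_TAB_I as a top SQL), one rough, hedged way to gauge progress is to compare row counts between the path table and the base table:
SELECT COUNT(*) FROM croutreach.eval_idx_tab_i;  -- /eval rows indexed so far
SELECT COUNT(*) FROM croutreach.eval;            -- base rows to be covered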
In the pvs environment, when running the DDL 'alter index croutreach.eval_xmlindex_ix noparallel',
I get this error - typical when an xmlindex is being populated with records:
    ALTER INDEX croutreach.eval_xmlindex_ix NOPARALLEL
    Error report:
    SQL Error: ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
    00054. 00000 -  "resource busy and acquire with NOWAIT specified"
    *Cause:    Resource interested is busy.
*Action:   Retry if necessary.
The xmlindex create statement used in both cases is (the underlying eval table is also set to a DOP of 20):
    CREATE
      INDEX "EVAL_XMLINDEX_IX" ON "EVAL"
        OBJECT_VALUE
      INDEXTYPE IS "XDB"."XMLINDEX" PARAMETERS
        'XMLTable eval_idx_tab_I XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7",  
    DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/eval''      
    COLUMNS        
    eval_catt VARCHAR2(50) path ''@category'',
    acne_mbr_idd VARCHAR2(50) path ''@acmeMemberId'',
    eval_idd VARCHAR2(50) path ''@evalId'',
    eval_dtt TIMESTAMP WITH TIME ZONE path ''@eval_dt'',
    derivedFact XMLTYPE path ''derivedFacts/ns7:derivedFact'' virtual 
    XMLTable eval_idx_tab_II XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7", 
    DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/ns7:derivedFact'' passing derivedFact     
    COLUMNS         
    defId VARCHAR2(50) path ''ns7:defId'',
    factSource VARCHAR2(50) path ''ns7:factSource'',
    origInferred_dt TIMESTAMP WITH TIME ZONE path ''ns7:origInferred_dt'',
    typee VARCHAR2(20) path ''ns7:factValue/ns7:type'',
    valuee VARCHAR2(1000) path ''ns7:factValue/ns7:value'',
    defUrn VARCHAR2(100) path ''ns7:defUrn'''
  ) parallel 20;
The development environment eval table is:
    CREATE
      TABLE "N98991"."EVAL" OF XMLTYPE
        CONSTRAINT "EVAL_ID_PK" PRIMARY KEY ("EVAL_ID") USING INDEX PCTFREE 10
        INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT
        1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
        FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
        DEFAULT) TABLESPACE "ACME_DATA" ENABLE
      XMLTYPE STORE AS SECUREFILE BINARY XML
        TABLESPACE "ACME_DATA" ENABLE STORAGE IN ROW CHUNK 8192 CACHE NOCOMPRESS
        KEEP_DUPLICATES STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
        2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT)
      ALLOW NONSCHEMA ALLOW ANYSCHEMA VIRTUAL COLUMNS
        "EVAL_DT" AS (SYS_EXTRACT_UTC(CAST(TO_TIMESTAMP_TZ(SYS_XQ_UPKXML2SQL(
        SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03"; (::)
    /eval/@eval_dt'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2),'SYYYY-MM-DD"T"HH24:MI:SS.FFTZH:TZM') AS TIMESTAMP
    WITH
      TIME ZONE))),
        "EVAL_CAT" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@category'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50))),
        "ACME_MBR_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@acmeMemberId'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50))),
        "EVAL_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@evalId'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50)))
      PCTFREE 0 PCTUSED 80 INITRANS 4 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
        INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
        FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT
      TABLESPACE "ACME_DATA" PARALLEL 20 ;
    CREATE
      INDEX "N98991"."EVAL_XMLINDEX_IX" ON "N98991"."EVAL"
        OBJECT_VALUE
      INDEXTYPE IS "XDB"."XMLINDEX" PARAMETERS
        'XMLTable eval_idx_tab_I XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7",  
    DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/eval''      
    COLUMNS        
    eval_catt VARCHAR2(50) path ''@category'',
    acne_mbr_idd VARCHAR2(50) path ''@acmeMemberId'',
    eval_idd VARCHAR2(50) path ''@evalId'',
    eval_dtt TIMESTAMP WITH TIME ZONE path ''@eval_dt'',
    derivedFact XMLTYPE path ''derivedFacts/ns7:derivedFact'' virtual 
    XMLTable eval_idx_tab_II XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7", 
    DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/ns7:derivedFact'' passing derivedFact     
    COLUMNS         
    defId VARCHAR2(50) path ''ns7:defId'',
    factSource VARCHAR2(50) path ''ns7:factSource'',
    origInferred_dt TIMESTAMP WITH TIME ZONE path ''ns7:origInferred_dt'',
    typee VARCHAR2(20) path ''ns7:factValue/ns7:type'',
    valuee VARCHAR2(1000) path ''ns7:factValue/ns7:value'',
defUrn VARCHAR2(100) path ''ns7:defUrn'''
  ) parallel 20;
CREATE UNIQUE INDEX "N98991"."SYS_C00415365" ON "N98991"."EVAL"
        "SYS_NC_OID$"
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE
        INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
        FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT
      TABLESPACE "ACME_DATA" ;
    CREATE UNIQUE INDEX "N98991"."SYS_IL0000688125C00003$$" ON "N98991"."EVAL"
        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576
        MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST
        GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
        TABLESPACE "ACME_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;
    CREATE UNIQUE INDEX "N98991"."EVAL_ID_PK" ON "N98991"."EVAL" ("EVAL_ID")
      PCTFREE 10 INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536
      NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
      FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
  DEFAULT) TABLESPACE "ACME_DATA" ;
The pvs environment's eval table and xmlindex definition is:
    CREATE
      TABLE "CROUTREACH"."EVAL" OF XMLTYPE
        CONSTRAINT "EVAL_ID_PK" PRIMARY KEY ("EVAL_ID") USING INDEX PCTFREE 10
        INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT
        1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
        FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
        DEFAULT) TABLESPACE "ACME_DATA" ENABLE
      XMLTYPE STORE AS SECUREFILE BINARY XML
        TABLESPACE "ACME_DATA" ENABLE STORAGE IN ROW CHUNK 8192 CACHE NOCOMPRESS
        KEEP_DUPLICATES STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
        2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT)
      ALLOW NONSCHEMA ALLOW ANYSCHEMA VIRTUAL COLUMNS
        "EVAL_DT" AS (SYS_EXTRACT_UTC(CAST(TO_TIMESTAMP_TZ(SYS_XQ_UPKXML2SQL(
        SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03"; (::)
    /eval/@eval_dt'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2),'SYYYY-MM-DD"T"HH24:MI:SS.FFTZH:TZM') AS TIMESTAMP
    WITH
      TIME ZONE))),
        "EVAL_CAT" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@category'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50))),
        "ACME_MBR_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@acmeMemberId'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50))),
        "EVAL_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@evalId'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50)))
      PCTFREE 0 PCTUSED 80 INITRANS 4 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
        INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
        FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT
      TABLESPACE "ACME_DATA" PARALLEL 20 ;
    CREATE
      INDEX "CROUTREACH"."EVAL_IDX_MBR_ID_EVAL_CAT" ON "CROUTREACH"."EVAL"
        "ACME_MBR_ID",
        "EVAL_CAT"
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE
        INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
        FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT
      TABLESPACE "ACME_DATA" PARALLEL 16 ;
    CREATE UNIQUE INDEX "CROUTREACH"."SYS_C0018448" ON "CROUTREACH"."EVAL"
        "SYS_NC_OID$"
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE
        INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
        FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT
      TABLESPACE "ACME_DATA" ;
    CREATE UNIQUE INDEX "CROUTREACH"."SYS_IL0000094844C00003$$" ON "CROUTREACH".
      "EVAL"
        PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING STORAGE(INITIAL 65536 NEXT
        1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
        FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
        DEFAULT) TABLESPACE "ACME_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;
    CREATE UNIQUE INDEX "CROUTREACH"."EVAL_ID_PK" ON "CROUTREACH"."EVAL" ("EVAL_ID"
      ) PCTFREE 10 INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536
      NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
      FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
      DEFAULT) TABLESPACE "ACME_DATA" PARALLEL 16 ;
      CREATE
        INDEX "CROUTREACH"."EVAL_XMLINDEX_IX" ON "CROUTREACH"."EVAL"
          OBJECT_VALUE
        INDEXTYPE IS "XDB"."XMLINDEX" PARAMETERS
          'XMLTable eval_idx_tab_I XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7",
    DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/eval''
    COLUMNS
    eval_catt VARCHAR2(50) path ''@category'',
    acne_mbr_idd VARCHAR2(50) path ''@acmeMemberId'',
    eval_idd VARCHAR2(50) path ''@evalId'',
    eval_dtt TIMESTAMP WITH TIME ZONE path ''@eval_dt'',
    derivedFact XMLTYPE path ''derivedFacts/ns7:derivedFact'' virtual
    XMLTable eval_idx_tab_II XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7",
    DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/ns7:derivedFact'' passing derivedFact
    COLUMNS
    defId VARCHAR2(50) path ''ns7:defId'',
    factSource VARCHAR2(50) path ''ns7:factSource'',
    origInferred_dt TIMESTAMP WITH TIME ZONE path ''ns7:origInferred_dt'',
    typee VARCHAR2(20) path ''ns7:factValue/ns7:type'',
    valuee VARCHAR2(1000) path ''ns7:factValue/ns7:value'',
    defUrn VARCHAR2(100) path ''ns7:defUrn'''
    PARALLEL 20 ;
Wondering if anyone has run into xmlindex creation and population problems similar to this when scaling up from thousands of records to millions of records.
At this point, for my work to be useful, I must be able to get the xmlindex to at least successfully create and populate at the 13.9 million records.
    Any suggestions, much appreciated.
    Regards,
    Rick Blanchard

    We didn't use "XMLDB XMLType partitioning" actually, but something simple like
    CREATE TABLE P_DATA
    (    "ID" NUMBER(15,0),
          "DOC" "SYS"."XMLTYPE"
    ) SEGMENT CREATION IMMEDIATE
    NOCOMPRESS NOLOGGING
    TABLESPACE "XML_DATA"
    XMLTYPE COLUMN "DOC" STORE AS SECUREFILE BINARY XML
    (TABLESPACE "XML_DATA"
      NOCOMPRESS  KEEP_DUPLICATES)
    XMLSCHEMA "http://www.xxxxx.com/schema_v3.0.xsd"
    ELEMENT "RECORD"
    DISALLOW NONSCHEMA
    PARTITION BY RANGE(ID)
    (PARTITION Q_DATA_PART_01 VALUES LESS THAN  (100000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_02 VALUES LESS THAN  (200000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_03 VALUES LESS THAN  (300000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_04 VALUES LESS THAN  (400000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_05 VALUES LESS THAN  (500000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_06 VALUES LESS THAN  (600000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_07 VALUES LESS THAN  (700000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_08 VALUES LESS THAN  (800000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_09 VALUES LESS THAN  (900000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_10 VALUES LESS THAN (1000000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_11 VALUES LESS THAN (1100000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_12 VALUES LESS THAN (1200000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_13 VALUES LESS THAN (1300000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_14 VALUES LESS THAN (1400000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_15 VALUES LESS THAN (1500000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_16 VALUES LESS THAN (1600000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_17 VALUES LESS THAN (1700000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_18 VALUES LESS THAN (1800000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_19 VALUES LESS THAN (1900000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_20 VALUES LESS THAN (2000000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_21 VALUES LESS THAN (2100000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_22 VALUES LESS THAN (2200000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_23 VALUES LESS THAN (2300000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_24 VALUES LESS THAN (2400000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_25 VALUES LESS THAN (2500000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_26 VALUES LESS THAN (2600000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_27 VALUES LESS THAN (2700000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_28 VALUES LESS THAN (2800000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_29 VALUES LESS THAN (2900000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_30 VALUES LESS THAN (3000000000) TABLESPACE "XML_DATA" NOCOMPRESS
    ,PARTITION Q_DATA_PART_MAX VALUES LESS THAN  (MAXVALUE) TABLESPACE "XML_DATA" NOCOMPRESS
    );
Could be mistaken, but if I remember correctly we ended up with 10 million record ID ranges. We needed to use partitioning anyway; otherwise we would have reached the physical limit on the maximum number of records in a column (for our db_block_size).
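Once loaded, the spread of rows across the range partitions can be sanity-checked from the dictionary (after gathering statistics; P_DATA is the table from the example above):
SELECT partition_name, num_rows
FROM user_tab_partitions
WHERE table_name = 'P_DATA'
ORDER BY partition_position;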

  • Unable to load data to Hyperion planning application using odi

    Hi All,
When I try to load data into Planning using ODI, the ODI process completes successfully, with the status shown below in the Operator ReportStatistics, but the data doesn't appear in the Planning data form or in Essbase.
Can anyone please help?
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (most recent call last):
    File "<string>", line 2, in <module>
    Planning Writer Load Summary:
         Number of rows successfully processed: 20
         Number of rows rejected: 0
The source is an Oracle database; the target is the Account dimension.
LKM SQL to SQL and IKM SQL to Hyperion Planning are used.
In the target, the following columns were mapped:
Account (load dimension)
Data Load Cube Name
Driver Dimension Metadata
Point of View
    LOG FILE
    2012-08-27 09:46:43,214 INFO [SimpleAsyncTaskExecutor-3]: Oracle Data Integrator Adapter for Hyperion Planning
    2012-08-27 09:46:43,214 INFO [SimpleAsyncTaskExecutor-3]: Connecting to planning application [OPAPP] on [mcg-b055]:[11333] using username [admin].
    2012-08-27 09:46:43,277 INFO [SimpleAsyncTaskExecutor-3]: Successfully connected to the planning application.
    2012-08-27 09:46:43,277 INFO [SimpleAsyncTaskExecutor-3]: The load options for the planning load are
         Dimension Name: Account Sort Parent Child : false
         Load Order By Input : false
         Refresh Database : false
    2012-08-27 09:46:43,339 INFO [SimpleAsyncTaskExecutor-3]: Begining the load process.
    2012-08-27 09:46:43,355 DEBUG [SimpleAsyncTaskExecutor-3]: Number of columns in the source result set does not match the number of planning target columns.
    2012-08-27 09:46:43,371 INFO [SimpleAsyncTaskExecutor-3]: Load type is [Load dimension member].
    2012-08-27 09:46:43,996 INFO [SimpleAsyncTaskExecutor-3]: Load process completed.
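The DEBUG line above about the column-count mismatch may be the clue. As a hedged sketch with hypothetical table and column names ('Plan1' and the POV members are placeholders), the source result set for a Planning data load is expected to supply one column per mapped target column, with the driver dimension's members as the data columns:
SELECT account_name                      AS "Account",
       'Plan1'                           AS "Data Load Cube Name",
       amount                            AS "Jan",  -- a driver dimension member column
       'Actual,FY12,Working,Local,E_100' AS "Point-of-View"
FROM   my_source_table;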

Do any members exist in the Account dimension before the load? If not, can you try adding one member manually and then running the load again?
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • How to delete Hyperion Planning member using ODI

    Hi All,
Does anyone know how to delete a Hyperion Planning member using ODI? And how to update the account type in Hyperion Planning using ODI?
e.g.:
I have a member with account type Saved Assumption, and I need to change it to Revenue, but it will not change. If I change it to Expense, however, it works. So what's wrong with the mapping?
This is my csv file for updating a member in Hyperion Planning.
    Parent,Account,Default Alias,Operation,Data Storage,Two Pass Calculation,Account Type,Time Balance,Skip Value,Data Type,Exchange Rate Type,Use 445,Variance Reporting,Source Plan Type,Aggregation,Member Formula
    Account,Statistics,,Update,,,,,,,,,,,,
    Account,Meal,,Update,Store,,Expense,,,,,,,,,
    Account,Test1,,Update,Never Share,,Saved Assumption,Average,None,Non-currency,none,,,Consol,~,
    Account,Test2,,Update,Never Share,,Revenue,Average,None,Non-currency,none,,,Consol,~,
    Account,Test3,,Update,Never Share,,Saved Assumption,Average,None,Non-currency,none,,,Consol,~,
    Thanks in advance.
    Regards,
    Sumardi

    Hi,
To delete a member you use the Operation column; the following values can be used:
Update - This is the default and is used if the column is not populated; it adds, updates, or moves the member being loaded.
Delete Level 0 - Deletes the member being loaded if it has no children.
Delete Idescendants - Deletes the member being loaded and all of its descendants.
Delete Descendants - Deletes the descendants of the member being loaded, but does not delete the member itself.
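As a hedged example reusing the CSV layout from your question (Test1 is a hypothetical member), a delete row needs only Parent, Account, and Operation populated:
Parent,Account,Default Alias,Operation,Data Storage,Two Pass Calculation,Account Type,Time Balance,Skip Value,Data Type,Exchange Rate Type,Use 445,Variance Reporting,Source Plan Type,Aggregation,Member Formula
Account,Test1,,Delete Idescendants,,,,,,,,,,,,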
Does the member you are trying to change to Revenue have Variance Reporting set to "Expense"? It will need to be set to "Non Expense".
Also, in your interface you can add logging options in the IKM; this may give a clearer indication of where your problem lies.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • If I create and publish a standalone iPad app using DPS will it work on the iPhone 4?

    If I create and publish a standalone iPad app using DPS will it work on the iPhone 4?

    No.
    Bob

  • I want to create and link web ads to website using indesign any suggestions?

I want to create and link web ads to a website using InDesign; any suggestions?

Ok, I am just a designer. I know how to create web ads, but linking/uploading these ads is not part of my job description. My boss asked me to create and manage the ads for our current website through InDesign. Is there a way to do this? Or do I need to buy a program like Muse?

  • I have made a burn folder with photos exported from iPhoto. It now shows in the information that the date created and modified is different from the original digitized date. How can I get the original date to show in the info from Finder?

I have made a burn folder with photos exported from iPhoto. It now shows in the information that the date created and modified is different from the original digitized date. How can I get the original date to show in the info from Finder?

The Finder reports file information. The date and time of the photo are in the photo's Exif metadata; the Finder has no awareness of this. All photo apps on any system do.
    Regards
    TD

  • If i have droid maxx and am on wifi am i using verizon data

If I have a Droid Maxx and am on wifi, am I using Verizon data?

No, you are not using cellular data while on WiFi. However, if WiFi disconnects, you can make sure no cellular data is used by turning it off in Settings / Data usage, and turn it back on when you need it.

  • My music goes to iCloud and I don't want to use my data to download it continuously. How do I keep it on my device and out of iCloud?

    My music goes to iCloud and I don't want to use my data to download it continuously. How do I keep it on my device and out of iCloud?

Donnakelley62 wrote:
I don't know. And no, I don't use iTunes Match.
We are travelling, and it seems every time I want to listen to music it's in iCloud and not on my device. I find a wifi connection and download the songs, then bam - next time they are back in iCloud!
iTunes purchases are always in iCloud. They don't get uploaded there; they are simply there because you purchased them.
When you first purchase, iTunes (computer or phone) should download them. If they are not downloaded, you will see the cloud icon.

  • Newbie questions regarding pulling data from hyperion planning cube w/OBIEE

    Hello there,
We've recently implemented Essbase and are currently pumping it full of revenue/expense data from our source systems to calculate NOI. This data is stored in a staging table at the detail level, where it is sourced into Hyperion and aggregated. We also have OBIEE 10g (we plan to upgrade to 11g later this year) and we would like to connect to and report out of Essbase. Our ultimate goal is to be able to report on the NOI numbers in the planning cube but have the ability to drill down to the detail level, which is not stored in Essbase. We've heard it is possible, albeit not native for OBIEE 10g. We've also heard that it is not best practice to use our transactional cube for this type of reporting, but to create a second "reporting cube".
    What are best practices for getting this NOI data out of Hyperion and merging it with our relational detail reporting? Can we somehow export the data from the cube and store it in a relational database? Should we clone the cube (if even possible) and configure both it and our relational source in the BI repository and setup all drill-throughs there?
    Any info is GREATLY appreciated. Thank you.

I have found information on how to use ODI to extract data from the cube. What I'm really trying to find out, though, is best practices for reporting off summary-level data in Essbase with the ability to drill through to the detail.
We've heard that reporting off the same cube that users are writing back to and transacting on is bad. Do we need to make a "reporting cube" and then bring that into OBIEE and merge it with a relational source, or is it better to extract the data from Essbase into flat files and join it to detail tables in our relational source?

  • Hyperion planning cube refresh is taking more time

    Hi,
    My planning application has around 4 cubes/databases as below:
    Cube A: 1000+ dense members and  500+ sparse members
    Cube B: 100+ dense members and  400+ sparse members
    Cube C: 100+ dense members and  600+ sparse members
    Cube D: 300+ dense members and  2000+ sparse members
When changes (e.g., adding members) are made to any of the above databases and a database refresh is performed, it takes almost 2 hrs for the refresh to complete.
Is there any option to refresh only the database which was updated/changed, without hitting all 4 databases in the application?
    Is there any tuning to be done to minimise this refresh time??
    Kindly let me know.
    Thanks!

Well, every time you touch a dense dimension it will do a full restructure; that means it will rebuild all the .pag files and .ind files.
The good side is that this will get rid of all the fragmentation too.
The problem is the amount of data you have in the cube: the bigger, the slower.
Take a look at the following things during the restructure:
- How much processing the server uses.
- The disk usage.
- Memory (if a cube uses 20 GB it will double during the restructure process; if the memory isn't there it will use swap and things will get really slow).
- The speed at which the .pag files are written (refresh the data, see how much they have grown since the last refresh, and calculate how many KB/s are being written).
With this you can think about how to tune the cube:
If the last item above is too slow, the cause could be the size of your block (either too big or too small). I know Oracle says that on 64-bit systems you can have a block bigger than 100 KB, but I have never seen that work well.
If the third item is the problem, you can decrease the memory for each cube or increase the memory of the server.
If it is the first or second, you need to examine the server itself. You can put your cubes' .pag files on different disks if the problem is in the disks.
Also, in the worst-case scenario you can create a backup database and put old data there. If users need it they can access it through Smart View. That way your cube doesn't increase in size every new year.
Hope this can help you.

  • Problem in creating and updating of  material by the use of bapi and bdc

    Hello All,
I am using a BAPI (BAPI_MATERIAL_SAVEDATA) and then BDC to create material and update its classification data.
I am facing a problem:
1) First I create the material using the BAPI, and after creation I want to update the classification data for that particular material.
2) To update the classification data I am using BDC. At the time of updating the material through BDC, the system shows an error that the material is currently locked by user (my login name).
Please suggest what to do.
Thank you
    With Regards
    Shantanu Modi

When you update/create data it takes some time to commit, so after using the BAPI add a short wait in your program before updating the classification data.
You can write something like:
wait up to 20 seconds.
and then update the classification.

  • I Can't create and add en entry in Ldap using Java

    Hello there,
I'm pretty new to LDAP programming, and I have been trying to create and add an entry to a directory using the code sample provided in the JNDI tutorial, but for some reason I didn't manage.
The configuration file is well set up, because I can create, add, delete, and modify just about anything I want from the openLDAP command lines; my real problem is doing it with Java.
    here is my code:
    import java.util.Hashtable;
    import javax.naming.ldap.*;
    import javax.naming.directory.*;
    import javax.naming.*;
    import javax.net.ssl.*;
    import java.io.*;
public class Newuser
{
     public static void main(String[] args)
     {
          Hashtable<String, String> env = new Hashtable<String, String>(11);
          String adminName = "CN=ldap_admin,o=JNDITutorial,dc=img,dc=org";
          String adminPassword = "xxxxxx";
          env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
          // set security credentials; note this uses simple cleartext authentication
          env.put(Context.SECURITY_AUTHENTICATION, "simple");
          env.put(Context.SECURITY_PRINCIPAL, adminName);
          env.put(Context.SECURITY_CREDENTIALS, adminPassword);
          // connect to my domain controller
          env.put(Context.PROVIDER_URL, "ldap://localhost:389/o=JNDITutorial,dc=img,dc=org");
          try {
               // Create the initial directory context
               DirContext ctx = new InitialDirContext(env);
               System.out.println("Connection to LDAP server done");
               final String groupDN = "ou=people,o=JNDITutorial,dc=img,dc=org";
               People people = new People("Thiru", "Thiru", "Thiru Ganesh", "Ramalingam", "ou=people", "[email protected]");
               // The common name in the DN must match the cn attribute of the entry
               ctx.bind("cn=Thiru," + groupDN, people);
               System.out.println("** Entry added **");
          } catch (NamingException exception) {
               System.out.println("**** Error ****");
               exception.printStackTrace();
               System.exit(0);
          }
     }
}
and this is the class People that I want to instantiate:
    import java.util.Hashtable;
    import javax.naming.Binding;
    import javax.naming.Context;
    import javax.naming.Name;
    import javax.naming.NameClassPair;
    import javax.naming.NameParser;
    import javax.naming.NamingEnumeration;
    import javax.naming.NamingException;
    import javax.naming.directory.*;
public class People implements DirContext {
     private final BasicAttributes myAttrs;

     public People(String uid, String cn, String givenname, String sn, String ou, String mail) {
          myAttrs = new BasicAttributes(true);  // basic attributes, case-ignored
          Attribute objectClass = new BasicAttribute("objectclass"); // adding object classes
          objectClass.add("inetOrgPerson");
          /*objectClass.add("organizationalPerson");
          objectClass.add("person");
          objectClass.add("top");*/
          Attribute ouSet = new BasicAttribute("ou");
          ouSet.add("people");
          ouSet.add(ou);
          myAttrs.put(objectClass);
          myAttrs.put(ouSet);
          myAttrs.put("cn", cn);
          myAttrs.put("sn", sn);
          myAttrs.put("mail", mail);
     }

     public Attributes getAttributes(String name) throws NamingException {
          return myAttrs; // bind() reads the entry's attributes through this method
     }
     // (the remaining DirContext methods are omitted here; they just throw
     // OperationNotSupportedException, as in the JNDI tutorial example)
}
as I said, I can add the new entry using an LDIF file "new.txt" that looks like this:
    dn:cn=Hamido Saddo,ou=People,o=JNDITutorial,dc=img,dc=org
    cn:Hamido Saddo
    mail:[email protected]
    telephonenumber:3838393038703
    sn:hamido
    objectclass:top
    objectclass:person
    objectclass:organizationalPerson
objectclass:inetOrgPerson
using the following command:
ldapadd -D "cn=ldap_admin,o=JNDITutorial,dc=org,dc=img" -W -f new.txt
and everything works,
but when I try with the Java I get the following error:
    javax.naming.directory.SchemaViolationException: [LDAP: error code 65 - entry has no objectClass attribute]
    so, can anyone help me please !!!

    uncomment these lines
    /*objectClass.add("organizationalPerson");
    objectClass.add("person");
    objectClass.add("top");*/
    you need all of them to add successfully
    your LDIF has all the four lines
    objectclass:top
    objectclass:person
    objectclass:organizationalPerson
    objectclass:inetOrgPerson
    hope this helps

  • How to create and configure proxies in ADF mobile using OWSM client agent?

    Hi
Can anyone please tell me how to create and configure proxies in an ADF Mobile application using the Oracle Web Services Manager (OWSM) Lite Mobile ADF Application Agent? I read in the mobile documentation that:
For secured web services, the user credentials are dynamically injected: ADF Mobile uses the Oracle Web Services Manager (OWSM) Lite Mobile ADF Application Agent to create and configure proxies, as well as to request services through the proxies. The user credentials are injected into the OWSM enforcement context when proxies are configured.
I am new to OWSM; can anyone please give me some hints on how to proceed with implementing authentication using the OWSM Lite Mobile ADF Application Agent?
    Thanks in advance
    Raj

    Hi Juan
The demo is very useful, and in it Shay describes remote login, using a regular ADF web application secured and deployed to the server. But I would like to know how to create a local login using the OWSM client agent.
Without creating a regular ADF web application, how can I call secured web services? (i.e., using the OWSM client agent, how do I create and configure proxies to call a secured web service, where the user credentials are injected into the web service request by the OWSM client, as mentioned in the ADF Mobile document)?
    Regards
    Raj

  • Create and Populate Date Dimension for AdventureWorksDW2012

    Does anybody have scripts for DimDate for AdventureWorksDW 2012 or 2014?
If I generate the schemas and data, it takes several megabytes. I have a requirement that this be done with a script to save space. I also need to extend the script with local calendar requirements.
    Kenny_I

If you mean to populate the DimDate table with a range of dates, try the following:
create table #dates (
    [date] date not null
);
declare @d date;
set @d = '2010-01-03';      --- change this to the start date that you want
while @d < '2017-01-01'     --- change this to the end date that you want
begin
    insert into #dates ([date]) values (@d);  -- insert before incrementing, so the start date is included
    set @d = DATEADD(DAY, 1, @d);
end
    select date,
    [YEAR] = datepart(year, date),
    [MONTH No.] = datepart(month, date),
    [MONTH] = case datepart(month, date) when 1 then '01 January'
    when 2 then '02 February'
    when 3 then '03 March'
    when 4 then '04 April'
    when 5 then '05 May'
    when 6 then '06 June'
    when 7 then '07 July'
    when 8 then '08 August'
    when 9 then '09 September'
    when 10 then '10 October'
    when 11 then '11 November'
    when 12 then '12 December'
    end,
    [day]= datepart( day, date),
    [weekday] = case datepart(weekday, date) when 1 then 'Sunday'
    when 2 then 'Monday'
    when 3 then 'Tuesday'
    when 4 then 'Wednesday'
    when 5 then 'Thursday'
    when 6 then 'Friday'
    when 7 then 'Saturday'
    end
    from #dates
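As a hedged alternative sketch, the same half-open date range as the loop above can be generated set-based, without a WHILE loop, from a row-number sequence (sys.all_objects is assumed to provide enough rows when cross-joined):
with n as (
    select top (datediff(day, '2010-01-03', '2017-01-01'))
           row_number() over (order by (select null)) - 1 as i
    from sys.all_objects a cross join sys.all_objects b
)
select dateadd(day, i, '2010-01-03') as [date]
from n;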
