Creating Indexes with CF9 ORM

I'm just starting to play around with CF9's ORM, and I love that I can have it automatically create tables for me. But one thing I haven't been able to figure out yet: is there a way to create indexes on tables via CF9's ORM? I'm not talking about the indexes for primary keys or FK relationships to other tables; I want to create indexes on table columns not related to other tables.
Is this doable in ORM, or am I just going to have to create the tables on the DB server to do it?
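One workaround (not necessarily the only one) is to let ORM generate the tables and then issue the CREATE INDEX yourself through a plain query. A minimal sketch, assuming a hypothetical datasource "myDSN" and placeholder table/column names:
<!--- Hypothetical: run once after ORM has (re)generated the schema --->
<cfquery datasource="myDSN">
    CREATE INDEX idx_users_lastname ON users (lastName)
</cfquery>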

I actually RTFM'd! So far I've got the children relationship working like this:
property name="children"
    fieldtype="one-to-many"
    cfc="Object"
    linktable="ObjectRelation"
    fkcolumn="parentID"
    singularname="child"
    lazy="true"
    inversejoincolumn="childID";
I had the "inverse" version of this for the parent relationship; it didn't work all that well, though.
property name="parent"
    fieldtype="many-to-one"
    cfc="Object"
    linktable="ObjectRelation"
    fkcolumn="childID"
    lazy="true"
    inversejoincolumn="parentID";
With that, whenever I try to call object.getParent().getName() I get:
Error Messages: Value must be initialized before use.
It's possible that a method called on a Java object created by CreateObject returned null.
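If the relationship itself is fine but the object simply has no parent (for example a root node), getParent() returns null and any chained call fails with exactly that message. A minimal guard, sketched in CFScript:
<cfscript>
    // Sketch: avoid dereferencing a null parent (root objects have none)
    parent = object.getParent();
    if (not isNull(parent)) {
        writeOutput(parent.getName());
    } else {
        writeOutput("no parent");
    }
</cfscript>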

Similar Messages

  • CREATE INDEX WITH DUPLICATE COLUMN NAME

    Hi,
    I need to interface our application with an Oracle bridge to CREATE INDEX with a duplicate column name.
    For example:
    CREATE index NLOT_FOURNLOT_idx ON NLOT(FOURNISSEUR ,NOLOT ,FOURNISSEUR ,NOBLFOUNISSEUR,NOLIGNEBL ,NOLOT ,QTECOLISRECUES ,CODENONQUALITE ,QTECOLISACCEPTE);
    CREATE table NLOT(
    FOURNISSEUR VARCHAR2(09)
    ,NOBLFOURNISSEUR VARCHAR2(13)
    ,NOLIGNEBL VARCHAR2(03)
    ,NOLOT VARCHAR2(20)
    ,QTECOLISRECUES VARCHAR2(10)
    ,CODENONQUALITE VARCHAR2(02)
    ,QTECOLISACCEPTE VARCHAR2(10)
    ,NOMBREDECOLISRE VARCHAR2(10)
    ,NOMBREDECOLISAC VARCHAR2(10)
    ,FILLER VARCHAR2(1)
    ,FILLE1 VARCHAR2(1)
    ,TYPEREFERENCE VARCHAR2(01)
    ,REFERENC1 VARCHAR2(15)
    ,CONTROLERECEPTI VARCHAR2(01)
    ,DATEDEPEREMPTIO VARCHAR2(8)
    ,CONTROLEPROCHAI VARCHAR2(1)
    Thanks
    Philippe

    Well, you can't do it. ORA-957 is one of those irrevocable errors for which the solution is to remove the duplicate name from the SQL statement.
    But, anyway, why do you want to do this? I would guess there's no performance benefit from having the same column indexed twice (of course it's impossible to test this, so it's just my opinion).
    Cheers, APC
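    For illustration, the same index with each column listed only once (assuming NOBLFOUNISSEUR in the original statement was meant to be NOBLFOURNISSEUR, as in the table DDL) would be:
    CREATE INDEX NLOT_FOURNLOT_idx ON NLOT
      (FOURNISSEUR, NOLOT, NOBLFOURNISSEUR, NOLIGNEBL,
       QTECOLISRECUES, CODENONQUALITE, QTECOLISACCEPTE);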

  • Error when creating index with parallel option on very large table

    I am getting a
    "7:15:52 AM ORA-00600: internal error code, arguments: [kxfqupp_bad_cvl], [7940], [6], [0], [], [], [], []"
    error when creating an index with the parallel option, which is strange because this has not been a problem until now. We just hit 60 million rows in a 45-column table, and I wonder if we've hit a bug.
    Version 10.2.0.4
    O/S Linux
    As a test I removed the parallel option and several of the indexes were created with no problem, but many still threw the same error... Strange. Do I need a patch update of some kind?

    This is most certainly a bug.
    From metalink it looks like bug 4695511 - fixed in 10.2.0.4.1

  • Difference between Create Index without and with Parallel Clause

    Hi all.
    I want to know the difference between CREATE INDEX with the PARALLEL clause and CREATE INDEX without the PARALLEL clause.
    Is there any documentation?
    Thanks,
    Hassan

    Sure?
    dimitri@VDB> create table t parallel 3 as select * from all_objects;
    Table created.
    dimitri@VDB> set autotrace traceonly
    dimitri@VDB> select * from t;
    40934 rows selected.
    Execution Plan
    Plan hash value: 3050126167
    | Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT     |          | 40601 |  5075K|    50   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR      |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)| :TQ10000 | 40601 |  5075K|    50   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    PX BLOCK ITERATOR |          | 40601 |  5075K|    50   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   4 |     TABLE ACCESS FULL| T        | 40601 |  5075K|    50   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    --------------------------------------------------------------------------------------------------------------
    Looks like PQ to me.
    alter table t noparallel;
    Table altered.
    dimitri@VDB> select * from t;
    40934 rows selected.
    Execution Plan
    Plan hash value: 1601196873
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      | 40601 |  5075K|   135   (1)| 00:00:02 |
    |   1 |  TABLE ACCESS FULL| T    | 40601 |  5075K|   135   (1)| 00:00:02 |
    --------------------------------------------------------------------------
    Same if we use an index:
    dimitri@VDB> create index t_idx on t(object_name) parallel 3;
    Index created.
    dimitri@VDB> select object_name from t;
    40934 rows selected.
    Execution Plan
    Plan hash value: 4278805225
    | Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT        |          | 40601 |   674K|    50   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR         |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)   | :TQ10000 | 40601 |   674K|    50   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    PX BLOCK ITERATOR    |          | 40601 |   674K|    50   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   4 |     INDEX FAST FULL SCAN| T_IDX    | 40601 |   674K|    50   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    -----------------------------------------------------------------------------------------------------------------
    Everything done on a single-CPU (single-core) AMD box.
    Dim
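    A common follow-up to the demo above: the PARALLEL degree used at build time is kept as an attribute of the index, which is why the plain SELECT still runs as a parallel query. If that is not wanted, the degree can be reset after the build, for example:
    create index t_idx on t(object_name) parallel 3;
    alter index t_idx noparallel;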

  • Wait Events "log file parallel write" / "log file sync" during CREATE INDEX

    Hello guys,
    at my current project I am performing some performance tests for Oracle Data Guard. The question is: "How does a LGWR SYNC transfer influence the system performance?"
    To get some performance values that I can compare, I just built up a normal Oracle database in the first step.
    Now I am performing different tests like creating "large" indexes, massive parallel inserts/commits, etc. to get the benchmark.
    My database is an Oracle 10.2.0.4 with multiplexed redo log files on AIX.
    I am creating an index on a "normal" table; I execute "dbms_workload_repository.create_snapshot()" before and after the CREATE INDEX to get an equivalent timeframe for the AWR report.
    After the index is built (roughly 9 GB) I run awrrpt.sql to get the AWR report.
    And now take a look at these values from the AWR
    Event                      Waits  %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn
    log file parallel write   10,019          .0                  132             13       33.5
    log file sync                293          .7                    4             15        1.0
    How can this be possible?
    According to the documentation:
    - log file sync: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3120
      Wait Time: The wait time includes the writing of the log buffer and the post.
    - log file parallel write: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3104
      Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk.
    This was also my understanding: the "log file sync" wait time should be higher than the "log file parallel write" wait time, because it includes the I/O and the response time to the user session.
    I could accept it if the values were close to each other (maybe about 1 second in total), but the difference between 132 seconds and 4 seconds is too noticeable.
    Is the behavior of the log file sync/write different when performing a DDL like CREATE INDEX (maybe async .. like you can influence it with the initialization parameter COMMIT_WRITE??)?
    Do you have any idea how these values come about?
    Any thoughts/ideas are welcome.
    Thanks and Regards

    Surachart Opun (HunterX) wrote:
    Thank you for the nice idea. In this case, how can we reduce the "log file parallel write" and "log file sync" wait times? CREATE INDEX with NOLOGGING can help, can't it?
    Yes - if you create the index NOLOGGING then you wouldn't be generating that 10 GB of redo, so the waits would disappear.
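    As a rough sketch (object names hypothetical; see the caveats below):
    -- build with minimal redo, then restore the default attribute
    CREATE INDEX big_tab_idx ON big_tab (some_col) NOLOGGING;
    ALTER INDEX big_tab_idx LOGGING;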
    Two points on nologging, though:
    - It's "only" an index, so you could always rebuild it in the event of media corruption; but if you had lots of indexes created nologging this might cause an unreasonable delay before the system was usable again, so you should decide on a fallback option, such as taking a new backup of the tablespace as soon as all the nologging operations have completed.
    - If the database, or that tablespace, is in "force logging" mode, the nologging will not work.
    Don't get too alarmed by the waits, though. My guess is that the "log file sync" waits are mostly from other sessions, and since there aren't many of them the other sessions are probably not seeing a performance issue. The "log file parallel write" waits are caused by your create index, but they are happening to LGWR in the background, which is running concurrently with your session - so your session is not (directly) affected by them and may not be seeing a performance issue.
    The other sessions are seeing relatively high sync times because their log file syncs have to wait for one of the large writes that you have triggered to complete, and then the log writer includes their (little) writes with your next (large) write.
    There may be a performance impact, though, from the pure volume of I/O. Apart from the I/O to write the index, you have LGWR writing (N copies of) the redo for the index and ARCH reading and writing the completed log files caused by the index build. So the 9 GB of index could easily be responsible for vastly more I/O than the initial 9 GB.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • How to create index on Global Temporary Table?

    Hi,
    Can I create an index with storage parameters on a global temporary table? If possible, how?
    Thanks

    Yes. You can create an index on a global temporary table (GTT) with the regular 'CREATE INDEX' statement.
    Not sure though if you are allowed to locate it in a specific tablespace. Why would you want to do that anyway?
    My guess is, like the GTT, indexes on GTT's also default to the temporary tablespace.
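    For illustration, a minimal sketch (table and column names hypothetical):
    CREATE GLOBAL TEMPORARY TABLE gtt_demo (id NUMBER, val VARCHAR2(30))
      ON COMMIT PRESERVE ROWS;
    CREATE INDEX gtt_demo_idx ON gtt_demo (id);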

  • Performance issue with drop and re-create index

    My database table has about 2 million records. The index on the table was not optimized, so we created a new index, let's call it index2. So this table now has the original index (index1) and index2. We then inserted data into this table from the other box. It was running for a few weeks.
    Suddenly we noticed that a query which used to take a few seconds now took more than a minute. The execution plan was using index2, which technically should be faster. We checked whether the statistics were up to date, and they were. So then we dropped the new index, re-ran the query, and it completed in 10 seconds. It was using the old index. This puzzled me, since the point of index2 was to make things better. So then we re-created index2 and generated stats for the index. We re-ran the query and it completed in 5 seconds.
    Every time we timed the query, I shut down and restarted the box to clear all caches. So all the times I have specified are pure times and not cached. The execution plans using index2 that took 1 minute and 5 seconds are nearly the same, with a very minor difference in cost and cardinality. Any ideas why index2 took 1 minute before, and only 5 seconds after the drop and re-create?
    The reason I want to find the cause is to ensure that this doesn't happen again, since it's impossible for me to re-create the index every time I see this issue. Any thoughts would be helpful.

    Firstly, the indexes are different: index1 is only on the time column, whereas index2 is a composite index consisting of 3 columns.
    Here are the details. The tests that I did were last Friday, 3/31. Yesterday and today when I executed the same query I got longer times again: yesterday it took 9 sec and today 17 sec. The stats job kicked in on both days and is up to date. Nothing gets deleted from this table; rows are only added.
    3/31
    Original
    Elapsed: 00:01:02.17
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6553 Card=9240 Bytes=203280)
    1 0 SORT (UNIQUE) (Cost=6553 Card=9240 Bytes=203280)
    2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=15982 Card=2306303 Bytes=50738666)
    drop index EVENT_NA_TIME_ETYPE
    Elapsed: 00:00:11.91
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=7792 Card=9275 Bytes=204050)
    1 0 SORT (UNIQUE) (Cost=7792 Card=9275 Bytes=204050)
    2 1 TABLE ACCESS (BY INDEX ROWID) OF 'EVENT' (Cost=2092 Card=2284254 Bytes=50253588)
    3 2 INDEX (RANGE SCAN) OF 'EVENT_TIME_NDX' (NON-UNIQUE) (Cost=6740 Card=2284254)
    create index EVENT_NA_TIME_ETYPE ON EVENT(NET_ADDRESS,TIME,EVENT_TYPE);
    BEGIN
    SYS.DBMS_STATS.GENERATE_STATS('USER','EVENT_NA_TIME_ETYPE',0);
    end;
    Elapsed: 00:00:05.14
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6345 Card=9275 Bytes=204050)
    1 0 SORT (UNIQUE) (Cost=6345 Card=9275 Bytes=204050)
    2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=12878 Card=2284254 Bytes=50253588)
    4/3
    Elapsed: 00:00:09.70
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6596 Card=9316 Bytes=204952)
    1 0 SORT (UNIQUE) (Cost=6596 Card=9316 Bytes=204952)
    2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=11696 Card=2409400 Bytes=53006800)
    Statistics
    0 recursive calls
    0 db block gets
    11933 consistent gets
    9676 physical reads
    724 redo size
    467 bytes sent via SQL*Net to client
    503 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    3 rows processed
    4/4
    Elapsed: 00:00:17.99
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6681 Card=9421 Bytes=207262)
    1 0 SORT (UNIQUE) (Cost=6681 Card=9421 Bytes=207262)
    2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=12110 Card=2433800 Bytes=53543600)
    Statistics
    0 recursive calls
    0 db block gets
    12279 consistent gets
    9423 physical reads
    2608 redo size
    467 bytes sent via SQL*Net to client
    503 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    3 rows processed
    SQL> select index_name, clustering_factor, blevel, leaf_blocks, distinct_keys from user_indexes where index_name like 'EVENT%';
    INDEX_NAME            CLUSTERING_FACTOR  BLEVEL  LEAF_BLOCKS  DISTINCT_KEYS
    EVENT_NA_TIME_ETYPE             2393170       2        12108        2395545
    EVENT_PK                          32640       2         5313        2286158
    EVENT_TIME_NDX                    35673       2         7075        2394055
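    Side note: if you wanted to gather the index statistics directly rather than via GENERATE_STATS, a sketch with DBMS_STATS (same index name as above) would be:
    BEGIN
      DBMS_STATS.GATHER_INDEX_STATS(ownname => USER, indname => 'EVENT_NA_TIME_ETYPE');
    END;
    /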

  • Problems with creating index

    I have over 1500 MB free in the tablespace I use for indexes. When I attempt to create an index which should be about 300 MB, it fails with error ORA-01652: unable to extend temp segment in the tablespace where my indexes reside.
    Why does Oracle need so much space to create an index, and is there any way to do it without increasing the size of the tablespace?

    Oracle needs no extra space in the tablespace beyond what is required to hold the index (INITIAL plus any additional extents allocated). My guess would be that you do not have a large enough contiguous chunk of free space to hold an additional extent.
    If your tablespace is dictionary managed, look at the INITIAL, NEXT and PCTINCREASE parameters of your CREATE INDEX statement (or the tablespace defaults if you are not supplying them). Compare these to the sizes of the free space chunks in your index tablespace.
    SELECT file_id, block_id, blocks, bytes
    FROM dba_free_space
    WHERE tablespace_name = 'YOUR_INDEX_TBS'
    ORDER BY blocks DESC;
    Compare the sizes of the first few rows here with the sizes of INITIAL and NEXT.
    HTH
    John
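    If the storage parameters are coming from the tablespace defaults rather than from the CREATE INDEX statement, a quick way to see them (sketch; tablespace name as above):
    SELECT tablespace_name, initial_extent, next_extent, pct_increase
    FROM dba_tablespaces
    WHERE tablespace_name = 'YOUR_INDEX_TBS';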

  • Problems creating an spatial index with srid=4326

    Hi!
    I would like to know if somebody can help me with the following problem: we are using version 10.2.0.1 and we need our SRID value to be 4326. We have no problems with 8307 or other values. However, when we try to use srid = 4326, the following error message appears:
    Error on line 17: CREATE INDEX SIDX_D3M_SDO_GEOMETRY ON DAT_3DM_MODEL (DM3_SDO_GEOMETRY) INDEXTYPE ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine ORA-13249: internal error in Spatial index: [mdidxrbd] ORA-13249: Error initializing geodetic transform ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 10
    The PL/SQL that we used is the following one:
    DELETE FROM USER_SDO_GEOM_METADATA WHERE TABLE_NAME = “DAT_3DM_MODEL”
    COMMIT
    INSERT INTO USER_SDO_GEOM_METADATA (TABLE_NAME, COLUMN_NAME, DIMINFO, SRID) VALUES('DAT_3DM_MODEL', 'DM3_SDO_GEOMETRY', MDSYS.SDO_DIM_ARRAY ( MDSYS.SDO_DIM_ELEMENT ('LONGITUDE', -180, 180, 0,05), -- MDSYS.SDO_DIM_ELEMENT ('LATITUDE', -90, 90, 0.05) ), 4326 )
    CREATE INDEX SIDX_D3M_SDO_GEOMETRY ON DAT_3DM_MODEL (DM3_SDO_GEOMETRY) INDEXTYPE IS MDSYS.SPATIAL_INDEX
    Thanks in advance,
    Susana.

    I cannot reproduce the error, in my environment. However:
    Your insert statement into USER_SDO_GEOM_METADATA appears to have included some typos. They might have happened, when transcribing. Please make sure you use the following:
    INSERT INTO USER_SDO_GEOM_METADATA (
    TABLE_NAME,
    COLUMN_NAME,
    DIMINFO,
    SRID)
    VALUES(
    'DAT_3DM_MODEL',
    'DM3_SDO_GEOMETRY',
    MDSYS.SDO_DIM_ARRAY (
    MDSYS.SDO_DIM_ELEMENT ('LONGITUDE', -180, 180, 10),
    MDSYS.SDO_DIM_ELEMENT ('LATITUDE', -90, 90, 10)),
    4326);
    However, the actual culprit is most certainly different. As I suspected, it might be related to the "decimal comma": In Germany, for instance, a decimal comma is used, instead of a decimal point. You have used a decimal comma in your original INSERT, as well (0,05 instead of 0.05).
    Please try the following:
    SQL> select wktext from cs_srs where srid = 4326;
    WKTEXT
    GEOGCS [ "WGS 84", DATUM ["World Geodetic System 1984 (EPSG ID 6326)", SPHEROID
    ["WGS 84 (EPSG ID 7030)", 6378137, 298.257223563]], PRIMEM [ "Greenwich", 0.0000
    00 ], UNIT ["Decimal Degree", 0.01745329251994328]]
    On your system, you will likely find a decimal comma, where my output has a decimal point. This is bug 5097326, which has been fixed, and backported to 10.2.0.3 and 10.2.0.4.

  • Create table with constraint and index

    If I create a table with a unique constraint and after that create a unique index on the same columns, I get an error. Please see below.
    Does it mean that when I create a table with the constraint, the unique index is automatically created and I cannot create
    the index as I did below?
    create table test_const(ename varchar2(50) not null,
    key_num number not null,
    descr varchar2(100),
    constraint constraint_test_const unique (ename, key_num));
    create unique index test_const_idx on test_const
    ("ENAME","KEY_NUM")
    tablespace tmp_data;
    Error report:
    SQL Error: ORA-01408: such column list already indexed
    01408. 00000 - "such column list already indexed"

    Not too hard to check (the answer is yes by the way).
    ME_XE?create table test_const(ename varchar2(50) not null,
    key_num number not null,
    descr varchar2(100),
    constraint constraint_test_const unique (ename, key_num));
      2    3    4 
    Table created.
    Elapsed: 00:00:00.12
    ME_XE?select index_name, index_type
    from user_indexes where table_name = 'TEST_CONST';
      2 
    INDEX_NAME                     INDEX_TYPE
    CONSTRAINT_TEST_CONST          NORMAL
    1 row selected.
    Elapsed: 00:00:00.14
    ME_XE?
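    If the goal is simply to control the name and tablespace of the backing index, one option (a sketch reusing the names from the thread) is to declare it together with the constraint via USING INDEX:
    CREATE TABLE test_const (
      ename   VARCHAR2(50)  NOT NULL,
      key_num NUMBER        NOT NULL,
      descr   VARCHAR2(100),
      CONSTRAINT constraint_test_const UNIQUE (ename, key_num)
        USING INDEX (CREATE UNIQUE INDEX test_const_idx
                     ON test_const (ename, key_num) TABLESPACE tmp_data)
    );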

  • How to create a concatenated index with a long column (Urgent!!)

    We have a situation where we need to create a concatenated unique index with one of the columns being a LONG datatype. Oracle does not allow a lot of things with LONG columns.
    Does anyone know if this is possible, or if there is a way to get around it?
    All help is appreciated!

    From the Oracle SQL Reference:
    "You cannot create an index on columns or attributes whose type is user-defined, LONG, LONG RAW, LOB, or REF, except that Oracle supports an index on REF type columns or attributes that have been defined with a SCOPE clause."
    It doesn't mention CLOB or BLOB types, so perhaps you should consider using one of those types instead. I have a feeling that the LONG type is now deprecated.
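    If migrating off LONG is an option, the in-place conversion is a one-liner (sketch; table and column names hypothetical). A plain b-tree index still cannot include a LOB column, so the concatenated unique index itself would need rethinking (index the other columns, or use a domain index):
    ALTER TABLE my_table MODIFY (long_col CLOB);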

  • How to simply create a locally partitioned index with a dedicated tablespace

    Dears,
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
    Suppose you have a range-partitioned table t on date d with n partitions p1, p2, ..., pn.
    If you create a locally partitioned index using the following script
    create index ind_loc on t(d) local;
    all the partitioned indexes will be created in the same default tablespace.
    Is there a simple creation script (not one where we have to add each partition and each tablespace) to use in order to associate each index partition with a given tablespace?
    Best Regards
    Mohamed Houri

    Suppose you have a range-partitioned table t on date d with n partitions p1, p2, ..., pn.
    If you create a locally partitioned index using the following script
    create index ind_loc on t(d) local;
    all the partitioned indexes will be created in the same default tablespace.
    Is there a simple creation script (not one where we have to add each partition and each tablespace) to use in order to associate each index partition with a given tablespace?
    Hi Mohamed ,
    AFAIK, there is no such script. If you want the partitions to be stored in different tablespaces, then you will have to specify each partition name along with its tablespace name in the CREATE INDEX command.
    Regards
    Rajesh
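    For reference, the explicit form Rajesh describes looks like this for a three-partition table (partition and tablespace names hypothetical):
    CREATE INDEX ind_loc ON t (d) LOCAL
      (PARTITION p1 TABLESPACE ts_idx_1,
       PARTITION p2 TABLESPACE ts_idx_2,
       PARTITION p3 TABLESPACE ts_idx_3);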

  • How to create a new index with already indexed folder

    Hi
    I have a requirement to create a new index.
    I am able to create the index. But when specifying the data source, I am trying to include a data source (folder) which was already indexed in another index.
    The system gives the error: "The folder is already indexed and is available in another index".
    How can I specify the same folder in different indexes?

    Hi Patricio,
    Thanks for your reply.
    Our requirement is, from your example:
    index1 contains a data source \folder1,
    and I have a requirement to create another index called index2 with data sources
    1) \folder1
    2) \testfolder\testfolder1.
    From help.sap.com I understand that there is no way to specify the same folder in different indexes.
    I am looking for any solution to meet my requirement.
    Thanks

  • BI Loading to Cube Manually without Creating Indexes

    BW 3.5
    I have a process chain scheduled overnight which loads data to the InfoCubes from the ODS after loading to the staging and transformation layer.
    The data load into the InfoCube is scheduled in the process chain as:
    delete index --> delete contents of the cube --> load data to the cube --> create index.
    The above process chain load to the cube normally takes 5-6 hrs.
    The only concern I have is that at times the process chain fails at the staging and transformation layer, and I then have to rectify it manually.
    After rectifying the error, I still have to load the data to the cube.
    I am left with only a couple of hours, say 2-3 hrs, to complete the process chain load to the cube because of business hours.
    Kindly let me know, in the above case where I am short of time to load data to the cube via the process chain:
    Can I manually delete the contents of the cube and load the data to the cube? Here I would not be deleting the existing index(es) and creating the index(es) after loading to the cube, because creation of indexes normally takes a long time, which I can avoid when I am short of time.
    Can I do the above at times, and what are the impacts?
    If the load to the InfoCube is scheduled via the process chain on the other normal working days, is it going to fail or will it go through?
    Also, does deleting the contents of the cube delete the indexes?
    Thanks
    Note: As far as I understand, indexes are created to improve performance at the loading and query level.
    Your input will be appreciated.

    Hi Pawan,
    Please find my views below, inline with your questions.
    "delete index --> delete contents of the cube --> load data to the cube --> create index."
    I assume you are deleting the entire contents of the cube. If this is the normal pattern of loads to this cube and there are no other loads to it, you may consider the InfoCube setting "Delete InfoCube indexes before each data load and then refresh". You will find this setting on the Performance tab, in the create index batch option. Read the F1 help of the checkbox; it provides more info.
    "Can I manually delete the contents of the cube and load the data to the cube?" - Yes, you can.
    "Can I do the above at times, and what are the impacts?" - Impacts: lower query performance and loading performance, as you mentioned.
    "If the load to the InfoCube is scheduled via the process chain on the other normal working days, is it going to fail or will it go through?" - I don't quite understand the question, but I assume you mean: if you did a manual load, will there be a failure the next day? There wouldn't.
    "Also, does deleting the contents of the cube delete the indexes?" - Yes, it does.
    Pawan - you can skip creating indexes, but you will have slower query performance. However, if you have no further loads to this cube, you could create your indexes during business hours as well. I think building the indexes demands a lock on the cube, and since you are not loading anything else you should be able to manage it. Lastly, is there no way you can remodel this cube and flow... do you really need full data loads?
    "Note: As far as I understand, indexes are created to improve loading and query performance." - True.
    Hope it helps,
    Regards,
    Sunmit.

  • Is it possible to create indexes & use them on xml docs with namespaces?

    I have put an XML doc in a BDBXML container which looks like this:
    <ns1:note xmlns:ns1="http://www.testsch.org/ns">
    <ns1:to>Eric</ns1:to>
    <ns1:from>Brendan</ns1:from>
    <ns1:msg>How r u?</ns1:msg>
    </ns1:note>
    Now, I am creating an index on the element "to", as:
    addIndex "ns1" "to" node-element-equality-string
    Though the index has been shown as getting created using the listIndexes command, if I lookup that index with the following command:
    lookupIndex node-element-equality-string "ns1" "to"
    the output is:
    0 objects returned for eager index lookup 'node-element-equality-string'
    Whereas, if I do the whole procedure without the namespaces in the document & the commands, the result is fine:
    1 object returned for eager index lookup 'node-element-equality-string'
    Can someone please tell me whether using namespaces, the indexes can be created or not? If yes, how?
    Thanks,
    Dev

    Hi Dev,
    When using XML, the prefix for the namespace is really just syntactic sugar. The actual namespace in your example is "http://www.testsch.org/ns", and that's the value that you need to use when creating your index:
    addIndex "http://www.testsch.org/ns" "to" node-element-equality-string
    John
