PointBase automatic table creation mapping reliability?

If I specify this in the weblogic-cmp-rdbms-jar.xml file for Automatic Table Creation:
<field-map>
  <cmp-field>ADDRESS</cmp-field>
  <dbms-column>U_ADDRESS</dbms-column>
  <dbms-column-type>VARCHAR(30)</dbms-column-type>
</field-map>
<field-map>
  <cmp-field>addresses</cmp-field>
  <dbms-column>U_ADDRESSES</dbms-column>
  <dbms-column-type>RAW(5000)</dbms-column-type>
</field-map>
<create-default-dbms-tables>True</create-default-dbms-tables>
PointBase creates a table w/ the two columns defined as:
U_ADDRESS VARCHAR(150)
U_ADDRESSES BLOB(1000)
If I specify this:
<field-map>
  <cmp-field>ADDRESS</cmp-field>
  <dbms-column>U_ADDRESS</dbms-column>
  <dbms-column-type>VARCHAR(30)</dbms-column-type>
</field-map>
<field-map>
  <cmp-field>addresses</cmp-field>
  <dbms-column>U_ADDRESSES</dbms-column>
  <dbms-column-type>RAW(5000)</dbms-column-type>
</field-map>
<create-default-dbms-tables>True</create-default-dbms-tables>
<database-type>POINTBASE</database-type>
PointBase creates a table w/ the two columns defined as:
U_ADDRESS VARCHAR(150)
U_ADDRESSES BLOB(1)
What's wrong? And how reliable is the PointBase mapping?

Hi
The <dbms-column-type> element is not a general-purpose way to specify
the desired column type for a cmp-field. It is meant to tell the
container to generate code that handles the cmp-field as a
java.sql.Blob in the persistence layer.
What default table creation does is examine the Java type of the
cmp-field and then make its best guess at a DBMS column type that
will support it. In the case of POINTBASE, byte[] fields become BLOB.
Here's the conversion that the container uses to map
Java types to POINTBASE column types:
if (type.isPrimitive()) {
    if (type == Boolean.TYPE) return "BOOLEAN";
    if (type == Byte.TYPE) return "SMALLINT";
    if (type == Character.TYPE) return "CHAR(1)";
    if (type == Double.TYPE) return "DOUBLE PRECISION";
    if (type == Float.TYPE) return "FLOAT";
    if (type == Integer.TYPE) return "INTEGER";
    // PointBase DECIMAL is DECIMAL(38,0); 10**38 is approx 2**126, so it's big enough
    if (type == Long.TYPE) return "DECIMAL";
    if (type == Short.TYPE) return "SMALLINT";
} else {
    if (type == String.class) return "VARCHAR(150)";
    if (type == BigDecimal.class) return "DECIMAL(38,19)";
    if (type == Boolean.class) return "BOOLEAN";
    if (type == Byte.class) return "SMALLINT";
    if (type == Character.class) return "CHAR(1)";
    if (type == Double.class) return "DOUBLE PRECISION";
    if (type == Float.class) return "FLOAT";
    if (type == Integer.class) return "INTEGER";
    if (type == Long.class) return "DECIMAL";
    if (type == Short.class) return "SMALLINT";
    if (type == java.util.Date.class) return "DATE";
    if (type == java.sql.Date.class) return "DATE";
    if (type == java.sql.Time.class) return "TIME";
    if (type == java.sql.Timestamp.class) return "TIMESTAMP";
    if (type.isArray() &&
        type.getComponentType() == Byte.TYPE) return "BLOB";
    if (!ClassUtils.isValidSQLType(type) &&
        java.io.Serializable.class.isAssignableFrom(type)) return "BLOB";
}
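To see why the descriptors above produced VARCHAR(150) and BLOB regardless of the requested types, here is that mapping as a self-contained sketch. The class and method names are illustrative, not WebLogic's actual internals, and the ClassUtils check is replaced by a plain Serializable test:

```java
import java.io.Serializable;
import java.math.BigDecimal;

public class PointBaseTypeMap {
    // Illustrative re-implementation of the container mapping quoted above.
    static String columnTypeFor(Class<?> type) {
        if (type.isPrimitive()) {
            if (type == Boolean.TYPE) return "BOOLEAN";
            if (type == Byte.TYPE || type == Short.TYPE) return "SMALLINT";
            if (type == Character.TYPE) return "CHAR(1)";
            if (type == Double.TYPE) return "DOUBLE PRECISION";
            if (type == Float.TYPE) return "FLOAT";
            if (type == Integer.TYPE) return "INTEGER";
            if (type == Long.TYPE) return "DECIMAL"; // DECIMAL(38,0) in PointBase
        } else {
            if (type == String.class) return "VARCHAR(150)";
            if (type == BigDecimal.class) return "DECIMAL(38,19)";
            if (type == java.sql.Timestamp.class) return "TIMESTAMP";
            if (type.isArray() && type.getComponentType() == Byte.TYPE) return "BLOB";
            if (Serializable.class.isAssignableFrom(type)) return "BLOB"; // stand-in for the ClassUtils test
        }
        return null;
    }

    public static void main(String[] args) {
        // A String cmp-field (ADDRESS) always becomes VARCHAR(150):
        System.out.println(columnTypeFor(String.class));
        // A byte[] cmp-field (addresses) always becomes BLOB:
        System.out.println(columnTypeFor(byte[].class));
    }
}
```

So the <dbms-column-type> values VARCHAR(30) and RAW(5000) never enter into the column choice; only the cmp-field's Java type does.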
"Brian L" <[email protected]> wrote:

Similar Messages

  • Table creation - order of events

    I am trying to get some help on the order in which I should carry out table creation tasks.
    Say I create a simple table:
    create table title (
      title_id number(2) not null,
      title varchar2(10) not null,
      effective_from date not null,
      effective_to date not null,
      constraint pk_title primary key (title_id)
    );
    I believe I should populate the data, then create my index:
    create unique index title_title_id_idx on title (title_id asc)
    But I have read that Oracle will automatically create an index for my primary key if I do not do so myself.
    At what point does Oracle create the index on my behalf and how do I stop it?
    Should I only apply the primary key constraint after the data has been loaded as well?
    Even then, if I add the primary key constraint will Oracle not immediately create an index for me when I am about to create a specific one matching my naming conventions?

    yeah but just handle it the way you would handle any other constraint violation - with the EXCEPTIONS INTO clause...
    SQL> select index_name, uniqueness from user_indexes
      2  where table_name = 'APC'
      3  /
    no rows selected
    SQL> insert into apc values (1)
      2  /
    1 row created.
    SQL> insert into apc values (2)
      2  /
    1 row created.
    SQL> alter table apc add constraint apc_pk primary key (col1)
      2  using index ( create unique index my_new_index on apc (col1))
      3  /
    Table altered.
    SQL> insert into apc values (2)
      2  /
    insert into apc values (2)
    ERROR at line 1:
    ORA-00001: unique constraint (APC.APC_PK) violated
    SQL> alter table apc drop constraint apc_pk
      2  /
    Table altered.
    SQL> insert into apc values (2)
      2  /
    1 row created.
    SQL> alter table apc add constraint apc_pk primary key (col1)
      2  using index ( create unique index my_new_index on apc (col1))
      3  /
    alter table apc add constraint apc_pk primary key (col1)
    ERROR at line 1:
    ORA-02437: cannot validate (APC.APC_PK) - primary key violated
    SQL> @%ORACLE_HOME%/rdbms/admin/utlexcpt.sql
    Table created.
    SQL> alter table apc add constraint apc_pk primary key (col1)
      2  using index ( create unique index my_new_index on apc (col1))
      3  exceptions into EXCEPTIONS
      4  /
    alter table apc add constraint apc_pk primary key (col1)
    ERROR at line 1:
    ORA-02437: cannot validate (APC.APC_PK) - primary key violated
    SQL> select * from apc where rowid in ( select row_id from exceptions)
      2  /
          COL1
             2
             2
    SQL>
    All this is in the documentation. Find out more.
    Cheers, APC

  • Automatic TO Creation and Confirm for the 321 Mvt type

    Hello Experts,
    I'd like to know the customisation involved for automatic TO creation and confirmation for the 321 movement type, in which QM is activated.
    I have already activated in OMKZ as below:
    1. Automatic TO for 321 mvt type
    2. TO item to be confirmed immediately
    3. Propose confirmation
    Table T333: AUTTA, SQUIT, and VQUIT are activated.
    Please let me know of any other settings to be performed.
    Regards
    Krishna

    Hello Experts,
    I am trying to create the automatic TO and TO confirmation for the mvt type 321.
    I made the below settings.
    I have activated in OMKZ as below:
    1. Automatic TO for 321 mvt type - "A"
    2. TO item to be confirmed immediately - tick mark
    3. Propose confirmation - tick mark
    Table T333: AUTTA, SQUIT, and VQUIT are activated.
    I have activated in OMKX as below:
    Table T321-TBFKZ Immediate TO Creation with "A"
    T321-TAFKZ TR creation with "X"
    After doing the settings I am not able to see the posting change number in the material document created for the mvt type 321.
    Please let me know how to create the posting change notice for the material document and how to convert the posting change notice to an automatic TO and confirmation.
    Thanks & Regards
    Krishna
    Edited by: Hariharan krishna on May 25, 2011 8:01 AM

  • BAPI for automatic PR creation with multiple files from Excel sheets

    I have written a program for automatic PR creation with the help of a BAPI, which picks data from an Excel sheet and creates a PR. It picks Excel files from one folder (Files) for PR creation and moves them to another folder (Files success).
    Now the problem: if the folder (Files) contains one Excel sheet, the PR is created fine, but if the folder (Files) has multiple Excel sheets, the 1st PR is created correctly while the next PRs contain all the line items of the 1st PR, 2nd PR, and so on. Can anyone suggest where the problem in the code is?
    types : begin of x_file ,
            key1(10),
            pur_grp(4),
            maktx(40),
            plant(4) ,
            req(10),
            s_qty(13),
            wbs(24),
            gl(10),
            trackno(10),
            supl(4),
            deladd(255).
    types : end of x_file .
      data : str5 type char10.
      data : mm type char2.
      data : yyyy type char4.
      data : dd type char2.
      data : str9 type char10.
      data : str6 type char10.
      data : month type char2.
      data : year type char4.
      year = sy-datum+0(4).
      month = sy-datum+4(2).
      dd = sy-datum+6(2).
      yyyy = sy-datum+0(4).
      mm = sy-datum+4(2).
      dd = sy-datum+6(2).
      clear str6 .
      clear str5.
      concatenate dd '.' month '.' year into str5 .
      concatenate  yyyy mm dd into str6 .
    DATA : file type rlgrap-filename .
    data : it_file type table of x_file .
    data : wa_file type x_file .
    data : it_header type table of x_file .
    data : wa_header type x_file .
    *&  Internal Table For Define Row and Coloum Postion
    data: col_start type i value 1 ,
          row_start type i value 2,
          col_end type i value 256 ,
          row_end type i value 65000 .
    *&  Internal Table For Retrieve  Data From Excel
    *data: excel_bdcdata like kcde_cells occurs 0 with header line.
    *data: excel_bdcdata1 like kcde_cells occurs 0 with header line.
    data: excel_bdcdata like ALSMEX_TABLINE occurs 0 with header line.
    data: excel_bdcdata1 like ALSMEX_TABLINE occurs 0 with header line.
    data: it_index type i.
    DATA : IT_INDEX1 TYPE I.
    *&  Define Field Symbol
    field-symbols: <fs> .
    data :   bdcdata like bdcdata    occurs 0 with header line.
    data :   messtab like bdcmsgcoll occurs 0 with header line.
    data : req_items type table of bapiebanc .
    data : wa_req_items type bapiebanc .
    data : req_acc_asg type table of bapiebkn.
    data : wa_req_acc_asg type bapiebkn.
    DATA : RETURN LIKE BAPIRETURN OCCURS 0 WITH HEADER LINE .
    *data : return type table of     bapireturn.
    *data : wa_return type bapireturn .
    data : number type ebeln .
    *****************************MOVE FILES******************************
    data : xsource type string .
    data : xdestin type string .
    data : destin1 type string .
    data : destin2 type string .
    DATA : DEST1 TYPE STRING.
    DATA : DEST11 TYPE STRING.
    DATA : DEST2 TYPE STRING.
    DATA : DEST22 TYPE STRING.
    data : sou_dir_name like SALFILE-LONGNAME.
    data : tar_dir_name like SALFILE-LONGNAME.
    data : tar_dir_name1 like SALFILE-LONGNAME.
    data : sou_filename like EDI_PATH-PTHNAM .
    data : tar_filename like EDI_PATH-PTHNAM .
    data : filename1  type string .
    data : tar_filename1 like EDI_PATH-PTHNAM .
    data : file_itab like SALFLDIR occurs 0 with header line.
    data : wa_file_itab like SALFLDIR.
    data : file_count type i .
    data : dir_count type i.
    data : dir_table like sdokpath occurs 0 with header line.
    data : file_table like SDOKPATH occurs 0 with header line.
    data : wa_file_table like sdokpath.
    data : strr type string ,
           str1 type string ,
           str2 type string ,
           str3 type string .
    DATA : PA_VAL TYPE CHAR1.
    sou_dir_name = 'D:\barcodes\files\'.
    tar_dir_name = 'D:\barcodes\files-success\'.
        "success folder.
    CALL FUNCTION 'TMP_GUI_DIRECTORY_LIST_FILES'
      EXPORTING
        DIRECTORY  = sou_dir_name
        FILTER     = '.'
      IMPORTING
        FILE_COUNT = file_count
        DIR_COUNT  = dir_count
      TABLES
        FILE_TABLE = file_table
        DIR_TABLE  = dir_table
      EXCEPTIONS
        CNTL_ERROR = 1
        OTHERS     = 2.
    IF SY-SUBRC <> 0.
    ENDIF.
    loop at file_table into wa_file_table.
    clear  :  strr , str1 , str2 , str3 .
      strr = wa_file_table-PATHNAME .
      concatenate sou_dir_name strr into str1 .
      concatenate tar_dir_name strr into str2 . " success
      concatenate tar_dir_name1 strr into str3 .         " failed
    FILE = STR1 .
    *start-of-selection.
    *&  Function For Retrieve Data From Excel
    CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
      EXPORTING
        filename                      = FILE
        i_begin_col                   = col_start
        i_begin_row                   = row_start
        i_end_col                     = col_end
        i_end_row                     = row_end
      tables
        intern                        = excel_bdcdata
    EXCEPTIONS
       INCONSISTENT_PARAMETERS       = 1
       UPLOAD_OLE                    = 2
       OTHERS                        = 3.
      IF sy-subrc NE 0.
    WRITE : / 'File Error'.
    EXIT.
    ENDIF.
      loop at excel_bdcdata.
        translate excel_bdcdata to upper case .
        move excel_bdcdata-col to it_index.
        assign component it_index of  structure  wa_file to <fs> .
        move excel_bdcdata-value to <fs>.
        at end of row.
          append wa_file to it_file .
            clear wa_file.
        endat.
      endloop.
    sort it_file by key1. "pur_grp maktx plant  .
    it_header[] = it_file[].
    delete adjacent duplicates from it_header comparing key1 pur_grp maktx
    plant .
    data : h_item(5) type n .
    data : h_pack(10) type n .
    data : line_no(5) type n .
    data : ln_no(5) type n .
    loop at it_header into wa_header .
    ln_no = 1.
    h_item = h_item + 10.
    h_pack = h_pack + 1.
    wa_req_items-preq_item = h_item .
    wa_req_items-doc_type = 'BOM'.
    wa_req_items-pur_group = wa_header-pur_grp .
    wa_req_items-MATERIAL = wa_header-maktx .
    wa_req_items-plant = wa_header-plant .
    wa_req_items-pckg_no =  h_pack .
    wa_req_items-deliv_date = str6 .
    wa_req_items-item_cat = '0'.
    wa_req_items-acctasscat = 'P'.
    *wa_req_items-distrib = '2' .
    **wa_req_items-gr_ind = 'X'.
    wa_req_items-ir_ind = '2'.
    wa_req_items-purch_org = 'TISL' .
    wa_req_items-QUANTITY =  wa_header-s_qty.
    wa_req_items-PREQ_NAME =  wa_header-req.
    wa_req_items-SUPPL_PLNT = wa_header-supl.
    wa_req_items-trackingno = wa_header-trackno.
    append wa_req_items to req_items .
    clear wa_req_items.
    wa_req_acc_asg-preq_item = h_item .
    wa_req_acc_asg-g_l_acct = wa_file-gl .
    WA_req_acc_asg-wbs_elem  = wa_header-wbs .
    append wa_req_acc_asg to req_acc_asg .
    clear wa_req_acc_asg.
    h_pack = h_pack + 1  .
    endloop.
    clear ln_no .
    ***BREAK-POINT.
    *& BAPI FUNCTION
    call function 'BAPI_REQUISITION_CREATE'
    importing
       number                               = number
      tables
        requisition_items                   = req_items
       requisition_account_assignment       = req_acc_asg
       return                               = return .

    Can someone please give me a solution?
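    For what it's worth, the symptom described (each PR after the first carrying the earlier PRs' line items) is the classic result of appending to work tables that are never cleared between passes: excel_bdcdata, it_file, req_items and req_acc_asg are declared once and only ever appended to inside the loop at file_table. A minimal sketch of that pattern and of the fix, in Java rather than ABAP and with hypothetical names:

```java
import java.util.ArrayList;
import java.util.List;

public class AccumulatingItems {
    // Buggy variant: one item list is reused across files without being
    // cleared, so each later "PR" carries all earlier files' items.
    static List<List<String>> buildBuggy(List<List<String>> files) {
        List<List<String>> prs = new ArrayList<>();
        List<String> items = new ArrayList<>(); // declared once, like the ABAP internal tables
        for (List<String> file : files) {
            items.addAll(file);                 // APPEND without a prior REFRESH/CLEAR
            prs.add(new ArrayList<>(items));
        }
        return prs;
    }

    // Fixed variant: the work list is cleared at the top of every pass,
    // the equivalent of REFRESH-ing the internal tables per file.
    static List<List<String>> buildFixed(List<List<String>> files) {
        List<List<String>> prs = new ArrayList<>();
        List<String> items = new ArrayList<>();
        for (List<String> file : files) {
            items.clear();
            items.addAll(file);
            prs.add(new ArrayList<>(items));
        }
        return prs;
    }

    public static void main(String[] args) {
        List<List<String>> files = List.of(List.of("item-A"), List.of("item-B"));
        System.out.println(buildBuggy(files)); // second PR contains both items
        System.out.println(buildFixed(files)); // second PR contains only its own item
    }
}
```

    In the ABAP above, the equivalent fix would be REFRESH statements for excel_bdcdata, it_file, req_items and req_acc_asg at the start of the loop at file_table.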

  • Can Oracle 9i enable schema/table creation to be transacted?

    If anyone can help with this, that would be much appreciated.
    So - the server has disabled autocommit and commits/rollbacks are handled by the application. Even so, Oracle 9i is not rolling back changes that have (i) created schemas/users and/or (ii) created tables.
    Worse still, it seems to be performing a partial rollback - some tables in a schema are left with data and others are not.
    Now, this may be caused by our server creating tables for indexing while adding data to some existing tables - that is, the table definitions have auto-committed the transaction to date, also committing the table insertions/updates.
    After some delving, the JDBC driver has the following method: dataDefinitionCausesTransactionCommit - for Pointbase and other databases, this returns false - for Oracle it returns true.
    The questions are therefore:
    1) Is there a solution with Oracle 9i that enables schema and table creation to be transacted?
    2) Does Oracle 10g allow definition clauses to be transacted?

    Actually I believe there is a limited way to make DDL statements transaction based via the CREATE SCHEMA command.
    From the 9.2 SQL manual >>
    Use the CREATE SCHEMA to create multiple tables and views and perform multiple grants in a single transaction.
    To execute a CREATE SCHEMA statement, Oracle executes each included statement. If all statements execute successfully, Oracle commits the transaction. If any statement results in an error, Oracle rolls back all the statements.
    <<
    This may be of some limited use to you, but your process should probably be changed to track of the DDL and to undo (drop) any created objects if a rollback is issued.
    HTH -- Mark D Powel --

  • Problem with table creation using CTAS parallel hint

    Hi,
    We have a base table (CARDS_TAB) with 1,083,565,232 rows, and created a replica table called T_CARDS_NEW_201111. But the count in the new table is 1,083,566,976, a difference of 1,744 additional rows. I have no idea how the new table can contain more rows than the original table!
    Oracle version is 11.2.0.2.0.
    Both table count were taken after table creation. Script that was used to create replica table is:
    CREATE TABLE T_CARDS_NEW_201111
    TABLESPACE T_DATA_XLARGE07
    PARTITION BY RANGE (CPS01_DATE_GENERATED)
    SUBPARTITION BY LIST (CPS01_CURRENT_STATUS)
    SUBPARTITION TEMPLATE
      (SUBPARTITION T_NULL VALUES (NULL),
       SUBPARTITION T_0 VALUES (0),
       SUBPARTITION T_1 VALUES (1),
       SUBPARTITION T_3 VALUES (3),
       SUBPARTITION T_OTHERS VALUES (DEFAULT)
      PARTITION T_200612 VALUES LESS THAN (TO_DATE(' 2007-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        TABLESPACE T_DATA_XLARGE07
      ( SUBPARTITION T_200612_T_NULL VALUES (NULL)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_200612_T_0 VALUES (0)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_200612_T_1 VALUES (1)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_200612_T_3 VALUES (3)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_200612_T_OTHERS VALUES (DEFAULT)    TABLESPACE T_DATA_XLARGE07 ),
      PARTITION T_200701 VALUES LESS THAN (TO_DATE(' 2007-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        TABLESPACE T_DATA_XLARGE07
      ( SUBPARTITION T_200701_T_NULL VALUES (NULL)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_200701_T_0 VALUES (0)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_200701_T_1 VALUES (1)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_200701_T_3 VALUES (3)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_200701_T_OTHERS VALUES (DEFAULT)    TABLESPACE T_DATA_XLARGE07 )
      PARTITION T_201211 VALUES LESS THAN (TO_DATE(' 2012-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        TABLESPACE T_DATA_XLARGE07
      ( SUBPARTITION T_201211_T_NULL VALUES (NULL)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_201211_T_0 VALUES (0)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_201211_T_1 VALUES (1)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_201211_T_3 VALUES (3)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_201211_T_OTHERS VALUES (DEFAULT)    TABLESPACE T_DATA_XLARGE07 ),
      PARTITION T_201212 VALUES LESS THAN (TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        TABLESPACE T_DATA_XLARGE07
      ( SUBPARTITION T_201212_T_NULL VALUES (NULL)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_201212_T_0 VALUES (0)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_201212_T_1 VALUES (1)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_201212_T_3 VALUES (3)    TABLESPACE T_DATA_XLARGE07,
        SUBPARTITION T_201212_T_OTHERS VALUES (DEFAULT)    TABLESPACE T_DATA_XLARGE07 )
    NOCACHE
    NOPARALLEL
    MONITORING
    ENABLE ROW MOVEMENT
    AS
    SELECT /*+ PARALLEL (T,40) */ SERIAL_NUMBER     ,
      PIN_NUMBER        ,
      CARD_TYPE         ,
      DENOMINATION      ,
      DATE_GENERATED    ,
      LOG_PHY_IND       ,
      CARD_ID           ,
      OUTLET_CODE       ,
      MSISDN            ,
      BATCH_NUMBER      ,
      DATE_SOLD         ,
      DIST_CHANNEL      ,
      DATE_CEASED       ,
      DATE_PRINTED      ,
      DATE_RECHARGE     ,
      LOGICAL_ORDER_NR  ,
      DATE_AVAILABLE    ,
      CURRENT_STATUS    ,
      ACCESS_CODE        from CARDS_TAB T
    /
    Also, base table CARDS_TAB has a primary key on the SERIAL_NUMBER column. When trying to create a primary key on the new table it throws an exception:
    ALTER TABLE T_CARDS_NEW_201111 ADD
      CONSTRAINT T_PK2_1
    PRIMARY KEY  (SERIAL_NUMBER) USING INDEX
    TABLESPACE T_INDEX_XLARGE07
    PARALLEL 10 NOLOGGING;
      CONSTRAINT TP_PK2_1
    ERROR at line 2:
    ORA-02437: cannot validate (T_PK2_1) - primary key violated
    Thanks in advance.
    With Regards,
    Farooq Abdulla

    For parallel processing, the documentation suggests the use of automatic degree of parallelism (determined by the system at run time) or choosing a power-of-2 value.
    Look at Florian's post in the neighbouring thread "How to Delete Duplicate rows from a Table" to locate the violating rows (seemingly duplicated due to parallel processing).
    Regards
    Etbin

  • Issue with DWH DB tables creation

    Hi,
    While generating data warehouse tables (sec 4.10.1, How to Create Data Warehouse Tables), I ended up with an error that states "Creating Datawarehouse tables Failure".
    But when I checked the log file 'generate_ctl.log', it has the below message:
    "Schema will be created from the following containers:
    Oracle 11.5.10
    Oracle R12
    Universal
    Conflict(s) between containers:
    Table Name : W_BOM_ITEM_FS
    Column Name: INTEGRATION_ID.
    The column properties that are different: [keyTypeCode]
    Success!"
    When I checked in the DWH database, I could find DWH tables but am not sure whether all tables were created.
    Can anyone tell me whether my DWH tables are all created? How many tables would be created for the above EBS containers?
    Also, do I need to drop any of the EBS containers to create the DWH tables successfully?
    The installation guide states that when DWH table creation fails, 'createtables.log' won't be created. But in my case this log file got created!
    Edited by: userOO7 on Nov 19, 2008 2:41 PM

    I saw the same message. I also noticed I am unable to load any BOM Items into that fact table. It looks like the BOM_EXPLODER package call is not keeping any rows in BOM_EXPLOSION_TEMP, so no rows are loaded into that fact table. Someone needs to log an SR for this.
    *****START LOAD SESSION*****
    Load Start Time: Wed Nov 19 17:13:42 2008
    Target tables:
    W_BOM_ITEM_FS
    READER_2_1_1> BLKR_16019 Read [0] rows, read [0] error rows for source table [BOM_EXPLOSION_TEMP] instance name [mplt_BC_ORA_BOMItemFact.BOM_EXPLOSION_TEMP]
    READER_2_1_1> BLKR_16008 Reader run completed.
    TRANSF_2_1_1> DBG_21216 Finished transformations for Source Qualifier [mplt_BC_ORA_BOMItemFact.SQ_BOM_EXPLOSION_TEMP]. Total errors [0]
    WRITER_2_*_1> WRT_8167 Start loading table [W_BOM_ITEM_FS] at: Wed Nov 19 17:13:42 2008
    WRITER_2_*_1> WRT_8168 End loading table [W_BOM_ITEM_FS] at: Wed Nov 19 17:13:42 2008
    WRITER_2_*_1> WRT_8035 Load complete time: Wed Nov 19 17:13:42 2008
    LOAD SUMMARY
    ============
    WRT_8036 Target: W_BOM_ITEM_FS (Instance Name: [W_BOM_ITEM_FS])
    WRT_8044 No data loaded for this target
    WRITER_2__1> WRT_8043 ****END LOAD SESSION*****
    WRITER_2_*_1> WRT_8006 Writer run completed.
    I now see it is covered in the release notes:
    http://download.oracle.com/docs/cd/E12127_01/doc/bia.795/e12087/chapter.htm#CHDFJHHB
    1.3.31 No Data Is Loaded Into W_BOM_ITEM_F And W_BOM_ITEM_FS
    The mapping SDE_ORA_BOMItemFact needs to call a Stored Procedure (SP) in the Oracle EBS instance, which inserts rows into a global temporary table (duration SYS$SESSION, that is, the data will be lost if the session is closed). This Stored Procedure does not have an explicit commit. The Stored Procedure then needs to read the rows in the temporary table into the warehouse.
    In order for the mapping to work, Informatica needs to share the same connection for the SP and the SQL qualifier during ETL. This feature was available in the Informatica 7.X release, but it is not available in Informatica release 8.1.1 (SP4). As a result, W_BOM_ITEM_FS and W_BOM_ITEM_F are not loaded properly.
    Workaround
    For all Oracle EBS customers:
    Open package body bompexpl.
    Look for text "END exploder_userexit;", scroll a few lines above, and add a "commit;" command before "EXCEPTION".
    Save and compile the package.

  • TABLE NOT MAPPED.

    Hello all,
    I am facing a problem while extracting data through EJBQL;
    whenever I try it, I get a TABLE NOT MAPPED exception.
    Can anyone tell me what exactly is not mapped? I am, however, able to store data in the database through the EntityManager.persist(Object) method.
    I am using jboss-4.0.5 (Hibernate for persistence) and Oracle 9i.
    Thanks & Regards,
    Varun Narang.

    Hi Jyotika,
    This usually happens when you have a lookup table which has more than one display field. You get this error because you have to create a compound field, which might not be set in the map. To create a compound field, first check whether all the individual fields in the destination field BankDetails<X,Y,Z> (X, Y, Z are individually mapped) are mapped; then right-click anywhere in the source fields and select "Compound fields". This creates a BankDetails field and automatically maps it to the destination fields. Sometimes it also maps the values; if not, you have to map them manually.
    If you have any problem please let me know.
    Regards,
    CHARAN

  • Internal list. Automatic BP creation

    Hi all!
    I am using internal lists with different companies contacts, particularly email addresses, for automatic BP creation. In  CRM 4.0 I may map email address to the PERSON type of business partner, but I need to automatically create ORGANIZATION business partner type, not the PERSON.
    Does anybody know if there is any solution in v 4.0 for this issue? Or may be CRM 5.0 allows to this?
    Thanks!

    Hi,
    Please check the Solution. Only those SIDs which are in that particular Solution will be displayed.
    Regards,
    Shyam.

  • ER BC4J Automatic LOV Creation

    I believe it is easy to extend the already defined JDev BC4J structures to implement a new feature,"Automatic LOV Creation", which will help developers to define LOVs based on foreign keys in an easy and comprehensive manner.
    Lets assume we have two Entity Object A and B. Entity Object A references Entity Object B through a foreign key.
    The developer would like to create a View Object which includes references to Entity Object B.
    Currently the developer chooses in the View Object definition wizard first Entity Object A as "Updatable" and then adds Entity Object B beneath it as "Reference".
    Let us assume there is another option named "Create LOV" which activates when the "Reference" option is selected.
    The developer selects the "Create LOV" option and then proceeds to include attributes from Entity Object B which will participate in the View Object, the same way s/he does now. When the VO is created it will automatically include a LOV whose attributes are the selected attributes from Entity Object B that participate in the new VO, and the LOV attribute values will map to the corresponding values of the newly created View Object.
    I strongly believe the automatic creation of LOVs dealing with foreign keys is a very useful feature which is easy to implement, it will "sell" well and is worth the investment.
    Best Regards
    Elias.

    The very nice example you are referring to is not appropriate. In the case of a cascading LOV we will have to perform the steps of the example.
    I would like to change the title of the ER to "Automatic LOV Declaration" because I believe it is more appropriate.
    I will outline an example by using the HR schema borrowing the terminology from the example you provided.
    Create a "vanilla" ADF BC project based on Countries and Regions and update the CountriesVO to display Region_Name from the Regions Entity.
    The attributes selected for the VO are the following:
    CountryId
    CountryName
    RegionId
    RegionId1 (I set this attribute as hidden)
    RegionName
    Notice at the time we select the Regions Entity the Reference indication is set.
    So after we select attributes from the Regions Entity the JDev Engine knows the following:
    1. It is supposed to retrieve data from another Entity (Regions), therefore it knows the LOV VO --> So it can Declare the LOV for us.
    2. The RegionId of the Countries VO receives values from the Regid of the Entity (Regions) --> So it can link the LOV return value
    3. The attribute RegionName which belongs to another Entity (Regions) is displayed --> So it can add the attribute to be displayed in the LOV VO
    In the case of several "Reference" Entities the JDev Engine would create several LOVs, one for each referenced Entity.
    Best Regards
    Elias.

  • Automate partition creation

    Hi,
    Is there any example out there on how to automate the creation of time based partitions?
    As an example, I would like to create one partition for each quarter and, when a new quarter starts, have a new partition automatically added with the same attributes as the previous one.
    I would also like to have the oldest 4 partitions automatically deleted as soon as the total number of partitions reaches 13.
    Any thoughts or links?
    Thanks,
    Philippe

    Hi Philippe,
    The Project REAL Analysis Services Technical Drilldown discusses one implementation of such automation:
    http://www.microsoft.com/technet/prodtechnol/sql/2005/realastd.mspx
    >>
    Project REAL: Analysis Services Technical Drilldown
    By Dave Wickert, Microsoft Corporation
    SQL Server Technical Article
    Published: September 2005
    Appendix A: Automating Partition Creation
    The Project REAL design uses partitioning quite heavily. The production system has more than 220 extremely large partitions. The sample data uses over 125 partitions that are only tens of thousands of records per partition. The full production system has 180 to 200 million records per partition. With so many partitions, extensive typing was required to create each partition every time we generated a new schema.
    So, as the saying goes, “When the going gets rough, a programmer writes a program.”
    This appendix documents the BuildASPartition SQL Server 2005 Integration Services package that we created to automate the building of Analysis Services measure group partitions in SQL Server 2005 Analysis Services databases. This package synchronizes the relational partition scheme with the Analysis Services partition scheme. It loops through the relational database looking for a weekly fact table partition (by using a table naming convention). If a relational table is found, it looks to see if an Analysis Services measure group partition already exists (using the same naming convention). If not, it constructs and executes a XMLA script that creates it.
    >>
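    The rolling-window bookkeeping behind such a package (which quarterly partition to add, which ones to drop once the count reaches 13) can be sketched independently of Analysis Services. This is a minimal Python sketch, not SSAS/XMLA code; the "Sales_" partition naming convention is a hypothetical example, and the actual create/drop would be issued as XMLA against the measure group:

    ```python
    from datetime import date

    def quarter_name(d: date) -> str:
        """Partition name for the quarter containing d, e.g. 'Sales_2024Q2'.
        The 'Sales_' prefix is a hypothetical naming convention."""
        q = (d.month - 1) // 3 + 1
        return f"Sales_{d.year}Q{q}"

    def rolling_window(existing, today, max_parts=13, drop_oldest=4):
        """Return (to_create, to_drop): the current quarter's partition is
        created if missing, and once the total reaches max_parts the oldest
        drop_oldest partitions are dropped, as described in the question."""
        current = quarter_name(today)
        parts = sorted(set(existing))
        to_create = [] if current in parts else [current]
        all_parts = sorted(set(parts) | set(to_create))
        to_drop = all_parts[:drop_oldest] if len(all_parts) >= max_parts else []
        return to_create, to_drop
    ```

    A scheduled job would run this once per day, translate `to_create`/`to_drop` into XMLA Create/Delete commands, and execute them against the cube.
    
    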

  • Automatic PO creation of free text PR

    Hi!
    We have all our PRs in free text since we are not yet using material masters. One of our purchasing organizations will only order from one vendor, and they will enter that vendor and the price in the PR. Is it possible to create a PO automatically from that PR without a material or source list?
    Sincerely
    Anders

    Hi
    Thank you very much. Is it possible to restrict the automatic creation to a certain purchasing organization?
    If I understand you right:
    1) Create PR with item category D Service
    2) Activate automatic PO creation in ML91
    That will create POs for all PRs created as a service?
    Sincerely
    Anders

  • Problem during automatic PO creation

    Hi,
                I have run MRP for a certain material using MD03. The message shows the MRP run was successful. Then I checked the status using transaction MD04, and it shows the PR number for that material. But when I try to create an automatic PO through ME59N using that PR number, the message says no suitable PR exists. I have also set the Automatic PO indicator in the Material Master Purchasing view.
                 What is the reason behind it?

    Hi,
    For automatic PO creation, follow the points below.
    1) Material Master purchasing view tick automatic PO check box
    2) Vendor Master purchasing view tick automatic PO check box.
    3) Maintain Source list for Vendor & Material.
    4) Maintain Purchase Info Record.
    5) Create a Purchase Requisition.
    6) Use T.Code ME59N and execute for the PR and vendor; you will be able to create the PO automatically.
    Ensure that a Purchase Info Record exists.
    If you have more than one vendor, then fix only one vendor in the source list.
    Did you set source determination during PR creation?
    Regards,
    Biju K
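    The checklist above is pure customizing, but the gating logic ME59N effectively applies can be modeled in a few lines. This is not SAP code, just a hypothetical Python model of the prerequisites Biju lists (field names are invented for illustration):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class PurchaseReq:
        """Hypothetical model of the data needed to auto-convert a PR to a PO."""
        material_auto_po: bool                 # 1) Material master: Automatic PO flag
        vendor_auto_po: bool                   # 2) Vendor master: Automatic PO flag
        source_list_vendors: list = field(default_factory=list)  # 3) source list
        has_info_record: bool = False          # 4) purchase info record exists

    def can_auto_convert(pr: PurchaseReq):
        """Return (ok, reasons): all prerequisites must hold, and the source
        list must fix exactly one vendor."""
        reasons = []
        if not pr.material_auto_po:
            reasons.append("material master: automatic PO flag not set")
        if not pr.vendor_auto_po:
            reasons.append("vendor master: automatic PO flag not set")
        if len(pr.source_list_vendors) != 1:
            reasons.append("source list must fix exactly one vendor")
        if not pr.has_info_record:
            reasons.append("no purchase info record")
        return (not reasons, reasons)
    ```

    If any reason comes back non-empty, ME59N would report "no suitable purchase requisitions exist", which matches the symptom in the question.
    
    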

  • Error during automatic po creation ME59N

    Dear All,
    The following errors are shown during automatic PO creation via ME59N:
    Message text                         Message Class   Message Number
    PO could not be created
    PO header data still faulty          MEPO            2
    Enter Validity End                   ME              83
    Enter Latest GR Date                 ME              83
    Can delivery date be met?            ME              40
    Requisition could not be converted
    Please suggest how to solve the above.
    with regards,
    Pradeep Bhardwaj

    Hi
    From the messages, it seems that the system is expecting a Validity End date at PO header level and a Latest GR Date at item level.
    Do you have these two fields set as required in the PO?
    If these are required fields, then you need a process to populate them automatically (programmatically) to resolve the error.
    Thanx
    Prasad

  • SRM Classic - Automatic PO creation in R/3 only for PReqs based on catalogs

    Dear Experts,
    in SRM 7.0 Classic Scenario, I am facing the following question w.r.t. automatic PO creation on the R/3 side:
    Is it possible to have automatic PO creation in R/3 only for those purchase requisition items that are based on shopping cart items from a third-party punch-out catalog?
    The idea is that the requisitioner on the SRM EBP side shops in a catalog. Once the SC is approved and the purchase requisition is created in the R/3 system, a PO should automatically be created based on that PReq. This would save time for the buyer, who does not need to pay attention to this document, because the purchase from the catalog guarantees that the pre-agreed conditions are applied.
    But PReqs that are based on free-text shopping cart EBP items must not be taken into account for automatic PO creation.
    Is there a way to distinguish between the catalog-based and the non-catalog-based PReq positions?
    Thank you.
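    The distinction itself is simple once the items carry a catalog reference. As a minimal, hedged sketch (not SRM code: it assumes each item exposes a `catalog_id` attribute that is empty for free-text items, which is an assumption about the data model):

    ```python
    def split_by_catalog(items):
        """Partition PReq items into catalog-based and free-text lists.
        Items are plain dicts here; a non-empty 'catalog_id' marks a
        catalog (punch-out) item, anything else counts as free text."""
        catalog, free_text = [], []
        for item in items:
            (catalog if item.get("catalog_id") else free_text).append(item)
        return catalog, free_text
    ```

    Only the `catalog` list would then be handed to the automatic PO conversion; the `free_text` items stay with the buyer.
    
    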

    Hello Ashutosh,
    thank you very much for that idea!
    I would have the following question w.r.t. such a configuration:
    If I configure that PO creation for the complete SC should happen for a certain purchasing group, would it be possible to arrange that when a catalog item is put into the shopping cart, only that purchasing group gets defaulted? (The idea behind this is to leave the free-text shopping carts to the already known purchasing groups, keeping the purchase requisition as the backend object for them, and additionally to create new purchasing groups linked to the catalog purchases, with POs as the backend objects.)
    Thanks again for the help!

Maybe you are looking for

  • Customer service...an open letter

    Hi, I have been a bt customer for most of my life via phone........moved to bt broadband 3 years ago and had no problems untill 6 months ago, but as not always in the country was not a big issue untill recently, after numerous call to customer servic

  • Question on Logic of seperation in OOAbap-Selection screen

    Hello guys I like to ask you one question i found and get your ideas. If you look at the codes for creating an ALV it always recommends you to seperate logic and business layer etc.. this is the new way of creating an ALV So you have  a model view an

  • Problem with JRE (SIGSEGV) in RedHat 6.2 and Oracle 8.1.6

    Hello, All! I try to run installer of Oracle 8.1.6 on my RedHat 6.2 and receive next message: bash$ ./runInstaller bash$ Initializing Java Virtual Machine from ../stage/Components/oracle.swd.jre/ 1.1.8/1/DataFiles/Expanded/linux/bin/jre. Please wait.

  • Compare Dates and select the max date ?

    Hello, I am trying to write a script and will compare the dates in " eff_startdt" and give me the lastest date at the outcome. I have data that some service locations have more than one contract date and I need to get the latest dated conract dates t

  • Carry out new pricing not working in Me23n

    Dear Experts, I am trying to carryout new pricing in purchase order i.e iam going into Item conditions of PO and hit update button on the botton of the conditions tab and select B as the pricing type . The value changes to the new rice and then I sav