Physical Model Data Type Issue

Hi,
I have created warehouse tables from E-Business Suite tables for a General Ledger model and imported them into the physical model. All of my key/ID columns, which are NUMBERs, have been converted to decimal.
Ex: journal_id in my Oracle table is NUMBER (e.g. 1234). But after importing into OBIEE and using the View Data option, all the ID columns (data type NUMBER) show their data in decimal format (1234.00).
Ex: If I look at check number data in Answers, it shows as 112456.00 instead of 112456.
Has anyone had this kind of issue? How can it be resolved?
Is there a default setting causing this problem, and how do I find it?
OBIEE version : 10.1.3.2
Appreciate your help.
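Note: one common workaround, assuming the ID columns were imported into the physical layer as DOUBLE, is to change the column data type back to INT in the physical layer, or to cast the value in the report. A minimal sketch of the cast in an Answers column formula (the repository names here are hypothetical):

CAST("GL"."GL Journals"."JOURNAL_ID" AS INTEGER)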

Hi Sandeep,
When I opened the rpd in offline mode, it said the repository could only be opened as read-only. I clicked Yes and entered the admin user name and password. But in the physical model, when I right-click any table, I can't find any place to modify it; I tried Properties as well. I am not able to change the data type because it is read-only. How do I do this offline?
FYI,
I opened the rpd while OC4J was stopped, and also while OC4J was running, but both times I was unable to change the data type.
Thanks!!

Similar Messages

  • Data Type issue in XMLA

    Hi
In XMLA, I am getting a problem with the source data type for measures.
A measure group gets its data from 3 different partitions, and these 3 partitions are populated from 3 different views. One view is populated from the base table; for the remaining 2 we gave a default value with a cast, ex: CAST(0 AS BIGINT) AS MeasureName.
In the DSV the data type for all 3 tables is System.Int64, but when I generate the deployment script I get Integer as the source data type for the measure in the XMLA. What might be the problem?
Please guide me.
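For reference, a minimal sketch of keeping the constant-measure views type-consistent with the base table's column, so the DSV and the generated script agree (the thread does not confirm the root cause; view and column names here are hypothetical):

CREATE VIEW dbo.vFactPartition2 AS
SELECT f.KeyCol,
       CAST(0 AS BIGINT) AS MeasureName  -- same type as the base table's measure column
FROM   dbo.SomeBaseTable AS f;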

Can you post the exact error you're getting?
    Visakh

  • Data Types issue in MDM Import Manager

    Hi All,
I need to import data from an .xls file to MDM. I have a field Postal Code in the MDM main table, and the same field exists in the source file. In Import Manager the data type is displayed as Numeric in the source and Text in the destination. When I map them, the fields get mapped, but the values are not inserted/updated in the destination MDM.
Can anyone help me with this?
    Regards,
    Venki.

    Hi Venki,
please check whether you are mapping against the correct destination field.
1. If you want to import/update records in the main table, you need to choose the Excel sheet with the data as the Source Hierarchy and the Main Table as the Destination Hierarchy. Then map all the fields you need - you must have at least one key field (e.g. Material Number, Vendor Number, etc.) against which you will do the matching. Then make sure that you map the needed field (Postal Code) to the destination field (F). Finally, choose the key field as the matching field, choose the import action Create/Update All Fields, and execute the upload.
This should work.
    BR,
    ILIN

  • Data Type Issue Calling Function DATE_GET_WEEK in FOX

    Hi,
I am calling DATE_GET_WEEK in a FOX formula in Integrated Planning.
    DATA START_DATE TYPE 0CALDAY.
    DATA START_WEEK TYPE 0CALWEEK.
    CALL FUNCTION DATE_GET_WEEK
    EXPORTING
    DATE = START_DATE
    IMPORTING
    WEEK = START_WEEK.
I get an error saying:
Type of parameters WEEK () and variable START_WEEK (N) are inconsistent
I get a similar error for DATE ().
Please tell me what data type to use when passing and receiving values from this function module.
Once I get the value, I want to convert the WEEK into type 0CALWEEK. Please tell me how to do this as well.
    Regards, Dheeraj

    Hi Mti,
    I tried what you recommended.
But I get the error: TYPE SCAL is not permitted.
In FOX formulas, as far as I know, only the following are permitted:
1. I for integer
2. F for float
3. D for date
4. Characteristics
    Regards, Dheeraj

  • SAP BW Time characteristics data type issue in BO OLAP Universe

    Hi Ingo,
We have time characteristics in our SAP BW queries, for example 0CALMONTH with data type NUMC.
When we create an OLAP Universe, these objects get Character as their data type and do not work properly when used as filters: the filter works on strings instead of calendar months.
If I try to change the data type in the Universe, we are not able to execute Web Intelligence queries due to a Driver Not Capable error.
Appreciate your help.
    Regards,
    Ravi Kumar Garre

    Hi,
Please find my inline answers:
- are you entering the value manually or did you select the value from a list of values?
I am selecting the values from the list.
- based on which item is the list of values?
The list of values comes from the BI InfoObject 0CALMONTH.
- what about creating a variable for this in the underlying BW query?
We have created a BI variable and found that its condition operator is Between, so when the WebI query for this object is executed there are two prompts, one for the FROM value and another for the TO value. If I give Dec 2007 for FROM and leave TO blank, I get data for all months instead of Dec 2007 onwards.
- did you trace the Web Intelligence part to see the details?
I do not have authorization to log in on the server and find the trace. I will ask our BO Administrator for the MDX log file.
    Thanks & Regards,
    Ravi Kumar Garre

  • Unicode and non-Unicode string data types issue with 2008 SSIS Package

    Hi All,
I am converting a 2005 SSIS package to 2008. I have a task with SQL Server as the source and Oracle as the destination. I copy the data from a SQL Server view with an nvarchar(10) field to a varchar(10) field of an Oracle table. The package executes fine on my local machine when I use a data conversion task to convert to DT_STR, but when I deploy the dtsx file on the server and run it from a SQL Server Agent job, it gives me the Unicode and non-Unicode string data types error for that field. I have checked the registry settings and they are the same on my local machine and on the server. I tried both the Data Conversion task and the Derived Column task, with no luck. Please suggest what changes are required in my package to run it from the SQL Agent job.
    Thanks.

What are Unicode and non-Unicode data formats?
Unicode:
A Unicode character takes more bytes to store in the database. As we all know, many global industries want to grow their business worldwide, and they widen it by providing services to customers worldwide, supporting languages like Chinese, Japanese, Korean and Arabic. Many websites these days support international languages to do their business and attract more customers, and that makes life easier for both parties.
To store customer data, the database must support a mechanism for storing international characters. Storing these characters is not easy, and many database vendors had to revise their strategies and come up with new mechanisms to store these international characters. Big vendors like Oracle, Microsoft, IBM and others started providing international character support so that data can be stored and retrieved accordingly, avoiding any hiccups while doing business with international customers.
The difference in storing character data between Unicode and non-Unicode depends on whether non-Unicode data is stored by using double-byte character sets. All non-East Asian languages and the Thai language store non-Unicode characters in single bytes. Therefore, storing these languages as Unicode uses two times the space that is used specifying a non-Unicode code page. On the other hand, the non-Unicode code pages of many other Asian languages specify character storage in double-byte character sets (DBCS). Therefore, for these languages, there is almost no difference in storage between non-Unicode and Unicode.
Encoding formats:
Some of the common encoding formats for Unicode - UCS-2, UTF-8, UTF-16, UTF-32 - have been made available by database vendors to their customers. For SQL Server 7.0 and higher versions, Microsoft uses the UCS-2 encoding format to store Unicode data; under this mechanism, all Unicode characters are stored using 2 bytes.
Unicode data can be encoded in many different ways. UCS-2 and UTF-8 are two common ways to store bit patterns that represent Unicode characters. Microsoft Windows NT, SQL Server, Java, COM, and the SQL Server ODBC driver and OLE DB provider all internally represent Unicode data as UCS-2.
The options for using SQL Server 7.0 or SQL Server 2000 as a backend server for an application that sends and receives Unicode data encoded as UTF-8 include the following. For example, if your business runs a website with ASP pages, this is what happens:
If your application uses Active Server Pages (ASP) and you are using Internet Information Server (IIS) 5.0 and Microsoft Windows 2000, you can add "<% Session.Codepage=65001 %>" to your server-side ASP script. This instructs IIS to convert all dynamically generated strings (example: Response.Write) from UCS-2 to UTF-8 automatically before sending them to the client.
If you do not want to enable sessions, you can alternatively use the server-side directive "<%@ CodePage=65001 %>".
Any UTF-8 data sent from the client to the server via GET or POST is also converted to UCS-2 automatically. The Session.Codepage property is the recommended method to handle UTF-8 data within a web application. This Codepage setting is not available on IIS 4.0 and Windows NT 4.0.
Sorting and other operations:
The effect of Unicode data on performance is complicated by a variety of factors, including the following:
1. The difference between Unicode sorting rules and non-Unicode sorting rules
2. The difference between sorting double-byte and single-byte characters
3. Code page conversion between client and server
Operations like >, <, and ORDER BY are resource intensive, and it is difficult to get correct results if code page conversion between client and server is not available.
Sorting lots of Unicode data can be slower than non-Unicode data, because the data is stored in double bytes. On the other hand, sorting Asian characters in Unicode is faster than sorting Asian DBCS data in a specific code page, because DBCS data is actually a mixture of single-byte and double-byte widths, while Unicode characters are fixed-width.
Non-Unicode:
Non-Unicode is exactly the opposite of Unicode. Non-Unicode types easily store languages like English, but not Asian languages that need more bytes per character to store correctly; otherwise truncation occurs.
Now, let's see some of the advantages of not storing the data in Unicode format:
1. It takes less space to store the data in the database, so we save a lot of disk space.
2. Moving database files from one server to another takes less time.
3. Backup and restore of the database take less time, which is good for DBAs.
Non-Unicode vs. Unicode Data Types: Comparison Chart
The primary difference between Unicode and non-Unicode data types is the ability of Unicode to easily handle the storage of foreign-language characters, which also requires more storage space.
Non-Unicode (char, varchar, text) vs. Unicode (nchar, nvarchar, ntext):
- Length: both store data in fixed or variable length.
- Padding: char data is padded with blanks to fill the field size (for example, if a char(10) field contains 5 characters, the system pads it with 5 blanks); nchar behaves the same. varchar stores the actual value and does not pad with blanks; nvarchar behaves the same.
- Storage: non-Unicode requires 1 byte per character; Unicode requires 2 bytes per character.
- Capacity: char and varchar can store up to 8000 characters; nchar and nvarchar can store up to 4000 characters.
- Non-Unicode is best suited for US English: "One problem with data types that use 1 byte to encode each character is that the data type can only represent 256 different characters. This forces multiple encoding specifications (or code pages) for different alphabets such as European alphabets, which are relatively small. It is also impossible to handle systems such as the Japanese Kanji or Korean Hangul alphabets that have thousands of characters." [1]
- Unicode is best suited for systems that need to support at least one foreign language: "The Unicode specification defines a single encoding scheme for most characters widely used in businesses around the world. All computers consistently translate the bit patterns in Unicode data into characters using the single Unicode specification. This ensures that the same bit pattern is always converted to the same character on all computers. Data can be freely transferred from one database or computer to another without concern that the receiving system will translate the bit patterns into characters incorrectly."
    https://irfansworld.wordpress.com/2011/01/25/what-is-unicode-and-non-unicode-data-formats/
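A quick way to see the storage difference between the two families (a T-SQL sketch; DATALENGTH returns bytes, not characters):

DECLARE @a VARCHAR(10) = 'hello';
DECLARE @b NVARCHAR(10) = N'hello';
SELECT DATALENGTH(@a) AS varchar_bytes,   -- 5 (1 byte per character)
       DATALENGTH(@b) AS nvarchar_bytes;  -- 10 (2 bytes per character, UCS-2)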

  • Function returning string.  Data type question

    Hello all,
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
Our database has parent records (master_member_record_id) and children of those records (member_record_id). I wrote a function which returns the master record for a child,
e.g. get_master(member_record_id). Simple enough.
I also wrote the opposite, get_member_records(master_member_record_id). Obviously this function has multiple records to return, so I have it return a LISTAGG string with ',' separation. They want to be able to use this function as follows:
select * from member_table where member_record_id in (get_member_records(master_member_record_id));
or something similar. I realize there is a data type issue here; I am wondering if this is even possible. What do you think?
Thanks in advance for your criticism/help,
    Chris

My disagreement is with how pipelined table functionality is used.
Instead of this (what it sounds like the OP is doing, but returning CSV format):
create or replace type TClientIdArray is table of number;
create or replace function GetClientIDs( parentID number ) return TClientIdArray is
  array TClientIdArray;
begin
  select client_id
  bulk collect into array
  from master_table
  where parent_id = parentID;
  return( array );
end;
A pipeline would look as follows:
create or replace type TClientIdArray is table of number;
create or replace function GetClientIDs( parentID number ) return TClientIdArray pipelined is
begin
  for c in (select client_id
            from master_table
            where parent_id = parentID) loop
    pipe row( c.client_id );
  end loop;
  return;
end;
The first method is in fact more sensible in this case, especially when used from PL/SQL: a single SQL call/context switch gets the list of client identifiers. The issues with the first method are:
    - cannot effectively deal with a large data set of client identifiers
    - would be suboptimal to use this function in subsequent SQL
    - if called by a client, a ref cursor (not collection) should be returned
But assuming that the list of client identifiers needs to be stepped through via complex PL/SQL app processing, using a (small) array is fine (assuming that concurrency/locking is not needed via a FOR UPDATE on the select that identifies the client identifiers to process).
A pipeline in this case does not do anything better. It introduces more moving parts: native PL/SQL can no longer be used to get the collection and step through it. A TABLE() SQL function is needed, plus a SQL statement and more context switching.
The pipeline is simply an unnecessary layer between the code wanting to use the client identifiers and the SQL statement supplying them. This approach of SQL -calls-> PIPELINE -calls-> MORE SQL is invariably wrong - unless the pipeline (PL/SQL code) does something very funky with the data from its SQL call, prior to piping it, that cannot be done in SQL.
If the "user" of the client identifiers is SQL code, then both of the above methods are flawed. As this code is already SQL, why use PL/SQL to call more SQL code? Simply use the SQL code/logic that supplies the client identifier list directly in the SQL code that needs the client identifiers. This means something like SQL JOIN, EXISTS, or IN clauses.

  • Data types in pi 7.1

    Hi,
    when do you use core, free and aggregated data types?
    Thanks in advance
    regards,
    Ramya Shenoy

    This may help you,
Introduction
SAP delivers descriptions of Global Data Types (GDTs) as Enterprise Services Repository (ESR) content to customers. Customers can create their own data types in the ES Repository and use SAP GDTs.
• Two ways of modeling data types are supported: the classical XSD style and the new CCTS (Core Component Technical Specification) style.
• Data types are classified as free-style (classical), core (the atoms of CCTS modeling), or aggregated (complex CCTS).
• Standard fault data types are no longer created automatically for every namespace (only when a fault message type is created).
• Additional functions in the data type editor provide better support for structural changes (e.g., moving subtrees).
More information:
SAP Help Portal: Global Data Types
According to this, we distinguish between the SAP Core Data Type and the SAP Global Data Type. The SAP Core Data Type, like the W3C (XSD) type, does not have any business semantics. The SAP GDT has business semantics; it can be based on either a core or an aggregated DT.
Free-style data types are directly based on the primitive data types, e.g., xsd:decimal. They do not need any further parameters to define themselves.
More information:
• SAP Help Portal: Core Data Types and Aggregated Data Types Recommendation
    When to use free, core and aggregated data types
Core data types are based on the CCTS specification. Aggregated data types have business semantics, are based on CDTs, and are the basis for application-specific data types SAP-wide.
Customers who adhere to the CCTS specification, or whose business scenarios deal with SAP backend systems, should try to reuse the existing core and aggregated data types delivered by SAP. However, for customers who wish to use their own schema for defining data types, free-style data types are the preferred approach.
When transferring business scenarios from XI 3.0/PI 7.0 to PI 7.1, all data types in the design objects are transferred as free-style data types by default.
When creating new objects in PI 7.1x, it is recommended to reuse the core and aggregated global data types provided by SAP.
The detailed list of SAP-delivered data types is available on the SAP Developer Network (SDN) at the ES Workplace.
    https://www.sdn.sap.com/irj/sdn/esworkplace
    Core data types
Core data types are based on representation terms which are ISO 15000-5 (ebCCTS) compliant.
SAP delivers core data types with the same names as the representation terms. Applying the standard methodology simplifies the implementation of cross-company processes.
CDTs are intermediate data types, a level above primitive data types like decimal, string, etc. They are defined by representation terms such as Amount, Identifier, etc. They are themselves not yet application-specific and are therefore referred to as context-free.
While creating a core data type based on any given representation term, we get the option either to use a default XSD type or to reuse an existing core data type.
For using the standard XSD types, we select Type as 'XSD Type'.
For reusing an already existing core data type, we select Type as 'Data Type'.
Prerequisite: define a dependency on the SWC SAP Global to reuse any of the standard core data types shipped by SAP.

  • Renaming custom data type

    All,
Is it possible to rename a Custom Data Type (programmatically or via a callback etc.)? Specifically, the container UUT. I want to call it something else for my program.
    Thanks

To rename a custom data type, you must have all files containing instances of that type open when you make the change; otherwise, when they are next opened, they will be considered to have a different type with the old name. Do you mean the UUT process model data type? I would recommend against changing that, since you would make your custom version of the process model less compatible with the default version. The end user/operator does not necessarily ever see the type name, so it's probably not worth the hassle in that case.
    Hope this helps,
    -Doug

  • SQL Dev Data Modeller:  Auto-generate Surrogate PKs in the Physical Model ?

How can I have the logical modeller allow the user to design with logical PKs, but then have surrogate primary keys auto-generated from sequences by the modeller when it comes to creating the physical model - as in conventional application design?
Without this facility, this tool is useless, IMO.
I want:
i). sequences to become the physical PKs by default, with what were the logical PKs in the logical model becoming a unique key in the physical model;
ii). this set by default when generating the physical model....
iii). ....with an option to turn this off on an entity-by-entity basis (as not all tables will necessarily require such a surrogate PK, so the logical PK may remain the physical PK).

    It is common practice that physical PKs in Oracle tables are defined from sequences (surrogate PKs), and that the logical PK from the entity becomes a unique key in the table.
    This may not always be the case in every application out there, and some people may disagree, but it is nonetheless a needed feature.
    My new Feature Request is therefore:
    I would like to see the following additions to the product.
1. In Preferences -> Data Modeler -> Model -> Logical, a flag that by default indicates whether the designer wishes to enable this feature (i.e. have all logical PKs converted to unique keys and replaced by sequence numbers in the physical model). This flag needs to be there since in real life, albeit erroneously IMO, some people will choose not to use this functionality.
    2. On every entity created in the model, there needs to be a flag that allows to override this default option, as not every table will require a surrogate PK to be generated. Being able to (re)set a flag located on the entity properties (perhaps under 'Engineer To'), will accomplish this.
    3. When Forward Engineering to the physical model from the logical, the following should happen.
ENTITY 1 ------------> TABLE 1
                     P * Surrogate PK
* Attribute 1 -----> U * Column 1
* Attribute 2 -----> U * Column 2
o Attribute 3 ---------> Column 3
    Here you can see,
    - Attributes 1 & 2 (the logical PK) of the entity become a unique key in the table (columns 1 & 2),
    - optional Attribute 3 becomes NULLable column 3,
    - and a physical surrogate PK column is added (type unbounded INTEGER, PRIMARY KEY constraint added).
    From entity DEPT as:   (Examples based on SCOTT schema)
    DEPTNO NUMBER(2) NOT NULL <-- Logical primary key on entity
    DNAME VARCHAR2(14)
    LOC VARCHAR2(13)
    CREATE TABLE DEPT
    (PK_DEPT INTEGER, -- New column becomes surrogate physical PK, driven from sequence defined later
    DEPTNO NUMBER(2) NOT NULL, -- Former logical PK becomes a UK
    DNAME VARCHAR2(14),
    LOC VARCHAR2(13))
    ALTER TABLE DEPT
    ADD CONSTRAINT PK_DEPT PRIMARY KEY (PK_DEPT) USING INDEX PCTFREE 0
    ALTER TABLE DEPT
    ADD CONSTRAINT UKLPK_DEPTNO UNIQUE (DEPTNO) USING INDEX PCTFREE 0 -- Former logical PK becomes a UK (constraint name reflects this)
    CREATE SEQUENCE PK_DEPT_SEQ
    CREATE TRIGGER PK_DEPT_SEQ_TRG
    BEFORE INSERT ON DEPT
    FOR EACH ROW
    WHEN (new.PK_DEPT IS NULL)
    BEGIN
    SELECT PK_DEPT_SEQ.NEXTVAL
    INTO :new.PK_DEPT
    FROM DUAL;
    -- Or from 11g onwards, simply,
    :new.PK_DEPT := PK_DEPT_SEQ.NEXTVAL;
    END;
    From entity EMP as:
    EMPNO NUMBER(4) NOT NULL -- Logical primary key on entity
    ENAME VARCHAR2(10)
    JOB VARCHAR2(9)
    MGR NUMBER(4)
    HIREDATE DATE
    SAL NUMBER(7,2)
    COMM NUMBER(7,2)
    DEPTNO NUMBER(2)
    CREATE TABLE EMP
    (PK_EMP INTEGER, -- New column becomes surrogate physical PK, driven from sequence defined later
    FK_DEPT INTEGER, -- New FK to surrogate PK in DEPT table (maybe NOT NULL depending on relationship with parent)
    EMPNO NUMBER(4) NOT NULL, -- Former logical PK becomes a UK
    ENAME VARCHAR2(10),
    JOB VARCHAR2(9),
    MGR NUMBER(4),
    HIREDATE DATE,
    SAL NUMBER(7,2),
    COMM NUMBER(7,2),
    DEPTNO NUMBER(2))
    ALTER TABLE EMP
    ADD CONSTRAINT PK_EMP PRIMARY KEY (PK_EMP) USING INDEX PCTFREE 0
    ALTER TABLE EMP
    ADD CONSTRAINT FK_DEPT FOREIGN KEY (FK_DEPT) REFERENCES DEPT (PK_DEPT)
    ALTER TABLE EMP
    ADD CONSTRAINT UKLPK_EMPNO UNIQUE (EMPNO) USING INDEX PCTFREE 0 -- Former logical PK becomes a UK (constraint name reflects this)
    CREATE SEQUENCE PK_EMP_SEQ
    CREATE TRIGGER PK_EMP_SEQ_TRG
    BEFORE INSERT ON EMP
    FOR EACH ROW
    WHEN (new.PK_EMP IS NULL)
    BEGIN
    SELECT PK_EMP_SEQ.NEXTVAL
    INTO :new.PK_EMP
    FROM DUAL;
    -- Or from 11g onwards, simply,
    :new.PK_EMP := PK_EMP_SEQ.NEXTVAL;
    END;
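As an aside, on Oracle 12c and later the same surrogate-key pattern could be generated without a sequence/trigger pair by using an identity column - a sketch only, not part of the original feature request:
CREATE TABLE DEPT
(PK_DEPT INTEGER GENERATED BY DEFAULT AS IDENTITY, -- replaces PK_DEPT_SEQ + PK_DEPT_SEQ_TRG
DEPTNO NUMBER(2) NOT NULL, -- former logical PK, kept unique below
DNAME VARCHAR2(14),
LOC VARCHAR2(13),
CONSTRAINT PK_DEPT PRIMARY KEY (PK_DEPT),
CONSTRAINT UKLPK_DEPTNO UNIQUE (DEPTNO))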
    [NOTE:   I use PCTFREE 0 to define the index attributes for the primary & unique keys since the assumption is that they will in general not get updated, thereby allowing for the denser packing of entries in the indexes and the (albeit minor) advantages that go with it.
    This is certainly always true of a sequence-driven primary key (as it is by its very nature immutable), but if the unique key is likely to be frequently updated, then this PCTFREE option could be user-configurable on a per table basis (perhaps under Table Properties -> unique Constraints).
    For non-sequence-driven primary keys, this storage option could also appear under Table Properties -> Primary Key.
    I notice no storage options exist in general for objects, so you may like to consider adding this functionality overall].
    Associated Issues :
    - Preferences, 'Naming Standard: Templates' should be updated to allow for the unique key/constraint to be called something different, thus highlighting that it comes from the logical PK. I've used 'UKLPK' in this example.
    - Mark the physical PK as being generated from a sequence; perhaps a flag under Table Properties -> Primary Key.
    - When Forward Engineering, if an entity exists without a logical PK, the forward engineering process should halt with a fatal error.

  • Issue when SelectOneChoice is used with Domain data type in JDev 11.1.2.0.0

    Hi,
    I am facing one issue while working with SelectOneChoice along with Custom Domain data type. Sample app to simulate the issue is available at http://www.filejumbo.com/Download/6FDF6ECF2922BD24
    Issue Details.
The base view object's attribute is of type CustomString, and another static VO's attribute is attached to it as an LOV. The LOV attribute is of type String. Because of this data type mismatch between the LOV VO attribute and the base VO attribute, we initially faced a class cast exception in the screen:
Cannot convert <<LOV Attr. Val.>> of type class java.lang.String to class model.domain.common.CustomString
This occurs not only for this kind of SelectOneChoice but also for InputText fields whose underlying VO attribute is of type CustomString (i.e. any custom domain type).
On raising this in the JDeveloper forum, I came to know that adding a default Oracle converter to the UI component takes care of converting to the respective data type. After adding the converter for the InputText and SelectOneChoice components, the issue was resolved. This was our lesson while working in JDeveloper version 11.1.1.3.0. The converter we used:
<f:converter converterId="oracle.genericDomain"/>
When we tried the same scenario in JDev version 11.1.1.4.0, SelectOneChoice started working fine even without the Oracle converter (i.e. it set the base attribute to the LOV attribute's value, with the proper domain data type). The converter was still required for InputText, however.
When we tried the same scenario in the new JDeveloper version 11.1.2.0.0, it gave a class cast exception when we didn't have the Oracle converter on the SelectOneChoice. Adding it removed the class cast exception, but although a selection is made in the SelectOneChoice, the VO attribute is not updated with the new value; instead it is set to null (checked in the setter method of the view row impl with a breakpoint). Because of this, after a selection is made, when we try to read the attribute value from the VO on button click, it always returns null.
We have also tried our own converters, but there is no change in the behavior.
The above misbehavior can be observed either by printing the value programmatically or by refreshing the SelectOneChoice (giving its id as a partial trigger to itself with autosubmit set to true), so that the selected value is reset to null regardless of the selection made.
    For convenience, Issue details with Sample application is shared. Shared link : http://www.filejumbo.com/Download/6FDF6ECF2922BD24
    Shared folder contains
    1. Sample App developed on Jdev 11.1.1.4.0 to ensure it didn’t give this error.
    2. Sample App developed on Jdev 11.1.2.0.0 to simulate this error.
    3. Error details in a document.
Can anybody have a look at this and tell me why this misbehavior occurs and whether it is a bug? If so, is there any workaround available so we can continue development?
    Thanks in Advance.
    Raghu
    Edited by: Raguraman on Sep 10, 2011 10:31 AM

Sorry for the late reply, John and Frank. Yes, I did. Thank you.
One more detail:
I tested the behavior in JDeveloper 11.1.2.0.0. The recent surprise is that the Select One Choice behaves perfectly when used in a grid layout and fails to work in a form layout. I am surprised that the behavior of the component varies based on the way it refers to the binding.
    for form layout,
    value=#{bindings.
    for grid layout,
    value=#{row.bindings.
    The bug details (#/title) are Bug 12968871 - RUNTIME CONVERSION FAILURE WHEN USING CUSTOM DOMAIN OBJECT VALIDATION IN EO
    Edited by: Raguraman on Sep 12, 2011 8:23 PM
    Edited by: Raguraman on Sep 12, 2011 8:31 PM

  • Lose data types after model saving

    Hi all.
I tried to import a model from erwin 7.3 into Data Modeler version 3.0.0.665.
After the import I have this log:
    Oracle SQL Developer Data Modeler 3.0.0.665
    Oracle SQL Developer Data Modeler Import Log
    Date and Time: 2011-09-20 16:31:22 MSD
    Design Name: DPO
    Import Finished
    Errors: 0
    Warnings: 0
I receive a model similar to the erwin model, but after saving and reopening, almost all data types are set to UNKNOWN. What's wrong?
I tried changing the type of one column from UNKNOWN to NUMBER and saving/reopening again; that change was saved.
How can I preserve the source data types after importing the model?
    screenshots:
    before saving http://s016.radikal.ru/i336/1109/dc/16cf0ab0c7df.jpg
    after saving http://s011.radikal.ru/i315/1109/7a/808bded51f7b.jpg
    Edited by: Finch on 20.09.2011 6:15

    Hello,
I have no definitive idea as to why you are losing the datatypes, though I suspect it has to do with the Design Rules settings under the Tools option.
My suggestion is that you generate the DDL from Erwin, import that, and save; all data types should then save - they have for me. I have also found DDL import to be the most reliable, along with importing from Oracle Designer and from Oracle database data dictionaries.
If you have logical objects you want to transfer over, then I would advise exporting from Erwin to Oracle Designer and importing that into SQL Data Modeler.
If you go through the threads on this forum, you will see you are not alone in finding issues with Erwin import.
    Hope that helps.
    Yusef

  • Data Modeler Logical Data Type confusion!

    I don't get it.
    When defining a logical model, I want to assign data types to the attributes in my model.
    I understand a logical datatype like money, and that a logical datatype might be implemented differently in different databases. The concept makes perfect sense to me.
    I pick a datatype from the Logical Types.
It ignores the logical datatype that I picked and puts in another datatype instead. I'm guessing that it's a physical mapping. I would understand the logical-to-relational mapper doing that, but I don't understand it happening at this point in the model life cycle.
Let's say I pick Money. It puts Double into the logical datatype, not Money.
If I pick Date or DateTime, it puts Date into the logical datatype, so what is the point of giving me two types to pick from?
    Seems kind of wonky.

    David,
We will soon publish a document on how the data types work.
If you go to Tools > Types Administration, you will see that a logical type is mapped to a native type and a native type is mapped to a logical type.
The logical type MONEY is mapped to DOUBLE for Oracle.
Types and domain files can be customised.
One could argue that the logical name MONEY should show up and not the native name. Showing the native name has the benefit that you see what the logical type is mapped to. Both approaches could have their supporters, no?
    Kind regards,
    René De Vleeschauwer
    SQL Developer Data Modeling.

  • Using Complex Data Types in Import JavaBean Model

    Hi,
I have searched and read forums and weblogs related to importing a JavaBean model, but I am not clear about how to use complex data types like ArrayList, Collection, etc. in my JavaBean.
If I use these complex data types in my bean, then when creating the model in Web Dynpro it displays the complex data elements as a model relation. I don't know how to use this model relation in my WD project.
Could anyone please explain a step-by-step solution for using a complex data type (used in a bean) in a WD project?
    Thanks,
    Krishna Kumar

    Hi Krishna,
Valery's blog contains sample code ( http://www.jroller.com/resources/s/silaev/Employees.zip )
    Another blogs from this area:
    /people/anilkumar.vippagunta2/blog/2005/09/02/java-bean-model-importer-in-web-dynpro
    /people/valery.silaev/blog/2005/08/30/javabean-model-import-when-it-really-works
    And forum topics:
    Import JavaBean Model
    Problem Importing JavaBean Model in NetWeaver Developer Studio
    Issue on "Import JavaBean Model"
    import  JavaBean Model: no executable Methods?
    JavaBeans Model Import
    POLL : JavaBean Model Importer
    JavaBean-Model
    Invalid Class - Javabean not available for import
    WebDynpro Using JavaBean Model ->Please Help
    Best regards, Maksim Rashchynski.

  • Data Type displayed as unknown in Data models

    Hi,
When I import a DDL file, some of the data types are not interpreted correctly in the data model. For example, the data types DATETIME and SIGNEDNUMBER are displayed as UNKNOWN. I tried adding logical type mappings for SIGNEDNUMBER (Tools -> Types Administration), but it still displays as UNKNOWN. Is there any solution for this issue?
    Thanks,
    Parvathy

    Hi Philip,
This is the SQL statement. The columns flagged below with an "-- imported as UNKNOWN" comment (bold in the original post) are the ones coming up as unknown. I'm using Oracle 11g.
    CREATE TABLE PS_JOB (
    "EMPLID" CHAR(11)
    ,"EMPL_RCD" NUMBER(3)
    ,"EFFDT" DATE(10)
    ,"EFFSEQ" NUMBER(3)
    ,"PER_ORG" CHAR(3)
    ,"DEPTID" CHAR(10)
    ,"JOBCODE" CHAR(6)
    ,"POSITION_NBR" CHAR(8)
    ,"SUPERVISOR_ID" CHAR(11)
    ,"HR_STATUS" CHAR(1)
    ,"APPT_TYPE" CHAR(1)
    ,"MAIN_APPT_NUM_JPN" NUMBER(3)
    ,"POSITION_OVERRIDE" CHAR(1)
    ,"POSN_CHANGE_RECORD" CHAR(1)
    ,"EMPL_STATUS" CHAR(1)
    ,"ACTION" CHAR(3)
    ,"ACTION_DT" DATE(10)
    ,"ACTION_REASON" CHAR(3)
    ,"LOCATION" CHAR(10)
    ,"TAX_LOCATION_CD" CHAR(10)
    ,"JOB_ENTRY_DT" DATE(10)
    ,"DEPT_ENTRY_DT" DATE(10)
    ,"POSITION_ENTRY_DT" DATE(10)
    ,"SHIFT" CHAR(1)
    ,"REG_TEMP" CHAR(1)
    ,"FULL_PART_TIME" CHAR(1)
    ,"COMPANY" CHAR(3)
    ,"PAYGROUP" CHAR(3)
    ,"BAS_GROUP_ID" CHAR(3)
    ,"ELIG_CONFIG1" CHAR(10)
    ,"ELIG_CONFIG2" CHAR(10)
    ,"ELIG_CONFIG3" CHAR(10)
    ,"ELIG_CONFIG4" CHAR(10)
    ,"ELIG_CONFIG5" CHAR(10)
    ,"ELIG_CONFIG6" CHAR(10)
    ,"ELIG_CONFIG7" CHAR(10)
    ,"ELIG_CONFIG8" CHAR(10)
    ,"ELIG_CONFIG9" CHAR(10)
    ,"BEN_STATUS" CHAR(4)
    ,"BAS_ACTION" CHAR(3)
    ,"COBRA_ACTION" CHAR(3)
    ,"EMPL_TYPE" CHAR(1)
    ,"HOLIDAY_SCHEDULE" CHAR(6)
    ,"STD_HOURS" NUMBER(7)
    ,"STD_HRS_FREQUENCY" CHAR(5)
    ,"OFFICER_CD" CHAR(1)
    ,"EMPL_CLASS" CHAR(3)
    ,"SAL_ADMIN_PLAN" CHAR(4)
    ,"GRADE" CHAR(3)
    ,"GRADE_ENTRY_DT" DATE(10)
    ,"STEP" NUMBER(2)
    ,"STEP_ENTRY_DT" DATE(10)
    ,"GL_PAY_TYPE" CHAR(6)
    ,"ACCT_CD" CHAR(25)
    ,"EARNS_DIST_TYPE" CHAR(1)
    ,"COMP_FREQUENCY" CHAR(5)
    ,"COMPRATE" NUMBER(19)
    *,"CHANGE_AMT" SIGNEDNUMBER(20)*
    *,"CHANGE_PCT" SIGNEDNUMBER(8)*
    ,"ANNUAL_RT" NUMBER(19)
    ,"MONTHLY_RT" NUMBER(19)
    ,"DAILY_RT" NUMBER(19)
    ,"HOURLY_RT" NUMBER(19)
    ,"ANNL_BENEF_BASE_RT" NUMBER(19)
    ,"SHIFT_RT" NUMBER(19)
    ,"SHIFT_FACTOR" NUMBER(5)
    ,"CURRENCY_CD" CHAR(3)
    ,"BUSINESS_UNIT" CHAR(5)
    ,"SETID_DEPT" CHAR(5)
    ,"SETID_JOBCODE" CHAR(5)
    ,"SETID_LOCATION" CHAR(5)
    ,"SETID_SALARY" CHAR(5)
    ,"SETID_EMPL_CLASS" CHAR(5)
    ,"REG_REGION" CHAR(5)
    ,"DIRECTLY_TIPPED" CHAR(1)
    ,"FLSA_STATUS" CHAR(1)
    ,"EEO_CLASS" CHAR(1)
    ,"FUNCTION_CD" CHAR(2)
    ,"TARIFF_GER" CHAR(2)
    ,"TARIFF_AREA_GER" CHAR(3)
    ,"PERFORM_GROUP_GER" CHAR(2)
    ,"LABOR_TYPE_GER" CHAR(1)
    ,"SPK_COMM_ID_GER" CHAR(9)
    ,"HOURLY_RT_FRA" CHAR(3)
    ,"ACCDNT_CD_FRA" CHAR(1)
    ,"VALUE_1_FRA" CHAR(5)
    ,"VALUE_2_FRA" CHAR(5)
    ,"VALUE_3_FRA" CHAR(5)
    ,"VALUE_4_FRA" CHAR(5)
    ,"VALUE_5_FRA" CHAR(5)
    ,"CTG_RATE" NUMBER(3)
    ,"PAID_HOURS" NUMBER(7)
    ,"PAID_FTE" NUMBER(8)
    ,"PAID_HRS_FREQUENCY" CHAR(5)
    ,"UNION_FULL_PART" CHAR(1)
    ,"UNION_POS" CHAR(1)
    ,"MATRICULA_NBR" NUMBER(5)
    ,"SOC_SEC_RISK_CODE" CHAR(3)
    ,"UNION_FEE_AMOUNT" NUMBER(9)
    ,"UNION_FEE_START_DT" DATE(10)
    ,"UNION_FEE_END_DT" DATE(10)
    ,"EXEMPT_JOB_LBR" CHAR(1)
    ,"EXEMPT_HOURS_MONTH" NUMBER(3)
    ,"WRKS_CNCL_FUNCTION" CHAR(1)
    ,"INTERCTR_WRKS_CNCL" CHAR(1)
    ,"CURRENCY_CD1" CHAR(3)
    ,"PAY_UNION_FEE" CHAR(1)
    ,"UNION_CD" CHAR(3)
    ,"BARG_UNIT" CHAR(4)
    ,"UNION_SENIORITY_DT" DATE(10)
    ,"ENTRY_DATE" DATE(10)
    ,"LABOR_AGREEMENT" CHAR(6)
    ,"EMPL_CTG" CHAR(6)
    ,"EMPL_CTG_L1" CHAR(6)
    ,"EMPL_CTG_L2" CHAR(6)
    ,"SETID_LBR_AGRMNT" CHAR(5)
    ,"WPP_STOP_FLAG" CHAR(1)
    ,"LABOR_FACILITY_ID" CHAR(10)
    ,"LBR_FAC_ENTRY_DT" DATE(10)
    ,"LAYOFF_EXEMPT_FLAG" CHAR(1)
    ,"LAYOFF_EXEMPT_RSN" CHAR(11)
    ,"GP_PAYGROUP" CHAR(10)
    ,"GP_DFLT_ELIG_GRP" CHAR(1)
    ,"GP_ELIG_GRP" CHAR(10)
    ,"GP_DFLT_CURRTTYP" CHAR(1)
    ,"CUR_RT_TYPE" CHAR(5)
    ,"GP_DFLT_EXRTDT" CHAR(1)
    ,"GP_ASOF_DT_EXG_RT" CHAR(1)
    ,"ADDS_TO_FTE_ACTUAL" CHAR(1)
    ,"CLASS_INDC" CHAR(1)
    ,"ENCUMB_OVERRIDE" CHAR(1)
    ,"FICA_STATUS_EE" CHAR(1)
    ,"FTE" NUMBER(8)
    ,"PRORATE_CNT_AMT" CHAR(1)
    ,"PAY_SYSTEM_FLG" CHAR(2)
    ,"BORDER_WALKER" CHAR(1)
    ,"LUMP_SUM_PAY" CHAR(1)
    ,"CONTRACT_NUM" CHAR(25)
    ,"JOB_INDICATOR" CHAR(1)
    ,"WRKS_CNCL_ROLE_CHE" CHAR(30)
    ,"BENEFIT_SYSTEM" CHAR(2)
    ,"WORK_DAY_HOURS" NUMBER(7)
    ,"REPORTS_TO" CHAR(8)
    ,"FORCE_PUBLISH" DATE(10)
    ,"JOB_DATA_SRC_CD" CHAR(3)
    ,"ESTABID" CHAR(12)
    ,"SUPV_LVL_ID" CHAR(8)
    ,"SETID_SUPV_LVL" CHAR(5)
    ,"ABSENCE_SYSTEM_CD" CHAR(3)
    ,"POI_TYPE" CHAR(5)
    ,"HIRE_DT" DATE(10)
    ,"LAST_HIRE_DT" DATE(10)
    ,"TERMINATION_DT" DATE(10)
    ,"ASGN_START_DT" DATE(10)
    ,"LST_ASGN_START_DT" DATE(10)
    ,"ASGN_END_DT" DATE(10)
    ,"LDW_OVR" CHAR(1)
    ,"LAST_DATE_WORKED" DATE(10)
    ,"EXPECTED_RETURN_DT" DATE(10)
    ,"EXPECTED_END_DATE" DATE(10)
    ,"AUTO_END_FLG" CHAR(1)
    *,"LASTUPDDTTM" DATETIME(26)*
    ,"LASTUPDOPRID" CHAR(30)
    )
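One workaround worth trying (a sketch only; the thread does not confirm it): pre-edit the generated DDL before import so the PeopleSoft-specific types are replaced with types the Data Modeler understands, for example:

,"CHANGE_AMT" NUMBER(20) -- was SIGNEDNUMBER(20)
,"CHANGE_PCT" NUMBER(8) -- was SIGNEDNUMBER(8)
,"LASTUPDDTTM" TIMESTAMP -- was DATETIME(26)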
