JSF/ADF - Referencing a Map with Integer keys

I'm generating several ADF input text boxes from a HashMap using Java code in my managed bean. The Map contains Integer keys and String values. Each input text corresponds to a key/value pair.
I'm trying to build a string to bind the text box to the String object, but I'm struggling with the EL syntax for Maps. I know a Map with String keys would look like this:
#{bean.object.map['stringKey']}
But is it possible to specify a non-String key?
Thank you.

Hi,
why don't you try it? My assumption is that yes, it will work, because Integer objects have a toString() method.
Frank

I'm building the binding string in my bean, so the Integer's value is inserted, resulting in:
#{bean.object.map[1]}
I don't believe the EL supports this for Maps. At least I wasn't able to get it to work. I also tried:
#{bean.object.map['1']}
This suggests the string '1', which isn't right either. My guess is that it isn't possible, but I'm hoping someone can prove me wrong.
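For reference, a minimal sketch of one common workaround (the bean, property, and key values below are illustrative, not from this thread): the EL generally evaluates a bracketed numeric literal such as [1] as a Long, so a HashMap keyed by Integer tends to return null, while a map keyed by Long (or by String) resolves as expected.

import java.util.HashMap;
import java.util.Map;

// Hypothetical managed bean; names and values are illustrative only.
public class TextBoxBean {

    // Keyed by Long because an expression like #{textBoxBean.values[1]}
    // generally treats the literal 1 as a Long, not an Integer.
    private final Map<Long, String> values = new HashMap<Long, String>();

    public TextBoxBean() {
        values.put(1L, "first value");
        values.put(2L, "second value");
    }

    public Map<Long, String> getValues() {
        return values;
    }
}

With that in place, a binding string such as #{textBoxBean.values[1]} built in the bean should resolve; converting the Integer keys to Strings and using #{textBoxBean.values['1']} is another way to sidestep the coercion question.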
From searching the forum I've read that calling processUpdates() might eliminate the need for the ValueBinding step, but I'm not sure. If I called:
component.setValue(myString);
Then called processUpdates() on each component after the form submit, would this get its text back to the bean object? If so, could the code go in the form's save action method?
Thanks.
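A rough sketch of that second idea (hedged: the component id, the method shape, and the navigation outcome are illustrative): setValue() stores a local value on the component, and processUpdates() pushes that local value through to whatever value binding the component has, so it does not remove the need for a binding or for reading the components some other way.

import javax.faces.component.UIInput;
import javax.faces.context.FacesContext;

// Hypothetical backing-bean action; the component id "form1:text1" is illustrative.
public class FormBean {
    public String save() {
        FacesContext ctx = FacesContext.getCurrentInstance();
        UIInput input = (UIInput) ctx.getViewRoot().findComponent("form1:text1");
        if (input != null) {
            input.setValue("my value");   // set the component's local value
            input.processUpdates(ctx);    // push the local value through the component's value binding, if any
        }
        return "success";                 // illustrative navigation outcome
    }
}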

Similar Messages

  • "Economizing" in a list of maps with identical keys (like a database table)

    Hi there!:
    I've been checking this forum for information about something like what I state in the title of this message, but haven't found such a specific case. I'm commenting my doubt below.
    I'm working with a list of maps whose keys are exactly the same in all them (they're of type String). Indeed it could be considered an extrapolation of a database table: The list contains maps which act as rows, and every map contains keys and values which represent column names and values.
    However, this means repeating the same key values in every map, which wastes memory. Granted, maybe it isn't a huge waste, but since the list can contain thousands of maps, I think it would be better to choose a more "economical" way to achieve the same result.
    I had thought about building a class which stores everything as a list of lists and, internally, maps the String keys to the corresponding Integer indexes of every list. But then I realized that maybe I was re-inventing the wheel, because it's very likely that someone has already done it. Is there perhaps a class in the core API which allows that?
    Thank you very much for your help.

    Well, after re-reading the Java tutorial on the Sun website, I've come to a conclusion I should have reached before, when I thought about using StringBuffers instead of Strings as keys of the maps.
    I'm so used to building Strings from literals instead of the "new String()" constructor (like everyone) that I had forgotten that, as with any kind of object but unlike the primitive data types, Strings are not passed to methods by value but by reference. The fact that they are immutable made me think that they were passed by value.
    Apart from that, my problem was also that by using literals I was creating different String objects every time, despite them being equal in content (making 400 different keys called "name", for example).
    In other words, I was doing something like this:
    // It makes a list of maps which will contain maps of boy's personal data (as if they were "rows" in a table).
    List <Map <String, Object>> listData = new ArrayList <Map <String, Object>> (listBoy.size ());
    // It loops over a list of Boy objects, obtained using EJB.
    for (Boy boy : listBoy) {
         // It makes a new map containing only the information which I'm interested on from the Boy object.
         Map <String, Object> map = new HashMap <String, Object> (2);
         map.put ("name", boy.getName ());
         map.put ("surname", boy.getSurname ());
         // It adds the map to the list of data.
         listData.add (map);
    }
    Well, the "problem" here (being too demanding, but I am :P) is that I was adding new String objects as keys in every map all the time. The key "name" in the first map was different from "name" in the second one, and so on.
    I guess my knowledge got muddled at some point and I thought it was impossible to use exactly the same String object (the same reference, not just the same value!) in different maps. Hence my idea of using StringBuffers instead.
    But thinking about it carefully, why not do this?
    List <Map <String, Object>> listData = new ArrayList <Map <String, Object>> (listBoy.size ());
    // It makes the necessary String keys previously, instead of using literals on every loop later.
    String name = "name";
    String surname = "surname";
    for (Boy boy : listBoy) {
         // It uses references (pointers) to the same String keys, instead of new ones every time.
         Map <String, Object> map = new HashMap <String, Object> (2);
         map.put (name, boy.getName ());
         map.put (surname, boy.getSurname ());
         listData.add (map);
    }
    Unfortunately, the hashCode() method on String is overridden, and instead of returning the typical hash code based on the object's identity in memory, it returns one based on its content. That way I can't make sure that the "name" key in one map refers to the same object in memory as in another one. I know, I know. Common sense and the Java documentation confirm it, but I would have loved an empirical way to demonstrate it.
    I guess that using "javap" and disassembling the generated bytecode is the only way to make sure it works that way.
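    For what it's worth, reference identity can also be checked empirically at runtime with == or System.identityHashCode(), which ignore the content-based hashCode()/equals() on String. A small self-contained sketch (class and variable names are illustrative):
    public class StringKeyInterning {
        public static void main(String[] args) {
            // String literals are interned, so every occurrence of the literal "name"
            // in the source code refers to the same String object.
            String a = "name";
            String b = "name";
            System.out.println(a == b);   // true: same reference
            System.out.println(System.identityHashCode(a) == System.identityHashCode(b));   // true
            // A String constructed at runtime is a distinct object unless intern() is called.
            String c = new String("name");
            System.out.println(a == c);            // false: different objects
            System.out.println(a == c.intern());   // true: intern() returns the canonical literal
        }
    }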
    I believe it's solved now :) (unless someone tells me otherwise). I'm still mad at myself for thinking that Strings were passed by value. Thinking about it now, it makes no sense!
    dannyyates: It's curious, because re-reading every answer I think you may have been pointing to this solution already. But the sentence "you put the +same+ string" was a little ambiguous to me, and I thought you meant putting the same String in the sense of "putting the same text" (which I was already doing), not the same object reference (in other words, using a common variable). I wish we could have continued discussing that in depth. Thanks a lot for your help anyway :).

  • ADF Toplink Create Row with foreign key

    Hello,
    I'd like to create a new row in a table that has a required one-to-one mapping relationship (foreign key). I've created an edit form by dragging the return node of a readQuery. The columns holding the foreign key values aren't added to the page (because of the 1-1 relationship), and when I commit the changes, errors are raised.
    How do I pass the foreign key to the ADF TopLink framework so it is included in the insert statement?

    TopLink's persistence model is based around relationships, which represent FKs. To have these FKs properly populated you need to create the corresponding object relationships.
    To do this you need to create a data action that will invoke a method on your class passing in the object you want to create the relationship with. There is currently an issue where ADF will not allow you to easily drop a set method onto an action. You will need to create a method on your bean that does not start with 'set'.
    In the demos I have done I have added a method like 'populateAddress(Address)' that internally calls the set method. This populate method will show up on the data control palette and you can then drop it on the action in your page flow.
    One additional tip: to access the object you want to pass in, you will need to extend the default expression that reflects the currently selected row in the iterator by appending '.dataProvider'. This accesses the persistent object within the row container.
    If you are still struggling please contact me through the email address in my forum profile.
    Doug
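    A minimal sketch of the wrapper method described above (class and field names are illustrative, and an existing persistent Address class is assumed):
    // Illustrative persistent class; only the relationship wiring is sketched.
    public class Employee {
        private Address address;   // the one-to-one relationship mapped in TopLink

        public void setAddress(Address address) {
            this.address = address;
        }

        // Does not start with "set", so it shows up on the data control palette
        // and can be dropped onto a data action in the page flow.
        public void populateAddress(Address address) {
            setAddress(address);
        }
    }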

  • Select distinct records in Mapping with no Key field (all fields can vary)

    Hi Experts,
    Let me take an example (not the actual requirement, but the same scenario) to explain the problem where I need your help to find the best possible way to resolve it. This has to be achieved in the mapping; I don't have other options, as it's part of a complex end-to-end scenario.
    I have following input XML:
    <Employee>
       <Details>
          <Id>123</Id>
          <Name>ABC</Name>
          <Role>Manager</Role>
          <Area>Bangalore</Area>
       </Details>
       <Details>
          <Id>123</Id>
          <Name>ABC</Name>
          <Role>Manager</Role>
          <Area>Pune</Area>
       </Details>
       <Details>
          <Id>123</Id>
          <Name>ABC</Name>
          <Role>Advisor</Role>
          <Area>Bangalore</Area>
       </Details>
       <Details>
          <Id>123</Id>
          <Name>ABC</Name>
          <Role>Manager</Role>
          <Area>Bangalore</Area>
       </Details>
       <Details>
          <Id>143</Id>
          <Name>ABC</Name>
          <Role>Manager</Role>
          <Area>Bangalore</Area>
       </Details>
    </Employee>
    The output XML is:
    <Employee>
       <MainRec>
          <Id>123</Id>
          <Name>ABC</Name>
          <table name='Roles'>
             <record>
                <Id>123</Id>
                <Role>Manager</Role>
                <Area>Bangalore</Area>
             </record>
             <record>
                <Id>123</Id>
                <Role>Manager</Role>
                <Area>Pune</Area>
             </record>
             <record>
                <Id>123</Id>
                <Role>Advisor</Role>
                <Area>Bangalore</Area>
             </record>
          </table>
       </MainRec>
       <MainRec>
          <Id>123</Id>
          <Name>ABC</Name>
          <table name='Roles'>
             <record>
                <Id>143</Id>
                <Role>Manager</Role>
                <Area>Bangalore</Area>
             </record>
          </table>
       </MainRec>
    </Employee>
    As you can see from the example above, I want to populate only distinct records under the table node, but there is no key field to distinguish them. Any of the three fields (Id, Role, Area) can vary; if all of these fields are the same between two records, it is a duplicate, otherwise select it. So in the XML above, just discard the 4th record from the source and populate all the others. Each record has to be checked against all other records on all three values (Id, Role, Area). Only when no other record has exactly the same values should it be populated.
    Also, records with a different Id go under a different table node. I hope my requirement is clear; if not, please let me know and I will try to explain better.
    I thought of creating a UDF to achieve this but not able to decide how to match it to the output message here.
    Best Regards,
    Pratik

    Hi,
    For the main record, I think you only need to check for each unique ID, e.g
    Id --> removeContext --> sort:ascending --> splitByValue:valueChanged --> collapseContext --> MainRec
    For the record, however, you need to create a UDF that will filter out the duplicate values. For this, the UDF sample shown here contains multiple result lists.
    Id --> removeContext --> concat: : --> concat: : --> UDF --> splitByValue:ValueChanged --> record
    role --> removeContext --> /          /                \ --> Id
    area --> removContext -------------> /                  \ --> role
                                                             \ --> area
    Context type UDF
    Arguments: input
    Result: IdResult
    Result: roleResult
    Result: areaResult
    Vector temp = new Vector();
    // keep only the first occurrence of each Id:role:area combination
    for (int a = 0; a < input.length; a++) {
       if (!temp.contains(input[a]))
          temp.add(input[a]);
    }
    // split each distinct combination back into its three result lists
    for (int a = 0; a < temp.size(); a++) {
       String tmp = (String) temp.get(a);
       /* split according to field */
       IdResult.addValue(tmp.substring(0, tmp.indexOf(":")));
       roleResult.addValue(tmp.substring(tmp.indexOf(":") + 1, tmp.lastIndexOf(":")));
       areaResult.addValue(tmp.substring(tmp.lastIndexOf(":") + 1, tmp.length()));
    }
    note: Id and record will both be using the IdResult list.
    Hope this helps,
    Mark
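    For reference, the distinct check described above can also be sketched in plain Java outside the PI mapping runtime (class name and sample data are illustrative, taken from the example above):
    import java.util.LinkedHashSet;
    import java.util.Set;

    public class DistinctRecords {
        public static void main(String[] args) {
            // Each entry is an Id:Role:Area combination from the sample input.
            String[] records = {
                "123:Manager:Bangalore",
                "123:Manager:Pune",
                "123:Advisor:Bangalore",
                "123:Manager:Bangalore",   // duplicate of the first record, to be discarded
                "143:Manager:Bangalore"
            };
            // A LinkedHashSet keeps only the first occurrence of each combination
            // while preserving the original order.
            Set<String> distinct = new LinkedHashSet<String>();
            for (String record : records) {
                distinct.add(record);
            }
            for (String record : distinct) {
                String[] fields = record.split(":");
                System.out.println("Id=" + fields[0] + " Role=" + fields[1] + " Area=" + fields[2]);
            }
        }
    }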

  • Mapping error at deployment with foreign key data rule

    I have created data rules for enabling foreign key constraints. There are 4 foreign key constraints on the fact table.
    For the 1st foreign key ... its a single key match key1 on table 1
    For the others, its a composite key .. key1 and key2 on table 2
    key1 and key3 on table 3
    key1 and key4 on table 4
    When I implement with the single key foreign key constraint the mapping works fine. But when I apply the other foreign key data rules for composite keys, I get the following message while deploying ....
    M_CNT Create Warning ORA-06550: line 209, column 3:
    PL/SQL: ORA-00909: invalid number of arguments
    M_CNT Create Warning ORA-06550: line 520, column 65530:
    PL/SQL: SQL Statement ignored
    M_CNT Create Warning ORA-06550: line 56, column 65530:
    PL/SQL: SQL Statement ignored
    M_CNT Create Warning ORA-06550: line 673, column 3:
    PL/SQL: ORA-00909: invalid number of arguments
    The data rule setup done is type Referential
    Specify the number of attributes in row relationship - 2
    Specify the referencing cardinality of row relationship
    Minimum Count 1 Maximum Count n
    Specify the referenced cardinality of row relationship
    Minimum Count 1 Maximum Count n
    What is it that I am doing wrong ?
    Any suggestions. Help !!!!
    Regards,
    AW

    Hi AW,
    "How can I overcome this situation?" The best solution, as suggested by Jörg, is the use of surrogate keys.
    For every production key (composite or single), generate a corresponding surrogate key using a sequence operator in the staging area.
    This will not just solve your problem, it will also be faster (the joins are faster with system-generated sequence numbers).
    In a data warehouse, using production keys as the primary key for foreign-key linking is not recommended; keep the production keys as additional attributes.
    Regards,

  • BUG: ADF BC read-only VO with no Key attrs + af:table

    Hello all,
    I've got a bug to report - quite easily reproducable with the HR demo schema. To see it:
    1). Create a new application from the ADF BC + Faces template
    2). Create a read-only VO, use "SELECT employee_id, first_name from employees" and order by "employee_id" - take the defaults for everything - do not set any key attributes.
    3). Create an AM, add the VO to its data model. Turn off AM pooling in the configuration.
    4). Create a JSPX page. Drag-drop the VO from the data control palette as an ADF read-only table - with selection
    5). Bind the actionlistener of the "submit" button to a backing bean method that just system.out.println's something.
    Now, run the app. Try selecting an employee from the first page (records 1-10) and clicking submit - it works. Now, scroll to the second set of records, select one and click submit - no message appears - the action listener is never called. No errors are thrown.
    Workaround: ensure the VO has employee_id selected as a key attribute.
    My code is at the bottom for reference.
    Cheers,
    John
    untitled1.jspx:
    <?xml version='1.0' encoding='windows-1252'?>
    <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.0"
              xmlns:h="http://java.sun.com/jsf/html"
              xmlns:f="http://java.sun.com/jsf/core"
              xmlns:af="http://xmlns.oracle.com/adf/faces"
              xmlns:afh="http://xmlns.oracle.com/adf/faces/html">
      <jsp:output omit-xml-declaration="true" doctype-root-element="HTML"
                  doctype-system="http://www.w3.org/TR/html4/loose.dtd"
                  doctype-public="-//W3C//DTD HTML 4.01 Transitional//EN"/>
      <jsp:directive.page contentType="text/html;charset=windows-1252"/>
      <f:view>
        <afh:html>
          <afh:head title="untitled1">
            <meta http-equiv="Content-Type"
                  content="text/html; charset=windows-1252"/>
          </afh:head>
          <afh:body>
            <af:messages/>
            <h:form>
              <af:table value="#{bindings.emp1.collectionModel}" var="row"
                        rows="#{bindings.emp1.rangeSize}"
                        first="#{bindings.emp1.rangeStart}"
                        emptyText="#{bindings.emp1.viewable ? 'No rows yet.' : 'Access Denied.'}"
                        selectionState="#{bindings.emp1.collectionModel.selectedRow}"
                        selectionListener="#{bindings.emp1.collectionModel.makeCurrent}">
                <af:column sortProperty="EmployeeId" sortable="false"
                           headerText="#{bindings.emp1.labels.EmployeeId}">
                  <af:outputText value="#{row.EmployeeId}">
                    <f:convertNumber groupingUsed="false"
                                     pattern="#{bindings.emp1.formats.EmployeeId}"/>
                  </af:outputText>
                </af:column>
                <af:column sortProperty="FirstName" sortable="false"
                           headerText="#{bindings.emp1.labels.FirstName}">
                  <af:outputText value="#{row.FirstName}"/>
                </af:column>
                <f:facet name="selection">
                  <af:tableSelectOne text="Select and">
                    <af:commandButton text="Submit" actionListener="#{abc.click}"/>
                  </af:tableSelectOne>
                </f:facet>
              </af:table>
            </h:form>
          </afh:body>
        </afh:html>
      </f:view>
    </jsp:root>
    abc.java:
    import javax.faces.event.ActionEvent;

    public class abc {
        public abc() {
        }

        public void click(ActionEvent actionEvent) {
            System.out.println("click");
        }
    }
    emp.xml (View Object):
    <?xml version='1.0' encoding='windows-1252' ?>
    <!DOCTYPE ViewObject SYSTEM "jbo_03_01.dtd">
    <ViewObject
       Name="emp"
       OrderBy="employee_id"
       BindingStyle="OracleName"
       CustomQuery="true"
       ComponentClass="model.empImpl"
       UseGlueCode="false" >
       <SQLQuery><![CDATA[
    select employee_id, first_name
    from employees
       ]]></SQLQuery>
       <DesignTime>
          <Attr Name="_isExpertMode" Value="true" />
          <Attr Name="_version" Value="10.1.3.39.84" />
          <Attr Name="_codeGenFlag2" Value="Access|Coll|VarAccess" />
       </DesignTime>
       <ViewAttribute
          Name="EmployeeId"
          IsUpdateable="false"
          IsPersistent="false"
          IsNotNull="true"
          Precision="6"
          Scale="0"
          Type="oracle.jbo.domain.Number"
          ColumnType="NUMBER"
          AliasName="EMPLOYEE_ID"
          Expression="EMPLOYEE_ID"
          SQLType="NUMERIC" >
          <DesignTime>
             <Attr Name="_DisplaySize" Value="22" />
          </DesignTime>
       </ViewAttribute>
       <ViewAttribute
          Name="FirstName"
          IsUpdateable="false"
          IsPersistent="false"
          Precision="20"
          Type="java.lang.String"
          ColumnType="VARCHAR2"
          AliasName="FIRST_NAME"
          Expression="FIRST_NAME"
          SQLType="VARCHAR" >
          <DesignTime>
             <Attr Name="_DisplaySize" Value="20" />
          </DesignTime>
       </ViewAttribute>
    </ViewObject>
    appModule.xml:
    <?xml version='1.0' encoding='windows-1252' ?>
    <!DOCTYPE AppModule SYSTEM "jbo_03_01.dtd">
    <AppModule
       Name="AppModule"
       ComponentClass="model.AppModuleImpl" >
       <DesignTime>
          <Attr Name="_isCodegen" Value="true" />
          <Attr Name="_version" Value="10.1.3.39.84" />
          <Attr Name="_deployType" Value="0" />
       </DesignTime>
       <ViewUsage
          Name="emp1"
          ViewObjectName="model.emp" >
       </ViewUsage>
    </AppModule>
    bc4j.xcfg:
    <?xml version = '1.0' encoding = 'UTF-8'?>
    <BC4JConfig>
       <AppModuleConfigBag>
          <AppModuleConfig name="AppModuleLocal">
             <DeployPlatform>LOCAL</DeployPlatform>
             <JDBCName>local_hr</JDBCName>
             <jbo.ampool.doampooling>false</jbo.ampool.doampooling>
             <jbo.project>Model</jbo.project>
             <jbo.ampool.dynamicjdbccredentials>false</jbo.ampool.dynamicjdbccredentials>
             <AppModuleJndiName>model.AppModule</AppModuleJndiName>
             <ApplicationName>model.AppModule</ApplicationName>
          </AppModuleConfig>
       </AppModuleConfigBag>
       <ConnectionDefinition name="local_hr">
          <ENTRY name="JDBC_PORT" value="1521"/>
          <ENTRY name="ConnectionType" value="JDBC"/>
          <ENTRY name="HOSTNAME" value="localhost"/>
          <ENTRY name="DeployPassword" value="true"/>
          <ENTRY name="user" value="hr"/>
          <ENTRY name="ConnectionName" value="local_hr"/>
          <ENTRY name="SID" value="STGY"/>
          <ENTRY name="password">
             <![CDATA[{904}05DB46A9C39F51D1A4814423FFD9297C71]]>
          </ENTRY>
          <ENTRY name="JdbcDriver" value="oracle.jdbc.OracleDriver"/>
          <ENTRY name="ORACLE_JDBC_TYPE" value="thin"/>
          <ENTRY name="DeployPassword" value="true"/>
       </ConnectionDefinition>
    </BC4JConfig>

    Hi,
    reproduces for me. It appears that the parameter is not applied properly when executing the query.
    Frank

  • Issue in Color Theme on the map with ADF geographic components

    I am facing an issue in bringing up a map using a color theme. I am able to bring up a map with color themes using the geographic components feature in ADF.
    The problem is the 'Edit Color Map Theme' dialog, which lists the Map Theme in five categories: continents, counties, countries, states_abbrev, and states_names.
    We had implemented the map using a color theme based on states_abbrev and found it was working properly.
    But after some days we found that the settings we had made no longer worked, and no color theme was displayed on the map on the page.
    At that point we found that states_names worked properly instead of states_abbrev.
    But again, after some days it didn't work with states_names, and we reverted our code back to states_abbrev to get the desired result.
    Does somebody know where the problem is and how to approach this issue?

    Quick Install is gone.
    Database repair doesn't work.
    Color coded categories is gone, and as you've discovered Theme colors are gone.
    Print to Excel is gone.
    Many desktop clients that worked with 4.2 do not work.
    It won't work with legacy devices.
    and others.... There are lots of complaints about 6.2 scattered around the forums.  Personally, the HotSync manager keeps forgetting the connections I've set, and goes to default views.
    You aren't missing much of anything, IMHO.
    WyreNut

  • Mapping in OWB with primary key and foreign key relationship

    Hi all,
    I am new to this data warehousing field; I have just started my career. I now have to create a mapping in OWB where a table has a field which is the primary key of another table in the same staging area. If you could help me out with a method for creating it, that would be very helpful to me.
    I have thought of 2 ideas:
    1. I could use a lookup, but I am not sure whether a lookup can be used for a primary key / foreign key relationship. Even if it can, I do not know how to use it.
    2. What if I directly take the first table and link its primary key to the second table, which uses that primary key as one of its fields?
    I do not know how feasible these methods are. Please help me out.
    Thanks in advance.

    I have a similar case where table A and table B are related, but table A has been loaded with data and table B is empty, so there are no values in table B's foreign key column to relate to table A.
    Now I want to load table B's foreign key with the primary key column values of table A.
    How can we do this in OWB?
    thanks
    kumar

  • Mapping problem with compressed key update record

    Hi, could you please advise?
    I'm getting the following problem:
    About a week ago the replicat abended with an "Error in mapping" error. In the discard file I found a record looking like this:
    filed1 = NULL
    field2 =
    field3 =
    field4 =
    field5 =
    datefield = -04-09 00:00:00
    field6 =
    field8 =
    field9 = NULL
    field10 =
    Here filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP") and field10 = @GETENV("GGHEADER", "COMMITTIMESTAMP"); the others are table fields mapped by USEDEFAULTS.
    So I got "Mapping problem with compressed key update record" at 2012-06-01 15:44.
    I guess I should mention that the extract had failed about 5 minutes before that with: VAM function VAMRead returned unexpected result: error 600 - VAM Client Report <[CFileInfo::Read] Timeout expired after 10 retries with 1000 ms delay, waiting to read transaction log or backup files. To increase the number of retries, use SETENV (GGS_CacheRetryCount = n) in Extract parameter file. To control retry delay time, use SETENV (GGS_CacheRetryDelay = n). handle: 0000000000000398 ReadFile GetLastError:997 Wait GetLastError:997>.
    I don't know if it has the same source as the data corruption; could you tell me if it does?
    Well, I created a new extract starting at 2012-06-01 15:30 to check whether something was wrong with the extract at that time, but got the same error.
    If I run the extract beginning at 15:52, it starts and works.
    But then I got another one today. The data didn't look that bad, but one column still came through with a null value :( And I'm using it as a key column, so I got "Mapping problem with compressed key update record" again :(
    I'm replicating from SQL Server 2008 to Oracle 11g.
    I'm actually using NOCOMPRESSUPDATES in Extract.
    CDC is enabled for all tables replicated. The only thing is that it is enabled not by ADD TRANDATA command, but by SQL Server sys.sp_cdc_enable_table, does it matter?
    Could you please advise why does it happen?

    Well, the problem begins somewhere in the extract or before it, maybe in the transaction log, I don't know :(
    Here are extract parameters:
    EXTRACT ETCHECK
    TRANLOGOPTIONS MANAGESECONDARYTRUNCATIONPOINT
    SOURCEDB TEST, USERID **, PASSWORD *****
    exttrail ./dirdat/ec
    NOCOMPRESSUPDATES
    NOCOMPRESSDELETES
    TABLE tst.table1, COLS (field1, field2, field3, field4, field5, field6, field7, field8 );
    TABLE tst.table2, COLS (field1, field2, field3, field4 );
    Data pump:
    EXTRACT DTCHECK
    SOURCEDB TEST, USERID **, PASSWORD *****
    RMTHOST ***, MGRPORT 7809
    RMTTRAIL ./dirdat/dc
    TABLE tst.table1;
    TABLE tst.table2;
    Replicat:
    REPLICAT rtcheck
    USERID tst, PASSWORD ***
    DISCARDFILE ./dirrpt/rtcheck.txt, PURGE
    SOURCEDEFS ./dirdef/sourcei.def
    HANDLECOLLISIONS
    UPDATEDELETES
    MAP tst.table1, t.table1, COLMAP (USEDEFAULTS , filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed10= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (field3);
    MAP dbo.TPROCPERIODCONFIRMSTAV, TARGET R_019_000001.TPROCPERIODCONFIRMSTAV, COLMAP (USEDEFAULTS , field5 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed6= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (filed1, field2, field3);
    Rpt file for replicat:
    Oracle GoldenGate Delivery for Oracle
    Version 11.1.1.1 OGGCORE_11.1.1_PLATFORMS_110421.2040
    Windows x64 (optimized), Oracle 11g on Apr 22 2011 00:34:07
    Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.
    Starting at 2012-06-05 12:49:38
    Operating System Version:
    Microsoft Windows Server 2008 R2 , on x64
    Version 6.1 (Build 7601: Service Pack 1)
    Process id: 2264
    Description:
    ** Running with the following parameters **
    REPLICAT rtcheck
    USERID tst, PASSWORD ***
    DISCARDFILE ./dirrpt/rtcheck.txt, PURGE
    SOURCEDEFS ./dirdef/sourcei.def
    HANDLECOLLISIONS
    UPDATEDELETES
    MAP tst.table1, t.table1, COLMAP (USEDEFAULTS , filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed10= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (field3);
    MAP dbo.TPROCPERIODCONFIRMSTAV, TARGET R_019_000001.TPROCPERIODCONFIRMSTAV, COLMAP (USEDEFAULTS , field5 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed6= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (filed1, field2, field3);
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE: 64K
    CACHESIZE: 512M
    CACHEBUFFERSIZE (soft max): 4M
    CACHEPAGEOUTSIZE (normal): 4M
    PROCESS VM AVAIL FROM OS (min): 1G
    CACHESIZEMAX (strict force to disk): 881M
    Database Version:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE     11.2.0.1.0     Production
    TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    Database Language and Character Set:
    NLS_LANG = "AMERICAN_AMERICA.CL8MSWIN1251"
    NLS_LANGUAGE = "AMERICAN"
    NLS_TERRITORY = "AMERICA"
    NLS_CHARACTERSET = "CL8MSWIN1251"
    For further information on character set settings, please refer to user manual.
    ** Run Time Messages **
    Opened trail file ./dirdat/dc000000 at 2012-06-05 12:49:39
    2012-06-05 12:58:14 INFO OGG-01020 Processed extract process RESTART_ABEND record at seq 0, rba 925 (aborted 0 records).
    MAP resolved (entry tst.table1):
    MAP tst.table1, t.table1, COLMAP (USEDEFAULTS , filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed10= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (field3);
    2012-06-05 12:58:14 WARNING OGG-00869 No unique key is defined for table table1. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.
    Using the following default columns with matching names:
    field1=field1, field2=field2, field3=field3, field4=field4, field5=field5, field6=field6, field7=field7, field8=field8
    Using the following key columns for target table R_019_000001.TCALCULATE: field3.
    2012-06-05 12:58:14 WARNING OGG-01431 Aborted grouped transaction on 'tst.table1', Mapping error.
    2012-06-05 12:58:14 WARNING OGG-01003 Repositioning to rba 987 in seqno 0.
    2012-06-05 12:58:14 WARNING OGG-01151 Error mapping from tst.table1 to tst.table1.
    2012-06-05 12:58:14 WARNING OGG-01003 Repositioning to rba 987 in seqno 0.
    Source Context :
    SourceModule : [er.main]
    SourceID : [er/rep.c]
    SourceFunction : [take_rep_err_action]
    SourceLine : [16064]
    ThreadBacktrace : [8] elements
    : [C:\App\OGG\replicat.exe(ERCALLBACK+0x143034) [0x00000001402192B4]]
    : [C:\App\OGG\replicat.exe(ERCALLBACK+0x11dd44) [0x00000001401F3FC4]]
    : [C:\App\OGG\replicat.exe(<RCALLBACK+0x11dd44) [0x000000014009F102]]
    : [C:\App\OGG\replicat.exe(<RCALLBACK+0x11dd44) [0x00000001400B29CC]]
    : [C:\App\OGG\replicat.exe(<RCALLBACK+0x11dd44) [0x00000001400B8887]]
    : [C:\App\OGG\replicat.exe(releaseCProcessManagerInstance+0x25250) [0x000000014028F200]]
    : [C:\Windows\system32\kernel32.dll(BaseThreadInitThunk+0xd) [0x000000007720652D]]
    : [C:\Windows\SYSTEM32\ntdll.dll(RtlUserThreadStart+0x21) [0x000000007733C521]]
    2012-06-05 12:58:14 ERROR OGG-01296 Error mapping from tst.table1 to tst.table1.
    * ** Run Time Statistics ** *
    Last record for the last committed transaction is the following:
    Trail name : ./dirdat/dc000000
    Hdr-Ind : E (x45) Partition : . (x04)
    UndoFlag : . (x00) BeforeAfter: A (x41)
    RecLength : 249 (x00f9) IO Time : 2012-06-01 15:48:56.285333
    IOType : 115 (x73) OrigNode : 255 (xff)
    TransInd : . (x03) FormatType : R (x52)
    SyskeyLen : 0 (x00) Incomplete : . (x00)
    AuditRBA : 44 AuditPos : 71176199289771
    Continued : N (x00) RecCount : 1 (x01)
    2012-06-01 15:48:56.285333 GGSKeyFieldComp Len 249 RBA 987
    Name: DBO.TCALCULATE
    Reading ./dirdat/dc000000, current RBA 987, 0 records
    Report at 2012-06-05 12:58:14 (activity since 2012-06-05 12:58:14)
    From Table tst.table1 to tst.table1:
    # inserts: 0
    # updates: 0
    # deletes: 0
    # discards: 1
    Last log location read:
    FILE: ./dirdat/dc000000
    SEQNO: 0
    RBA: 987
    TIMESTAMP: 2012-06-01 15:48:56.285333
    EOF: NO
    READERR: 0
    2012-06-05 12:58:14 ERROR OGG-01668 PROCESS ABENDING.
    Discard file:
    Oracle GoldenGate Delivery for Oracle process started, group RTCHECK discard file opened: 2012-06-05 12:49:39
    Key column filed3 (0) is missing from update on table tst.table1
    Missing 1 key columns in update for table tst.table1.
    Current time: 2012-06-05 12:58:14
    Discarded record from action ABEND on error 0
    Aborting transaction on ./dirdat/dc beginning at seqno 0 rba 987
    error at seqno 0 rba 987
    Problem replicating tst.table1 to tst.table1
    Mapping problem with compressed key update record (target format)...
    filed1 = NULL
    field2 =
    field3 =
    field4 =
    field5 =
    datefield = -04-09 00:00:00
    field6 =
    field8 =
    field9 = NULL
    field10 =
    Process Abending : 2012-06-05 12:58:14

  • Mapping problem with compressed key update record (target format)...

    Hi Guys,
    Getting the below error while replicating from source to target. The source table has a NOT NULL column, but on the target the replicat process gives an error about a NULL value??
    How do I overcome this issue, any idea...
    2011-08-04 10:35:04 INFO OGG-00995 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: REPLICAT RMASTRK starting.
    2011-08-04 10:35:05 INFO OGG-00996 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: REPLICAT RMASTRK started.
    2011-08-04 10:35:06 WARNING OGG-00869 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: OCI Error ORA-01407: cannot update ("INFRA"."CUST"."CODE") to NULL (status = 1407), SQL <UPDATE "INFRA"."CUST" SET "ORD_ID" = :a2,"DP_ID" = :a3,"EXCHNG_CODE" = :a4,"ORD_QTY" = :a5,"ORD_PRICE" = :a6,"CODE" = :a7,"MKRT_CODE" = :a8,"CHANN>.
    2011-08-04 10:35:06 WARNING OGG-01004 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: Aborted grouped transaction on 'INFRA.CUST', Database error 1407 (ORA-01407: cannot update ("INFRA"."CUST"."SCRP_CODE") to NULL).
    2011-08-04 10:35:06 WARNING OGG-01003 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: Repositioning to rba 44132192 in seqno 68708.
    2011-08-04 10:35:06 *WARNING OGG-01154 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: SQL error 1407 mapping INFRA.CUST to INFRA.CUST OCI Error ORA-01407:* *cannot update ("INFRA"."CUST"."SCRP_CODE") to NULL (status = 1407), SQL <UPDATE "INFRA"."CUST" SET "ORD_ID" = :a2,"DP_ID" = :a3,"EXCHNG_CODE"=:a4,"ORD_QTY"*
    *= :a5,"ORD_PRICE" = :a6,"SCRP_CODE" = :a7,"MKRT_CODE" = :a8,"CHANN>.*
    2011-08-04 10:35:06 WARNING OGG-01003 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: Repositioning to rba 44132192 in seqno 68708.
    2011-08-04 10:35:06 ERROR OGG-01296 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: Error mapping from INFRA.CUST to INFRA.CUST.
    2011-08-04 10:35:06 ERROR OGG-01668 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: PROCESS ABENDING.
    Oracle GoldenGate Delivery for Oracle process started, group RMASTRK discard file opened: 2011-08-04 10:35:05
    Current time: 2011-08-04 10:35:06
    Discarded record from action ABEND on error 1407
    OCI Error ORA-01407: cannot update ("INFRA"."CUST"."SCRP_CODE") to NULL
    (status = 1407), SQL <UPDATE "INFRA"."CUST" SET "ORD_ID" = :a2,"MKRT_CODE" = :a8,"CHANN>
    Aborting transaction on ./dirdat/pm beginning at seqno 68708 rba 44132192
    error at seqno 68708 rba 44132192
    Problem replicating INFRA.CUST to INFRA.CUST
    *Mapping problem with compressed key update record (target format)...*
    ORD_QTY = 500
    ORD_PRICE = 37430
    SCRP_CODE =
    MKRT_CODE = N
    Oracle GoldenGate Delivery for Oracle process started, group RMASTRK discard file opened: 2011-08-
    04 10:35:05
    Current time: 2011-08-04 10:35:06
    Discarded record from action ABEND on error 1407
    OCI Error ORA-01407: cannot update ("INFRA"."CUST"."SCRP_CODE") to NULL
    (status = 1407), SQL <UPDATE "INFRA"."CUST" SET "ORD_ID" = :a2,"MKRT_CODE" = :a8,"CHANN>
    Aborting transaction on ./dirdat/pm beginning at seqno 68708 rba 44132192
    error at seqno 68708 rba 44132192
    Problem replicating INFRA.CUST to INFRA.CUST
    Mapping problem with compressed key update record (target format)...
    ORD_QTY = 500
    ORD_PRICE = 37430
    SCRP_CODE =
    MKRT_CODE = N
    Any inputs / help would be appreciated.
    Regards,
    Manish

    The SCRP_CODE column has a NOT NULL constraint. The ORA-01407 error is telling you that you cannot update or set a value for this column to null because of the constraint. This has absolutely nothing to do with an index. You can use a marker/sentinel value in lieu of using NULL. For a numeric field, where everything is positive, a negative value (-1) can be decoded as meaning null. For a character field, a code such as NA can represent NULL.
    This also has nothing to do (directly) with GoldenGate failing because of this error. The underlying SQL statement will fail everywhere, regardless of the tool or application. It is not a case of failing only in GoldenGate.

  • JPA - How to map relation with NON-KEY field.

    Hello.
    The problem with the mapping is a NullPointerException when calling EntityManager em.createNativeQuery:
    Table1 (Bm_Treeassoc):
    MY_ID (Primary Key)
    BOOKMARKID (-> MY_ID in Table2)
    Text
    Table2 (Bm_Bookmark):
    MY_ID ( Primary Key)
    Text
    //CLASS BmTreeassoc
    @OneToMany(targetEntity=BmBookmark.class, mappedBy="treeMaster", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
         private BmTreeassoc treeMaster = null ;
    //CLASS BmBookmark
    @ManyToOne(targetEntity=BmTreeassoc.class, fetch = FetchType.LAZY, cascade = CascadeType.ALL, optional = true)
         @JoinColumn(name="MY_ID", referencedColumnName="BOOKMARKID", unique=true)     
         private ArrayList<BmBookmark> bookmarks = new ArrayList<BmBookmark>() ;
    This leads to the exception.
    Mapping from MY_ID to MY_ID instead of BOOKMARKID does not throw the exception,
    so I assume I have a problem with the key field?
    Any ideas?
    Kind regards
    Frank

    OK,
    after reflecting (and after maniacally trying for days),
    here is my own answer:
    I no longer use the mapping at all.
    I solved my join by using a view:
    CREATE VIEW SAPDEMO.VTree
         AS SELECT t.TreeID, b.MY_ID, b.CLIENTID, b.Nickname, u.URL
         FROM SAPDEMO.BM_TreeAssoc as t
              JOIN SAPDEMO.BM_BOOKMARK as B ON t.BookmarkID = b.MY_ID
              JOIN SAPDEMO.BM_URL as U ON b.URL = u.MY_ID
    Create the entity and then do a:
    select * from VTREE where clientid=1 and treeid=446
    Works great, simple, fast.
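    For completeness, a minimal sketch of how a bidirectional one-to-many is conventionally declared in JPA when the join column references the primary key (entity and column names below are illustrative, not taken from the original schema); joining on a non-key column, as in the original question, is less uniformly supported, which is presumably why the view-based workaround above was chosen:
    import java.util.ArrayList;
    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.JoinColumn;
    import javax.persistence.ManyToOne;
    import javax.persistence.OneToMany;

    @Entity
    public class TreeNode {
        @Id
        private long id;

        // The "one" side holds the collection and points at the owning field below.
        @OneToMany(mappedBy = "treeNode")
        private List<Bookmark> bookmarks = new ArrayList<Bookmark>();
    }

    @Entity
    class Bookmark {
        @Id
        private long id;

        // The "many" side owns the relationship and carries the foreign key column.
        @ManyToOne
        @JoinColumn(name = "TREE_ID")
        private TreeNode treeNode;
    }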

  • Working with table type any with mapping according to keys

    Hi All ,
    I have a table of type ANY with data, and I need to fill a structure of type ANY according to the respective key and verify that each field has a mapping.
    i.e. I have a table <lt_itab> and I need to find the specific entry in it according to the key and the mapping.
    I guess the best way is to give an example.
    <lt_itab> - is of type ANY and can have many entries
    lt_key - is a specified table with field_name and value
    lt_map - is a table of field_name entries which have a mapping (a unique field name in every entry of the table), from f1..fn
    I need to fill fields in <ls_output> only if they appear in lt_map
    <ls_output> - is a structure of type ANY that in the end should have all the data from <ls_itab> according to the mapping and the keys of the table
    <lt_itab> - table
    f1  f2  f3  f4  f5 f6
    1   2    3  4   5  6 
    5   5    4  3   8  4 
    6   9    2  5   3  5
    1   3    3  4   2  1
    lt_key  - table
    field_name   value
    f1            1
    f2            3
    lt_map  - table
    field_name
    f1
    f2
    f5
    f6
    <ls_output> - structure
    field  value
    f1  -   1
    f2  -   3
    f3  "Not in mapping so it's empty
    f4  "Not in mapping so it's empty
    f5  -   2
    f6  -   1
    <ls_output> has the field values of the last entry of <lt_itab> that matches the key on f1 and f2; according to the mapping, f3 and f4 are empty
    since they do not appear in lt_map
    Regards
    Joy

    Hi
    You have to loop over your whole main table in order to get the records that match the keys:
    LOOP AT <LT_ITAB> ASSIGNING <WT_ITAB>.
       " Check whether the current row matches all key fields.
       L_KO = SPACE.
       LOOP AT LT_KEY.
          ASSIGN COMPONENT LT_KEY-FIELDNAME OF STRUCTURE <WT_ITAB> TO <FS_KEY>.
          IF <FS_KEY> NE LT_KEY-VALUE.
             L_KO = 'X'.
             EXIT.
          ENDIF.
       ENDLOOP.
       " Skip rows that do not match the key.
       CHECK L_KO IS INITIAL.
       " Copy only the mapped fields into the output structure.
       LOOP AT LT_MAP.
          ASSIGN COMPONENT LT_MAP-FIELDNAME OF STRUCTURE <WT_ITAB> TO <FS_FROM>.
          ASSIGN COMPONENT LT_MAP-FIELDNAME OF STRUCTURE <WT_OUTPUT> TO <FS_TO>.
          <FS_TO> = <FS_FROM>.
       ENDLOOP.
       APPEND <WT_OUTPUT> TO <LT_OUTPUT>.
    ENDLOOP.
    Max

  • How do I commit with JSF + ADF Buisness components

    Hello
    I am following the Building Oracle ADF Applications Workshop found at
    http://www.oracle.com/technology/obe/obe9051jdev/ADFWorkshop/BuildingADFApplicationsWorkshop.htm
    Instead of JSP and Struts I'm using ADF Faces.
    I'm now doing the "are you sure" page where you confirm that you want to delete a customer. I've added a button to do the delete with the ActionListener #{bindings.Delete.invoke}, but I need to be able to do a Commit afterwards.
    Could somebody explain how this is done (I'm new to JDeveloper, coming from a Delphi background)?
    Thanks in advance
    Paul

    First, if you're using 10.1.3, I'd recommend the 10.1.3 OBE here so that you're following a JSF-based tutorial that corresponds with the latest-and-greatest 10.1.3 features:
    http://www.oracle.com/technology/obe/obe1013jdev/masterdetail_adf_bc/master-detail_pagewith_adf_bc.htm
    Second, there are a few different approaches you can use for invoking multiple actions from a single button.
    (1) The model-centric approach is to add a custom method to your application module that performs the multiple operations encapsulated inside the AM, mark the method as exposed to clients on the "Client Interface" of the AM, then drop that method as a button from the data control palette.
    (2) The controller-centric approach would be to take your existing page, with the "Delete" action binding in your page definition that was created when you dropped the "Delete" action and...
    Insert a new "action" binding inside the "bindings" section of your page definition for the "Commit" operation (by default, it will be named Commit). Double-click your Delete button and allow JDeveloper to create a method in a JSF backing bean for you (keeping the [x] Generate ADF Binding Code checkbox checked). Then, in the backing bean, you can find the "Commit" operation binding and invoke it after the "Delete" operation binding has been invoked. The backing bean method will already contain the generated code to invoke the "Delete" operation; you can just copy/paste that and change the name.
    There are other more generic approaches, too, but these two ideas should get you started.
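    A hedged sketch of approach (2), roughly the shape of the generated backing-bean code (the bean, method, and binding names are illustrative; the #{bindings} resolution is shown one possible way, while the generated bean typically has it injected as a managed property):
    import javax.faces.context.FacesContext;
    import oracle.binding.BindingContainer;
    import oracle.binding.OperationBinding;

    public class ConfirmDeleteBean {
        // One way to resolve the #{bindings} expression in JSF 1.x.
        public BindingContainer getBindings() {
            FacesContext fc = FacesContext.getCurrentInstance();
            return (BindingContainer) fc.getApplication()
                    .createValueBinding("#{bindings}").getValue(fc);
        }

        public String deleteButton_action() {
            BindingContainer bindings = getBindings();
            // Invoke the Delete operation binding from the page definition.
            OperationBinding delete = bindings.getOperationBinding("Delete");
            delete.execute();
            if (!delete.getErrors().isEmpty()) {
                return null;      // stay on the page if the delete reported errors
            }
            // Then invoke the Commit operation binding added to the same page definition.
            OperationBinding commit = bindings.getOperationBinding("Commit");
            commit.execute();
            return "success";     // illustrative navigation outcome
        }
    }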

  • Where can I get the adftutorial or adfdevguide for TopLink JPA with JSF/ADF?

    Hello...
    Can anyone suggest where on OTN I can find the adftutorial and adfdevguide describing development using TopLink JPA (EJB 3.0) with JSF/ADF Faces?
    Thanking you in advance,
    Samba

    Hi Samba,
    - Build a Web Application(JSF) using JPA
    - Oracle ADF Developer Guide
    - ADF Learning Center
    Hope it helps!
    Koen

  • How to display BLOB image column with WEB application, JSF, ADF BC

    I am looking for a way to display the content of a BLOB column in a web application (JSF, ADF BC).
    The BLOB column contains a JPEG image.
    About the application:
    The model contains a view object where the BLOB column attribute (photoimg) is of type BlobDomain.
    Now I have to create the view to display the content of photoimg inside a JSF-JSP page.
    Any advice ?

    Search is your friend
    How to display the content of a BLOB column in a ADF/BC pages ?
    John
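    One common pattern (not from this thread, and hedged: the servlet mapping, the request parameter, and the way the image bytes are obtained are all illustrative) is a small servlet that streams the image bytes, which the page then references from an image component's source attribute, e.g. something like <af:objectImage source="/imageservlet?id=#{row.SomeId}"/>:
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Illustrative image servlet; how the bytes are fetched for a given id is left abstract.
    public class ImageServlet extends HttpServlet {

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            String id = request.getParameter("id");   // illustrative request parameter
            response.setContentType("image/jpeg");

            InputStream in = lookupImageStream(id);   // hypothetical helper, see below
            OutputStream out = response.getOutputStream();
            byte[] buffer = new byte[4096];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            in.close();
        }

        // Placeholder: in a real application this would read the BlobDomain content
        // for the given key through the ADF BC application module.
        private InputStream lookupImageStream(String id) {
            throw new UnsupportedOperationException("illustrative placeholder");
        }
    }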
