Mapping and relations in database ....

Hello Friends,
I'm a bit new to TopLink, so I need your help. I am working with Snort rules.
I have five tables in my database. One table, Rules, is the parent table. The other tables are child tables: Addresses, Ports, Flags and Values.
I have a GUI from which I can insert a new rule into the table; the primary key is created by a sequence in each table. A rule contains everything, i.e. IP addresses, ports, different flag values and some other variable values. Now I want to create relations between the tables.
The Rules table has a one-to-many relationship with the Address, Ports and Values tables, and the Address table has a many-to-one relationship with Rules. Can anybody tell me how I can create the relationships between the tables?
I have written some code, but I am getting errors. The code is as follows.
In the Rules class I have written this code:
public class Rules implements Serializable {
    @OneToMany(mappedBy="rules", targetEntity=Address.class, fetch=FetchType.EAGER)
    private Address address;
The Address table has a column SID as a foreign key.
public class Address implements Serializable {
    @ManyToOne(optional=false)
    @JoinColumn(name="SID", referencedColumnName="SID")
    private Rules rules;
Kindly help me and let me know what the problem with this code is.
Many thanks in advance.

If one Address can be in more than one Rule, then you do not have a OneToMany relationship; you have a ManyToMany relationship. So make both relationships a Collection and use the @ManyToMany annotation. You will also need a JoinTable for the relationship on the database, and a mappedBy on one side of the relationship (see the sketch below).
See,
http://en.wikibooks.org/wiki/Java_Persistence/ManyToMany
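Roughly, the mapping described above could look like the following. This is a minimal sketch only: the RULE_ADDRESS join table name, the ADDRESS_ID column and the sequence-generation details are assumptions you would adjust to your own schema, and each entity would go in its own file.

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import javax.persistence.*;

@Entity
public class Rules implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    @Column(name = "SID")
    private long sid;

    // Owning side: a join table links each rule to its addresses.
    @ManyToMany(fetch = FetchType.EAGER)
    @JoinTable(name = "RULE_ADDRESS",
               joinColumns = @JoinColumn(name = "SID"),
               inverseJoinColumns = @JoinColumn(name = "ADDRESS_ID"))
    private List<Address> addresses = new ArrayList<Address>();
}

@Entity
public class Address implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    @Column(name = "ADDRESS_ID")
    private long id;

    // Inverse side: simply points back at the owning collection.
    @ManyToMany(mappedBy = "addresses")
    private List<Rules> rules = new ArrayList<Rules>();
}

The Ports, Flags and Values tables would get the same treatment if an entry there can also belong to more than one rule; if each child row belongs to exactly one rule, keep @OneToMany/@ManyToOne, but make the @OneToMany side a Collection rather than a single Address field.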
James : http://www.eclipselink.org

Similar Messages

  • Upgrade Preparation: How to map applications, DTS packages, SSIS packages and reports to databases

    Hi,
    I am in the initial phase of upgrading SQL Server 2005 to SQL Server 2012. Right now, I'm taking as much inventory as I can from our current server, SQL Server 2005. Could anyone help me map the following:
    Map applications to databases
    Map DTS packages to databases
    Map SSIS packages to databases
    Map reports to databases
    Thank you!

    Some questions and suggestions:
    How are you planning to upgrade: in-place, side-by-side (parallel), or something else?
    Will this be an HA (clustered) setup or a standalone instance?
    How much time do you have between now and the actual migration? You will need to set up an environment for end-to-end testing and application validation, so that any differences in application or system variables are known before the actual deployment.
    It is also worth running Upgrade Advisor to check preparedness for the points below.
    Map applications to databases: if you already have dedicated databases and logins set up for each application, then mapping applications to databases will be a lot easier and less of a challenge (see the JDBC sketch further down in this reply for a quick way to inventory current connections).
    Map DTS packages to databases: Good link to check
    https://www.simple-talk.com/sql/ssis/dts-to-ssis-migration/
    Map SSIS packages to databases: Good link to check
    http://www.experts-exchange.com/Database/MS-SQL-Server/Q_28340818.html http://www.sqlservercentral.com/Forums/Topic1531839-2799-1.aspx
    Map reports to databases -- Good link for Reporting Services if you meant that:
    http://msdn.microsoft.com/en-us/library/ms143747.aspx
    http://www.mssqltips.com/sqlservertip/2627/migrating-sql-reporting-services-to-a-new-server/
    Good link to check other thing as well :
    http://thomaslarock.com/2013/03/upgrading-to-sql-2012-ten-things-you-dont-want-to-miss/
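    For the application-to-database part, one quick way to take a point-in-time inventory of which client programs are connected to which databases on the 2005 instance is to query the legacy sys.sysprocesses view. A minimal JDBC sketch follows; the host, port and credentials are placeholders, the Microsoft JDBC driver must be on the classpath, and it only sees applications that are connected at the moment you run it:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class AppToDatabaseMap {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for the SQL Server 2005 instance.
            String url = "jdbc:sqlserver://sql2005host:1433;databaseName=master";
            try (Connection con = DriverManager.getConnection(url, "inventory_user", "secret");
                 Statement st = con.createStatement();
                 // Distinct client program names and the databases they are currently using.
                 ResultSet rs = st.executeQuery(
                         "SELECT DISTINCT program_name, DB_NAME(dbid) AS database_name " +
                         "FROM sys.sysprocesses WHERE program_name <> ''")) {
                while (rs.next()) {
                    System.out.printf("%-50s -> %s%n",
                            rs.getString("program_name"), rs.getString("database_name"));
                }
            }
        }
    }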
    Santosh Singh

  • Difference between Transaction database and relational database

    What's the difference between a transaction database and a relational database?

    'Transaction' refers to the usage of a database. 'Relational' refers to the way in which a given database stores data.
    A 'transaction database' (or operational database) could be relational, hierarchical, etc. A transaction database supports business process flows and is typically an online, real-time system. The way that data is stored is typically based on the application(s). Companies often have multiple transaction databases.
    An 'operational data store' (ODS) is an integrated view or compilation of transaction data.
    Then you get into data warehouse databases, where the transaction data is optimized for querying, reporting, and analysis activities.

  • Help me please : Serious problems with collection-mapping, list-mapping and map-mapping

    Hi everybody;
    I have serious problems with list-mapping, collection-mapping and map-mapping.
    According to the specifications and requirements of the system I am working on, I need to get either a "list" of values or an individual value. I am working with an Oracle 9i database, Oracle 9i AS and Oracle 9i JDeveloper.
    I tried to map a master-detail relationship in an entity bean using list-mapping. That was very useful for getting a "list" of details, but when I wanted to get a single value I ran into problems with persistence, something about "saving a state", even though I just want the value of a single detail.
    I decided to change it to map-mapping, and the problem with the single detail was solved, but now I cannot get access to the whole set of details.
    Can anyone help me with that?
    I am very confused and do not know what to do.
    Does anyone have a solution for this problem?
    Thank you very much.

    Have you tried a restore in iTunes?

  • Where find java classes corresponding to message mapping and interfaces ?

    Hi Forum,
    when I create my objects in the Repository, like a message interface and a message mapping, corresponding Java classes are created. Where can I see these Java classes in XI's file system?

    Hi Sudeep,
    During the installation of XI we select a database, so all the objects and related things that we create in the IR and ID are saved in that database only. I don't know how to check the .class file for each object...
    Check these weblogs from Sravya, where she has given the table names where the IR and ID objects are stored:
    /people/sravya.talanki2/blog/2007/01/11/ripping-off-sap-xi-stack-133sharing-the-goodies-of-abap-api146s
    /people/sravya.talanki2/blog/2005/12/02/sxicache--ripped-off
    /people/sravya.talanki2/blog/2006/12/28/skelton-of-mapping-runtime-in-sap-xi
    regards
    BILL

  • Syntax of DDL options and related (table) column names

    Hi,
    where can I find something like a mapping between DDL options and the related table column names?
    For example, I have the table options PCTFREE, FREELISTS and NOCOMPRESS. The related table columns in USER_TABLES are PCT_FREE, FREELIST and COMPRESS.
    PCT(_)FREE differs by an underscore, FREELIST(S) by an "S", and it is "NOCOMPRESS" if COMPRESS has the value "Y(es)".
    I hope somebody can help.

    > So far I didn't find any information that is not in the DDL script generated from DBMS_METADATA.get_DDL.
    Alright, I'll give you an example:
    I create a table with the following DDL:
    "CREATE TABLE IntBuch (
    int_bunr integer NOT NULL,
    int_sdat double precision NOT NULL,
    int_hblz char(8) NOT NULL,
    int_hkto char(7) NOT NULL,
    int_hdat double precision NOT NULL,
    KtoNr char(7) NOT NULL,
    BLZ char(8) NOT NULL,
    CONSTRAINT PK_IntBuch PRIMARY KEY (int_bunr)
    USING INDEX
    PCTFREE 10
    STORAGE (
    INITIAL 1000
    NEXT 500
    PCTINCREASE 0
    MINEXTENTS 1
    MAXEXTENTS 4096
    PCTFREE 20
    LOGGING
    CREATE UNIQUE INDEX intid
    ON IntBuch (int_bunr DESC)
    CREATE INDEX hkto
    ON IntBuch (int_hblz,int_hkto)
    COMMENT ON TABLE IntBuch
    IS 'Kommentar zu DB-Tabelle InBuch'
    COMMENT ON COLUMN IntBuch.int_sdat IS 'Kommentar zu DB-Spalte int_sdat'
    ALTER TABLE IntBuch
    ADD CONSTRAINT Gutschrift FOREIGN KEY (int_hkto,int_hblz) REFERENCES Konto
    ON DELETE CASCADE
    ADD FOREIGN KEY (KtoNr,BLZ) REFERENCES Konto
    ADD FOREIGN KEY (int_bunr) REFERENCES Buchung
    ON DELETE CASCADE
    After that I read the DDL with DBMS_METADATA.get_DDL and I get
    " CREATE TABLE "UOENDE"."INTBUCH"
    (     "INT_BUNR" NUMBER(*,0) NOT NULL ENABLE,
         "INT_SDAT" FLOAT(126) NOT NULL ENABLE,
         "INT_HBLZ" CHAR(8) NOT NULL ENABLE,
         "INT_HKTO" CHAR(7) NOT NULL ENABLE,
         "INT_HDAT" FLOAT(126) NOT NULL ENABLE,
         "KTONR" CHAR(7) NOT NULL ENABLE,
         "BLZ" CHAR(8) NOT NULL ENABLE,
         CONSTRAINT "PK_INTBUCH" PRIMARY KEY ("INT_BUNR")
    USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 16384 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "USERS" ENABLE,
         CONSTRAINT "GUTSCHRIFT" FOREIGN KEY ("INT_HKTO", "INT_HBLZ")
         REFERENCES "UOENDE"."KONTO" ("KTONR", "BLZ") ON DELETE CASCADE ENABLE,
         FOREIGN KEY ("KTONR", "BLZ")
         REFERENCES "UOENDE"."KONTO" ("KTONR", "BLZ") ENABLE,
         FOREIGN KEY ("INT_BUNR")
         REFERENCES "UOENDE"."BUCHUNG" ("BU_NR") ON DELETE CASCADE ENABLE
    ) PCTFREE 20 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "USERS"
    If there are no more DDL options possible than that, it is fine for me. If there are more, but all of the options can be found in only one or two views of the data dictionary (USER_TABLES for table options; DBA_CONSTRAINTS and DBA_SEGMENTS for column options, primary and foreign keys; USER_INDEXES and USER_SEGMENTS for indexes ...), that is fine for me too, as long as I get to know which views are "sufficient" for that. If it is different from both of those, that's bad.
    > However if you want to use defaults instead of absolute values, it is better to remove certain parts, like the storage clause, from the generated output.
    I understand this, but I am more interested in having everything, not leaving parts out, so that the defaults end up in the database after running the DDL.
    > A totally different approach could be to
    > a) create a database link from one DB to another.
    > b) create table <new_table> as select * from <old_table@dbLink> where 1=2;
    I'll keep that in mind, thanks for that hint.
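    For completeness, here is a minimal JDBC sketch of the suggestion quoted above, suppressing the storage clause through a DBMS_METADATA transform parameter so that the database defaults apply when the generated DDL is re-run. The connection URL, schema and credentials are placeholders:

    import java.sql.CallableStatement;
    import java.sql.Clob;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ExtractTableDdl {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:oracle:thin:@dbhost:1521:orcl";   // placeholder connection data
            try (Connection con = DriverManager.getConnection(url, "uoende", "secret")) {
                // Session-level transform parameters control what GET_DDL emits;
                // setting STORAGE to false drops the storage clause from the output.
                try (CallableStatement cs = con.prepareCall(
                        "begin dbms_metadata.set_transform_param(" +
                        "dbms_metadata.session_transform, 'STORAGE', false); end;")) {
                    cs.execute();
                }
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT dbms_metadata.get_ddl('TABLE','INTBUCH') FROM dual")) {
                    if (rs.next()) {
                        Clob ddl = rs.getClob(1);
                        System.out.println(ddl.getSubString(1, (int) ddl.length()));
                    }
                }
            }
        }
    }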

  • Value Mapping  and Lookups

    Hi all,
    I know something about Value Mapping.
    And I heard that the Lookups (DB and RFC) are types of Value Mapping.
    I am confused how value mapping is related to lookups.
    Request you all to provide me some information regd.this.
    Thank you.
    Regards
    Krishna.

    Hi,
    All the links above are quite useful in understanding Value Mapping and RFC lookups.
    Both solutions can be used for getting values from R3 tables at runtime; however, there is a difference in the way the two operate.
    An RFC lookup is performed at runtime, since it is executed from a UDF in the message mapping. Thus at runtime a call is made to the R3 table and the value is fetched.
    With value mapping replication, the value mapping program is run from SAP R3 and the values from the table are loaded into the XI runtime cache. Thus at runtime, while performing the message mapping, no call to the R3 system is made; the values are already available in the XI cache.
    It is therefore quite obvious that value mapping is more performance-efficient, since it does not make a call to the SAP R3 system for each incoming message. If the table data does not change very frequently (since for every change the value mapping scenario has to be rerun to update the XI cache) and is not very large, then it is a good option compared to an RFC lookup.
    However, in cases where the data in the table changes frequently or the volume of data is too large, the RFC lookup should be the preferred choice.
    Hope it would be helpful.
    Thanks,
    Bhavish
    Kindly award points if comments are useful

  • Mapping and cube browsing

    Hi,
    I have built my first cube in OWB following the book "Oracle Warehouse Builder 11g: Getting Started", but I think I have some problems with my mappings.
    When I run (start) my stage mapping and dimension mappings I can see that some rows are selected and inserted, but when I run the mapping to the cube no rows are selected or inserted.
    Another question is how I can browse the cube I have created; I just want to make sure that data is present in this cube. I have tried to right-click my cube and select Data, but I get an error when I select Execute: ENT-06955: Cube Name SALES does not exist in OLAP schema ACME_DWH. Please deploy the Cube SALES into the OLAP schema ACME_DWH. If I use Database Control I can see SALES in the schema.
    I am using 11g R2
    Kind Regards,
    Søren

    Hi,
    When you create a dimension or a cube with a ROLAP implementation, its "Deployment Option" is set to "Deploy Data Object only" by default. If you want to browse data in the dimension/cube using the Data Viewer, you have to right-click the dimension/cube, click "Configure..." in the popup menu, find the "Deployment Option" property in the Configuration editor, set it to "Deploy All" or "Deploy to Catalog only", then deploy and run it again.
    However, there is another way to check dimension/cube data if you selected "Deploy Data Object only": find the dimension/cube implementation table, right-click on that table, then click "Data...".
    Cheers,
    Dawei

  • Horizontal Mapping and Flat mapping with Metadata Value Indicator

    Hi,
    I have an abstract class B which itself extends another abstract class A. There is no table for A; the fields in A are mapped to B. I believe this is called "horizontal mapping".
    C and D inherit from B. There is also a table named B (mapped to class B), but none for C or D; instances of C and D are recorded in table B. I believe this is called "flat mapping".
    B has a field foo whose possible values are 'fred' and 'wilma'.
    If foo='fred', then the record is of type C.
    If foo='wilma', then the record is of type D.
    I believe this is called a "class indicator" of type metadata.
    To express this, I have package.jdo which says
    <class name="A"/>
    <class name="B" persistence-capable-superclass="A"/>
    <class name="C" persistence-capable-superclass="B"/>
    <class name="D" persistence-capable-superclass="B"/>
    In B.mapping, I have
    <mapping>
    <package name="domain">
    <class name="B">
    <jdbc-class-map type="horizontal"/>
    <jdbc-class-ind type="metadata-value" column="foo"/>
    </class>
    </package>
    </mapping>
    B.java has a private String foo.
    In C.mapping, I have
    <mapping>
    <package name="domain">
    <class name="C">
    <jdbc-class-map type="flat"/>
    <jdbc-class-ind-value value="fred"/>
    field mappings for C
    </class>
    </package>
    </mapping>
    and similarly in D for value='wilma'
    My questions are...
    1. Is this kind of mapping supported by kodo?
    2. If so, is this configuration correct? I guess not, since I don't
    specify the table name anywhere. Where should it go?
    3. If I remove the "class indicator" mapping and run a simple test I get
    kodo.util.FatalUserException: There is no superclass mapping for mapping for "class domain.D".
        at kodo.jdbc.meta.FlatClassMapping.assertParentMapping(FlatClassMapping.java:49)
        at kodo.jdbc.meta.FlatClassMapping.getTable(FlatClassMapping.java:85)
        at kodo.jdbc.meta.OneToManyFieldMapping.fromMappingInfo(OneToManyFieldMapping.java:87)
        at kodo.jdbc.meta.RuntimeMappingProvider.getFieldMapping(RuntimeMappingProvider.java:160)
        at kodo.jdbc.meta.MappingRepository.getFieldMapping(MappingRepository.java:443)
        at kodo.jdbc.meta.AbstractClassMapping.getFieldMapping(AbstractClassMapping.java:949)
    4. If I run a simple test with the horizontal, flat and class-indicator
    mappings, I get
    kodo.jdbc.meta.MappingInfoNotFoundException: The "class-column" attribute/extension for the class indicator on type "domain.B.<class-indicator>" is missing or names a column that does not exist.
        at kodo.jdbc.meta.Mappings.invalidMapping(Mappings.java:135)
        at kodo.jdbc.meta.Mappings.invalidMapping(Mappings.java:121)
        at kodo.jdbc.meta.ColumnClassIndicator.fromMappingInfo(ColumnClassIndicator.java:95)
        at kodo.jdbc.meta.RuntimeMappingProvider.initialize(RuntimeMappingProvider.java:135)
        at kodo.jdbc.meta.MappingRepository.getMappingInternal(MappingRepository.java:378)
    What am I doing wrong?
    Thanks in advance
    Srini

    I solved this problem by removing the identifier field from the
    class/mapping (kodo support).
    Thanks
    Srini
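    One way to read that fix: the column used as the class indicator should not also be mapped as an ordinary persistent field, otherwise the mapping layer tries to write it twice. A minimal Java sketch of such a hierarchy follows; the field names are purely illustrative (not taken from the original code) and each class would normally sit in its own file in package domain:

    package domain;

    // Abstract root with no table of its own; its fields are mapped down into
    // B's table ("horizontal" mapping).
    abstract class A {
        protected String createdBy;    // illustrative common field
    }

    // Base class mapped to table B. The FOO column is used only as the class
    // indicator, so it is deliberately NOT declared as a persistent field here;
    // the mapping layer writes 'fred' or 'wilma' into it itself.
    abstract class B extends A {
        protected long id;             // illustrative primary key field
    }

    // "Flat" subclasses: their rows live in table B, distinguished by FOO.
    class C extends B {
        private String cOnlyField;     // illustrative field specific to C
    }

    class D extends B {
        private String dOnlyField;     // illustrative field specific to D
    }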
    Stephen Kim wrote:
    Do you have two fields mapped to the same column? Did you make sure you
    set everything which maps to the column?
    Srinivasan Ranganathan wrote:
    I found what was wrong with this, fixed it and got a different (more
    sensible) error. To correct this mapping, I specified B's mapping type as
    "base" and gave its table and pk names. Also, I moved the common field
    mappings to B.mapping so C.mapping and D.mapping only have fields that are
    specific to each.
    Now when I run a simple test, I get
    testC:
    kodo.util.FatalUserException: Attempt to set column "B.FOO" to two different values: (java.lang.Object)"java.lang.Object@2f608ac2", (java.lang.String)"fred" This can occur when you fail to set both sides of a two-sided relation between objects, or when you map different fields to the same column, but you do not keep the values of these fields in synch.
        at kodo.jdbc.runtime.VRow.setObjectInternal(VRow.java(Compiled Code))
        at kodo.jdbc.sql.AbstractRow.setObject(AbstractRow.java(Compiled Code))
        at kodo.jdbc.meta.ColumnClassIndicator.insert(ColumnClassIndicator.java:143)
        at kodo.jdbc.runtime.UpdateManagerImpl.insert(UpdateManagerImpl.java:216)
        at kodo.jdbc.runtime.UpdateManagerImpl.insert(UpdateManagerImpl.java:219)
        at kodo.jdbc.runtime.UpdateManagerImpl.flush(UpdateManagerImpl.java:108)
        at kodo.jdbc.runtime.UpdateManagerImpl.flush(UpdateManagerImpl.java:73)
    testD:
    kodo.util.FatalUserException: Attempt to set column "B.FOO" to two different values: (java.lang.Object)"java.lang.Object@2f608ac2", (java.lang.String)"wilma" This can occur when you fail to set both sides of a two-sided relation between objects, or when you map different fields to the same column, but you do not keep the values of these fields in synch.
        at kodo.jdbc.runtime.VRow.setObjectInternal(VRow.java(Compiled Code))
        at kodo.jdbc.sql.AbstractRow.setObject(AbstractRow.java(Compiled Code))
        at kodo.jdbc.meta.ColumnClassIndicator.insert(ColumnClassIndicator.java:143)
        at kodo.jdbc.runtime.UpdateManagerImpl.insert(UpdateManagerImpl.java:216)
        at kodo.jdbc.runtime.UpdateManagerImpl.insert(UpdateManagerImpl.java:219)
        at kodo.jdbc.runtime.UpdateManagerImpl.flush(UpdateManagerImpl.java:108)
        at kodo.jdbc.runtime.UpdateManagerImpl.flush(UpdateManagerImpl.java:73)
    I've checked for the two possible errors to the best of my knowledge. Any
    input to resolve this issue is appreciated.
    Thanks in advance
    Srini
    Steve Kim
    [email protected]
    SolarMetric Inc.
    http://www.solarmetric.com

  • Created searchBean map and stored on session fail!

    Hi :
    I ran into a problem when entering a JSP page; it seems like its "Created searchBean map and stored on session" step was missing. I cannot reproduce this error every time; it happens intermittently.
    Any idea about this ?
    =======================================================
    If the JSP page runs successfully, it produces a debug log like this:
    Storing JhsAuthorizationProxy object on session to allow EL access to request.isUserInRole() and/or
    JhsUser.hasAccess()
    19:09:06 DEBUG (JhsActionServlet) -Request class: com.evermind.server.http.AJPHttpServletRequest
    19:09:06 DEBUG (JhsActionServlet) -Request URI: /BIL/StartBIL3060MBaseGroup.do
    19:09:06 DEBUG (JhsActionServlet) -Request Character Encoding: ISO-8859-1
    19:09:06 DEBUG (JhsActionServlet) -Parameter ArgUser: 0438
    19:09:06 DEBUG (JhsActionServlet) -Request class: com.evermind.server.http.AJPHttpServletRequest
    19:09:06 DEBUG (JhsActionServlet) -Request URI: /BIL/BIL3060MBaseGroup.do
    19:09:06 DEBUG (JhsActionServlet) -Request Character Encoding: ISO-8859-1
    19:09:06 DEBUG (JhsActionServlet) -Parameter ArgUser: 0438
    19:09:07 DEBUG (JhsDataAction) -Executing action /BIL3060MBaseGroup
    19:09:07 DEBUG (JhsDataAction) -Created searchBean map and stored on session
    19:09:07 DEBUG (JhsDataAction) -Created new searchBean for BIL3060MBaseGroupUIModel and added to quick search bean map
    19:09:07 DEBUG (JhsDataAction) -Stored searchBean for BIL3060MBaseGroupUIModel on request
    19:09:07 DEBUG (JhsDataAction) -Setting findMode to true for iterator binding BIL3060MBaseGroupIterator
    19:09:07 DEBUG (JhsDataAction) -Setting max fetch size -1 temporarily to 0 for ViewObject BIL3060MBaseView1
    19:09:07 DEBUG (JhsDataAction) -ViewObject BilPAID_FLAGView1: value of bind param 0 set to PAID_FLAG
    19:09:07 DEBUG (JhsDataAction) -
    ========================================================
    and if the JSP page fails, the debug log looks like the following:
    07/09/20 19:07:42 0422crhin-jrung
    07/09/20 19:07:53 0433crhin-jrung
    07/09/20 19:07:53 [B@f55759
    07/09/20 19:07:53 [B@6fd560
    07/09/20 19:07:53 userPwd1:s_user_name:Johnson
    07/09/20 19:07:53 null
    19:07:53 DEBUG (JhsActionServlet) -Storing JhsAuthorizationProxy object on session to allow EL access to request.isUserInRole() and/or
      JhsUser.hasAccess()
    19:07:53 DEBUG (JhsActionServlet) -Request class: com.evermind.server.http.AJPHttpServletRequest
    19:07:53 DEBUG (JhsActionServlet) -Request URI: /BIL/StartBIL3060MBaseGroup.do
    19:07:53 DEBUG (JhsActionServlet) -Request Character Encoding: Big5
    19:07:53 DEBUG (JhsActionServlet) -Parameter ArgUser: 0422
    19:07:53 DEBUG (JhsActionServlet) -Request class: com.evermind.server.http.AJPHttpServletRequest
    19:07:53 DEBUG (JhsActionServlet) -Request URI: /BIL/BIL3060MBaseGroup.do
    19:07:53 DEBUG (JhsActionServlet) -Request Character Encoding: Big5
    19:07:53 DEBUG (JhsActionServlet) -Parameter ArgUser: 0438
    19:07:53 DEBUG (JhsDataAction) -Executing action /BIL3060MBaseGroup
    07/09/20 19:07:53 java.lang.NullPointerException
    07/09/20 19:07:53 at oracle.jheadstart.util.BindingUtils.findIterBinding(BindingUtils.java:116)
    07/09/20 19:07:53 at oracle.jheadstart.controller.strutsadf.action.JhsDataAction.applyIterBindParams(JhsDataAction.java:3450)
    07/09/20 19:07:53 at oracle.jheadstart.controller.strutsadf.action.JhsDataAction.prepareModel(JhsDataAction.java:3875)
    07/09/20 19:07:53 at oracle.adf.controller.struts.actions.DataAction.prepareModel(DataAction.java:486)
    07/09/20 19:07:53 at oracle.adf.controller.lifecycle.PageLifecycle.handleLifecycle(PageLifecycle.java:105)
    07/09/20 19:07:53 at oracle.adf.controller.struts.actions.DataAction.handleLifecycle(DataAction.java:222)
    07/09/20 19:07:53 at oracle.jheadstart.controller.strutsadf.action.JhsDataAction.handleLifecycle(JhsDataAction.java:506)
    07/09/20 19:07:53 at oracle.adf.controller.struts.actions.DataAction.execute(DataAction.java:153)
    07/09/20 19:07:53 at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484)
    07/09/20 19:07:53 at oracle.jheadstart.controller.strutsadf.JhsRequestProcessor.processActionPerform(JhsRequestProcessor.java:118)
    07/09/20 19:07:53 at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
    07/09/20 19:07:53 at oracle.jheadstart.controller.strutsadf.JhsRequestProcessor.process(JhsRequestProcessor.java:385)
    07/09/20 19:07:53 at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482)
    07/09/20 19:07:53 at oracle.jheadstart.controller.strutsadf.JhsActionServlet.process(JhsActionServlet.java:130)
    07/09/20 19:07:53 at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:507)
    07/09/20 19:07:53 at javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
    07/09/20 19:07:53 at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
    07/09/20 19:07:53 at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:824)
    07/09/20 19:07:53 at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:330)
    07/09/20 19:07:53 at com.evermind.server.http.ServletRequestDispatcher.forward(ServletRequestDispatcher.java:222)
    07/09/20 19:07:53 at org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1069)
    07/09/20 19:07:53 at org.apache.struts.tiles.TilesRequestProcessor.doForward(TilesRequestProcessor.java:274)
    07/09/20 19:07:53 at org.apache.struts.action.RequestProcessor.internalModuleRelativeForward(RequestProcessor.java:1012)
    07/09/20 19:07:53 at org.apache.struts.tiles.TilesRequestProcessor.internalModuleRelativeForward(TilesRequestProcessor.java:345)
    07/09/20 19:07:53 at org.apache.struts.action.RequestProcessor.processForward(RequestProcessor.java:582)
    07/09/20 19:07:53 at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:260)
    07/09/20 19:07:53 at oracle.jheadstart.controller.strutsadf.JhsRequestProcessor.process(JhsRequestProcessor.java:385)
    07/09/20 19:07:53 at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482)
    07/09/20 19:07:53 at oracle.jheadstart.controller.strutsadf.JhsActionServlet.process(JhsActionServlet.java:130)
    07/09/20 19:07:53 at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:507)
    07/09/20 19:07:53 at javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
    07/09/20 19:07:53 at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
    07/09/20 19:07:53 at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:65)
    07/09/20 19:07:53 at oracle.jheadstart.controller.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java:176)
    07/09/20 19:07:53 at com.evermind.server.http.EvermindFilterChain.doFilter(EvermindFilterChain.java:16)
    07/09/20 19:07:53 at oracle.adf.model.servlet.ADFBindingFilter.doFilter(ADFBindingFilter.java:239)
    07/09/20 19:07:53 at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:659)
    07/09/20 19:07:53 at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:330)
    07/09/20 19:07:53 at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:830)
    07/09/20 19:07:53 at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:224)
    07/09/20 19:07:53 at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:133)
    07/09/20 19:07:53 at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:192)
    07/09/20 19:07:53 at java.lang.Thread.run(Thread.java:534)
    19:07:53 DEBUG (JhsDataAction) -Executing findForward
    19:07:53 DEBUG (JhsDataAction) -Forward set by parameter property
    19:07:53 DEBUG (JhsDataAction) -Forwarding to: /WEB-INF/page/BIL3060MBaseGroup.jsp
    19:07:53 WARN (RequestProcessor) -Unhandled Exception thrown: class java.lang.NullPointerException

    Eron,
    It looks like there is something wrong with the runtime representation of the pagedef that is not passed to ADF correctly.
    It could be a JDeveloper/ADF issue that is not related to JHeadstart. To simplify the test case, you could create a simple drag-and-drop ADF application without JHeadstart and see if the same problem occurs there. If so, can you please log a TAR at MetaLink ( http://metalink.oracle.com/ ), or ask this question at the JDeveloper forum at http://otn.oracle.com/discussionforums/jdev.html ?
    Thanks,
    Sandra Muller
    JHeadstart Team
    Oracle Consulting

  • CFINCLUDE and relative path

    I have a website which has error catching in it, and from time to time this error comes through when using cfinclude:
    Could not find the included template
    project56/images1/page1.html. Note: If you wish to use an absolute
    template path (e.g. TEMPLATE="/mypath/index.cfm") with CFINCLUDE
    then you must create a mapping for the path using the ColdFusion
    Administrator. Using relative paths (e.g. TEMPLATE="index.cfm" or
    TEMPLATE="../index.cfm") does not require the creation of any
    special mappings. It is therefore recommended that you use relative
    paths with CFINCLUDE whenever possible.
    Why would this error occur for a few visitors and not for all when browsing my website?
    If I have to put some sort of mapping into CF Administrator, what should I put as the mapping, and why would this help?
    Thanks in advance for your help.
    Simon.

    The only thing I can think of is: are you sure the page the 'affected' users are clicking from is in a parent folder of the 'project56' folder? The error occurs when the CFINCLUDE is looking in the wrong folder, or when you try to include URL variables in the include statement.
    If you INCLUDE project56/images1/page1.html it will work, but if you INCLUDE project56/images1/page1.html?user=123 then you get an error, because the INCLUDE looks for the actual text of what's between the quotation marks; it will not pass variables.
    I'd check the folder structure of where they are clicking from. In your error page, just output the referring page.

  • My location service has not worked from day one; the response is "cannot determine location". What do I need to do to see my location in Maps and other applications, even though Location Services is enabled in Settings?

    From day one the response has been "cannot determine location", while using iOS 5.1. What do I need to do to see my location in Maps and other applications, even though Location Services is enabled in Settings? Does this mean there is a problem with my iPad? It is currently running iOS 6.

    Your location is determined from a location database which contains the MAC addresses of routers and their physical location. A MAC address is a unique number which is built-in to all network devices when manufactured. The database is built and updated by actually driving around and mapping router locations.
    Since you just moved, the location of your router has not been updated in the database, and when you use Maps you will see your old location.
    If you go outside your house you can be correctly located because your iPod sees other routers which are in the location database. Note that you do not need to be connected to a router for your iPod to see it and obtain its MAC address.

  • What are the differences between Oracle and other NoSQL database

    Hi all,
    I would like to know what the differences are between Oracle and other NoSQL databases.
    When and why should we use Oracle?
    Is Oracle NoSQL Database linked with the Big Data Appliance?
    Can we use map-reduce on a single personal computer? How should we install Oracle NoSQL Database to use map-reduce on a single personal computer?
    Do we also have eventual consistency with Oracle NoSQL Database? Can we lose data if the master node fails?
    Are transactions ACID with Oracle NoSQL database? How can we prove it?
    Thanks.

    893771 wrote:
    > Hi all, I would like to know what the differences are between Oracle and other NoSQL databases. When and why should we use Oracle?
    I suggest that you start here:
    http://www.oracle.com/technetwork/database/nosqldb/overview/index.html
    > Is Oracle NoSQL Database linked with the Big Data Appliance?
    Yes, Oracle NoSQL Database will be a component of the Big Data Appliance.
    > Can we use map-reduce on a single personal computer? How should we install Oracle NoSQL Database to use map-reduce on a single personal computer?
    Yes, I believe you can run M/R on a single computer. Consult the various pieces of documentation available on the web. You may run Oracle NoSQL Database on the same computer that you are running M/R on, but it is likely that they will compete for CPU and IO resources, and therefore performance may suffer.
    > Do we also have eventual consistency with Oracle NoSQL Database?
    Yes.
    > Can we lose data if the master node fails?
    If you run Oracle NoSQL Database with the default (recommended) durability settings, then if the master fails, a new one will be elected and data is not lost.
    > Are transactions ACID with Oracle NoSQL Database? How can we prove it?
    Yes, each operation is executed in an ACID transaction. The API has the concept of "multi" operations, which allow the caller to perform multiple operations on sets of records with the same major key but different minor keys. Those operations are also performed within a transaction.
    Charles Lamb
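    As a small illustration of the per-operation ACID point above, here is a minimal put/get against a local KVLite instance. The store name "kvstore", the helper host "localhost:5000" and the availability of the kvclient jar on the classpath are assumptions:

    import oracle.kv.KVStore;
    import oracle.kv.KVStoreConfig;
    import oracle.kv.KVStoreFactory;
    import oracle.kv.Key;
    import oracle.kv.Value;
    import oracle.kv.ValueVersion;

    public class HelloKVStore {
        public static void main(String[] args) {
            // Connect using the default (recommended) durability and consistency settings.
            KVStore store = KVStoreFactory.getStore(
                    new KVStoreConfig("kvstore", "localhost:5000"));

            Key key = Key.createKey("hello");
            store.put(key, Value.createValue("world".getBytes()));   // one ACID operation

            ValueVersion vv = store.get(key);                        // another ACID operation
            if (vv != null) {
                System.out.println(new String(vv.getValue().getValue()));
            }
            store.close();
        }
    }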

  • Engineer between Logical Model and Relational Model

    I am trying to update changes from one model to the other, but duplicate entries are generated instead of updates.
    The Logical Model was imported from Oracle Designer, the Relational Model was imported from data dictionary.
    Our rule is, that the name of Entities/Tables and Attribute/Columns are identical.
    So I changed the Naming Standard of the Logical Model in Preferences
    from Separator = Space to Separator = Character with char = Underline.
    If the entity does not exist, it is created with the correct name.
    If the entity already exists, a new entity is created with the name Namev1.
    The same happens when I try to propagate changes in the other direction.
    How can I achieve that the existing entity (or table) is updated instead of a new one being created?
    Or, in other words, is there a way to link entities to their corresponding tables?
    Walter

    Hi Walter,
    > The Logical Model was imported from Oracle Designer, the Relational Model was imported from data dictionary
    It's good to import entities and their related tables from the Designer repository together. Data Modeler will import the link between them and use that link when synchronizing the logical and relational models. After that you can import details for the physical model from the database.
    If you don't have tables in the Designer repository and keep the same names for entities, attributes, tables and columns, then you can engineer the logical model to relational and import details from the database.
    Philip

  • Error publishing plsql webservice (xml schema mapping and/or serializer)

    Hi guys,
    I have a problem when publishing PL/SQL web services from JDeveloper 11.
    The scenario is something like this:
    1) I have a connection to a 9i database, and the objects (packages, object types, etc.) are all there.
    2) In this case, I can publish the package spec using the option "Publish as JAX-RPC Web Service" normally.
    3) An 11g database was created, and I compiled the objects in that environment.
    4) Then I created a new connection to the 11g database.
    5) In this case, when I publish the web service, the following error occurs: "The following types used by the program unit do not have an XML Schema mapping and/or serializer specified: REC_DMP_REMESSA"
    I have no idea how to solve this. Can someone help?
    Thanks in advance.
    Best Regards,
    Gustavo

    Duplicate of https://forums.oracle.com/thread/2610823
    Timo
