Mapping Queries

Hi All,
I have the following mapping queries.
Can you please help me?
1. I have the following requirement wherein the same field is supposed to be mapped more than once:
ORDERS05/ZORDERS05/E1EDK03[QUALF == "012"]
ORDERS05/ZORDERS05/E1EDK03[QUALF == "002"]
Similarly, many other fields are supposed to be mapped with different constants again and again.
Can I do it by duplicating the target fields?
2. I have a mapping requirement such that I need to map the source to the target while removing leading zeroes.
Do I need to use any node function for it?
3. I have a mapping requirement such that I need to map one source field to one target with the first occurrence of the source; similarly, I need to map another source field to another target with the second occurrence of the source field. How can I do it?
4. I have a requirement to "Use look up in value table with tag concatenated
SCP|Header|<<root element>/DocumentHeader/MessageSubject"
Along with that, one condition is also mentioned: "Do not map the segment when no entry found in the value table".
How do I implement this condition along with the value mapping?
5. I have a mapping requirement wherein it is mentioned that for every target element a context should be created. Also, the condition "Do not map this segment when no entry found in the value mapping table" is mentioned. How can I do it?
Can you please help me?
Thanks in advance.
Edited by: Shweta Kullkarni on Jun 17, 2010 8:42 AM

Hi,
>> 1.I have the following requirement wherein the same field is supposed to be mapped more than once
ORDERS05/ZORDERS05/E1EDK03[QUALF == "012"]
ORDERS05/ZORDERS05/E1EDK03[QUALF == "002"]
Similarly many other fields are supposed to be mapped with the different constants again and again
Can I do it by duplicating the target fields?
Sol: Yes, you can achieve it by duplicating the target fields.
>> 2. I have a mapping requirement such that I need to map the source to the target while removing leading zeroes.
     Do I need to use any node function for it?
Sol: Use the FormatNumber function, pass the number format parameter as 0 (zero), and map it between the source and target fields.
OR use the following UDF:
try {
    // Parse and re-print the number to drop leading zeroes.
    String input = String.valueOf(Integer.parseInt(a[0]));
    result.addValue(input);
} catch (Exception e) {
    // Not a parsable number: pass the value through unchanged.
    result.addValue(a[0]);
}
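Note that Integer.parseInt in the UDF above overflows once the value exceeds nine digits; a regex-based variant avoids numeric parsing entirely. This is a sketch with an illustrative class name, not a standard SAP function:

```java
// Strip leading zeroes without numeric parsing, so arbitrarily long
// or non-numeric values pass through safely. "000" collapses to "0".
public class LeadingZeros {
    public static String strip(String value) {
        if (value == null || value.isEmpty()) {
            return value;
        }
        // Drop leading zeroes, but keep one zero if the value is all zeroes.
        return value.replaceFirst("^0+(?!$)", "");
    }
}
```

In a graphical mapping the same one-liner can be dropped into a simple UDF body in place of the try/catch version.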
>> 4.I have a requirement to " Use look up in value table with tag concatenated
SCP|Header|<<root element>/DocumentHeader/MessageSubject"
Along with that, one condition is also mentioned: "Do not map the segment when no entry found in the value table".
How do I implement this condition along with the value mapping?
Sol: Refer: http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/00ee347e-aabb-2a10-b298-d15a1ebf43c5
Regards,
Abid

Similar Messages

  • XSLT mapping queries.

    Hi,
    I have a scenario in which I need to post data to a JMS queue from a proxy call
    [Proxy > PI > JMS]. Here the MQ message is fed to a web application.
    The scenario contains two mappings, viz. a message mapping and an XSLT mapping.
    The message mapping runs first, and then the XSLT mapping is called.
    In the XSLT mapping, a CDATA tag is used to skip the XML parsing and send the payload to the JMS queue for HTML output in the web application.
    My doubt is how I can achieve table formatting from PI. A few options could be:
    1) Incorporate XSL in the CDATA tag of the XSLT mapping to produce HTML tags for the table format?
    2) Instead of the JMS queue, should I post to an HTTP receiver directly?
    3) Or does the XSL need to be added on the web application side for the XML sent from PI?
    Please help my understanding.
    Thanks
    PV.

    Hi,
    Try a Java proxy: you can update data directly in the web application. Alternatively, if your web application team is ready to accept data in the form of a web service, you do not need to generate output in the form of HTML at all.
    Regards,
    Raj

  • Mapping Queries-File to iDoc

    Hi There,
    I am working on a File-to-IDoc scenario. I will be getting employee details in an XML file, and I have to create an IDoc from it, which will create the employee in SAP ECC 6.
    I have already mapped the fields that come from the XML file, but I don't know what to do with some of these fields:
    BEGIN
    EDI_DC40
    H1HRSM_HEAD
    ..SEGMENT
    ..E1P0000
    ..E1P0001
    I cannot disable those fields whose children are in use.
    Please suggest.
    Thanks,
    Nishant

    Hi All,
    Thanks for your quick replies,
    I am not mapping them with a constant because, while loading that IDoc in SAP, it must be expecting some specific value. I will be creating the employee using the converted IDoc, so if I map a constant value to those fields, will my program in SAP still work?
    Thanks,
    Nishant

  • SQL Developer: Where are Map View queries stored

    SQL Developer latest version, Linux x86_64, Oracle EE latest version.
    I have looked through a lot of docs, schemas, and files but cannot for the life of me figure out where these are being stored. A little help?

    Hey there,
    Oh yes indeed. I work at a university and am introducing students to Oracle Spatial. The ability to give the students, typically graduate students, a basic package of predefined map queries would be very helpful. Unless the file contains other information, it would be nice to know just what file it is; it could then simply be handed off with an instruction telling them where to drop it.
    Also, the way the select list is handled when editing / running those queries is a bit quirky.
    I don't know if you saw my other post re: "Point data Display Size"? I cannot seem to find a way to alter the size of a displayed point; it seems a bit odd and mostly linked to the number of points being displayed. The work we do is purely road networks, so we have lots of 'links and nodes' to display (like the complete highway network of the SF Bay Area, down to street level at times). Displaying the nodes as point data and the links as line strings between points is critical, so the ability to size the elements for display is as well.
    Thanks for the reply!

  • SAP PI mapping query

    Hi Experts,
    I have two mapping queries:
    1. Field A maps to field B. For an e-mail address coming from field A, I have to check whether '@' and '.' are present or not.
    2. Field A maps to field B. The value coming from field A will be numeric only and should be migrated with up to 2 decimal places (e.g. 34.21), ignoring the rest if any.
    Please let me know.
    Regards,
    Aniruddha

    Check the UDF given by Satish in the thread "mapping help -- Number format".
    >> For a E-Mail address coming from field A I have to check whether @ and . are available or not.
    You can use standard functions for this:
    A -------------
                   indexOf ---- greater (Constant(0) as 2nd input) ----
    Constant(.) ---                                                    AND ---- ifWithoutElse(A ---- THEN) ---- Target
    A -------------
                   indexOf ---- greater (Constant(0) as 2nd input) ----
    Constant(@) ---
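    The graphical logic above (two indexOf checks joined by AND) can equally be written as one UDF, and the second query's two-decimal requirement can be met by truncating with BigDecimal. The sketch below is illustrative (class and method names are assumptions, not SAP-delivered functions):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class MappingChecks {
    // Mirrors the indexOf/AND diagram: both '@' and '.' must occur
    // after position 0 for the value to pass.
    public static boolean looksLikeEmail(String value) {
        return value != null && value.indexOf('@') > 0 && value.indexOf('.') > 0;
    }

    // Keep at most two decimal places and ignore the rest (truncate,
    // not round), e.g. "34.219" -> "34.21".
    public static String toTwoDecimals(String value) {
        return new BigDecimal(value).setScale(2, RoundingMode.DOWN).toPlainString();
    }
}
```

    Truncation via RoundingMode.DOWN matches the "ignore the rest if any" wording; use HALF_UP instead if rounding is actually wanted.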

  • SQL Developer Map View

    So how does one change the size of a displayed point that is the result of an SQL query returning a geometry? Example result provided below:
    SQL> select geom from nodes where rownum <= 10 ;
    GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_ORDINATES)
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-93.71519, 32.45661, NULL), NULL, NULL)
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-87.23188, 30.42764, NULL), NULL, NULL)
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-92.43636, 42.5638, NULL), NULL, NULL)
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-76.9374, 42.82129, NULL), NULL, NULL)
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-92.58145, 42.70366, NULL), NULL, NULL)
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-93.99919, 32.53761, NULL), NULL, NULL)
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-93.66337, 34.58685, NULL), NULL, NULL)
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-95.15354, 43.14278, NULL), NULL, NULL)
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-77.71983, 42.57151, NULL), NULL, NULL)
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-77.69377, 42.56063, NULL), NULL, NULL)
    10 rows selected.

    So when I execute this query in a map view, I get points that take up the entire screen. I cannot seem to find a place to alter the size of the displayed points. Lines map out just fine from a query, e.g.:
    SQL> select geom from links where rownum <= 10 ;
    GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_ORDINATES)
    SDO_GEOMETRY(2002, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(
    -101.76187, 19.99183, -101.76198, 19.99239))
    SDO_GEOMETRY(2002, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(
    -99.32084, 20.04738, -99.32009, 20.04748))
    SDO_GEOMETRY(2002, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(
    -99.09314, 20.24762, -99.09313, 20.24777))
    SDO_GEOMETRY(2002, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(
    -103.04597, 23.81877, -103.04649, 23.81959))
    SDO_GEOMETRY(2002, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(
    -99.10172, 20.15821, -99.10106, 20.15832))
    SDO_GEOMETRY(2002, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(
    -108.33206, 26.70757, -108.33154, 26.70767))
    SDO_GEOMETRY(2002, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(
    -108.31886, 26.70537, -108.31875, 26.70562))
    SDO_GEOMETRY(2002, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(
    -99.3392, 20.05963, -99.33956, 20.05967))
    SDO_GEOMETRY(2002, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(
    -98.77567, 20.12205, -98.77588, 20.12236))
    SDO_GEOMETRY(2002, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(
    -98.85369, 20.02178, -98.85325, 20.02202))
    10 rows selected.

    These map perfectly and display just fine, although I would like to be able to adjust the link width.
    Any help ?

    Hey there,
    Oh yes indeed. I work at a university and am introducing students to Oracle Spatial. The ability to give the students, typically graduate students, a basic package of predefined map queries would be very helpful. Unless the file contains other information, it would be nice to know just what file it is; it could then simply be handed off with an instruction telling them where to drop it.
    Also, the way the select list is handled when editing / running those queries is a bit quirky.
    I don't know if you saw my other post re: "Point data Display Size"? I cannot seem to find a way to alter the size of a displayed point; it seems a bit odd and mostly linked to the number of points being displayed. The work we do is purely road networks, so we have lots of 'links and nodes' to display (like the complete highway network of the SF Bay Area, down to street level at times). Displaying the nodes as point data and the links as line strings between points is critical, so the ability to size the elements for display is as well.
    Thanks for the reply!

  • BEx Analyzer: no visible role and working sheets created by queries

    Hello experts,
    after creating new workbooks under a role, I can't find them anywhere the next day.
    For example: after launching BEx Analyzer 7.0, I open a workbook, connect via the BW/BI development system, and then check the roles. I can't find my saved user workbooks or local objects that were saved one day before. They were under the PFCG-created user role (example: Z_Role_External_).
    When I open the folder QUERIES, I have the include for the user role and can manage and call the designed sheets in my workaround.
    One more piece of information about what happened before this: last week I patched BW-BI 7.0 with the following patches:
    SAPKA70020
    SAPKB70021
    SAPKA70021
    KIPYJ7K and KIPYJ7L
    SAPKW70022
    SAPKW70023
    After this action, I heard about this problem in BW-BI: "no user role and no test workbooks" under my user profile.
    Does anybody have an idea how to solve this problem?
    Thanks
    Michael

    Hi,
    this forum is about Web Intelligence. I would suggest you post your questions into the BusinessExplorer (BEx) Forums.
    ingo

  • How to access remote tables on Oracle from an Access front end?

    Note: Access application and access tables are in separate .mdb files.
    I've successfully migrated my Access 2000 tables to Oracle, and there were no errors during the migration. The two original Access tables were renamed to tblBOMdetail_L and tblPartInfo_L, and two new linked tables were created with the _R name. Mapping queries were created for tblBOMdetail and tblPartInfo.
    However, when I launch my Access application, I get the following error: "The Microsoft Jet database engine cannot find the input table or query 'tblPartInfo'. Make sure that it exists and that its name is spelled correctly."
    Is there something more that I have to do to enable this connection. I already have the necessary ODBC connection set up.
    Thanks.

    Hi ,
    The first thing you need to verify is that you can view the data in the tables with _R appended to their name. This will verify whether the ODBC link is set up correctly and pointing at the correct Oracle table in the Oracle database.
    Another point: are you still using 2 .mdb files? If so, since your data is now in Oracle, it may be prudent to use only one .mdb.
    If you need more help then please contact [email protected]
    John

  • Using stored procedures for insert, update and delete

    Hello all;
    We have a question from our customer (who is the DBA and has not used TopLink in the past) about whether it makes sense to tie TopLink into existing stored procedures to save and retrieve information.
    Is it possible?
    Is there any circumstance under which it is a good idea?
    Thanks

    In TopLink any operation for which TopLink generates SQL, can be replaced by custom SQL, or a stored procedure call.
    Custom SQL or stored procedures can be used for each of the descriptor's CRUD operations, but also for mapping queries and named queries. The Mapping Workbench only supports defining custom SQL for the descriptor CRUD operations and named queries, so many of the stored procedure calls from the descriptors and mappings will need to be defined through amendment methods.
    Whether it makes sense or not depends on the application, the company, and their requirements. It will add significant overhead to the development process to have to define and maintain all of the stored procedures, and the stored procedure calls in the descriptors. You may wish to develop your application using TopLink-generated SQL, and once you have the model and queries stabilized, then switch to using stored procedures.
    Whether it is a good idea depends on the application and the company and their requirements. Stored procedures may give the DBA more freedom to change the data-model once in production, and may allow for adding database-level security checks. In general using stored procedures will not improve performance if the procedures contain the same SQL that would have be executed anyway, but they may allow for the DBA to tune the SQL better.

  • BEx Analyzer: no visible role and working sheets

    Hello experts,
    after creating new workbooks under a role, I can't find them anywhere the next day.
    For example: after launching BEx Analyzer 7.0, I open a workbook, connect via the BW/BI development system, and then check the roles. I can't find my saved user workbooks or local objects that were saved one day before. They were under the PFCG-created user role (example: Z_Role_External_).
    When I open the folder QUERIES, I have the include for the user role and can manage and call the designed sheets in my workaround.
    One more piece of information about what happened before this: last week I patched BW-BI 7.0 with the following patches:
    SAPKA70020
    SAPKB70021
    SAPKA70021
    KIPYJ7K and KIPYJ7L
    SAPKW70022
    SAPKW70023
    After this action, I heard about this problem in BW-BI: "no user role and no test workbooks" under my user profile.
    Does anybody have an idea how to solve this problem?
    Thanks
    Michael

    Found this thread with an alleged fix from SAP:
    Unable to see Roles in Bex Analyzer  - even with SAP_ALL!

  • Location appliance tracking limit??

    Hello,
    One question that I haven't found an answer for yet!
    When the location appliance reaches its maximum of 2,500 tracked devices, what happens next?
    Does it stop collecting information for new devices?
    Does it delete information for older devices, thereby allowing tracking of new devices?
    Does it stop working altogether?
    Or something else?
    Thanks for any clarification.
    Jorge.

    Hi Jorge,
    I am not sure if this would answer your query, but this is what I managed to find -
    The location appliance can track up to 2,500 elements. You can track the following elements: client stations, active asset tags, and rogue clients and access points. Updates on the locations of elements being tracked are provided to the location server by the Cisco wireless LAN controller.
    Only those elements designated for tracking by the controller are viewable in Cisco WCS maps, queries and reports. No events and alarms are collected for non-tracked elements and they are not used in calculating the 2,500 element limit.
    You can modify the following tracking parameters using Cisco WCS:
    •Enable and disable which element locations (client stations, active asset tags; and rogue clients and access points) you actively track
    •Set limits on how many of a specific element you want to track
    You can set limits on how many of a specific element you wish to track. For example, given a limit of 2,500 trackable units, you could set a limit to track only 1,500 client stations. Once the tracking limit is met, the number of elements not being tracked is summarized on the Tracking Parameters page.
    •Disable tracking and reporting of ad hoc rogue clients and access points
    http://www.cisco.com/en/US/docs/wireless/location/2700/4.0/configuration/guide/lacg_ch4.html#wp998931
    A major alert appears on the Advanced Parameter window if the tracked elements limit of 2,500 for the location server is reached.
    http://www.cisco.com/en/US/docs/wireless/location/2700/4.0/configuration/guide/lacg_ch8.html#wp999251
    In addition to this I found a resolved issue in release 3.0.37.0 -
    CSCse76683-Previous to this release, once the 2,500 limit of supported elements for the location appliance was met, no new location calculations were reported for any element beyond the 2,500 limit. Location calculations were reported once an element within the recognized 2,500 elements aged out. This condition occurred even when polling was turned "off."
    Symptom:
    The location calculation is not happening for some of the elements. There are ERROR events saying 'Location limit hit, can not add any more location elements'. Even if polling is turned OFF for the element type that has too many entries, the location is not calculated for new elements.
    -> Sushil

  • Difference between 'Always Refresh' and 'Disable Cache Hits'

    We are using clustered application servers, each with its own cache. We are not doing any cache synchronization, so for selected objects we must ensure that the cache does not return stale objects.
    I see two options in the MWB for refreshing and disabling the cache. Can somebody explain the difference between the two and recommend what I should do to address my problem?
    Thanks in advance.

    These are similar settings and are often used together:
    'Disable Cache Hits'
    Is used by TopLink when a ReadObjectQuery is executed with only the PK fields. By default TopLink will short-circuit going to the database and try to find the object in the cache first. This includes 1:1 mapping queries. Turning this on will cause TopLink to go to the database instead of the cache in these cases.
    'Always Refresh'
    Is used to tell TopLink how to handle the results returned from a query against the database. By default TopLink will trust the cached version of any instances already cached. When this is enabled, or the refreshIdentityMapResult is turned on for a specific query the results of the query are used to refresh the values of the cached version.
    The trick with always-refresh is that it does not force all queries to go to the database. It simply forces all returned rows to refresh cached instances.
    These are often selected by customers who never want to trust a cached instance. They are typically used together to get that effect. This configuration will get minimal performance gain from the cache but still require the cache for object-identity (avoid duplicate instances).
    Be careful though because these will have the effect for EVERY query and mapping traversal. I typically prefer to manually turn this on for my specific queries where I need to get the latest version from the database versus turning it on for all queries.
    On the query the methods you will be interested in are:
    query.refreshIdentityMapResult()
    This will force the query to the database and have the results refresh the cached instance if it already exists.
    One setting that is of definite interest is
    descriptor.onlyRefreshIfNewerVersion();
    This must be used in conjunction with the above query refresh and optimistic locking, but it will avoid additional unnecessary steps if the row has not changed since the cached version was read.
    Doug
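    The interplay of the two settings can be sketched as a toy cache model (illustrative code only, not the TopLink API):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the two descriptor settings described above.
// The "database" map always holds the latest committed values.
public class CacheDemo {
    private final Map<Integer, String> db = new HashMap<>();
    private final Map<Integer, String> cache = new HashMap<>();
    public boolean disableCacheHits = false; // PK reads must hit the database
    public boolean alwaysRefresh = false;    // returned rows refresh the cache

    public void dbPut(int pk, String value) { db.put(pk, value); }

    public String readByPk(int pk) {
        // 'Disable Cache Hits' off: a PK read short-circuits to the cache.
        if (!disableCacheHits && cache.containsKey(pk)) {
            return cache.get(pk);
        }
        String row = db.get(pk);
        // 'Always Refresh' on: the returned row overwrites the cached copy;
        // otherwise an existing cached instance is trusted over the row.
        if (alwaysRefresh || !cache.containsKey(pk)) {
            cache.put(pk, row);
        }
        return cache.get(pk);
    }
}
```

    Running the model shows why the two settings are typically enabled together: with only 'Disable Cache Hits' on, the query goes to the database, but the stale cached instance is still returned.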

  • Map complicated queries to attributes?

    OK, well, I have a class which has a 1-many mapping. The queries for this are getting kind of complicated, and I'm wondering if I can essentially map the subqueries to a Java property on the parent class.
    In other words, I have class A mapped one-to-many to descriptor B.
    I have several values for class B (call them 1, 2, 3). Sometimes I need to know if something has a 2 but no 3s; sometimes I need to know if it has a 1 entry but no 2s, etc. Some of these involve SQL subqueries. I'm wondering if I can map them to a property in the Java class and kind of "pre-map" the SQL/conditions to get a certain status (i.e., a single read-only property which returns: it has a 1 but no 2s, a 2 but no 3s, etc.).
    Is this possible? Should I use named queries and pre-build them that way? Any advice would be appreciated.

    A bit more brute force... i.e.,
    public class A {
        public Collection b = new ArrayList();
        public Collection b1 = new ArrayList();
        public Collection b2 = new ArrayList();
        public void addB(B aB) {
            aB.setOwner(this);
            b.add(aB);
            if (aB.value() == 1) b1.add(aB);
            if (aB.value() == 2) b2.add(aB);
        }
    }
    Of course, this is not a good idea if you have a lot of these different values, but given you were asking if you could somehow map them to a query, this doesn't seem that unreasonable...
    - Don

  • Relational queries through JDBC with the help of Kodo's metadata for O/R mapping

    Due to JDOQL's limitations (inability to express joins, when relationships
    are not modeled as object references), I find myself needing to drop down to
    expressing some queries in SQL through JDBC. However, I still want my Java
    code to remain independent of the O/R mapping. I would like to be able to
    formulate the SQL without hardcoding any knowledge of the relational table
    and column names, by using Kodo's metadata. After poking around the Kodo
    Javadocs, it appears as though the relevant calls are as follows:
    ClassMetaData cmd = ClassMetaData.getInstance(MyPCObject.class, pm);
    FieldMetaData fmd = cmd.getDeclaredField( "myField" );
    PersistenceManagerFactory pmf = pm.getPersistenceManagerFactory();
    JDBCConfiguration conf = (JDBCConfiguration)
    ((EEPersistenceManagerFactory)pmf).getConfiguration();
    ClassResolver resolver = pm.getClassResolver(MyPCObject.class);
    Connector connector = new PersistenceManagerConnector(
    (PersistenceManagerImpl) pm );
    DBDictionary dict = conf.getDictionary( connector );
    FieldMapping fm = ClassMapping.getFieldMapping(fmd, conf, resolver, dict);
    Column[] cols = fm.getDataColumns();
    Does that look about right?
    Here's what I'm trying to do:
    class Foo {
        String name; // application identity
        String bar;  // foreign key to Bar
    }
    class Bar {
        String name; // application identity
        int weight;
    }
    Let's say I want to query for all Foo instances for which its bar.weight >
    100. Clearly this is trivial to do in JDOQL, if Foo.bar is an object
    reference to Bar. But there are frequently good reasons for modeling
    relationships as above, for example when Foo and Bar are DTOs exposed by the
    remote interface of an EJB. (Yeah, yeah, I'm lazy, using my
    PersistenceCapable classes as both the DAOs and the DTOs.) But I still want
    to do queries that navigate the relationship; it would be nice to do it in
    JDOQL directly. I will also want to do other weird-ass queries that would
    definitely only be expressible in SQL. Hence, I'll need Kodo's O/R mapping
    metadata.
    Is there anything terribly flawed with this logic?
    Ben

    I have one point before I get to this:
    There is nothing wrong with using PC instances as both DAO and DTO
    objects. In fact, I strongly recommend this for most J2EE/JDO design.
    However, there should be no need to expose the foreign key values... use
    application identity to quickly reconstitute an object id (which can in
    turn find the persistent version), or like the j2ee tutorial, store the
    object id in some form (Object or String) and use that to re-find the
    matching persistent instance at the EJB tier.
    Otherwise, there is a much easier way of finding ClassMapping instances
    and in turn FieldMapping instances (see ClassMapping.getInstance () in
    the JavaDocs).
    Steve Kim
    [email protected]
    SolarMetric Inc.
    http://www.solarmetric.com

  • Mapping Workbench 9.0.4.5 Queries Tab

    I'm looking for documentation on how I can use this screen (the Queries tab on the Descriptor) to modify TopLink behavior via the Mapping Workbench.
    If I were really strong on building TopLink mappings outside of the workbench, this might be obvious. So far, we have stayed within the confines of the Mapping Workbench.
    Thanks

    Robert,
    The most common usage of this tab is to graphically define named queries. These are parameter-based queries defined using TopLink expressions, EJB QL, or SQL.
    http://download-west.oracle.com/docs/cd/B10464_01/web.904/b10316/dscriptr.htm#1041139
    Using this will allow you to minimize the amount of query code you need to write and maintain. It further reduces coupling of the persistence layer to the application and provides better diagnostics on queries. As your mappings change, any queries broken by the changes can be flagged, versus having to find the query issue at runtime.
    To execute the queries, see 'Named Queries' at:
    http://download-west.oracle.com/docs/cd/B10464_01/web.904/b10313/queries.htm#1128980
    Doug
