O/R mapping tool generates bad descriptors for CMP2.0

The O/R tool has many problems with EJB-CMP 2.0.
It generates a bad ejb-jar.xml (it omits the <cmr-field-type> tag and includes the same relationship field in both the <entity> element and the <relationships> element).
The worst part is that it generates bad TopLink descriptors!
I have numerous indirect 1:1 bean-to-POJO (plain old Java object) relationships. When I have the 'tool' generate the ejb-jar.xml and deployment XML, it generates all of these relationships incorrectly.
The O/R 'tool' forces you to use transparent indirection for CMP 2.0 relationships; transparent indirection does not allow you to specify 'use method accessing'. Fine. But when I go to deploy the compiled jar, I get ALL sorts of errors complaining that:
EXCEPTION [TOPLINK-1] (TopLink (WLS CMP) - 9.0.3 (Build 423)): oracle.toplink.exceptions.DescriptorException
EXCEPTION DESCRIPTION: The attribute [addressToDesc] is not declared as type ValueHolderInterface, but its mapping uses indirection.
MAPPING: oracle.toplink.mappings.OneToOneMapping[addressToDesc]
DESCRIPTOR: Descriptor(...
ARGH!! What the heck do I need to do to get this 'tool' to work? It won't LET me use a ValueHolder in EJB-CMP 2.0, but then it complains when I don't!?!?!
Frustrated,
Andrew

(from TopLink support)
Andrew,
Hi there. I've performed some research regarding
your question, and it appears that having an
"automatic" 1:1 indirect relationship between an EJB
2.0 Entity Bean and a dependent Java object is not
currently supported by TopLink. Unfortunately,
this limitation is not called out in the
documentation. I have entered a feature request on your
behalf so that we might be able to support it in a
future major version of TopLink.
However, there are a couple of alternatives which
you can use:
1) We do support non-indirect 1:1 relationships
between an EJB 2.0 Entity Bean and a dependent Java
object. You could disable indirection, and maybe
make the relationship a batch read relationship so
that your performance will not be degraded.
2) You can write a concrete getter and setter, and
provide an instance variable of type
ValueHolderInterface on the entity bean for your
dependent object. Provide private getter and setter
methods for the holder itself (get<attribute>Holder()
and set<attribute>Holder()); a rough sketch follows
at the end of this reply. Map this attribute normally
using TopLink, and make sure indirection is
specified in the Mapping Workbench.
IMPORTANT: TopLink will handle the normal
persistence management for this relationship;
however, you will need to handle merging this
object back yourself (you had to do this with
EJB 1.1 in TopLink as well). Luckily this is
covered in detail in the TopLink for WebLogic
documentation: Runtime Considerations > Managing
dependent objects.
Personally, I would implement solution 1) with batch
reading on the attribute, because the dependent
object shouldn't be that "big" (otherwise it would
be an entity bean). This is the easier solution;
however, you should investigate option 2) fully
yourself.
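
Here is a rough sketch of what option 2) could look like on the bean class. The class name CustomerBean and the dependent type AddressDesc are made-up placeholders (only addressToDesc comes from the error above), and the holder accessor names are simply whatever you register under "use method accessing" in the Mapping Workbench - treat this as a sketch of the pattern, not a drop-in implementation:

import javax.ejb.EntityBean;
import oracle.toplink.indirection.ValueHolder;
import oracle.toplink.indirection.ValueHolderInterface;

// Placeholder CMP 2.0 entity bean; the CMP abstract accessors, ejbCreate
// and the EntityBean callbacks are omitted.
public abstract class CustomerBean implements EntityBean {

    // Concrete instance variable holding the dependent object inside a
    // value holder, so the mapping can use indirection.
    private ValueHolderInterface addressToDesc = new ValueHolder();

    // Private holder accessors - these are the get/set methods the 1:1
    // mapping is pointed at in the Mapping Workbench.
    private ValueHolderInterface getAddressToDescHolder() {
        return addressToDesc;
    }

    private void setAddressToDescHolder(ValueHolderInterface holder) {
        this.addressToDesc = holder;
    }

    // Public accessors used by the bean's business code; they go through
    // the value holder so the dependent object is read lazily.
    public AddressDesc getAddressToDesc() {
        return (AddressDesc) addressToDesc.getValue();
    }

    public void setAddressToDesc(AddressDesc address) {
        addressToDesc.setValue(address);
    }
}

// Placeholder for the dependent plain Java object.
class AddressDesc {
    // fields omitted
}

As the IMPORTANT note above says, you are still responsible for merging changes to the dependent object back yourself.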

Similar Messages

  • Mappingtool generates bad DDL for sybase (J2ee tutorial)

    Running the mappingtool on the J2EE tutorial app (3.0.0RC1) generates bad
    DDL for Sybase. It's trying to create a table with a column of type
    "IndexName":
    C:\devtools\kodo\samples\j2ee>mappingtool -a refresh package.jdo
    0 INFO [main] kodo.Tool - Mapping tool running on type "class
    samples.j2ee.Car" with action "refresh".
    0 INFO [main] kodo.Tool - The tool is now reading existing schema
    information; this process may take some time. Enable
    the kodo.jdbc.Schema logging category to see messages about schema
    data. Also see the -readSchema tool flag.
    3716 INFO [main] kodo.Tool - Recording mapping and schema changes.
    Exception in thread "main" kodo.util.FatalException:
    com.solarmetric.jdbc.ReportingSQLException: Can't specify a length
    or scale on type 'IndexName'.
    {stmnt 7576378: CREATE TABLE CAR (COLOR IndexName(255) NULL, JDOCLASS
    IndexName(255) NULL, JDOID NUMERIC(38) NOT NULL,
    JDOVERSION INT NULL, MAKE IndexName(255) NULL, MODEL IndexName(255) NULL,
    YEAR0 IndexName(255) NULL, UNQ_INDEX NUMERIC IDENTITY
    UNIQUE, CONSTRAINT P_CAR PRIMARY KEY (JDOID))} [code=2716,
    state=ZZZZZ]
    NestedThrowables:
    com.solarmetric.jdbc.ReportingSQLException: Can't specify a length or
    scale on type 'IndexName'.
    {stmnt 7576378: CREATE TABLE CAR (COLOR IndexName(255) NULL, JDOCLASS
    IndexName(255) NULL, JDOID NUMERIC(38) NOT NULL,
    JDOVERSION INT NULL, MAKE IndexName(255) NULL, MODEL IndexName(255) NULL,
    YEAR0 IndexName(255) NULL, UNQ_INDEX NUMERIC IDENTITY
    UNIQUE, CONSTRAINT P_CAR PRIMARY KEY (JDOID))} [code=2716,
    state=ZZZZZ]
    at kodo.jdbc.meta.MappingTool.record(MappingTool.java:431)
    at kodo.jdbc.meta.MappingTool.run(MappingTool.java:790)
    at kodo.jdbc.meta.MappingTool.main(MappingTool.java:729)
    NestedThrowablesStackTrace:
    com.solarmetric.jdbc.ReportingSQLException: Can't specify a length or
    scale on type 'IndexName'.
    {stmnt 7576378: CREATE TABLE CAR (COLOR IndexName(255) NULL, JDOCLASS
    IndexName(255) NULL, JDOID NUMERIC(38) NOT NULL,
    JDOVERSION INT NULL, MAKE IndexName(255) NULL, MODEL IndexName(255) NULL,
    YEAR0 IndexName(255) NULL, UNQ_INDEX NUMERIC IDENTITY
    UNIQUE, CONSTRAINT P_CAR PRIMARY KEY (JDOID))} [code=2716,
    state=ZZZZZ]
    at
    com.solarmetric.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:67)
    at
    com.solarmetric.jdbc.LoggingConnectionDecorator.access$400(LoggingConnectionDecorator.java:19)
    at
    com.solarmetric.jdbc.LoggingConnectionDecorator$LoggingConnection$LoggingStatement.executeUpdate(LoggingConnectionDecorator.java:506)
    at
    com.solarmetric.jdbc.DelegatingStatement.executeUpdate(DelegatingStatement.java:125)
    at kodo.jdbc.schema.SchemaTool.executeSQL(SchemaTool.java:1042)
    at kodo.jdbc.schema.SchemaTool.createTable(SchemaTool.java:803)
    at kodo.jdbc.schema.SchemaTool.add(SchemaTool.java:334)
    at kodo.jdbc.schema.SchemaTool.add(SchemaTool.java:186)
    at kodo.jdbc.meta.MappingTool.record(MappingTool.java:364)
    at kodo.jdbc.meta.MappingTool.run(MappingTool.java:790)
    at kodo.jdbc.meta.MappingTool.main(MappingTool.java:729)
    Anyone else seen this? It works OK with Hypersonic (though I can't get the
    tutorial app to run with Hypersonic - see my earlier post).
    Alex.

    Some bugs in RCs don't make it into Bugzilla, since a release
    candidate is not production quality.
    I would recommend upgrading, as it fixes a number of major bugs, and
    you should be able to use the same eval key.
    Alex Robbins wrote:
    Abe White wrote:
    I should also have asked: what JDBC driver are you using?
    Hi Abe,
    I haven't tried with RC2 - has this been fixed in RC2? I didn't find this
    bug in Bugzilla.
    I'm using Sybase JConnect JDBC driver (com.sybase.jdbc2.jdbc.SybDriver in
    jconn2.jar) - looks like this is the version:
    jConnect (TM) for JDBC(TM)/5.5(Build 25008)/P/JDK12/Tue May 29 14:37:46
    2001
    Should I upgrade from RC1 to RC2, and if so, can I continue to use the
    same eval license key or should I download a new one?
    thanks,
    alex
    Stephen Kim
    [email protected]
    SolarMetric, Inc.
    http://www.solarmetric.com

  • Which object-relational mapping tool is the best for Oracle Coherence?

    Which object-relational mapping tool is the best for Oracle Coherence?
    My application is read-and-write-intensive. Which tool is most suitable for this application?
    TopLink Essentials, TopLink, EclipseLink or Hibernate?
    Thank you

    I would pick Hibernate mainly because of its popularity and wide knowledge base.
    Coherence has provided some documentation for the integration.
    http://download.oracle.com/docs/cd/E14526_01/coh.350/e14537/usehibernateascoh.htm#CEGFEFJH
    If you already have the schema in the database, MyEclipse can generate Hibernate bindings for you by reverse engineering it.
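
    For a sense of what "the integration" looks like: Coherence talks to an ORM through a cache store, and the linked documentation describes the Hibernate cache store that Coherence provides. Purely as an illustration of the pattern (assuming the classic com.tangosol.net.cache.CacheStore interface and the Hibernate 3 Session API; Customer and the hibernate.cfg.xml configuration are placeholders), a hand-rolled version might look roughly like this:

    import java.io.Serializable;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    import com.tangosol.net.cache.CacheStore;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;
    import org.hibernate.cfg.Configuration;

    // Sketch of a CacheStore that delegates cache misses and writes to Hibernate.
    public class CustomerCacheStore implements CacheStore {

        private final SessionFactory sessionFactory =
                new Configuration().configure().buildSessionFactory();

        // CacheLoader: read-through on a cache miss.
        public Object load(Object key) {
            Session session = sessionFactory.openSession();
            try {
                return session.get(Customer.class, (Serializable) key);
            } finally {
                session.close();
            }
        }

        public Map loadAll(Collection keys) {
            Map results = new HashMap();
            for (Iterator it = keys.iterator(); it.hasNext();) {
                Object key = it.next();
                Object value = load(key);
                if (value != null) {
                    results.put(key, value);
                }
            }
            return results;
        }

        // CacheStore: write the cached value through to the database.
        public void store(Object key, Object value) {
            Session session = sessionFactory.openSession();
            Transaction tx = session.beginTransaction();
            try {
                session.saveOrUpdate(value);
                tx.commit();
            } finally {
                session.close();
            }
        }

        public void storeAll(Map entries) {
            for (Iterator it = entries.entrySet().iterator(); it.hasNext();) {
                Map.Entry entry = (Map.Entry) it.next();
                store(entry.getKey(), entry.getValue());
            }
        }

        public void erase(Object key) {
            Session session = sessionFactory.openSession();
            Transaction tx = session.beginTransaction();
            try {
                Object value = session.get(Customer.class, (Serializable) key);
                if (value != null) {
                    session.delete(value);
                }
                tx.commit();
            } finally {
                session.close();
            }
        }

        public void eraseAll(Collection keys) {
            for (Iterator it = keys.iterator(); it.hasNext();) {
                erase(it.next());
            }
        }
    }

    // Placeholder persistent class mapped in Hibernate.
    class Customer {
        // id and fields omitted
    }

    In the cache configuration, this class (or the shipped Hibernate cache store) is wired in as the cache store of the distributed cache's read-write backing map, which is also where write-behind can be enabled for the write-intensive side of the workload.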

  • Different mapping tools in the market --  For BPM mapping in XI

    Hi All
         I am interested in learning about BPM mapping tools (3rd party or otherwise).
         Can anyone list the different tools and say which are the best?
         Our company is looking forward to implementing XI (BPMs) with mapping tools.
        Can anyone help me with this?
    Regards
    Rakesh

    Hi Rakesh,
    please have a look at the SAP Library: SAP Exchange Infrastructure (http://help.sap.com/saphelp_nw04/helpdata/en/14/80243b4a66ae0ce10000000a11402f/frameset.htm) for an understanding of XI, BPM and mappings.
    BPM has no special mapping tool. You can use Java, ABAP, XSLT and graphical mapping (Message Mapping) in XI, in any case - BPM or not. '3rd party' has nothing to do with BPM or mapping; it simply means a non-SAP system.
    Regards,
    Udo

  • How to generate bad records for varchar2 type

    Hi
    I am using sqlldr to load records from a flat file into a table.
    The table columns are of VARCHAR2 datatype.
    Data is loaded by fixed-length position.
    There is no NOT NULL restriction on the fields.
    I want to have a bad file generated.
    Can anyone let me know how to get the bad file?
    What type of data can be modified in the file to get a bad record?
    Thanks in advance

    how to get the bad file?
    Make the field value longer than its corresponding DB column length.
    I.e. when your DB column is VARCHAR2(10), make one of the fields in the data file longer than 10 characters.

  • 3.1.4 reverse mapping tool issue

    (Sorry for the duplicate posting...I meant to start a new thread with
    this but accidentally posted it as a reply to a 6-month old thread)
    Hello,
    I was running Kodo 3.0.2 when Abe and I had the exchange reproduced
    below back in January to deal with Oracle tables with "$" in the column
    names (which I subsequently updated to 3.0.3). The original subject of
    this discussion was "3.0.2 reverse mapping tool generates invalid
    ..mapping file".
    I was able to get this working by running the following commands to
    implement Abe's suggestion:
    reversemappingtool -p kodo.properties -package db \
    -cp custom.properties -ds false schema.xml
    sed -e 's/\$/__DOLLAR__/' db/package.mapping > db/package.mapping.new
    mv db/package.mapping.new db/package.mapping
    javac db/*.java
    mappingtool -p kodo.properties -a import db/package.mapping
    sed -e 's/__DOLLAR__/\$/' db/package.jdo > db/package.jdo.new
    mv db/package.jdo.new db/package.jdo
    In my custom.properties file, I had lines like these to put useful names
    on my class's fields:
    db.TransactionDetailHistory.y$an8.rename : addressNumber
    As I said, in 3.0.3, this worked perfectly.
    I picked this code back up for the first time since getting it working 6
    months ago, and decided to update it to 3.1.4 (since I'm already using
    that on other projects). Problem is, the reverse mapping tool has
    changed and the code it generates no longer works as it once did. I
    tried running the 3.1.2 and 3.1.0 reverse mapping tool, and it failed
    the same way, so it looks like this change happened in the 3.0.x to
    3.1.x version change.
    What happens is this: In the generated Java source, my fields used to
    end up with names as per my specification (e.g., the Oracle column named
    "y$an8" showed up as "addressNumber" in the java source).
    However, it looks like the "$" became special somehow in 3.1.0 - the
    "y$an8" column now shows up as "yAn8" in the generated Java. I tried
    changing my custom.properties file accordingly, but it still shows up as
    yAn8 even after changing my mapping to look like this:
    db.TransactionDetailHistory.yAn8.rename : addressNumber
    What do you make of this?
    Thanks,
    Bill
    Abe White wrote:
    > Hmmm... this is a problem. '$' is not legal in XML names, and there
    is no standard way to escape it.
    >
    > Your best bet is probably to do the following:
    > 1. In the generated .mapping file, replace all '$' characters with
    another token, such as '--DOLLAR--'.
    > 2. Switch your properties to use the metadata mapping factory:
    > kodo.jdbc.MappingFactory: metadata
    > 3. Import your mappings into the metadata mapping factory:
    > mappingtool -a import package.mapping
    > 4. Delete the mapping file.
    > 5. In your .jdo file, replace '--DOLLAR--' with '$' again.
    >
    > The metadata mapping factory doesn't put column names in its XML
    attribute names, so you should be able to use it safely.

    William-
    However, it looks like the "$" became special somehow in 3.1.0 - the
    "y$an8" column now shows up as "yAn8" in the generated Java. I tried
    changing my custom.properties file accordingly, but it still shows up as
    yAn8 even after changing my mapping to look like this:
    db.TransactionDetailHistory.yAn8.rename : addressNumber
    Well, the reverse mapping tool makes some assumptions based on common
    naming strategies for relational databases and Java naming: columns like
    "FIRST_NAME" will be renamed to "firstName". The Reverse Mapping tool is
    seeing the "$" and treating it as a non-alphanumeric delimiter, so is
    fixing it.
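    To see why "y$an8" comes out as "yAn8", here is a toy illustration of that kind of renaming rule (this is not Kodo's actual code, just a sketch of the convention described above):

    // Illustrative only: split the column name on non-alphanumeric
    // characters and camel-case the pieces.
    public class NamingRuleDemo {

        static String javaFieldName(String columnName) {
            String[] parts = columnName.toLowerCase().split("[^a-z0-9]+");
            StringBuilder name = new StringBuilder(parts[0]);
            for (int i = 1; i < parts.length; i++) {
                name.append(Character.toUpperCase(parts[i].charAt(0)))
                    .append(parts[i].substring(1));
            }
            return name.toString();
        }

        public static void main(String[] args) {
            System.out.println(javaFieldName("FIRST_NAME")); // prints "firstName"
            System.out.println(javaFieldName("y$an8"));      // prints "yAn8"
        }
    }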
    Can you try a couple of additional properties:
    db.TransactionDetailHistory.y$An8.rename: addressNumber
    db.TransactionDetailHistory.y$an8.rename: addressNumber
    Also, are other rename properties working for you, or is that the only
    field or class you attempt to rename? It might just be the case that
    you aren't correctly specifying the properties file or something.
    Finally, bear in mind that you can always implement your own
    kodo.jdbc.meta.ReverseCustomizer and just use that; not the easiest
    solution, but it can certainly be used to have very fine-grained control
    over the exact names that are generated.
    Marc Prud'hommeaux
    SolarMetric Inc.

  • Design approach help : BIC Mapping Tool Conversion

    Hi All,
    Design approach:
    We know that the BIC mapping tool can be used for EDI-to-XML conversion; I also know that it is an any-to-any converter.
    But we prefer to use this tool only for EDI to EDI-XML conversion and to do the mapping in SAP PI, as it is easier to change a mapping in PI (if needed) than to redeploy the mapping to the module location every time.
    My question is:
    We know that the sender file adapter supports FCC for a maximum of 3 hierarchy levels.
    Can we make it a practice to accept requirements in SAP PI that have a complex flat file structure (more than 3 levels), use the BIC mapping tool to convert the complex flat file structure into the format PI requires, and proceed with the mapping and configuration? (Note: the assumption here is that the organisation has the BIC mapping tool procured for some EDI message scenarios.)
    Or is it best practice to ask the source system to supply flat files strictly in a header-with-one-line-item format?
    Suggestions would be deeply appreciated.
    Warm Regards,
    Senthilprakash

    >
    > if you are thinking of using only BIC mapping tool for converting flat file to xml and want to proceed with normal configuration without using BIC modules then it is not feasible.
    >
    Dear Suresh,
    I am very well aware of that. We have the Seeburger SFTP adapter installed in our PI system, and for EDI conversion we use BIC to convert to XML and deploy the module at the adapter level.
    Now my question is: is it the right way to start accepting requirements from sources that have a complex flat file structure (non-EDI messages), using the BIC module in the adapter to convert it into XML and proceeding with configuration, or is it better to ask the source system to send the flat file in a format accepted by our file adapter's FCC?
    What drawbacks/bottlenecks do we see in using the BIC module for converting non-EDI complex flat files into XML format in PI?
    Regards,
    Senthilprakash.

  • BAdI UC_DATATRANSFER for BCS Mapping in "Load from Data Stream" method

    Hello Everyone,
    I need some help on finishing up the code for the UC_DATATRANSFER BAdI.
    I have looked up in the SDN and other places, but could not get comprehensive breakdown of documentation except for the "F1" documentation available on the BAdI.
    So, any help would be appreciated.
    The Steps so far completed,
    1. Have activated the BAdI and have created the filter value for the BAdI.
    2. After the BAdI has been activated, I was able to go into the MAP method and have written the logic for profit center derivation from consolidation hierarchy.
    The issue is there are four components for the Map method,
    IT_DATA_SOURCE
    IS_DATA_TARGET
    ES_DATA_TARGET
    ET_DATA_TARGET
    The data is available from Source system in the table IT_DATA_SOURCE.
    But this is not changeable as it is "Importing" type. Whereas the actual ET_DATA_TARGET which is passed over into FINALIZE method of the BAdI is not filled initially.
    When I try to do a MOVE-CORRESPONDING from the IT_DATA_TARGET into ET_DATA_TARGET I continuously am getting the short dumps as both the tables length is not the same.
    Did anyone else face the same issue as above when trying to do the BAdI implementation for Mapping.
    I will really appreciate if any one can provide me a sample code if possible.
    Let me know if you need additional information.
    Thanks
    Dharma.

    Hello,
    Thanks for looking into the question.
    I already had tried doing that, I get the Short dump stating the object tables are not convertible.
    When I looked into the table structures, I found that "IS_DATA_TARGET", "ES_DATA_TARGET" & "ET_DATA_TARGET" all belong to the same category - flat structures or tables of length 484, as per the debugger.
    Whereas the structure "IT_DATA_SOURCE" has the length 404.
    Due to this reason when I say,
    ET_DATA_TARGET = IT_DATA_SOURCE, I keep getting the short dumps.
    Also, is your consolidation process legal or managerial?
    Our Consolidation process is legal and we have the Company and Profit Center fields assigned to the Consolidation Unit role in the Data Basis definition.
    Can you please let me know what is the structures length in your system.
    Thanks
    Dharma.

  • MAP Toolkit - How to use this MAP tool kit for all SQL Server inventory in new work enviornment

    Hi Everyone,
     I just joined a new job and am planning to do an inventory of the whole environment so I can get a list of all SQL Server instances installed. I downloaded the MAP Toolkit just now, so I am looking for step-by-step information on using it for a SQL inventory. If anyone has documentation
    or screenshots they can share, that would be great.
    Also, is it fine to run this tool at any time, or should it be run at night when there is less activity?
    How long does it generally take for a medium-sized environment where the server count is about 30 (Dev/Staging/Prod)?
    Also, any scripts that would give detailed information would be great too.
    Thank you 
    Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach A.Shah

    Hi Logicinisde,
    According to your description, the issue regards the Microsoft Assessment and Planning Solution Accelerator. I suggest you post the question in the Solution Accelerators forum at
    http://social.technet.microsoft.com/Forums/en-US/map/threads/ . It is more appropriate and more experts will assist you there.
    The Microsoft Assessment and Planning (MAP) Toolkit is an agentless inventory, assessment, and reporting tool that can securely assess IT environments for various platform migrations. You can use MAP as part of a comprehensive process for planning and migrating
    legacy databases to SQL Server instances.
    For more information about how to use the MAP Toolkit, you can review the following articles.
    http://blogs.technet.com/b/meamcs/archive/2012/09/24/how-to-use-map-tool-microsoft-assessment-and-planning-toolkit.aspx
    Microsoft Assessment and Planning Toolkit - Technical FAQ:
    http://ochoco.blogspot.in/2009/02/microsoft-assessment-and-planning.html
    Regards,
    Sofiya Li
    TechNet Community Support

  • Can I use the mapping tool in Imovie for commercial purposes?

    Can I use the mapping tool in Imovie for commercial purposes?

    In general, yes.
    For the legalese, see this Apple document, section 2.G.iii.
    http://images.apple.com/legal/sla/docs/iLife11.pdf

  • Please help - Dreamweaver Image Map Tool/Outlook for Mac

    Hello:
    I was wondering if you could help me navigate through an issue I am having with the image map tool.  I have created a design file in PSD, saved it as a JPG, then transferred it into Dreamweaver to link the design to a web page.  However, I would like to embed different links within one HTML file to e-mail out, so I used the image map tool to do so.  However, when I copy/paste the completed HTML into an Outlook for Mac message (Command A; Command C; Command V), the image map/additional links do not transfer over once the file is pasted in an e-mail.  I also tested this with Gmail and the Apple Mail app, and was not successful.  Please help.

    You can't paste images into e-mails.  Use absolute links to images hosted on your domain server like this:
    <img src="http://yourdomain.com/images/your_map.jpg">
    See HTML E-mails: what you need to know
    http://alt-web.com/Articles/HTML-Emails.shtml
    Nancy O.

  • Tool generated files missing after upgrade to Oracle Workshop for Weblogic

    Hi,
    I'm trying to migrate a beehive/netui web-application from Bea Workspace Studio 1.1 to Oracle Workshop for WebLogic 10R3. The application builds successfully, but when I access the application, I get a "Missing Struts Module Configuration" error.
    I checked the build folder and discovered that several tool-generated files were missing. The entire _pageflow folder is missing, and all of the controls-related files are missing.
    Anyone have an idea of what is going on here?
    Rune

    Rune,
    Could you provide details on the migration process you followed?
    Also, are you trying to develop and deploy against WLS 10.3?
    -Raj

  • User-Exit / Badi / BTE for STWB_WORK for integrating 3rd party tool

    Hi,
                 I need a user-exit / BAdI / BTE for the transaction STWB_WORK (user test worklist). I'm trying to integrate a 3rd-party tool with this transaction. When a worklist from this transaction is selected, I'll run the 3rd-party tool's executable from ABAP code in the user-exit/BAdI/BTE. Can anybody help me find one?
    thanks,
    Venky

    Hello Venky,
    I saw your question on SDN and I am wondering if you got a solution to it.
    I would be very happy if you could help me; I have the same problem as you.
    cu Manfred

  • Mapping tool creates new columns one at a time

    Hi All,
    I'm using Kodo 3.2.2 with MySQL 4.1. I have a fairly large table (over
    2 million rows) representing one persistent class. I need to add several
    new fields to the object, which will require a refresh mapping on the
    database to add new columns for the fields.
    It seems that when I run refresh mapping on the database, Kodo adds each
    new field individually in separate ALTER TABLE statements, rather than
    adding all the columns in one ALTER TABLE statement.
    With MySQL and large tables, this can really increase the time required to
    refresh the mapping - I've tried with MSSQL, and that database must handle
    ALTER TABLE commands differently, because it takes only a fraction of the
    time that MySQL takes.
    Is this a bug (I doubt it), or rather a candidate for a feature improvement?
    Thoughts?
    Thanks,
    Brendan

    Thanks Abe for the reply - I've been meaning to look at the process of
    creating SQL scripts to perform the refresh mappings on our databases.
    I'll have a look at this process before we talk contracting!
    Thanks,
    Brendan
    Abe White wrote:
    Unfortunately, we don't have any immediate plans to optimize ALTER TABLE since
    it's not a runtime operation, and as you say most DBs don't have a problem with
    it. Also, note that the documentation shows how to get Kodo to create SQL
    scripts rather than directly modify the DB; you could easily edit the scripts
    Kodo generates to make them more efficient before piping the SQL to MySQL using
    its client app. (Note that Kodo 3.3 introduces the ability to generate SQL
    scripts directly from the mapping tool, rather than having to take an extra step
    through the schema tool).
    If getting Kodo to combine ALTER statements is important to you, you could also
    contact [email protected] about contracting us to expedite this work.

  • XSLT mapping tool

    Hi All,
    I am an ABAP consultant, so it is difficult for me to write an XSLT mapping program.
    Is there any XSLT mapping tool available on the net to use for free?
    One which can take the source XML and target XML and, upon doing graphical mapping,
    generates the XSLT mapping program between the source and target XML.
    Regards

    Rohan,
    Is there any XSLT mapping tool available on net to use free?
    Yes, Stylus Studio and Altova have free downloadable tools for XSL mapping.
    Which can take source XML & Target XML and upon doing graphical mapping
    it generates the XSLT mapping program between source & target XML.
    I don't think there is any tool which will use a graphical mapping to create an XSL mapping.
    But the point is, if you are creating a graphical mapping, why do you need XSL?
    Regards,
    Neetesh
