Reverse Map Schema

While trying to run the reverse mapping tool on an existing Oracle
database (version 9.2.0.1), I get the error "An Error occurred while
accessing the Schema". Do we need some special settings in the Kodo
Workbench other than the connection settings? Am I missing something?
Here are my config settings:
Driver class: oracle.jdbc.driver.OracleDriver
Connection URL: jdbc:oracle:thin:@IPAddress:1521:DBNAME
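For reference, schema access can also fail when no user name and password
accompany those two settings, so they are worth double-checking. Here is the
same connection information expressed programmatically; a minimal sketch
assuming the standard Kodo connection property keys (kodo.ConnectionDriverName
and friends) and placeholder credentials:

import java.util.Properties;

public class KodoConnectionConfig {
    public static Properties connectionProperties() {
        Properties conf = new Properties();
        // Standard Kodo connection keys; "scott"/"tiger" are placeholders.
        conf.setProperty("kodo.ConnectionDriverName",
            "oracle.jdbc.driver.OracleDriver");
        conf.setProperty("kodo.ConnectionURL",
            "jdbc:oracle:thin:@IPAddress:1521:DBNAME");
        conf.setProperty("kodo.ConnectionUserName", "scott");
        conf.setProperty("kodo.ConnectionPassword", "tiger");
        return conf;
    }
}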

"Marc Prud'hommeaux" <[email protected]> wrote in message
news:[email protected]..
Vijay-
Can you post the complete stack trace of the exception? By the way, if
you launch the tool from the Start menu, it will probably use javaw and
there will be no stack trace.
In article <cpkg6d$q7v$[email protected]>, Vijay Kumar Narayanan
wrote:
> While trying to run the reverse mapping tool on an existing Oracle
> database (version 9.2.0.1), I get the error "An Error occurred while
> accessing the Schema". Do we need some special settings in the Kodo
> Workbench other than the connection settings? Am I missing something?
> Here are my config settings:
> Driver class: oracle.jdbc.driver.OracleDriver
> Connection URL: jdbc:oracle:thin:@IPAddress:1521:DBNAME
Marc Prud'hommeaux
SolarMetric Inc.

Similar Messages

  • Reverse mapping tool in 3.0.1 ignores "-schemas" option

    I believe I have discovered a bug in the 3.0.1 version of the reverse
    mapping tool.
    Here is a script of the commands that worked fine in 3.0.0:
    Script started on Mon Jan 12 11:02:19 2004
    1$ which schemagen
    /opt/kodo-jdo-3.0.0/bin/schemagen
    2$ echo $PATH
    /opt/kodo-jdo-3.0.0/bin:/sw/db/oracle/oracle817/bin:/sw/gen/sparc-sun-solaris2.9/acroread/5.06/bin:/sw/gen/sparc-sun-solaris2.9/cvs/1.11.5/bin:/sw/gen/sparc-sun-solaris2.9/esound/0.2.29/bin:/sw/gen/sparc-sun-solaris2.9/mpg123/0.59r/bin:/usr/bin:/sw/gen/sparc-sun-solaris2.9/gnupg/1.2.1/bin:/sw/gen/sparc-sun-solaris2.9/mozilla/1.3/bin:/sw/gen/sparc-sun-solaris2.9/openssh/3.7.1p2/sbin:/sw/gen/sparc-sun-solaris2.9/openssh/3.7.1p2/bin:/sw/pd/workman-1.3.4/bin:/usr/openwin/bin:/usr/bin:/sbin:/bin:/usr/sbin:/usr/ccs/bin:/usr/ucb:/opt/local/bin:/sw/modules/bin:/sw/com/bin:/sw/pd/bin:/sw/pd/office52/program:/sw/pd/RealPlayer8:/users/n9208/bin:/opt/openssh/bin:/usr/dt/bin:/usr/dt/bin:/usr/openwin/bin:/sw/db/tools/bin:/sw/db/iss/bin:/usr/local/bin:/usr/local/scripts
    3$ echo $CLASSPATH
    :/opt/oracle/oracle9.0.1.4.zip:/opt/kodo-jdo-3.0.0:/opt/kodo-jdo-3.0.0/lib/kodo-jdo-runtime.jar:/opt/kodo-jdo-3.0.0/lib/kodo-jdo.jar:/opt/kodo-jdo-3.0.0/lib/jakarta-commons-collections-2.1.jar:/opt/kodo-jdo-3.0.0/lib/jakarta-commons-lang-1.0.1.jar:/opt/kodo-jdo-3.0.0/lib/jakarta-commons-logging-1.0.3.jar:/opt/kodo-jdo-3.0.0/lib/jakarta-commons-pool-1.0.1.jar:/opt/kodo-jdo-3.0.0/lib/jakarta-regexp-1.1.jar:/opt/kodo-jdo-3.0.0/lib/jca1.0.jar:/opt/kodo-jdo-3.0.0/lib/jdbc-hsql-1_7_0.jar:/opt/kodo-jdo-3.0.0/lib/jdbc2_0-stdext.jar:/opt/kodo-jdo-3.0.0/lib/jdo-1.0.1.jar:/opt/kodo-jdo-3.0.0/lib/jndi.jar:/opt/kodo-jdo-3.0.0/lib/jta-spec1_0_1.jar:/opt/kodo-jdo-3.0.0/lib/log4j-1.2.6.jar:/opt/kodo-jdo-3.0.0/lib/xalan.jar:/opt/kodo-jdo-3.0.0/lib/xercesImpl.jar:/opt/kodo-jdo-3.0.0/lib/xml-apis.jar:/opt/kodo-jdo-3.0.0/lib/jfreechart-0.9.13.jar:/opt/kodo-jdo-3.0.0/lib/jcommon-0.8.8.jar
    4$ schemagen -p kodo.properties -f schema.xml -schemas PRODTRDTA.F0101
    0 INFO [main] kodo.Tool - Schema generator running on schemas
    "PRODTRDTA.F0101". This process may take some time. Enable the
    kodo.jdbc.Schema logging category to see messages about the collection of
    schema data.
    136 INFO [main] jdbc.Schema - Reading table information for schema name
    "PRODTRDTA", table name "F0101".
    672 INFO [main] jdbc.Schema - Reading column information for table
    "PRODTRDTA.F0101".
    727 INFO [main] jdbc.Schema - Reading primary keys for schema name
    "PRODTRDTA", table name "F0101".
    2187 INFO [main] jdbc.Schema - Reading indexes for schema name
    "PRODTRDTA", table name "F0101".
    2432 INFO [main] jdbc.Schema - Reading foreign keys for schema name
    "PRODTRDTA", table name "F0101".
    2632 INFO [main] kodo.Tool - Writing XML schema.
    5$
    script done on Mon Jan 12 11:03:14 2004
    Note the first line of logging output: both the schema name and table name
    are properly recognized.
    Here is the scripted output of the same commands in 3.0.1:
    Script started on Mon Jan 12 10:29:03 2004
    1$ which schemagen
    /opt/kodo-jdo-3.0.1/bin/schemagen
    2$ echo $PATH
    /opt/kodo-jdo-3.0.1/bin:/sw/db/oracle/oracle817/bin:/sw/gen/sparc-sun-solaris2.9/acroread/5.06/bin:/sw/gen/sparc-sun-solaris2.9/cvs/1.11.5/bin:/sw/gen/sparc-sun-solaris2.9/esound/0.2.29/bin:/sw/gen/sparc-sun-solaris2.9/mpg123/0.59r/bin:/usr/bin:/sw/gen/sparc-sun-solaris2.9/gnupg/1.2.1/bin:/sw/gen/sparc-sun-solaris2.9/mozilla/1.3/bin:/sw/gen/sparc-sun-solaris2.9/openssh/3.7.1p2/sbin:/sw/gen/sparc-sun-solaris2.9/openssh/3.7.1p2/bin:/sw/pd/workman-1.3.4/bin:/usr/openwin/bin:/usr/bin:/sbin:/bin:/usr/sbin:/usr/ccs/bin:/usr/ucb:/opt/local/bin:/sw/modules/bin:/sw/com/bin:/sw/pd/bin:/sw/pd/office52/program:/sw/pd/RealPlayer8:/users/n9208/bin:/opt/openssh/bin:/usr/dt/bin:/usr/dt/bin:/usr/openwin/bin:/sw/db/tools/bin:/sw/db/iss/bin:/usr/local/bin:/usr/local/scripts
    3$ echo $CLASSPATH
    :/opt/oracle/oracle9.0.1.4.zip:/opt/kodo-jdo-3.0.1:/opt/kodo-jdo-3.0.1/lib/kodo-jdo-runtime.jar:/opt/kodo-jdo-3.0.1/lib/kodo-jdo.jar:/opt/kodo-jdo-3.0.1/lib/jakarta-commons-collections-2.1.jar:/opt/kodo-jdo-3.0.1/lib/jakarta-commons-lang-1.0.1.jar:/opt/kodo-jdo-3.0.1/lib/jakarta-commons-logging-1.0.3.jar:/opt/kodo-jdo-3.0.1/lib/jakarta-commons-pool-1.0.1.jar:/opt/kodo-jdo-3.0.1/lib/jakarta-regexp-1.1.jar:/opt/kodo-jdo-3.0.1/lib/jca1.0.jar:/opt/kodo-jdo-3.0.1/lib/jdbc-hsql-1_7_0.jar:/opt/kodo-jdo-3.0.1/lib/jdbc2_0-stdext.jar:/opt/kodo-jdo-3.0.1/lib/jdo-1.0.1.jar:/opt/kodo-jdo-3.0.1/lib/jndi.jar:/opt/kodo-jdo-3.0.1/lib/jta-spec1_0_1.jar:/opt/kodo-jdo-3.0.1/lib/log4j-1.2.6.jar:/opt/kodo-jdo-3.0.1/lib/xalan.jar:/opt/kodo-jdo-3.0.1/lib/xercesImpl.jar:/opt/kodo-jdo-3.0.1/lib/xml-apis.jar:/opt/kodo-jdo-3.0.1/lib/jfreechart-0.9.13.jar:/opt/kodo-jdo-3.0.1/lib/jcommon-0.8.8.jar:/opt/kodo-jdo-3.0.1/lib/jline.jar:/opt/kodo-jdo-3.0.1/lib/sqlline.jar
    4$ schemagen -p kodo.properties -f schema.xml -schemas PRODTRDTA.F0101
    1 INFO [main] kodo.Tool - Schema generator running on schemas "all".
    This process may take some time. Enable the kodo.jdbc.Schema logging
    category to see messages about the collection of schema data.
    103 INFO [main] jdbc.Schema - Reading table information for schema name
    "null", table name "null".
    Exception in thread "main" java.lang.OutOfMemoryError
    5$
    script done on Mon Jan 12 11:01:45 2004
    Note the first line of logging output here: the schema is listed as "all"
    instead of the limited scope I had specified.
    This run eventually crashes because the account I am running the
    mapping tool under has access to thousands of tables, so the JVM
    eventually runs out of heap.
    My workaround is to fall back to 3.0.0.

    Thanks for the report. We noticed this ourselves a short while ago.
    The bug will be fixed in 3.0.2.
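    Until 3.0.2 ships, a possible workaround short of downgrading is to
    pin the scope through configuration rather than the command line, by
    adding a line like the following to kodo.properties. This assumes the
    kodo.jdbc.Schemas property (which the -schemas flag normally mirrors)
    is not hit by the same regression; treat it as untested:
    kodo.jdbc.Schemas: PRODTRDTA.F0101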

  • Reverse Mapping Tutorial - Finder.java queries the wrong table?!

    I have been almost successful in running the Reverse Mapping Tutorial:
    creating Java classes from the hsqldb sample database and running the
    JDO enhancer on them.
    However, I cannot get the Finder.java to work. It seems to look in the
    wrong table: MAGAZINEX instead of MAGAZINE.
    Did anyone have trouble with this step, or run it successfully?
    Liviu
    PS: here is the trace:
    0 [main] INFO kodo.Runtime - Starting Kodo JDO version 2.4.2
    (kodojdo-2.4.2-20030326-1841) with capabilities: [Enterprise Edition
    Features, Standard Edition Features, Lite Edition Features, Evaluation
    License, Query Extensions, Datacache Plug-in, Statement Batching, Global
    Transactions, Developer Tools, Custom Database Dictionaries, Enterprise
    Databases]
    70 [main] WARN kodo.Runtime - WARNING: Kodo JDO Evaluation expires in 25
    days. Please contact [email protected] for information on extending your
    evaluation period or purchasing a license.
    68398 [main] INFO kodo.MetaData -
    com.solarmetric.kodo.meta.JDOMetaDataParser@19eda2c: parsing source:
    file:/C:/Documents%20and%20Settings/default/jbproject/JDO/classes/reversetutorial.jdo
    74577 [main] INFO jdbc.JDBC - [ C:24713456; T:31737213; D:22310332 ] open:
    jdbc:hsqldb:hsql_sample_database (sa)
    75689 [main] INFO jdbc.JDBC - [ C:24713456; T:31737213; D:22310332 ] close:
    com.solarmetric.datasource.PoolConnection@17918f0[[requests=0;size=0;max=70;hits=0;created=0;redundant=0;overflow=0;new=0;leaked=0;unavailable=0]]
    75699 [main] INFO jdbc.JDBC - [ C:24713456; T:31737213; D:22310332 ] close
    connection
    77331 [main] INFO jdbc.JDBC - Using dictionary class
    "com.solarmetric.kodo.impl.jdbc.schema.dict.HSQLDictionary" to connect to
    "HSQL Database Engine" (version "1.7.0") with JDBC driver "HSQL Database
    Engine Driver" (version "1.7.0")
    1163173 [main] INFO jdbc.JDBC - [ C:3093871; T:31737213; D:22310332 ] open:
    jdbc:hsqldb:hsql_sample_database (sa)
    1163293 [main] INFO jdbc.SQL - [ C:3093871; T:31737213; D:22310332 ]
    preparing statement <17940412>: SELECT DISTINCT MAGAZINEX.JDOCLASSX FROM
    MAGAZINEX
    1163313 [main] INFO jdbc.SQL - [ C:3093871; T:31737213; D:22310332 ]
    executing statement <17940412>: [reused=1;params={}]
    1163443 [main] INFO jdbc.JDBC - [ C:3093871; T:31737213; D:22310332 ] close:
    com.solarmetric.datasource.PoolConnection@2f356f[[requests=1;size=0;max=70;hits=0;created=1;redundant=0;overflow=0;new=1;leaked=0;unavailable=0]]
    1163443 [main] INFO jdbc.JDBC - [ C:3093871; T:31737213; D:22310332 ] close
    connection
    Hit uncaught exception javax.jdo.JDOFatalDataStoreException
    javax.jdo.JDOFatalDataStoreException:
    com.solarmetric.kodo.impl.jdbc.sql.SQLExceptionWrapper:
    [SQL=SELECT DISTINCT MAGAZINEX.JDOCLASSX FROM MAGAZINEX]
    [PRE=SELECT DISTINCT MAGAZINEX.JDOCLASSX FROM MAGAZINEX]
    Table not found: S0002 Table not found: MAGAZINEX in statement [SELECT
    DISTINCT MAGAZINEX.JDOCLASSX FROM MAGAZINEX] [code=-22;state=S0002]
    NestedThrowables:
    com.solarmetric.kodo.impl.jdbc.sql.SQLExceptionWrapper:
    [SQL=SELECT DISTINCT MAGAZINEX.JDOCLASSX FROM MAGAZINEX]
    [PRE=SELECT DISTINCT MAGAZINEX.JDOCLASSX FROM MAGAZINEX]
    Table not found: S0002 Table not found: MAGAZINEX in statement [SELECT
    DISTINCT MAGAZINEX.JDOCLASSX FROM MAGAZINEX]
    at com.solarmetric.kodo.impl.jdbc.runtime.SQLExceptions.throwFatal(SQLExceptions.java:17)
    at com.solarmetric.kodo.impl.jdbc.ormapping.SubclassProviderImpl.getSubclasses(SubclassProviderImpl.java:283)
    at com.solarmetric.kodo.impl.jdbc.ormapping.ClassMapping.getPrimaryMappingFields(ClassMapping.java:1093)
    at com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.executeQuery(JDBCStoreManager.java:704)
    at com.solarmetric.kodo.impl.jdbc.runtime.JDBCQuery.executeQuery(JDBCQuery.java:93)
    at com.solarmetric.kodo.query.QueryImpl.executeWithMap(QueryImpl.java:792)
    at com.solarmetric.kodo.query.QueryImpl.execute(QueryImpl.java:595)
    at reversetutorial.Finder.main(Finder.java:32)
    NestedThrowablesStackTrace:
    java.sql.SQLException: Table not found: S0002 Table not found: MAGAZINEX in
    statement [SELECT DISTINCT MAGAZINEX.JDOCLASSX FROM MAGAZINEX]
    at org.hsqldb.Trace.getError(Trace.java:226)
    at org.hsqldb.jdbcResultSet.<init>(jdbcResultSet.java:6595)
    at org.hsqldb.jdbcConnection.executeStandalone(jdbcConnection.java:2951)
    at org.hsqldb.jdbcConnection.execute(jdbcConnection.java:2540)
    at org.hsqldb.jdbcStatement.fetchResult(jdbcStatement.java:1804)
    at org.hsqldb.jdbcStatement.executeQuery(jdbcStatement.java:199)
    at org.hsqldb.jdbcPreparedStatement.executeQuery(jdbcPreparedStatement.java:391)
    at com.solarmetric.datasource.PreparedStatementWrapper.executeQuery(PreparedStatementWrapper.java:93)
    at com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executePreparedQueryInternal(SQLExecutionManagerImpl.java:771)
    at com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executeQueryInternal(SQLExecutionManagerImpl.java:691)
    at com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executeQuery(SQLExecutionManagerImpl.java:372)
    at com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executeQuery(SQLExecutionManagerImpl.java:356)
    at com.solarmetric.kodo.impl.jdbc.ormapping.SubclassProviderImpl.getSubclasses(SubclassProviderImpl.java:246)
    at com.solarmetric.kodo.impl.jdbc.ormapping.ClassMapping.getPrimaryMappingFields(ClassMapping.java:1093)
    at com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.executeQuery(JDBCStoreManager.java:704)
    at com.solarmetric.kodo.impl.jdbc.runtime.JDBCQuery.executeQuery(JDBCQuery.java:93)
    at com.solarmetric.kodo.query.QueryImpl.executeWithMap(QueryImpl.java:792)
    at com.solarmetric.kodo.query.QueryImpl.execute(QueryImpl.java:595)
    at reversetutorial.Finder.main(Finder.java:32)

    The reason I did not run importtool is because ... I actually ran it, but
    it was not successful.
    I now tried the solutions directory from the Kodo distribution, and that
    failed as well. Here is what I did:
    - I went to reversetutorial/solutions, compiled all the classes, and
    placed them into a reversetutorial folder (to match the package)
    - ran "rd-importtool reversetutorial.mapping" (the mapping file from the
    solutions directory), which failed as below:
    0 [main] INFO kodo.MetaData - Parsing metadata resource
    "file:/C:/kodo/reversetutorial/solutions/reversetutorial.mapping".
    Exception in thread "main"
    com.solarmetric.rd.kodo.meta.JDOMetaDataNotFoundException: No JDO metadata
    was found for type "class reversetutorial.Article".
    FailedObject:class reversetutorial.Article
    at com.solarmetric.rd.kodo.meta.JDOMetaDataRepositoryImpl.getMetaData(JDOMetaDataRepositoryImpl.java:148)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.MappingRepository.getMetaData(MappingRepository.java:147)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.MappingRepository.getMapping(MappingRepository.java:158)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.getMapping(ImportTool.java:126)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.importMappings(ImportTool.java:57)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.run(ImportTool.java:408)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.main(ImportTool.java:385)
    Any idea why? The solutions directory should work, right? I even tried
    specifying a kodo.properties file, but it did not seem to help.
    Liviu
    "Abe White" <[email protected]> wrote in message
    news:[email protected]...
    Running the reversemappingtool creates classes, metadata files, and a
    .mapping file. That .mapping file contains all the O/R mapping
    information for how the generated classes map to your existing database
    tables. What the importtool does is just transfer that mapping
    information to the metadata files, in the form of <extension> elements.
    The reason this is a separate step will be clear once Kodo 3.0 comes out.
    So in sum, the importtool does not affect the database in any way. It
    just moves information from one format (.mapping file) to another
    (<extension> elements in the .jdo file).
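    That would also explain the MAGAZINEX symptom above: with no mapping
    information imported into the .jdo metadata, Kodo presumably falls back
    to its default naming scheme, and the MAGAZINEX/JDOCLASSX names in the
    failing SQL look like Kodo 2.x defaults (compare the JDOIDX column
    mentioned elsewhere in this digest) rather than the tutorial's existing
    MAGAZINE table.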

  • 3.1.4 reverse mapping tool issue

    (Sorry for the duplicate posting...I meant to start a new thread with
    this but accidentally posted it as a reply to a 6-month old thread)
    Hello,
    I was running Kodo 3.0.2 when Abe and I had the exchange reproduced
    below back in January to deal with Oracle tables with "$" in the column
    names (which I subsequently updated to 3.0.3). The original subject of
    this discussion was "3.0.2 reverse mapping tool generates invalid
    ..mapping file".
    I was able to get this working by running the following commands to
    implement Abe's suggestion:
    reversemappingtool -p kodo.properties -package db \
    -cp custom.properties -ds false schema.xml
    sed -e 's/\$/__DOLLAR__/' db/package.mapping > db/package.mapping.new
    mv db/package.mapping.new db/package.mapping
    javac db/*.java
    mappingtool -p kodo.properties -a import db/package.mapping
    sed -e 's/__DOLLAR__/\$/' db/package.jdo > db/package.jdo.new
    mv db/package.jdo.new db/package.jdo
    In my custom.properties file, I had lines like these to put useful names
    on my class's fields:
    db.TransactionDetailHistory.y$an8.rename : addressNumber
    As I said, in 3.0.3, this worked perfectly.
    I picked this code back up for the first time since getting it working 6
    months ago, and decided to update it to 3.1.4 (since I'm already using
    that on other projects). Problem is, the reverse mapping tool has
    changed and the code it generates no longer works as it once did. I
    tried running the 3.1.2 and 3.1.0 reverse mapping tool, and it failed
    the same way, so it looks like this change happened in the 3.0.x to
    3.1.x version change.
    What happens is this: In the generated Java source, my fields used to
    end up with names as per my specification (e.g., the Oracle column named
    "y$an8" showed up as "addressNumber" in the java source).
    However, it looks like the "$" became special somehow in 3.1.0 - the
    "y$an8" column now shows up as "yAn8" in the generated Java. I tried
    changing my custom.properties file accordingly, but it still shows up as
    yAn8 even after changing my mapping to look like this:
    db.TransactionDetailHistory.yAn8.rename : addressNumber
    What do you make of this?
    Thanks,
    Bill
    Abe White wrote:
    > Hmmm... this is a problem. '$' is not legal in XML names, and there
    > is no standard way to escape it.
    >
    > Your best bet is probably to do the following:
    > 1. In the generated .mapping file, replace all '$' characters with
    > another token, such as '--DOLLAR--'.
    > 2. Switch your properties to use the metadata mapping factory:
    > kodo.jdbc.MappingFactory: metadata
    > 3. Import your mappings into the metadata mapping factory:
    > mappingtool -a import package.mapping
    > 4. Delete the mapping file.
    > 5. In your .jdo file, replace '--DOLLAR--' with '$' again.
    >
    > The metadata mapping factory doesn't put column names in its XML
    > attribute names, so you should be able to use it safely.

    William-
    > However, it looks like the "$" became special somehow in 3.1.0 - the
    > "y$an8" column now shows up as "yAn8" in the generated Java. I tried
    > changing my custom.properties file accordingly, but it still shows up
    > as yAn8 even after changing my mapping to look like this:
    > db.TransactionDetailHistory.yAn8.rename : addressNumber
    Well, the reverse mapping tool makes some assumptions based on common
    naming strategies for relational databases and Java: columns like
    "FIRST_NAME" will be renamed to "firstName". The reverse mapping tool is
    seeing the "$", treating it as a non-alphanumeric delimiter, and so is
    "fixing" it.
    Can you try a couple of additional properties:
    db.TransactionDetailHistory.y$An8.rename: addressNumber
    db.TransactionDetailHistory.y$an8.rename: addressNumber
    Also, are other rename properties working for you, or is that the only
    field or class you attempt to rename? It might just be the case that
    you aren't correctly specifying the properties file or something.
    Finally, bear in mind that you can always implement your own
    kodo.jdbc.meta.ReverseCustomizer and just use that; not the easiest
    solution, but it certainly gives you very fine-grained control over the
    exact names that are generated.
    In article <[email protected]>, William Korb wrote:
    (Sorry for the duplicate posting...I meant to start a new thread with
    this but accidentally posted it as a reply to a 6-month old thread)
    Hello,
    I was running Kodo 3.0.2 when Abe and I had the exchange reproduced
    below back in January to deal with Oracle tables with "$" in the column
    names (which I subsequently updated to 3.0.3). The original subject of
    this discussion was "3.0.2 reverse mapping tool generates invalid
    .mapping file".
    I was able to get this working by running the following commands to
    implement Abe's suggestion:
    reversemappingtool -p kodo.properties -package db \
    -cp custom.properties -ds false schema.xml
    sed -e 's/\$/__DOLLAR__/' db/package.mapping > db/package.mapping.new
    mv db/package.mapping.new db/package.mapping
    javac db/*.java
    mappingtool -p kodo.properties -a import db/package.mapping
    sed -e 's/__DOLLAR__/\$/' db/package.jdo > db/package.jdo.new
    mv db/package.jdo.new db/package.jdo
    In my custom.properties file, I had lines like these to put useful names
    on my class's fields:
    db.TransactionDetailHistory.y$an8.rename : addressNumber
    As I said, in 3.0.3, this worked perfectly.
    I picked this code back up for the first time since getting it working 6
    months ago, and decided to update it to 3.1.4 (since I'm already using
    that on other projects). Problem is, the reverse mapping tool has
    changed and the code it generates no longer works as it once did. I
    tried running the 3.1.2 and 3.1.0 reverse mapping tool, and it failed
    the same way, so it looks like this change happened in the 3.0.x to
    3.1.x version change.
    What happens is this: In the generated Java source, my fields used to
    end up with names as per my specification (e.g., the Oracle column named
    "y$an8" showed up as "addressNumber" in the java source).
    However, it looks like the "$" became special somehow in 3.1.0 - the
    "y$an8" column now shows up as "yAn8" in the generated Java. I tried
    changing my custom.properties file accordingly, but it still shows up as
    yAn8 even after changing my mapping to look like this:
    db.TransactionDetailHistory.yAn8.rename : addressNumber
    What do you make of this?
    Thanks,
    Bill
    Abe White wrote:
    Hmmm... this is a problem. '$' is not legal in XML names, and thereis no standard way to escape it.
    Your best bet is probably to do the following:
    1. In the generated .mapping file, replace all '$' characters withanother token, such as '--DOLLAR--'.
    2. Switch your properties to use the metadata mapping factory:
    kodo.jdbc.MappingFactory: metadata
    3. Import your mappings into the metadata mapping factory:
    mappingtool -a import package.mapping
    4. Delete the mapping file.
    5. In your .jdo file, replace '--DOLLAR--' with '$' again.
    The metadata mapping factory doesn't put column names in its XMLattribute names, so you should be able to use it safely.--
    Marc Prud'hommeaux
    SolarMetric Inc.
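    For cases like this where the built-in renaming rules get in the way, a
    custom customizer can pin the field name explicitly. Below is a rough,
    untested sketch; the API calls are lifted from the customizer example in
    the "Unrecognized types" message later in this digest, and the package
    of PropertiesReverseCustomizer, the setTool() hook, and the long field
    type are assumptions:

    import kodo.meta.*;
    import kodo.jdbc.meta.*;
    import kodo.jdbc.schema.*;

    // Rough sketch only: maps the y$an8 column to a field named
    // "addressNumber" by hand, bypassing the tool's "$"-delimiter logic.
    public class DollarColumnCustomizer extends PropertiesReverseCustomizer
    {
        private ReverseMappingTool tool;

        public void setTool (ReverseMappingTool tool)
        {
            super.setTool (tool); // assumption: superclass keeps a reference too
            this.tool = tool;
        }

        public boolean customize (ClassMapping cls)
        {
            Column[] cols = cls.getTable ().getColumns ();
            for (int i = 0; i < cols.length; i++)
            {
                if ("Y$AN8".equalsIgnoreCase (cols[i].getName ()))
                {
                    // long.class chosen arbitrarily; use the real column type
                    FieldMetaData fmd =
                        tool.newFieldMetaData ("addressNumber", long.class, cls);
                    ValueFieldMapping field = new ValueFieldMapping (fmd);
                    field.setColumn (cols[i]);
                    tool.addFieldMapping (field, cls);
                }
            }
            return super.customize (cls);
        }
    }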

  • Problem with reverse mapping

    Hi!
    I am having a problem with reverse mapping. Here's what I do (copying the
    generated files to a correct directory omitted):
    % rd-schemagen -properties jdo.properties -file schema.xml
    % rd-reversemappingtool -properties jdo.properties -package testi
    schema.xml
    % javac -d build/classes src/testi/*.java
    % rd-importtool -properties jdo.properties src/testi/testi.mapping
    Here's a part of the output:
    <clip>
    2958 INFO [main] jdbc.Schema - Found existing table "Kirja" for schema
    "null".
    3002 INFO [main] jdbc.Schema - Found existing table "Kustantaja" for
    schema "null".
    3047 INFO [main] jdbc.SQL - [C: 5948361; T: 15336018]close
    3125 INFO [main] jdbc.SQL - [C: 2478770; T: 15336018]open:
    jdbc:mysql://localhost/kirjakauppa (root)
    3129 INFO [main] jdbc.Schema - Found existing table "Kirjailija" for
    schema "null".
    3140 INFO [main] jdbc.SQL - [C: 2478770; T: 15336018]close
    3187 INFO [main] jdbc.SQL - [C: 7529545; T: 15336018]open:
    jdbc:mysql://localhost/kirjakauppa (root)
    3193 INFO [main] jdbc.Schema - Found existing table "Kirjoittaja" for
    schema "null".
    3225 INFO [main] jdbc.SQL - [C: 7529545; T: 15336018]close
    Exception in thread "main" javax.jdo.JDOFatalInternalException:
    java.lang.Illega
    lArgumentException: You are attempting to link to a primary key column in
    table "Kirja" in a foreign key that is already linked to primary key
    columns in table "Kirjailija".
    NestedThrowables:
    java.lang.IllegalArgumentException: You are attempting to link to a primary
    key column in table "Kirja" in a foreign key that is already linked to
    primary key c
    olumns in table "Kirjailija".
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.Mappings.createClassMapping(Ma
    ppings.java:160)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.MappingRepository.getMapping(M
    appingRepository.java:279)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.MappingRepository.getMetaData(
    MappingRepository.java:147)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.MappingRepository.getMapping(M
    appingRepository.java:158)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.getMapping(I
    mportTool.java:126)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.importMappin
    gs(ImportTool.java:57)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.run(ImportTo
    ol.java:408)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.main(ImportT
    ool.java:385)
    NestedThrowablesStackTrace:
    java.lang.IllegalArgumentException: You are attempting to link to a primary
    key column in table "Kirja" in a foreign key that is already linked to
    primary key c
    olumns in table "Kirjailija".
    at
    com.solarmetric.rd.kodo.impl.jdbc.schema.ForeignKey.join(ForeignKey.j
    ava:238)
    at
    com.solarmetric.rd.kodo.impl.jdbc.schema.SchemaGenerator.generateFore
    ignKeys(SchemaGenerator.java:625)
    at
    com.solarmetric.rd.kodo.impl.jdbc.schema.DynamicSchemaFactory.findTab
    le(DynamicSchemaFactory.java:111)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.map.BaseClassMapping.fromMappi
    ngInfo(BaseClassMapping.java:113)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.Mappings.createClassMapping(Ma
    ppings.java:144)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.MappingRepository.getMapping(M
    appingRepository.java:279)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.MappingRepository.getMetaData(
    MappingRepository.java:147)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.MappingRepository.getMapping(M
    appingRepository.java:158)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.getMapping(I
    mportTool.java:126)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.importMappin
    gs(ImportTool.java:57)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.run(ImportTo
    ol.java:408)
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.main(ImportT
    ool.java:385)
    </clip>
    Here's what MySQLCC gives as the creation statements for the tables:
    <clip>
    # Host: localhost
    # Database: kirjakauppa
    # Table: 'Asiakas'
    # CREATE TABLE `Asiakas` (
    `Asiakas_id` int(11) NOT NULL auto_increment,
    `Nimi1` varchar(50) default NULL,
    `Nimi2` varchar(50) default NULL,
    `KatuOsoite` varchar(50) default NULL,
    `Postiosoite` varchar(50) default NULL,
    `Email` varchar(50) default NULL,
    `Puhelin` varchar(50) default NULL,
    `Fax` varchar(50) default NULL,
    `Salasana` varchar(50) default NULL,
    `ExtranetTunnus` varchar(50) default NULL,
    PRIMARY KEY (`Asiakas_id`),
    KEY `Asiakas_id` (`Asiakas_id`)
    ) TYPE=InnoDB;
    # Host: localhost
    # Database: kirjakauppa
    # Table: 'Kirja'
    # CREATE TABLE `Kirja` (
    `Kirja_id` int(11) NOT NULL auto_increment,
    `Kustantaja_id` int(11) default NULL,
    `Nimi` varchar(60) default NULL,
    `Nimi2` varchar(60) default NULL,
    `ISBN` varchar(50) default NULL,
    `Kieli` varchar(50) default NULL,
    `Kansi_URL` varchar(50) default NULL,
    `Sisalto_URL` varchar(50) default NULL,
    `Tukkuhinta` decimal(10,2) default NULL,
    `Kuluttajahinta` decimal(10,2) default NULL,
    `Varastokpl` int(11) default NULL,
    PRIMARY KEY (`Kirja_id`),
    KEY `Kirja_id` (`Kirja_id`),
    KEY `Kustantaja_id` (`Kustantaja_id`),
    FOREIGN KEY (`Kustantaja_id`) REFERENCES `kirjakauppa.Kustantaja`
    (`Kustantaja_id`)
    ) TYPE=InnoDB;
    # Host: localhost
    # Database: kirjakauppa
    # Table: 'Kirjailija'
    # CREATE TABLE `Kirjailija` (
    `Kirjailija_id` int(11) NOT NULL auto_increment,
    `Sukunimi` varchar(50) default NULL,
    `Etunimi` varchar(50) default NULL,
    `Maa` varchar(50) default NULL,
    `Kirjailija_URL` varchar(50) default NULL,
    PRIMARY KEY (`Kirjailija_id`),
    KEY `Kirjailija_id` (`Kirjailija_id`)
    ) TYPE=InnoDB;
    # Host: localhost
    # Database: kirjakauppa
    # Table: 'Kirjoittaja'
    # CREATE TABLE `Kirjoittaja` (
    `Kirjoittaja_id` int(11) NOT NULL auto_increment,
    `Kirjailija_id` int(11) NOT NULL default '0',
    `Kirja_id` int(11) NOT NULL default '0',
    PRIMARY KEY (`Kirjoittaja_id`),
    KEY `Kirjailija_id` (`Kirjailija_id`),
    KEY `Kirja_id` (`Kirja_id`),
    FOREIGN KEY (`Kirjailija_id`) REFERENCES `kirjakauppa.Kirjailija`
    (`Kirjailija_id`),
    FOREIGN KEY (`Kirja_id`) REFERENCES `kirjakauppa.Kirja` (`Kirja_id`)
    ) TYPE=InnoDB;
    # Host: localhost
    # Database: kirjakauppa
    # Table: 'Koodi'
    # CREATE TABLE `Koodi` (
    `Koodi_id` int(11) NOT NULL auto_increment,
    `Koodi` varchar(50) default NULL,
    `Tyyppi` varchar(50) default NULL,
    `Arvo` varchar(50) default NULL,
    PRIMARY KEY (`Koodi_id`)
    ) TYPE=InnoDB;
    # Host: localhost
    # Database: kirjakauppa
    # Table: 'Kustantaja'
    # CREATE TABLE `Kustantaja` (
    `Kustantaja_id` int(11) NOT NULL auto_increment,
    `Nimi` varchar(80) default NULL,
    `Maa` varchar(50) default NULL,
    `Kustantaja_URL` varchar(50) default NULL,
    `KirjaLkm` int(11) default NULL,
    PRIMARY KEY (`Kustantaja_id`),
    KEY `Kustantaja_id` (`Kustantaja_id`)
    ) TYPE=InnoDB;
    # Host: localhost
    # Database: kirjakauppa
    # Table: 'Luokittelu'
    # CREATE TABLE `Luokittelu` (
    `Luokittelu_id` int(11) NOT NULL auto_increment,
    `Luokka_id` int(11) NOT NULL default '0',
    `Kirja_id` int(11) NOT NULL default '0',
    PRIMARY KEY (`Luokittelu_id`),
    KEY `Luokka_id` (`Luokka_id`),
    KEY `Kirja_id` (`Kirja_id`),
    FOREIGN KEY (`Luokka_id`) REFERENCES `kirjakauppa.Luokka` (`Luokka_id`),
    FOREIGN KEY (`Kirja_id`) REFERENCES `kirjakauppa.Kirja` (`Kirja_id`)
    ) TYPE=InnoDB;
    # Host: localhost
    # Database: kirjakauppa
    # Table: 'Luokka'
    # CREATE TABLE `Luokka` (
    `Luokka_id` int(11) NOT NULL auto_increment,
    `Luokka` varchar(50) default NULL,
    PRIMARY KEY (`Luokka_id`),
    KEY `Luokka_id` (`Luokka_id`)
    ) TYPE=InnoDB;
    # Host: localhost
    # Database: kirjakauppa
    # Table: 'Myyja'
    # CREATE TABLE `Myyja` (
    `Myyja_id` int(11) NOT NULL auto_increment,
    `Myyja` varchar(50) default NULL,
    `Myyja_URL` varchar(50) default NULL,
    PRIMARY KEY (`Myyja_id`),
    KEY `Myyja_id` (`Myyja_id`)
    ) TYPE=InnoDB;
    # Host: localhost
    # Database: kirjakauppa
    # Table: 'Tilaus'
    # CREATE TABLE `Tilaus` (
    `Tilaus_id` int(11) NOT NULL auto_increment,
    `Asiakas_id` int(11) NOT NULL default '0',
    `Myyja_id` int(11) default NULL,
    `TilausPvm` timestamp(14) NOT NULL,
    `EnsimmToimitusPvm` timestamp(14) NOT NULL,
    `ViimToimitusPvm` timestamp(14) NOT NULL,
    `Tila` int(11) NOT NULL default '0',
    `Mk` decimal(10,2) default NULL,
    PRIMARY KEY (`Tilaus_id`),
    KEY `Asiakas_id` (`Asiakas_id`),
    KEY `Myyja_id` (`Myyja_id`),
    KEY `Tilaus_id` (`Tilaus_id`),
    FOREIGN KEY (`Asiakas_id`) REFERENCES `kirjakauppa.Asiakas`
    (`Asiakas_id`),
    FOREIGN KEY (`Myyja_id`) REFERENCES `kirjakauppa.Myyja` (`Myyja_id`)
    ) TYPE=InnoDB;
    # Host: localhost
    # Database: kirjakauppa
    # Table: 'Tilausrivi'
    # CREATE TABLE `Tilausrivi` (
    `TilausRivi_id` int(11) NOT NULL auto_increment,
    `Tilaus_id` int(11) NOT NULL default '0',
    `Kirja_id` int(11) NOT NULL default '0',
    `TilausLkm` int(11) default NULL,
    `Ahinta` decimal(10,2) default NULL,
    `Alepros` float default NULL,
    `Mk` decimal(10,2) default NULL,
    `ToimitettuLkm` int(11) default NULL,
    `ToimitusPvm` timestamp(14) NOT NULL,
    `ViimToimitusPvm` timestamp(14) NOT NULL,
    `Tila` int(11) NOT NULL default '0',
    PRIMARY KEY (`TilausRivi_id`),
    KEY `Tilaus_id` (`Tilaus_id`),
    KEY `Kirja_id` (`Kirja_id`),
    FOREIGN KEY (`Tilaus_id`) REFERENCES `kirjakauppa.Tilaus` (`Tilaus_id`),
    FOREIGN KEY (`Kirja_id`) REFERENCES `kirjakauppa.Kirja` (`Kirja_id`)
    ) TYPE=InnoDB;
    </clip>
    I can find the original creation script if it is necessary.
    My guess was that I need to define the foreign keys myself in the
    generated schema.xml; this is stated in the manual. However, this did
    not help, although it changed the stack trace a little (it complains
    about different classes than before):
    <clip>
    Exception in thread "main" javax.jdo.JDOFatalInternalException:
    java.lang.IllegalArgumentException: You are attempting to link to a primary
    key column in table "Myyja" in a foreign key that is already linked to
    primary key columns in table "Asiakas".
    NestedThrowables:
    java.lang.IllegalArgumentException: You are attempting to link to a primary
    key column in table "Myyja" in a foreign key that is already linked to
    primary key columns in table "Asiakas".
    at
    com.solarmetric.rd.kodo.impl.jdbc.meta.Mappings.createFieldMapping(Mappings.java:208)
    </clip>
    I don't think I fully understand the error message, what exactly is wrong
    here? How can I fix it?
    Here's a sample of the changes I made to schema.xml:
    - added the name attribute to the schema element (it was missing):
    <schema name="kirjakauppa">
    - added the foreign key elements according to the table creation
    statements given above:
    <fk name="Kustantaja_id" to-table="Kustantaja" column="Kustantaja_id"/>
         etc...
    -Antti

    On Mon, 16 Jun 2003 17:55:35 -0500, Abe White <[email protected]>
    wrote:
    >> It seems the last three options are being ignored - I still get a
    >> mapping file with schema names in front of tables (e.g.
    >> kirjakauppa.Asiakas, not Asiakas).
    > That, unfortunately, is impossible to turn off. The -useSchemaName
    > option controls whether the schema name is included as part of the
    > generated class name; it doesn't affect the mapping data that is
    > generated. What problems does including the schema name in the
    > mapping data cause?
    % rd-importtool -properties jdo.properties gensrc/testi/testi.mapping
    0 INFO [main] kodo.MetaData - Parsing metadata resource
    "file:/home/akaranta/work/kurssit/jdo/Harjoituskoodi/kirjakauppa/gensrc/testi/testi.mapping".
    Exception in thread "main"
    com.solarmetric.rd.kodo.meta.JDOMetaDataNotFoundException: No JDO metadata
    was found for type "class testi.Asiakas".
    FailedObject:class testi.Asiakas
    at com.solarmetric.rd.kodo.meta.JDOMetaDataRepositoryImpl.getMetaData(JDOMetaDataRepositoryImpl.java:126)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.MappingRepository.getMetaData(MappingRepository.java:184)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.MappingRepository.getMapping(MappingRepository.java:197)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.getMapping(ImportTool.java:128)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.importMappings(ImportTool.java:60)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.run(ImportTool.java:400)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.main(ImportTool.java:377)
    This exception goes away if I edit the schema name out of the mapping
    file from all classes.
    >> Separate classes are being generated for join tables with primary
    >> keys.
    > Do these join tables have an extra primary key column?
    Yes, they do. Ok, now I know where the problem is.
    > The -primaryKeyOnJoin flag tells Kodo to ignore a join table with a
    > primary key on the join columns. But Kodo can't handle join tables
    > with extra column(s) just for a primary key identifier. This isn't a
    > limitation of the reverse mapping tool, it's a limitation of Kodo.
    > Kodo wouldn't know what to insert in those extra primary key
    > column(s) when adding members to the join table.
    Why not? If it can handle single numeric pk columns when making the
    generated classes use data store identity, it has to generate something
    for those columns. I can't see why this is different.
    That is simply out of curiosity - the next thing fixed my problem:
    > Of course, if the primary key is an auto-increment or something where
    > Kodo can ignore it for inserts, you can just remove the <column>
    > elements and the <pk> element from your .schema file and the reverse
    > mapping tool will map it as a join table appropriately.
    It is auto-increment, so I did this and it worked. Thanks.
    >> ...and application id is used for all classes.
    > Are your primary keys on single, numeric columns? Kodo uses Java longs
    Yes (int in MySQL), so that should not be a problem. They are also
    auto-incremented. This seems to be the only real problem remaining with
    this schema.
    -Antti
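    One detail worth noting in the DDL above: REFERENCES
    `kirjakauppa.Kustantaja` puts the schema-qualified name inside a single
    pair of backticks, which MySQL parses as one identifier that happens to
    contain a dot (the qualified form would be `kirjakauppa`.`Kustantaja`).
    Given that the log above also reports every table under schema "null",
    ambiguity like this in the foreign key metadata could plausibly be part
    of what confuses the schema generator into merging foreign keys.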

  • Unrecognized types in Reverse Mapping

    In our database schema there is a user-defined database type. The
    reverse mapping tool cannot recognize this type, so it automatically
    classifies it as a blob in the mapping file and as the generic Object in
    the Java sources. I would like to map this user-defined type to a String
    in Java, because otherwise Kodo blows up when I try to retrieve the
    field. I extended PropertiesReverseCustomizer and was able to get the
    Java sources to output String instead, but I couldn't find an easy way
    of getting the mapping file to use "value" instead of "blob". Right now
    I am doing a query-replace on the mapping file; is there a way of
    getting the reverse mapping tool to do this for you?
    Toby

    You can probably just add the fields for UDT types manually in your
    customizer:
    import java.sql.*;
    import kodo.meta.*;
    import kodo.jdbc.meta.*;
    import kodo.jdbc.schema.*;

    private ReverseMappingTool tool; // set in setTool ()

    public boolean customize (ClassMapping cls)
    {
        Column[] cols = cls.getTable ().getColumns ();
        for (int i = 0; i < cols.length; i++)
        {
            // UDT columns are reported as BLOB-compatible; remap each one
            if (cols[i].isCompatible (Types.BLOB, 0))
                addStringField (cls, cols[i]);
        }
        return super.customize (cls);
    }

    private void addStringField (ClassMapping cls, Column col)
    {
        String name = tool.getFieldName (col.getName (), cls);
        FieldMetaData fmd = tool.newFieldMetaData (name, String.class, cls);
        ValueFieldMapping field = new ValueFieldMapping (fmd);
        field.setColumn (col); // attach the column to the new String field
        tool.addFieldMapping (field, cls);
    }
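    Note that the method still ends by returning super.customize (cls), so
    the tool's normal mapping logic runs for everything else; only columns
    reporting a BLOB-compatible type get the hand-built String field.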

  • Reverse Mapping: letting Kodo manage pk-column

    Hi again,
    I have a DB with many predefined tables. It's not allowed to change the
    DB schema. The generated Java classes are looking fine, but I want to
    let Kodo manage the pk columns (like JDOIDX in generated tables). I
    don't want the more technical pks in my business classes. Is it
    possible?
    Any help is welcome!

    Abe White wrote:
    > Adding a data store identity option to the reverse mapping tool is
    > relatively high on our to-do list, but it's not implemented yet. For
    > now, you can follow the steps in the documentation for reverse-mapping
    > your classes, then switch over to datastore identity manually by
    > changing the class definitions and metadata.
    Ok, this solution is workable. Will try it, thanks!
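    For reference, the manual switch described above amounts to removing
    the primary-key field from the generated class and declaring
    identity-type="datastore" on the corresponding <class> element in the
    .jdo metadata (standard JDO 1.0 syntax); Kodo then manages the
    surrogate key column itself, much like the JDOIDX column mentioned
    above.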

  • Partition Map Schemes: HFS+ and FAT32 partitions with OSX and Windows

    OK, so I know this question has been practically beaten to death, but I keep finding conflicting information. I am using a 2011 MacBook Pro, on which I will set up Windows through Boot Camp. I recently purchased a 750 GB WD external hard drive to use with Time Machine for a backup on my Mac. However, I also need to be able to use part of this for Windows files. So I intend to use the HFS+ partition for the Mac (500 GB) and create a FAT32 partition (250 GB) to use for backing up Windows files (using it solely for computer modeling; I need to be able to transfer/share files with Mac users who use Parallels, as well as copy to PC desktops). My question is what to use as the partition map scheme. I have heard that when using these two partition types, a Master Boot Record is needed (so Windows can recognize the FAT32 partition), and also that a GUID partition map is required for use with Time Machine, meaning Windows would no longer be able to read the FAT32 partition. Is there a way to reconcile this? Either using Time Machine with an HFS+ partition that is set to MBR, or using FAT32 on Windows with a GUID partition map? Also, if I were to use Parallels (with a GUID setup) instead of Boot Camp, could that be a way to save the Windows files to the FAT32 partition and avoid problems with Time Machine not working with MBR? Thanks for any expertise; I have heard both that the setups I mentioned will work and that they will not. Any experience with a similar situation?

    Wow, thanks for the extremely quick responses. Just a few points of clarification... I'm a complete newb at backup strategies.
    Steve, you would recommend not backing up files from my Mac OS X and files from Windows (also on my Mac) on the same drive, correct?
    I appreciate the strategy of using it only as a backup; that makes quite a bit of sense. However, if I want to back up only my OS X files, and also store (solely as backup copies), say, a number of computer models (Rhino, Revit, etc.) that were created in Windows programs (not needing to store the entire Windows disk), would it not make sense to store these on the same drive in a different partition, creating the need for two different partition formats? And if I were to do this, maybe I should use NTFS instead of FAT32 (and reformat to GUID, since that seems to be a standard for Apple, and Windows 7 recognizes it?) to keep them completely separate, since the computer model files cannot be opened unless running the Windows programs.
    How do you use your drive with HFS+ and NTFS if not for backups? I will not need to access the HFS+ backup files in Windows, nor access files from an NTFS partition in OS X, so that seems to simplify things in that, at least at the moment, I will not need any Paragon software.
    Currently my drive is partitioned as HFS+ and FAT32 under MBR, with the HFS+ partition set up with Time Machine. It appears to be successful: I see my files in Mac HD -> "users", and all my docs, desktop items, etc. are listed. It seems that there is in fact no limit on TM's use of MBR maps, or else it is way above 160 GB.
    Third, are you using Carbon Copy Cloner in place of Time Machine or in addition to it? If in addition, would it create the bit-wise clone on the same HFS+ partition that TM is backing up to, or on a separate drive? I'd like to have only one external that I am backing up to, for simplicity's sake. I've never used TM before, so this is all new to me. Also, I suppose I have been missing the distinction between storing copies of files and making a complete backup of a disk image... just now realizing the difference. Thanks so much.

  • 2 Different Partition Map Scheme's for one external hard drive

    Is it possible to have one external hard drive with 2 different
    partition map schemes?
    One of them will be used to externally boot Tiger on a PowerPC, and the
    other for use on my MacBook Pro?

    And this drive is currently not partitioned, I take it?
    You need to make a backup of the data on the drive, then partition it.
    The first partition must be set, under Disk Utility / Partition /
    Options, as Mac bootable so that it installs the drivers; then install
    Tiger onto the first partition and move your current data back onto the
    second one.
    You will then be able to boot Tiger on the older PowerBook by holding
    down 'option' at start up. If you just have it connected to the MacBook
    Pro and start up normally, it will appear as two volumes on your
    desktop: the Tiger boot volume and your data volume.

  • Finding exception with the read-write-backing-map-scheme configuration.

    I am getting an exception with the <read-write-backing-map-scheme> configuration, which is set up against a simple database cache store implementation. The class SimpleCacheEventStoreImpl implements the CacheStore interface.
    Exception in thread "main" java.lang.UnsupportedOperationException: configureCache: read-write-backing-map-scheme
         at com.tangosol.net.DefaultConfigurableCacheFactory.configureCache(DefaultConfigurableCacheFactory.java:995)
         at com.tangosol.net.DefaultConfigurableCacheFactory.ensureCache(DefaultConfigurableCacheFactory.java:277)
         at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:689)
         at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:667)
         at Sample.SimpleEventStoreConsumer.main(SimpleEventStoreConsumer.java:10)
    The cache store is wired to the program SimpleEventStoreConsumer (where I have a put and a get operation) through the following cache configuration descriptor. On running SimpleEventStoreConsumer, the exception occurs when trying to get the named cache from the cache factory:
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>Evt*</cache-name>
                   <scheme-name>SampleDatabaseScheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <read-write-backing-map-scheme>
                   <scheme-name>SampleDatabaseScheme</scheme-name>
                   <internal-cache-scheme>
                        <local-scheme>
                             <scheme-ref>SampleMemoryScheme</scheme-ref>
                        </local-scheme>
                   </internal-cache-scheme>
                   <cachestore-scheme>
                        <class-scheme>
                             <class-name>com.emc.srm.cachestore.SimpleCacheEventStoreImpl</class-name>
                             <init-params>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>{cache-name}</param-value>
                                  </init-param>
                             </init-params>
                        </class-scheme>
                   </cachestore-scheme>
              </read-write-backing-map-scheme>
              <local-scheme>
                   <scheme-name>SampleMemoryScheme</scheme-name>
              </local-scheme>
         </caching-schemes>
    </cache-config>

    You are missing a <backing-map-scheme> wrapper: a
    read-write-backing-map-scheme defines a backing map, not a complete
    cache service, so it cannot be mapped to a cache name directly. Wrap it
    in a distributed scheme, like the following:
    <caching-schemes>
         <distributed-scheme>
              <scheme-name>distributed-scheme</scheme-name>
              <service-name>DistributedQueryCache</service-name>
              <backing-map-scheme>
                   <read-write-backing-map-scheme>
                        <scheme-ref>rw-bm</scheme-ref>
                   </read-write-backing-map-scheme>
              </backing-map-scheme>
              <autostart>true</autostart>
         </distributed-scheme>
         <read-write-backing-map-scheme>
              <scheme-name>rw-bm</scheme-name>
              <internal-cache-scheme>
                   <local-scheme/>
              </internal-cache-scheme>
         </read-write-backing-map-scheme>
    </caching-schemes>
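    With a configuration along those lines in place, the original lookup
    should go through. A minimal sketch of the consumer side, using only
    Coherence API calls already visible in the stack trace above (the cache
    name "EvtStore" is an arbitrary choice matching the Evt* mapping):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class SimpleEventStoreConsumer {
        public static void main(String[] args) {
            // "EvtStore" matches the Evt* cache-mapping in the descriptor
            NamedCache cache = CacheFactory.getCache("EvtStore");
            cache.put("event-1", "payload");     // written through to the CacheStore
            Object value = cache.get("event-1"); // read through on a cache miss
            System.out.println(value);
            CacheFactory.shutdown();
        }
    }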

  • Ssh has stopped working - reverse mapping causes segmentation fault

    This was working on Friday, believe me. I haven't done anything that I'm aware of (apart from reboot the machine) to change things, except in trying to fix it.
    Briefly, ssh crashes out with a segmentation fault and a crash log (below). Poking around with verbosity gives (real ip obscured):
    % ssh ip4 -vvvv
    OpenSSH_3.8.1p1, OpenSSL 0.9.7i 14 Oct 2005
    debug1: Reading configuration data /etc/ssh_config
    debug2: ssh_connect: needpriv 0
    debug1: Connecting to ip4 [ip4] port 22.
    debug1: Connection established.
    debug1: identity file /Users/rpg/.ssh/identity type -1
    debug1: identity file /Users/rpg/.ssh/id_rsa type -1
    debug1: identity file /Users/rpg/.ssh/id_dsa type -1
    debug1: Remote protocol version 1.99, remote software version OpenSSH_3.8.1p1
    debug1: match: OpenSSH_3.8.1p1 pat OpenSSH*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_3.8.1p1
    debug3: Trying to reverse map address ip4.
    Segmentation fault
    I get similar reports for % ssh FQDN and % ssh $USER@[FQDN|ip4].
    Although I'm trying to ssh to a machine on another continent, trying to ssh into my own machine (from a Terminal window on my own machine) also does not work. Setting UseDNS no in /etc/sshd_config on my machine does not help. Oddly, trying to ssh to my own machine by
    %ssh 127.0.0.1 gives
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_3.8.1p1
    debug3: Trying to reverse map address 127.0.0.1.
    debug1: An invalid name was supplied
    Configuration file does not specify default realm
    debug1: An invalid name was supplied
    A parameter was malformed
    Validation error
    debug1: An invalid name was supplied
    Configuration file does not specify default realm
    debug1: An invalid name was supplied
    A parameter was malformed
    Validation error
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    and then works (after a lot more info).
    I can ssh into this machine from elsewhere, I just can't ssh out. Below the crash log is a report from running a server with
    % sudo sshd -D -ddd -e -p 10000
    and connecting with
    % ssh -vvv -p 10000 $USER@FQDN
    ssh crash log:
    Date/Time: 2006-05-29 15:42:09.284 +1000
    OS Version: 10.4.6 (Build 8I1119)
    Report Version: 4
    Command: ssh
    Path: /usr/bin/ssh
    Parent: bash [1020] (note: also fails under tcsh)
    Version: ??? (???)
    PID: 1021
    Thread: 0
    Exception: EXC_BAD_ACCESS (0x0001)
    Codes: KERN_INVALID_ADDRESS (0x0001) at 0xb1d255e4
    Thread 0 Crashed:
    0 libstdc++.6.dylib 0x90b37e3a __cxa_get_globals + 324
    1 libstdc++.6.dylib 0x90b3853a __gxx_personality_v0 + 658
    2 libgcc_s.1.dylib 0x90bcabf7 _Unwind_RaiseException + 147
    3 libstdc++.6.dylib 0x90b38857 __cxa_throw + 87
    4 edu.mit.Kerberos 0x94c4a238 CCIContextDataMachIPCStub::OpenCCache(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 314
    5 edu.mit.Kerberos 0x94c49fde CCEContext::OpenCCache(cccontextd*, char const*, ccccached**) + 160
    6 edu.mit.Kerberos 0x94c49d5e cc_open + 64
    7 edu.mit.Kerberos 0x94c49bf6 krb5_stdcc_resolve + 182
    8 edu.mit.Kerberos 0x94c4f1a1 __KLGetCCacheByName + 254
    9 edu.mit.Kerberos 0x94c4ee8a __KLAcquireInitialTicketsForCache + 179
    10 edu.mit.Kerberos 0x94c4ed7f krb5int_cc_default + 85
    11 edu.mit.Kerberos 0x94c40215 krb5_gss_acquire_cred + 2409
    12 edu.mit.Kerberos 0x94c4ed11 kg_get_defcred + 73
    13 edu.mit.Kerberos 0x94c4da14 krb5_gss_init_sec_context + 208
    14 ssh 0x00024305 0x1000 + 144133
    15 ssh 0x000246f4 0x1000 + 145140
    16 ssh 0x000247fb 0x1000 + 145403
    17 ssh 0x0000c462 0x1000 + 46178
    18 ssh 0x0000a251 0x1000 + 37457
    19 ssh 0x000042c7 0x1000 + 12999
    20 ssh 0x000025f2 0x1000 + 5618
    21 ssh 0x0000250d 0x1000 + 5389
    Thread 0 crashed with i386 Thread State:
    eax: 0x00000000 ebx: 0x90b3880d ecx: 0xbfffda7c edx: 0xa4c425a0
    edi: 0xb1d255e4 esi: 0xa4c425a0 ebp: 0xbfffd9e8 esp: 0xbfffd9b0
    ss: 0x0000002f efl: 0x00010246 eip: 0x90b37e3a cs: 0x00000027
    ds: 0x0000002f es: 0x0000002f fs: 0x00000000 gs: 0x00000037
    sudo sshd -D -ddd -e -p 10000:
    debug2: read_server_config: filename /etc/sshd_config
    debug1: sshd version OpenSSH_3.8.1p1
    debug1: private host key: #0 type 0 RSA1
    debug3: Not a RSA1 key file /etc/ssh_host_rsa_key.
    debug1: read PEM private key done: type RSA
    debug1: private host key: #1 type 1 RSA
    debug3: Not a RSA1 key file /etc/ssh_host_dsa_key.
    debug1: read PEM private key done: type DSA
    debug1: private host key: #2 type 2 DSA
    debug1: Bind to port 10000 on ::.
    Server listening on :: port 10000.
    debug1: Bind to port 10000 on 0.0.0.0.
    Server listening on 0.0.0.0 port 10000.
    Generating 768 bit RSA key.
    RSA key generation complete. <- pause here
    debug1: Server will not fork when running in debugging mode.
    Connection from ip4 port 50148
    debug1: Current Session ID is 00B16810 / Session Attributes are 00008030
    debug1: Creating new security session...
    debug1: New Session ID is 0F7C2940 / Session Attributes are 00009020
    debug1: Client protocol version 2.0; client software version OpenSSH_3.8.1p1
    debug1: match: OpenSSH_3.8.1p1 pat OpenSSH*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-1.99-OpenSSH_3.8.1p1
    debug2: Network child is on pid 1101
    debug3: preauth child monitor started
    debug3: mm_request_receive entering
    debug3: privsep user:group 75:75
    debug1: permanently_set_uid: 75/75
    debug1: list_hostkey_types: ssh-rsa,ssh-dss
    debug3: mm_request_send entering: type 40
    debug3: mm_request_receive_expect entering: type 41
    debug3: mm_request_receive entering
    debug3: monitor_read: checking request 40
    debug1: Miscellaneous failure
    No such file or directory
    debug3: mm_request_send entering: type 41
    debug3: mm_request_receive entering
    debug1: no credentials for GSSAPI mechanism Kerberos
    debug1: SSH2_MSG_KEXINIT sent
    Connection closed by ip4
    debug1: do_cleanup
    debug1: PAM: cleanup
    debug3: PAM: sshpam_thread_cleanup entering
    debug1: do_cleanup
    debug1: PAM: cleanup
    debug3: PAM: sshpam_thread_cleanup entering
    and
    % ssh -vvv -p 10000 $USER@FQDN :
    OpenSSH_3.8.1p1, OpenSSL 0.9.7i 14 Oct 2005
    debug1: Reading configuration data /etc/ssh_config
    debug2: ssh_connect: needpriv 0
    debug1: Connecting to FQDN [ip4] port 10000.
    debug1: Connection established.
    debug1: identity file /Users/rpg/.ssh/identity type -1
    debug1: identity file /Users/rpg/.ssh/id_rsa type -1
    debug1: identity file /Users/rpg/.ssh/id_dsa type -1
    debug1: Remote protocol version 1.99, remote software version OpenSSH_3.8.1p1
    debug1: match: OpenSSH_3.8.1p1 pat OpenSSH*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_3.8.1p1
    debug3: Trying to reverse map address ip4.
    Segmentation fault

    10.4.7 fixed this.
    But broke iCal. . .
    Actually, I never had any problems with 10.4.6, but ssh on my nat'ed Intel Macbook now segfaults when doing reverse mapping after upgrading to 10.4.7.
    OpenSSH_3.8.1p1, OpenSSL 0.9.7i 14 Oct 2005
    debug1: Reading configuration data /etc/ssh_config
    debug1: Applying options for *
    debug2: ssh_connect: needpriv 0
    debug1: Connecting to amrshampine [10.4.51.45] port 22.
    debug1: Connection established.
    debug1: identity file /Users/lindkvis/.ssh/identity type -1
    debug1: identity file /Users/lindkvis/.ssh/id_rsa type -1
    debug1: identity file /Users/lindkvis/.ssh/id_dsa type -1
    debug1: Remote protocol version 1.99, remote software version OpenSSH_3.6.1p2
    debug1: match: OpenSSH_3.6.1p2 pat OpenSSH*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_3.8.1p1
    debug3: Trying to reverse map address 10.4.51.45.
    Macbook White 13" 1.83GHz 1GB   Mac OS X (10.4.6)  

  • Can't install OSX Mountain Lion, on the disk selection screen i cant select the Macintosh HD to install OSX giving a message (This disk cannot be used to start up your computer).only have one disk to select and my partition map scheme is GUID partition

    Just bought OS X Mountain Lion; my laptop is running v10.6.8. I can't install Mountain Lion: on the disk selection screen I can't select the Macintosh HD to install onto, and it gives the message "This disk cannot be used to start up your computer". There is only one disk to select, my partition map scheme is GUID partition table, and 24.44 GB of disk space is available.

    Verify your computer can run Mountain Lion:
    Upgrading to Mountain Lion
    To upgrade to Mountain Lion you must have Snow Leopard 10.6.8 or Lion installed. Purchase and download Mountain Lion from the App Store; sign in using your Apple ID. Mountain Lion is $19.99 plus tax. The file is quite large (over 4 GB), so allow some time to download. Ethernet is preferable because it is nearly four times faster than wireless.
         OS X Mountain Lion - System Requirements
           Macs that can be upgraded to OS X Mountain Lion
             1. iMac (Mid 2007 or newer) - Model Identifier 7,1 or later
             2. MacBook (Late 2008 Aluminum, or Early 2009 or newer) - Model Identifier 5,1 or later
             3. MacBook Pro (Mid/Late 2007 or newer) - Model Identifier 3,1 or later
             4. MacBook Air (Late 2008 or newer) - Model Identifier 2,1 or later
             5. Mac mini (Early 2009 or newer) - Model Identifier 3,1 or later
             6. Mac Pro (Early 2008 or newer) - Model Identifier 3,1 or later
             7. Xserve (Early 2009) - Model Identifier 3,1 or later
    To find the model identifier open System Profiler in the Utilities folder. It's displayed in the panel on the right.
    Open Disk Utility and verify the drive is partitioned using GUID and formatted Mac OS Extended, Journaled. If it is, then do this:
    Repair the Hard Drive and Permissions
    Boot from your Snow Leopard Installer disc. After the installer loads, select your language and click on the Continue button. When the menu bar appears, select Disk Utility from the Utilities menu.
    After DU loads, select your hard drive entry (mfgr.'s ID and drive size) from the left side list. In the DU status area you will see an entry for the S.M.A.R.T. status of the hard drive. If it does not say "Verified" then the hard drive is failing or has failed. (SMART status is not reported on external FireWire or USB drives.)
    If the drive is "Verified" then select your OS X volume from the list on the left (the sub-entry below the drive entry), click on the First Aid tab, then click on the Repair Disk button. If DU reports any errors that have been fixed, re-run Repair Disk until no errors are reported. If no errors are reported, click on the Repair Permissions button. Wait until the operation completes, then quit DU and return to the installer.
    If DU reports errors it cannot fix, then you will need Disk Warrior and/or Tech Tool Pro to repair the drive. If you don't have either of them or if neither of them can fix the drive, then you will need to reformat the drive and reinstall OS X.
    Now try installing Mountain Lion.

  • Why can't a backing-map-scheme be specified in caching-schemes?

    Most scheme types can be specified in the caching-schemes section of the cache configuration XML file and then reused in other scheme definitions, but backing-map-scheme cannot. What is the motivation for excluding backing-map-scheme?
    /Magnus

    Hi Magnus,
    you can specify an "abstract" service-type scheme (e.g. a distributed-scheme) containing the backing map scheme, instead of specifying the backing map scheme itself.
    I know it is not as flexible as having the backing map scheme defined separately, but it is almost as good.
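    For example, a rough sketch (the scheme names base-distributed and my-distributed are invented for illustration): the reusable "abstract" distributed-scheme carries the backing map, and the concrete scheme picks it up via scheme-ref:
    <caching-schemes>
        <!-- "Abstract" scheme that exists only to be referenced; it carries the backing map. -->
        <distributed-scheme>
            <scheme-name>base-distributed</scheme-name>
            <backing-map-scheme>
                <local-scheme/>
            </backing-map-scheme>
        </distributed-scheme>
        <!-- Concrete scheme reusing the backing map definition above. -->
        <distributed-scheme>
            <scheme-name>my-distributed</scheme-name>
            <scheme-ref>base-distributed</scheme-ref>
            <autostart>true</autostart>
        </distributed-scheme>
    </caching-schemes>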
    Best regards,
    Robert

  • Could you explain how the read-write-backing-map-scheme is configured in...

    Could you explain how the read-write-backing-map-scheme is configured in the following example?
    <backing-map-scheme>
        <read-write-backing-map-scheme>
            <!-- The internal (in-memory) map that actually holds the cached data. -->
            <internal-cache-scheme>
                <class-scheme>
                    <class-name>com.tangosol.util.ObservableHashMap</class-name>
                </class-scheme>
            </internal-cache-scheme>
            <!-- The CacheStore implementation used to load from and store to the backing store. -->
            <cachestore-scheme>
                <class-scheme>
                    <class-name>coherence.DBCacheStore</class-name>
                    <init-params>
                        <init-param>
                            <!-- Constructor argument passed to DBCacheStore (here the string "CATALOG"). -->
                            <param-type>java.lang.String</param-type>
                            <param-value>CATALOG</param-value>
                        </init-param>
                    </init-params>
                </class-scheme>
            </cachestore-scheme>
            <!-- false: writes are propagated to the CacheStore as well as the cache. -->
            <read-only>false</read-only>
            <!-- 0: synchronous write-through; a positive value would enable write-behind. -->
            <write-delay-seconds>0</write-delay-seconds>
        </read-write-backing-map-scheme>
    </backing-map-scheme>

    Thank you very much for the reply.
    In the following example, the cachestore element is not specified in the <read-write-backing-map-scheme> section; instead, a class-name, ControllerBackingMap, is designated. What is the result?
    If ControllerBackingMap is a persistence entity, is the result the same as with a cachestore-scheme?
    <distributed-scheme>
        <scheme-name>with-rw-bm</scheme-name>
        <service-name>unlimited-partitioned</service-name>
        <backing-map-scheme>
            <read-write-backing-map-scheme>
                <scheme-ref>base-rw-bm</scheme-ref>
            </read-write-backing-map-scheme>
        </backing-map-scheme>
        <autostart>true</autostart>
    </distributed-scheme>
    <read-write-backing-map-scheme>
        <scheme-name>base-rw-bm</scheme-name>
        <class-name>ControllerBackingMap</class-name>
        <internal-cache-scheme>
            <local-scheme/>
        </internal-cache-scheme>
    </read-write-backing-map-scheme>
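    For reference, a rough sketch of how base-rw-bm would look with an explicit cachestore-scheme added (SomeCacheStore is a placeholder, not from this thread): the class-name element names a custom ReadWriteBackingMap subclass to instantiate, while persistence still comes from the cachestore-scheme:
    <read-write-backing-map-scheme>
        <scheme-name>base-rw-bm</scheme-name>
        <!-- Custom backing map implementation; must extend com.tangosol.net.cache.ReadWriteBackingMap. -->
        <class-name>ControllerBackingMap</class-name>
        <internal-cache-scheme>
            <local-scheme/>
        </internal-cache-scheme>
        <!-- Placeholder CacheStore; without a cachestore-scheme, no CacheStore is used at all. -->
        <cachestore-scheme>
            <class-scheme>
                <class-name>SomeCacheStore</class-name>
            </class-scheme>
        </cachestore-scheme>
    </read-write-backing-map-scheme>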

  • Behaviour of read-write-backing-map-scheme in combination with local-scheme

    Hi,
    I have the following questions related to the distributed cache defined below:
    1. If I start putting an unlimited number of entries into this cache, will I run out of memory, or does the local-scheme have some default size limit?
    2. What is the default eviction policy for the local-scheme?
    <distributed-scheme>
        <scheme-name>A</scheme-name>
        <service-name>simple_service</service-name>
        <backing-map-scheme>
            <read-write-backing-map-scheme>
                <internal-cache-scheme>
                    <local-scheme/>
                </internal-cache-scheme>
                <cachestore-scheme>
                    <class-scheme>
                        <class-name>SomeCacheStore</class-name>
                    </class-scheme>
                </cachestore-scheme>
            </read-write-backing-map-scheme>
        </backing-map-scheme>
    </distributed-scheme>
    Best regards,
    Jarek

    Hi,
    The default value for expiry-delay is zero, which implies no expiry.
    http://wiki.tangosol.com/display/COH34UG/local-scheme
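    If you want an explicit bound instead of relying on the defaults (without high-units a local-scheme is unbounded), limits can be set directly on the local-scheme. A rough sketch, with illustrative values that are not from this thread:
    <local-scheme>
        <!-- Illustrative cap: start evicting once 10000 units (entries, by default) are held. -->
        <high-units>10000</high-units>
        <!-- HYBRID is the default policy, a blend of LRU and LFU; LRU or LFU can be named instead. -->
        <eviction-policy>HYBRID</eviction-policy>
        <!-- Optional time-based expiry; 0 (the default) means entries never expire. -->
        <expiry-delay>1h</expiry-delay>
    </local-scheme>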
    Thanks,
    Tom
