Persistent Schema Conversion

I am trying to increase the version number of the persistent data from 1.0 to 2.0 using a null conversion as described in the documentation. My problem is that when a document with the old version is opened, it is marked as converted, and when saving it InDesign prompts for a file name (as a Save As). Is it not possible to do schema conversions without forcing the user to overwrite documents manually?
Changes done to the .fr file:
resource PluginVersion (kSDKDefPluginVersionResourceID)
{
    kTargetVersion,
    kKBWFPluginID,
    kKBWFMajorVersion, kKBWFMinorVersion,
    kSDKDefHostMajorVersionNumber, kSDKDefHostMinorVersionNumber,
    2, 0,
    { kInDesignProduct, kInCopyProduct },
    { kWildFS },
    SDK_DEF_MAKE_VERSIONSTRING(kKBWFMajorVersion, kKBWFMinorVersion, kKBWFBugfixVersion, kKBWFBuildNumber)
};

resource ClassDescriptionTable(kSDKDefClassDescriptionTableResourceID)
{{{
    Class
    {
        kKBWFConversionProviderBoss,
        kInvalidClass,
        {
            IID_ICONVERSIONPROVIDER, kSchemaBasedConversionImpl,
            IID_IK2SERVICEPROVIDER, kConversionServiceImpl,
        }
    },
}}};

resource SchemaList(kSDKDefSchemaListResourceID)
{
    // <none>
};

resource SchemaFormatNumber(kSDKDefSchemaFormatNumberResourceID)
{
    {1, 0},
    {2, 0},
};

Why did you add the SchemaList?
Page 9 of ww-data-conversion.pdf (Null Conversions) only specifies that there must not be any DirectiveList.
I do not see SchemaList mentioned there as either good or bad, IMO an omission.
No idea whether that solves your Save-As problem, as I'm currently working on application prefs.
Dirk

Similar Messages

  • Upgrade SQL 2000 to SQL 2005 --- schema conversion error

    Hi, I executed an upgrade from MSSQL 2000 to MSSQL 2005 and everything completed successfully. Then I downloaded the "SAP Tools for MSSQL Server"
    to execute the schema conversion of the database. But I get an error when a step tries to create the stored procedures in the DB. Specifically, I get the error in the sap_blocked_procccess stored procedure with the following error:
    ERROR 2006-12-07 10:19:57
    FCO-00011  The step ExeProcs with step key |SAPMSSTOOLS|ind|ind|ind|ind|0|0|MssSysCopy|ind|ind|ind|ind|4|0|MssProcs|ind|ind|ind|ind|4|0|ExeProcs was executed with status ERROR .
    ERROR 2006-12-07 10:19:57
    MDB-05053  Errors when executing sql command: <p nr="0"/> If this message is displayed as a warning - it can be ignored. If this is an error - call your SAP support.
    INFO 2006-12-07 10:30:00
    An error occured and the user decided to retry the current step: "|SAPMSSTOOLS|ind|ind|ind|ind|0|0|MssSysCopy|ind|ind|ind|ind|4|0|MssProcs|ind|ind|ind|ind|4|0|ExeProcs".
    ERROR 2006-12-07 10:30:29
    FCO-00011  The step ExeProcs with step key |SAPMSSTOOLS|ind|ind|ind|ind|0|0|MssSysCopy|ind|ind|ind|ind|4|0|MssProcs|ind|ind|ind|ind|4|0|ExeProcs was executed with status ERROR .
    I didn't find any note or anything else about this.
    Do you have any idea about this problem?
    Best Regards,
    Thanasis

    Hello,
    Please ensure that the database compatibility level for all databases is 90 by running:
    exec sp_dbcmptlevel 'master',90
    exec sp_dbcmptlevel 'model',90
    exec sp_dbcmptlevel 'msdb',90
    exec sp_dbcmptlevel 'tempdb',90
    exec sp_dbcmptlevel 'EA5',90
    Then rerun the STM tools.
    Regards
      Clas

  • Export Scott Schema Conversion Error  [SOLVED]

    I'm trying to export the default user SCOTT to a DMP file.
    But an error occurred. Some kind of character conversion error.
    This is the log generated:
    Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - 64bit Production
    JServer Release 9.2.0.1.0 - Production
    Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    server uses AL32UTF8 character set (possible charset conversion)
    EXP-00056: ORACLE error 942 encountered
    ORA-00942: table or view does not exist
    EXP-00000: Export terminated unsuccessfully
    What is the solution for this case?
    Thanks
    Message was edited by:
    user465837+++eric

    > How if I don't have a 9.2 DB?
    You have attempted to export from one above...?
    Can you skip the TOAD util and just log on to the server and run exp from there?
    Another option would involve downloading the 9.2 Database Client (9i Downloads) and installing it on a nearby PC, or using one with 9.2 already installed, and exporting from there.
    > Any tricks to export a user from a different version of DB instance? (Is it a must to have the same version of DB to exp-imp?)
    No tricks. The rule is: to export from lower to higher, you must use the lower version's exp tool. (Then use the target version's imp for importing. The key is to get the exp dump of the correct version and format.)
    > NB: while
    > C:\>dir c:\exp.exe /s
    > Volume in drive C is Disk_C
    > Volume Serial Number is 347F-B884
    > only returns that
    It's searching, give it (plenty of) time.
    Message was edited by:
    orafad

  • Schema conversion not enabled in SSMA migration?

    I connected Oracle and SQL Server in the SSMA tool. I have a table in Oracle called customer and I have a DB in SQL Server called users. I
    opened the Oracle table and the SQL DB in the SSMA tool. But I am not able to convert my Oracle table into a schema. On right-click, Convert Schema is disabled. Why? What is the solution?

    Does this help?
    http://technet.microsoft.com/en-us/library/hh313198.aspx
    Convert Schema
    Converts the selected Oracle objects to SQL Server objects.
    This command is disabled unless objects are selected in Oracle Metadata Explorer.
    Satheesh
    My Blog |
    How to ask questions in technical forum

  • Persistent Data Question

    Hi all,
    As I was looking into implementing persistent data conversion, I tried something for giggles (see below), and I found out that I didn't actually have to do anything to perform the conversion. This baffles me, so if anyone could explain to me what's going on, I'd really appreciate it:
    I have a persistent interface on kWorkspaceBoss:
    AddIn
    {
        kWorkspaceBoss,
        kInvalidClass,
        {
            IID_SOME_PERSIST_INTERF, kSomePersistInterfImpl,
        }
    },
    In the older version of my plug-in, I have:
    void SomePersistInterfImpl::ReadWrite(IPMStream* stream, ImplementationID prop) {
    stream->XferBool(boolVal1);
    stream->XferBool(boolVal2);
    stream->XferBool(boolVal3);
    stream->XferBool(boolVal4);
    // and many more...
    In the newer version of my plug-in, I have:
    void SomePersistInterfImpl::ReadWrite(IPMStream* stream, ImplementationID prop) {
    stream->XferBool(boolVal1);
    stream->XferBool(boolVal2);
    stream->XferInt32(int32Val);
    pmStringVal.ReadWrite(stream);
    // and many more...
    Using the older plug-in, I set boolVal1, boolVal2, boolVal3, boolVal4 to, say, kFalse, kTrue, kFalse, kFalse, and then closed InDesign. Then I replaced the older plug-in with the newer one. Now using the newer plug-in, I set boolVal1, boolVal2, int32Val, pmStringVal to, say, kTrue, kFalse, 50, "random", and then closed InDesign.
    It turned out that when switching back to the older plug-in, the four bools' values are still preserved. The same holds for all the newer plug-in's data when switching back to the newer one.
    1. Does this mean I don't even need to worry about data conversion?
    2. I realize that I should probably also persist the version information (one of the approaches suggested by the programming guide) for future changes. The problem is that I didn't persist this information in my older plug-in; can I still stick that value in for the newer one?
    Thanks a bunch!
    Dennis
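    For question 2, here is a minimal sketch of the "persist a version tag" approach the programming guide suggests, reusing the member names from above; kCurrentFormatNumber is a hypothetical constant and the sketch is only an illustration, not code from this thread. Note that data written by a plug-in that never stored the tag cannot be recognised this way, so that case still needs a real conversion.
    const int32 kCurrentFormatNumber = 2;   // hypothetical constant, not from the thread

    void SomePersistInterfImpl::ReadWrite(IPMStream* stream, ImplementationID prop)
    {
        // Transfer a format number first: written on save, read back on open.
        int32 formatNumber = kCurrentFormatNumber;
        stream->XferInt32(formatNumber);

        stream->XferBool(boolVal1);
        stream->XferBool(boolVal2);

        if (formatNumber >= 2)
        {
            // Fields that exist from format 2 onwards.
            stream->XferInt32(int32Val);
            pmStringVal.ReadWrite(stream);
        }
        else
        {
            // Format 1 on disk: consume the old fields so the stream stays aligned.
            bool16 obsoleteBool3 = kFalse, obsoleteBool4 = kFalse;
            stream->XferBool(obsoleteBool3);
            stream->XferBool(obsoleteBool4);
        }
    }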


  • After system copy we have old schema in sql 2005

    Hi,
    we have done the system copy using the backup/restore method. After the restore we could see the old database owner (schema) [prd and SAPPRDDB] in the target QAS system.
    What is the procedure to change the schema from PRD to QAS and SAPPRDDB to SAPQASDB in the target system?
    Thanks  & Regards,
    Kasi

    Hi Kasi,
    You can use the STM tool provided by SAP for schema conversion after a DB copy. The tool is available on the Service Marketplace. It will automatically do all the required post-processing for the MSSQL DB after a DB restore, so that you can proceed with the SAP-related post-steps.
    However, there are additional steps involved if you have a combined ABAP+Java stack. For the Java stack you need to first export the Java schema using SAPinst and then reimport it after the DB restore.
    Hope this helps.
    Regards,
    Deoraj Alok.

  • Change the Schema in Sql 2005

    Hello Experts,
    We have refreshed our QAS system from the PRD (ABAP + Java) system. The schema on the system is MCOD (i.e. SID). Although the database has come up, in SAP we are not able to access the schema because it currently belongs to the PRD system.
    I have looked at OSS note 551915, but that script is valid for SQL 2000. We have SQL 2005.
    Also, I have looked at a couple of scripts, but they have not worked in our scenario to convert the schema.
    Can anyone share the ALTER SCHEMA script to convert all the tables for both SAPPRDDB and PRD?
    Please Help!
    Thanks,
    Antarpreet

    Hello Antarpreet,
    Please correct me if I am wrong:
    You have refreshed QAS from PRD, and now the schema owner of QAS is the SID of PRD, which you want to change to the SID of QAS.
    In that case you will need to perform a schema conversion using the SQL tools mentioned in the note I have given you. I assure you your issues will get resolved; we have been doing this a lot in our setup with no issues, and we also have schema systems.
    Note 551915 also says this: perform the schema upgrade using the SQL tools.
    Also, what makes you say that the scripts mentioned in the note are for SQL Server 2000? That is nowhere mentioned.
    But I strongly suggest you go for the SQL tools; even if they do no good to your system (which is the remotest possibility), they will not do any harm....
    Rohit

  • Convert Char (##) to P

    Hi experts
    I have to convert a char variable to a p variable.
    The char variable contains "##" (HEX = 001C).
    Normally it should be "1" and not "##", because when I put a "1" into the p variable, I get the same hex value as in the char.
    When I say p-variable = char-variable, I get a dump CONVT_NO_NUMBER.
    So, how can I convert the char variable to the p variable?
    Thanks in advance.
    Christian

    Hi Christian,
    The problem here lies in interpretation.
    The same sequence of bytes (or bits) doesn't always mean the same thing. In your case the hex value 001C is one thing for a character variable and something else for a p variable. It is the system that interprets this byte sequence.
    With a p variable the system parses each byte as a sequence of two digits (BCD - binary coded decimal). This means if we have such a declaration
    data p_var(2) type p value 12.
    its hex representation is 012C. 1 is stored in memory in 4 bits (half a byte), and 2 is stored in 4 bits too. C stands for the sign of this variable (half a byte), and 0 is just added in front (half a byte). In total it is 2 bytes.
    If p_var was -12 then its hex would be 012D.
    In contrast, your c_var with hex 001C is the UTF-16 representation of the special character 1C, which in the [ASCII table|http://ascii-table.com] is not a printable one. That's why it is displayed to you as ##.
    What is more, it doesn't really make sense to convert p_var to c_var or vice versa, as the system treats these two variables in totally different ways. It only makes sense to take the integer representation of c_var. This is similar to saying: "please find this character in the current code page or in Unicode and give me its position in this table (its integer value)".
    So you can perform such a conversion:
    data i_var type i.
    data c_var type c value 'A'.
    i_var = c_var.
    "i_var = 65
    Only then could you map this i_var to your p_var, but it really doesn't make sense, as the system would write it as 065C - p_var = 65.
    Hope you get the difference and will not persist with converting between those two incompatible types.
    Regards
    Marcin
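    As a purely illustrative aside, sketched in C++ rather than ABAP and not taken from this thread: reading the same value 0x001C as a UTF-16 code unit yields a non-printable control character with integer value 28, not the digit 1, which is exactly the interpretation point made above.
    #include <iostream>

    int main()
    {
        char16_t c = u'\x1C';                               // what the CHAR variable holds (hex 001C)
        std::cout << static_cast<int>(c) << "\n";           // prints 28, a control code, not 1

        char16_t digit = u'1';                              // the printable digit has code 0x31
        std::cout << static_cast<int>(digit) - '0' << "\n"; // prints 1 only after an explicit mapping
        return 0;
    }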

  • Data migration from ASE 11.9.2 to 16.0

    Hi all,
    I'm at a customer site running Sybase ASE 11.9.2 on Windows 2000. I wish to migrate their set of databases to ASE 15.7 or 16.0 hosted on Server 2012 (for obvious reasons). I've downloaded and installed the ASE 16.0 developer edition on a new Windows 2012 virtual server and am moving to the database migration stage....
    From what I have read, a single-step migration from 11.9.2 to 15.7 is possible, thus I assume a migration to 16.0 might also be possible.
    What I have done:
    i) Dumped the existing database and transaction logs from ASE 11.9.2 to some files on disk.
    ii) Created a datastore for data and logs and created a database 'for load' on the ASE 16.0 environment.
    iii) Loaded the 11.9.2 database and transaction logs into the blank database on the ASE 16.0 environment (successful).
    iv) Ran 'online database DBNAME' (failing).
    From what I have read, schema conversion etc. occurs during the online database action, not the initial load. This is the error I'm getting:
    1> online database Cust201213
    2> go
    Started estimating recovery log boundaries for database 'Cust201213'.
    Database 'Cust201213', checkpoint=(677861, 3), first=(677861, 3), last=(677861, 3).
    Completed estimating recovery log boundaries for database 'Cust201213'.
    Started ANALYSIS pass for database 'Cust201213'.
    Completed ANALYSIS pass for database 'Cust201213'.
    Recovery of database 'Cust201213' will undo incomplete nested top actions.
    Database 'Cust201213' appears to be at an older version '12.5' than the present installation at version '16.0'; ASE will assess it, and upgrade it as required.
    Database 'Cust201213': beginning upgrade step [ID 2]: validate basic system type data
    Database 'Cust201213': beginning upgrade step [ID 3]: alter table (table sysindexes)
    (182 rows affected)
    Msg 644, Level 21, State 5:
    Server 'SYBASE', Line 1:
    Index row entry for data row id (232233, 0) is missing from index page 232225 of index id 2 of table 'sysindexes' in database 'Cust201213'. Xactid is (677873,8). Drop and re-create the i
    Msg 3469, Level 20, State 1:
    Server 'SYBASE', Line 1:
    Database 'Cust201213': upgrade failed to create index 2 on table 'csysindexes'. Please refer to previous error messages to determine the problem. Fix the problem, then try again.
    Msg 3461, Level 20, State 1:
    Server 'SYBASE', Line 1:
    Database 'Cust201213': upgrade could not install required upgrade item '3'. Please refer to previous error messages to determine the problem. Fix the problem, then try again.
    Msg 3452, Level 20, State 1:
    Server 'SYBASE', Line 1:
    Database 'Cust201213': upgrade item 1134 depends on item 3, which could not be installed. Please refer to previous messages for the cause of the failure, correct the problem and try agai
    Msg 3451, Level 20, State 1:
    Server 'SYBASE', Line 1:
    Database 'Cust201213': upgrade has failed for this database. Please refer to previous messages for the cause of the failure, correct the problem and try again.
    Msg 3454, Level 20, State 1:
    Server 'SYBASE', Line 1:
    Database 'Cust201213': ASE could not completely upgrade this database; upgrade item 1134 could not be installed.
    ASE could not bring database 'Cust201213' online.
    As the database is not online on ASE 16.0, I've been unable to conduct much troubleshooting on that version, so I have cloned a copy of the database on the 11.9.2 instance instead to work on the 644 error.
    From the error I assume that the sysindexes system table is corrupt. I've read various forum entries that talk about running a dbcc reindex etc.; however, the Sybase knowledge base states that such a command cannot be run on the sysindexes table.
    Can anyone give me some advice on rebuilding a corrupt sysindexes table, please? Go easy on me; I've only been exposed to Sybase ASE for 2 days, so I'm learning.
    Regards,
    Mike Squirrell

    Hi Brad,
    Banging my head against a brick wall here. Constructing an ASE 12.5.4 environment on a Server 2012 VM (likely not supported) is proving challenging. It half works in XP compatibility mode, but Java is not happy.... To get a 12.5.4 staging environment working properly I would need to build a Windows Server 2003 VM and a Windows XP client, get the ODBC drivers working properly etc. and test it. Software of this age is extremely hard to track down and license (particularly without an up-to-date support agreement with SAP)...
    I am working on an option that is looking positive so far, though it's the last thing I wanted to do.... namely migrating the data from Sybase 11.9.2 to SQL Server 2012. The SQL Server migration tool does support this early version of Sybase, and fortunately the 9 databases I need to convert are not too big or too complex, so I'm cautiously optimistic. Analysis and dry runs to date suggest this may be the least painful solution. Keen to rip the data out of this ancient environment before it dies.
    Thanks for your help. This just demonstrates the folly of running a production system in a 16-year-old environment; all good fun.

  • Implementation Difference b/w CMP1.1 and CMP2.0

    Hello Java gurus,
    Can anybody help me understand the difference between the CMP 1.1 and CMP 2.0 implementations? The EJB 2.0 specification says that in EJB 2.0 a container-managed entity bean is defined to be abstract and its persistent fields are not defined directly in the bean class. Instead, an abstract persistence schema has been introduced that lets the bean provider declare the persistent fields and bean relationships indirectly, while in the EJB 1.1 spec the persistent fields were defined in the bean class directly. I am unable to understand why in EJB 2.0 the persistent fields need to be accessed through setters/getters and not directly. What can I achieve by doing this? Is it what makes relationships between EJBs possible? If yes, how does this help to maintain a relationship between EJBs such as one-to-one, one-to-many, and many-to-many? After all, the relationship between beans (whatever relationship it is) is defined in the deployment descriptor file. So if I am using the EJB 1.1 approach for developing CMPs, I can still add those XML tags which establish the relationship between EJBs. Can't I?
    I just read an article on the JavaWorld site on "New in EJB 2.0" but am really unable to understand the problem I have mentioned above. I am confused. Please explain.
    Thanks in advance
    Jam
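    One way to see what the abstract accessors buy the container is a rough analogue outside EJB. The sketch below is C++ and purely illustrative (the class names are hypothetical and nothing like this appears in the spec): because the bean provider only declares abstract get/set methods, the container can generate the concrete subclass and put its own bookkeeping (dirty tracking, relationship maintenance, lazy loading) behind every field access, which it cannot do when the fields are plain members accessed directly as in CMP 1.1.
    #include <iostream>
    #include <string>

    // Illustrative analogue only, not EJB: the "bean provider" declares abstract
    // accessors, and the "container" supplies the concrete subclass, so every
    // field access goes through code the container controls.
    class CustomerBean
    {
    public:
        virtual ~CustomerBean() = default;
        virtual std::string getName() const = 0;            // abstract persistent field
        virtual void setName(const std::string& name) = 0;
    };

    // What a container-generated subclass could add behind the accessors.
    class ContainerManagedCustomer : public CustomerBean
    {
    public:
        std::string getName() const override { return name_; }
        void setName(const std::string& name) override
        {
            name_ = name;
            dirty_ = true;                                   // bookkeeping the bean class never sees
        }
        bool needsFlush() const { return dirty_; }
    private:
        std::string name_;
        bool dirty_ = false;
    };

    int main()
    {
        ContainerManagedCustomer customer;
        customer.setName("Jam");                             // interception is transparent to the caller
        std::cout << customer.getName() << (customer.needsFlush() ? " (dirty)" : "") << "\n";
        return 0;
    }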

    Is there anyone who can answer my question? Please help.
    Thanks and regards
    Jam

  • Importing xsd problems

    I'm using two xsds.
    The first one is like a dictionary, the second one is form-specific.
    In the form-specific xsd I use a restriction from the dictionary.
    For example, dictionary.xsd is like this:
    <?xml version="1.0" encoding="UTF-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns="http://www.example.org/Base"
    targetNamespace="http://www.example.org/Base"
    elementFormDefault="unqualified">
    <xs:element name="A" type="AType" />
    <xs:complexType name="AType">
    <xs:sequence>
    <xs:element name="title" type="xs:string" />
    <xs:element name="name" type="xs:string" minOccurs="0" />
    </xs:sequence>
    </xs:complexType>
    </xs:schema>
    form.xsd is
    <?xml version="1.0" encoding="UTF-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns="http://www.example.org/Restriction"
    targetNamespace="http://www.example.org/Restriction"
    xmlns:base="http://www.example.org/Base"
    elementFormDefault="unqualified">
    <xs:import namespace="http://www.example.org/Base"
    schemaLocation="dictionary.xsd" />
    <xs:element name="ARestricted" type="ARestrictedType" />
    <xs:complexType name="ARestrictedType">
    <xs:complexContent>
    <xs:restriction base="base:AType">
    <xs:sequence>
    <xs:element name="name" type="xs:string" />
    </xs:sequence>
    </xs:restriction>
    </xs:complexContent>
    </xs:complexType>
    </xs:schema>
    When I import this into LiveCycle I expect to see only the restricted elements, but I see all the elements from the dictionary.
    In this case I want to see only "name" in the data view, but I see both "title" and "name".
    In my actual forms the dictionary xsd is so big that it makes LiveCycle Designer very slow.
    As a result I get this error message; if I remove some of the elements then LiveCycle can import it.
    Can somebody help me, please?

    EMF doesn't provide any tools for doing DTD to XML Schema conversion.
    The first error sounds like there are things with names that aren't well formed, and the second sounds like references either to a namespace that isn't imported or that doesn't define the referenced name.
    as13 wrote:
    > Hello,
    > I have a problem importing an xsd schema.
    > I converted a dtd to an xsd schema first with Stylus Studio. After that I
    > tried to create a new EMF project by importing the xsd schema.
    >
    > I always get problems like these:
    >
    > "Problems were detected while validating and converting the XML Schemas
    > Error: XSD: The value 'xxx' of attribute 'name' must conform to
    > pattern '(\i\c*) & ([\i-[:]][\c-[:]]*)' as constrained by
    > 'http://www.w3.org/2001/XMLSchema#NCName' : URI
    > file:/C:/.../EnterpriseDTD.xsd Line 21 Column 4
    >
    > or
    >
    > Error: XSD: Element reference 'UML#xxx' is unresolved : URI
    > file:/C:/.../EnterpriseDTD.xsd Line 569 Column 5
    >
    > Because the xsd schema is about 700 kB, these problems appear around
    > 3000 times!!!
    >
    > Does anyone have an idea? I think it's about an invalid xsd schema,
    > but I don't know how I can generate a valid xsd schema out of a DTD.
    > I tried it with Stylus Studio and XMLSpy. Couldn't get a "valid" schema...
    >
    > greets

  • Low cost XML editor

    Did any of you guys work with a decent low-cost XML editor?
    We have a scenario where we receive the data in big XML files and do somewhat complex mapping; it would be really nice to have an editor to create test files, mostly by deleting (lots of) nodes from the production files in order to debug issues.
    Is there any good (free?) tool to do this?
    Thanks a lot,
    Viktor Varga

    Hi Viktor,
    have a look at Exchanger XML Lite (http://www.freexmleditor.com/index.html).
    It is free for personal use.
    "It features XML Schema, RelaxNG and DTD based editing, tag prompting and validation, XPath and regular expression searches, schema conversion, XSLT, XQUERY and XSLFO transformations, comprehensive project management, an SVG viewer and conversion, easy SOAP invocations, and more...." (features: http://www.freexmleditor.com/fxeeditor/features.html)
    Kind Regards,
    Sergio

  • Cache configuration overrides

    So I have a cache config with several caches and schemes defined; below is a snippet for one cache.
    What I'd like to do is override the high/low-units settings in an environment-specific way, e.g. one prod installation might get a different value from a second installation.
    <cache-mapping>
                <cache-name>user</cache-name>
                <scheme-name>distributed-persistent</scheme-name>
                <init-params>
                    <init-param>
                        <param-name>cache-store-class-name</param-name>
                        <param-value>spring-bean:userStore</param-value>
                    </init-param>
                    <init-param>
                        <param-name>expiry-time</param-name>
                        <param-value>4h</param-value>
                    </init-param>
                    <init-param>
                        <param-name>high-units-param</param-name>
                        <param-value>1000000</param-value>
                    </init-param>
                    <init-param>
                        <param-name>low-units-param</param-name>
                        <param-value>750000</param-value>
                    </init-param>
                </init-params>
    </cache-mapping>
    I believe I can use something like:
    <param-value system-property="user.cache.high-units">100000</param-value>
    But this may get out of hand with many caches, and is also unsavory in a multi-WAR (same JVM) installation.
    Ideally, I'd like to do something like:
    <param-value>${user.cache.high-units:1000000}</param-value>
    And then have a properties file that I specify at runtime "in some way", with all of the overrides in that file.
    Is there some way to do this?
    Thanks.

    Hi,
    We have a lot of system properties in the system I currently work on, and these are all loaded from environment-specific properties files when the nodes in the cluster start up.
    The most basic way to do this would be something like setting a single system property when the JVM starts that specifies the environment, e.g. -Dcoherence.env=uat
    Then the very first thing you do in the main method of the class that starts the application is load the relevant properties file...
    import java.io.InputStream;
    import java.util.Properties;

    public class Main
    {
        public static void main(String[] args) throws Exception
        {
            String env = System.getProperty("coherence.env", "dev");
            String propertiesFile = env + ".properties";
            InputStream inputStream = Main.class.getResourceAsStream(propertiesFile);
            Properties properties = new Properties();
            properties.load(inputStream);
            // make the environment-specific values visible as system properties
            System.getProperties().putAll(properties);
            // now do the rest of the application start-up
        }
    }
    The above code is very simple but should give you a start.
    Obviously you could have a more complex naming convention for the properties file.
    In our case we also do not overwrite System properties that are already set. This allows us to have a property set in the environment specific properties file but then override it at run time by specifying the same property on the command line with a different value. To do that you would change the above code like this...
    import java.io.InputStream;
    import java.util.Properties;

    public class Main
    {
        public static void main(String[] args) throws Exception
        {
            String env = System.getProperty("coherence.env", "dev");
            String propertiesFile = env + ".properties";
            InputStream inputStream = Main.class.getResourceAsStream(propertiesFile);
            Properties properties = new Properties();
            properties.load(inputStream);
            Properties systemProperties = System.getProperties();
            // only copy values that were not already supplied on the command line
            for (String name : properties.stringPropertyNames())
            {
                if (!systemProperties.containsKey(name))
                {
                    systemProperties.setProperty(name, properties.getProperty(name));
                }
            }
            // now do the rest of the application start-up
        }
    }
    Our implementation is slightly more complex again as we also allow properties files to import other properties files. This allows us to only have to specify properties common to all environments in a single place.
    Hopefully that gives you some ideas to start with.
    If you currently use the DefaultCacheServer class to start your cluster nodes it is easy enough to write a new main class, do the properties setup in a new class then just call DefaultCacheServer main.
    JK

  • Partition X does not exist at PartitionSplittingBackingMap

    Hi Guys,
    I recently upgraded to Coherence 3.5 and I now seem to regularly get errors similar to the one below when starting the cluster, followed by node death.
    Any ideas what the cause might be?
    2009-10-22 15:12:17,331 ERROR lccohd1-2 1.7.596 Log4j [Logger@9236976 3.5.1/461p2] - 46.747 <Error> (thread=DistributedCache, member=2):
    java.lang.IllegalStateException: Partition 45 does not exist at PartitionSplittingBackingMap{Name=tradeoverview$Backup,Partitions=[63,128,165,166,167,168,169,170,192,193,194,195,196,197,198,199,200,201,202,203,]}
    at com.tangosol.net.partition.PartitionSplittingBackingMap.reportMissingPartition(PartitionSplittingBackingMap.java:566)
    at com.tangosol.net.partition.PartitionSplittingBackingMap.putAll(PartitionSplittingBackingMap.java:161)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onPutRequest(DistributedCache.CDB:132)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$PutRequest.run(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheKeyRequest.ExtendedKeyRequest.onReceived(ExtendedKeyRequest.CDB:8)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:619)
    Thanks, Paul

    Hi Paul,
    pmackin wrote:
    > 1) How many members are running, including the one that won't start?
    This is happening randomly in integration, which has 2 machines each with 6 members (4 storage-enabled, 2 disabled).
    pmackin wrote:
    > 2) Are you running the same version of Coherence for all members? If not, what versions are running?
    All members are the same version - 3.5.1/461p2
    Thanks, Paul
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "dtd/cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>cache-control</cache-name>
                   <scheme-name>distributed-identifiable-scheme</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>event-registration-cache</cache-name>
                   <scheme-name>replicated-identifiable-scheme</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>reference-data-*</cache-name>
                   <scheme-name>replicated-identifiable-scheme</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>timeseries-*</cache-name>
                <scheme-name>distributed-timeseries-scheme</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>distributed-timeseries-*</cache-name>
                   <scheme-name>distributed-timeseries-scheme</scheme-name>
              </cache-mapping>
              <cache-mapping>
                <cache-name>replicated-identifiable-*</cache-name>
                <scheme-name>replicated-identifiable-scheme</scheme-name>
            </cache-mapping>
             <cache-mapping>
                <cache-name>distributed-identifiable-*</cache-name>
                <scheme-name>distributed-identifiable-scheme</scheme-name>
            </cache-mapping>      
            <cache-mapping>
                <cache-name>distributed-token-*</cache-name>
                <scheme-name>distributed-token-scheme</scheme-name>
            </cache-mapping>
            <cache-mapping>
                <cache-name>token-*</cache-name>
                <scheme-name>distributed-token-scheme</scheme-name>
            </cache-mapping>
            <cache-mapping>
                   <cache-name>single-start-services</cache-name>
                <scheme-name>replicated-identifiable-scheme</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>event-registration-cache</cache-name>
                   <scheme-name>replicated-identifiable-scheme</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>order</cache-name>
                   <scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
                   <init-params>
                      <init-param>
                            <param-name>cachestore-name</param-name>
                             <param-value>OrderCacheStore</param-value>
                      </init-param>
                    </init-params>
              </cache-mapping>          
              <cache-mapping>
                   <cache-name>execution</cache-name>
                   <scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
                   <init-params>
                      <init-param>
                            <param-name>cachestore-name</param-name>
                             <param-value>ExecutionCacheStore</param-value>
                      </init-param>
                    </init-params>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>tradeoverview</cache-name>
                   <scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
                   <init-params>
                      <init-param>
                            <param-name>cachestore-name</param-name>
                             <param-value>TradeOverviewCacheStore</param-value>
                      </init-param>
                    </init-params>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>tradeoverview-latest</cache-name>
                   <scheme-name>distributed-identifiable-scheme</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>calculators</cache-name>
                   <scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
                   <init-params>
                      <init-param>
                            <param-name>cachestore-name</param-name>
                             <param-value>CompositeCalculatorCacheStore</param-value>
                      </init-param>
                    </init-params>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>instrumentstatistics</cache-name>
                   <scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
                   <init-params>
                      <init-param>
                            <param-name>cachestore-name</param-name>
                             <param-value>InstrumentStatisticsCacheStore</param-value>
                      </init-param>
                    </init-params>
              </cache-mapping>          
              <cache-mapping>
                   <cache-name>eclipsesequence</cache-name>
                   <scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
                   <init-params>
                      <init-param>
                            <param-name>cachestore-name</param-name>
                             <param-value>EclipseSequenceCacheStore</param-value>
                      </init-param>
                    </init-params>
              </cache-mapping>          
              <cache-mapping>
                   <cache-name>eodrecord</cache-name>
                   <scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
                   <init-params>
                      <init-param>
                            <param-name>cachestore-name</param-name>
                             <param-value>EodRecordCacheStore</param-value>
                      </init-param>
                    </init-params>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>auditrecord</cache-name>
                   <scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
                   <init-params>
                      <init-param>
                            <param-name>cachestore-name</param-name>
                             <param-value>AuditRecordCacheStore</param-value>
                      </init-param>
                    </init-params>
              </cache-mapping>
              <cache-mapping>          
                   <cache-name>loggedinusers</cache-name>
                   <scheme-name>distributed-identifiable-evict-scheme</scheme-name>
                   <init-params>
                        <init-param>
                             <param-name>cache-scheme-name</param-name>
                             <param-value>eventsource-local-scheme</param-value>
                        </init-param>
                        <init-param>
                             <param-name>flush-delay</param-name>
                             <param-value>10s</param-value>
                        </init-param>
                        <init-param>
                             <param-name>expiry-delay</param-name>
                             <param-value>10s</param-value>
                        </init-param>
                        <init-param>
                             <param-name>high-units</param-name>
                             <param-value>10000</param-value>
                        </init-param>
                   </init-params>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>datatransfer</cache-name>
                   <scheme-name>distributed-identifiable-evict-scheme</scheme-name>
                    <init-params>
                        <init-param>
                             <param-name>flush-delay</param-name>
                             <param-value>10s</param-value>
                        </init-param>
                        <init-param>
                             <param-name>expiry-delay</param-name>
                             <param-value>10s</param-value>
                        </init-param>
                        <init-param>
                             <param-name>high-units</param-name>
                             <param-value>10000</param-value>
                        </init-param>
                   </init-params>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>timeseries-log</cache-name>
                <scheme-name>distributed-timeseries-log-scheme</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>alerts</cache-name>
                   <scheme-name>distributed-identifiable-evict-scheme</scheme-name>
                    <init-params>
                        <init-param>
                             <param-name>flush-delay</param-name>
                             <param-value>10s</param-value>
                        </init-param>
                        <init-param>
                             <param-name>expiry-delay</param-name>
                             <param-value>24h</param-value>
                        </init-param>
                        <init-param>
                             <param-name>high-units</param-name>
                             <param-value>600</param-value>
                        </init-param>
                   </init-params>
              </cache-mapping>
              <!-- BEGIN: com.oracle.coherence.patterns.command
                    The following section needs to be included in your application
                   Cache Configuration file to make use of the Command Pattern
              -->
              <cache-mapping>
                   <cache-name>sequence-generators</cache-name>
                   <scheme-name>distributed-scheme-for-sequence-generators</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>commands</cache-name>
                   <scheme-name>distributed-scheme-with-backing-map-listener</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>contexts</cache-name>
                   <scheme-name>distributed-scheme-with-backing-map-listener</scheme-name>
                   <init-params>
                        <init-param>
                             <param-name>backing-map-listener-class-name</param-name>
                             <param-value>com.oracle.coherence.patterns.command.internal.ContextBackingMapListener</param-value>
                        </init-param>
                   </init-params>
              </cache-mapping>
              <!-- END: com.oracle.coherence.patterns.command -->
         </caching-scheme-mapping>
         <!-- ****************************************************************** -->
         <caching-schemes>
              <distributed-scheme>
                   <scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
                   <backing-map-scheme>
                      <read-write-backing-map-scheme>
                             <internal-cache-scheme>
                               <local-scheme>
                                    <scheme-ref>binary-eventsource-local-scheme</scheme-ref>
                               </local-scheme>
                             </internal-cache-scheme>
                             <cachestore-scheme>
                               <class-scheme>
                                    <class-name>container:com.core.cache.cachestores.{cachestore-name}</class-name>
                               </class-scheme>
                             </cachestore-scheme>
                             <cachestore-timeout>1800000</cachestore-timeout>
                             <write-delay>1</write-delay>
                             <write-requeue-threshold>50000</write-requeue-threshold>
                        </read-write-backing-map-scheme>
                   </backing-map-scheme>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                   </serializer>
              </distributed-scheme>
              <distributed-scheme>
                   <scheme-name>distributed-identifiable-persist-evict-scheme</scheme-name>          
                   <backing-map-scheme>
                      <read-write-backing-map-scheme>
                             <internal-cache-scheme>                  
                               <local-scheme>
                                       <expiry-delay>10s</expiry-delay>
                                        <high-units>10000</high-units>                                                      
                               </local-scheme>
                             </internal-cache-scheme>                      
                             <cachestore-scheme>
                               <class-scheme>
                                    <class-name>container:com.core.cache.cachestores.{cachestore-name}</class-name>
                               </class-scheme>
                             </cachestore-scheme>
                             <cachestore-timeout>1800000</cachestore-timeout>
                             <write-delay>1s</write-delay>
                             <write-requeue-threshold>50000</write-requeue-threshold>
                        </read-write-backing-map-scheme>
                   </backing-map-scheme>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                   </serializer>
              </distributed-scheme>
              <distributed-scheme>
                   <scheme-name>distributed-identifiable-evict-scheme</scheme-name>          
                   <thread-count>5</thread-count>
                   <backing-map-scheme>
                        <local-scheme>
                             <scheme-ref>eventsource-local-scheme</scheme-ref>
                              <flush-delay>{flush-delay}</flush-delay>
                             <expiry-delay>{expiry-delay}</expiry-delay>
                              <high-units>{high-units}</high-units>
                        </local-scheme>
                   </backing-map-scheme>
                <autostart>true</autostart>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                   </serializer>
              </distributed-scheme>
              <!--
                   ********* A distributed store that contains identifiable  *******
                   ********* objects. Keys are all Ids and values are AbstractIdentifiable.  ******
              -->
              <distributed-scheme>
                   <scheme-name>distributed-identifiable-scheme</scheme-name>
                   <thread-count>5</thread-count>
                   <backing-map-scheme>
                        <local-scheme>
                             <scheme-ref>binary-eventsource-local-scheme</scheme-ref>
                        </local-scheme>
                   </backing-map-scheme>
                <autostart>true</autostart>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                   </serializer>
              </distributed-scheme>
              <distributed-scheme>
                <scheme-name>distributed-token-scheme</scheme-name>
                <thread-count>1</thread-count>
                <backing-map-scheme>
                    <local-scheme>
                        <scheme-ref>token-eventsource-local-scheme</scheme-ref>
                    </local-scheme>
                </backing-map-scheme>
                <autostart>true</autostart>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                   </serializer>
            </distributed-scheme>
              <!--
                   ********* A replicated scheme with unlimited local storage *******
                   ********* all items should extend from AbstractIdentifiable ******
              -->
              <replicated-scheme>
                   <scheme-name>replicated-identifiable-scheme</scheme-name>
                   <backing-map-scheme>
                        <local-scheme>
                             <scheme-ref>eventsource-local-scheme</scheme-ref>
                        </local-scheme>
                   </backing-map-scheme>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                   </serializer>
              </replicated-scheme>
              <!--
                   ********* A timeseries scheme with limited local storage *******
                   ********* all items should be keyed using a TimeseriesKey ******
              -->
              <distributed-scheme>
                   <scheme-name>distributed-timeseries-scheme</scheme-name>
                <lease-granularity>member</lease-granularity>
                   <backing-map-scheme>
                        <local-scheme>
                             <scheme-ref>binary-eventsource-local-scheme</scheme-ref>
                        </local-scheme>
                   </backing-map-scheme>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                   </serializer>
              </distributed-scheme>
              <distributed-scheme>
                   <scheme-name>distributed-timeseries-log-scheme</scheme-name>
                <lease-granularity>member</lease-granularity>
                   <backing-map-scheme>
                        <local-scheme>
                             <scheme-ref>eventsource-local-scheme</scheme-ref>
                            <high-units>500</high-units>
                             <expiry-delay>4h</expiry-delay>
                             <flush-delay>10s</flush-delay>
                        </local-scheme>
                   </backing-map-scheme>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                   </serializer>
              </distributed-scheme>
              <!-- BEGIN: com.oracle.coherence.patterns.command
                    The following section needs to be included in your application
                   Cache Configuration file to make use of the Command Pattern
              -->
              <distributed-scheme>
                   <scheme-name>distributed-scheme-with-backing-map-listener</scheme-name>
                   <backing-map-scheme>
                        <local-scheme>
                           <listener>
                                <class-scheme>
                                     <class-name>{backing-map-listener-class-name com.oracle.coherence.common.backingmaplisteners.NullBackingMapListener}</class-name>
                                     <init-params>
                                          <init-param>
                                               <param-type>com.tangosol.net.BackingMapManagerContext</param-type>
                                               <param-value>{manager-context}</param-value>
                                          </init-param>
                                     </init-params>
                                </class-scheme>
                           </listener>
                        </local-scheme>
                   </backing-map-scheme>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                   </serializer>
              </distributed-scheme>
              <distributed-scheme>
                   <scheme-name>distributed-scheme-for-sequence-generators</scheme-name>
                   <backing-map-scheme>
                        <local-scheme>
                        </local-scheme>
                   </backing-map-scheme>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                   </serializer>
              </distributed-scheme>
              <!-- END: com.oracle.coherence.patterns.command -->
              <!--
                   ********* A local scheme that pushes events to the eventrouter *******
              -->
              <local-scheme>
                   <scheme-name>eventsource-local-scheme</scheme-name>
                   <listener>
                        <class-scheme>
                             <class-name>container:com.core.cache.events.EventSourceBackingMapListener</class-name>
                             <init-params>
                                  <init-param>
                                       <param-type> com.tangosol.net.BackingMapManagerContext</param-type>
                                       <param-value>{manager-context}</param-value>
                                  </init-param>
                             </init-params>
                        </class-scheme>
                   </listener>
              </local-scheme>
              <local-scheme>
                   <scheme-name>binary-eventsource-local-scheme</scheme-name>
                   <unit-calculator>BINARY</unit-calculator>
                   <listener>
                        <class-scheme>
                             <class-name>container:com.core.cache.events.EventSourceBackingMapListener</class-name>
                             <init-params>
                                  <init-param>
                                       <param-type> com.tangosol.net.BackingMapManagerContext</param-type>
                                       <param-value>{manager-context}</param-value>
                                  </init-param>
                             </init-params>
                        </class-scheme>
                   </listener>
              </local-scheme>
              <!--
                   ********* A local scheme that pushes events to the eventrouter, including DISTRIBUTION events. *******
              -->
              <local-scheme>
                   <scheme-name>token-eventsource-local-scheme</scheme-name>
                   <listener>
                        <class-scheme>
                             <class-name>container:com.core.cache.events.TokenEventSourceBackingMapListener</class-name>
                             <init-params>
                                  <init-param>
                                       <param-type>com.tangosol.net.BackingMapManagerContext</param-type>
                                       <param-value>{manager-context}</param-value>
                                  </init-param>
                             </init-params>
                        </class-scheme>
                   </listener>
              </local-scheme>
              <!--
                   ********* The TCP Extend proxy scheme ****************************
              -->
              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <thread-count>5</thread-count>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address>localhost</address>
                                  <port system-property="tangosol.coherence.ems.port">10001</port>
                                  <reusable>true</reusable>
                             </local-address>
                        </tcp-acceptor>
                        <serializer>
                             <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        </serializer>
                        <use-filters>
                             <filter-name>wrapped-gzip</filter-name>
                             <filter-name>version-check-filter</filter-name>
                        </use-filters>
                   </acceptor-config>
                   <autostart system-property="tangosol.coherence.ems.enabled">true</autostart>
              </proxy-scheme>
         </caching-schemes>
    </cache-config>

  • ORA-12913: Cannot create dictionary managed tablespace

    I am using the following statement:
    CREATE TABLESPACE data01
    DATAFILE 'D:\ORACLE\ORADATA\SSR\data01.dbf' SIZE 2M
    EXTENT MANAGEMENT DICTIONARY;
    and I get the following error message:
    ORA-12913: Cannot create dictionary managed tablespace
    If I take off EXTENT MANAGEMENT DICTIONARY, the statement executes. What am I doing wrong?

    Details about create a tablespace:
    CREATE TABLESPACE
    Purpose
    Use the CREATE TABLESPACE statement to create a tablespace, which is an allocation of space in the database that can contain persistent schema objects.
    When you create a tablespace, it is initially a read/write tablespace. You can subsequently use the ALTER TABLESPACE statement to take the tablespace offline or online, add datafiles to it, or make it a read-only tablespace.
    You can also drop a tablespace from the database with the DROP TABLESPACE statement.
    You can use the CREATE TEMPORARY TABLESPACE statement to create tablespaces that contain schema objects only for the duration of a session.
    See Also:
    Oracle9i Database Concepts for information on tablespaces
    ALTER TABLESPACE for information on modifying tablespaces
    DROP TABLESPACE for information on dropping tablespaces
    CREATE TEMPORARY TABLESPACE
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_74a.htm#SQLRF01403
    Joel Pérez
