Schema Changes from JSIM 5.5 to 6.0

Hello,
In reviewing the documentation for JSIM 6.0, it notes that there are schema changes from v5.5 in 6.0, but these changes are not detailed in the doc. Can anyone tell me exactly what these schema changes are, whether any scripts are required and available, and/or whether the schema is automatically updated if an in-place upgrade occurs?

A doc bug has been logged about this omission; it is fixed in the SP1 release notes.
There are scripts provided for you to upgrade from 5.x to 6.0 in the db_scripts directory of the install. There are a number of changes, which differ for each database type. Each database type comes with a script called upgradeto2005Q4M3.*
You can only upgrade from a version 5.x repo to version 6, not from anything before 5.x. You run the upgrade scripts before you import the update.xml file.
Major changes:
Oracle can now use BLOBs (optional)
MySQL 4.1 is the only version supported (4.0 cannot be used anymore)
Index changes for performance enhancements
The scripts contain comments explaining what they do and why they do it.
WilfredS

Similar Messages

  • Pwd-storage-scheme change (from CRYPT to SHA)

    Greetings;
    My pwd-storage-scheme (global-policy) is currently set to CRYPT, I am now required to change this to SHA.
    Most of my clients are Solaris, with a few RHEL (different flavors).
    What is the best way to make the above change?
    What effects, if any, will this have on my UNIX clients?
    Also, my pwd-compat-mode is set to "DS6-migration-mode" and I need to change this to "DS6-mode"; would this cause any issues for me?
    I only have DS6 servers in my environment, no DS5 at all, and no other DS servers, although at some point I may wish to sync with AD.
    Thanks all,

    Hi,
    (Unix) crypt is a one-way function, so the plain-text password is required to generate the SHA hash.
    You can change the password storage scheme to SHA, but existing passwords will only be stored with SHA once users update their passwords. To speed up the process, you can configure password expiration or force users to change their passwords. Note that users with passwords stored in crypt format will still be able to authenticate even if the password storage scheme is set to SHA. Said differently, different password storage schemes can coexist across existing entries. Over time, every password will be stored with the configured password storage scheme as users update their passwords.
    DS6 password policy mode introduces new operational attributes in user entries. These new attributes are generated when passwords are changed, so to have a fully featured password policy based on DS6-mode you should (1) move to migration mode, (2) have users update their passwords, (3) switch to DS6-mode. This admin action is related to the CRYPT/SHA switch, which already requires password changes.
    Note that there is a tool provided with ODSEE 11gR2 that generates the appropriate operational password policy attributes without requiring users to change their passwords. This might be an alternate solution if you have the right to use that version.
    HTH
    -Sylvain
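    Sylvain's point about schemes coexisting can be sketched in code. The following is a minimal, hypothetical Python illustration (not actual DS/ODSEE code; the function names are invented) of how LDAP-style userPassword values carry their scheme as a prefix, so a verifier can dispatch per entry and {CRYPT} and {SHA} values can live side by side while users gradually roll over:

```python
import base64
import hashlib

def make_sha_userpassword(plaintext: str) -> str:
    """Build an LDAP-style {SHA} userPassword value (SHA-1 digest, base64)."""
    digest = hashlib.sha1(plaintext.encode("utf-8")).digest()
    return "{SHA}" + base64.b64encode(digest).decode("ascii")

def storage_scheme(stored: str) -> str:
    """Read the scheme tag off a stored value, e.g. 'SHA' or 'CRYPT'."""
    if stored.startswith("{") and "}" in stored:
        return stored[1 : stored.index("}")]
    return "CLEAR"

def verify(plaintext: str, stored: str) -> bool:
    """Dispatch on the per-entry scheme; this is why CRYPT and SHA coexist."""
    scheme = storage_scheme(stored)
    if scheme == "SHA":
        return make_sha_userpassword(plaintext) == stored
    # A real server would call the matching legacy routine (e.g. crypt(3)) here.
    raise NotImplementedError(f"{scheme} verification not sketched here")

stored = make_sha_userpassword("s3cret")
print(storage_scheme(stored))                    # SHA
print(storage_scheme("{CRYPT}ab8xTENQlRz2Y"))    # CRYPT
print(verify("s3cret", stored))                  # True
```

    When the global scheme is switched to SHA, only newly written values get the {SHA} form; existing {CRYPT} entries keep verifying through their own branch until the user next changes their password.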

  • Removing schema changes in AD made by software

    Hi,
    I've got an application in my forest, which has extended the schema. This is not Exchange, but if I uninstall this application for any reason, how can I remove all the associated schema changes from AD?
    Thanks

    Hello,
    It is not possible to roll back the schema to its previous state. As other experts mentioned in their posts, the only way is to do a full forest recovery. Have a look at this link for more information:
    Best Practices for Implementing Schema Updates
    Regards.
    Mahdi Tehrani | www.mahditehrani.ir

  • Order Management 11i Schema Changes

    Hello,
    Does anyone have all the schema changes from Order Entry to Order Management listed out? Oracle has it all over the place, and I'd appreciate it if someone who has figured this out posts it.
    TIA,
    Sudha

    I wanted more of a mapping of old --> new tables/columns etc. For other modules we have Product notes where all this is listed - but OM is considered a new product and is not included in the product notes :-(
    I am doing my own mapping but wanted to know if someone else has it so I don't miss anything (we have a lot of custom views/reports/procedures which need to be upgraded and am having to rewrite most of them)
    I'd appreciate if someone who has done this mapping can share.
    If you have it, can you please send it to [email protected]

  • How to port database changes from development to a production environment

    How do I port database changes from the development to the production environment?
    I am using v8 and have always had to redo everything using the schema manager all over in the production environment. Is there an easy way to generate a script, for example to dump the database changes on the development machine to be executed later on the production machine?

    This should already be a clearly defined change control process. Once a procedure, function, package, trigger, or whatever completes the testing rounds, it should be promoted to production.
    Forgive me if it seems I'm trivializing, but I don't see the problem, just copy the object(s) from your software library (or development) into production using whatever tool works best or has been chosen. If you are doing data copies then you have various options again including good old export/import.

  • What is the best methodology to handle database schema changes after an application has been deployed?

    Hi,
    VS2013, SQL Server 2012 Express LocalDB, EF 6.0, VB, desktop application with an end user database
    What is a reliable method to follow when there is a schema change for an end user database used by a deployed application?  In other words, each end user has their own private data, but the database needs to be expanded for additional features, etc. 
    I list here the steps it seems I must consider.  If I've missed any, please also inform:
    (1) From the first time the application is installed, it should have already moved all downloaded database files to a separate known location, most likely some sub-folder in <user>\App Data.
    (2) When there's a schema change, the new database file(s) must also be moved into the location in item (1) above.
    (3) The application must check to see if the new database file(s) have been loaded, and if not, transfer the data from the old database file(s) to the new database file(s).
    (4) Then the application can operate using the new schema.
    This may seem basic, but for those of us who haven't done it, it seems pretty complicated.  Item (3) seems to be the operative issue for database schema changes.  Existing user data needs to be preserved, but using the new schema.  I'd like
    to understand the various ways it can be done, if there are specific tools created to handle this process, and which method is considered best practice.
    (1) Should we handle the transfer in a 'one-time use' application method, i.e. do it in application code?
    (2) Should we handle the transfer using some type of 'one-time use' SQL query? If this is the best way, can you provide some guidance on the different alternatives for how to perform this in SQL, and where to learn/see examples?
    (3) Some other method?
    Thanks.
    Best Regards,
    Alan

    Hi Uri,
    Thank you kindly for your response.  Also thanks to Kalman Toth for showing the right forum for such questions.
    To clarify the scenario, I did not mean to imply the end user 'owns' the schema.  I was trying to communicate that in my scenario, an end user will have loaded their own private data into the database file originally delivered with the application. 
    If the schema needs to be updated for new application features, the end user's data will of course need to be preserved during the application upgrade if that upgrade includes a database schema change.
    Although I listed step 3 as transferring the data, I should have made it clearer that I was expressing my limited understanding of how this process "might work", since at the present time I am not an expert in this. I suspected my thinking was limited and that someone would correct me.
    This is basically the reason for my post; I am hoping an expert can point me to what I need to learn about to handle database schema changes when application upgrades are deployed.  For example, if an SQL script needs to be created and deployed
    then I need to learn how to do that.  What's the best practice, or most reliable/efficient way to make sure the end user's database is changed to the new schema after the upgraded application is deployed?  Correct me if I'm wrong on this,
    but updating the end user database will have to be handled totally within the deployment tool or the upgraded application when it first starts up.
    If it makes a difference, I'll be deploying application upgrades initially using Click Once from Visual Studio, and eventually I may also use Windows Installer or Wix.
    Again, thanks for your help.
    Best Regards,
    Alan
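    The data-transfer step (3) in the question above can be sketched as a versioned migration that the application runs at startup. This is a minimal, hypothetical Python illustration (SQLite standing in for SQL Server LocalDB, and the table/column names invented): the database file records its schema version, and the application applies any migrations newer than that version before touching the user's data:

```python
import sqlite3

# Ordered migrations: each entry upgrades the schema by exactly one version.
MIGRATIONS = {
    1: ["CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"],
    2: ["ALTER TABLE customer ADD COLUMN email TEXT"],  # the "new feature" column
}

def upgrade(conn: sqlite3.Connection) -> int:
    """Apply every migration newer than the stored version; return the new version."""
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for version in sorted(MIGRATIONS):
        if version > current:
            for stmt in MIGRATIONS[version]:
                conn.execute(stmt)
            conn.execute(f"PRAGMA user_version = {version}")
            current = version
    conn.commit()
    return current

conn = sqlite3.connect(":memory:")   # a real app would open the file under AppData
upgrade(conn)                        # fresh install: runs migrations 1 then 2
conn.execute("INSERT INTO customer (name, email) VALUES (?, ?)",
             ("Alan", "alan@example.com"))
print(conn.execute("PRAGMA user_version").fetchone()[0])  # 2
```

    Because migrations ALTER the existing file in place rather than swapping in a fresh one, the user's rows survive the upgrade; only genuinely incompatible changes need an explicit copy-across step.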

  • Best Practice for Replicating Schema Changes

    Hi,
    We manage several merge replication topologies (each topology has a single publisher/distributor with several pull subscriptions, all servers/subscribers are SQL Server 2008 R2).  When we have a need to perform schema changes in support of pending software
    upgrades we do the following:
    a) Have all subscribers synchronize to ensure there are no unsynchronized changes present in the topology at the time of schema update,
    b) Make full copy-only backup of distribution and publication databases,
    c) Execute snapshot agent,
    d) Execute schema change script(s) on the publisher (* when c and d are reversed, this has caused issues with changes to view definitions, which has resulted in us having to reinitialize subscriptions),
    e) Have subscribers synchronize again to receive the schema updates.
    Each topology has its own quirks in terms of subscriber availability and, consequently, the best time to perform such updates.
    The above process would seem necessary when making schema changes to remove tables, columns and/or views from the database, but when schema changes are focused on adding and/or updating objects, and/or adding/updating data, is the entire process above necessary? 
    In this instance, if it's possible to remove the step of coordinating the entire topology to synchronize prior to performing these changes I would like to do that.
    The process as we currently perform it works without issue, but I'd like to streamline it if and where possible, while maintaining integrity and avoiding potential for non-convergence.
    Any assistance or insight you can provide is greatly appreciated.
    Best Regards
    Brad

    If you need to make schema changes then you will need to use ALTER syntax at the publisher. By default the schema change will be propagated to subscribers automatically, provided the publication property @replicate_ddl is set to true. This is covered in
    Make Schema Changes on Publication Databases.
    This can be done at any time, without the need to synchronize unsynchronized changes, make a backup, or execute the snapshot agent.
    Adding a new article involves adding the article to the publication, creating a new snapshot, and synchronizing the subscription to apply the schema and data for the newly added article. Reinitialization is not required, but a new snapshot is.
    Dropping an article from a publication involves dropping the article, creating a new snapshot, and synchronizing subscriptions. Special considerations must be made for merge publications with parameterized filters and a compatibility level lower than 90RTM.
    This is covered in
    Add Articles to and Drop Articles from Existing Publications.
    Brandon Williams (blog |
    linkedin)

  • Using Change Data Capture in SSIS - how to handle schema changes

    I was asked to consider change data capture for a database recently.  I can see that from the database perspective, its quite nice.  When I considered how I'd do this in SSIS, it seemed pretty obvious that I might have a problem, but I wanted to
    confirm here.
    The database in question changes its schema about once per month in production. We have a lot of controls in our environment, so every time a table's schema is changed I'd have to do a formal change request to deal with a change to my code base, in this case my SSIS package; it can be a lot of work. If I wanted to track the data changes for inserts, updates and deletes using an SSIS package to send the data changes to the destination tables, would I have to change my SSIS package with every schema change, or is there a way to keep the exact same SSIS package with CDC without having to change it every month?
    Thanks,
    Keith

    Hi Keith,
    What is your exact requirement?
    If you want to capture object_created, object_deleted or object_altered information, you can try using Extended Events.
    As mentioned in your OP:
    "If I wanted to track the data changes for inserts, updates and deletes using an SSIS package to send the data changes to the destination tables, would I have to change my SSIS package with every schema change, or is there a way to keep the exact same SSIS package with CDC without having to change it every month?"
    If you want the databases in two different environments to be in sync, then take periodic backups and apply (restore) them on the destination DB,
    (or)
    you can also try SQL Server replication if it is really needed.
    As I understand from your description, if you want the table data & schema to be in sync in two different databases, then create a job (a script that will drop the destination DB table & recreate it as a copy of the source DB table) as per your requirement:
    --CREATE DATABASE db1
    --CREATE DATABASE db2
    USE db1
    GO
    CREATE TABLE tbl (Id INT)
    GO
    USE db2
    GO
    -- drop the old copy if it exists, then recreate it from the source table
    IF EXISTS (SELECT * FROM sys.objects WHERE name = 'tbl' AND type = 'U')
        DROP TABLE dbo.tbl
    SELECT * INTO db2.dbo.tbl FROM db1.dbo.tbl
    SELECT * FROM dbo.tbl
    --DROP DATABASE db1, db2
    sathya - www.allaboutmssql.com ** Mark as answered if my post solved your problem and Vote as helpful if my post was useful **.

  • DM 3.0.0.665 / Can't delete schema information from PK in relational model

    I'm trying to erase schema information from primary keys in the relational model by opening the Primary Key Properties window and selecting the empty line from the "Schema" pull-down menu. When I click OK I get the following: "There is a Foreign Key on this Key. The status cold be PK or UK only". I didn't change anything else, only tried to clear the schema information. This does not happen with every primary key, only some of them, but I haven't figured out how they differ from each other.
    I managed to clear schema information from some primary keys by deleting the schemaObject tag from the XML file. But I still have a few primary keys left that didn't have the schemaObject tag in the XML and still have a schema name in Primary Key Properties, and I'm not able to choose empty from the Schema pull-down menu.

    Hi,
    Thanks. I logged a bug on this. It looks like the problem is occurring if there is a Foreign Key relationship to the Table with the PK.
    David

  • Generic query on documenting and maintaining schema changes -Oracle 10g

    Hi All,
    This is a generic query and I could not find any particular forum here to put this in for suggestions and help. I have put this in the documentation forum also but haven't got any inputs.
    Could you all please advise a good and easy way I can store all DB-related stuff, including every detail and the minutest change done?
    I work as an Oracle DB developer and it has been particularly difficult to keep track of the changes.
    We have a Development pipeline (a code base for development), a Testing pipeline (another code base, for testing),
    and then comes Production Deployment (another one for clients).
    Presently, we have all our DDL, SQL, etc. scripts stored in a sort of shared location, but keeping them updated with every change done in any of the above pipelines is very difficult, and I end up finding discrepancies between the stored versions and what is in Testing and Production.
    I typically spend a good deal of time comparing the entire scripts and schema changes to try to figure out what the right and latest version of a script should be.
    So, I need your inputs on any particular free tool, best practices and a good process for easily tracking and maintaining DB changes.
    --Thanks
    Dev

    The problem with most configuration management systems is that they were made for files, not database objects. Files such as dlls get replaced by the new version upon deployment, but database tables do not. Tables get upgraded with alter scripts. So if you want to maintain create table scripts, you need to upgrade them with every change, as well as creating alter table scripts.
    For a free configuration management system, see http://subversion.tigris.org/. Such software will also do file comparisons including version comparison.
    For comparing installation scripts to existing schemas, I would create a new schema with the installation script and then compare that new schema to an existing schema using the schema comparison feature in Toad or PLSQL Developer.
    Alternately, you can forget about maintaining installation scripts and just take an export without data from your production or test schema whenever you need to create a new installation. You can also extract DDL from prod using the dbms_metadata package. Embarcadero Rapid SQL was also good for extracting DDL.
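    The schema-comparison approach suggested above can be sketched without Toad or PL/SQL Developer. This is a small, hypothetical Python illustration using SQLite's catalog as a stand-in for Oracle's data dictionary (the table and column names are invented): build one schema from the installation script, read both catalogs, and diff table by table:

```python
import sqlite3

def schema_signature(conn: sqlite3.Connection) -> dict:
    """Map each table name to its ordered (column name, declared type) list."""
    sig = {}
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
    for (table,) in tables:
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
        sig[table] = [(c[1], c[2]) for c in cols]  # row = (cid, name, type, ...)
    return sig

fresh = sqlite3.connect(":memory:")   # built fresh from the installation script
fresh.execute("CREATE TABLE emp (id INTEGER, name TEXT, dept TEXT)")

prod = sqlite3.connect(":memory:")    # stand-in for the drifted existing schema
prod.execute("CREATE TABLE emp (id INTEGER, name TEXT)")

a, b = schema_signature(fresh), schema_signature(prod)
for table in sorted(set(a) | set(b)):
    if a.get(table) != b.get(table):
        print(table, "differs:", a.get(table), "vs", b.get(table))
```

    The same idea applies to Oracle by querying ALL_TABLES/ALL_TAB_COLUMNS instead of sqlite_master; the point is that the catalogs, not the stored scripts, are the authoritative record of what each pipeline actually contains.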

  • Migration Verification with schemas changed

    Hi
    Does the migration verification tool support this scenario?
    Source database -
    Target database - with the schema changed in some tables.
    We have changed the schemas of some tables. Do we have any mechanism by which we can check the migration's correctness (maybe by giving a mapping of fields between source and target tables)?
    Can this tool do this kind of verification?
    We have both databases as Oracle only.
    Alternatively, I have been working to develop a tool to do the same.
    But the basic problem is:
    how do we ensure that the rows we are checking (from the source and target tables) are related ones, so that we can check all the fields? We don't want to do m*n comparisons. And even after doing m*n comparisons, how do we ensure that we have checked the right row, since the same value can appear in a column for many rows, which will match?
    I hope that my problem is understood.
    Thanks
    Atul

    There is a commercially available/generalized tool available to verify migrated data that permits the user/developer to specify the desired source to destination mappings between two databases with dissimilar schemas. The tool provides the ability to map source to destination fields, define the mappings between the fields, compare source to target records after the migration has been completed and to report on the results. The tool has been successfully used to test/validate results for FDA compliance as well as other mission critical requirements.
    The product is TRUcompare and additional information is available from the product website www.trucompare.com, or you could call for additional information (800) 880-4540.
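    The row-matching problem Atul raises largely disappears once each row can be addressed by a reliable key (a primary key, or the agreed field mapping between source and target); without such a key, rows genuinely cannot be paired. A minimal, hypothetical Python sketch (the keys and row values are invented) of comparing keyed row sets in O(m+n) instead of m*n:

```python
# Rows fetched from source and target, keyed by primary key.
source = {1: ("alice", "NY"), 2: ("bob", "LA"), 3: ("carol", "SF")}
target = {1: ("alice", "NY"), 2: ("bob", "SF")}  # key 2 changed, key 3 missing

def compare(source: dict, target: dict):
    """Pair rows by key, then report missing/extra keys and value mismatches."""
    missing = sorted(set(source) - set(target))       # in source, not in target
    extra = sorted(set(target) - set(source))         # in target, not in source
    mismatched = sorted(k for k in set(source) & set(target)
                        if source[k] != target[k])    # paired but values differ
    return missing, extra, mismatched

missing, extra, mismatched = compare(source, target)
print(missing, extra, mismatched)   # [3] [] [2]
```

    When the target schema renames or splits columns, the field mapping is applied while building each row tuple, so the comparison itself stays unchanged.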

  • Any tools to rollback schema change and map new data to old schema?

    What is the best way to rollback a schema change, but preserve the data that is in the new schema?
    We are updating an application for a customer, however the customer would like to rollback to the old application if after a few days they run into a critical problem.
    The application schema is changing - not a lot, but a number of tables will have extra columns.
    To rollback, one would have to re-create the original schema and load data into it from the updated schema.
    I thought that the Migration toolkit/workbench might have been able to do this for Oracle --> Oracle migrations, but it appears to only support non-Oracle Database migrations to Oracle.
    Are there any oracle tools in say OEM or elsewhere that can handle data transformations?

    Kirk,
    You are correct, the focus of the Oracle Migration Workbench is on non-Oracle databases.
    I am not aware of a tool that can automatically do what you want, but it should be possible. Depending on the version of Oracle they are running, it is now possible to drop columns, for example. Your approach would be based on the nature of the changes. Have you looked into the OEM Change Management Pack?
    http://www.oracle.com/technology/products/oem/pdf/ds_change_pack.pdf
    You might also post your message to the Database General forum.
    Donal

  • HT202174 I need to change from appleshare to Mac OS journaled disk format on my Time Capsule. How to proceed?

    I need to change from appleshare to Mac OS journaled disk format on my Time Capsule. How to proceed?

    No Time Capsule disk can be formatted other than the automatic system in the TC itself.. You simply select erase and it will reformat the drive.
    If you are referring to a USB drive.. plugged into a Time Capsule.. you must plug it into your Mac and use Disk Utility.
    However you do need to understand.. appleshare is not a disk format.. it is a partition scheme.. so when you pick the disk you need to change its partition scheme to GUID.. this has nothing to do with Mac OS journaled format.. the two are distinctly different. Also Appleshare works fine for most people.. there really is no need to change.
    Anyway here is how to do it.
    I have sd card I can use as example. Click on the Partition Layout.. so you can choose something else.. notice it is presently set to Master Boot Record.. but is formatted Mac OS Extended Journaled.
    When you change the partition layout Options at the bottom will become active and you can change the setting. Click Options.
    Choose whichever. GUID is the current system for boot drives.
    None of this is relevant to the internal disk of the Time Capsule.

  • Schema changes, minimal downtime

    We are a software development company, using Oracle 10g (10.2.0.2.0). We need to implement schema changes to our application, in a high traffic environment, with minimal downtime. The schema changes will probably mean that we have to migrate data from the old schema to new or modified tables.
    Does anyone have any experience with this, or a pointer to a 'best practices' document?

    It really depends on what "minimal" entails and how much you're willing to invest in terms of development time, testing, hardware, and complexity in order to meet that downtime requirement.
    At the high end, you could create a second database either as a clone of the current system that you would then run your migration scripts against or as an empty database using the new schema layout, then use Streams, Change Data Capture, or one of Oracle's ETL tools like Warehouse Builder (which is using those technologies under the covers) to migrate changes from the current production system to the new system. Once the new system is basically running in sync with the old system (or within a couple of seconds), you can shut down the old system and switch over to the new system. If the application front end can move seamlessly to the new system, and you can script everything else, you can probably get downtime to the 5-10 second range, less if both versions of the application can run simultaneously (i.e. a farm of middle-tier application servers that can be upgraded 1 by 1 to use the new system).
    Of course, at this high end, you're talking about highly non-trivial investments of time/ money/ testing and a significant increase in complexity. If your definition of 'minimal' gets broader, the solutions get a lot easier to manage.
    Justin

  • Hi friends. I want to copy schema 1 from

    Hi friends. I want to copy schema 1 from Machine A to Machine B.
    With exp and imp, I have right now:
    On Machine A:
    Schema 1, with synonyms to objects in schema SYS.
    On Machine B:
    Schema 1.
    I want to copy from Machine A only the objects of schema SYS referenced by Schema 1. But if I export as user SYS, the export contains more objects than I want to import on Machine B.
    How can I copy only the objects of SYS from Machine A to Machine B that are referenced by Schema 1? Thanks a lot for your help

    Hi Ganga,
    Use Adapter Scheduling,
    1. Go to Runtime Workbench -> Component Monitoring -> Communication Channel Monitoring
    2. Locate the link Availability Time Planning on the top right corner of your Communication Channel Monitoring page.
    3. In Availability Time Planning, choose the Availability time as daily and say create.
    4. Provide the details like the time 12:00
    5. Select the communication channel, go to the Communication Channels tab and filter and add the respective channel (File Sender).
    6. Once all the above has been done 'Save' the changes.
    Note: You will need to have the authorizations of the user group SAP_XI_ADMINISTRATOR with the role modify.
    Adapter Scheduling - Hail SP 19 :-) (/people/shabarish.vijayakumar/blog/2006/11/26/adapter-scheduling--hail-sp-19) By Shabarish Vijayakumar
    Regards
    San
