Directory Migration with changing schema

Hi,
We are planning a directory migration as part of an implementation of Sun Identity Manager.
The directory migration is from one set of servers to another and comprises an upgrade from Directory Server 5.2 to 6.3, and a schema change.
We'd like to have a rollback plan that involves copying changes from the new directory to our legacy servers. A delay is acceptable.
We're doing two things to our schema, which increases the complexity:
- Users are being segregated, e.g.:
OLD: uid=123456,ou=People,ou=UK,dc=root
NEW: uid=123456,ou=Internal,ou=Users,dc=root OR uid=567890,ou=External,ou=Users,dc=root
- Replacing an ou hierarchy containing groups (groupOfUniqueNames) with nsRoleDN attributes on users (see the LDIF sketch below).
We'd like to avoid writing some kind of custom script to retrieve changes, modify them and insert them into the old directory.
Directory Proxy doesn't seem to be the right tool for this job.
Could anyone suggest an alternative?
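For clarity, the target shape for one user would look roughly like the following ldapmodify/LDIF sketch; cn=Example-Role and the Directory Manager password are placeholders, and this only illustrates the new structure, not the bulk migration mechanism itself.
ldapmodify -D "cn=Directory Manager" -w YOURDIRECTORYMANAGERPASSWORD <<EOF
# 1) move the user out of the per-country People subtree into the new structure
dn: uid=123456,ou=People,ou=UK,dc=root
changetype: moddn
newrdn: uid=123456
deleteoldrdn: 0
newsuperior: ou=Internal,ou=Users,dc=root

# 2) express a former groupOfUniqueNames membership as a role on the entry
dn: uid=123456,ou=Internal,ou=Users,dc=root
changetype: modify
add: nsRoleDN
nsRoleDN: cn=Example-Role,ou=Users,dc=root
EOF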

So I have done this kind of migration before. The directory information is not that hard to migrate; in fact, Novell IDM could do that piece for you.
Migrating the files and security over will be an exercise in icacls.
Logon scripts/Group Policy Preferences will need to be updated to convert the existing drive-mapping scripts over. Look at DFS for this so you only have to migrate once.
If everything is planned out it will keep the migration smooth. Truly understanding the role of Novell IDM in the environment, analyzing the shared files and data (a good time to clean up, hint hint), and validating that the groups and security are set up correctly will aid immensely.
I would also migrate in batches based on shared data rather than trying to move everyone in one big move; that way, as departments move, you can get the kinks out of the process.
From a workstation that has the Novell client you can script the data copy over to the new home with robocopy. Once the data is copied, robocopy can keep the data synced until the users are migrated over.
Novell can export directory structures with their permissions; scripting icacls will get the permissions repopulated on the Microsoft side. The groups and users would of course need to exist there already.
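For illustration only, a rough sketch of what that scripting could look like; the server names, share paths and the DOMAIN\jsmith account are placeholders, so double-check the switches against your own environment.
:: seed the copy, then re-run the same command to keep the target in sync until cut-over
robocopy \\OLDSRV\HOME \\NEWSRV\HOME /MIR /R:1 /W:1 /LOG:C:\logs\home_sync.log
:: re-apply rights on the Microsoft side from the list exported out of eDirectory,
:: e.g. one grant per user home directory
icacls "D:\Home\jsmith" /grant "DOMAIN\jsmith":(OI)(CI)M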
Thanks,
Brad Held
Windorks.wordpress.com

Similar Messages

  • IXOS 9.6.1 migration with change in DBMS (Oracle to MS SQL)

    Hi All
    We have a requirement to migrate an IXOS 9.6.1 archive server (DBMS: Oracle) from one server to another, with a change of DBMS to SQL Server.
    Current : OpenText Archive server 9.6.1 with Oracle 
    Target : OpenText Archive server 9.6.1 with MS SQL
    Can anyone please help me out with the migration of the OpenText archive server?
    Thanks and Regards,
    Chaitanya

    I turned on the jsp debug init-param for the system for more information about the error. I get the following:
    Request URI:/interop/jsp/css/CSSTemplate.jsp
    Exception:
    OracleJSP:oracle.jsp.parse.JspParseException: /jsp/css/provisionSummary.jsp: Line # 119, <bean:define id="appId" name="appName" property="key"/>
    Error: Tag attempted to define a bean which already exists: appId
    When I attempt to provision the user I created for administrative purposes, I also see the following:
    --from the file SharedServices_Metadata.log I see the error:
    2009-09-18 15:55:32,399 [AJPRequestHandler-HTTPThreadGroup-6] ERROR com.hyperion.eie.common.cms.CMSFacade - org.apache.slide.structure.ObjectNotFoundException: No object found at /users/admin
    --from SharedServices_Admin.log
    2009-09-17 14:49:20,583 [Thread-13] ERROR com.hyperion.cas.server.CASAppRegistrationHandler.loadApplicationsFromCMS(CASAppRegistrationHandler.java:430) - AuthorizationException occured when loading applications : Refer log in debug mode for details
    How does one set these logs to debug mode?

  • Core data: Versioned model migration with change in existing attribute data types.

    Hi ,
    I want to upgrade my iOS app on the App Store with new functionality related to the Core Data features in the app.
    In the new upgraded version, I want to change the data type of an attribute that is already present in the Core Data model of the existing App Store version.
    e.g. in version 1.0 the attribute "date" has the data type NSDate; the new version of the model needs to change that data type from NSDate to NSString.
    I tried lightweight migration, but as per the documentation, lightweight migration does not support changing the data type of an existing attribute/entity in the Core Data model.
    Please suggest an optimized solution for migrating the database along with the change in data type of the existing attribute of the entity in Core Data.
    If further info is required, please ask.
    Thanks in advance.
    Regards,
    Laxmikant

    More Info: The two entries are actually pointing to the same object. If I save the context and restart, I only have one entry (I can also see this by looking at the XML store).
    It seems that the NSTableView is getting messed up somehow after the FetchRequest. Still stumped.

  • Msg server implementation with change in directory tree structure

    Hello all,
    Our organization is using Directory Server 5.1 and Messaging Server 5.2. Our company is going for a change of directory tree structure. Can anybody please tell me whether, for this change of directory tree structure, we will have to reinstall our messaging servers? Is there a way by which the present messaging servers can be made to operate with the new directory server and the revamped directory tree structure? We are not going to upgrade to a new version of either the directory or the messaging server.

    The "schema 1" that 6.0 supports is the two tree system.
    Schema 2 is single tree. You may be able to go that direction, BUT this road is not well documented. There's really no "schema guide" the way there is for 5.2 and "schema 1".
    You can certainly download and install 6.1, provision some test users, and see what it does.
    The mailstore has had only minor changes going from 5.1 to 5.2 to 6.0 to 6.1, and the server should automatically and invisibly upgrade as you go.
    The queue is incompatible, so you will need to clear the queue before you upgrade.

  • HT6114 After I migrate my pictures and music from my PC to my MacBook Air, how do I access that info? Does it have something to do with changing the user?

    I successfully migrated my pictures and music via Wi-Fi from my PC laptop to my new MacBook Air, but I don't know how to access that info. The space was used up on my MacBook Air, but I can't find the music or pictures. Does it have something to do with changing the user, and if so, what do I do when I change it? I think I got that far but got stuck.

    Hello jeycool1,
    Thank you for using Apple Support Communities.
    For more information, take a look at:
    Switch Basics: Migrate your Windows files to your Mac
    http://support.apple.com/kb/ht2518
    If you transferred a user account from your Windows PC to your Mac, you can log into the account on your Mac.
    Have a nice day,
    Mario

  • Migration and copying schema

    Greetings,
    I am in the process of upgrading from DS 5.2 to 6.3. I am using dsmig to migrate. Here is what I am doing.
    a) I make a complete image of the 5.2 instance and copy it to the new 6.3 machine
    b) The name of the 5.2 schema is schemaORIG (example).
    c) I use dsmig to import (schema, configuration and security). The 6.3 schema is called schemaNEW.
    d) Now I want to rename schemaNEW to schemaORIG.
    To rename schemaNEW I am thinking of doing the following:
    a) Remove schemaORIG, since it is a copy of the files from the old LDAP server
    b) Use dscc to create a new schema and call it schemaORIG and copy all the settings from schemaNEW.
    Does this approach make sense? I am new to this game so any better approach will be appreciated.
    I looked for the solutions on this topic but was not successful and I apologize if this question has been answered before.
    Thanks

    Hi,
    Well I don't think that you should play with renaming the databases. The best way is to do it this way:
    1. Install the new DS 6.3 on the same server on which you have DS 5.2
    2. Migrate only the schema:
    /opt/SUNWdsee/ds6/bin/dsmig -v /OLD/INSTANCE/PATH /NEW/INSTANCE/PATH
    3. Modify the cn=replication manager,cn=replication,cn=config (also known as the replication manager) password; the password must be identical to the one on 5.2
    4. Create empty suffixes on the new directory server
    /opt/SUNWdsee/ds6/bin/dsconf create-suffix SUFFIX
    5. Change pwdIsLockoutPrioritized from TRUE to FALSE (this causes replication errors between 5.2 and 6.x):
    /opt/SUNWdsee/dsee6/bin/ldapmodify -D "cn=Directory Manager" -w YOURDIRECTORYMANAGERPASSWORD
    dn: cn=Password Policy,cn=config
    changeType: modify
    replace: pwdIsLockoutPrioritized
    pwdIsLockoutPrioritized: FALSE
    6. Enable replication on all suffixes that you need:
    /opt/SUNWdsee/ds6/bin/dsconf enable-repl SUFFIX
    and initialize the suffixes from DS 5.2
    7. Reindex all suffixes
    8. Configure all replication agreements for the old DS 5 instances.
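    For steps 6 to 8, a rough sketch of the 6.3-side commands follows; newhost, desthost, port 389, the replica ID and the dc=root suffix are placeholders, so check the exact dsconf syntax against the 6.3 man pages (agreements on the 5.2 side are still created with the 5.2 tools).
    # enable the suffix as a master and reindex it after initialization
    /opt/SUNWdsee/ds6/bin/dsconf enable-repl -h newhost -p 389 -d 2 master dc=root
    /opt/SUNWdsee/ds6/bin/dsconf reindex -h newhost -p 389 dc=root
    # an agreement from the new server towards another replica would look like:
    /opt/SUNWdsee/ds6/bin/dsconf create-repl-agmt -h newhost -p 389 dc=root desthost:389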
    Predrag

  • Does The GroupWise Server Migration Utility Change Case?

    From what I've read, if I do a manual migration with dbcopy and use the "m" switch it will change the case of files (to lower) from the source server. Is this true?
    Also, if I use the Novell GroupWise Migration Utility will that change files to lower case? If not is there a way to make it?
    Thank you for your help
    David

    Originally Posted by samsa1mi
    From what I've read, if I do a manual migration with dbcopy and use the "m" switch it will change the case of files (to lower) from the source server. Is this true?
    Also, if I use the Novell GroupWise Migration Utility will that change files to lower case? If not is there a way to make it?
    Thank you for your help
    David
    Yes and yes :) The dbcopy -m switch converts filenames to lowercase, and according to the documentation the migration utility does the same:
    Performs an operation equivalent to GroupWise Check (GWCheck) with the storelowercase option to ensure that all filenames and directory names stored in the guardian database (ngwguard.db) are also converted to lowercase.
    Novell Doc: GroupWise Server Migration Utility Installation and Migration Guide - Post Office Migration Process
    Thomas
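    For reference, a first-pass dbcopy run of the kind discussed above might look roughly like the sketch below; the post office paths are placeholders, and the switch list should be verified against the dbcopy documentation for your GroupWise version.
    # copy the post office with the migration switch, which also lowercases
    # file and directory names on the way over
    ./dbcopy -m /media/nss/GWVOL/po1 /mnt/newserver/gwsystem/po1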

  • DB2 to Oracle conversion using SQL Developer Migration Wizard - different schemas

    I am performing a conversion from DB2 to Oracle 11 XE, using the SQL Developer Migration Wizard. Specifically I am trying to migrate the DB2User schema over to Oracle.
    Using the migration wizard, when I pick the Oracle target connection to be the same schema ( DB2User schema ) the migration is successful and all data is converted.
    However if I pick a different Oracle target connection ( say OracleUser ) , I run into issues.
    Firstly, the table schema is not created. When I check the project output directory, the .out file has the following errors:
       CREATE USER DB2User IDENTIFIED BY DB2User DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP
            SQL Error: ORA-01031: insufficient privileges
            01031. 00000 -  "insufficient privileges"
        connect DB2User/DB2User
        Error report:
        Connection Failed
        Commit
        Connection created by CONNECT script command disconnected
    I worked around this by manually executing the .sql in the project output directory using the OracleUser id  in the new DB.
    Then I continue with the migration wizard and perform the Move Data step.
    Now the message appears as successful; however, when I review the Migrationlog.xml file, I see errors as follows:
    <level>SEVERE</level>
      <class>oracle.dbtools.migration.workbench.core.logging.MigrationLogUtil</class>
      <message>Failed to disable constraints: Data Move</message>
      <key>DataMove.DISABLE_CONSTRAINTS_FAILED</key>
      <catalog>&lt;null&gt;</catalog>
      <param>Data Move</param>
      <param>oracle.dbtools.migration.workbench.core.logging.LogInfo@753f827a</param>
      <exception>
        <message>java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist</message>
      <level>WARNING</level>
      <class>oracle.dbtools.migration.datamove.online.TriggerHandler</class>
      <message>ORA-01031: insufficient privileges
    </message>
    I think what is happening is that the wizard is attempting to perform the 'move data' process using the DB2User id.
    How do I tell the wizard that the target schema is different from my source schema?
    My requirement is that I need to be able to migrate the DB2User schema to different schemas in the same Oracle database
    ( since we will have multiple test environments under the same database ) .
    Thanks in advance .
    K.

    Perhaps the following from the SQL Developer documentation is helpful for you:
    Command-Line Interface for Migration
    As an alternative to using the SQL Developer graphical interface for migration operations, you can use the migration batch file (Windows) or shell script (Linux) on the operating system command line. These files are located in the sqldeveloper\sqldeveloper\bin folder or sqldeveloper/sqldeveloper/bin directory under the location where you installed SQL Developer.
    migration.bat or migration.sh accepts these commands: capture, convert, datamove, delcaptured, delconn, delconverted, driver, generate, guide, help, idmap, info, init, lscaptured, lsconn, lsconverted, mkconn, qm, runsql, and scan. For information about the syntax and options, start by running migration without any parameters at the system command prompt. For example:
    C:\Program Files\sqldeveloper\sqldeveloper\bin>migration
    You can use the -help option for information about one or more actions. For the most detailed information, including some examples, use the -help=guide option. For example:
    C:\Program Files\sqldeveloper\sqldeveloper\bin>migration -help=guide
    Regards
    Wolfgang

  • Database with many schemas - query schema depending on user

    Let's assume we have a number of schemas in a database (hundred or so), which all contain the same tables.
    The customer wants one application express application which accesses only the schema corresponding to the user that is currently logged in.
    So database schema user1 contains table EMP, and schema user2 also contains EMP. Now the application is assigned both schemas user1 and user2, and when user2 logs in he only sees database schema user2.
    Is this possible with Apex? I've tried both built-in authentication and database authentication for the application, but found out this is truly only about authentication, because the query is always executed by APEX_PUBLIC_USER.
    How can I manage which schema is accessed with the authenticated user?
    Thanks in advance!

    Patrick,
    Every apex application has an "owner" attribute which is used as the parsing schema. All SQL and PL/SQL in your app is parsed as that schema using that schema's privileges. It is as if your application were a definer's rights stored procedure in that schema. The APEX_PUBLIC_USER schema is simply used to create the session and has no bearing on privileges. Currently there is no way to change the owner attribute of an application at runtime. However, your application can operate with any schema in the database according to the privileges granted to the parsing schema. Something along the lines of what you described was discussed at length in the following thread, maybe it will give you some ideas to try out: Access to owner schema throught APIs
    Scott

  • File and directory names with Danish characters

    I have installed the Novell Client v2.0 for Linux on my Open Suse 10.3. The Client is connecting to my Netware servers (6.0 & 6.5) without any problems...
    There is one problem... Filenames and directory names with the Danish æ, ø & å (ae, oe & aa), e.g. the filename bøger.doc (bøger = books), are shown with garbled names, and when clicking such a file it disappears from the file list. It seems to be the same problem with the German umlauts.
    What to do?
    /Michael

    Originally Posted by J.H.M. Dassen (Ray)
    mimo <[email protected]> wrote:
    > There is one problem... Filenames and directory names with the Danish æ,
    > ø & å (ae, oe & aa), e.g. the filename bøger.doc (bøger = books), are shown
    > with garbled names, and when clicking such a file it disappears from the
    > file list. It seems to be the same problem with the German umlauts. What to do?
    As far as I know, the Novell Client for Linux expects that file and
    directory names use the UTF-8 encoding and does not support a traditional
    8-bit encoding like ISO 8859-1. Try changing the encoding of file and
    directory names to UTF-8 as described in
    SDB:Converting Files or File Names to UTF-8 Encoding - openSUSE
    HTH,
    Ray Dassen
    Technical Support Engineer, EMEA Services Center, Novell Technical Services
    Novell, Inc. Software for the Open Enterprise
    Seems a good hint. When I create a folder or file from within SUSE using an "Umlaut" everything is OK and NCL 2.0 displays them correctly as they are UTF-8 formatted.
    The proposed tool is no solution: one cannot convert folders or files that one cannot see (does it work for folders at all?). Maybe a Windows tool would work, because one could search for all files or folders with "Umlaute" and convert them. Other options?
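    For what it is worth, the SDB article essentially comes down to running convmv on a machine where the raw ISO 8859-1 names are still visible (for example directly on the server volume rather than through the Novell client). A minimal sketch, assuming /data/shared and an ISO 8859-1 source encoding:
    # dry run first: convmv only prints what it would rename until --notest is given
    convmv -f iso-8859-1 -t utf-8 -r /data/shared
    # then actually rename files and directories recursively
    convmv -f iso-8859-1 -t utf-8 -r --notest /data/shared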

  • Using dbms_datapump package to export the schema with the schema name as pa

    Hi,
    I am using a PL/SQL block to export a schema using the dbms_datapump package. Now I want to pass the schema name as a parameter to the procedure and get the .dmp and .log files with the schema name included.
    CREATE OR REPLACE PROCEDURE export
    IS
    h1 number;
    begin
    h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'export1', version => 'COMPATIBLE');
    dbms_datapump.set_parallel(handle => h1, degree => 1);
    dbms_datapump.add_file(handle => h1, filename => 'EXPDAT.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
    dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''CHECKOUT'')');
    dbms_datapump.set_parameter(handle => h1, name => 'ESTIMATE', value => 'BLOCKS');
    dbms_datapump.add_file(handle => h1, filename => 'EXPDAT%U' || to_char(sysdate,'dd-mm-yyyy') || '.DMP', directory => 'DATA_PUMP_DIR', filetype => 1);
    dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
    dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
    dbms_datapump.detach (handle => h1);
    exception
    when others then
    raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
    end;
    Thank you in advanced
    Sri

    user12062360 wrote:
    Hi,
    I am using a PL/SQL block to export a schema using the dbms_datapump package. Now I want to pass the schema name as a parameter to the procedure and get the .dmp and .log files with the schema name included.
    OK, please proceed to do so
    >
    CREATE OR REPLACE PROCEDURE export
    IS
    h1 number;
    begin
    h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'export1', version => 'COMPATIBLE');
    dbms_datapump.set_parallel(handle => h1, degree => 1);
    dbms_datapump.add_file(handle => h1, filename => 'EXPDAT.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
    dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''CHECKOUT'')');
    dbms_datapump.set_parameter(handle => h1, name => 'ESTIMATE', value => 'BLOCKS');
    dbms_datapump.add_file(handle => h1, filename => 'EXPDAT%U' || to_char(sysdate,'dd-mm-yyyy') || '.DMP', directory => 'DATA_PUMP_DIR', filetype => 1);
    dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
    dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
    dbms_datapump.detach (handle => h1);
    exception
    when others then
    raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
    end;
    The WHEN OTHERS exception handler is a bug waiting to happen; eliminate it entirely.
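    As a hedged point of comparison rather than the dbms_datapump answer asked for above, the expdp command-line client makes it easy to embed a schema name supplied by the caller into the dump and log file names; the system credentials, the DATA_PUMP_DIR directory object and the CHECKOUT schema below are placeholders.
    # schema name passed in by the caller (placeholder value)
    SCHEMA=CHECKOUT
    # schema-mode export with the schema name and date embedded in the file names
    expdp system/password schemas=$SCHEMA directory=DATA_PUMP_DIR \
      dumpfile=${SCHEMA}_%U_$(date +%d-%m-%Y).dmp \
      logfile=${SCHEMA}_$(date +%d-%m-%Y).log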

  • Process flow - Active Directory integration with Enterprise Portal

    Hi
    I have seen a number of documents/forum discussions on integrating Microsoft Active Directory (LDAP) with Enterprise Portal, but I am unable to find the process flow for achieving this.
    I have installed Enterprise Portal 6 (SP13) running on Web AS 640 (J2EE Standalone). The UME is currently configured to use the Java database (i.e. datasourceconfiguration_database_only.xml).
    I intend to proceed as below for integrating with Active Directory and integrate with Windows authentication:
    1) Configure UME to use an LDAP Server as Data Source using Config Tool
    http://help.sap.com/saphelp_erp2004/helpdata/en/cc/cdd93f130f9115e10000000a155106/frameset.htm
    2) Configure Enterprise Portal UME i.e http://<host name>:50000/irj - System Administration - System Configuration - UM Configuration
    Should I configure Data Sources & LDAP Server here, as I have already configured these using the J2EE Config Tool (point no. 1)?
    3) Integrate Windows authentication with EP using IISProxy module.
    I hope the above will enable me to log on to the Portal without supplying a username and password once I am logged on to the PC with my Windows username and password.
    Also, are any schema updates required to Active Directory, i.e. what additional data is stored in AD?
    I would appreciate your guidance on this.
    Thanks in advance,
    Chandu

    Hi Chandu,
    You wanted some users not to be taken into account by the User Management Engine (UME).
    This behavior can be established by specifying the ume.ldap.negative_user_filter property for the LDAP data sources in the data source configuration file. Using this property one can define that all users and accounts that match the defined conditions are filtered out by the UME API.
    A detailed documentation can be found in the SAP Online Help:
    http://help.sap.com/saphelp_nw04/helpdata/en/9a/f43541b9cc4c0de10000000a1550b0/content.htm
    In the following example of a data source configuration file for Microsoft Active Directory Server, the attribute userPrincipalName is used as the Logon ID of a portal user (j_user). Here the user accounts that have one of the following Logon IDs (index_service, notificator_service and cmadmin_service) are filtered out.
    <dataSources>
    <dataSource id="CORP_LDAP">
    <privateSection>
    <ume.ldap.negative_user_filter>
    userPrincipalName=[index_service,notificator_service,cmadmin_service]
    </ume.ldap.negative_user_filter>
    </privateSection>
    </dataSource>
    </dataSources>

  • Content migrated with ICE, Name property and Check In

    I have pulled several KM folders, originally located in another portal, into a new portal server with Content Exchange (online ICE protocol).
    When I edit locally one of these migrated files, save, and check it in, I get a message complaining that the resource has no name.
    Three curious things:
    1) If I open the Details view for that file, I can see that the resource's name property is indeed correctly set.
    2) After choosing Edit locally, the message reports the correct name of the resource!
    You are currently editing the document "REAL_NAME_OF_THE_FILE.doc"...
    3) From this page, if I choose Show Properties (showing me again the correct resource's name) and subsequently Hide Properties, no problem arises when checking the file in.
    Please notice that:
    1) This problem involves only the resources migrated with ICE.
    2) If I change either the resource ID or its name from the Details view, no problem arises anymore.
    We're using the ActiveX control to edit locally. Our portal version is:
    J2EE Engine 6.40 PatchLevel 87289.311
    Portal 6.0.9.0.0
    KnowledgeManagementCollaboration 6.0.9.3.0 (NW04 SPS09 Patch3)
    Have you gurus any idea?
    Thanks all, Davide

    I can't find any SAP note about such a problem. Has anyone any info about it?
    Thanks, Davide

  • Help With Multiple Schemas In Multiple Environments

    Dear Oracle Forum:
    We have a bit of controversy around the office and I was hoping we could get some expert input to get us on the right track.
    For the purposes of this discussion, we have two machines, development and production. Currently, on each machine, we have one database with multiple schemas, say, one for sales data and another for inventory. The sales data has maybe 200 tables and the inventory has another 50. About 12 times a year, once a month, we have a release and move code from dev to prod. The database is accessed by several hundred Pro*C and Pro*Cobol programs for online transaction processing.
    The problem comes up when we need to have multiple development environments. If I need to work on something for May that requires the customer address field to be 50 characters and somebody else is working on something for July that requires the customer address field to be 100 characters, we can’t both function in the same schema. We have a method of configuring running programs to attach to a given schema/database. Currently, everything connects to the same place. We were told that we should not have the programs running as the owners of the schemas for some reason so we set up additional users. The SALES schema is accessed with the connect string: SALES_USER/[email protected]. (I don’t know where we got dot world from but that is not the current discussion.)
    One of the guys said that we should have 12 copies of the database running, which is kind of painful to think about in my opinion. Oracle is not a lightweight product and there are any number of ancillary processes that would have to be duplicated 12 times.
    My recommendation is that we have 12 schemas each for sales and inventory with 12 users each to access them. We would have something like JAN_SALES_USER, FEB_SALES_USER, etc. Each user would have synonyms set up for each of the tables it is interested in. When my program connects as MAY_SALES_USER, I could select from the customer table and I would get my 50 character address field. When the other user connects as JUL_SALES_USER, he would get his 100 character address field. Both of us would not know anything different.
    Another idea that came up is to have a logon trigger that would set the current schema for that user to the appropriate base schema. When JUL_SALES_USER logs in, the current schema would be set to JUL_SALES, etc. This would simplify things by allowing us to avoid having something like 2400 synonyms to maintain (which could be automated without too much difficulty) but it would complicate things by requiring a trigger.
    There are probably other ways to go about this we have not considered as yet. Any input you can give will be appreciated.
    Regards,
    /Bob Bryan

    Hans Forbrich wrote:
    I'd rather see you with 12 schemas than with 12 databases. Unless you have lots of CPUs to spare ... and lots of cash to pay for those extra CPU licenses.
    Then again, I'd take it one step further and ask to investigate the base design. There should be little reason to change the schema based on time. Indeed, from what little I know of your app, I'd have to ask whether adding a 'date' column and appropriate views or properly coded SQL statements might simplify things.
    Interesting. If we were to have one big Customer table with views for each month, how would we handle the case where the May people have to see a 50 character address and July have to see a 100 character address field? I guess we could have MAY_ADDRESS VARCHAR2(50) and JULY_ADDRESS VARCHAR2(100) and take care to make sure that people connecting as May can only see the May columns, etc. Is this simpler than multiple schemas?
    I may have overly simplified things in my effort to get something down that would not require too much explanation. The big thing is that multiple people are doing development and they have to be independent of each other. If we were to drop a column for July, the May people will have trouble compiling if we don’t keep things separate. It is not a case of making the data available. The data in development is something we cook up to allow us to test. The other part is the code we compile now will be released to production one of these times. In production, there is only a need for one database.
    We are moving from another database product where multiple databases are effectively different sets of files. We have lots of disk space so multiple databases were no problem. Oracle is such a powerful product; I can’t believe there is not some way to set up something similar.

  • Signature Schemas/Status values with Signature schemas need to be locked?

    I am trying to assign a signature schema to multiple values in a single status schema. If I assign a signature schema to a status, it automatically locks the document into that status so that no further changes to the status can be made. I need to have a signature schema assigned to multiple status values in the same status schema, so I can't have "A Approved" be locked, because I still need the document to be B approved as well.
    Does anyone know how I can adjust the settings so that status values with signature schemas attached will not automatically be locked?
    Status Schema:
    Not Started
    A Approved: X signature schema
    B Approved: Y signature schema

    Hello James,
    this is not possible. As I do not quite understand why you want to set up your status schema like that, it is difficult to propose another solution.
    If you want to stay with the schema you have mentioned, you will have to unlock a document which has been approved by A on the attribute dialog and set the new status "B Approved" there. But this means that the person who is doing that has to sign the document as first signee of the second signature process.
    Best regards
      Jürgen

Maybe you are looking for

  • Using combination of insert into and select to create a new record in the table

    Hello: I'm trying to write a stored procedure that receives a record locator parameter and then uses this parameter to locate the record and then copy this record into the table with a few columns changed. I'll use a sample to clarify my question a b

  • Charms bar doesnt work

    After updating my os from win 8 to win 8.1 I started experiencing problems with charms bar. problem 1 :wheni point my mouse pointer to right edge of the screen charms bar doesnot appear unless I restart windows explorer in task manager.Even start but

  • HOW-TO: Improving the interface of Xaw3d apps (emacs,xdvi)

    This mini HOW-TO is to a) improve the appearence and b) to give a more intuitive feel to apps using the Athena Widgets, i.e. emacs, xdvi and xemacs. Enjoy! Emacs: You can use the neXtaw widgets instead of the xaw3d widgets. # $Id: PKGBUILD,v 1.1 2005

  • Why can't I print PDF documents from Safari correctly

    When we go online to get our credit card statements they are always in PDF format.  My husband likes a paper copy so he can reconcile himself.  When we print the document all we get are six very mini copies in the upper left corner of an 81/2 x 11 sh

  • UI Components Library Download

    Hi, From where can I download the UI Components Library? The UICL sample reference application was earlier available for download from the "Sample Applications" section under Eden Downloads. We are trying to implement type ahead search in our applica