How to migrate from an existing database user management to Active Directory?

Hello experts,
we are running a portal with more than 2,000 users. So far, user management has been handled by the portal's own identity management, with the database as the data source.
However, for several reasons we would like to use the company's existing Active Directory (AD) instead of the database as the data source for identity management. That is, we would like to use only AD users and AD groups in the portal.
All users who are currently in the portal's database can also be found in the company's Active Directory. Fortunately, the users have the same ID in both the database and AD.
We know that migrating from the database to AD is a big undertaking, since many portal objects depend on the existing structures. However, because the user IDs are identical in both systems, we hope to find a way to "override" the existing user management data with the AD data without losing the existing settings (e.g. KM permissions, user profiles, etc.).
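For instance, one could verify that ID mapping up front with a quick check against AD. A minimal sketch, assuming the ActiveDirectory PowerShell module and a hypothetical portal_ids.txt export of portal user IDs (one per line, matching sAMAccountName):
# Report portal user IDs that do not resolve to an AD account
Import-Module ActiveDirectory
Get-Content .\portal_ids.txt | ForEach-Object {
    if (-not (Get-ADUser -Filter "sAMAccountName -eq '$_'")) {
        Write-Host "$_ not found in Active Directory"
    }
}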
Generally, I am asking whether you have already had experience with changing the user management data source of a "live" portal (several thousand users) to Active Directory user management.
What problems can occur?
Which modifications need to be made?
Which portal objects are affected by the migration?
Is a migration possible at all?
I would appreciate any suggestions, remarks, or ideas.
Thanks in advance.
Thomas

Hello experts,
the current permissions on the KM objects are based on both groups and users from the database.
Because it is not possible to modify a group's display name in the portal's database, we would also like to use LDAP groups in the portal: in future, all users and groups in the portal shall be managed by Active Directory.
In Active Directory it is possible to modify the display name of groups. This is a necessary feature because of the departmental reorganisations that occur in our company from time to time.
Creating new groups with the new department names is not an option, because one would have to assign all department members to the new group again, and one would also need to assign the new group to the ACLs of all KM objects in question. That is far too much effort.
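For illustration, the AD-side rename itself would then be a one-liner; group membership and the group's identity are unchanged by it. A minimal sketch, assuming the ActiveDirectory PowerShell module and hypothetical group names:
# Change only the display name; members and the group identity stay intact
Import-Module ActiveDirectory
Set-ADGroup -Identity "Dept-OldName" -DisplayName "Dept-NewName"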
However, thank you for that hint Michael.
Any other experiences?
I would appreciate any ideas or foreseen problems.
Thomas

Similar Messages

  • How to: Migrate from an existing 595 to a new 595 in RAC?

    Hi all,
    This is a somewhat complicated task for me, so first let me explain our environment:
    We have a two-instance RAC on 595 servers, and an 8300 is our storage. Because of our situation we are not able to add memory and CPU to the existing 595 servers, so we are planning to set up new 595 servers with a higher configuration.
    Since this is a new area for me, I have no idea how to migrate onto the new 595 servers.
    FYI: the storage will not change; it will stay as it is.
    I need docs, or please share your ideas and the steps you followed, if anyone has already done this.
    Waiting for your reply.
    Thanks & Regards
    M.Murali..

    If your objective is to replace the existing 595s with new ones, then follow the steps below:
    1. Configure both new servers (595) for RAC installation - follow the pre-install steps required for installing Clusterware etc., e.g. storage, network, ssh.
    2. Add both new servers as "node3" and "node4" to the existing 2-node cluster - follow the "node add" procedure in the documentation.
    3. Turn off the first 2 nodes on the old servers (node1 and node2).
    4. Test your application with only the new nodes.
    5. Remove the old nodes using the "node delete" procedure given in the documentation.
    reference - http://download.oracle.com/docs/cd/B19306_01/rac.102/b14197/adddelunix.htm#BEICADHD
    - Siba
    http://dbathoughts.blogspot.com

  • Using PowerShell to import CSV data from Vendor database to manipulate Active Directory Users

    Hello,
    I have a big project I am trying to automate. I work in a K-12 public education IT department and have been tasked with importing data that has been exported from a vendor database via a .csv file into Active Directory to manage student accounts.
    My client wants to use this data to make bulk changes to student user accounts in AD, such as moving accounts from one OU to another, modifying account attributes based on State ID, lunchroom ID, school, grade, etc., and adding new accounts / disabling accounts for students no longer enrolled.
    The .csv that is exported doesn't have headers that match up with what is needed for importing into AD, so those have to be modified in this process, or set as variables, to get the correct info into the correct attributes in AD, or else this whole project is a bust. He is tired of manually manipulating the .csv data and trying to get it into AD with few or no errors, hence the reason it has been passed off to me.
    Since this information changes practically daily, I need a way to automate user management by accomplishing the following on a scheduled basis.
    The process must (a rough sketch of this flow follows the list):
    - Check to see if the Student Number already exists
      - If yes, modify the account:
        - Update {School Name}, {Site Code}, {School Number}, {Grade Level} (variables)
        - Add correct group memberships (school / grade specific)
        - Move account to correct OU (OU={Grade},OU=Students,OU=Users,OU={SiteCode},DC=Domain,DC=net)
        - Remove incorrect group memberships (school / grade specific)
        - Set account status (enabled / disabled)
      - If no, create the account:
        - Import Student #
        - Import CNP #
        - Import Student name
          - Extract first and middle initial
          - If a duplicate name exists, create a log entry for review
        - Import School, School Number, Grade Level
        - Add correct group memberships (school / grade specific)
        - Set correct OU (OU={Grade},OU=Students,OU=Users,OU={SiteCode},DC=Domain,DC=net)
        - Set account status
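    A rough skeleton of that create-or-update flow in PowerShell might look like the sketch below. The column names are taken from the export used later in this thread, the OU path is a simplified placeholder, and none of it has been tested against a real domain:
    # Sketch: per CSV row, decide whether to update or create the student account
    Import-Module ActiveDirectory
    $rows = Import-Csv C:\ADUpdate\INOW_export.csv
    foreach ($row in $rows) {
        # employeeID is assumed to carry the student number
        $existing = Get-ADUser -Filter "employeeID -eq '$($row.'Student Number')'" -Properties employeeID
        if ($existing) {
            # Modify path: update attributes (group and OU moves work the same way)
            Set-ADUser $existing -Department $row.'Grade Level' -Company $row.'School Number'
        }
        else {
            # Create path: derive the name and create the account in a placeholder OU
            New-ADUser -Name "$($row.'First Name') $($row.'Last Name')" `
                -GivenName $row.'First Name' -Surname $row.'Last Name' `
                -EmployeeID $row.'Student Number' `
                -Path "OU=Students,OU=Users,DC=Domain,DC=net"
        }
    }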
    I am not familiar with PowerShell, but have researched enough to know that it will be the best option for this project. I have seen some partial solutions in VB, but I am more of an infrastructure person than a scripting / software development person.
    I have just started creating a script and already have hit a snag.  Maybe one of you could help.
    #Connect to Active Directory
    Import-Module ActiveDirectory
    # Import iNOW user information
    $Users = import-csv C:\ADUpdate\INOW_export.csv
    #Check to see if the account already exists in AD
    ForEach ( $user in $users )
    #Assign the content to variables
    $Attr_employeeID = $users."Student Number"
    $Attr_givenName = $users."First Name"
    $Attr_middleName = $users."Middle Name"
    $Attr_sn = $users."Last Name"
    $Attr_postaldeliveryOfficeName = $users.School
    $Attr_company = $users."School Number"
    $Attr_department = $users."Grade Level"
    $Attr_cn = $Attr_givenName.Substring(0,1) + $Attr_middleName.Substring(0,1) + $Attr_sn
    IF (Get-ADUser $Attr_cn)
    {Write-Host $Attr_cn already exists in Active Directory

    Thank you for helping me with that before it became an issue later on; however, even when modified to $Attr_sAMAccountName I still get errors.
    #Connect to Active Directory
    Import-Module ActiveDirectory
    # Import iNOW user information
    $Users = import-csv D:\ADUpdate\Data\INOW_export.csv
    #Check to see if the account already exists in AD
    ForEach ( $user in $users )
    #Assign the content to variables
    $Attr_employeeID = $users."Student Number"
    $Attr_givenName = $users."First Name"
    $Attr_middleName = $users."Middle Name"
    $Attr_sn = $users."Last Name"
    $Attr_postaldeliveryOfficeName = $users.School
    $Attr_company = $users."School Number"
    $Attr_department = $users."Grade Level"
    $Attr_sAMAccountName = $Attr_givenName.Substring(0,1) + $Attr_middleName.Substring(0,1) + $Attr_sn
    IF (Get-ADUser $Attr_sAMAccountName)
    {Write-Host $Attr_sAMAccountName already exists in Active Directory
    PS C:\Windows\system32> D:\ADUpdate\Scripts\INOW-AD.ps1
    Get-ADUser : Cannot convert 'System.Object[]' to the type 'Microsoft.ActiveDirectory.Management.ADUser'
    required by parameter 'Identity'. Specified method is not supported.
    At D:\ADUpdate\Scripts\INOW-AD.ps1:28 char:28
    + IF (Get-ADUser $Attr_sAMAccountName)
    + ~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : InvalidArgument: (:) [Get-ADUser], ParameterBindingException
    + FullyQualifiedErrorId : CannotConvertArgument,Microsoft.ActiveDirectory.Management.Commands.GetADUser
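    The root cause of that error: inside the loop, the attributes are read from the whole $Users collection instead of the per-iteration $user variable, so each variable becomes an array of every row's values, and Get-ADUser is then handed a System.Object[] as its -Identity. A minimal corrected sketch (same column names as above); note that -Filter simply returns nothing when no account matches, whereas -Identity throws:
    # Corrected loop: read from $user (current row), not $users (whole collection)
    Import-Module ActiveDirectory
    $Users = Import-Csv D:\ADUpdate\Data\INOW_export.csv
    ForEach ($user in $Users) {
        $Attr_givenName  = $user."First Name"
        $Attr_middleName = $user."Middle Name"
        $Attr_sn         = $user."Last Name"
        $Attr_sAMAccountName = $Attr_givenName.Substring(0,1) + $Attr_middleName.Substring(0,1) + $Attr_sn
        # -Filter returns $null for no match instead of raising an error
        if (Get-ADUser -Filter "sAMAccountName -eq '$Attr_sAMAccountName'") {
            Write-Host "$Attr_sAMAccountName already exists in Active Directory"
        }
    }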

  • How to migrate from a dBase database to an Oracle 11g Release 2 database

    Hi,
    I have a dBase database and I need to migrate it to an Oracle 11g Release 2 database.
    Please suggest how to proceed.
    Thanks,
    Gaurav

    GauravJ wrote:
    Hi,
    I have a dBase database and I need to migrate it to an Oracle 11g Release 2 database.
    Please suggest how to proceed.
    Thanks,
    Gaurav

    http://search.cpan.org/dist/CAM-DBF/lib/CAM/DBF.pm
    Perl can access both DBs simultaneously,
    so all you have to do is write some custom code to SELECT from the dBase DB and INSERT into the Oracle DB.

  • How to migrate an Oracle 9i database to Oracle 10g

    Hi all!
    Can anyone tell me the steps to migrate an Oracle database from 9i to 10g?
    What are the prerequisites and preliminary things to consider...
    anything related to the migration.
    Thank you.

    Which one is better?
    Thanks a lot.
    itpub888
    Hi,
    you can go with any of the migration paths provided by Oracle, e.g.:
    1. Using the DBUA
    2. Using the PL/SQL scripts provided by Oracle, such as utlu102i.sql
    3. Using the exp/imp utilities
    4. Using COPY
    For further info, you can refer to the 10g upgrade guide available on OTN.
    Regards
    ramesh

  • [HTML DB] How to use the existing database table?

    I installed the Oracle 10g database on Computer A (Windows 2000), and I have already created all the tables with data; the data size is about 300 MB.
    On Computer B (Windows 2000), I installed HTML DB 1.6.
    How can I use / get at the existing database tables (on Computer A) from HTML DB?
    Could anyone help me with this? I am a newbie and I need some detailed instructions, or a pointer to where I can find examples...
    Thanks

    Well, I guess if you wish to retain that architecture, i.e. HTML DB on one machine and your data on another, you will have to establish database links to access the data. The Oracle documentation describes how to achieve that.

  • Migrate from a Red Brick database to Oracle 10g

    Hi Folks,
    Has any of you done a migration from a Red Brick database to Oracle 10g? Please help me out with the steps; I tried to search for a document with no luck.
    I'd appreciate it if you could point me to a doc or a white paper that covers the migration.
    Thanks in advance.
    Karthik.

    Red Brick is now part of the Informix family; maybe the Migration Workbench helps:
    http://www.oracle.com/technology/tech/migration/workbench/index.html
    Werner

  • HT1444 How to migrate from the Yosemite beta to the full version

    Hi
    how do I migrate from the Yosemite beta to the full version?

    When I look in "About This Mac", it says Yosemite 10.10. Do I really need to download it again?
    I found on a blog that someone contacted Apple support:
    “simply install the final version of the software you are testing when it appears in Software Update” – meaning the final version of the beta software you were given to test is what will be made public. So we're on the same build as what is now in the App Store.
    My computer's OS X is 10.10, but the build is 14A388b, while some say the App Store version is build 14A389.
    Does it really make a difference? If I don't download it, will it continue to update?

  • How to Migrate the Existing Content

    Hi everyone,
      I want to know how to migrate the existing content in Interwoven to a new content management system.
    Thanks & Regards
    Vasu

    Hi Vasu,
    as said, I don't know Interwoven in detail. But if you were to get the Interwoven Business Package, maybe you could get some insight into how to access Interwoven content via its API (at least Interwoven does have a Java API, see http://blogs.ittoolbox.com/km/content/archives/005150.asp for example; but I would ask Interwoven, or in an Interwoven forum, for those details, and on this site for the details of the SAP KM API...).
    Hope it helps
    Detlev
    PS: Please consider rewarding points for helpful answers on SDN. Thanks in advance!

  • Does one of the Lync SQL databases store the Active Directory username or SID of the person who made a call?

    I am trying to write a report that uses data from Lync (2010), Active Directory (AD) and other databases.
    I need to match data from Lync with records in Active Directory.
    When you make/receive a call, the session details have a userid column - a foreign key to the users table, which has the UserURI - the user's email address or telephone number.
    However, when trying to match the data, I have noticed that someone's email address can change, so that what is in Active Directory does not match what is used as the SIP address in Lync.
    I need a field that matches in Active Directory and Lync to be able to link a user's call records with their Active Directory records.
    I was wondering how Lync decides which Lync user you are when it automatically logs you in.
    Does it do it on the basis of your phone number, AD username or something else?
    If so, where in Lync does it store the mapping from whatever it uses to your Lync userid?
    Greg

    The msRTCSIP-PrimaryUserAddress attribute in AD is where the user's SIP address is stored.
    This can still change, but generally that should not happen very often, except perhaps on a name change or domain name change.
    Almost everything in Lync is based on the SIP address. In the CDR's case, it is just recording SIP messages as they pass through the front end; it has no visibility into the actual AD account that sent them.
    If you need to match user SIP addresses back to live AD accounts, even after a SIP address change, then I would recommend setting up a custom AD attribute to store their SIP address history, and have a policy to update that attribute each time someone's SIP address gets changed.
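    For illustration, such an update could look like the sketch below, assuming the ActiveDirectory PowerShell module, a hypothetical user jdoe, and extensionAttribute10 as a stand-in for whatever custom attribute you provision for the history:
    # Append the user's current SIP address to a history attribute before it changes
    Import-Module ActiveDirectory
    $u = Get-ADUser jdoe -Properties 'msRTCSIP-PrimaryUserAddress','extensionAttribute10'
    $history = ($u.extensionAttribute10, $u.'msRTCSIP-PrimaryUserAddress') -join ';'
    Set-ADUser jdoe -Replace @{extensionAttribute10 = $history.Trim(';')}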

  • How do I use OEM 12c to monitor Microsoft Active Directory?

    Hi,
    How do I use OEM 12c to monitor Microsoft Active Directory? Please assist me with this.
    Thanks,
    Sagar

    Hi,
    The fundamental problem with this scenario is that you have non-failover-capable modules in a failover chassis - think of the ASA failover pair as one device and the IPS modules as two completely separate devices.
    Then, as already mentioned, add only the primary ASA. (The secondary will never be passing traffic in standby mode, so it's not actually needed in MARS.) Then, with the first IPS module, you can add it as a module of the ASA or as a standalone device (MARS doesn't care). With the second IPS module, the only option is to add it as a separate device anyway.
    In a failover scenario the ASAs swap IPs but the IPS modules don't, so whereas you'll only ever get messages from the active ASA, you'll get messages from both IPS IPs, depending on which one happens to be in the active ASA at the time.
    Don't forget that you have to manually replicate all IPS configuration every time you make a change.
    HTH
    Andrew.

  • I am new - How to make an internet-enabled group in my Active Directory 2003?

    I am new. How do I make an internet-enabled group in my Active Directory 2003?
    Thanks & Regards, Amol Dhaygude

    Greetings!
    What is Internet Enabled Group? Would you please clarify this?
    Mahdi Tehrani
    www.mahditehrani.ir

  • How to migrate from a standard store setup to a split store (msg / idx) setup

    How can I migrate from a standard store setup to the split setup described in
    https://wikis.oracle.com/display/CommSuite/Best+Practices+for+Messaging+Server+and+ZFS ?
    Can a 'reconstruct' run do the migration, or do I have to do an
    imsbackup - imsrestore?

    If your new setup uses the same filesystem layout as the old one (i.e. directory paths to the files are the same when your migration is complete), you can just copy the existing store into the new structure, rename the old store directory to some other name, and mount the new hierarchy in its place (zfs set mountpoint=...).
    The CommSuite Wiki also includes pages on more complex migrations, such as splitting the user populace into several stores (on different storage) and/or separate mailhosts. That generally requires that you lock the user in LDAP (perhaps deferring their incoming mail for later processing into the new location), migrate the mailbox, rewrite the pointers in LDAP, and re-enable the account. The devil is in the details, for both methods. For the latter, see the Wiki; for the former I'll elaborate a bit here.
    1) To avoid any surprises, you should stop the messaging services before making the filesystem switch, finalize the data migration (probably with prepared data already mostly correct in the new hierarchy before you shut down the server, just resyncing the recent changes into the new structure), make the switch, and re-enable the server. If this is a lightly-used server which can tolerate some downtime - good for you. If it is a production server, you should schedule some time when it is not heavily used so you can shut it down, and try to be fast - so perhaps practice on a test system or a clone first.
    I'd strongly recommend taking this adventure in small reversible steps, using snapshots and backups, and renaming old files and directories instead of removing them - until you're sure it all works, at least.
    2) If your current setup already includes a message store on ZFS, and it is large enough for size to be a problem, you can save some time and space by tricks that lead to direct re-use of existing files as if they are the dataset with a prepopulated message store.
    * If this is a single dataset with lots of irrelevant data (i.e. one dataset for the messaging local zone root with everything in it, from OS to mailboxes) you can try zfs-cloning a snapshot of the existing filesystem and moving the message files to that clone's root (eradicating all irrelevant directories and files on the clone). Likewise, you'd remove the mailbox files on the original system (when the time is right, and after sync-ing).
    * If this is already a dedicated store dataset which contains the directories like dbdata/    mboxlist/  partition/ session/   and which you want to split further to store just some files (indices, databases) separately, you might find it easier to just make new filesystem datasets with proper recordsizes and relocate these files there, and move the partition/primary to the remaining dataset's root, as above. In our setups, the other directories only take up a few megabytes and are not worth the hassle of cloning - which you can also do for larger setups (i.e. make 4 clones and make different data at each one's root). Either way, when you're done, you can and should make sure that these datasets can mount properly into the hierarchy, yielding the pathnames you need.
    3) You might also look into separating the various log-file directories into datasets, perhaps with gzip-9 compression. In fact, to reduce needed IOPS and disk space at expense of available CPU-time, you might use lightweight compression (lzjb) on all messaging data, and gzip on WORM data sets - local zone, but not global OS, roots; logs; etc. Structured databases might better be left without compression, especially if you use reduced record sizes - they might just not compress enough to make a difference, just burning CPU cycles. Though you could look into "zle" compression which would eliminate strings of null bytes only - there's lots of these in fresh database files.
    4) If you need to recompress the data as suggested in point (3), or if you migrate from some other storage to ZFS, rsync may be your friend (at least, if your systems don't rely on ZFS/NFSv4 ACLs - in that case you're limited to Solaris tar or cpio, or perhaps to very recent rsync versions which claim ACL support). Namely, I'd suggest "rsync -acvPHK --delete-after $SRC/ $DST/" with maybe some more flags added for your needs. This would retain the hardlink structure which Messaging server uses a lot, and with "-c" it verifies file contents to make sure you've copied everything over (i.e. if a file changes without touching the timestamp).
    Also, if you were busy preparing the new data hierarchy with a running server, you'd need to rsync old data to new while the services are down. Note that reading and comparing the two structures can take considerable time - translating to downtime for the services.
    Note that if you migrate from ZFS to ZFS (splitting as described in (2)), you might benefit from "zfs diff" if your ZFS version supports it - this *should* report all objects that changed since the named snapshot, and you can try to parse and feed this to rsync or some other migration tool.
    Hope this helps and you don't nuke your system,
    //Jim Klimov

  • How to Migrate from SAP BO XI 3.1 system to SAP BI 4.1

    Hello Gurus,
    I got a new project, and I have to start an upgrade and migration from BO XI R3.1 to BI 4.1. Please help me out here. More details are given below. Appreciated in advance.
    1.1    Technical Scope
    Installation of only a production SAP BI 4.1 environment.
    Repository is currently on DB2 but will be on SQL Server for the BI 4.1 implementation.
    All VMware machines: The new architecture calls for 12 VM servers.
    Row-level security in the universe for authorization of content, and Enterprise for authentication (no SSO). Matrix security model with custom-level groups, which gives Basic, Intermediate, and Advanced level users access to pre-defined folders and content.
    Migrate content (objects, universe and instances) from SAP BO XI 3.1 to SAP BO BI 4.1 for the technical upgrade, the details are below:
    Universes will stay in the UNV format and will not be converted to UNX.
    Only WebI Documents
    All Controlled Folders - (~5600 documents)
    174 Total objects in Public Documents folders (All documents to upgrade & remediate)
    5,433 Total WebI reports in Corporate and other folders (All documents to upgrade, & remediate)
    User Folders – (~6,000 documents)
    6,000+ WebI documents accessed in 2013 (all 6,000+ documents to upgrade, NO remediation)
    Inbox – None will migrate
    Total Documents to remediate: up to 5,607
    Migrate all universes and connections
    Migrate all Xcelsius and agnostic documents with no remediation.
    The estimated report count for remediation, by complexity: Low - 1,525, Medium - 150 and High - 50.
    Assumes a 10% report remediation effort as described in earlier sections.
    Report remediation, should it exceed the base assumptions made in this document, will be implemented as a Change Order. The effort for such a change will be mutually agreed between the parties. The price to the project is determined using this effort and blended rates.
    Testing:
    Conduct planning and inventory analysis; Reusable templates for Migration Plan, Validation.
    Perform migration
    Use the Right Sized Testing framework to plan and conduct testing
    Use the automated Reports Compare tools to compare large volumes of excel / xml data
    Template-based remediation ensures quality control
    Thanking you best regards.
    AK.

    Hello Mark,
    Thanks for your help. Appreciated.
    Do you know, or does someone know, how to create a report for audit purposes of the BO 3.1 universes' connection, database type, network layer and so on? I want to pull all the info you see in the pictures into a WebI report.
    Please see the attached file.

  • How to migrate from ascii to unicode (MaxDB 7.5)? loadercli: ERR -25347

    Hi,
    I use MaxDB 7.5.00.26. (OK, I know that I should switch to 7.6; however, it is not possible for now because of a customer restriction, but should be possible quite soon.)
    We'd like to migrate a DB from ASCII to Unicode. Based on the info in the thread "Error at copying database using dumps via loadercli: error -25364" I tried the following:
    Export sourcedb
    1. Export catalog and data
    C:\> loadercli -d db_asc -u dba,dba
    loadercli> export db catalog outstream file 'C:\tmp1\20080702a_dbAsc.catalog' ddl
    OK
    loadercli> export db data outstream file 'C:\tmp1\20080702b_dbAsc.data' pages
    OK
    loadercli> exit
    Import targetdb
    1. Create a new empty DB with '_UNICODE=yes'
    2. Set 'columncompression' to 'no'
    C:\> dbmcli -d db_uni -u dba,dba param_directput columncompression no
    ERR
    -24979,ERR_XPNOTFOUND: parameter not found
    I couldn't find this parameter in the DBMGUI either (parameters: general, extended and support).
    3. Import catalog and data
    C:\> loadercli -d db_uni -u dba,dba
    loadercli> import db catalog instream file 'C:\tmp1\20080702a_dbAsc.catalog' ddl
    OK
    loadercli> import db data instream file 'C:\tmp1\20080702b_dbAsc.data' pages
    ERR -25347
    Encoding type of source and target database do not match: source = ASCII, target
    = UNICODE.
    loadercli> exit
    What is wrong? Does a migration from ASCII to Unicode have to be done some other way?
    Can I migrate a DB from 7.5.00.26 to 7.6.03.15 in the same way, or should it be done differently?
    It would be great if you could point me to a post etc. where these two migrations are explained in detail.
    Thanks in advance - kind regards
    Michael

    Hi,
    I can find neither "USEUNICODECOLUMNCOMPRESSION" nor "COLUMNCOMPRESSION". Could it be that these only exist from MaxDB version 7.6 on, and not in 7.5?
    Kind regards,
    Michael
    The complete parameter list (created by "dbmcli -d db_uni -u dbm,dbm param_directgetall > maxdb_params.txt") is:
    OK
    KERNELVERSION                         KERNEL    7.5.0    BUILD 026-123-094-430
    INSTANCE_TYPE                         OLTP
    MCOD                                  NO
    RESTART_SHUTDOWN                      MANUAL
    _SERVERDB_FOR_SAP                     YES
    _UNICODE                              YES
    DEFAULT_CODE                          ASCII
    DATE_TIME_FORMAT                      INTERNAL
    CONTROLUSERID                         DBM
    CONTROLPASSWORD                       
    MAXLOGVOLUMES                         2
    MAXDATAVOLUMES                        11
    LOG_VOLUME_NAME_001                   LOG_001
    LOG_VOLUME_TYPE_001                   F
    LOG_VOLUME_SIZE_001                   131072
    DATA_VOLUME_NAME_0001                 DAT_0001
    DATA_VOLUME_TYPE_0001                 F
    DATA_VOLUME_SIZE_0001                 262144
    DATA_VOLUME_MODE_0001                 NORMAL
    DATA_VOLUME_GROUPS                    1
    LOG_BACKUP_TO_PIPE                    NO
    MAXBACKUPDEVS                         2
    BACKUP_BLOCK_CNT                      8
    LOG_MIRRORED                          NO
    MAXVOLUMES                            14
    _MULT_IO_BLOCK_CNT                    4
    _DELAY_LOGWRITER                      0
    LOG_IO_QUEUE                          50
    _RESTART_TIME                         600
    MAXCPU                                1
    MAXUSERTASKS                          50
    _TRANS_RGNS                           8
    _TAB_RGNS                             8
    _OMS_REGIONS                          0
    _OMS_RGNS                             25
    OMS_HEAP_LIMIT                        0
    OMS_HEAP_COUNT                        1
    OMS_HEAP_BLOCKSIZE                    10000
    OMS_HEAP_THRESHOLD                    100
    OMS_VERS_THRESHOLD                    2097152
    HEAP_CHECK_LEVEL                      0
    _ROW_RGNS                             8
    _MIN_SERVER_DESC                      16
    MAXSERVERTASKS                        21
    _MAXTRANS                             292
    MAXLOCKS                              2920
    _LOCK_SUPPLY_BLOCK                    100
    DEADLOCK_DETECTION                    4
    SESSION_TIMEOUT                       900
    OMS_STREAM_TIMEOUT                    30
    REQUEST_TIMEOUT                       5000
    _USE_ASYNC_IO                         YES
    _IOPROCS_PER_DEV                      1
    _IOPROCS_FOR_PRIO                     1
    _USE_IOPROCS_ONLY                     NO
    _IOPROCS_SWITCH                       2
    LRU_FOR_SCAN                          NO
    _PAGE_SIZE                            8192
    _PACKET_SIZE                          36864
    _MINREPLY_SIZE                        4096
    _MBLOCK_DATA_SIZE                     32768
    _MBLOCK_QUAL_SIZE                     16384
    _MBLOCK_STACK_SIZE                    16384
    _MBLOCK_STRAT_SIZE                    8192
    _WORKSTACK_SIZE                       8192
    _WORKDATA_SIZE                        8192
    _CAT_CACHE_MINSIZE                    262144
    CAT_CACHE_SUPPLY                      3264
    INIT_ALLOCATORSIZE                    221184
    ALLOW_MULTIPLE_SERVERTASK_UKTS        NO
    _TASKCLUSTER_01                       tw;al;ut;2000*sv,100*bup;10*ev,10*gc;
    _TASKCLUSTER_02                       ti,100*dw;30000*us;
    _TASKCLUSTER_03                       compress
    _MP_RGN_QUEUE                         YES
    _MP_RGN_DIRTY_READ                    NO
    _MP_RGN_BUSY_WAIT                     NO
    _MP_DISP_LOOPS                        1
    _MP_DISP_PRIO                         NO
    XP_MP_RGN_LOOP                        0
    MP_RGN_LOOP                           0
    _MP_RGN_PRIO                          NO
    MAXRGN_REQUEST                        300
    _PRIO_BASE_U2U                        100
    _PRIO_BASE_IOC                        80
    _PRIO_BASE_RAV                        80
    _PRIO_BASE_REX                        40
    _PRIO_BASE_COM                        10
    _PRIO_FACTOR                          80
    _DELAY_COMMIT                         NO
    _SVP_1_CONV_FLUSH                     NO
    _MAXGARBAGE_COLL                      0
    _MAXTASK_STACK                        1024
    MAX_SERVERTASK_STACK                  100
    MAX_SPECIALTASK_STACK                 100
    _DW_IO_AREA_SIZE                      50
    _DW_IO_AREA_FLUSH                     50
    FBM_VOLUME_COMPRESSION                50
    FBM_VOLUME_BALANCE                    10
    _FBM_LOW_IO_RATE                      10
    CACHE_SIZE                            10000
    _DW_LRU_TAIL_FLUSH                    25
    XP_DATA_CACHE_RGNS                    0
    _DATA_CACHE_RGNS                      8
    XP_CONVERTER_REGIONS                  0
    CONVERTER_REGIONS                     8
    XP_MAXPAGER                           0
    MAXPAGER                              11
    SEQUENCE_CACHE                        1
    _IDXFILE_LIST_SIZE                    2048
    _SERVER_DESC_CACHE                    74
    _SERVER_CMD_CACHE                     22
    VOLUMENO_BIT_COUNT                    8
    OPTIM_MAX_MERGE                       500
    OPTIM_INV_ONLY                        YES
    OPTIM_CACHE                           NO
    OPTIM_JOIN_FETCH                      0
    JOIN_SEARCH_LEVEL                     0
    JOIN_MAXTAB_LEVEL4                    16
    JOIN_MAXTAB_LEVEL9                    5
    _READAHEAD_BLOBS                      25
    RUNDIRECTORY                          E:\_mp\u_v_dbs\EVERW_T3
    _KERNELDIAGFILE                       knldiag
    KERNELDIAGSIZE                        800
    _EVENTFILE                            knldiag.evt
    _EVENTSIZE                            0
    _MAXEVENTTASKS                        1
    _MAXEVENTS                            100
    _KERNELTRACEFILE                      knltrace
    TRACE_PAGES_TI                        2
    TRACE_PAGES_GC                        0
    TRACE_PAGES_LW                        5
    TRACE_PAGES_PG                        3
    TRACE_PAGES_US                        10
    TRACE_PAGES_UT                        5
    TRACE_PAGES_SV                        5
    TRACE_PAGES_EV                        2
    TRACE_PAGES_BUP                       0
    KERNELTRACESIZE                       653
    EXTERNAL_DUMP_REQUEST                 NO
    _AK_DUMP_ALLOWED                      YES
    _KERNELDUMPFILE                       knldump
    _RTEDUMPFILE                          rtedump
    _UTILITY_PROTFILE                     dbm.utl
    UTILITY_PROTSIZE                      100
    _BACKUP_HISTFILE                      dbm.knl
    _BACKUP_MED_DEF                       dbm.mdf
    _MAX_MESSAGE_FILES                    0
    _EVENT_ALIVE_CYCLE                    0
    _SHAREDDYNDATA                        10280
    _SHAREDDYNPOOL                        3658
    USE_MEM_ENHANCE                       NO
    MEM_ENHANCE_LIMIT                     0
    __PARAM_CHANGED___                    0
    __PARAM_VERIFIED__                    2008-07-02 21:10:19
    DIAG_HISTORY_NUM                      2
    DIAG_HISTORY_PATH                     E:\_mp\u_v_dbs\EVERW_T3\DIAGHISTORY
    _DIAG_SEM                             1
    SHOW_MAX_STACK_USE                    NO
    LOG_SEGMENT_SIZE                      43690
    SUPPRESS_CORE                         YES
    FORMATTING_MODE                       PARALLEL
    FORMAT_DATAVOLUME                     YES
    HIRES_TIMER_TYPE                      CPU
    LOAD_BALANCING_CHK                    0
    LOAD_BALANCING_DIF                    10
    LOAD_BALANCING_EQ                     5
    HS_STORAGE_DLL                        libhsscopy
    HS_SYNC_INTERVAL                      50
    USE_OPEN_DIRECT                       NO
    SYMBOL_DEMANGLING                     NO
    EXPAND_COM_TRACE                      NO
    OPTIMIZE_OPERATOR_JOIN_COSTFUNC       YES
    OPTIMIZE_JOIN_PARALLEL_SERVERS        0
    OPTIMIZE_JOIN_OPERATOR_SORT           YES
    OPTIMIZE_JOIN_OUTER                   YES
    JOIN_OPERATOR_IMPLEMENTATION          YES
    JOIN_TABLEBUFFER                      128
    OPTIMIZE_FETCH_REVERSE                YES
    SET_VOLUME_LOCK                       YES
    SHAREDSQL                             NO
    SHAREDSQL_EXPECTEDSTATEMENTCOUNT      1500
    SHAREDSQL_COMMANDCACHESIZE            32768
    MEMORY_ALLOCATION_LIMIT               0
    USE_SYSTEM_PAGE_CACHE                 YES
    USE_COROUTINES                        YES
    MIN_RETENTION_TIME                    60
    MAX_RETENTION_TIME                    480
    MAX_SINGLE_HASHTABLE_SIZE             512
    MAX_HASHTABLE_MEMORY                  5120
    HASHED_RESULTSET                      NO
    HASHED_RESULTSET_CACHESIZE            262144
    AUTO_RECREATE_BAD_INDEXES             NO
    LOCAL_REDO_LOG_BUFFER_SIZE            0
    FORBID_LOAD_BALANCING                 NO
