Error in all conversions from R/3 to BW

Hi Sam,
When I am doing extraction I am getting a problem with conversion errors in BW.
i.e. in R/3 everything completes successfully, but in BW, after replicating the DataSource and installing the objects from Business Content, it throws errors in conversions.
It displays errors for all types of conversions, i.e. currency, units, quantity, etc.
2LIS_11_VDHDR
CUBE 0SD_C03 and 0SD_C05
No conversion is happening for any DataSource.
Please give some advice.
Mohhan

Hi Sam,
I set up the LO Cockpit; the DataSource is 2LIS_11_VASCL.
In BW, while monitoring, it throws the error below. I get the same type of problem for the rest of the DataSources.
Please suggest what the problem could be,
e.g. the installation or the selection of any default fields.
The error is
Settings for material number conversion not found
CUBE :: 0CDS_DS04

Similar Messages

  • I keep getting an error when trying to update an app- cannot connect to store. I logged out from my account and tried to log back in and got the same error. I am doing all this from my phone since I no longer own a personal computer (only work)

    I keep getting an error when trying to update an app- cannot connect to store. I logged out from my account and tried to log back in and got the same error. I am doing all this from my phone since I no longer own a personal computer (only work), since I use iCloud and iTunes Match.

    YAY!!! Saved it in my Mac's Firefox Bookmarks for easy future access!
    Hope you are having a lovely afternoon today! I'm about ready to go bobo....I have an early meeting, and I don't want to oversleep! The nice part is that I work remotely, so I only have to wake up 15 minutes or so before the meeting.... I don't even use an alarm clock anymore (really, my iPhone alarm, which is much more pleasant), unless I have to get up at 6:30 or something....
    TMI?
    GB

  • TestStand Deployment Error - Error: Unable to locate all subVIs from saved VIs because a subVI is missing

    Hi,
    I am a Systems and Software Engineer based in Vancouver. I developed an automated test system using LabVIEW 2013 and TestStand 2013 with custom operator interface.
    I encountered a 'missing VIs' problem, which is kind of weird because the sequence passed analysis in both the TestStand Sequence Editor and the TestStand Deployment Utility >> Distributed Files tab.
    But when I tried building the installer, on reaching the point 'Calling distribution VIs' it always throws an error saying 'An error occurred while trying to read the dependencies of the VIs, possibly because the VIs are not saved in the current version of LabVIEW. Do you want to save any modified VIs now?'. I tried both cases (i.e. Yes and No) for this option but it did not solve the issue.
    This is part of the original error message displayed in TestStand Deployment Utility:
    While Processing VIs...
    Error: Unable to locate all subVIs from saved VIs because a subVI is missing or the VI is not saved in the current version of LabVIEW.
    The call chain to missing VIs:
    1 - ATE_AccelerometerTest.vi
    2 - CreateAndMergeErrors.vi (missing)
    3 - LogControl_CheckForErrorSendUpdates.vi (missing)
    All missing VIs are coming from userlib.
    Actions Done:
    - Analyzed sequence file using TestStand Sequence Editor and TestStand Deployment Utility
    - Verified 'Search Directories' include all necessary files/dependencies.
    - Mass compiled the directory of the missing VIs
    - Added all needed files and folders in the workspace file.
    The result is still the same after all of the actions above.
    The last debugging step I did was to locate the sequences and steps that call the missing VIs mentioned above (e.g. ATE_AccelerometerTest.vi),
    and I found that the step seems to be an empty action step. Would this be possible even if it already passed the analysis?
    Other considerations include:
    I am using LabVIEW 2013 SP1 and TestStand 2013. We tried building from three (3) computers and only succeeded once, on a freshly-installed computer.
    Hope to hear from you soon.
    With Best Regards,
    Michael Panganiban
    Systems and Software Engineer
    www.synovus.ca
    [email protected]
     

    Hi All,
    We were able to resolve the issue. First, note that the release notes for TestStand 2013 are outdated; we confirmed with an NI engineer in Austin that TestStand 2013 works fine with LabVIEW 2013 SP1.
    Secondly, we experimented with a TestStand Deployment option that resolved the issue. Attached are the images.
    We just enabled 'Remove Unused VI Components'. It could be one of the libraries (lvlib) we included in the build, but we haven't figured it out yet because we verified that all VIs are working. It could also be something else that would be very difficult to find based on the available information. However, if anybody experiences the same issue, this could be helpful.
    Again, we are back to using TestStand 2013 and LabVIEW 2013 SP1.
    I appreciate any comments and feedback. Otherwise, you can close this support request.
    Thank you.
    With Best Regards,
    Michael Panganiban
    Systems and Software Engineer
     

  • After moving all photographs from my first catalogue, which was a mess, to a new catalogue, it all worked fine. Until that is I tried to open the new catalogue, which had worked OK all day. I now get a window that says "Unexpected error, select another ca

    After moving all photographs from my first catalogue, which was a mess, to a new catalogue, it all worked fine. Until that is I tried to open the new catalogue, which had worked OK all day. I now get a window that says "Unexpected error, select another catalogue."

    OK, restart in Safe Mode; this will clear some caches. It's possible one or more is corrupt. To restart in Safe Mode, hold down the Shift key when you hear the startup tone until you see a progress bar. Let it fully boot, then restart normally and test.
    Also, I am assuming you have checked Finder - Preferences - General to see what boxes are checked under "Show these items on the desktop." You can also mount an item in Disk Utility: simply highlight it and then look in the File menu for Mount.

  • Error when generating IDoc from MC document - workitem to all the SAP users

    A workflow item with the subject “Error when generating IDoc from MC document” is sent to all SAP users' inboxes. Is it possible to stop the generation of this work item? If that is not possible, can we limit sending the work item to a specific user/agent instead of all the users in the system?
    It appears that these work items or error messages are generated when one of the developers reopens the POs and adds line items. Moreover, during that time the procurement team blocked the IDocs from going out to the vendors when changing and resaving the POs. Therefore, we need to stop the generation of the error message/work item when IDoc generation is blocked.

    Please check rule 70000141, which is the default rule for this task. Inside this rule a function module is attached which reads tables EDO13 and EDPP1, from which the agent is retrieved. Probably these table entries are not maintained. I think this workflow is triggered from Message Control.
    Please check this link:
    http://help.sap.com/saphelp_47x200/helpdata/en/c5/e4aec8453d11d189430000e829fbbd/frameset.htm
    Reward points if useful and close the thread if resolved.

  • Error: Unable to find all subVIs from saved VIs.

    TestStand 2010 SP1, LabVIEW 2011, WinXP
    Trying to build a deployment in TestStand.
    During the build, I get the now-infamous popup:
    Title: "Save Modified VIs?"
    Text: "An error occured while trying to read the dependencies of your VIs; a possible cause for this problem is VIs not saved in the current version of LabVIEW. Would you like to save any modified VIs now?"
    I select "yes", and the build then fails with errors in the log pane of the "Build Status" window.  The error is:
    Error: Unable to find all subVIs from saved VIs, either a subVI is missing or the VI is not saved under the current LabVIEW version.
    The missing file path is:  etc, etc.
    I go to said missing path.  The VI is in fact present.  I open the VI and do "Save All", try again.  Same thing.  Then, I try a Mass Compile on the test VI directory.  Attempt to build again, same error.
    Note that I do call some VIs dynamically, but those are all present in my workspace, and are not found in this error.
    Is there a debugging option or log file or some kind of trace I can do to dig into this and find out the state of things causing the error?  Please help me out here.
    -Andy

    One quick way to narrow down the issue is to change your LabVIEW adapter to Run-Time engine and then try and run the sequence in the Sequence Editor.
    Also, triple check your search directories and make sure you're not pulling in VIs from an unexpected location.
    When you see that dialog it's already too late to save so trying to Save at that point never works.
    Another tip is to ignore which VIs it says are not saved in the current version and instead look at the subVIs that those VIs are calling. You can run into this if you have two subVIs loaded with the same name.
    CTA, CLA, MTFBWY

  • I have upgraded all apps from CS5 to CC - but keep getting U43M1D207 error when trying to upgrade Illustrator. Tried twice. Help!

    I have upgraded all apps from CS5 to CC - but keep getting U43M1D207 error when trying to upgrade Illustrator. Tried twice. Help!

    You can also use Download New Adobe CC Trials: Direct Links (no Assistant/Manager) | ProDesignTools
    Direct Download Links for Adobe Software
    Are you on a managed network? If yes, please refer to the Knowledge Base article: http://helpx.adobe.com/creative-cloud/help/cc-desktop-download-error.html.
    You may also try the direct download: http://prodesigntools.com/adobe-cc-direct-download-links.html.
    Kindly follow the very important instructions before downloading.
    It might help you.
    Regards
    Rajshree

  • Error while selecting data from external table

    Hello all,
    I am getting the following error while selecting data from the external table. Any idea why?
    SQL> CREATE TABLE SE2_EXT (SE_REF_NO VARCHAR2(255),
      2        SE_CUST_ID NUMBER(38),
      3        SE_TRAN_AMT_LCY FLOAT(126),
      4        SE_REVERSAL_MARKER VARCHAR2(255))
      5  ORGANIZATION EXTERNAL (
      6    TYPE ORACLE_LOADER
      7    DEFAULT DIRECTORY ext_tables
      8    ACCESS PARAMETERS (
      9      RECORDS DELIMITED BY NEWLINE
    10      FIELDS TERMINATED BY ','
    11      MISSING FIELD VALUES ARE NULL
    12      (
    13        country_code      CHAR(5),
    14        country_name      CHAR(50),
    15        country_language  CHAR(50)
    16      )
    17    )
    18    LOCATION ('SE2.csv')
    19  )
    20  PARALLEL 5
    21  REJECT LIMIT UNLIMITED;
    Table created.
    SQL> select * from se2_ext;
    SQL> select count(*) from se2_ext;
    select count(*) from se2_ext
    ERROR at line 1:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-29400: data cartridge error
    KUP-04043: table column not found in external source: SE_REF_NO
    ORA-06512: at "SYS.ORACLE_LOADER", line 19

    It would appear that your external table definition and the external data file do not match up. In particular, the field list in the ACCESS PARAMETERS clause (country_code, country_name, country_language) does not name the table's columns (SE_REF_NO, SE_CUST_ID, SE_TRAN_AMT_LCY, SE_REVERSAL_MARKER), which is why the access driver raises KUP-04043 for SE_REF_NO. Post a few input records so someone can duplicate the problem and determine the fix.
    HTH -- Mark D Powell --
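    For illustration only, a corrected definition might look like the sketch below. This assumes SE2.csv really holds the four SE_* fields, comma-separated and in the declared order; the CHAR lengths in the field list are guesses, not values taken from the thread.
    -- Hypothetical corrected DDL: the ACCESS PARAMETERS field list names the
    -- same columns the table declares, so the access driver can map each CSV
    -- field to a table column instead of failing with KUP-04043.
    CREATE TABLE SE2_EXT (
      SE_REF_NO          VARCHAR2(255),
      SE_CUST_ID         NUMBER(38),
      SE_TRAN_AMT_LCY    FLOAT(126),
      SE_REVERSAL_MARKER VARCHAR2(255))
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_tables
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
        MISSING FIELD VALUES ARE NULL
        (
          SE_REF_NO          CHAR(255),
          SE_CUST_ID         CHAR(40),
          SE_TRAN_AMT_LCY    CHAR(40),
          SE_REVERSAL_MARKER CHAR(255)
        )
      )
      LOCATION ('SE2.csv')
    )
    PARALLEL 5
    REJECT LIMIT UNLIMITED;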

  • Error reading 'ods' passwd from wallet!!!!!

    I did:
    c:\>oidmon connect=hahldap start
    (No errors).
    C:\>oidctl connect=hahldap server=oidldapd host=192.168.1.100 instance=1 start
    gsluwpwaGetWalletPasswd: Opening 2 file failed with error 11005
    [gslusw]:Error reading 'ods' passwd from wallet
    [gsdsiConnect]:Error reading 'ods' passwd from wallet
    Could not connect to the Database.
    What could be the reason??? I am running on WinXP. Is that the problem?
    Installation went fine for both Metadata repository and for the Oracle Internet Directory.
    Note: I couldn't find the service OracleDirectoryService_xxx at all, does that mean the installation is incorrect?
    thanks.

    Any resolution here?
    My listener is up.
    My DB is up and accessible, but when issuing the following command I get the following error.
    Any chance you can offer some advice?
    oidmon start
    2006/03/01:12:58:32 * gsluwpwaGetWalletPasswd: Opening 3 file failed with error 11005
    2006/03/01:12:58:32 * [gslusw]:Error reading 'ods' passwd from wallet
    2006/03/01:12:58:32 * [gsdsiConnect]:Error reading 'ods' passwd from wallet
    2006/03/01:12:58:32 * [oidmon]: Unable to connect to database,
    will retry again after 20 sec
    I'm on Unix.

  • Delete all calendars from an iCloud account

    Hi  Support Communities,
    did anyone find a way to delete all calendars from an iCloud account? It seems that iCloud wants to keep at least one calendar. We use several accounts and one is only used to look at shared calendars from other accounts. So there is no need for its own empty calendar. On iOS I just hid this calendar, but on OS X it is always visible. I unchecked it, but it would be nice if I could just delete it. If I do so on an iOS device it is gone, but reappears later. Trying to delete it on a Mac results in an error that it can’t be deleted, same goes for the web interface at icloud.com. It’s just a cosmetic issue, but I would prefer it to be gone altogether. Any ideas?
    Thanks
    Björn

    I had hoped I was wrong, you never know. Well, thanks anyway!
    Björn

  • Get all users from Active Directory

    Dear All,
    I would like to retrieve all USERS from the AD.
    I finally managed to connect to an AD server, but I couldn't perform the search.
    I got a javax.naming.NamingException: [LDAP: error code 1 - 000020D6: SvcErr: DSID-03100690, problem 5012 (DIR_ERROR), data 0
    --> Does it mean that my query is incorrect?
    I think I am missing something obvious, but what?
    Can somebody please help me or point me to a working code sample?
    Thanks in advance.
    Karim.
    //======== Test Code =============
            String THIS_INIT_CONT_FAC="com.sun.jndi.ldap.LdapCtxFactory";
            String THIS_PROV_URL=url;
            String THIS_SEC_AUTH="simple";
            String THIS_SEARCHBASE="CN=Users, CN=domain, CN=com";
            String THIS_ATTRS[] = {"mail"};
    try {
    String THIS_FILTER="(objectClass=user)";
    System.out.println("Testing LDAP Program");
    System.out.println("************************************************************");
    String THIS_SEC_PRIN="";
    String THIS_SEC_CRED="";
    System.out.println("Cont Fac : " + THIS_INIT_CONT_FAC);
    System.out.println("LDAP Server : " + THIS_PROV_URL);
    System.out.println("Auth Method : " + THIS_SEC_AUTH);
    System.out.println("Search Base : " + THIS_SEARCHBASE);
    System.out.println("Filter : " + THIS_FILTER);
    System.out.println("Login : " + THIS_SEC_PRIN);
    System.out.println("Credentials : " + THIS_SEC_CRED);
    System.out.println("************************************************************");
    Hashtable env=new Hashtable();
    env.put(Context.INITIAL_CONTEXT_FACTORY, THIS_INIT_CONT_FAC);
    env.put(Context.PROVIDER_URL, THIS_PROV_URL);
    env.put(Context.SECURITY_AUTHENTICATION, THIS_SEC_AUTH);
    env.put(Context.SECURITY_PRINCIPAL, THIS_SEC_PRIN);
    env.put(Context.SECURITY_CREDENTIALS, THIS_SEC_CRED);
    DirContext ctx = new InitialDirContext(env);
    System.out.println("LDAP TEST Login Successful!");
    SearchControls constraints = new SearchControls();
    constraints.setSearchScope(SearchControls.SUBTREE_SCOPE);
    NamingEnumeration results = ctx.search(THIS_SEARCHBASE,THIS_FILTER, constraints);
    int namecount=0;
    System.out.println("LDAP TEST Results : " + results);
    System.out.println("LDAP TEST Pre-Hit ! ");
    } catch(AuthenticationException ae) {
    ae.printStackTrace();
    System.out.println("Incorrect Password or UserName");
    return false;
    } catch(Exception e) {
    e.printStackTrace();
    System.out.println("Error accessing LDAP");
    return false;
    }
    // ============ OUTPUT =====================
    Testing LDAP Program
    Cont Fac : com.sun.jndi.ldap.LdapCtxFactory
    LDAP Server : ldap://192.168.2.3:389/
    Auth Method : simple
    Search Base : CN=Users, CN=domain, CN=com
    Filter : (objectClass=user)
    Login :
    Credentials :
    LDAP TEST Login Successful!
    javax.naming.NamingException: [LDAP: error code 1 - 000020D6: SvcErr: DSID-03100690, problem 5012 (DIR_ERROR), data 0

    If you want to list all the users then you don't need to perform a search. Just list them.
       private void list(String contextName) {
          try {
             // get enumeration of NameValuePairs
             NamingEnumeration contentsEnum = ctx.list(contextName);
             while (contentsEnum.hasMore())
                System.out.println(contentsEnum.next());
          } catch (NamingException e) {
             System.err.println("Problem listing context contents: " + e);
          }
       }
    You will want to call this using something like this:
    list("CN=Users, CN=domain, CN=com");
    One caveat: there is a restriction on the number of results returned, so this will still throw an LDAP exception if you have a lot of users.
    Not sure how to get around that. Never needed to look. Don't expect it is hard though.

  • Get all groups from an AD Server

    Hi everyone,
    I'm trying to get all groups from and AD server.
    Here's how I'm doing it:
    DirContext ctx = new InitialDirContext( (Hashtable<String,String>) env);
              Name n2 = new CompositeName().add(groupsContainer);
              NamingEnumeration<Binding> contentsEnum = ctx.listBindings(n2);
              int i = 1;
              while ( contentsEnum.hasMore() && (i++) < 1000 ) {
                   Binding binding = contentsEnum.next();
                   groups.add(binding.getName().substring(3));
              }
              return groups;
    The problem is, I always get an error if I don't restrict the number of results to below 1000.
    The error is the following: javax.naming.SizeLimitExceededException: [LDAP: error code 4 - Sizelimit Exceeded];
    After googling, I found that it's due to a setting on the AD server that restricts the number of results.
    So is there no way I can obtain all groups without changing that setting?
    Regards,
    Nuno.

    Hi Nuno,
    You have to increase the MaxPageSize value at the Active Directory level to retrieve more than 1000 results. By default the MaxPageSize value is 1000. There is no option other than increasing the MaxPageSize value.
    Thanks & Regards,
    Murali.

  • How to get All Users from OID LDAP

    Hi all,
    I have Oracle Internet Directory(OID) and have created the users in it manually.
    Now I want to extract all the users from OID. How can I get the users from OID?
    Any response will be appreciated. If someone could show me demo code for that, I would be grateful.
    Thanks and regards
    Pravy

    Hi,
    here are the notes from Metalink:
    regards
    elvis
    Doc ID: Note:276688.1
    Subject: How to copy (export/import) the Portal database schemas of IAS 9.0.4 to another database
    Type: BULLETIN
    Status: PUBLISHED
    Content Type: TEXT/X-HTML
    Creation Date: 18-JUN-2004
    Last Revision Date: 05-AUG-2005
    How to copy (export/import) Portal database schemas of IAS 9.0.4 to another database
    Note 276688.1
    Download scripts Unix: Attachment 276688.1:1
    Download Perl scripts (Unix/NT) :Attachment 276688.1:2
    This article is being delivered in Draft form and may contain errors. Please use the MetaLink "Feedback" button to advise Oracle of any issues related to this article.
    HISTORY
    Version 1.0 : 24-JUN-2004: creation
    Version 1.1 : 25-JUN-2004: added a link to download the scripts from Metalink
    Version 1.2 : 29-JUN-2004: Import script: Intermedia indexes are recreated. Imported jobs are reassigned to Portal. ptlconfig replaces ptlasst.
    Version 1.3 : 09-JUL-2004: Additional updates. Usage of iasconfig.xml. Need only 3 environment variables to import.
    Version 1.4 : 18-AUG-2004: Remark about 9.2.0.5 and 10.1.0.2 database
    Version 1.5 : 26-AUG-2004: Duplicate job id
    Version 1.6 : 29-NOV-2004: Remark about WWC-44131 and WWSBR_DOC_CTX_54
    Version 1.7 : 07-JAN-2005: Attached perl scripts (for NT/Unix) at the end of the note
    Version 1.8 : 12-MAY-2005: added a work-around for the WWSTO_SESS_FK1 issue
    Version 1.9 : 07-JUL-2005: logoff trigger and 9.0.1 database export, import in 10g database
    Version 1.10: 05-AUG-2005: reference to the 10.1.2 note
    PURPOSE
    This document explains how to copy a Portal database schema from one database to another.
    It allows restoring the Portal repository and the OID security associated with Portal.
    It can be used to go into production by physically copying a database from a development portal to a production environment, avoiding the use of the Portal export/import utilities.
    This note:
    uses the export/import on the database level
    allows the export/import to be done between different platforms
    The scripts are Unix-based and written for the BASH shell. They can be adapted for other platforms.
    For those familiar with this technique in Portal 9.0.2, there is a list of the main differences from Portal 9.0.2 at the end of the note.
    These scripts are based on the experience of many people with Portal 9.0.2.
    The scripts are attached to the note. Download them here: Attachment 276688.1:1 : exp_schema_904.zip
    A new version of the scripts was written in Perl. You can also download them here: Attachment 276688.1:2 : exp_schema_904_v2.zip. They do exactly the same as the bash ones, but they have the advantage of working on all platforms.
    SCOPE & APPLICATION
    This document is intended for Portal administrators. To use this note, you need basic DBA skills.
    This note is for Portal 9.0.4.x only. The notes for Portal 9.0.2 are:
    Note 228516.1 : How to copy (export/import) Portal database schemas of IAS 9.0.2 to another database
    Note 217187.1 : How to restore a cold backup of a Portal IAS 9.0.2 on another machine
    The note for Portal 10.1.2 is:
    Note 330391.1 : How to copy (export/import) Portal database schemas of IAS 10.1.2 to another database
    Method
    The method that we will follow in the document is the following one:
    Export:
    - export of the 4 portal schemas of a database (DEV / development)
    - export the LDAP OID users and groups (optional)
    Install a new machine with fresh IAS installation (PROD / production)
    Import:
    - delete the new and empty portal schema on PROD
    - import the schemas in the production database in place of the deleted schemas
    - import the LDAP OID users and groups (optional)
    - modify the configuration such that the infrastructure uses the portal repository of the backup
    - modify the configuration such that the portal repository uses the OID, webcache and SSO of the new infrastructure
    The export and the import are divided into several steps. All of these steps are included in 2 sample scripts:
    export : exp_portal_schema.sh
    import : imp_portal_schema.sh
    In the 2 scripts, all the steps are run in one shot. This is just an example. Depending on the configuration and circumstances, the steps can also be run independently.
    Convention
    Development (DEV) is the name of the machine where the copied database resides
    Production (PROD) is the name of the machine to which the database is copied
    Prerequisite
    Some prerequisites first.
    A. Environment variables
    To run the import/export, you will need 3 environment variables. In the given scripts, they are defined in 'portal_env.sh'
    SYS_PASSWORD - the password of user sys in the Portal database
    IAS_PASSWORD - the password of IAS
    ORACLE_HOME - the ORACLE_HOME of the midtier
    The rest of the settings are found automatically by reading the iasconfig.xml file and querying the OID. This is done in 'portal_automatic_env.sh'. I intend to write a note on iasconfig.xml and the way to transform it into useful environment variables, but it is not done yet. In the meantime, you can read the old 9.0.2 doc, which explains the meaning of most variables:
    < Note 223438.1 : Shell script to find your portal passwords, settings and place them in environment variables on Unix >
    B. Definition: Cutter database
    A 'Cutter Database' is the term used to designate a database created by RepCA or OUI that contains all the schemas used by an IAS 9.0.4 infrastructure, even if in most cases several schemas are not used.
    In Portal 9.0.4, the option to install only the portal repository in an empty database has been removed. It has been replaced by RepCA, a tool that creates an infrastructure database. Among all the infrastructure database schemas are the portal schemas.
    This does not stop people from using 2 databases for running Portal: one for OID and one for Portal. But in comparison with Portal 9.0.2, all schemas exist in both databases even if some are not used.
    The main idea of the Cutter database is to have only 1 database type and, in the future, to simplify the upgrades of customer installations.
    For an installation where Portal and OID/SSO are in 2 separate databases, it looks like this:
    Infrastructure database (INFRA_SID)
    - Portal 9.0.2: the infrastructure contains OID (used), OEM (used), Single Sign-on / orasso (used), Portal (not used)
    - Portal 9.0.4: the infrastructure contains OID (used), OEM (used), Single Sign-on / orasso (used), Portal (not used)
    Portal database (PORTAL_SID)
    - Portal 9.0.2: the custom Portal database contains Portal (used)
    - Portal 9.0.4: the custom Portal database (also an infrastructure) contains OID (not used), OEM (not used), Single Sign-on / orasso (not used), Portal (used)
    In any case, the note will assume there is only a single database, but it also works for a 2-database installation like the one explained above.
    C. Directory structure.
    The sample scripts given in this note will be explained in the next paragraphs. But first, note that the scripts use a directory structure that helps to classify the files.
    Here is a list of important files used during the process of export/import:
    File Name
    Description
    exp_portal_schema.sh
    Sample script that exports all the data needed from a development machine
    imp_portal_schema.sh
    Sample script that import all the data into a production machine
    portal_env.sh
    Script that defines the env variable specific to your system (to configure)
    portal_automatic_env.sh
    Helper script to get all the rest of the Portal settings automatically
    xsl
    Directory containing all the XSL files (helper scripts)
    del_authpassword.xsl
    Helper script to remove the authpassword tags in the DSML files
    portal_env_unix.sql
    Helper script to get Portal settings from the iasconfig.xml file
    exp_data
    Directory containing all the exported data
    portal_exp.dmp
    export on the database level of the portal, portal_app, ... database schemas
    iasconfig.xml
    copy of the iasconfig.xml of the DEV midtier; used to get the hostname and port of Webcache
    portal_users.xml
    export from LDAP of the OID users used by Portal (optional)
    portal_groups.xml
    export from LDAP of the OID groups used by Portal (optional)
    imp_log
    Directory containing several spool and logs files generated during the import
    import.log
    Log file generated when running the imp command
    ptlconfig.log
    Log generated by ptlconfig when rewiring portal to the infrastructure.
    Some other spool files.
    D. Known limitations
    The scripts given in this note have the following known limitations:
    It does not copy the data stored in the SSO schema: external applications definitions and the passwords stored for them.
    See the post-import steps (SSO migration) to learn how to do this.
    The ssomig command resides in the Infrastructure Oracle home, and all Portal commands reside in the Midtier home; in practice, these 2 Oracle homes are usually not on the same machine. This is the reason.
    The export of the users in OID exports from the default user location:
    ldapsearch .... -b "cn=users,dc=domain,dc=com"
    This is not 100% correct. The users are by default stored in something like "cn=users,dc=domain,dc=com". So, if the users are stored in the default location, it works. But if this location (user install base) is customized, it does not work.
    The reason is that such a setting usually means the LDAP is highly customized, and I prefer that the administrator copy the real LDAP himself. The right command will probably depend on the customer's case, so I preferred not to take the risk.
    orclCommonNicknameAttribute must match in the Target and Source OID.
    The orclCommonNicknameAttribute must match on both the source and target OID. By default this attribute is set to "uid", so if this has been changed, it must be changed in both systems.
    Reference Note 282698.1
    Migration of custom Java portlets.
    The script migrates all the data of Portal stored in the database. If you have custom Java portlets deployed on your development machine, you will need to copy them to the production system.
    Step 1 - Export in Development (DEV)
    To export a full Portal installation to another machine, you need to follow 3 steps:
    Export at the database level the portal schemas + related schemas
    Get the midtier hostname and port of DEV
    Export of the users and groups with LDAPSEARCH in 2 XML files
    A script combining all the steps is available here.
    A. Export the 4 portals schemas (DEV)
    You need to export 3 types of database schemas:
    The 4 portal schemas created by default by the portal installation :
    portal,
    portal_app,
    portal_demo,
    portal_public
    The schemas where your custom database portlets / providers resides (if any)
    - The custom schemas you have created for storing your portlet / provider code
    The schemas where your custom tables resides. (if any)
    - Your custom schemas accessed by portal and containing only data (tables, views ...)
    You can get an approximate list of the schemas (the default portal schemas (1) and the database portlet schemas (2)) with this query:
    SELECT USERNAME, DEFAULT_TABLESPACE, TEMPORARY_TABLESPACE
    FROM DBA_USERS
    WHERE USERNAME IN (user, user||'_PUBLIC', user||'_DEMO', user||'_APP')
    OR USERNAME IN (SELECT DISTINCT OWNER FROM WWAPP_APPLICATION$ WHERE NAME != 'WWV_SYSTEM');
    It still misses your custom schemas containing data only (3).
    We will export the 4 schemas and your custom ones in an export file with the user sys.
    Please use a command like this one:
    exp userid="'sys/change_on_install@dev as sysdba'" file=portal_exp.dmp grants=y log=portal_exp.log owner=(portal,portal_app,portal_demo,portal_public)
    The result is a dump file: 'portal_exp.dmp'. If you are using a 9.2.0.5 or 10.1.0.2 database, the exp/imp dump file format has changed. Please read this.
    B. Hostname and port
    For the URL used to access the portal, you need the following 2 pieces of information to run the script 'imp_portal_schema.sh' below:
    Webcache hostname
    Webcache listen port
    These values are contained in the iasconfig.xml file of the midtier.
    iasconfig.xml
    <IASConfig XSDVersion="1.0">
    <IASInstance Name="ias904.dev.dev_domain.com" Host="dev.dev_domain.com" Version="9.0.4">
    <OIDComponent AdminPassword="@BfgIaXrX1jYsifcgEhwxciglM+pXod0dNw==" AdminDN="cn=orcladmin" SSLEnabled="false" LDAPPort="3060"/>
    <WebCacheComponent AdminPort="4037" ListenPort="7782" InvalidationPort="4038" InvalidationUsername="invalidator" InvalidationPassword="@BR9LXXoXbvW1iH/IEFb2rqBrxSu11LuSdg==" SSLEnabled="false"/>
    <EMComponent ConsoleHTTPPort="1813" SSLEnabled="false"/>
    </IASInstance>
    <PortalInstance DADLocation="/pls/portal" SchemaUsername="portal" SchemaPassword="@BR9LXXoXbvW1c5ZkK8t3KJJivRb0Uus9og==" ConnectString="cn=asdb,cn=oraclecontext">
    <WebCacheDependency ContainerType="IASInstance" Name="ias904.dev.dev_domain.com"/>
    <OIDDependency ContainerType="IASInstance" Name="ias904.dev.dev_domain.com"/>
    <EMDependency ContainerType="IASInstance" Name="ias904.dev.dev_domain.com"/>
    </PortalInstance>
    </IASConfig>
    It corresponds to a portal URL like this:
    http://dev.dev_domain.com:7782/pls/portal
    The script exp_portal_schema.sh copies the iasconfig.xml file into the exp_data directory.
    C. Export the security: users and groups (optional)
    If you use Single Sign-On users other than the portal user, you probably need to restore the full security (the users and groups stored in OID) on the production machine. 5 steps need to be executed for this operation:
    Export the OID entries with LDAPSEARCH
    Before importing, change the domain in the generated files (optional)
    Before importing, remove the 'authpassword' attributes from the generated files
    Import them with LDAPADD
    Update the GUID/DN of the groups in portal tables
    Part 1 - LDAPSEARCH
    The typical commands to do this operation look like this:
    ldapsearch -h $OID_HOSTNAME -p $OID_PORT -X -b "cn=portal.040127.1384,cn=groups,dc=dev_domain,dc=com" -s sub "objectclass=*" > portal_group.xml
    ldapsearch -h $OID_HOSTNAME -p $OID_PORT -X -D "cn=orcladmin" -w $IAS_PASSWORD -b "cn=users,dc=dev_domain,dc=com" -s sub "objectclass=inetorgperson" > portal_users.xml
    Take care about the following points:
    The groups are stored in an LDAP directory whose name contains the date of installation
    (in this example: portal.040127.1384,cn=groups,dc=dev_domain,dc=com).
    If the dev and prod domains are different, the exported files contain the development domain name in the form 'dc=dev_domain,dc=com' in a lot of places. The domain name needs to be replaced by the production domain name everywhere in the files.
    Ldapsearch uses the option '-X', which exports to DSML files (XML). This avoids a problem with the more common LDAP format, LDIF: LDIF files are wrapped at 78 characters, and this wrapping makes it difficult to change the domain name contained in them. XML files are not wrapped and do not have this problem.
    A sample script to export the 2 XML files is given in step 3 - export the users and groups (optional) of the export script.
    Part 2 : change the domain in the DSML files
    If the dev and prod domains are different, the exported files contain the development domain name in the form 'dc=dev_domain,dc=com' in a lot of places. The domain name needs to be replaced by the production domain name everywhere in the files.
    To do this, we can use these commands:
    cat exp_data/portal_groups.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/portal_groups.xml
    cat exp_data/portal_users.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/temp_users.xml
    Part 3 : Remove the authpassword attribute
    The export of all attributes of all users also exports an automatically generated OID attribute called 'authpassword'.
    'authpassword' is a list of automatically generated passwords for several types of applications, and mostly it cannot be imported. Also, there is no option in ldapsearch (that I know of) that allows excluding an attribute. Instead of giving the ldapsearch command the very long list of all attributes except 'authpassword', we will remove the attribute after the export.
    For that we will use the fact that the DSML files are XML files. There is an XSLT processor in Oracle IAS, the executable '$ORACLE_HOME/bin/xml'. XSLT is a standard W3C specification for transforming an XML file with the help of an XSL file.
    Here is the XSL file to remove the authpassword tag.
    del_authpassword.xsl
    <!--
    File : del_authpassword.xsl
    Version : 1.0
    Author : mgueury
    Description:
    Remove the authpassword from the DSML files
    -->
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method="xml"/>
    <xsl:template match="*|@*|node()">
    <xsl:copy>
    <xsl:apply-templates select="*|@*|node()"/>
    </xsl:copy>
    </xsl:template>
    <xsl:template match="attr">
    <xsl:choose>
    <xsl:when test="@name='authpassword;oid'">
    </xsl:when>
    <xsl:when test="@name='authpassword;orclcommonpwd'">
    </xsl:when>
    <xsl:otherwise>
    <xsl:copy>
    <xsl:apply-templates select="*|@*|node()"/>
    </xsl:copy>
    </xsl:otherwise>
    </xsl:choose>
    </xsl:template>
    </xsl:stylesheet>
    And the command to make the transformation:
    xml -f -s del_authpassword.xsl -o imp_log/portal_users.xml imp_log/temp_users.xml
    Where:
    imp_log/portal_users.xml is the final file without authpassword tags
    imp_log/temp_users.xml is the input file with the authpassword tags that cannot be imported.
    Part 4 : LDAPADD
    The typical commands to do this operation look like this:
    ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X portal_group.xml
    ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X portal_users.xml
    Take care about the following points:
    Ldapadd uses the option '-c'. Existing users/groups generate an error; the option -c allows continuing and ignoring these errors. In any case, the errors should be checked to verify that they are only due to existing entries.
    A sample script to import the 2 XML files is given in step 5 - import the users and groups (optional) of the import script.
    Part 5 : Update the GUID/DN
    In Portal 9.0.4, the update of the GUIDs is taken care of by PTLCONFIG during the import (import step 7).
    D. Example script for export
    Here is an example script that combines the 3 steps.
    Depending on your needs, you will:
    either execute all the steps,
    or just execute the first one (export of the database users); that is enough if you just want to log in with the portal user on the production instance.
    If your portal repository resides in a 9.2.0.5 or 10.1.0.2 database, please read this.
    You can download all the scripts here: Attachment 276688.1:1
    Do not forget to modify the script to your needs and, most importantly, add the list of users as explained in point A above.
    exp_portal_schema.sh
    # BASH Script : exp_portal_schema.sh
    # Version : 1.3
    # Portal : 9.0.4.0
    # History :
    # mgueury - creation
    # Description:
    # This script exports a portal dump file from a dev instance
    # -------------------------- Environment variables --------------------------
    . portal_env.sh
    # In case you do not use portal_env.sh you have to define all the variables
    # For exporting the dump file only.
    # export SYS_PASSWORD=change_on_install
    # export PORTAL_TNS=asdb
    # For the security (optional)
    # export IAS_PASSWORD=welcome1
    # export PORTAL_USER=portal
    # export PORTAL_PASSWORD=A1b2c3de
    # export OID_HOSTNAME=development.domain.com
    # export OID_PORT=3060
    # export OID_DOMAIN_DN=dc=`echo $OID_HOSTNAME | cut -d '.' -f2,3,4,5,6 --output-delimiter=',dc='`
    # ------------------------------ Help function -----------------------------------
    function press_any_key() {
    if [ $PRESS_ANY_KEY_AFTER_EACH_STEP = "Y" ]; then
    echo
    echo Press enter to continue
    read $ANY_KEY
    else
    echo
    fi
    }
    echo "------------------------------- Export ------------------------------------"
    # create a directory for the export
    mkdir exp_data
    # copy the env variables in the log just in case
    export > exp_data/exp_env_variable.txt
    echo "--------------------- step 1 - export"
    # export the portal users, but take care to add:
    # - your users containing DB providers
    # - your users containing data (tables)
    exp userid="'sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba'" file=exp_data/portal_exp.dmp grants=y log=exp_data/portal_exp.log owner=(portal,portal_app,portal_demo,portal_public)
    press_any_key
    echo "--------------------- step 2 - store iasconfig.xml file of the MIDTIER"
    cp $MIDTIER_ORACLE_HOME/portal/conf/iasconfig.xml exp_data
    press_any_key
    echo "--------------------- step 3 - export the users and groups (optional)"
    # Export the groups and users from OID in 2 XML files (not LDIF)
    # The OID groups of portal are stored in GROUP_INSTALL_BASE that depends
    # of the installation date.
    # For the user, I use the default place. If it does not work,
    # you can find the user place with:
    # > exec dbms_output.put_line(wwsec_oid.get_user_search_base);
    # Get the GROUP_INSTALL_BASE used in security export
    sqlplus $PORTAL_USER/$PORTAL_PASSWORD@$PORTAL_TNS <<IASDB
    set serveroutput on
    spool exp_data/group_base.log
    begin
    dbms_output.put_line(wwsec_oid.get_group_install_base);
    end;
    IASDB
    export GROUP_INSTALL_BASE=`grep cn= exp_data/group_base.log`
    echo '--- Exporting Groups'
    echo 'creating portal_groups.xml'
    ldapsearch -h $OID_HOSTNAME -p $OID_PORT -X -s sub -b "$GROUP_INSTALL_BASE" -s sub "objectclass=*" > exp_data/portal_groups.xml
    echo '--- Exporting Users'
    echo 'creating portal_users.xml'
    ldapsearch -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -X -s sub -b "cn=users,$OID_DOMAIN_DN" -s sub "objectclass=inetorgperson" > exp_data/portal_users.xml
    The script is intended to be run from the midtier.
    Step 2 - Install IAS in a new machine (PROD)
    A. Installation
    This note does not distinguish whether Portal shares the same database as Single Sign-On and OID. For simplicity, I will speak only about 1 database, but you could also create a second infrastructure database just for the portal repository. This approach is better for a production system, because the Portal repository is the only product used in the 2nd database. Having 2 separate databases makes it easy to take a backup of the portal repository.
    On the production machine, you need to do a fresh install of IAS 9.0.4. Take care to use:
    the same IAS patchset (9.0.4.1, 9.0.4.2, ...) on the middle-tier and infrastructure as in development
    and the same character set as in development (or UTF8)
    The result will be 2 ORACLE_HOMES and 1 infrastructure database:
    the ORACLE_HOME of the infrastructure (SID:infra904)
    the ORACLE_HOME of the midtier (SID:ias904)
    an infrastructure database (SID:asdb)
    The new, empty Portal install should work fine before you go to the next step.
    B. About tablespaces (optional)
    The size of the tablespaces on production should match those on the development machine. If not, the tablespaces will autoextend; that is not really a concern, but it is slow. You should modify the tablespaces so that prod has as much space as dev.
    Also, it is safer to check that there is enough free space on the hard disk to import into the database.
    To modify the tablespace size, you can use the Oracle Enterprise Manager console (or resize the datafiles directly in SQL*Plus; see the sketch after these steps):
    On Unix, . oraenv
    infra904oemapp dbastudio
    On NT Start/ Programs/ Oracle Application server - infra904 / Enterprise Manager Console
    Launch standalone
    Choose the portal database (typically asdb.domain.com)
    Connect with a DBA user, sys or system
    Click Storage/Tablespaces
    Change the size of the PORTAL, PORTAL_DOC, PORTAL_LOGS, PORTAL_IDX tablespaces
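    As an alternative to the console, here is a minimal SQL*Plus sketch for checking and resizing the underlying datafiles; the datafile path and the sizes below are placeholders, not values taken from this note:
    -- List the datafiles backing the Portal tablespaces (adjust the tablespace names to your system).
    SELECT file_name, tablespace_name, bytes/1024/1024 AS size_mb
    FROM   dba_data_files
    WHERE  tablespace_name IN ('PORTAL', 'PORTAL_DOC', 'PORTAL_LOGS', 'PORTAL_IDX');
    -- Grow a datafile to a fixed size (hypothetical path and size).
    ALTER DATABASE DATAFILE '/u01/oradata/asdb/portal01.dbf' RESIZE 2000M;
    -- Or let it grow on demand instead of resizing it up front.
    ALTER DATABASE DATAFILE '/u01/oradata/asdb/portal01.dbf' AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;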
    C. Backup
    It could be a good idea to take a backup of the MIDTIER and INFRASTRUCTURE Oracle Homes at this point, so that you can retest the import process as many times as you want, without reinstalling everything, if it fails for any reason.
    Step 3 - Import in production (on PROD)
    The following script is a sample Unix script that combines all the steps to import a portal repository onto the production machine.
    To import a portal repository and its users and groups in OID, you need to do 8 things:
    Stop the midtier to avoid errors while dropping the portal schema
    SQL*Plus with Portal
    Drop the 4 default portal schemas
    Create the portal users with the same passwords as the just-deleted users and give them grants (you need to create your own custom schemas too if you have any).
    Import the dump file
    Import the users and groups into OID (optional)
    SQL*Plus with SYS : Post import changes
    Recompile everything in the database
    Reassign the imported jobs to portal
    SQL*Plus with Portal : Post import changes
    Recreate the Portal intermedia indexes
    Correct an import error on wwsrc_preference$
    Make additional post-import changes by updating some portal tables, replacing the development hostname, port, or domain with the production ones.
    Rewire the portal repository with ptlconfig -dad portal
    Restart the midtier
    Here is a sample script to do this on Unix. You will need to adapt the script to your needs.
    imp_portal_schema.sh
    # BASH Script : imp_portal_schema.sh
    # Version : 1.3
    # Portal : 9.0.4.0
    # History :
    # mgueury - creation
    # Description:
    # This script imports a portal dump file and relinks it with an
    # infrastructure.
    # Script to be started from the MIDTIER
    # -------------------------- Environment variables --------------------------
    . portal_env.sh
    # Development and Production machine hostname and port
    # Example
    # .._HOSTNAME machine.domain.com (name of the MIDTIER)
    # .._PORT 7782 (http port of the MIDTIER)
    # .._DN dc=domain,dc=com (domain name in a LDAP way)
    # These values can be determined automatically with the iasconfig.xml file of dev
    # and prod. But if you do not know or remember the dev hostname and port, this
    # query should find it.
    # > select name, http_url from wwpro_providers$ where http_url like 'http%'
    # These variables are used in the
    # > step 4 - security / import OID users and groups
    # > step 6 - post import changes (PORTAL)
    # Set the env variables of the DEV instance
    rm /tmp/iasconfig_env.sh
    xml -f -s xsl/portal_env_unix.xsl -o /tmp/iasconfig_env.sh exp_data/iasconfig.xml
    . /tmp/iasconfig_env.sh
    export DEV_HOSTNAME=$WEBCACHE_HOSTNAME
    export DEV_PORT=$WEBCACHE_LISTEN_PORT
    export DEV_DN=dc=`echo $OID_HOSTNAME | cut -d '.' -f2,3,4,5,6 --output-delimiter=',dc='`
    # Set the env variables of the PROD instance
    . portal_env.sh
    export PROD_HOSTNAME=$WEBCACHE_HOSTNAME
    export PROD_PORT=$WEBCACHE_LISTEN_PORT
    export PROD_DN=dc=`echo $OID_HOSTNAME | cut -d '.' -f2,3,4,5,6 --output-delimiter=',dc='`
    # ------------------------------ Help function -----------------------------------
    function press_any_key() {
    if [ $PRESS_ANY_KEY_AFTER_EACH_STEP = "Y" ]; then
    echo
    echo Press enter to continue
    read $ANY_KEY
    else
    echo
    fi
    }
    echo "------------------------------- Import ------------------------------------"
    # create a directory for the logs
    mkdir imp_log
    # copy the env variables in the log just in case
    export > imp_log/imp_env_variable.txt
    echo "--------------------- step 1 - stop the midtier"
    # This step is needed to avoid most cases of ORA-01940 (user connected)
    # when dropping the portal user
    $MIDTIER_ORACLE_HOME/opmn/bin/opmnctl stopall
    press_any_key
    echo "--------------------- step 2 - drop and create empty users"
    sqlplus "sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba" <<IASDB
    spool imp_log/drop_create_user.log
    ---- Drop users
    -- Warning: You need to stop all SQL*Plus connections to the
    -- portal schema before this, else the drop will give an
    -- ORA-01940: cannot drop a user that is currently connected
    drop user portal_public cascade;
    drop user portal_app cascade;
    drop user portal_demo cascade;
    drop user portal cascade;
    ---- Recreate the users and give them grants
    -- The new users will have the same passwords as the users we just dropped
    -- above. Do not forget to add your exported custom users
    create user portal identified by $PORTAL_PASSWORD default tablespace portal;
    grant connect,resource,dba to portal;
    create user portal_app identified by $PORTAL_APP_PASSWORD default tablespace portal;
    grant connect,resource to portal_app;
    create user portal_demo identified by $PORTAL_DEMO_PASSWORD default tablespace portal;
    grant connect,resource to portal_demo;
    create user portal_public identified by $PORTAL_PUBLIC_PASSWORD default tablespace portal;
    grant connect,resource to portal_public;
    alter user portal_public grant connect through portal;
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wwv/wdbigra.sql portal
    exit
    IASDB
    press_any_key
    echo "--------------------- step 3 - import"
    imp userid="'sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba'" file=exp_data/portal_exp.dmp grants=y log=imp_log/import.log full=y
    press_any_key
    echo "--------------------- step 4 - import the OID users and groups (optional)"
    # Some errors will be raised when running the ldapadd because at least the
    # default entries will not be able to be inserted. Remove them from the
    # ldif file if you want to avoid them. Due to the flag '-c', ldapadd ignores
    # duplicate entries. Another more radical solution is to erase all the entries
    # of the users and groups in OID before running the import.
    # Replace the domain name in the XML files.
    cat exp_data/portal_groups.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/portal_groups.xml
    cat exp_data/portal_users.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/temp_users.xml
    # Remove the authpassword attributes with a XSL stylesheet
    xml -f -s xsl/del_authpassword.xsl -o imp_log/portal_users.xml imp_log/temp_users.xml
    echo '--- Importing Groups'
    ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X imp_log/portal_groups.xml -v
    echo '--- Importing Users'
    ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X imp_log/portal_users.xml -v
    press_any_key
    echo "--------------------- step 5 - post import changes (SYS)"
    sqlplus "sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba" <<IASDB
    spool imp_log/sys_post_changes.log
    ---- Recompile the invalid packages
    -- On the midtier, the script utlrp is not present. This step
    -- uses a copy of it stored in patch/utlrp.sql
    select count(*) INVALID_OBJECT_BEFORE from all_objects where status='INVALID';
    start patch/utlrp.sql
    set lines 999
    select count(*) INVALID_OBJECT_AFTER from all_objects where status='INVALID';
    ---- Jobs
    -- Reassign the JOBS imported to PORTAL. After the import, they belong
    -- incorrectly to the user SYS.
    update dba_jobs set LOG_USER='PORTAL', PRIV_USER='PORTAL' where schema_user='PORTAL';
    commit;
    exit
    IASDB
    press_any_key
    echo "--------------------- step 6 - post import changes (PORTAL)"
    sqlplus $PORTAL_USER/$PORTAL_PASSWORD@$PORTAL_TNS <<IASDB
    set serveroutput on
    spool imp_log/portal_post_changes.log
    ---- Intermedia
    -- Recreate the portal indexes.
    -- inctxgrn.sql is missing from the 9040 CD-ROMS. This is the bug 3536937.
    -- Fixed in 9041. The missing script is contained in the downloadable zip file.
    start patch/inctxgrn.sql
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/ctxcrind.sql
    ---- Import error
    alter table "WWSRC_PREFERENCE$" add constraint wwsrc_preference_pk
    primary key (subscriber_id, id)
    using index wwsrc_preference_idx1
    begin
    DBMS_RLS.ADD_POLICY ('', 'WWSRC_PREFERENCE$', 'WEBDB_VPD_POLICY',
    '', 'webdb_vpd_sec', 'select, insert, update, delete', TRUE,
    static_policy=>true);
    end ;
    ---- Modify tables with full URLs
    -- If the domain name of prod and dev are different, this step is really important.
    -- It modifies the portal tables that contain references to the hostname or port
    -- of the development machine. (For more explanation, see Additional steps in the note.)
    -- groups (dn)
    update wwsec_group$
    set dn=replace( dn, '$DEV_DN', '$PROD_DN' );
    update wwsec_group$
    set dn_hash = wwsec_api_private.get_dn_hash( dn );
    -- users (dn)
    update wwsec_person$
    set dn=replace( dn, '$DEV_DN', '$PROD_DN' );
    update wwsec_person$
    set dn_hash = wwsec_api_private.get_dn_hash( dn );
    -- subscriber
    update wwsub_model$
    set dn=replace( dn, '$DEV_DN', '$PROD_DN' ), GUID=':1'
    where dn like '%$DEV_DN%';
    -- preferences
    update wwpre_value$
    set varchar2_value=replace( varchar2_value, '$DEV_DN', '$PROD_DN' )
    where varchar2_value like '%$DEV_DN%';
    update wwpre_value$
    set varchar2_value=replace( varchar2_value, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
    where varchar2_value like '%$DEV_HOSTNAME:$DEV_PORT%';
    -- page url items
    update wwv_things
    set title_link=replace( title_link, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
    where title_link like '%$DEV_HOSTNAME:$DEV_PORT%';
    -- web providers
    update wwpro_providers$
    set http_url=replace( http_url, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
    where http_url like '%$DEV_HOSTNAME:$DEV_PORT%';
    -- html links created by the RTF editor inside text items
    update wwv_text
    set text=replace( text, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
    where text like '%$DEV_HOSTNAME:$DEV_PORT%';
    -- Portlet metadata nls: help URL
    update wwpro_portlet_metadata_nls$
    set help_url=replace( help_url, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
    where help_url like '%$DEV_HOSTNAME:$DEV_PORT%';
    -- URL items (There is a trigger on this table building absolute_url automatically)
    update wwsbr_url$
    set absolute_url=replace( absolute_url, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
    where absolute_url like '%$DEV_HOSTNAME:$DEV_PORT%';
    -- Things attributes
    update wwv_thingattributes
    set value=replace( value, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
    where value like '%$DEV_HOSTNAME:$DEV_PORT%';
    commit;
    exit
    IASDB
    press_any_key
    echo "--------------------- step 7 - ptlconfig"
    # Configure portal such that portal uses the infrastructure database
    cd $MIDTIER_ORACLE_HOME/portal/conf/
    ./ptlconfig -dad portal
    cd -
    mv $MIDTIER_ORACLE_HOME/portal/logs/ptlconfig.log imp_log
    press_any_key
    echo "--------------------- step 8 - restart the midtier"
    $MIDTIER_ORACLE_HOME/opmn/bin/opmnctl startall
    date
    Each step can generate its own errors, for many different reasons. It is better to run the import step by step the first time.
    Do not forget to check the output of the log files created during the various steps of the import (a quick grep check is sketched after the list):
    imp_log/drop_create_user.log - spool of dropping and recreating the portal users
    imp_log/import.log - import log file when importing the portal_exp.dmp file
    imp_log/sys_post_changes.log - spool of the post-import changes made as SYS
    imp_log/portal_post_changes.log - spool of the post-import changes made as PORTAL
    imp_log/ptlconfig.log - log file of ptlconfig when rewiring the midtier
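    A quick way to scan all of these logs at once for obvious problems is a single grep over the log directory; a simple sketch (adjust the patterns to your taste):
    grep -iE "ORA-|IMP-|EXP-|error" imp_log/*.log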
    Step 4 - Test
    A. Check the log files
    B. Test the website and see if it works fine.
    Step 5 - take a backup
    Take a backup of all ORACLE_HOMEs and databases to protect yourself against hardware problems. You need to copy:
    All the files of the 2 ORACLE_HOMEs
    And all the database files.
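    A minimal cold-backup sketch, assuming the midtier and the infrastructure (including the database) are completely stopped first, and using a hypothetical target directory /backup (list the datafile locations beforehand with "select name from v$datafile;"):
    # stop the midtier and the infrastructure before copying anything
    $MIDTIER_ORACLE_HOME/opmn/bin/opmnctl stopall
    # archive the two ORACLE_HOMEs
    tar czf /backup/midtier_oracle_home.tar.gz $MIDTIER_ORACLE_HOME
    tar czf /backup/infra_oracle_home.tar.gz $INFRA_ORACLE_HOME
    # copy the database files (hypothetical source path; use the locations reported by v$datafile,
    # v$controlfile and v$logfile)
    cp -p /path/to/oradata/*.dbf /backup/oradata/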
    Step 6 - Additional steps
    Here are some additional steps.
    SSO external applications ( they are part of the orasso schema and are not imported yet )
    Page URL items ( they seem to store the full URL ) - included in imp_portal_schema.sh
    Web Providers ( the URL needs to be changed ) - included in imp_portal_schema.sh
    Text items edited with the RTF editor in IE and containing links - included in imp_portal_schema.sh
    Most of them are taken care of by the post-import changes step of the script, except the first one.
    1. SSO import
    This script imports only Portal and the users/groups of OID, not the list of external applications contained in the orasso schema.
    In Portal 9.0.4, there is a script called SSOMIG that resides in $INFRA_ORACLE_HOME/sso/bin and allows you to move:
    Definitions and user data for external applications
    Registration URLs and tokens for partner applications
    Connection information used by OracleAS Discoverer to access various data sources
    See:
    Oracle® Application Server Single Sign-On Administrator's Guide 10g (9.0.4) Part Number B10851-01
    14. Exporting and Importing Data
    2. Page items: the page URL items store the full URL.
    This is Bug 2661805 fixed in Portal 9.0.2.6.
    The following work-around is implemented in the post-import step of imp_portal_schema.sh:
    -- page url items
    update wwv_things
    set title_link=replace( title_link, 'dev.dev_domain.com:7778', 'prod.prod_domain.com:7778' )
    where title_link like '%dev.dev_domain.com:7778%';
    3. Web Providers
    The URLs of the web providers also need to be changed. Like the page items, they contain the full hostname and port of the web server.
    You can get the list of the URLs to change with this query:
    select name, http_url from PORTAL.WWPRO_PROVIDERS$ where http_url like '%';
    The following work-around is implemented in the post-import step of imp_portal_schema.sh:
    -- web providers
    update wwpro_providers$
    set http_url=replace( http_url, 'dev.dev_domain.com:7778', 'prod.prod_domain.com:7778' )
    where http_url like '%dev.dev_domain.com:7778%';
    4. The production and development machines do not share the same domain
    If the domains of production and development are not the same, the DN (the name in LDAP) of all users and groups needs to change.
    Let's say from
    dc=dev_domain,dc=com -> dc=prod_domain,dc=com
    1. Before uploading the LDIF/DSML files, all the strings in the 2 files that contain 'dc=dev_domain,dc=com' have to be replaced by 'dc=prod_domain,dc=com' (see the sed sketch after the SQL below).
    2. In the wwsec_group$ and wwsec_person$ tables in portal, the DNs need to change too.
    The following work-around is implemented in the post-import step of imp_portal_schema.sh:
    -- groups (dn)
    update wwsec_group$
    set dn=replace( dn, 'dc=dev_domain,dc=com', 'dc=prod_domain,dc=com' );
    update wwsec_group$
    set dn_hash = wwsec_api_private.get_dn_hash( dn );
    -- users (dn)
    update wwsec_person$
    set dn=replace( dn, 'dc=dev_domain,dc=com', 'dc=prod_domain,dc=com' );
    update wwsec_person$
    set dn_hash = wwsec_api_private.get_dn_hash( dn );
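    For step 1 above, the string replacement in the exported DSML files can be scripted the same way the import script does it with sed; a small sketch using the example domains and the note's exp_data/imp_log directory layout:
    sed -e 's/dc=dev_domain,dc=com/dc=prod_domain,dc=com/g' exp_data/portal_groups.xml > imp_log/portal_groups.xml
    sed -e 's/dc=dev_domain,dc=com/dc=prod_domain,dc=com/g' exp_data/portal_users.xml > imp_log/temp_users.xml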
    5. Text items with HTML links
    Sometimes people store full URLs inside their text items; this happens mostly when they insert links with the rich text editor in IE.
    The following work-around is implemented in the post-import step of imp_portal_schema.sh:
    -- html links created by the RTF editor inside text items
    update wwv_text
    set text=replace( text, 'dev.dev_domain.com:7778', 'prod.prod_domain.com:7778' )
    where text like '%dev.dev_domain.com:7778%';
    6. OID Custom password policy
    It happens quite often that people change the password policy of the OID server, because with the default policy the passwords expire after 60 days. If so, do not forget to make the same changes in the new installation.
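    To compare the two directories, you can dump the password policy attributes on both the development and the production OID and diff the output. A rough sketch (it searches by objectclass rather than assuming a fixed policy DN, since the location can vary; the attribute names are an assumption to verify against your release):
    ldapsearch -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD \
    -b "cn=oraclecontext" -s sub "(objectclass=pwdpolicy)" pwdmaxage pwdexpirewarning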
    PROBLEMS
    1. Import log has some errors
    A. EXP-00091 -Exporting questionable statistics
    You can ignore this error.
    B. IMP-00017 - WWSRC_PREFERENCE$
    When importing, there is one import error:
    IMP-00017: following statement failed with ORACLE error 921:
    "ALTER TABLE "WWSRC_PREFERENCE$" ADD "
    IMP-00003: ORACLE error 921 encountered
    ORA-00921: unexpected end of SQL command
    The primary key is not created. You can create it with this command
    in SQL*Plus with the user portal. Then re-add the missing VPD policy.
    alter table "WWSRC_PREFERENCE$" add constraint wwsrc_preference_pk
    primary key (subscriber_id, id)
    using index wwsrc_preference_idx1;
    begin
    DBMS_RLS.ADD_POLICY ('', 'WWSRC_PREFERENCE$', 'WEBDB_VPD_POLICY',
    '', 'webdb_vpd_sec', 'select, insert, update, delete', TRUE,
    static_policy=>true);
    end;
    /
    The post-import changes step in the script "imp_portal_schema.sh" takes care of this.
    C. IMP-00017 - WWDAV$ASL
    . importing table "WWDAV$ASL"
    Note: table contains ROWID column, values may be obsolete  113 rows imported
    This error is normal, the table really contains a ROWID column.
    D. IMP-00041 - Warning: object created with compilation warnings
    This error is normal too. The packages giving these errors have
    dependencies on packages not yet imported. A recompilation is done
    after the import.
    E. ldapadd error 'cannot add add entries containing authpasswords'
    # ldap_add: DSA is unwilling to perform
    # ldap_add: additional info: You cannot add entries containing authpasswords.
    "authpasswords" are automatically generated values from the real password of the user stored in userpassword. These values do not have to be exported from ldap.
    In the import script, I remove the additional tag with an XSL stylesheet 'del_authpassword.xsl'. See above.
    F. IMP-00017: WWSTO_SESSION$
    IMP-00017: following statement failed with ORACLE error 2298:
    "ALTER TABLE "WWSTO_SESSION$" ENABLE CONSTRAINT "WWSTO_SESS_FK1""
    IMP-00003: ORACLE error 2298 encountered
    ORA-02298: cannot validate (PORTAL.WWSTO_SESS_FK1) - parent keys not found
    Here is a work-around for the problem. I will integrate it in a future version of the scripts.
    SQL> delete from WWSTO_SESSION_DATA$;
    7690 rows deleted.
    SQL> delete from WWSTO_SESSION$;
    1073 rows deleted.
    SQL> commit;
    Commit complete.
    SQL> ALTER TABLE "WWSTO_SESSION$" ENABLE CONSTRAINT "WWSTO_SESS_FK1";
    Table altered.
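    After re-enabling the constraint, a quick check that it is back in place (standard dictionary view, run as PORTAL) could look like:
    select constraint_name, status, validated
    from user_constraints
    where table_name = 'WWSTO_SESSION$';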
    G. IMP-00017 - ORACLE error 1 - DBMS_JOB.ISUBMIT
    This error can appear during the import when the target database is not empty and has already been customized for some reason. For example, you export from an infrastructure and import into a database where a lot of other programs use jobs, and unhappily one of them uses the same job id.
    Due to the way the export/import of jobs is done, the jobs keep their ids after the import, and they may conflict.
    IMP-00017: following statement failed with ORACLE error 1: "BEGIN DBMS_JOB.ISUBMIT(JOB=>42,WHAT=>'begin execute immediate " "''begin wwutl_cache_sys.process_background_inval; end;'' ; exc" "eption when others then wwlog_api.log(p_domain=> ''utl'', " " p_subdomain=>''cache'', p_name=>''background'', " " p_action=>''process_background_inval'', p_information => ''E" "rror in process_background_inval ''|| sqlerrm);end;', NEXT_DATE=" ">TO_DATE('2004-08-19:17:32:16','YYYY-MM-DD:HH24:MI:SS'),INTERVAL=>'SYSDATE " "+ 60/(24*60)',NO_PARSE=>TRUE); END;"
    IMP-00003: ORACLE error 1 encountered ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
    ORA-06512: at "SYS.DBMS_JOB", line 97 ORA-06512: at line 1
    Solutions:
    1. use a freshly installed database,
    2. because the conflicting jobs are different in every custom installation, there is no clear rule. You can
    recreate the jobs lost after the import with other ids
    and/or change the job id of the other program before importing. This type of command can help you (you need to do it as SYS):
    select * from dba_jobs;
    update dba_jobs set job=99 where job=52;
    commit;
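    If the job that was lost is the portal cache-invalidation job quoted in the IMP-00017 message above, a sketch for recreating it under a new, automatically assigned id (run as PORTAL; the WHAT string is simplified here and drops the original wwlog_api error logging) is:
    variable jobno number;
    begin
    dbms_job.submit(
    job       => :jobno,
    what      => 'begin execute immediate ''begin wwutl_cache_sys.process_background_inval; end;''; exception when others then null; end;',
    next_date => sysdate,
    interval  => 'SYSDATE + 60/(24*60)');
    commit;
    end;
    /
    print jobno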
    2. Import in a RAC environment
    Be aware of the Bug 2479882 when the portal database is in a RAC database.
    Bug 2479882 : NEEDED TO BOUNCE DB NODES AFTER INSTALLING PORTAL 9.0.2 IN RAC NODE
    3. Intermedia
    After importing an environment, the Intermedia indexes are invalid. To correct the error, you need to run the following in SQL*Plus as PORTAL:
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/inctxgrn.sql
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/ctxcrind.sql
    But $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/inctxgrn.sql is missing in IAS 9.0.4.0. This is Bug 3536937. Fixed in 9041. The missing scripts are contained in the downloadable zip file (exp_schema904.zip : Attachment 276688.1:1 ), directory sql. This means that practically in 9040, you have to run
    start sql/inctxgrn.sql
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/ctxcrind.sql
    In the import script, this is done in the post-import changes step, which recreates the Portal Intermedia indexes.
    You cannot work around the problem without the scripts. Running ctxcrind.sql alone does not work. You will have this error:
    ORA-06510: PL/SQL: unhandled user-defined exception
    ORA-06512: at "PORTAL.WWERR_API_EXCEPTION", line 164
    ORA-06512: at "PORTAL.WWV_CONTEXT", line 1035
    ORA-06510: PL/SQL: unhandled user-defined exception
    ORA-06512: at "PORTAL.WWERR_API_EXCEPTION", line 164
    ORA-06512: at "PORTAL.WWV_CONTEXT", line 476
    ORA-06510: PL/SQL: unhandled user-defined exception
    ORA-20000: Oracle Text error:
    DRG-12603: CTXSYS does not own user datastore procedure: WWSBR_THING_CTX_69
    ORA-06512: at line 13
    4. ptlconfig
    If you try to run ptlconfig simply after an import you will get an error:
    Problem processing Portal instance: Configuring HTTP server settings : Installing cache data : SQL exception: ERROR: ORA-23421: job number 32 is not a job in the job queue
    This is because the import done by user SYS has imported the PORTAL jobs to the SYS schema in place of portal. The solution is to run
    update dba_jobs set LOG_USER='PORTAL', PRIV_USER='PORTAL' where schema_user='PORTAL';
    In the import script, this is done in the post-import changes step.
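    Before rerunning ptlconfig, you can verify that the jobs now belong to PORTAL with a quick query against the standard dictionary view (run as SYS):
    select job, log_user, priv_user, schema_user from dba_jobs where schema_user='PORTAL';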
    5. WWC-41417 - invalid credentials.
    When you try to login you get:
    Unexpected error encountered in wwsec_app_priv.process_signon (User-Defined Exception) (WWC-41417)
    An exception was raised when accessing the Oracle Internet Directory: 49: Invalid credentials
    Details
    Error:Operation: dbms_ldap.simple_bind_s
    OID host: machine.domain.com
    OID port number: 4032
    Entry DN: orclApplicationCommonName=PORTAL,cn=Portal,cn=Products,cn=OracleContext. (WWC-41743)
    Solution:
    - run secupoid.sql
    - rerun ptlconfig
    This problem has been seen after using ptlasst in place of ptlconfig.
    6. EXP-003 with a database 9.2.0.5 or 10.1.0.2
    In fact, the DB format of imp/exp has changed in 9.2.0.5 or 10.1.0.2. The EXP-3 error only occurs when the export from the 9.2.0.5.0 or 10.1.0.2.0 database is done with a lower release export utility, e.g. 9.2.0.4.0.
    Due to the way this note is written, the imp/exp utility used is the one of the midtier (9014); if your portal resides in a 9.2.0.5 database, it will not work. To work around the problem, there are 2 solutions:
    Change the script so that it uses the exp and imp commands of the database.
    Make a change to the 9.2.0.5 or 10.1.0.2 database to make it compatible with previous versions. The change is to modify an internal database view before exporting/importing the data.
    A work-around is given in Bug 3784697
    1. Make a note of the export definition of exu9tne from
    $OH/rdbms/admin/catexp.sql
    2. Copy this to a new file and add "UNION ALL select * from sys.exu9tneb" to the end of the definition
    3. Run this as sys against the DB to be exported.
    4. Export as required
    5. Put back the original definition of exu9tne
    eg: For 9204 the workaround view would be:
    CREATE OR REPLACE VIEW exu9tne (
    tsno, fileno, blockno, length) AS
    SELECT ts#, segfile#, segblock#, length
    FROM sys.uet$
    WHERE ext# = 1
    UNION ALL
    select * from sys.exu9tneb
    7. EXP-00006: INTERNAL INCONSISTENCY ERROR
    This is Bug 2906613.
    The work-around given in this bug is the following:
    - create the following view, connected as sys, before running export:
    CREATE OR REPLACE VIEW exu8con (
    objid, owner, ownerid, tname, type, cname,
    cno, condition, condlength, enabled, defer,
    sqlver, iname) AS
    SELECT o.obj#, u.name, c.owner#, o.name,
    decode(cd.type#, 11, 7, cd.type#),
    c.name, c.con#, cd.condition, cd.condlength,
    NVL(cd.enabled, 0), NVL(cd.defer, 0),
    sv.sql_version, NVL(oi.name, '')
    FROM sys.obj$ o, sys.user$ u, sys.con$ c,
    sys.cdef$ cd, sys.exu816sqv sv, sys.obj$ oi
    WHERE u.user# = c.owner# AND
    o.obj# = cd.obj# AND
    cd.con# = c.con# AND
    cd.spare1 = sv.version# (+) AND
    cd.enabled = oi.obj# (+) AND
    NOT EXISTS (
    SELECT owner, name
    FROM sys.noexp$ ne
    WHERE ne.owner = u.name AND
    ne.name = o.name AND
    ne.obj_type = 2)
    The modification of exu8con simply adds support for a constraint type that had not previously been supported by this view. There is no negative impact.
    8. WWSBR_DOC_CTX_54 is invalid
    After the recompilation of the packages, one object remains invalid (see sys_post_changes.log):
    INVALID_OBJECT_AFTER
    1
    select owner, object_name from all_objects where status='INVALID'
    CTXSYS WWSBR_DOC_CTX_54
    CREATE OR REPLACE procedure WWSBR_DOC_CTX_54
    (rid in rowid, bilob in out NOCOPY blob)
    is begin PORTAL.WWSBR_CTX_PROCS.DOC_CTX(rid,bilob);end;
    This object is not used anymore by portal. The error can be ignored. The procedure can be removed too. This is Bug 3559731.
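    If you prefer to clean it up rather than ignore it, a sketch for dropping the orphaned procedure (run as SYS or CTXSYS, after double-checking the name against your own list of invalid objects):
    drop procedure CTXSYS.WWSBR_DOC_CTX_54;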
    9. You do not have permission to perform this operation. (WWC-44131)
    It seems that there are problems if
    - the groups on the production machine do not reside in the default place in OID,
    - and the group creation base and group search base were changed.
    In that case the cloning of the repository works without problems, but it seems that the command 'ptlconfig -dad portal' does not reset the GUIDs and DNs of the groups correctly. I have not checked this yet.
    The solution seems to be to use the script given in the 9.0.2 Note 228516.1 and run group_sec.sql to reset all the DNs and GUIDs in the copied instance.
    10. Invalid Java objects when exporting from a 9.x database and importing in a 10g database
    If you export from a 9.x database and import in a 10g database, after running utlrp.sql, 18 Java objects will be invalid.
    select object_name, object_type from user_objects where status='INVALID'
    SQL> /
    OBJECT_NAME OBJECT_TYPE
    /556ab159_Handler JAVA CLASS
    /41bf3951_HttpsURLConnection JAVA CLASS
    /ce2fa28e_ProviderManagerClien JAVA CLASS
    /c5b98d35_ServiceManagerClient JAVA CLASS
    /d77cf2ab_SOAPServlet JAVA CLASS
    /649bf254_JavaProvider JAVA CLASS
    /a9164b8b_SpProvider JAVA CLASS
    /2ee43ac9_StatefulEJBProvider JAVA CLASS
    /ad45acec_StatelessEJBProvider JAVA CLASS
    /da1c4a59_EntityEJBProvider JAVA CLASS
    /66fdac3e_OracleSOAPHTTPConnec JAVA CLASS
    /939c36f5_OracleSOAPHTTPConnec JAVA CLASS
    org/apache/soap/rpc/Call JAVA CLASS
    org/apache/soap/rpc/RPCMessage JAVA CLASS
    org/apache/soap/rpc/Response JAVA CLASS
    /198a7089_Message JAVA CLASS
    /2cffd799_ProviderGroupUtils JAVA CLASS
    /32ebb779_ProviderGroupMgrProx JAVA CLASS
    18 rows selected.
    This is a known issue. It can be solved by applying one of the following patches, depending on your IAS version.
    Bug 3405173 - PORTAL 9.0.4.0.0 PATCH FOR 10G DB UPGRADE (FROM 9.0.X AND 9.2.X)
    Bug 4100409 - PORTAL 9.0.4.1.0 PATCH FOR 10G DB UPGRADE (FROM 9.0.X AND 9.2.X)
    Bug 4100417 - PORTAL 9.0.4.2.0 PATCH FOR 10G DB UPGRADE (FROM 9.0.X AND 9.2.X)
    11. Import : IMP-00003: ORACLE error 30510 encountered
    When importing Portal 9.0.4.x, it can happen that the import on the database side produces an error ORA-30510. The new Perl scripts work around the issue in the portal_post_import.sql script, but the BASH scripts do not. If you use the BASH scripts, after the import please run this command manually in SQL*Plus logged in as portal.
    ---- Import error 2 - ORA-30510 when importing
    CREATE OR REPLACE TRIGGER logoff_trigger
    before logoff on schema
    begin
    -- Call wwsec_oid.unbind to close open OID connections if any.
    wwsec_oid.unbind;
    exception
    when others then
    -- Ignore all the errors encountered while unbinding.
    null;
    end logoff_trigger;
    /
    This is logged as Bug 4458413.
    12. Exporting from a 9.0.1 database and importing in a 9.2.0.5+ or 10g DB
    When exporting from a 9.0.1 database and importing into a 10g database, it can happen that the Java classes do not get compiled correctly. The following errors are seen:
    ORA-29534: referenced object PORTAL.oracle/net/www/proto/https/HttpsURLConnection could not be resolved
    errors:: class oracle/net/www/proto/https/HttpsURLConnection
    ORA-29521: referenced name oracle/security/ssl/OracleSSLSocketFactoryImpl could not be found
    ORA-29521: referenced name oracle/security/ssl/OracleSSLSocketFactory could not be found
    In such a case, please apply the following patches after the import in the 10g database.
    Bug 3405173 PORTAL REPOS DB UPGRADE TO 10G: for Portal 9.0.4.0
    Bug 4100409 PORTAL REPOS DB UPGRADE TO 10G: for Portal 9.0.4.1
    Main Differences with Portal 9.0.2
    For those used to this technique with Portal 9.0.2, here are the main differences compared with the same note for Portal 9.0.2.
    Cutter database
    Portal 9.0.2: can be part of an infrastructure database or of a custom external database; the portal schema is imported into an empty database.
    Portal 9.0.4: can only be installed in a 'Cutter database', a database created with RepCA or OUI that always contains OID, DCM and so on; the portal schema is imported into a 'Cutter database' (new).
    group_sec.sql
    Portal 9.0.2: group_sec.sql is used to correct the GUIDs of OID stored in Portal.
    Portal 9.0.4: ptlconfig -dad portal -oid is used to correct the GUIDs of OID stored in Portal (new).
    1 script
    Portal 9.0.2: the import / export is divided into several steps with several scripts.
    Portal 9.0.4: the import is done by one script in one step; the additional steps are included in the script. This requires knowing the hostname and port of the original development machine (new).
    Import
    Portal 9.0.2: the steps are: creation of an empty database, creation of the users with password=username, import.
    Portal 9.0.4: the steps are: creation of an IAS 10g infrastructure DB (RepCA or OUI), deletion of the new portal schemas (new), creation of the users with the same passwords as the schemas just dropped, import.
    DAD
    Portal 9.0.2: the DAD needed to be changed.
    Portal 9.0.4: the passwords are not changed, so the DAD does not need to be changed.
    Bugs
    Portal 9.0.2: 2 bugs were worked around by change_host.sh.
    Portal 9.0.4: some additional tables need to be updated manually before running ptlasst. This is Bug 3762961.
    Export of LDAP
    Portal 9.0.2: the export is done in LDIF files. If prod and dev have different domains, it is quite difficult to change the domain name in these files due to the line wrapping at 78 characters.
    Portal 9.0.4: the export is done in XML files, in the DSML format (new). It is a lot easier to change the XML files if the domain name is different from PROD to DEV.
    Download
    Portal 9.0.2: you have to cut and paste the scripts.
    Portal 9.0.4: the scripts are attached to the note. Just download them.
    Rewiring
    Portal 9.0.2: uses ptlasst:
    ptlasst.csh -mode MIDTIER -i custom -s $PORTAL_USER -sp $PORTAL_PASSWORD -c $PORTAL_HOSTNAME:$PORTAL_DB_PORT:$PORTAL_SERVICE_NAME -sdad $PORTAL_DAD -o orasso -op $ORASSO_PASSWORD -odad orasso -host $MIDTIER_HOSTNAME -port $MIDTIER_HTTP_PORT -ldap_h $INFRA_HOSTNAME -ldap_p $OID_PORT -ldap_w $IAS_PASSWORD -pwd $IAS_PASSWORD -sso_c $INFRA_HOSTNAME:$INFRA_DB_PORT:$INFRA_SERVICE_NAME -sso_h $INFRA_HOSTNAME -sso_p $INFRA_HTTP_PORT -ultrasearch -oh $MIDTIER_ORACLE_HOME -mc false -mi true -chost $MIDTIER_HOSTNAME -cport_i $WEBCACHE_INV_PORT -cport_a $WEBCACHE_ADM_PORT -wc_i_pwd $IAS_PASSWORD -emhost $INFRA_HOSTNAME -emport $EM_PORT -pa orasso_pa -pap $ORASSO_PA_PASSWORD -ps orasso_ps -pp $ORASSO_PS_PASSWORD -iasname $IAS_NAME -verbose -portal_only
    Portal 9.0.4: uses ptlconfig (new):
    ptlconfig -dad portal
    Environment variables
    Portal 9.0.2: a lot of environment variables are needed.
    Portal 9.0.4: just 3 environment variables are needed: the password of SYS, the password of IAS, and the ORACLE_HOME of the midtier. All the rest is found in iasconfig.xml and LDAP (new).
    TO DO
    - Check if the orclcommonapplication name fits SID.hostname
    - Check what gives the import of a portal30 upgraded schema inside a schema named portal
    - Explain how to copy the portal*.dbf files in place of export/import and the limitation of tra

  • Delete all entries from the following tables - Follow-up Activities (oracle)

    Hello,
    I performed a homogeneous system copy of our development BW system with the database (oracle 11.2.0.3) from the BW production system!
    I already started the Oracle database and the SAP system on the target system/server (development BW system) and I'm doing some follow-up activities. One of these activities (in the system copy guide, 6.2.3.2 Activities at Database Level) is to delete all entries from the following tables:
    DBSTATHORA, DBSTAIHORA, DBSTATIORA, DBSTATTORA
    I tried to delete them using SQL Plus:
    sqlplus /nolog
    SQL> connect /as sysdba
    SQL> delete from DBSTATTORA;
    delete from DBSTATTORA
    ERROR at line 1:
    ORA-00942: table or view does not exist
    ... and it shows me that error message.
    This is strange because when I go to transaction SE14 and check DBSTATTORA I see that the table exists and contains a lot of entries!
    Why is this happening in SQL*Plus? Am I running the correct SQL statement for this type of task or not?
    How can I delete the entries of those tables? Can I do that using transaction SE14?
    Can you help me please?
    Thank you,
    samid raif

    Hello
    sqlplus /nolog
    SQL> connect /as sysdba
    SQL> delete from DBSTATTORA;
    delete from DBSTATTORA
    ERROR at line 1:
    ORA-00942: table or view does not exist
    It doesn't surprise me, as you are not specifying the schema name here. Instead it should be:
    delete from SAPSR3.DBSTATTORA;
    Assuming the schema owner is SAPSR3; if the owner is different, replace it with the correct one.
    Regards
    RB
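    If you are not sure which schema owns these tables, a short SQL*Plus sketch along these lines finds the owner first and then clears all four tables (SAPSR3 is only an assumption; use the owner returned by the first query):
    sqlplus /nolog
    SQL> connect / as sysdba
    SQL> select distinct owner from dba_tables
         where table_name in ('DBSTATHORA','DBSTAIHORA','DBSTATIORA','DBSTATTORA');
    SQL> delete from SAPSR3.DBSTATHORA;
    SQL> delete from SAPSR3.DBSTAIHORA;
    SQL> delete from SAPSR3.DBSTATIORA;
    SQL> delete from SAPSR3.DBSTATTORA;
    SQL> commit;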

  • I've cleared almost 30 gig off of my hard drive in the past 2 weeks, and it will temporarily show that in the Get Info box.  But hours later, I am still getting a disk full error and all of the memory has disappeared.

    I've cleared almost 30 gig off of my hard drive in the past 2 weeks, and it will temporarily show that in the Get Info box.  But hours later, I am still getting a disk full error and all of the memory has disappeared.  I have cleared my backup logs from Time Machine, checked the mail folder, cleaned out tons of photos and videos and it still keeps filling back up.
    In checking the log files, here is the message repeated over and over....
    Jul  4 07:18:13 Donald-Keele-Jrs-iMac-123.local CalendarAgent[213]: CoreData: error: (21) I/O error for database at /Users/donjr/Library/Calendars/Calendar Cache.  SQLite error code:21, 'unable to open database file'
    Jul  4 07:18:13 Donald-Keele-Jrs-iMac-123.local CalendarAgent[213]: Core Data: annotation: -executeRequest: encountered exception = I/O error for database at /Users/donjr/Library/Calendars/Calendar Cache.  SQLite error code:21, 'unable to open database file' with userInfo = {
                  NSFilePath = "/Users/donjr/Library/Calendars/Calendar Cache";
                  NSSQLiteErrorDomain = 21;
    Jul  4 07:18:14 Donald-Keele-Jrs-iMac-123.local cfprefsd[180]: CFPreferences: error creating file /Users/donjr/Library/Preferences/com.apple.iPhoto.plist.t3l894p: 28
    Jul  4 07:18:30 Donald-Keele-Jrs-iMac-123.local Printer Pro Desktop[275]: Empty task
    Jul  4 07:18:33 Donald-Keele-Jrs-iMac-123.local Microsoft Sync Services[8149]: [0x16697c0] |ISyncSession|Warning| com.microsoft.Entourage2008: transitioning to cancel - session cancelled by server: Client 'com.microsoft.Entourage2008' tried to start a session for the plan 45AD80C3-0D52-4CF2-8CBA-103564B6C47C and the plan no longer exists.
    Jul  4 07:18:33 Donald-Keele-Jrs-iMac-123.local Microsoft Sync Services[8149]: Warning: NSBundle NSBundle </Applications/Microsoft Office 2008/Office/Microsoft Sync Services.app/Contents/Resources/MicrosoftOfficeNotes.syncschema> (not yet loaded) was released too many times. For compatibility, it will not be deallocated, but this may change in the future. Set a breakpoint on __NSBundleOverreleased() to debug
    Jul  4 07:18:33 Donald-Keele-Jrs-iMac-123.local Microsoft Sync Services[8149]: Warning: NSBundle NSBundle </Users/donjr/Library/Sync Services/Schemas/MicrosoftOfficeNotes.syncschema> (not yet loaded) was released too many times. For compatibility, it will not be deallocated, but this may change in the future. Set a breakpoint on __NSBundleOverreleased() to debug
    Jul  4 07:18:45 Donald-Keele-Jrs-iMac-123 kernel[0]: (default pager): [KERNEL]: default_pager_backing_store_monitor - send LO_WAT_ALERT
    Jul  4 07:18:45 Donald-Keele-Jrs-iMac-123 kernel[0]: macx_swapoff SUCCESS
    Jul  4 07:19:31 Donald-Keele-Jrs-iMac-123.local Printer Pro Desktop[275]: Empty task
    Any ideas on what to do next?
    I'm running and iMac 20-inch  early 2009
    Processor  2.66 GHz Intel Core 2 Duo
    Memory  8 GB 1067 MHz DDR3
    Graphics  NVIDIA GeForce 9400 256 MB
    Software  OS X 10.8.4 (12E55)

    Step 1
    Quit Calendar. Triple-click the line below to select it:
    ~/Library/Calendars/Calendar Cache
    Right-click or control-click the highlighted line and select
    Services ▹ Reveal
    from the contextual menu. A Finder window should open with a file named "Calendar Cache" selected.
    Move the selected file to the Trash. There may be one or two other files in the same folder with names that begin in "Calendar Cache". If so, delete those files too.
    Step 2
    Empty the Trash if you haven't already done so. If you use iPhoto, empty its internal Trash as well:
    iPhoto ▹ Empty Trash
    Then reboot. That will temporarily free up some space.
    According to Apple documentation, you need at least 9 GB of available space on the startup volume (as shown in the Finder Info window) for normal operation. You also need enough space left over to allow for growth of your data. There is little or no performance advantage to having more available space than the minimum Apple recommends. Available storage space that you'll never use is wasted space.
    To locate large files, you can use Spotlight. That method may not find large folders that contain a lot of small files.
    You can more effectively use a tool such as OmniDiskSweeper (ODS) to explore your volume and find out what's taking up the space. You can also delete files with it, but don't do that unless you're sure that you know what you're deleting and that all data is safely backed up. That means you have multiple backups, not just one.
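    If you would rather get a quick overview from the command line before installing anything, a rough equivalent that only reports sizes and deletes nothing (run from Terminal with an administrator account; the BSD du/sort options below should be available on OS X 10.8) is:
    sudo du -x -k -d 2 / 2>/dev/null | sort -n | tail -n 25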
    Deleting files inside an iPhoto or Aperture library will corrupt the library. Any changes to a photo library must be made from within the application that created it. The same goes for Mail files.
    Proceed further only if the problem isn't solved by the above steps.
    ODS can't see the whole filesystem when you run it just by double-clicking; it only sees files that you have permission to read. To see everything, you have to run it as root.
    Back up all data now.
    Install ODS in the Applications folder as usual. Quit it if it's running.
    Triple-click the line of text below to select it, then copy the selected text to the Clipboard (command-C):
    sudo /Applications/OmniDiskSweeper.app/Contents/MacOS/OmniDiskSweeper
    Launch the Terminal application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Terminal in the icon grid.
    Paste into the Terminal window (command-V). You'll be prompted for your login password, which won't be displayed when you type it. You may get a one-time warning not to screw up. If you see a message that your username "is not in the sudoers file," then you're not logged in as an administrator.
    The application window will open, eventually showing all files in all folders. It may take some minutes for ODS to list all the files.
    I don't recommend that you make a habit of doing this. Don't delete anything while running ODS as root. If something needs to be deleted, make sure you know what it is and how it got there, and then delete it by other, safer, means. When in doubt, leave it alone or ask for guidance.
    When you're done with ODS, quit it and also quit Terminal.
