Patch dependency
Hi!
Here is my problem. We use Sun Connection for patch deployment.
This works fine! Recently I was asked to submit a report listing
which patch is dependent on which patch.
It is possible to do this by going on SunSolve, getting your patch's
README and finding its dependencies.
My question: is there a tool out there that you can use to find
the dependency tree of a patch? In other words, if I input patch
xxxxxx-xx, it is dependent on patch yyyyyy-yy, which in turn
is dependent on
..
Is there such a tool?
Dan
You'll be happy to hear that PCA already does what you want. Try:
$ wget http://www.par.univie.ac.at/solaris/pca/stable/pca
$ chmod +x pca
$ ./pca -l 144488
Patch IR CR RSB Age Synopsis
142911 -- < 01 R-- 129 SunOS 5.10: Place Holder patch
142933 -- < 02 R-- 129 SunOS 5.10: failsafe patch
142909 -- < 17 RS- 129 SunOS 5.10: kernel patch
144488 -- < 06 RS- 25 SunOS 5.10: kernel patch
It lists the specified patch and all its requirements (recursively) in the correct order. Of course it can also download and install the patches - you need to specify your MOS account for that. See the docs (e.g. pca --man).
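If you need the report for a whole list of patches, a small wrapper around pca -l will do (a sketch; it assumes a file patches.txt with one patch ID per line and pca in the current directory):
#!/bin/sh
# Print the dependency listing for every patch ID in patches.txt (sketch).
# Assumes pca has been downloaded and made executable as shown above.
while read id; do
    [ -z "$id" ] && continue              # skip blank lines
    echo "=== Dependency tree for $id ==="
    ./pca -l "$id"                        # the patch plus all its requirements, in order
done < patches.txt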
One caveat: Oracle includes information only about the most current release of each patch. So it's not possible to list e.g. the requirements of 144488-05, simply because PCA doesn't have the necessary data. Usually that's not a problem, as you deal with the most recent revision of a patch anyway.
Martin (author of PCA).
P.S.: As for the donation, look at http://www.par.univie.ac.at/solaris/pca/donation.html :-)
Similar Messages
-
How to install APAC, LATAM and EMEA localization patches
Hi,
I was told to install APAC, LATAM and EMEA localization patches on our instance.
Please help me how to proceed with this patching, below are the details about our instance.
eBS version: R12.1.3
OS Version: REL 5.8
DB version: 11.2.0.1
DB Characterset: US7ASCII.
Does this patching depend on the existing database character set?
I referred to a few threads on Oracle forums and MetaLink notes, but I did not get a clear idea of where and how to start. Your help is highly appreciated.
Regards,
Siva.
Hi Srini/Hussein,
I am following the navigation (License Manager --> Country-specific Functionalities) as per note ID 351900.1, but JA, JE, JG and JL are not listed in it; the values below are shown for reference.
I was told that the country codes are JA, JE, JG and JL. Do I need to perform any action to get JA, JE, JG and JL into the list below? I think that from the list we can select the respective country name and enable the functionality.
I am a DBA doing this activity for the first time; please let me know what details I should get from our application team to proceed with this activity.
Srini: please provide the URL of the MOS doc you are referring to.
Select Country Name Country Short Name
1 Argentina AR
2 Australia AU
3 Austria AT
4 Belgium BE
5 Bolivia, Plurinational State of BO
6 Brazil BR
7 Canada CA
8 Chile CL
9 China CN
10 Colombia CO
11 Costa Rica CR
12 Czech Republic CZ
13 Denmark DK
14 Dominican Republic DO
15 Ecuador EC
16 El Salvador SV
17 Finland FI
18 France FR
19 Germany DE
20 Greece GR
21 Guatemala GT
22 Honduras HN
23 Hungary HU
24 Iceland IS
25 Israel IL
26 Italy IT
27 Jamaica JM
28 Japan JP
29 Korea, Republic of KR
30 Mexico MX
31 Netherlands NL
32 Nicaragua NI
33 Norway NO
34 Panama PA
35 Paraguay PY
36 Peru PE
37 Poland PL
38 Portugal PT
39 Puerto Rico PR
40 Singapore SG
41 Spain ES
42 Sweden SE
43 Switzerland CH
44 Taiwan TW
45 Thailand TH
46 Trinidad and Tobago TT
47 Turkey TR
48 United Kingdom GB
49 Uruguay UY
50 Venezuela, Bolivarian Republic of VE
Regards,
Siva. -
Problems downloading patches (automated)
I've been working with a number of machines and have a recurring problem trying to download patches from the sunsolve.sun.com patch server. Ever since the update that requires you to "accept", the system pretty consistently cannot find the patch files I'm requesting. Some examples include the following patches, which seem to work only about 1 out of 15 tries (so it does eventually work in most cases, but typically the Sun server is failing; pretty lousy for a paid-for service):
127719-01
126257-02
119795-06
There are more, but it's a pain to type them all (I have about 20 I'm trying to bring down to a proxy server, since it's way too painful to actually get these all from Sun directly).
Marcos
smpatch(1m), due to its Java dependency, is unfortunately not an option for me. Downloading via a "light-weight" wget(1) from http://sunsolve.sun.com/private-cgi/pdownload.pl?target=<patchid> has worked just fine for many years and still does for most patches, but for certain patch IDs (e.g. 119689 or 125081) you now also have to specify the patch's revision (e.g. 119689-04), otherwise you get "ERROR 404: Not Found". It seems the links from a patch ID to its latest revision are missing. The other problem is the "SunSolve Boundry System Queue Limit Exceeded" error, which I sometimes get when I'm forced to download manually via HTTP from the patch README page after the (automated) wget method has failed. BTW, I've also run into broken patch dependencies, e.g. 120199-11 and other patches depend on a non-existent patch 126677-01.
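Given how intermittent the failures are, a retry loop around wget can at least automate the tedium (a sketch; the URL is the pdownload.pl pattern quoted above, and the 15-attempt limit and 30-second pause are arbitrary):
#!/bin/sh
# fetch_patch.sh -- retry a SunSolve patch download until it succeeds (sketch).
# Usage: ./fetch_patch.sh 127719-01
patch="$1"
attempt=1
while [ $attempt -le 15 ]; do
    echo "Attempt $attempt for $patch"
    wget -q "http://sunsolve.sun.com/private-cgi/pdownload.pl?target=$patch" \
        -O "$patch.zip" && exit 0
    rm -f "$patch.zip"      # discard any partial download before retrying
    sleep 30
    attempt=`expr $attempt + 1`
done
echo "Giving up on $patch" >&2
exit 1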
-
What is the current advice on the frequency of patching for IIS web servers and SQL DB servers, please?
I know that patches are released each month but is it recommended to apply patches every month/more frequently/less frequently?
Thank you!
Tweedtheatrekat
Hi,
my 2 cents on this topic:
IIS: Update as soon as patches are released, to make sure that you don't leave your system vulnerable to security threats. If you have any 3rd-party integration, you need to take that into account.
SQL DB: Security patches should also be applied in a timely manner. Not necessarily on the same day they are released, but within a month you should be fine. Any other SQL-related patches depend a lot on the databases you are running, e.g. some 3rd-party solutions only support a certain Service Pack level etc.
Cheers
Chaib -
How to get All Users from OID LDAP
Hi all,
I have Oracle Internet Directory(OID) and have created the users in it manually.
Now I want to extract all the users from OID. How can I get the users from OID?
Any response will be appreciated. If someone could show me demo code for that, I would be grateful.
Thanks and regards
Pravy
hi,
the notes from metalink:
regards
elvis
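As for the direct question (listing all users in OID), a single ldapsearch against the default user container is the usual approach. A sketch; host, port, password and realm DN are placeholders to adapt:
ldapsearch -h oid_host.domain.com -p 3060 -D "cn=orcladmin" -w welcome1 \
    -b "cn=users,dc=domain,dc=com" -s sub "objectclass=inetorgperson" cn mail uid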
Doc ID: Note:276688.1
Subject: How to copy (export/import) the Portal database schemas of IAS 9.0.4 to another database
Type: BULLETIN
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 18-JUN-2004
Last Revision Date: 05-AUG-2005
How to copy (export/import) Portal database schemas of IAS 9.0.4 to another database
Note 276688.1
Download scripts Unix: Attachment 276688.1:1
Download Perl scripts (Unix/NT) :Attachment 276688.1:2
This article is being delivered in Draft form and may contain errors. Please use the MetaLink "Feedback" button to advise Oracle of any issues related to this article.
HISTORY
Version 1.0 : 24-JUN-2004: creation
Version 1.1 : 25-JUN-2004: added a link to download the scripts from Metalink
Version 1.2 : 29-JUN-2004: Import script: Intermedia indexes are recreated. Imported jobs are reassigned to Portal. ptlconfig replaces ptlasst.
Version 1.3 : 09-JUL-2004: Additional updates. Usage of iasconfig.xml. Need only 3 environment variables to import.
Version 1.4 : 18-AUG-2004: Remark about 9.2.0.5 and 10.1.0.2 database
Version 1.5 : 26-AUG-2004: Duplicate job id
Version 1.6 : 29-NOV-2004: Remark about WWC-44131 and WWSBR_DOC_CTX_54
Version 1.7 : 07-JAN-2005: Attached perl scripts (for NT/Unix) at the end of the note
Version 1.8 : 12-MAY-2005: added a work-around for the WWSTO_SESS_FK1 issue
Version 1.9 : 07-JUL-2005: logoff trigger and 9.0.1 database export, import in 10g database
Version 1.10: 05-AUG-2005: reference to the 10.1.2 note
PURPOSE
This document explains how to copy a Portal database schema from one database to another.
It allows restoring the Portal repository and the OID security associated with Portal.
It can be used to go into production by physically copying a database from a development portal to a production environment, avoiding the Portal export/import utilities.
This note:
uses export/import at the database level
allows the export/import to be done between different platforms
The scripts are Unix-based and written for the BASH shell. They can be adapted for other platforms.
For those familiar with this technique in Portal 9.0.2, there is a list of the main differences from Portal 9.0.2 at the end of the note.
These scripts are based on the experience of many people with Portal 9.0.2.
The scripts are attached to the note. Download them here: Attachment 276688.1:1 : exp_schema_904.zip
A new version of the scripts was written in Perl. You can also download them here: Attachment 276688.1:2 : exp_schema_904_v2.zip. They do exactly the same as the bash ones, but have the advantage of working on all platforms.
SCOPE & APPLICATION
This document is intended for Portal administrators. To use this note, you need basic DBA skills.
This note is for Portal 9.0.4.x only. The notes for Portal 9.0.2 are:
Note 228516.1 : How to copy (export/import) Portal database schemas of IAS 9.0.2 to another database
Note 217187.1 : How to restore a cold backup of a Portal IAS 9.0.2 on another machine
The note for Portal 10.1.2 is:
Note 330391.1 : How to copy (export/import) Portal database schemas of IAS 10.1.2 to another database
Method
The method that we will follow in the document is the following one:
Export:
- export of the 4 portal schemas of a database (DEV / development)
- export the LDAP OID users and groups (optional)
Install a new machine with fresh IAS installation (PROD / production)
Import:
- delete the new and empty portal schema on PROD
- import the schemas in the production database in place of the deleted schemas
- import the LDAP OID users and groups (optional)
- modify the configuration such that the infrastructure uses the portal repository of the backup
- modify the configuration such that the portal repository uses the OID, webcache and SSO of the new infrastructure
The export and the import are divided into several steps. All of these steps are included in 2 sample scripts:
export : exp_portal_schema.sh
import : imp_portal_schema.sh
In the 2 scripts, all the steps are run in one shot. This is just an example; depending on the configuration and circumstances, the steps can be run independently.
Convention
Development (DEV) is the name of the machine where the copied database resides
Production (PROD) is the name of the machine the database is copied to
Prerequisites
Some prerequisites first.
A. Environment variables
To run the import/export, you will need 3 environment variables. In the given scripts, they are defined in 'portal_env.sh'
SYS_PASSWORD - the password of user sys in the Portal database
IAS_PASSWORD - the password of IAS
ORACLE_HOME - the ORACLE_HOME of the midtier
The rest of the settings are found automatically by reading the iasconfig.xml file and querying the OID. This is done in 'portal_automatic_env.sh'. I plan to write a note on iasconfig.xml and how to turn it into useful environment variables, but it is not done yet. In the meantime, you can read the old 9.0.2 document, which explains the meaning of most variables:
< Note 223438.1 : Shell script to find your portal passwords, settings and place them in environment variables on Unix >
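A minimal portal_env.sh could look like this (a sketch; every value is a placeholder, and the PORTAL_*/OID_* variables needed for the optional security export are listed in the script comments further below):
# portal_env.sh - environment for the export/import scripts (sketch)
export SYS_PASSWORD=change_on_install          # password of SYS in the Portal database
export IAS_PASSWORD=welcome1                   # password of IAS
export ORACLE_HOME=/u01/app/oracle/mid904      # ORACLE_HOME of the midtier
export MIDTIER_ORACLE_HOME=$ORACLE_HOME
export PRESS_ANY_KEY_AFTER_EACH_STEP=Y         # pause between the script steps
# All remaining settings are derived from iasconfig.xml and OID:
. portal_automatic_env.sh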
B. Definition: Cutter database
A 'Cutter Database' is the term used to designate a database created by RepCA or OUI that contains all the schemas used by an IAS 9.0.4 infrastructure, even if in most cases several schemas are not used.
In Portal 9.0.4, the option to install only the portal repository in an empty database has been removed. It has been replaced by RepCA, a tool that creates an infrastructure database. Among all the infrastructure database schemas are the portal schemas.
This does not stop people from using 2 databases to run portal, one for OID and one for Portal. But in comparison with Portal 9.0.2, all schemas exist in both databases even if some are not used.
The main idea of the Cutter database is to have only 1 database type and, in the future, to simplify the upgrades of customer installations.
For an installation where Portal and OID/SSO are in 2 separate databases, it looks like this:
Portal 9.0.2
Infrastructure database (INFRA_SID) - the infrastructure contains:
- OID (used)
- OEM (used)
- Single Sign-on / orasso (used)
- Portal (not used)
Portal database (PORTAL_SID) - the custom Portal database contains:
- Portal (used)
Portal 9.0.4
Infrastructure database (INFRA_SID) - the infrastructure contains:
- OID (used)
- OEM (used)
- Single Sign-on / orasso (used)
- Portal (not used)
Portal database (PORTAL_SID) - the custom Portal database (which is also an infrastructure) contains:
- OID (not used)
- OEM (not used)
- Single Sign-on / orasso (not used)
- Portal (used)
In any case, the note assumes there is only one single database. But it also works for a 2-database installation like the one explained above.
C. Directory structure.
The sample scripts given in this note will be explained in the next paragraphs. But first: the scripts use a directory structure that helps to classify the files.
Here is a list of important files used during the process of export/import:
File Name
Description
exp_portal_schema.sh
Sample script that exports all the data needed from a development machine
imp_portal_schema.sh
Sample script that imports all the data into a production machine
portal_env.sh
Script that defines the env variable specific to your system (to configure)
portal_automatic_env.sh
Helper script to get all the rest of the Portal settings automatically
xsl
Directory containing all the XSL files (helper scripts)
del_authpassword.xsl
Helper script to remove the authpassword tags in the DSML files
portal_env_unix.sql
Helper script to get Portal settings from the iasconfig.xml file
exp_data
Directory containing all the exported data
portal_exp.dmp
export on the database level of the portal, portal_app, ... database schemas
iasconfig.xml
Copy of the iasconfig.xml of the DEV midtier. Used to get the hostname and port of Webcache
portal_users.xml
export from LDAP of the OID users used by Portal (optional)
portal_groups.xml
export from LDAP of the OID groups used by Portal (optional)
imp_log
Directory containing several spool and log files generated during the import
import.log
Log file generated when running the imp command
ptlconfig.log
Log generated by ptlconfig when rewiring portal to the infrastructure.
Some other spool files.
D. Known limitations
The scripts given in this note have the following known limitations:
They do not copy the data stored in the SSO schema: external application definitions and the passwords stored for them.
See the post-import steps (SSO migration) for how to do this.
The ssomig command resides in the Infrastructure Oracle home, while all Portal commands live in the Midtier home; in practice, these 2 Oracle homes are usually not on the same machine. This is the reason.
The export of the users in OID exports from the default user location:
ldapsearch .... -b "cn=users,dc=domain,dc=com"
This is not 100% correct: the users are by default stored in something like "cn=users,dc=domain,dc=com", so if the users are stored in the default location, it works. But if this location (the user install base) has been customized, it does not work.
The reason is that such a setting usually means the LDAP is highly customized, and I prefer that the administrator copy the real LDAP himself. The right command will probably depend on the customer's case, so I preferred not to take the risk.
orclCommonNicknameAttribute must match in the target and source OID.
The orclCommonNicknameAttribute must match on both the source and target OID. By default this attribute is set to "uid", so if this has been changed, it must be changed in both systems.
Reference Note 282698.1
Migration of custom Java portlets.
The scripts migrate all the data of Portal stored in the database. If you have custom Java portlets deployed on your development machine, you will need to copy them to the production system.
Step 1 - Export in Development (DEV)
To export a full Portal installation to another machine, you need to follow 3 steps:
Export at the database level the portal schemas + related schemas
Get the midtier hostname and port of DEV
Export of the users and groups with LDAPSEARCH in 2 XML files
A script combining all the steps is available here.
A. Export the 4 portal schemas (DEV)
You need to export 3 types of database schemas:
The 4 portal schemas created by default by the portal installation :
portal,
portal_app,
portal_demo,
portal_public
The schemas where your custom database portlets / providers reside (if any)
- The custom schemas you have created for storing your portlet / provider code
The schemas where your custom tables reside (if any)
- Your custom schemas accessed by portal and containing only data (tables, views ...)
You can get an approximate list of the schemas (the default portal schemas (1) and the database portlet schemas (2)) with this query:
SELECT USERNAME, DEFAULT_TABLESPACE, TEMPORARY_TABLESPACE
FROM DBA_USERS
WHERE USERNAME IN (user, user||'_PUBLIC', user||'_DEMO', user||'_APP')
OR USERNAME IN (SELECT DISTINCT OWNER FROM WWAPP_APPLICATION$ WHERE NAME != 'WWV_SYSTEM');
It still misses your custom schemas containing only data (3).
We will export the 4 schemas and your custom ones into one export file, as the user sys.
Please use a command like this one:
exp userid="'sys/change_on_install@dev as sysdba'" file=portal_exp.dmp grants=y log=portal_exp.log owner=(portal,portal_app,portal_demo,portal_public)
The result is a dump file: 'portal_exp.dmp'. If you are using a database 9.2.0.5 or 10.1.0.2, the format of the exp/imp dump file has changed. Please read this.
B. Hostname and port
For the URL to access the portal, you need the following 2 pieces of information to run the script imp_portal_schema.sh below:
Webcache hostname
Webcache listen port
These values are contained in the iasconfig.xml file of the midtier.
iasconfig.xml
<IASConfig XSDVersion="1.0">
<IASInstance Name="ias904.dev.dev_domain.com" Host="dev.dev_domain.com" Version="9.0.4">
<OIDComponent AdminPassword="@BfgIaXrX1jYsifcgEhwxciglM+pXod0dNw==" AdminDN="cn=orcladmin" SSLEnabled="false" LDAPPort="3060"/>
<WebCacheComponent AdminPort="4037" ListenPort="7782" InvalidationPort="4038" InvalidationUsername="invalidator" InvalidationPassword="@BR9LXXoXbvW1iH/IEFb2rqBrxSu11LuSdg==" SSLEnabled="false"/>
<EMComponent ConsoleHTTPPort="1813" SSLEnabled="false"/>
</IASInstance>
<PortalInstance DADLocation="/pls/portal" SchemaUsername="portal" SchemaPassword="@BR9LXXoXbvW1c5ZkK8t3KJJivRb0Uus9og==" ConnectString="cn=asdb,cn=oraclecontext">
<WebCacheDependency ContainerType="IASInstance" Name="ias904.dev.dev_domain.com"/>
<OIDDependency ContainerType="IASInstance" Name="ias904.dev.dev_domain.com"/>
<EMDependency ContainerType="IASInstance" Name="ias904.dev.dev_domain.com"/>
</PortalInstance>
</IASConfig>
It corresponds to a portal URL like this:
http://dev.dev_domain.com:7782/pls/portal
The script exp_portal_schema.sh copies the iasconfig.xml file into the exp_data directory.
C. Export the security: users and groups (optional)
If you use other Single Sign-On users than the portal user, you probably need to restore the full security (the users and groups stored in OID) on the production machine. 5 steps need to be executed for this operation:
Export the OID entries with LDAPSEARCH
Before importing, change the domain in the generated files (optional)
Before importing, remove the 'authpassword' attributes from the generated files
Import them with LDAPADD
Update the GUID/DN of the groups in portal tables
Part 1 - LDAPSEARCH
The typical commands to do this operation look like this:
ldapsearch -h $OID_HOSTNAME -p $OID_PORT -X -b "cn=portal.040127.1384,cn=groups,dc=dev_domain,dc=com" -s sub "objectclass=*" > portal_groups.xml
ldapsearch -h $OID_HOSTNAME -p $OID_PORT -X -D "cn=orcladmin" -w $IAS_PASSWORD -b "cn=users,dc=dev_domain,dc=com" -s sub "objectclass=inetorgperson" > portal_users.xml
Take care about the following points:
The groups are stored in an LDAP subtree whose name contains the installation date
(in this example: cn=portal.040127.1384,cn=groups,dc=dev_domain,dc=com)
If the domains of dev and prod are different, the exported files contain the name of the development domain in the form 'dc=dev_domain,dc=com' in many places. The domain name needs to be replaced by the production domain name everywhere in the files.
Ldapsearch uses the option '-X' to export to DSML (XML) files. This avoids a problem with the usual LDAP export format, LDIF: LDIF files are wrapped at 78 characters, which makes it difficult to change the domain name contained in them. XML files are not wrapped and do not have this problem.
A sample script to export the 2 XML files is given in step 3 - export the users and groups (optional) of the export script.
Part 2 : change the domain in the DSML files
If the domains of dev and prod are different, the exported files contain the name of the development domain in the form 'dc=dev_domain,dc=com' in many places. The domain name needs to be replaced by the production domain name everywhere in the files.
To do this, we can use these commands:
cat exp_data/portal_groups.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/portal_groups.xml
cat exp_data/portal_users.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/temp_users.xml
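A quick way to check that no occurrence of the development DN survived the substitution (a sketch; a count of 0 per file is what you want):
grep -c "$DEV_DN" imp_log/portal_groups.xml imp_log/temp_users.xml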
Part 3 : Remove the authpassword attribute
The export of all attributes of all users has also exported an automatically generated attribute in OID called 'authpassword'.
'authpassword' is a list of automatically generated passwords for several types of applications. But mostly, it cannot be imported. Also, there is no option in ldapsearch (that I know of) to exclude an attribute. Instead of giving the ldapsearch command the very long list of all attributes except 'authpassword', we remove the attribute after the export.
For that we use the fact that the DSML files are XML files. There is an XSLT processor in Oracle IAS, the executable '$ORACLE_HOME/bin/xml'. XSLT is a W3C standard for transforming an XML file with the help of an XSL file.
Here is the XSL file to remove the authpassword tag.
del_authpassword.xsl
<!--
File : del_authpassword.xsl
Version : 1.0
Author : mgueury
Description:
Remove the authpassword from the DSML files
-->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml"/>
<xsl:template match="*|@*|node()">
<xsl:copy>
<xsl:apply-templates select="*|@*|node()"/>
</xsl:copy>
</xsl:template>
<xsl:template match="attr">
<xsl:choose>
<xsl:when test="@name='authpassword;oid'">
</xsl:when>
<xsl:when test="@name='authpassword;orclcommonpwd'">
</xsl:when>
<xsl:otherwise>
<xsl:copy>
<xsl:apply-templates select="*|@*|node()"/>
</xsl:copy>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
</xsl:stylesheet>
And the command to perform the transformation:
xml -f -s del_authpassword.xsl -o imp_log/portal_users.xml imp_log/temp_users.xml
where:
imp_log/portal_users.xml is the final file without authpassword tags
imp_log/temp_users.xml is the input file with the authpassword tags that cannot be imported.
Part 4 : LDAPADD
The typical commands to do this operation look like this:
ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X portal_groups.xml
ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X portal_users.xml
Take care about the following points:
Ldapadd uses the option '-c'. Existing users/groups generate an error; the -c option lets ldapadd continue, ignoring these errors. In any case, check the errors to confirm they only concern already-existing entries.
A sample script to import the 2 XML files is given in step 5 - import the users and groups (optional) of the import script.
Part 5 : Update the GUID/DN
In Portal 9.0.4, the update of the GUIDs is taken care of by PTLCONFIG during the import. (Import step 7)
D. Example script for export
Here is an example script that combines the 3 steps.
Depending on your needs, you will:
either execute all the steps,
or just execute the first one (export of the database schemas). That is enough if you just want to log in with the portal user on the production instance.
If your portal repository resides in a database 9.2.0.5 or 10.1.0.2, please read this.
You can download all the scripts here: Attachment 276688.1:1
Do not forget to adapt the script to your needs, in particular adding the list of your own schemas as explained in point A above.
exp_portal_schema.sh
# BASH Script : exp_portal_schema.sh
# Version : 1.3
# Portal : 9.0.4.0
# History :
# mgueury - creation
# Description:
# This script exports a portal dump file from a dev instance
# -------------------------- Environment variables --------------------------
. portal_env.sh
# In case you do not use portal_env.sh you have to define all the variables
# For exporting the dump file only.
# export SYS_PASSWORD=change_on_install
# export PORTAL_TNS=asdb
# For the security (optional)
# export IAS_PASSWORD=welcome1
# export PORTAL_USER=portal
# export PORTAL_PASSWORD=A1b2c3de
# export OID_HOSTNAME=development.domain.com
# export OID_PORT=3060
# export OID_DOMAIN_DN=dc=`echo $OID_HOSTNAME | cut -d '.' -f2,3,4,5,6 --output-delimiter=',dc='`
# ------------------------------ Help function -----------------------------------
function press_any_key() {
if [ "$PRESS_ANY_KEY_AFTER_EACH_STEP" = "Y" ]; then
echo
echo Press enter to continue
read ANY_KEY
else
echo
fi
}
echo "------------------------------- Export ------------------------------------"
# create a directory for the export
mkdir exp_data
# copy the env variables in the log just in case
export > exp_data/exp_env_variable.txt
echo "--------------------- step 1 - export"
# export the portal users, but take care to add:
# - your users containing DB providers
# - your users containing data (tables)
exp userid="'sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba'" file=exp_data/portal_exp.dmp grants=y log=exp_data/portal_exp.log owner=(portal,portal_app,portal_demo,portal_public)
press_any_key
echo "--------------------- step 2 - store iasconfig.xml file of the MIDTIER"
cp $MIDTIER_ORACLE_HOME/portal/conf/iasconfig.xml exp_data
press_any_key
echo "--------------------- step 3 - export the users and groups (optional)"
# Export the groups and users from OID in 2 XML files (not LDIF)
# The OID groups of portal are stored in GROUP_INSTALL_BASE, which depends
# on the installation date.
# For the user, I use the default place. If it does not work,
# you can find the user place with:
# > exec dbms_output.put_line(wwsec_oid.get_user_search_base);
# Get the GROUP_INSTALL_BASE used in security export
sqlplus $PORTAL_USER/$PORTAL_PASSWORD@$PORTAL_TNS <<IASDB
set serveroutput on
spool exp_data/group_base.log
begin
dbms_output.put_line(wwsec_oid.get_group_install_base);
end;
IASDB
export GROUP_INSTALL_BASE=`grep cn= exp_data/group_base.log`
echo '--- Exporting Groups'
echo 'creating portal_groups.xml'
ldapsearch -h $OID_HOSTNAME -p $OID_PORT -X -b "$GROUP_INSTALL_BASE" -s sub "objectclass=*" > exp_data/portal_groups.xml
echo '--- Exporting Users'
echo 'creating portal_users.xml'
ldapsearch -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -X -b "cn=users,$OID_DOMAIN_DN" -s sub "objectclass=inetorgperson" > exp_data/portal_users.xml
The script is designed to be run from the midtier.
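A typical invocation, keeping a transcript of the run (a sketch; the tee target is arbitrary):
$ chmod +x portal_env.sh exp_portal_schema.sh
$ ./exp_portal_schema.sh 2>&1 | tee exp_run.log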
Step 2 - Install IAS in a new machine (PROD)
A. Installation
This note does not distinguish whether Portal shares the same database as Single Sign-On and OID. For simplicity, I will speak only of 1 database. But you could also create a second infrastructure database just for the portal repository. That approach is better for a production system, because the Portal repository is the only product used in the 2nd database, and having 2 separate databases makes it easy to take backups of the portal repository.
On the production machine, you need to install a fresh IAS 9.0.4. Take care to use:
the same IAS patchset (9.0.4.1, 9.0.4.2, ...) on the middle-tier and infrastructure as in development
and the same character set as in development (or UTF8)
The result will be 2 ORACLE_HOMES and 1 infrastructure database:
the ORACLE_HOME of the infrastructure (SID:infra904)
the ORACLE_HOME of the midtier (SID:ias904)
an infrastructure database (SID:asdb)
The new, empty Portal install should work fine before you go to the next step.
B. About tablespaces (optional)
The tablespace sizes on production should match those of the development machine. If not, the tablespaces will autoextend; it is not really a concern, but it is slow. You should resize the tablespaces so that prod has as much space as dev.
Also, it is safer to check that there is enough free space on the hard disk for the import into the database.
To modify the tablespace sizes, you can use the Oracle Enterprise Manager console:
On Unix: . oraenv (enter infra904), then oemapp dbastudio
On NT: Start / Programs / Oracle Application Server - infra904 / Enterprise Manager Console
Launch standalone
Choose the portal database (typically asdb.domain.com)
Connect with a DBA user, sys or system
Click Storage/Tablespaces
Change the size of the PORTAL, PORTAL_DOC, PORTAL_LOGS, PORTAL_IDX tablespaces
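The same resize can be done directly in SQL*Plus, if you prefer it to the console (a sketch; the datafile path and target size are placeholders - look them up in DBA_DATA_FILES first):
-- Run as SYS or SYSTEM. First find the datafiles of the Portal tablespaces:
SELECT file_name, tablespace_name, bytes FROM dba_data_files
WHERE tablespace_name IN ('PORTAL','PORTAL_DOC','PORTAL_LOGS','PORTAL_IDX');
-- Then resize each datafile to match the development size (placeholder values):
ALTER DATABASE DATAFILE '/u01/oradata/asdb/portal01.dbf' RESIZE 500M;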
C. Backup
It could be a good idea to take a backup of the MIDTIER and INFRASTRUCTURE Oracle Homes at this point, so that if the import fails for any reason you can retest the process as many times as you want without reinstalling everything.
Step 3 - Import in production (on PROD)
The following script is a sample Unix script that combines all the steps to import a portal repository into the production machine.
To import a portal repository and its users and groups in OID, you need to do 8 things:
Stop the midtier to avoid errors while dropping the portal schema
SQL*Plus with SYS:
- Drop the 4 default portal schemas
- Create the portal users with the same passwords as the users just deleted, and give them grants (you need to create your own custom schemas too if you have some)
Import the dump file
Import the users and groups into OID (optional)
SQL*Plus with SYS: post-import changes
- Recompile everything in the database
- Reassign the imported jobs to portal
SQL*Plus with Portal: post-import changes
- Recreate the Portal intermedia indexes
- Correct an import error on wwsrc_preference$
- Make additional post-import changes, updating some portal tables and replacing the development hostname, port or domain with the production ones
Rewire the portal repository with ptlconfig -dad portal
Restart the midtier
Here is a sample script to do this on Unix. You will need to adapt the script to your needs.
imp_portal_schema.sh
# BASH Script : imp_portal_schema.sh
# Version : 1.3
# Portal : 9.0.4.0
# History :
# mgueury - creation
# Description:
# This script imports a portal dump file and relinks it with an
# infrastructure.
# Script to be started from the MIDTIER
# -------------------------- Environment variables --------------------------
. portal_env.sh
# Development and Production machine hostname and port
# Example
# .._HOSTNAME machine.domain.com (name of the MIDTIER)
# .._PORT 7782 (http port of the MIDTIER)
# .._DN dc=domain,dc=com (domain name in a LDAP way)
# These values can be determined automatically with the iasconfig.xml file of dev
# and prod. But if you do not know or remember the dev hostname and port, this
# query should find it.
# > select name, http_url from wwpro_providers$ where http_url like 'http%'
# These variables are used in the
# > step 4 - security / import OID users and groups
# > step 6 - post import changes (PORTAL)
# Set the env variables of the DEV instance
rm /tmp/iasconfig_env.sh
xml -f -s xsl/portal_env_unix.xsl -o /tmp/iasconfig_env.sh exp_data/iasconfig.xml
. /tmp/iasconfig_env.sh
export DEV_HOSTNAME=$WEBCACHE_HOSTNAME
export DEV_PORT=$WEBCACHE_LISTEN_PORT
export DEV_DN=dc=`echo $OID_HOSTNAME | cut -d '.' -f2,3,4,5,6 --output-delimiter=',dc='`
# Set the env variables of the PROD instance
. portal_env.sh
export PROD_HOSTNAME=$WEBCACHE_HOSTNAME
export PROD_PORT=$WEBCACHE_LISTEN_PORT
export PROD_DN=dc=`echo $OID_HOSTNAME | cut -d '.' -f2,3,4,5,6 --output-delimiter=',dc='`
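# Example of what the cut pipeline above produces (assumption, for illustration):
# with OID_HOSTNAME=oid.prod_domain.com, PROD_DN becomes dc=prod_domain,dc=com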
# ------------------------------ Help function -----------------------------------
function press_any_key() {
if [ "$PRESS_ANY_KEY_AFTER_EACH_STEP" = "Y" ]; then
echo
echo Press enter to continue
read ANY_KEY
else
echo
fi
}
echo "------------------------------- Import ------------------------------------"
# create a directory for the logs
mkdir imp_log
# copy the env variables in the log just in case
export > imp_log/imp_env_variable.txt
echo "--------------------- step 1 - stop the midtier"
# This step is needed to avoid most case of ORA-01940: user connected
# when dropping the portal user
$MIDTIER_ORACLE_HOME/opmn/bin/opmnctl stopall
press_any_key
echo "--------------------- step 2 - drop and create empty users"
sqlplus "sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba" <<IASDB
spool imp_log/drop_create_user.log
---- Drop users
-- Warning: You need to stop all SQL*Plus connection to the
-- portal schema before that else the drop will give an
-- ORA-01940: cannot drop a user that is currently connected
drop user portal_public cascade;
drop user portal_app cascade;
drop user portal_demo cascade;
drop user portal cascade;
---- Recreate the users and give them grants
-- The new users will have the same passwords as the users we just dropped
-- above. Do not forget to add your exported custom users
create user portal identified by $PORTAL_PASSWORD default tablespace portal;
grant connect,resource,dba to portal;
create user portal_app identified by $PORTAL_APP_PASSWORD default tablespace portal;
grant connect,resource to portal_app;
create user portal_demo identified by $PORTAL_DEMO_PASSWORD default tablespace portal;
grant connect,resource to portal_demo;
create user portal_public identified by $PORTAL_PUBLIC_PASSWORD default tablespace portal;
grant connect,resource to portal_public;
alter user portal_public grant connect through portal;
start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wwv/wdbigra.sql portal
exit
IASDB
press_any_key
echo "--------------------- step 3 - import"
imp userid="'sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba'" file=exp_data/portal_exp.dmp grants=y log=imp_log/import.log full=y
press_any_key
echo "--------------------- step 4 - import the OID users and groups (optional)"
# Some errors will be raised when running the ldapadd because at least the
# default entries will not be able to be inserted. Remove them from the
# XML files if you want to avoid them. Due to the flag '-c', ldapadd ignores
# duplicate entries. Another, more radical solution is to erase all the entries
# of the users and groups in OID before running the import.
# Replace the domain name in the XML files.
cat exp_data/portal_groups.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/portal_groups.xml
cat exp_data/portal_users.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/temp_users.xml
# Remove the authpassword attributes with a XSL stylesheet
xml -f -s xsl/del_authpassword.xsl -o imp_log/portal_users.xml imp_log/temp_users.xml
echo '--- Importing Groups'
ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X imp_log/portal_groups.xml -v
echo '--- Importing Users'
ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X imp_log/portal_users.xml -v
press_any_key
echo "--------------------- step 5 - post import changes (SYS)"
sqlplus "sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba" <<IASDB
spool imp_log/sys_post_changes.log
---- Recompile the invalid packages
-- On the midtier, the script utlrp is not present. This step
-- uses a copy of it stored in patch/utlrp.sql
select count(*) INVALID_OBJECT_BEFORE from all_objects where status='INVALID';
start patch/utlrp.sql
set lines 999
select count(*) INVALID_OBJECT_AFTER from all_objects where status='INVALID';
---- Jobs
-- Reassign the JOBS imported to PORTAL. After the import, they belong
-- incorrectly to the user SYS.
update dba_jobs set LOG_USER='PORTAL', PRIV_USER='PORTAL' where schema_user='PORTAL';
commit;
exit
IASDB
press_any_key
echo "--------------------- step 6 - post import changes (PORTAL)"
sqlplus $PORTAL_USER/$PORTAL_PASSWORD@$PORTAL_TNS <<IASDB
set serveroutput on
spool imp_log/portal_post_changes.log
---- Intermedia
-- Recreate the portal indexes.
-- inctxgrn.sql is missing from the 9040 CD-ROMS. This is the bug 3536937.
-- Fixed in 9041. The missing script is contained in the downloadable zip file.
start patch/inctxgrn.sql
start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/ctxcrind.sql
---- Import error
alter table "WWSRC_PREFERENCE$" add constraint wwsrc_preference_pk
primary key (subscriber_id, id)
using index wwsrc_preference_idx1
begin
DBMS_RLS.ADD_POLICY ('', 'WWSRC_PREFERENCE$', 'WEBDB_VPD_POLICY',
'', 'webdb_vpd_sec', 'select, insert, update, delete', TRUE,
static_policy=>true);
end ;
---- Modify tables with full URLs
-- If the domain names of prod and dev are different, this step is really important.
-- It modifies the portal tables that contain references to the hostname or port
-- of the development machine. (For more explanation, see Additional steps in the note.)
-- groups (dn)
update wwsec_group$
set dn=replace( dn, '$DEV_DN', '$PROD_DN' );
update wwsec_group$
set dn_hash = wwsec_api_private.get_dn_hash( dn );
-- users (dn)
update wwsec_person$
set dn=replace( dn, '$DEV_DN', '$PROD_DN' );
update wwsec_person$
set dn_hash = wwsec_api_private.get_dn_hash( dn );
-- subscriber
update wwsub_model$
set dn=replace( dn, '$DEV_DN', '$PROD_DN' ), GUID=':1'
where dn like '%$DEV_DN%';
-- preferences
update wwpre_value$
set varchar2_value=replace( varchar2_value, '$DEV_DN', '$PROD_DN' )
where varchar2_value like '%$DEV_DN%';
update wwpre_value$
set varchar2_value=replace( varchar2_value, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where varchar2_value like '%$DEV_HOSTNAME:$DEV_PORT%';
-- page url items
update wwv_things
set title_link=replace( title_link, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where title_link like '%$DEV_HOSTNAME:$DEV_PORT%';
-- web providers
update wwpro_providers$
set http_url=replace( http_url, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where http_url like '%$DEV_HOSTNAME:$DEV_PORT%';
-- html links created by the RTF editor inside text items
update wwv_text
set text=replace( text, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where text like '%$DEV_HOSTNAME:$DEV_PORT%';
-- Portlet metadata nls: help URL
update wwpro_portlet_metadata_nls$
set help_url=replace( help_url, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where help_url like '%$DEV_HOSTNAME:$DEV_PORT%';
-- URL items (There is a trigger on this table building absolute_url automatically)
update wwsbr_url$
set absolute_url=replace( absolute_url, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where absolute_url like '%$DEV_HOSTNAME:$DEV_PORT%';
-- Things attributes
update wwv_thingattributes
set value=replace( value, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where value like '%$DEV_HOSTNAME:$DEV_PORT%';
commit;
exit
IASDB
press_any_key
echo "--------------------- step 7 - ptlconfig"
# Configure portal such that portal uses the infrastructure database
cd $MIDTIER_ORACLE_HOME/portal/conf/
./ptlconfig -dad portal
cd -
mv $MIDTIER_ORACLE_HOME/portal/logs/ptlconfig.log imp_log
press_any_key
echo "--------------------- step 8 - restart the midtier"
$MIDTIER_ORACLE_HOME/opmn/bin/opmnctl startall
date
Each step can generate its own errors due to many factors. It is better to run the import step by step the first time.
Do not forget to check the output of the log files created during the various steps of the import:
imp_log/drop_create_user.log
Spool when dropping and recreating the portal users
imp_log/import.log
Import log file when importing the portal_exp.dmp file
imp_log/sys_post_changes.log
Spool when making post changes with SYS
imp_log/portal_post_changes.log
Spool when making post changes with PORTAL
imp_log/ptlconfig.log
Log file of ptlconfig when rewiring the midtier
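A quick way to scan all of these for problems at once (a sketch; IMP-00041 is filtered out because, as explained in the PROBLEMS section below, it is expected):
grep -n -E 'ORA-|IMP-|PLS-' imp_log/*.log | grep -v 'IMP-00041'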
Step 4 - Test
A. Check the log files
B. Test the website and see if it works fine.
Step 5 - take a backup
Take a backup of all ORACLE_HOMEs and databases to protect against hardware problems. You need to copy:
All the files of the 2 ORACLE_HOME
And all the database files.
Step 6 - Additional steps
Here are some additional steps.
SSO external applications (part of the orasso schema, not imported yet)
Page URL items (they seem to store the full URL) - included in imp_portal_schema.sh
Web Providers ( the URL needs to be changed ) - included in imp_portal_schema.sh
Text items edited with the RTF editor in IE and containing links - included in imp_portal_schema.sh
Most of them are taken care of by the post-import changes step, except the first one.
1. SSO import
This script imports only Portal and the users/groups of OID, not the list of external applications contained in the orasso schema.
In Portal 9.0.4, there is a script called SSOMIG that resides in $INFRA_ORACLE_HOME/sso/bin and allows you to move:
Definitions and user data for external applications
Registration URLs and tokens for partner applications
Connection information used by OracleAS Discoverer to access various data sources
See:
Oracle® Application Server Single Sign-On Administrator's Guide 10g (9.0.4) Part Number B10851-01
14. Exporting and Importing Data
2. Page items: the page URL items store the full URL.
This is Bug 2661805 fixed in Portal 9.0.2.6.
The following work-around is implemented in the post-import step of imp_portal_schema.sh:
-- page url items
update wwv_things
set title_link=replace( title_link, 'dev.dev_domain.com:7778', 'prod.prod_domain.com:7778' )
where title_link like '%dev.dev_domain.com:7778%';
3. Web Providers
The URLs of the Web providers also need to change. Like the Page items, they contain the full path of the webserver.
You can get the list of the URLs to change with this query:
select name, http_url from PORTAL.WWPRO_PROVIDERS$ where http_url like 'http%';
The following work-around is implemented in the post-import step of imp_portal_schema.sh:
-- web providers
update wwpro_providers$
set http_url=replace( http_url, 'dev.dev_domain.com:7778', 'prod.prod_domain.com:7778' )
where http_url like '%dev.dev_domain.com:7778%';
4. The production and development machines do not share the same domain
If the domains of production and development are not the same, the DN (the name in LDAP) of all users needs to change.
Let's say from
dc=dev_domain,dc=com -> dc=prod_domain,dc=com
1. Before uploading the exported files: all the strings in the 2 files that contain 'dc=dev_domain,dc=com' have to be replaced by 'dc=prod_domain,dc=com'.
2. In the wwsec_group$ and wwsec_person$ tables in portal, the DNs need to change too.
The following work-around is implemented in the post-import step of imp_portal_schema.sh:
-- groups (dn)
update wwsec_group$
set dn=replace( dn, 'dc=dev_domain,dc=com', 'dc=prod_domain,dc=com' );
update wwsec_group$
set dn_hash = wwsec_api_private.get_dn_hash( dn );
-- users (dn)
update wwsec_person$
set dn=replace( dn, 'dc=dev_domain,dc=com', 'dc=prod_domain,dc=com' );
update wwsec_person$
set dn_hash = wwsec_api_private.get_dn_hash( dn );
5. Text items with HTML links
Sometimes people store full URLs inside their text items; this happens mostly when they insert links with the RichText editor in IE.
The following work-around is implemented in the post-import step of imp_portal_schema.sh:
-- html links created by the RTF editor inside text items
update wwv_text
set text=replace( text, 'dev.dev_domain.com:7778', 'prod.prod_domain.com:7778' )
where text like '%dev.dev_domain.com:7778%';
6. OID Custom password policy
It happens quite often that people change the password policy of the OID server, because with the default policy the password expires after 60 days. If so, do not forget to make the same changes in the new installation.
PROBLEMS
1. Import log has some errors
A. EXP-00091 - Exporting questionable statistics
You can ignore this error.
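If you prefer to avoid the warning entirely, exp can be told to skip statistics (a sketch; this simply appends statistics=none to the export command given earlier):
exp userid="'sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba'" file=exp_data/portal_exp.dmp grants=y statistics=none log=exp_data/portal_exp.log owner=(portal,portal_app,portal_demo,portal_public)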
B. IMP-00017 - WWSRC_PREFERENCE$
When importing, there is one import error:
IMP-00017: following statement failed with ORACLE error 921:
"ALTER TABLE "WWSRC_PREFERENCE$" ADD "
IMP-00003: ORACLE error 921 encountered
ORA-00921: unexpected end of SQL command
The primary key is not created. You can create it with this command
in SQL*Plus as the user portal, then re-add the missing VPD policy:
alter table "WWSRC_PREFERENCE$" add constraint wwsrc_preference_pk
primary key (subscriber_id, id)
using index wwsrc_preference_idx1
begin
DBMS_RLS.ADD_POLICY ('', 'WWSRC_PREFERENCE$', 'WEBDB_VPD_POLICY',
'', 'webdb_vpd_sec', 'select, insert, update, delete', TRUE,
static_policy=>true);
end ;
The post-import changes step in the script imp_portal_schema.sh takes care of this.
C. IMP-00017 - WWDAV$ASL
. importing table "WWDAV$ASL"
Note: table contains ROWID column, values may be obsolete 113 rows imported
This warning is normal: the table really contains a ROWID column.
D. IMP-00041 - Warning: object created with compilation warnings
This error is normal too. The packages giving this error have
dependencies on packages not yet imported. A recompilation is done
after the import.
E. ldapadd error 'You cannot add entries containing authpasswords'
# ldap_add: DSA is unwilling to perform
# ldap_add: additional info: You cannot add entries containing authpasswords.
"authpasswords" are automatically generated values from the real password of the user stored in userpassword. These values do not have to be exported from ldap.
In the import script, I remove the additional tag with a XSL stylesheet 'del_authpassword.xsl'. See above.
F. IMP-00017: WWSTO_SESSION$
IMP-00017: following statement failed with ORACLE error 2298:
"ALTER TABLE "WWSTO_SESSION$" ENABLE CONSTRAINT "WWSTO_SESS_FK1""
IMP-00003: ORACLE error 2298 encountered
ORA-02298: cannot validate (PORTAL.WWSTO_SESS_FK1) - parent keys not found
Here is a work-around for the problem. I will integrate it in a future version of the scripts.
SQL> delete from WWSTO_SESSION_DATA$;
7690 rows deleted.
SQL> delete from WWSTO_SESSION$;
1073 rows deleted.
SQL> commit;
Commit complete.
SQL> ALTER TABLE "WWSTO_SESSION$" ENABLE CONSTRAINT "WWSTO_SESS_FK1";
Table altered.
G. IMP-00017 - ORACLE error 1 - DBMS_JOB.ISUBMIT
This error can appear during the import when the target database is not empty and has already been customized for some reason. For example, you export from an infrastructure and import into a database hosting a lot of other programs that use jobs - and, unhappily, the same job ids.
Due to the way the export/import of jobs is done, the jobs keep their ids after the import, and they may conflict.
IMP-00017: following statement failed with ORACLE error 1: "BEGIN DBMS_JOB.ISUBMIT(JOB=>42,WHAT=>'begin execute immediate " "''begin wwutl_cache_sys.process_background_inval; end;'' ; exc" "eption when others then wwlog_api.log(p_domain=> ''utl'', " " p_subdomain=>''cache'', p_name=>''background'', " " p_action=>''process_background_inval'', p_information => ''E" "rror in process_background_inval ''|| sqlerrm);end;', NEXT_DATE=" ">TO_DATE('2004-08-19:17:32:16','YYYY-MM-DD:HH24:MI:SS'),INTERVAL=>'SYSDATE " "+ 60/(24*60)',NO_PARSE=>TRUE); END;"
IMP-00003: ORACLE error 1 encountered ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
ORA-06512: at "SYS.DBMS_JOB", line 97 ORA-06512: at line 1
Solutions:
1. Use a freshly installed database.
2. Since the conflicting jobs differ from installation to installation (this only happens in customized installations), there is no clear rule. But you can
recreate the jobs lost after the import with other ids,
and/or change the job id of the other program before the import. This type of command can help you (you need to do it as SYS):
select * from dba_jobs;
update dba_jobs set job=99 where job=52;
commit;
2. Import in a RAC environment
Be aware of Bug 2479882 when the portal database is in a RAC database:
Bug 2479882 : NEEDED TO BOUNCE DB NODES AFTER INSTALLING PORTAL 9.0.2 IN RAC NODE
3. Intermedia
After importing an environment, the intermedia indexes are invalid. To correct the error, you need to run in SQL*Plus as Portal:
start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/inctxgrn.sql
start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/ctxcrind.sql
But $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/inctxgrn.sql is missing in IAS 9.0.4.0. This is Bug 3536937, fixed in 9.0.4.1. The missing scripts are contained in the downloadable zip file (exp_schema904.zip : Attachment 276688.1:1), directory sql. This means that in practice, on 9.0.4.0, you have to run:
start sql/inctxgrn.sql
start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/ctxcrind.sql
In the import script, it is done in the step 6 - recreate Portal Intermedia indexes.
You cannot work around the problem without the scripts. Running ctxcrind.sql alone does not work; you will get this error:
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "PORTAL.WWERR_API_EXCEPTION", line 164
ORA-06512: at "PORTAL.WWV_CONTEXT", line 1035
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "PORTAL.WWERR_API_EXCEPTION", line 164
ORA-06512: at "PORTAL.WWV_CONTEXT", line 476
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-20000: Oracle Text error:
DRG-12603: CTXSYS does not own user datastore procedure: WWSBR_THING_CTX_69
ORA-06512: at line 13
4. ptlconfig
If you try to run ptlconfig right after an import, you will get an error:
Problem processing Portal instance: Configuring HTTP server settings : Installing cache data : SQL exception: ERROR: ORA-23421: job number 32 is not a job in the job queue
This is because the import done by user SYS has attached the PORTAL jobs to the SYS schema instead of portal. The solution is to run:
update dba_jobs set LOG_USER='PORTAL', PRIV_USER='PORTAL' where schema_user='PORTAL';
In the import script, this is done in the post-import changes step.
5. WWC-41417 - invalid credentials.
When you try to log in you get:
Unexpected error encountered in wwsec_app_priv.process_signon (User-Defined Exception) (WWC-41417)
An exception was raised when accessing the Oracle Internet Directory: 49: Invalid credentials
Details
Error:Operation: dbms_ldap.simple_bind_s
OID host: machine.domain.com
OID port number: 4032
Entry DN: orclApplicationCommonName=PORTAL,cn=Portal,cn=Products,cn=OracleContext. (WWC-41743)
Solution:
- run secupoid.sql
- rerun ptlconfig
This problem has been seen after using ptlasst in place of ptlconfig.
6. EXP-00003 with a database 9.2.0.5 or 10.1.0.2
The format of the exp/imp dump file changed in 9.2.0.5 and 10.1.0.2. The EXP-00003 error only occurs when the export from the 9.2.0.5.0 or 10.1.0.2.0 database is done with a lower-release export utility, e.g. 9.2.0.4.0.
Because of the way this note is written, the imp/exp utility used is the one of the midtier (9014); if your portal resides in a 9.2.0.5 database, it will not work. To work around the problem, there are 2 solutions:
Change the script so that it uses the exp and imp commands of the database.
Make a change to the 9.2.0.5 or 10.1.0.2 database to make it compatible with previous versions. The change is to modify a database internal view before exporting/importing the data.
A work-around is given in Bug 3784697.
1. Make a note of the export definition of exu9tne from
$OH/rdbms/admin/catexp.sql
2. Copy this to a new file and add "UNION ALL select * from sys.exu9tneb" to the end of the definition
3. Run this as sys against the DB to be exported.
4. Export as required
5. Put back the original definition of exu9tne
eg: For 9204 the workaround view would be:
CREATE OR REPLACE VIEW exu9tne (
tsno, fileno, blockno, length) AS
SELECT ts#, segfile#, segblock#, length
FROM sys.uet$
WHERE ext# = 1
UNION ALL
select * from sys.exu9tneb
7. EXP-00006: INTERNAL INCONSISTENCY ERROR
This is Bug 2906613.
The work-around given in this bug is the following:
- create the following view, connected as sys, before running export:
CREATE OR REPLACE VIEW exu8con (
objid, owner, ownerid, tname, type, cname,
cno, condition, condlength, enabled, defer,
sqlver, iname) AS
SELECT o.obj#, u.name, c.owner#, o.name,
decode(cd.type#, 11, 7, cd.type#),
c.name, c.con#, cd.condition, cd.condlength,
NVL(cd.enabled, 0), NVL(cd.defer, 0),
sv.sql_version, NVL(oi.name, '')
FROM sys.obj$ o, sys.user$ u, sys.con$ c,
sys.cdef$ cd, sys.exu816sqv sv, sys.obj$ oi
WHERE u.user# = c.owner# AND
o.obj# = cd.obj# AND
cd.con# = c.con# AND
cd.spare1 = sv.version# (+) AND
cd.enabled = oi.obj# (+) AND
NOT EXISTS (
SELECT owner, name
FROM sys.noexp$ ne
WHERE ne.owner = u.name AND
ne.name = o.name AND
ne.obj_type = 2)
The modification of exu8con simply adds support for a constraint type that had not previously been supported by this view. There is no negative impact.
8. WWSBR_DOC_CTX_54 is invalid
After the recompilation of the package, one package remains invalid (in sys_post_changes.log):
INVALID_OBJECT_AFTER
1
select owner, object_name from all_objects where status='INVALID'
CTXSYS WWSBR_DOC_CTX_54
CREATE OR REPLACE procedure WWSBR_DOC_CTX_54
(rid in rowid, bilob in out NOCOPY blob)
is begin PORTAL.WWSBR_CTX_PROCS.DOC_CTX(rid,bilob);end;
This object is not used anymore by portal. The error can be ignored. The procedure can be removed too. This is Bug 3559731.
9. You do not have permission to perform this operation. (WWC-44131)
It seems that there are problems if:
- groups on the production machine do not reside in the default place in OID,
- and the group creation base and group search base were changed.
After this, the cloning of the repository works without problem. But it seems that the command 'ptlconfig -dad portal' does not reset the GUIDs and DNs of the groups correctly. I have not checked this yet.
The solution seems to be to use the script given in the 9.0.2 Note 228516.1, and to run group_sec.sql to reset all the DNs and GUIDs in the copied instance.
10. Invalid Java objects when exporting from a 9.x database and importing in a 10g database
If you export from a 9.x database and import into a 10g database, after running utlrp.sql, 18 Java objects will be invalid.
select object_name, object_type from user_objects where status='INVALID'
SQL> /
OBJECT_NAME OBJECT_TYPE
/556ab159_Handler JAVA CLASS
/41bf3951_HttpsURLConnection JAVA CLASS
/ce2fa28e_ProviderManagerClien JAVA CLASS
/c5b98d35_ServiceManagerClient JAVA CLASS
/d77cf2ab_SOAPServlet JAVA CLASS
/649bf254_JavaProvider JAVA CLASS
/a9164b8b_SpProvider JAVA CLASS
/2ee43ac9_StatefulEJBProvider JAVA CLASS
/ad45acec_StatelessEJBProvider JAVA CLASS
/da1c4a59_EntityEJBProvider JAVA CLASS
/66fdac3e_OracleSOAPHTTPConnec JAVA CLASS
/939c36f5_OracleSOAPHTTPConnec JAVA CLASS
org/apache/soap/rpc/Call JAVA CLASS
org/apache/soap/rpc/RPCMessage JAVA CLASS
org/apache/soap/rpc/Response JAVA CLASS
/198a7089_Message JAVA CLASS
/2cffd799_ProviderGroupUtils JAVA CLASS
/32ebb779_ProviderGroupMgrProx JAVA CLASS
18 rows selected.
This is a known issue. It can be solved by applying one of the following patches, depending on your IAS version.
Bug 3405173 - PORTAL 9.0.4.0.0 PATCH FOR 10G DB UPGRADE (FROM 9.0.X AND 9.2.X)
Bug 4100409 - PORTAL 9.0.4.1.0 PATCH FOR 10G DB UPGRADE (FROM 9.0.X AND 9.2.X)
Bug 4100417 - PORTAL 9.0.4.2.0 PATCH FOR 10G DB UPGRADE (FROM 9.0.X AND 9.2.X)
11. Import: IMP-00003: ORACLE error 30510 encountered
When importing Portal 9.0.4.x, the database-side import may produce an error ORA-30510. The new Perl scripts work around the issue in the portal_post_import.sql script, but the BASH scripts do not. If you use the BASH scripts, run this command manually in SQL*Plus logged in as portal after the import:
---- Import error 2 - ORA-30510 when importing
CREATE OR REPLACE TRIGGER logoff_trigger
before logoff on schema
begin
-- Call wwsec_oid.unbind to close open OID connections if any.
wwsec_oid.unbind;
exception
when others then
-- Ignore all the errors encountered while unbinding.
null;
end logoff_trigger;
/
This is logged as Bug 4458413.
12. Exporting from a 9.0.1 database and importing into a 9.2.0.5+ or 10g DB
When exporting from a 9.0.1 database and importing into a 10g database, it can happen that the Java classes do not get compiled correctly. The following errors are seen:
ORA-29534: referenced object PORTAL.oracle/net/www/proto/https/HttpsURLConnection could not be resolved
errors:: class oracle/net/www/proto/https/HttpsURLConnection
ORA-29521: referenced name oracle/security/ssl/OracleSSLSocketFactoryImpl could not be found
ORA-29521: referenced name oracle/security/ssl/OracleSSLSocketFactory could not be found
In such a case, please apply the following patches after the import in the 10g database.
Bug 3405173 PORTAL REPOS DB UPGRADE TO 10G: for Portal 9.0.4.0
Bug 4100409 PORTAL REPOS DB UPGRADE TO 10G: for Portal 9.0.4.1
Main Differences with Portal 9.0.2
For those used to this technique with Portal 9.0.2, you may be interested in the main differences from the corresponding note for Portal 9.0.2.
Cutter database
  Portal 9.0.2: Portal can be part of an infrastructure database or of a custom external database. The portal schema is imported into an empty database.
  Portal 9.0.4: Portal can only be installed in a 'Cutter database', a database created with RepCA or the OUI, always containing OID, DCM and so on. The portal schema is imported into a 'Cutter database'. (new)
group_sec.sql
  Portal 9.0.2: group_sec.sql is used to correct the OID GUIDs stored in Portal.
  Portal 9.0.4: ptlconfig -dad portal -oid is used to correct the OID GUIDs stored in Portal. (new)
Number of scripts
  Portal 9.0.2: The import/export is divided into several steps with several scripts.
  Portal 9.0.4: The import is done in one step, with the additional steps included in the script. This requires knowing the hostname and port of the original development machine. (new)
Import
  Portal 9.0.2: The steps are: creation of an empty database; creation of the users with password=username; import.
  Portal 9.0.4: The steps are: creation of an iAS 10g infrastructure DB (RepCA or OUI); deletion of the new portal schemas (new); creation of the users with the same passwords as the schemas just dropped; import.
DAD
  Portal 9.0.2: The DAD needs to be changed.
  Portal 9.0.4: The passwords are not changed, so the DAD does not need to be changed.
Bugs
  Portal 9.0.2: Two bugs were worked around by change_host.sh.
  Portal 9.0.4: Some additional tables need to be updated manually before running ptlasst. This is Bug 3762961.
Export of LDAP
  Portal 9.0.2: The export is done in LDIF files. If PROD and DEV have different domains, it is quite difficult to change the domain name in these files due to the line wrapping at 78 characters.
  Portal 9.0.4: The export is done in XML files, in the DSML format (new). It is a lot easier to change the XML files if the domain name differs between PROD and DEV.
Download
  Portal 9.0.2: You have to cut and paste the scripts.
  Portal 9.0.4: The scripts are attached to the note. Just download them.
Rewiring
  Portal 9.0.2: uses ptlasst:
  ptlasst.csh -mode MIDTIER -i custom -s $PORTAL_USER -sp $PORTAL_PASSWORD -c $PORTAL_HOSTNAME:$PORTAL_DB_PORT:$PORTAL_SERVICE_NAME -sdad $PORTAL_DAD -o orasso -op $ORASSO_PASSWORD -odad orasso -host $MIDTIER_HOSTNAME -port $MIDTIER_HTTP_PORT -ldap_h $INFRA_HOSTNAME -ldap_p $OID_PORT -ldap_w $IAS_PASSWORD -pwd $IAS_PASSWORD -sso_c $INFRA_HOSTNAME:$INFRA_DB_PORT:$INFRA_SERVICE_NAME -sso_h $INFRA_HOSTNAME -sso_p $INFRA_HTTP_PORT -ultrasearch -oh $MIDTIER_ORACLE_HOME -mc false -mi true -chost $MIDTIER_HOSTNAME -cport_i $WEBCACHE_INV_PORT -cport_a $WEBCACHE_ADM_PORT -wc_i_pwd $IAS_PASSWORD -emhost $INFRA_HOSTNAME -emport $EM_PORT -pa orasso_pa -pap $ORASSO_PA_PASSWORD -ps orasso_ps -pp $ORASSO_PS_PASSWORD -iasname $IAS_NAME -verbose -portal_only
  Portal 9.0.4: uses ptlconfig (new):
  ptlconfig -dad portal
Environment variables
  Portal 9.0.2: A lot of environment variables are needed.
  Portal 9.0.4: Just 3 environment variables are needed: the password of SYS, the password of IAS, and the ORACLE_HOME of the midtier. Everything else is found in iasconfig.xml and LDAP. (new)
TO DO
- Check if the orclcommonapplication name fits SID.hostname
- Check what the import of a portal30 upgraded schema into a schema named portal gives
- Explain how to copy the portal*.dbf files in place of export/import, and the limitation of tra -
Error While Firing select Query on Table
Hi all,
We have Oracle 11g R2 RAC on the production machine. When I fire a select query on one partitioned table, it shows me the below error in the alert log file:
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x0] [PC:0x84056AA, kkpamDInfo()+38] [flags: 0x0, count: 1]
Errors in file /opt/app/oracle/diag/rdbms/winsdb/WINSDB2/trace/WINSDB2_ora_29686.trc (incident=288563):
ORA-07445: exception encountered: core dump [kkpamDInfo()+38] [SIGSEGV] [ADDR:0x0] [PC:0x84056AA] [Address not mapped to object] []
Incident details in: /opt/app/oracle/diag/rdbms/winsdb/WINSDB2/incident/incdir_288563/WINSDB2_ora_29686_i288563.trc
While checking the trace file:
========== FRAME [2] (ksedst1()+98 -> skdstdst()) ==========
defined by frame pointers 0x2ba371efaa40 and 0x2ba371efa990
CALL TYPE: call ERROR SIGNALED: no COMPONENT: KSE
RDI 0000000000000000 RSI 0000000000000000 RDX 00002BA371EF6118
RCX 0000000000000001 R8 0000000000000000 R9 0000000000000000
RAX 0000000000000000 RBX 0000000000000003 RBP 00002BA371EFAA40
R10 71EFA9A000000000 R11 0000000000000000 R12 0000000000000003
R13 0000000000000003 R14 0000000000000001 R15 0000000000000001
RSP 00002BA371EFA9A0 RIP 000000000349E72E
Dump of memory from 0x2ba371efaa40 to 0x2ba371efaaf0
2BA371EFAA40 71EFAB10 00002BA3 0349E77F 00000000 [...q.+....I.....]
2BA371EFAA50 00000000 00000000 00000000 00000000 [................]
2BA371EFAA60 71B996F0 00002BA3 02050034 00000000 [...q.+..4.......]
2BA371EFAA70 000000FF 00002BA3 00002004 00000000 [.....+... ......]
2BA371EFAA80 00000000 00000000 2338D058 00016DAB [........X.8#.m..]
2BA371EFAA90 00000003 00000000 085232F3 00000000 [.........2R.....]
2BA371EFAAA0 0000000D 00000000 00000002 00000000 [................]
2BA371EFAAB0 00000000 00000000 00000000 00000000 [................]
2BA371EFAAC0 71EFAAD0 00002BA3 085BBFCF 00000000 [...q.+....[.....]
2BA371EFAAD0 71EFAB10 00002BA3 0349E249 00000000 [...q.+..I.I.....]
2BA371EFAAE0 00000000 00002BA3 00000013 00000000 [.....+..........]
Can anyone guide me on the above errors? How can I resolve them?
A quick search of Oracle Support shows that there are a handful of bugs that match ORA-07445 with an argument of kkpamDInfo()+38.
Whether yours is already identified and patched depends on the exact version of 11.2 and the specific circumstances.
If you're on 11.2.0.1 then this might be part of what sounds like quite a big bucket of such errors - Bug 9399991, relating to errors and dumps with SQL against partitioned tables.
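A hedged sketch for collecting what you need before matching against those bugs on My Oracle Support (standard commands; adjust ORACLE_HOME to your environment):
# Exact database version ...
sqlplus -s / as sysdba <<'EOF'
SELECT banner FROM v$version;
EXIT;
EOF
# ... and the one-off patches already applied.
$ORACLE_HOME/OPatch/opatch lsinventory
-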
Hello. I just performed a clean install of Solaris 10 and Sun Studio 12. Despite this, I cannot compile C99 code. If I try to use c99, I get a message similar to, "c99 utility unavailable SunOS 5.10". Using cc-5.0 with various -xc99 flags, I get an error saying, "c99 is not available SunOS 5.10".
Why will it not work? I included the SunOS Header Files package in the Solaris installation. I can even see them in my /usr/include directory. Yet, I cannot compile with C99 enabled. If I use the -xc99 flag, it has to be set to no_lib. And then, I still cannot compile C99 code.
What am I missing? What do I need to do to get C99 functionality?
To be clear, you tried the command /sun_studio/version_12/SUNWspro/bin/c99 somefile.c and got a message that c99 was not available? Can you copy/paste the exact command and response?
Please also run the command ls -CF /sun_studio/version_12/SUNWspro/bin and copy/paste the output.
I'm emphasizing copy/paste to ensure that I see exactly what you see.
Please also look at the Sun Studio installation log to see if any errors were reported.
This really isn't the place for a Unix tutorial, but very briefly:
In a terminal window you are interacting with a shell. Among the shell and environment variables is a "path", which shows which directories (or "folders" in Windows terminology) to search for commands that you type. The easiest way to run Sun Studio components is to add the bin directory ("bin" for "binary") to your path, so that you can just type "cc" or "c99" (or whatever) to run the command. The way you set the path depends on which shell you are running, so let's save that for after we fix whatever your problem is.
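For later reference, a minimal sketch of that path change (assuming Sun Studio is installed under /sun_studio/version_12/SUNWspro, as in the commands above):
# sh/ksh/bash:
PATH=/sun_studio/version_12/SUNWspro/bin:$PATH
export PATH
# csh/tcsh users would instead use:
#   set path = ( /sun_studio/version_12/SUNWspro/bin $path )
c99 -V   # should now report the compiler version if the installation is intact
-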
Invalid path to WORD templates folder
Hi experts,
after upgrading from release 2005 to 2007 I have a problem with exporting to Word.
The error message is: invalid path to WORD templates folder [message 20015-6].
Before the upgrade everything worked. I have SAP and folder authorizations.
The export to PDF by clicking the appropriate button does not work either. I worked around PDF export using File/Export/Layout to/PDF, but there is no such workaround for Word docs.
I have SAP Business One SP:01 PL:07.
Can you help me?
thanks
Ciao Pasquale,
to create a Word doc or a PDF doc from a marketing document you only need to click the related Word or PDF icon; but you do need to set the correct path on the General Settings Path tab, like "C:\Program Files\SAP\SAP Business One\WordDocs\Italy\" for the Word documents, and the same for Excel files, like "C:\Program Files\SAP\SAP Business One\ExclDocs\".
Please verify that the folder contains these files:
CustAccounts.doc
Docmnhl1.doc
mnhlDot1.dot
ToDoLetters.doc
I've tested on 2007 A SP01 PL08 and I'm not able to reproduce the issue, so I think there is something wrong in the client. I suggest you open a customer message with the Global Support Center, explaining in detail what happens, with a small Word document and each single step.
Or you can try to upgrade to the latest patch (PL08) and re-test, in case the issue is patch-dependent!
Regards,
Massimo Sala - LPE Italy -
After a rather big upgrade yesterday (see paste below) I have got a problem with MySQL query browser which is found in package mysql-gui-tools. Everything else seems to be okay, but after upgrading mysql-gui-tools from 5.0r12-3 to 5.0r14-1 (..and now, after another update, its at 5.0r14-2) it is no longer able to refresh schema/databases. Every time I try to change/refresh schema in the schema menu to the right the program hangs and I have to kill it. A refresh icon appears as normal to the left of the schema I'm trying to refresh, but that icon too has frozen. I am however able to execute normal SELECT queries etc if I write the full "path" to the table in the SQL query, but when I try a USE query the same thing happens.
I have done some searching on the Internet about this and found this thread to be interesting. Too bad it's from July 2007..
I tried to downgrade to the old version of query browser, but then I got this problem:
/usr/bin/mysql-query-browser-bin: error while loading shared libraries: libmysqlclient_r.so.15: cannot open shared object file: No such file or directory
So I guess there's a dependency problem here with the old version.
Has anyone else got any problem with the MySQL Query Browser after upgrade or has any ideas for a solution?
Log from pacman:
[2009-04-12 13:52] synchronizing package lists
[2009-04-12 13:52] starting full system upgrade
[2009-04-12 13:55] upgraded xf86-input-evdev (2.1.2-1 -> 2.2.1-1)
[2009-04-12 13:55] upgraded xorg-server (1.5.3-5 -> 1.6.0-3)
[2009-04-12 13:56] synchronizing package lists
[2009-04-12 13:56] starting full system upgrade
[2009-04-12 13:56] upgraded glib2 (2.20.0-1 -> 2.20.1-1)
[2009-04-12 13:56] upgraded libcap (1.10-2 -> 2.16-3)
[2009-04-12 13:56] upgraded avahi (0.6.24-1 -> 0.6.24-3)
[2009-04-12 13:56] upgraded cdrkit (1.1.9-1 -> 1.1.9-2)
[2009-04-12 13:56]
[2009-04-12 13:56] >>> Deluge's daemon is running with the "deluge" user. The default download directory is /srv/deluge/
[2009-04-12 13:56]
[2009-04-12 13:56] upgraded deluge (1.1.5-1 -> 1.1.6-3)
[2009-04-12 13:56] upgraded jack-audio-connection-kit (0.109.2-2 -> 0.116.2-1)
[2009-04-12 13:56] upgraded gstreamer0.10-bad-plugins (0.10.11-2 -> 0.10.11-3)
[2009-04-12 13:56] upgraded hdparm (9.12-1 -> 9.14-1)
[2009-04-12 13:56] upgraded iptables (1.4.2-1 -> 1.4.3.1-1)
[2009-04-12 13:56] upgraded kbproto (1.0.3-1 -> 1.0.3-2)
[2009-04-12 13:57] upgraded kdelibs (4.2.2-3 -> 4.2.2-4)
[2009-04-12 13:57] upgraded kernel26-firmware (2.6.28-1 -> 2.6.29-1)
[2009-04-12 13:59] >>> Updating module dependencies. Please wait ...
[2009-04-12 13:59] >>> MKINITCPIO SETUP
[2009-04-12 13:59] >>> ----------------
[2009-04-12 13:59] >>> If you use LVM2, Encrypted root or software RAID,
[2009-04-12 13:59] >>> Ensure you enable support in /etc/mkinitcpio.conf .
[2009-04-12 13:59] >>> More information about mkinitcpio setup can be found here:
[2009-04-12 13:59] >>> http://wiki.archlinux.org/index.php/Mkinitcpio
[2009-04-12 13:59]
[2009-04-12 13:59] >>> Generating initial ramdisk, using mkinitcpio. Please wait...
[2009-04-12 13:59] ==> Building image "default"
[2009-04-12 13:59] ==> Running command: /sbin/mkinitcpio -k 2.6.29-ARCH -c /etc/mkinitcpio.conf -g /boot/kernel26.img
[2009-04-12 13:59] :: Begin dry run
[2009-04-12 13:59] :: Parsing hook [base]
[2009-04-12 13:59] :: Parsing hook [udev]
[2009-04-12 13:59] :: Parsing hook [autodetect]
[2009-04-12 13:59] :: Parsing hook [pata]
[2009-04-12 13:59] :: Parsing hook [scsi]
[2009-04-12 13:59] :: Parsing hook [sata]
[2009-04-12 13:59] :: Parsing hook [usbinput]
[2009-04-12 13:59] :: Parsing hook [keymap]
[2009-04-12 13:59] :: Parsing hook [filesystems]
[2009-04-12 13:59] :: Generating module dependencies
[2009-04-12 13:59] :: Generating image '/boot/kernel26.img'...SUCCESS
[2009-04-12 13:59] ==> SUCCESS
[2009-04-12 13:59] ==> Building image "fallback"
[2009-04-12 13:59] ==> Running command: /sbin/mkinitcpio -k 2.6.29-ARCH -c /etc/mkinitcpio.conf -g /boot/kernel26-fallback.img -S autodetect
[2009-04-12 13:59] :: Begin dry run
[2009-04-12 13:59] :: Parsing hook [base]
[2009-04-12 13:59] :: Parsing hook [udev]
[2009-04-12 13:59] :: Parsing hook [pata]
[2009-04-12 13:59] :: Parsing hook [scsi]
[2009-04-12 13:59] :: Parsing hook [sata]
[2009-04-12 13:59] :: Parsing hook [usbinput]
[2009-04-12 13:59] :: Parsing hook [keymap]
[2009-04-12 13:59] :: Parsing hook [filesystems]
[2009-04-12 13:59] :: Generating module dependencies
[2009-04-12 14:00] :: Generating image '/boot/kernel26-fallback.img'...SUCCESS
[2009-04-12 14:00] ==> SUCCESS
[2009-04-12 14:00] upgraded kernel26 (2.6.28.8-1 -> 2.6.29.1-3)
[2009-04-12 14:00] upgraded klibc-udev (140-1 -> 141-1)
[2009-04-12 14:00] upgraded libavc1394 (0.5.3-1 -> 0.5.3-2)
[2009-04-12 14:00] upgraded libcddb (1.3.0-3 -> 1.3.2-1)
[2009-04-12 14:00] upgraded libdatrie (0.1.2-1 -> 0.2.1-1)
[2009-04-12 14:00] upgraded libdrm (2.3.1-3 -> 2.4.9-1)
[2009-04-12 14:00] upgraded libdvdread (0.9.7-1 -> 0.9.7-2)
[2009-04-12 14:00] upgraded libfontenc (1.0.4-1 -> 1.0.4-2)
[2009-04-12 14:00] upgraded libid3tag (0.15.1b-2 -> 0.15.1b-3)
[2009-04-12 14:00] upgraded libmatroska (0.8.1-1 -> 0.8.1-2)
[2009-04-12 14:00] upgraded libmpd (0.16.1-1 -> 0.18.0-1)
[2009-04-12 14:00] upgraded libmysqlclient (5.0.77-1 -> 5.1.33-1)
[2009-04-12 14:00] upgraded libogg (1.1.3-1 -> 1.1.3-2)
[2009-04-12 14:00] upgraded libsamplerate (0.1.6-1 -> 0.1.7-1)
[2009-04-12 14:00] upgraded libthai (0.1.9-1 -> 0.1.11-1)
[2009-04-12 14:00] upgraded libx11 (1.2-1 -> 1.2.1-1)
[2009-04-12 14:00] installed libftdi (0.15-1)
[2009-04-12 14:00] upgraded lirc-utils (0.8.4-1 -> 0.8.5pre2-1)
[2009-04-12 14:00] upgraded m4 (1.4.12-1 -> 1.4.13-1)
[2009-04-12 14:00] upgraded man-db (2.5.4-2 -> 2.5.5-1)
[2009-04-12 14:00] upgraded man-pages (3.19-1 -> 3.20-1)
[2009-04-12 14:00] upgraded nvidia-utils (180.29-3 -> 180.44-1)
[2009-04-12 14:00] installed dri2proto (1.99.3-1)
[2009-04-12 14:01] upgraded mesa (7.2-1 -> 7.4-1)
[2009-04-12 14:01] upgraded mpfr (2.3.2-2 -> 2.4.1-1)
[2009-04-12 14:01] upgraded mpg123 (1.7.1-4 -> 1.7.2-1)
[2009-04-12 14:01] upgraded mysql-clients (5.0.77-1 -> 5.1.33-1)
[2009-04-12 14:01] upgraded mysql (5.0.77-3 -> 5.1.33-1)
[2009-04-12 14:01] upgraded mysql-gui-tools (5.0r12-3 -> 5.0r14-1)
[2009-04-12 14:01] In order to use the new nvidia module, exit Xserver and unload it manually.
[2009-04-12 14:01] upgraded nvidia (180.29-3 -> 180.44-1)
[2009-04-12 14:01] warning: /etc/pacman.d/mirrorlist installed as /etc/pacman.d/mirrorlist.pacnew
[2009-04-12 14:01] upgraded pacman-mirrorlist (20090108-1 -> 20090405-1)
[2009-04-12 14:01] upgraded pango (1.24.0-1 -> 1.24.0-2)
[2009-04-12 14:01] upgraded php (5.2.9-2 -> 5.2.9-3)
[2009-04-12 14:01] upgraded pm-utils (1.2.4-3 -> 1.2.5-1)
[2009-04-12 14:01] installed perl-xyne-common (0.01-5)
[2009-04-12 14:01] installed perl-html-tagset (3.20-1)
[2009-04-12 14:01] installed perl-html-parser (3.60-1)
[2009-04-12 14:01] installed perl-libwww (5.825-1)
[2009-04-12 14:01] installed perl-xyne-arch (0.03-5)
[2009-04-12 14:01] ######################
[2009-04-12 14:01] ## IMPORTANT NOTICE ##
[2009-04-12 14:01] ######################
[2009-04-12 14:01] Powerpill options and configuration file syntax have changed with
[2009-04-12 14:01] version 16.0. Please remove old configuration files and use the
[2009-04-12 14:01] default configuration file at /etc/powerpill.conf as a template for new
[2009-04-12 14:01] ones. Please see the man page for information on the command-line options.
[2009-04-12 14:01] upgraded powerpill (15.12-1 -> 16.0-5)
[2009-04-12 14:01] upgraded python-numpy (1.2.1-4 -> 1.3.0-1)
[2009-04-12 14:02] upgraded qt (4.5.0-3 -> 4.5.0-4)
[2009-04-12 14:02] upgraded qt3 (3.3.8-9 -> 3.3.8-10)
[2009-04-12 14:02] upgraded redland (1.0.8-1 -> 1.0.8-3)
[2009-04-12 14:02] upgraded tdb (3.3.1-1 -> 3.3.3-1)
[2009-04-12 14:02] upgraded smbclient (3.3.1-1 -> 3.3.3-1)
[2009-04-12 14:02] upgraded subversion (1.6.0-2 -> 1.6.1-2)
[2009-04-12 14:02] upgraded syslog-ng (3.0.1-4 -> 3.0.1-6)
[2009-04-12 14:02] upgraded tzdata (2009d-1 -> 2009e-1)
[2009-04-12 14:02] upgraded udev (140-2 -> 141-1)
[2009-04-12 14:02] upgraded xf86-input-keyboard (1.3.2-1 -> 1.3.2-2)
[2009-04-12 14:02] upgraded xf86-input-mouse (1.3.0-1 -> 1.4.0-2)
[2009-04-12 14:02] upgraded xf86-video-vesa (2.1.0-1 -> 2.2.0-1)
[2009-04-12 14:02] upgraded xfce4-mpc-plugin (0.3.3-2 -> 0.3.3-3)
[2009-04-12 14:02] upgraded xorg-server-utils (7.4-3 -> 7.4-4)
[2009-04-12 14:02] upgraded xorg-utils (7.4-2 -> 7.4-3)
Last edited by siaco (2009-04-14 21:42:10)
I found the error on this. In the current PKGBUILD found in AUR, the patch mysql-gui-tools.chema_change_freeze_bug.patch is no longer applied.
I downloaded all related files for this package and built it myself now, with some changes to the PKGBUILD and to mysql-gui-tools.chema_change_freeze_bug.patch. It works again :-)
New PKGBUILD:
# $Id: PKGBUILD,v 1.14 2009/04/12 11:52:45 dsa Exp $
# Maintainer: Douglas Soares de Andrade <[email protected]>
# Contributor: Vinay S Shastry <[email protected]>
pkgname=mysql-gui-tools
pkgver=5.0r14
pkgrel=2
arch=('i686' 'x86_64')
pkgdesc="Set of programs to manage and interact with a MySQL server."
url="http://www.mysql.com/products/tools/"
license=('GPL')
source=(http://mirrors.uol.com.br/pub/mysql/Downloads/MySQLGUITools/$pkgname-$pkgver.tar.gz
bad-char.patch
mysql-gui-tools-sigc_2.1.1_api_fixes.diff
mysql-gui-tools-5.0_p12-deprecated-gtk+-api.patch
mysql-gui-tools-gcc43.patch
mysql-gui-tools.chema_change_freeze_bug.patch)
depends=('gtkmm' 'gtkhtml' 'libmysqlclient' 'pcre')
replaces=('mysql-administrator' 'mysql-query-browser')
conflicts=('mysql-administrator' 'mysql-query-browser')
provides=('mysql-gui-common' 'mysql-administrator' 'mysql-query-browser')
makedepends=('pkgconfig' 'lua' 'libxml2' 'libgnomeprint')
options=('!makeflags')
build() {
cd $startdir/src/$pkgname-$pkgver
# Patch from mysql.com to fix the freeze when selecting a schema
patch -p1 < ../mysql-gui-tools.chema_change_freeze_bug.patch || return 1
# Patch to make 5.0r14 compile
patch -Np1 < $startdir/src/bad-char.patch
patch -Np1 < $startdir/src/mysql-gui-tools-sigc_2.1.1_api_fixes.diff
patch -Np1 < $startdir/src/mysql-gui-tools-gcc43.patch
patch -Np0 < $startdir/src/mysql-gui-tools-5.0_p12-deprecated-gtk+-api.patch
cd $startdir/src/$pkgname-$pkgver/common
sh autogen.sh
./configure --prefix=/usr --datarootdir=/usr/share --with-gtkhtml=libgtkhtml-3.14 || return 1
make || return 1
make DESTDIR=$startdir/pkg install || return 1
cd ..
cp -R common mysql-gui-common
cd $startdir/src/$pkgname-$pkgver/administrator
sh autogen.sh
./configure --prefix=/usr --datarootdir=/usr/share --with-gtkhtml=libgtkhtml-3.14 || return 1
make || return 1
make DESTDIR=$startdir/pkg install || return 1
cd $startdir/src/$pkgname-$pkgver/query-browser
sh autogen.sh
./configure --prefix=/usr --datarootdir=/usr/share --with-gtkhtml=libgtkhtml-3.14 || return 1
make CFLAGS="${CFLAGS} -D_GNU_SOURCE" || return 1
make DESTDIR=$startdir/pkg install || return 1
#cd $startdir/src/$pkgname-$pkgver/mysql-workbench
#patch -p1 < ../../mysql-gui-tools-5.0_p12-workbench-lua.patch
#./configure --prefix=/usr --with-gtkhtml=libgtkhtml-3.14 || return 1
#make || return 1
#make DESTDIR=$startdir/pkg install
# Some adjusts to make mysql-workbench run
#cd $startdir/pkg/usr/bin
#mv mysql-workbench mysql-wb
#mv mysql-workbench-bin mysql-wb-bin
#install -m755 $startdir/src/mysql-workbench.sh mysql-workbench
#rm -rf $startdir/pkg/usr/lib/
# Fixed startup scripts
install -m755 $startdir/mysql-administrator $pkgdir/usr/bin
install -m755 $startdir/mysql-query-browser $pkgdir/usr/bin
}
md5sums=('b8efefbf20b7264c8f3afd34424467d7'
'4279c75bb5e6c2bfcb16c98817d55b80'
'4625629385142862cd01d37f814d5e80'
'33205d45329ab4fa4096b6b298a60b2c'
'1368384dac87bc0a64adb774ab2e6cbd'
'2ff840932405f7a6a6863f633a639fe9')
New mysql-gui-tools.chema_change_freeze_bug.patch (only the paths in the file were changed; I don't know if this was really needed, but I believe so):
diff -ruN mysql-gui-tools-5.0r11.ORIG/query-browser/source/linux/MQQueryDispatcher.cc mysql-gui-tools-5.0r11/query-browser/source/linux/MQQueryDispatcher.cc
--- mysql-gui-tools-5.0r11.ORIG/query-browser/source/linux/MQQueryDispatcher.cc 2007-02-21 01:31:19.000000000 +0000
+++ mysql-gui-tools-5.0r11/query-browser/source/linux/MQQueryDispatcher.cc 2007-11-09 15:31:38.000000000 +0000
@@ -558,8 +558,8 @@
Gtk::Main::instance()->run();
- while(!req->is_complete())
+// while(!req->is_complete())
+// ;
return sps;
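For completeness, a hedged sketch of the rebuild workflow (assuming the PKGBUILD, the patches and the fixed startup scripts above are saved together in one directory):
# Build the patched 5.0r14-2 package and install it as root.
cd mysql-gui-tools
makepkg
pacman -U mysql-gui-tools-5.0r14-2-*.pkg.tar.gz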
I hope this helps anyone else who needs to fix this! -
Upgrade from 11.5.10 to CU2 is taking more than 30 hours
Hi All ,
We're upgrading our application from 11.5.10 to 11.5.10.2 by applying patch 3460000. Our OS version is AIX 5.3.
The patch has been running for the last 30 hours and is still running. We've checked the patch log and the worker logs, and from those we've found that it's running successfully. But 30 hours is a very long time, we can't arrange such downtime on PROD, and the patch is still not complete.
I want to know, if anyone here has applied this patch earlier, how much time it took to complete. Is this time OK? Also, does this patch depend on the DATABASE size? Our database size is 3 TB - is that also creating a problem, and if yes, what's the workaround?
An early reply or workaround would be highly appreciated. You can also reply to my id: [email protected]
regds
Rahul Gupta
Rahul,
30 hours does seem like a long time. How many parallel workers are you using? On a 12 CPU box, you should specify workers=36 or workers=48 unless you start swapping heavily with that many java processes during the database portion. Also, how many other instances are running on this box? Does the box seem overloaded?
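If you do restart the session with more workers, a hedged sketch (the worker count is illustrative, taken from the numbers above):
# Restart adpatch with a higher parallel worker count.
adpatch workers=36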
After the upgrade completes, go into Oracle Applications Manager and look at the timing details for 3460000, or look at the csv timing report in $APPL_TOP/admin/<sid>/out and look for problems with the longest running tasks.
You may want to run bde_rebuild.sql on your biggest schemas to look for fragmented indexes and ensure you have done gather schema stats before the upgrade along with a gather dictionary stats as shown below:
Collecting Statistics with Oracle Apps 11i
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=368252.1
bde_rebuild.sql - Validates and rebuilds indexes occupying more space than needed
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=182699.1
!date
execute dbms_stats.unlock_schema_stats('SYS');
execute dbms_stats.unlock_schema_stats('SYSTEM');
exec dbms_stats.gather_schema_stats('SYSTEM',options=>'GATHER', estimate_percent => 100, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
exec dbms_stats.gather_schema_stats('SYS',options=>'GATHER', estimate_percent => 100, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
exec dbms_stats.gather_fixed_objects_stats();
commit;
exec dbms_stats.DELETE_TABLE_STATS('SYS','X$KCCRSR');
exec dbms_stats.LOCK_TABLE_STATS('SYS','X$KCCRSR');
commit;
Rman Backup is Very Slow selecting from V$RMAN_STATUS
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=375386.1
Poor performance when accessing V$RMAN_BACKUP_JOB_DETAILS
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=420200.1
Troubleshooting Oracle Applications Performance Issues
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=169935.1 -
Sun Update Manager Entitlement Permissions
I recently ran "smpatch analyze" and "smpatch update" on a W1100z workstation running Solaris 10 (x86) [03/05]. Five patches were loaded including 119103-03 which upgraded Patch Manager to Update Manager.
Now, it is my understanding that without a Service Plan I am entitled to the Security Updates, but that's it (Recommended Updates require a service plan???). That seems OK to me, but if this is the case, what happens when a Security Update depends on a Recommended Update?
Update Manager shows a list of 8 new available updates this morning - 5 of them are security updates that all seem to rely on Recommended Update 119684-01. However, when I try to install 119684-01, I am given the following warning:
Failed Installation - update specified does not have entitlement permissions
Not sure if something else is going on or if this is due to the fact that I do not have a service plan and only require the security updates - however, it doesn't make much sense to release security updates for free if the recommended update it relies on is not also available.
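One hedged workaround sketch, assuming you can still download the patch zips from SunSolve yourself (<security-patch> is a placeholder for any of the five security updates mentioned above):
cd /var/tmp
unzip 119684-01.zip && patchadd /var/tmp/119684-01   # prerequisite first
unzip <security-patch>.zip && patchadd /var/tmp/<security-patch>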
I like the fact that Sun is trying to ease patch management, but I really question the need of a service plan to keep the systems up-to-date.
Am I missing something, or do I have to forget about all these Security Updates unless I subscribe to a Service Plan?
JB
From Sun's Website describing Update Manager:
http://www.sun.com/service/sunupdate/gettingstarted.html
"Customers who have not purchased a valid Sun Service Plan can use the Sun Update Manager software to access security fixes and device drivers."
This applies to my situation. I have not subscribed to a Service Plan and really see no need to if what is described above is true - I would assume the "security fixes" refer to patches of Type=Security. However, this tool is useless if I am not allowed to also install patches of Type=Recommended that the Security Patches depend on.
I can understand Sun wanting users to pay for the Recommended Patches, but if a Security Patch depends on a Recommend Patch, that Recommended Patch should probably also be labelled as a Security Fix.
JB -
1) If you run install_cluster in the latest S10(sparc) recommended set, the
first time through it will install most of the patches. If you run it again, it
will install another one (maybe two). How odd is that?
2) The main kernel patch 118833-36 won't install because patch 119042-10
won't install. Now, 119042 won't install "due to a failure produced by pkgadd".
A look at the log file shows a claim that the pkg in question was already installed,
which kills the overlying patchadd.
Has anyone encountered this or have any hints? Thanks.
PS Didn't we already decide that putting /usr/lib/sendmail in the kernel patch
was a BAD idea?
-T
1) Not odd at all. Some patches depend on other
patches being previously installed AND operational.
Hm, thought that is why they had a patch order file in the cluster. :-)
So you are suggesting that it should be standard practice to always run
install_cluster TWICE to make sure all the proper patches are installed.
That's new for me, but if that's what it takes.
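A hedged sketch of that double-run-and-verify approach (patch IDs from the messages above):
./install_cluster                # second pass picks up the stragglers
showrev -p | grep 119042         # check which revision pkgadd believes is installed
patchadd 118833-36               # then retry the kernel patch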
-T -
Getconf returns compiler flags that the compiler warns about
I have Workshop 12u1 installed on my Solaris 10u8 machine (x86_64). I'm up to date with patches as of a couple weeks ago.
I'm well aware that the Workshop suite has deprecated flags like -xarch=generic64, -xarch=amd64a, etc.
However, when I run "getconf" on my Solaris 10 system, some of the conf settings still return those older flags that the compiler now warns about. That causes a lot of warning chatter when building certain applications (e.g. perl 5.10.1) that are smart enough to use "getconf" to try determine what flags to use in certain situations. Using getconf to find the right flags is even mentioned in the standards(5) page, but following its advice results in deprecation warnings from the compiler. Not good.
If you run the following code you'll see what I mean:
for f in `man getconf | egrep 'XBS'`; do echo "Checking $f"; getconf "$f"; done
Is this something that I (as a sysadmin for the system) should be able to configure, or is it something that would need to be patched to fix? The man pages for getconf, sysconf, and confstr aren't clear where the values are actually coming from.
So, how should this be fixed?
Thanks,
Tim
Enchanter wrote:
Since I'm running regular old Solaris 10, rather than OpenSolaris, I'm skeptical how much help the OpenSolaris developers are truly going to be. I can certainly give it a go, but even if they fix it in OpenSolaris, it might be a very long time before that fix filters into Solaris.
Actually, it goes the other way. New features and modifications go into the internal Solaris development workspace before they migrate to Open Solaris. The Solaris Express and final releases and Open Solaris releases come from the same source base. Whether a feature or update goes into a Solaris 10 patch depends partly on demand, especially demand from customers with service contracts.
I'm also a little puzzled about the "backwards compatibility" thinking of the compiler developers. In some areas (e.g. C++ and the standard library), what's shipped with the compiler is years and years behind what's current, apparently in the name of "backward compatibility". Yet where compiler flags are concerned, the developers seem to be much more cavalier about deprecating things and making changes that break backward compatibility.
We try very hard not to break code or makefiles. Sun (and now Oracle) makes its money from enterprise users. Their code bases last for a long time, and changes are expensive. Changing a line in a makefile or source code file can mean having to re-certify the application.
The change from -xarch to -m for specifying the memory model was more abrupt than we would have liked, and in hindsight was probably not handled very well. We were running into a combinatorial explosion of -xarch sub-options due to the increasing number of architectures that the compiler supports. Some option combinations quietly resulted in behavior that you didn't expect. (For example, the combination "-fast -xarch=v9" did not give the same result as "-xarch=v9 -fast".) Separating the memory model from the other considerations made the options easier and more reliable to use.
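A hedged illustration of the spelling change described above (foo.c is a placeholder source file):
cc -xarch=generic64 -c foo.c   # old spelling; now draws a deprecation warning
cc -m64 -c foo.c               # current spelling for a 64-bit compile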
Don't get me wrong -- I'm all for forward progress with the tool-chain -- I think that the C++ standard library and the defaults should all be brought up to what's current in the industry.
Sun Studio 12 update 1 has direct support for Apache stdcxx (if installed in a standard location) in addition to libCstd and STLport. The Apache library will be in the next Open Solaris. When it will be available in a Solaris 10 update has not yet been decided.
I also think adding support for -m32/-m64 was a good move. It just seems to me that if the developers are going to deprecate a large number of flags that the compiler accepts, they should also make certain that the rest of the operating system can handle that change. To me, that means that they need to work with the developers of getconf/confstr() to make certain it knows how to check the compiler version before it emits a particular flag.
Only a few options have been deprecated. It's just that they are probably the ones you used most. :-(
More than one compiler version can be installed on a system, and compilers can be installed anywhere. Programmers in medium to large shops typically run compilers installed on a remote server, although they might have other compilers installed locally. If getconf had to guess what compiler you were using, it would often guess wrong. If you have suggestions on how to match up getconf behavior with the compiler you want to use, please post them in an Open Solaris forum. -
Oracle Diagnostics for E-Business
Hi, I am trying to get my head around the current situation for Oracle diagnostics.
My understanding is that the older Diagnostics Framework has been replaced with a proper Diagnostics Module (IZU), available as a responsibility within E-Biz. From 12.1.1 this module is included in the base install. Prior to that, the module is still available as a patch (depending on what version of E-Biz you are on), but only back to a minimum version of 11.5.4 (with prerequisite FND and AD levels).
Q1. For this new Diagnostics Module - does it have an accompanying catalog of tests within the patch for the Diagnostics module itself, or is the catalog supplied via a separate patch?
Q2. My understanding therefore is that updated test catalogs are only available with each RUP of E-Business. However, can you also add individual tests to your catalog, as each test itself has a patch number - is that right?
Q3. So the only cumulative batch of tests are those supplied with each RUP, rather than cumulative patches for the catalog - is that correct?
Q4. If you are already using the older diagnostics framework on an 11.5.X system and you wish to use the Diagnostics module (IZU), is it simply a case of installing this module through the appropriate patch (and readme)? In such a case, does the IZU module then override the older diagnostics framework that was in place?
Any clarity appreciated,
thanks,
Jim
Please post details of OS, database and EBS versions.
>
Q1. For this new Diagnostics Module - does it have an accompanying Catalog of tests within the patch for the Diagnostic module itself or is the catalog supplied via a separate patch ?
>
Both - the default IZU install comes with a standard battery of tests - these tests are updated thru IZU patches. See MOS Doc 235307.1 (E-Business Suite Diagnostic Tools FAQ and Troubleshooting Guide for Release 11i and R12)
>
Q2. My understanding therefore is that a updated test catalogs are only available with each RUP of E-Business. However can you also added individual tests to your catalog as each test itself has a patch number - is that right ?
>
Updates to test catalogs are available as individual patches for the IZU module.
>
Q3. So the only cumulative batch of tests are those supplied with each RUP, rather than cumulative patches for the catalog - is that correct ?
>
Not sure what you mean - each RUP will update the test catalog - and individual IZU patchsets will also update the test catalog.
>
Q4. If you are already using the older diagnostics framework on an 11.5.X system and you wish to use the diagnostics module ( IZU ) is it simply a case of installing this module through the appropriate patch ( and readme ) ? In such a case does the IZU module then override the older diagnostics framework that was in place
>
Yes - create/enable the IZU module as per the patch README - I believe doing so will override any older diagnostics framework (which diagnostics framework is this :-) ) ?
HTH
Srini -
Hi, I have a 17 inch MacBook Pro with some strange marks on the screen. At first I thought it was just dirt; I tried cleaning, but this seems to make no difference. Can anyone suggest anything? Could the screen be damaged? The patches only show from particular angles, sometimes as dark patches and sometimes as light patches, depending on the angle of view.
Thanks
Marcus
MacBook Pro 17 inch, Mac OS X (10.4.7)
If it is really troublesome and upsets you then I would call AppleCare; yes, it will be covered by the guarantee. It is just that you never know how you will receive your MBP back (whether in perfect condition or scratched), or, if they decide to exchange it, whether it will have other problems. So I decided to keep mine, as it only seems visible during boot up on the blue screen.
If you bought it from a store in London, take it back to them and speak to a genius. They can tell you then and there what needs to be done.
Good luck.
Maybe you are looking for
-
Restart/shutdown comps frm an attached file in mails created via java,other
As, I have heard 'bout the invention of such attachment which when clickin' can restart/shutdown the computer tryin' to open it. I wonder if it really can be or just one of the bluffin'..... I first thought of creatin' a batch file n' sendin' it via
-
Watching DVD's with external speakers?
We like to watch DVD's on our Air. Is there a way to hook up external speakers? I know the Superdrive won't work through a hub. Any ideas or suggestions?
-
High Risk Vendor PO Approval Process
Hi All, We have a requirement where in we classify few vendors as High Risk Vendors at the Supplier Header Level. When PO is created, the requirement is not to block the Permanent creation of PO, but to do extra check/validation/approval by complianc
-
I am a beginner to JDBC and my english is not good. i hope anyone here understand where my problem is. i am trying to show some data in the database using very simple servlet. i have checked that the string returned have size (rs.getString(1).length(
-
When I try to select a category to sell an item on eBay I get the following error message. "Please provide the correct information in the highlighted fields. Please select at least one category to list your item in." I can search and select the categ