Upgrade of the database in which the GC repository resides

I have GC 10.2.0.3 running with the repository stored in a 9.2.0.8 database.
I would like to upgrade the database to 10.2.0.3 using dbua if possible. When the dbua sees an upgrade to 10.2 it creates a new SYSMAN schema, but I already have one that the Grid Control install created when I used the option "install into an existing database."
I've searched MetaLink on how to do this, and created an SR, but am having trouble getting support to understand what I'm attempting.
I'm open to anything, creating a new database, etc. The only thing I want to be sure of is not to lose the information that I've already established in Grid, which I'm assuming is stored in the SYSMAN schema.

A totally fresh, clean 10gR2 database on a different host and platform.
2.3 Export/Import
If the source and destination databases are not both 10g, export/import is the only option for cross-platform database migration.
For better export/import performance, set higher values for BUFFER and RECORDLENGTH. Do not export to NFS, as it will slow down the process considerably.
Direct path export can be used to increase performance. Note: because EM uses VPD, Oracle will use conventional mode on any table where a policy is defined.
Also, the user running the export should have the EXEMPT ACCESS POLICY privilege so that all rows are exported; that user is then exempt from VPD policy enforcement. SYS is always exempt from VPD and Oracle Label Security policy enforcement, regardless of the export mode, application, or utility used to extract data from the database. A hedged sketch of these settings follows.
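For illustration only (the grantee SYSTEM and the parameter values below are assumptions, not requirements of this note), the grant and an export invocation might look like:
connect / as sysdba
grant exempt access policy to system;
exp system/<password> full=y direct=y recordlength=65535 buffer=2097152 file=fullem102.dmp log=fullem102exp.log
Note that BUFFER affects only conventional-path exports, while RECORDLENGTH affects direct path.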
2.3.1 Prepare for Export/Import
* Mgmt_metrics_raw partitions check
select table_name, partitioning_type type,
       partition_count count, subpartitioning_type subtype
from dba_part_tables
where table_name = 'MGMT_METRICS_RAW';
If MGMT_METRICS_RAW has more than 3276 partitions, please see Bug 4376351; this bug is fixed in 10.2. Old partitions should be dropped before export/import to avoid this issue, which will also speed up the export/import process (a count check is sketched after this item).
To drop old partitions, run: exec emd_maintenance.partition_maintenance
(This requires shutting down the OMS and setting job_queue_processes to 0 while the partition drop runs.) Please refer to the EM Performance Best Practices document for more details on usage.
A workaround to avoid bug 4376351 is to export mgmt_metrics_raw in conventional mode; this is needed only if the partition drop is not run. Note: running the old-partition drop is highly recommended.
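A minimal count check (assuming the repository owner is SYSMAN):
select count(*) from dba_tab_partitions
where table_owner = 'SYSMAN' and table_name = 'MGMT_METRICS_RAW';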
* Shutdown OMS instances and prepare for migration
Shut down all OMS instances, set job_queue_processes to 0, and remove the EM dbms jobs using the following commands (an OMS shutdown sketch follows this step):
connect /as sysdba
alter system set job_queue_processes=0;
connect sysman/<password>
@ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_remove_dbms_jobs.sql
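To stop the OMS itself, something like the following is typically run from the OMS home (the path placeholder is an assumption; adjust to your install):
<OMS_HOME>/bin/emctl stop oms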
2.3.2 Export
Before running the export, make sure that the NLS_LANG environment variable matches the database character set. For example, after running this query:
SQL> select value from nls_database_parameters where PARAMETER='NLS_CHARACTERSET';
VALUE
WE8ISO8859P1
Then the NLS_LANG environment variable should be set to AMERICAN_AMERICA.WE8ISO8859P1.
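How NLS_LANG is set depends on the shell; for the character set above it might look like (illustrative):
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1     (sh/bash/ksh)
setenv NLS_LANG AMERICAN_AMERICA.WE8ISO8859P1     (csh/tcsh)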
* Export data
exp full=y constraints=n indexes=n compress=y file=fullem102_1.dmp log=fullem102exp_1.log
Provide the system username and password when prompted.
Verify the log file and make sure that no character set conversion happened (the phrase "possible charset conversion" should not appear in the log file); a quick check is sketched below.
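A minimal check against the log file produced above (no output means no conversion warning was logged):
grep -i "possible charset conversion" fullem102exp_1.log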
* Export without data and with constraints
exp full=y constraints=y indexes=y rows=n file=fullem102_2.dmp log=fullem102exp_2.log
Provide the system username and password when prompted.
2.3.3 Import
Before running the import, make sure that the NLS_LANG environment variable matches the database character set.
* Run RepManager to drop target repository (if target database has EM repository installed)
cd ORACLE_HOME/sysman/admin/emdrep/bin
RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action drop
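For illustration (the host, port, and SID below are hypothetical):
RepManager newrepos.example.com 1521 emrep -sys_password <password_for_sys_account> -action drop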
* Pre-create the tablespaces and the users in target database
@ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_tablespaces.sql
@ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql
@ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_pre_import.sql
For the first two scripts, provide the input arguments when prompted, or pass them on the command line, for example:
@ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_tablespaces.sql MGMT_TABLESPACE <path>/mgmt.dbf <size of mgmt.dbf> <autoextend size> MGMT_ECM_DEPOT_TS <path>/mgmt_ecm_depot1.dbf <size of mgmt_ecm_depot1.dbf> <autoextend size>
@/scratch/nagrawal/OracleHomes/oms10g/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql sysman <sysman password> MGMT_TABLESPACE TEMP CENTRAL ON
* Import data -
imp constraints=n indexes=n FROMUSER=sysman TOUSER=sysman buffer=2097152 file=fullem102_1.dmp log=fullem102imp_1.log
* Import without data and with constraints -
imp constraints=y indexes=y FROMUSER=sysman TOUSER=sysman buffer=2097152 rows=n ignore=y file=fullem102_2.dmp log=fullem102imp_2.log
Verify the log file and make sure that no character set conversion happened (the phrase "possible charset conversion" should not appear in the log file).
2.3.4 Post Import EM Steps
* Please refer to Section 3.1 for Post Migration EM Specific Steps
3 Post Repository Migration Activities
3.1 Post Migration EM Steps
The following EM-specific steps should be carried out post migration:
* Recompile all invalid objects in sysman schema using
connect sysman/<password>
@ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_recompile_invalid.sql
* Run the post-plugin steps to recompile any invalid objects, create public synonyms, create other users, enable the VPD policy, and re-pin packages:
connect sysman/<password>
@ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_synonyms.sql
@ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_post_import.sql
Provide ORACLE_HOME/sysman/admin/emdrep/sql for em_sql_root
SYSMAN for em_repos_user
MGMT_TABLESPACE for em_tablespace_name
TEMP for em_temp_tablespace_name
Note – The users created by admin_post_import.sql will have the same passwords as their usernames.
Check for invalid objects; compare the source and destination schemas for any discrepancy in object counts and invalid objects (a sketch of such a check follows).
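One way to compare, run in both the source and destination databases (a minimal sketch):
select count(*) from dba_objects where owner = 'SYSMAN';
select object_name, object_type from dba_objects
where owner = 'SYSMAN' and status = 'INVALID';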
* Per EM bug 6439035, the following queues are not enabled after running admin_post_import.sql; enable them manually by running:
connect sysman/<password>
exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_TASK_Q');
exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_PAF_RESPONSE_Q');
exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_PAF_REQUEST_Q');
exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_LOADER_Q');
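To verify the queues afterwards, a query like this can help (assuming the SYSMAN queues follow the MGMT% naming used above):
select name, enqueue_enabled, dequeue_enabled
from dba_queues
where owner = 'SYSMAN' and name like 'MGMT%';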
* Please check for the contexts using the following query:
connect sysman/<password>
select * from dba_context where SCHEMA='SYSMAN';
If any of the following contexts are missing, create them using:
connect sysman/<password>
create or replace context storage_context using storage_ui_util_pkg;
create or replace context em_user_context using setemusercontext;
* Partition management
Check that the necessary partitions are created so that the OMS does not run into problems loading into non-existent partitions (this problem can arise only if days have elapsed between export and import):
exec emd_maintenance.analyze_emd_schema('SYSMAN');
This will create all necessary partitions up to the current date (a verification sketch follows).
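To list the partitions that now exist (an illustrative check):
select partition_name from dba_tab_partitions
where table_owner = 'SYSMAN' and table_name = 'MGMT_METRICS_RAW'
order by partition_position;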
* Submit EM dbms jobs
Reset job_queue_processes back to its original value and resubmit the EM dbms jobs:
connect /as sysdba
alter system set job_queue_processes=10;
connect sysman/<password>
@ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_submit_dbms_jobs.sql
* Update OMS properties and startup OMS
Update emoms.properties so that oracle.sysman.eml.mntr.emdRepConnectDescriptor reflects the migrated repository.
Update the host name and port with the correct values and start the OMS; an illustrative property value follows.
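For illustration only (the host, port, and SID below are hypothetical):
oracle.sysman.eml.mntr.emdRepConnectDescriptor=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=newhost.example.com)(PORT=1521)))(CONNECT_DATA=(SID=emrep)))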
* Relocate “Management Services and Repository” target
If the “Management Services and Repository” target needs to be migrated to the destination host, delete the old "Management Services and Repository" target, then add it again with the same name on the agent running on the new machine.
* Run the following SQL to verify that the repository collections are enabled for the emrep target:
SELECT target_name,
       metric_name,
       task.task_id,
       task.interval,
       task.error_message,
       trunc((mgmt_global.sysdate_utc - next_collection_timestamp) / 1440) delay
FROM   mgmt_collection_metric_tasks mtask,
       mgmt_collection_tasks task,
       mgmt_metrics met,
       mgmt_targets tgt
WHERE  met.metric_guid = mtask.metric_guid
AND    tgt.target_guid = mtask.target_guid
AND    mtask.task_id = task.task_id(+)
AND    met.source_type > 0
AND    met.source != ' '
AND    tgt.target_type = 'oracle_emrep'
ORDER  BY mtask.task_id;
This query should return the same records in both the source and destination databases. If any collections are missing in the destination database, run the following to schedule them there:
DECLARE
  traw  RAW(16);
  tname VARCHAR2(256);
  ttype VARCHAR2(64);
BEGIN
  SELECT target_name, target_type, target_guid
  INTO   tname, ttype, traw
  FROM   mgmt_targets
  WHERE  target_type = 'oracle_emrep';
  mgmt_admin_data.add_emrep_collections(tname, ttype, traw);
END;
/
* Discover/relocate the database and database listener targets
Delete the old repository database and listener targets, then rediscover the target database and listener in EM.

Similar Messages

  • Upgrade of Database with Oracle Change Data Capture

    Hello,
    We are upgrading and moving our database to a different server.
    The move is from 10G R1 database on Solaris to 11G R2 on Linux.
    Our database uses Oracle Change Data Capture.
    What is the best way to perform this migration? Unlike in the approach below, ideally, it would be without any manual steps to drop and recreate CDC subscriptions, change tables, etc.
    Thanks.
    Considerations for Exporting and Importing Change Data Capture Objects
    http://docs.oracle.com/cd/B13789_01/server.101/b10736/cdc.htm#i1027532
    Starting in Oracle Database 10g, Oracle Data Pump is the supported export and import utility for Change Data Capture. Change Data Capture change sources, change sets, change tables, and subscriptions are exported and imported by the Oracle Data Pump expdp and impdp commands with the following restrictions:
    After a Data Pump full database import operation completes for a database containing AutoLog Change Data Capture objects, the following steps must be performed to restore these objects:
    1. The publisher must manually drop the change tables with the SQL DROP TABLE command. This is needed because the tables are imported without the accompanying Change Data Capture metadata.
    2. The publisher must re-create the AutoLog change sources, change sets, and change tables using the appropriate DBMS_CDC_PUBLISH procedures.
    3. Subscribers must re-create their subscriptions to the AutoLog change sets.

    Hello,
    I opened an SR with Oracle Support; they are suggesting a full database export/import.
    "Change Data Capture change sources, change sets, change tables, and subscriptions are exported and imported by the Oracle Data Pump expdp and impdp commands with the following restrictions. Change Data Capture objects are exported and imported only as part of full database export and import operations (those in which the expdp and impdp commands specify the FULL=y parameter). Schema-level import and export operations include some underlying objects (for example, the table underlying a change table), but not the Change Data Capture metadata needed for change data capture to occur."
    CDC has different implementation methods:
    You may use the query below to determine which:
    select SOURCE_NAME, SOURCE_DESCRIPTION, CREATED, SOURCE_TYPE, SOURCE_DATABASE, SOURCE_ENABLED from change_sources;
    – Synchronous CDC: with this implementation method you capture changes synchronously on the source database into change tables. This method uses internal database triggers to enable CDC. Capturing the change is part of the original transaction that introduces the change, thus impacting the performance of the transaction.
    – Asynchronous AutoLog CDC: this implementation method requires a staging database separate from the source database. Asynchronous AutoLog CDC uses the database's redo transport services to transport redo log information from the source database to the staging database. Changes are captured at the staging database. The impact to the source system is minimal, but there is some latency between the original transaction and the change being captured.
    As suggested in the document-
    Change Data Capture objects are exported and imported only as part of full database export and import operations (those in which the expdp and impdp commands specify the FULL=y parameter). Schema-level import and export operations include some underlying objects (for example, the table underlying a change table), but not the Change Data Capture metadata needed for change data capture to occur.
    – AutoLog change sources, change sets, and change tables are not supported.
    Starting in Oracle Database 10g, Oracle Data Pump is the supported export and import utility for Change Data Capture.
    Re-Creating AutoLog Change Data Capture Objects After an Import Operation
    http://docs.oracle.com/cd/B19306_01/server.102/b14223/cdc.htm#i1027532
    After a Data Pump full database import operation completes for a database containing AutoLog Change Data Capture objects, the following steps must be performed to restore these objects:
    a. The publisher must manually drop the database objects underlying AutoLog Change Data Capture objects.
    b. The publisher must re-create the AutoLog change sources, change sets, and change tables using the appropriate DBMS_CDC_PUBLISH procedures.
    c. Subscribers must re-create their subscriptions to the AutoLog change sets.

  • Apply Patches on Oracle Database with Logical Standby Database

    Here I am:
    I have a primary database with a logical standby database, both running Oracle 11g. I have two client applications: one is the production site, pointing to the primary database; the other is a backup site, pointing to the logical standby. Things are only written into the primary database every midnight, and the client applications can only query the database, not insert, update, or delete. Now I want to apply the latest patch on both of my databases. I am also the DNS administrator, so I can make the name server point to the backup site instead of the production one. I want to first apply the patch on the logical standby, and then on the primary.
    I found some references explaining how to apply patches using the "Rolling Upgrade Method"; however, I want to avoid any "switchover" mentioned in the reference because I can make use of the name server. Can I just apply patches the following way?
    1) Stop SQL Apply
    2) Apply patches on the logical standby database
    3) Point the name server to the backup site
    4) Apply patches on the primary database
    5) Start SQL Apply
    6) Point the name server back to the production site
    Thanks in advance.

    Please follow the steps in MOS Doc 437276.1 (Upgrading Oracle Database with a Logical Standby Database In Place).
    HTH
    Srini

  • How to upgrade standby database

    All,
    How do we upgrade the standby DB if we are going to upgrade the primary DB from 9.2.0 to 10.2.0.4?
    What are the steps to upgrade the standby?

    Upgrading Oracle Database with a Physical Standby Database In Place
    => http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/upgrades.htm#sthref2051
    Upgrading Oracle Database with a Logical Standby Database In Place
    => http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/upgrades.htm#sthref2056
    Nicolas.

  • How to migrate workflow user to upgrade Oracle database

    Hello all,
    We are running Oracle 8.1.7 on a Sun Solaris machine with Workflow Builder 2.5.0.16.4. Now we want to upgrade the database to 9i. Can somebody please guide me on how we can migrate the workflow user 'owf_mgr' data from 8.1.7 to 9i? I mean I want to have all the runtime data available to me on 9i as well.
    Thanks in advance.
    Zeeshan Ahmad

    The first thing you need to be aware of is that Oracle Workflow 2.5 is not certified on 9i. You have 2 options:
    1) Upgrade to Oracle Workflow 2.6 on 8.1.7 (Workflow 2.6 is a separate CD in the 8.1.7 CD pack), and then upgrade the database to 9i.
    2) Upgrade the database to 9i, and then upgrade to Oracle Workflow 2.6.1, which is included on the 9i Database CD.
    When you upgrade the database, all the data in the owf_mgr schema should be upgraded as well. As always, you would perform a couple of test database/workflow upgrades before upgrading on your production box.

  • Upgrade ERP database 11g and ATG7 with SSO integration

    Please let us know how to perform an upgrade of ERP database 11g and ATG7 with SSO integration.
    Regards .

    We have completed the upgrade of the ERP database from 9.2.0.6 to 11.2.0.1 and also applied ATG 7 on a test instance.
    Users have finished testing; there are no issues after the upgrade, and the application works as normal.
    On the test instance we did not implement Single Sign-On, but on production we have Single Sign-On.
    Now we plan to upgrade the production instance, but we are afraid that we will hit issues on production related to SSO, because we do not have a chance to test it.
    My question is:
    Are there any special steps we need to do if we have implemented SSO, after upgrading to DB 11g and ATG 7?

  • Upgrade large farm with Service Pack detaching all content databases?

    Hi,
    Whenever we upgrade one of our farms (with a Cumulative Update, Language Pack, or Service Pack), the corresponding PSConfig run usually ends in error due to the huge time taken to upgrade all the content databases at the same time.
    We have heard about the possibility of detaching all content databases prior to the update (just CONTENT databases), then proceeding with the installation and PSConfig, and finally doing a gradual upgrade by attaching the content databases one by one.
    After searching online I have not found relevant or precise official documentation detailing whether this procedure is recommended or viable, and I have an additional question:
    - Can all the content databases be disconnected (including the ones containing my profiles, top-level sites...) during the upgrade?
    Thanks in advance

    In my understanding, the patching involves (at least) two steps:
    Patching binaries on all servers of the farm
    Upgrading the database schema of Service Applications DBs, Configuration & Admin DBs, and content databases.
    If during the upgrade the database is not attached to the farm, SharePoint does not know about it, and does nothing with it, just as if it were physically detached from the SQL Server instance. But when, after patching, you reattach the database, SharePoint detects the difference between the schema version and the patched farm version, and upgrades the database accordingly. As far as I understand, this is done automatically by Mount-SPContentDatabase:
    "The Mount-SPContentDatabase cmdlet attaches an existing content database to the farm. If the database being mounted requires an upgrade, this cmdlet will cause the database to be upgraded. The default behavior of this cmdlet causes an upgrade of the schema of the database and initiates upgraded builds for all site collections within the specified content database if required. To prevent initiation of upgraded builds of site collections, use the NoB2BSiteUpgrade parameter. This cmdlet does not trigger version-to-version upgrade of any site collections."
    If you decide not to upgrade, you can later run Upgrade-SPContentDatabase.
    The point is that if you keep all the content databases attached, PSConfig needs to do a massive upgrade of all the content databases before finishing, thus considering the farm OK. With the detach/attach approach, the first PSConfig run ends quite soon, so your farm is up and running sooner (although, to be sure, without the data being available until attached and upgraded; but you can gradually start giving service instead of waiting for the whole process).
    So far, so good. But what I was trying to ask in my question is whether there is a detailed procedure to follow for this approach, or whether there is any particular constraint regarding it (like "databases containing top-level site collections should not be detached", or "pay attention to the My Site host web application", or "that patching method is not supported").
    Thanks.

  • Using upgrade advisor on a database with compatibility level 80 on SQL Server 2008

    Hi All,
    My DB compatibility level is 80, on SQL Server 2008; however, I want to upgrade the database server to SQL Server 2014.
    Now the SQL 2014 Upgrade Advisor is not detecting the DB at compatibility level 80, and hence I cannot proceed with the upgrade (as I do not want to roll back).
    So what exactly can I do in this situation? Any help would be appreciated.

    What Olaf said. Or, if possible, change the compatibility level of the database to 100 and try. I guess since you want to upgrade there won't be any issue in changing it; if anything happens you can safely change it back to 80. A hedged example follows.
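    For illustration (the database name below is hypothetical):
    ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 100;
    -- and, to change it back if needed:
    ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 80;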

  • How to deal with  time zone while upgrading the database?

    Hi,
    How to deal with time zone while upgrading the database?
    Thanks

    Hello,
    I answered the wrong post.
    Best regards,
    Jean-Valentin

  • Configuring New Oracle Database with HFM Shared Services

    Hi,
    We have upgraded the Oracle database to the new version, 11g. We are using HFM version 11.1.1.3. When I tried to configure the new Oracle database with the existing HFM Shared Services, the EPM Configurator hangs. Could you please clarify the questions below before we start configuring the database?
    1. Do we need to edit any properties or configuration files before starting the EPM Configurator?
    2. If we need to edit the reg.properties file, what does the password denote in reg.properties (below is the structure of the file)? Do I need to give the Oracle database schema password?
    jdbc.url=jdbc\:hyperion\:oracle\://xxxxx;
    password=?
    jdbc.driver=hyperion.jdbc.oracle.OracleDriver
    local.value=####
    username=schemaname
    Thanks,
    Aravindh K

    Is it a new database server you are trying to configure? If the existing one was just upgraded, then you shouldn't need to do anything.
    If it is a new database server then have a read of the following doc in Oracle Support - "How To Change Shared Services Database Repository in EPM 11 [ID 976279.1]"
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Upgrade ebs Database 10G to 11G  (Os upgrade from 32 to 64)

    I have to perform the following upgrade:
    ebs Database 10G to 11G (OS upgrade from 32-bit to 64-bit)
    Any information is welcome.
    Thank you in advance.

    Please also see:
    11gR2 11.2.0.3 Database Certified with E-Business Suite
    https://blogs.oracle.com/stevenChan/entry/11gr2_11_2_0_3
    Thanks,
    Hussein

  • Upgrading Oracle Database from 10.2.0.3 to 10.2.0.4 on Windows XP

    Hi,
    My goal is to install Oracle DB 10.2.0.4, but I didn't find that version of the DB, so I installed 10.2.0.3 and now want to upgrade it to 10.2.0.4. Please redirect me to the proper documentation or patch that will help me.
    Is there a 10.2.0.4 setup available directly, instead of upgrading from a previous version?

    Hi,
    Case 1) On top of Oracle Database 10.2.0.3, directly install and apply the patch set for Oracle Database 10.2.0.4, providing the database ORACLE_HOME and SYS password parameters at patch installation time.
    Case 2) If you upgraded only the database software from 10.2.0.3 to 10.2.0.4, you can upgrade the database itself using the dbua utility.
    Thanks,
    Ajay Babu Pentela

  • Is it possible to create a Clone database with the same name of source db ?

    Is it possible to create a clone database with the same name as the source DB using RMAN?
    The DB version is 11.2.0.2.
    Is it possible to clone an 11.2.0.2 database to an 11.2.0.3 home location directly on a new server? If it starts in upgrade mode, that is OK.

    user11919409 wrote:
    Is it possible to create a clone database with the same name as the source DB using RMAN?
    yes
    The DB version is 11.2.0.2.
    Is it possible to clone an 11.2.0.2 database to an 11.2.0.3 home location directly on a new server? If it starts in upgrade mode, that is OK.
    yes
    Handle:     user11919409
    Status Level:     Newbie (10)
    Registered:     Dec 7, 2009
    Total Posts:     102
    Total Questions:     28 (22 unresolved)
    Why do you waste time here when you rarely get any answers to your questions?

  • Upgrade 10 databases at a time without OEM

    Hi,
    I have ten databases on 10.2.0.2 Standard Edition on the Linux platform. I want to upgrade all of them to 10.2.0.4 without using OEM.
    Is there any simple way to upgrade all of them to 10.2.0.4 instead of logging in to each and every server and upgrading?
    Thanks,
    Mahi

    You don't need OEM, but you do have to 1) patch the software and then 2) upgrade each database. Simply follow the instructions in the README file, which comes with the patch set. The whole procedure requires downtime for all involved databases.
    Werner

  • ORA-03113: Error while upgrading the Database from 11.1.0.6 to 11.1.0.7

    Hi,
    I am trying to upgrade the database from 11.1.0.6 to 11.1.0.7 on the OEL operating system.
    After applying patch 6890831, when trying to start the database using the "startup upgrade" command, I get the error below.
    ORA-03113: end-of-file on communication channel
    Process ID: 20826
    Session ID: 170 Serial number: 3
    I get the same error when trying to create a new database using DBCA.
    Please suggest what the probable cause might be.
    Thanks
    Amith

    The entries below were found in the alert_orcl.log file:
    MMNL started with pid=15, OS id=20571
    starting up 1 shared server(s) ...
    ORACLE_BASE from environment = /u01/app/oracle
    Thu Dec 03 20:11:11 2009
    ALTER DATABASE MOUNT
    Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_mman_20557.trc:
    ORA-27103: internal error
    Linux-x86_64 Error: 11: Resource temporarily unavailable
    Additional information: -1
    Additional information: 1
    MMAN (ospid: 20557): terminating the instance due to error 27103
    Instance terminated by MMAN, pid = 20557
    The entries below were found in the generated trace file:
    error 27103 detected in background process
    ORA-27103: internal error
    Linux-x86_64 Error: 11: Resource temporarily unavailable
    Additional information: -1
    Additional information: 1
    *** 2009-12-03 20:11:14.727
    MMAN (ospid: 20557): terminating the instance due to error 27103
