Datapump network import from 10g to 11g database is not progressing
We have an 11.2.0.3 shell database on a RedHat Linux 5 64-bit platform, and we are pulling data from a 10g database (10.2.0.4) on an HP-UX platform using Data Pump over a NETWORK_LINK. However, the import is not progressing or completing. We have let it run for almost 13 hours and it did not import a single row into the 11g database. We even tried to import just one table with 0 rows, but it still does not complete; the log file just keeps looping on the following:
Worker 1 Status:
Process Name: DW00
State: EXECUTING
Estimate in progress using BLOCKS method...
Job: IMP_NONLONG4_5
Operation: IMPORT
Mode: TABLE
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Worker 1 Status:
Process Name: DW00
State: EXECUTING
We also see this:
EVENT SECONDS_IN_WAIT STATUS STATE ACTION
SQL*Net message from dblink 4408 ACTIVE WAITING IMP_NONLONG4_5
Below is our par file:
NETWORK_LINK=DATABASE_10G
DIRECTORY=MOS_UPGRADE_DUMPLOC1
LOGFILE=imp_nonlong_grp-4_5.log
STATUS=300
CONTENT=DATA_ONLY
JOB_NAME=IMP_NONLONG4_5
TABLES=SYSADM.TEST_TBL
Any ideas? Thanks.
Thanks a lot to all who have looked at this and responded; I appreciate you giving time for suggestions. As a recap, Data Pump export and import via dump file works on both the 10g and 11g databases; our only issue is pulling data from the 10g database by running a Data Pump network import (over a database link) from the 11g database.
SOLUTION: The culprit was the parameter optimizer_features_enable='8.1.7', which was set on the 10g database. We took it out of the 10g database's parameter file, and the network import worked flawlessly.
HOW WE FIGURED IT OUT: We turned on a trace for the Data Pump sessions, and the trace file showed something optimizer-related (see below). So we removed, one by one, every parameter related to the optimizer, and found that it was the optimizer_features_enable='8.1.7' parameter that, when removed, makes the network import succeed.
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS opt_param('parallel_execution_enabled',
'false') NO_PARALLEL(SAMPLESUB) NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE
*/ NVL(SUM(C1),0), NVL(SUM(C2),0), NVL(SUM(C3),0)
FROM
(SELECT /*+ NO_PARALLEL("SYS_IMPORT_TABLE_02") INDEX("SYS_IMPORT_TABLE_02"
SYS_C00638241) NO_PARALLEL_INDEX("SYS_IMPORT_TABLE_02") */ 1 AS C1, 1 AS C2,
1 AS C3 FROM "DBSCHEDUSER"."SYS_IMPORT_TABLE_02" "SYS_IMPORT_TABLE_02"
WHERE "SYS_IMPORT_TABLE_02"."PROCESS_ORDER"=:B1 AND ROWNUM <= 2500)
SAMPLESUB
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0)
FROM
(SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("SYS_IMPORT_TABLE_02")
FULL("SYS_IMPORT_TABLE_02") NO_PARALLEL_INDEX("SYS_IMPORT_TABLE_02") */ 1
AS C1, CASE WHEN "SYS_IMPORT_TABLE_02"."PROCESS_ORDER"=:B1 THEN 1 ELSE 0
END AS C2 FROM "DBSCHEDUSER"."SYS_IMPORT_TABLE_02" "SYS_IMPORT_TABLE_02")
SAMPLESUB
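For reference, a hedged sketch of removing the offending parameter on the 10g source (this assumes the instance uses an spfile; if a plain pfile is in use, delete the line from the init.ora instead):

```sql
-- Sketch: remove the non-default optimizer setting from the 10g source.
-- SID='*' applies to all instances; a restart is required to take effect.
ALTER SYSTEM RESET optimizer_features_enable SCOPE=SPFILE SID='*';
SHUTDOWN IMMEDIATE
STARTUP
```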
Similar Messages
-
Import Dump Error from 10g to 11G Database
Hi
We are importing a dump from a 10g DB into an 11g DB. We received the dump file on tape and are trying to import it with the imp command:
imp system/sys@test11g file=/dev/st0 full=y fromuser=x touser=x
However, it gives the error "IMP-00037: Character set marker unknown". Please advise on this, and let us know what to watch for when importing a 10g dump into an 11g DB.
Thanks in advance.
Hi All,
We have checked with the client and have the export log; the dump was exported with the expdp command. The client had sent the dump on a tape drive. So we tried with the impdp command:
impdp system/sys@test11g SCHEMAS=test DIRECTORY=data_pump_dir LOGFILE=schemas.log
I have following questions ->
1) With impdp, is it always required to have an Oracle directory object pointing to the dump file?
2) Can I create an Oracle directory with a tape device path, i.e. /dev/st01?
For now, I have created an Oracle directory with the tape device path. However, the above command gives the errors below:
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
Please let me know how to import a dump with the impdp command when the exported file is on a tape drive.
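For what it's worth, a common workaround (sketch only; /u01/dumps and the dump file name are hypothetical) is to stage the dump on a regular filesystem first, since a directory object must point at a filesystem directory, not a raw tape device:

```sql
-- Sketch: impdp cannot read directly from a tape device; first restore the
-- dump file from tape to disk with your backup tool (e.g. to /u01/dumps).
CREATE OR REPLACE DIRECTORY dump_dir AS '/u01/dumps';
GRANT READ, WRITE ON DIRECTORY dump_dir TO system;
-- Then, from the shell:
-- impdp system/sys@test11g SCHEMAS=test DIRECTORY=dump_dir DUMPFILE=test.dmp LOGFILE=schemas.log
```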
Thanks in advance -
Hi,
We have a DB in 10g R2. We have an 11g DB server on another physical machine. Can we import an export dump file from the 10g R2 DB into 11g?
Thanks.
Sure, this is no problem. The fastest way would be to import the data with Data Pump through a network_link.
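As a sketch of that approach (the DB link name and schema below are assumptions, mirroring the style of a Data Pump par file), a network-mode import needs no dump file at all:

```
# Hypothetical par file, run on the 11g side with: impdp system parfile=imp_network.par
NETWORK_LINK=SOURCE_10G_LINK
DIRECTORY=DATA_PUMP_DIR
LOGFILE=imp_network.log
SCHEMAS=SCOTT
```

DIRECTORY is still needed for the log file; only the data itself travels over the link.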
-
Datapump Network Import from Oracle 10.2.0 to 10.1.0
I have a self-developed table bulk-copy system based on the dbms_datapump API. It is pretty stable and has worked very well for a long time on many servers with Oracle 10.1.0. But some days ago we installed a new server with Oracle 10.2.0 and noticed that our system can't copy tables from the new server to the other servers. After some time debugging we got a description of our situation... the dbms_datapump.open() call fails with an "ORA-39006: internal error" exception. dbms_datapump.get_status() returns a more detailed description: "ORA-39022: Database version 10.2.0.1.0 is not supported."... We're really disappointed... what can we do to use dbms_datapump.open on 10.1.0 to copy a table from 10.2.0?
P.S. We have tried to use different values (COMPATIBLE, LATEST, 10.0.0, 10.1.0) for version parameter of open()... but without luck...
Please, help, because we can't upgrade such many old servers to 10.2.0 and we really need to use this new server...
Thanks in advance,
Alexey Kovyrin
Hello,
Your problem is about the Character Set.
WE8ISO8859P1 stores character in *1* byte but AL32UTF8 (Unicode) uses up to *4* bytes.
So, when you Import from WE8ISO8859P1 to AL32UTF8 you may have the offending error ORA-12899.
To solve it, you may change the length semantics of your column's datatype: choose the option CHAR
instead of BYTE (the default), for instance:
VARCHAR2(100) --> VARCHAR2(100 CHAR)
Then, you could import the data.
You may also change the parameter NLS_LENGTH_SEMANTICS to CHAR:
alter system set nls_length_semantics=char scope=spfile;
Then, restart your database.
NB: Although the parameter nls_length_semantics is dynamic you have to restart the database so
that it's taken into account.
Then, you create the Tables (empty) and afterwards you run your Import.
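A minimal sketch of that pre-creation step (the table and column names here are hypothetical):

```sql
-- Pre-create the target table with character-length semantics, then run the
-- import with CONTENT=DATA_ONLY (or TABLE_EXISTS_ACTION=APPEND).
CREATE TABLE app_owner.customers (
  customer_name VARCHAR2(100 CHAR)  -- 100 characters, not 100 bytes
);
```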
Hope this help.
Best regards,
Jean-Valentin
Edited by: Lubiez Jean-Valentin on Mar 21, 2010 4:26 PM -
Database upgrade from 10g to 11g in Oracle EBS 11i
Hi,
Urgent help is needed. I am upgrading the Oracle database from 10g to 11g. My EBS version is 11i. After upgrading the database
from 10g to 11g, as a sanity check, what parameters do we need to take care of on the EBS side? My point is: will there be any application-level changes if the database
is upgraded? Will Workflow be affected, and what else should we sanity-check to confirm it works properly? Will any concurrent-program-level changes be affected?
Thanks and Regards
Edited by: 918308 on May 10, 2013 7:52 PM
918308 wrote:
Hi,
Thanks for the info provided, but I don't have a Metalink ID. Kindly provide me a link for the docs in question so that I can follow along. Is there any application-level testing to be done? Concurrent programs, Schedule Manager, Workflow, etc.?
We cannot post the contents of MOS docs, since this violates the Oracle support agreement policy.
You need to do a regular sanity check (i.e. submit concurrent programs, check Workflow functionality, etc.) -- E-Business Suite Diagnostics 11i Test Catalog [ID 179661.1]
Thanks,
Hussein -
Facing Parse Errors after upgrading database from 10g to 11g
Hi,
We are facing parse errors in SQLs after upgrading the database from 10g to 11g.
Kindly look into below parse errors.
********************************** Parse Error *****************************************************
Tue Aug 13 14:13:08 2013
kksSetBindType 16173533-2: parse err=1446 hd=3c73061fb8 flg=100476 cisid=173 sid=173 ciuid=173 uid=173
PARSE ERROR: ospid=15598, error=1446 for statement:
SELECT ROWID,ORGANIZATION_CODE,PADDED_CONCATENATED_SEGMENTS,PRIMARY_UOM_CODE,REVISION,SUBINVENTORY_CODE,TOTAL_QOH,NET,RSV,ATP,ORGANIZATION_NAME,ITEM_DESCRIPTION,INVENTORY_ITEM_ID,ORGANIZATION_ID,LOCATOR_ID,LOCATOR_TYPE,ITEM_LOCATOR_CONTROL,ITEM_LOT_CONTROL,ITEM_SERIAL_CONTROL FROM MTL_ONHAND_LOCATOR_V WHERE (INVENTORY_ITEM_ID=:1) and (ORGANIZATION_ID=:2) order by ORGANIZATION_CODE,SUBINVENTORY_CODE,REVISION, organization_code, padded_concatenated_segments
Tue Aug 13 14:13:10 2013
kksfbc 16173533: parse err=942 hd=3c387c4028 flg=20 cisid=3266 sid=3266 ciuid=3266 uid=3266
PARSE ERROR: ospid=29813, error=942 for statement:
Select feature from toad.toad_restrictions where user_name=USER or user_name in ( select ROLE from sys.session_roles)
kksfbc 16173533: parse err=942 hd=3c97d83648 flg=20 cisid=3266 sid=3266 ciuid=3266 uid=3266
PARSE ERROR: ospid=29813, error=942 for statement:
SELECT password
FROM SYS.USER$
WHERE 0=1
kksfbc 16173533: parse err=6550 hd=35185e4278 flg=20 cisid=3266 sid=3266 ciuid=3266 uid=3266
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
object line object
handle number name
319e277050 30 anonymous block
319e277050 57 anonymous block
PARSE ERROR: ospid=29813, error=6550 for statement:
BEGIN sys.dbms_profiler."146775420110782746251362632012"; END;
kksfbc 16173533: parse err=942 hd=3c142d8600 flg=20 cisid=3266 sid=3266 ciuid=3266 uid=3266
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
object line object
handle number name
319e277050 67 anonymous block
PARSE ERROR: ospid=29813, error=942 for statement:
SELECT 1 FROM plsql_profiler_data WHERE 0 = 1
Please help.
Regards
Suresh
Hi Suresh,
Apologies for misunderstanding..
Tue Aug 13 14:13:08 2013
kksSetBindType 16173533-2: parse err=1446 hd=3c73061fb8 flg=100476 cisid=173 sid=173 ciuid=173 uid=173
PARSE ERROR: ospid=15598, error=1446 for statement:
SELECT ROWID,ORGANIZATION_CODE,PADDED_CONCATENATED_SEGMENTS,PRIMARY_UOM_CODE,REVISION,SUBINVENTORY_CODE,TOTAL_QOH,NET,RSV,ATP,ORGANIZATION_NAME,ITEM_DESCRIPTION,INVENTORY_ITEM_ID,ORGANIZATION_ID,LOCATOR_ID,LOCATOR_TYPE,ITEM_LOCATOR_CONTROL,ITEM_LOT_CONTROL,ITEM_SERIAL_CONTROL FROM MTL_ONHAND_LOCATOR_V WHERE (INVENTORY_ITEM_ID=:1) and (ORGANIZATION_ID=:2) order by ORGANIZATION_CODE,SUBINVENTORY_CODE,REVISION, organization_code, padded_concatenated_segments
Assuming you see the above error message in the alert log file, which was your original post, follow the below steps:
1. Get the 'ospid' value from the error (here, 15598).
2. Issue the command below (note: filter on ORACLE_PROCESS_ID, not REQUEST_ID -- the ospid is a process id, not a request id):
SQL> select request_id, oracle_process_id
  2  from fnd_concurrent_requests
  3  where oracle_process_id = '15598';
3. After obtaining the request_id, query it from the front end using the SYSADMIN responsibility.
Hopefully this should get you the respective concurrent report/program.
Thanks &
Best Regards, -
How to import a *.dmp file (exported from 10g) to 8i database?
Hi everybody!
Could anybody tell me how to import a *.dmp file (exported from 10g) to 8i database?
I have tried but it seemed to be error "wrong version".
Thanks a lot!
From 10.1.0 to 8.1.7 => use the 8.1.7 EXPORT utility to export the data from the 10.1.0 database and the 8.1.7 IMPORT utility to
import the data into the 8.1.7 database.
Metalink note 132904.1 Subject: Compatibility Matrix for Export & Import Between Different Oracle
Nicolas. -
Application has problems when migrated from 10g to 11g
Hi there,
I am hoping someone can shed some light on a problem I have in moving an application from Oracle 10g to Oracle 11g. The app works fine on 10g (uses the Apache web server with the PL/SQL module, and APEX 3.1.2.00.2), but on 11g, using the embedded web server and APEX 3.2.0.00.27, it doesn't. Most of the app works fine, but there are a couple of pages that provide the ability to add child rows to a parent/child relationship, where the parameter-passing mechanism from one page to the next appears to suffer from some sort of corruption. I have traced this using the "Session" and "Debug" buttons on the developer interface, which show that the values of the parameters get changed inexplicably when branching from one page to the next, even when the page is actually branching to itself.
I am using the "Set these items, "With these values" fields in the branch, and have verified that the correct values are being associated with the correct items in the application builder. But while this does work correctly under 10g, with 11g, the wrong values end up being passed. Just prior to the branch the set of parameters have the correct values, but immediately after the branch, one of the values is NULL, another has the value of a different item, and a third has a totally random value - I have no idea were it comes from!
I migrated the application from 10g to 11g using the APEX application developer's Export/Import options. There have been no other changes. Should this have worked? If so, any ideas what might have gone wrong?
Thanks,
Sid.
Well, I managed to solve this, but not in a way that makes much sense.
In desperation (I had tried almost everything else!) I changed the value of "Cached" in the page settings from "No" to "Yes" and ran the app, but the page didn't render correctly (in either Firefox or IE7); in fact, all that displayed was the developer's toolbar at the bottom of the page. I changed the value of "Cached" back again, ran the app, and hey presto, everything worked fine! I actually did this a second time with a fresh import of the app from 10g, just to be sure I wasn't seeing things. I wasn't!
There was just one further issue - everything worked fine apart from this section of code in a page process:-
ELSIF (:p9_filter_type = 5) THEN
IF (:p9_x_gene_list IS NOT NULL) THEN
:p9_filter := :p9_x_gene_list;
END IF;
:p9_entity_types := 'GENE';
END IF;
In 11g the value of :p9_entity_types was not being set to 'GENE' when :p9_filter_type was 5. This was (and still is) working in 10g. I changed the code as follows:-
ELSIF (:p9_filter_type = 5) THEN
:p9_entity_types := 'GENE';
IF (:p9_x_gene_list IS NOT NULL) THEN
:p9_filter := :p9_x_gene_list;
END IF;
END IF;
... and now it works fine in 11g as well.
Only wish I knew why! -
Check list for upgrading from 10g to 11g when there is a schema replication
Hi
We are looking to upgrade one of our production database from 10g to 11g
Currently this database has one schema that is replicated to 2 other databases using Oracle streams.
The replication is excluing DDLs and excluding several other application tables.
What should I do pre and post the upgrade ?
should we remove the stream configuration all together and rebuild it after the upgrade ?
I was hoping that we can first upgrade the two target databases to 11g and then the source database, without impacting our streams configuration at all
Is that possible ?
Is there any documentation on the subject ?
thanks in advance
Orna
Please post the OS versions of the source and target servers, along with the exact (4-digit) versions of "10g" and "11g". I do not have any experience with Streams, but the 11gR2 upgrade doc suggests that you upgrade the downstream target databases first, before upgrading the source - http://download.oracle.com/docs/cd/E11882_01/server.112/e10819/upgrade.htm#CIAFJJFC
HTH
Srini -
Unable to Sign in analytics After Upgraded obiee from 10g to 11g
Hi all,
I have a problem upgrading the catalog from 10g to 11g.
The error is:" Unable to Sign In . An invalid User Name or Password was entered. "
I upgraded following the steps like this:
1.Login EM and check BI server is running successfully.
2.Started the UA.bat.
Select the operation:Upgrade Oracle BI RPD and Web Catalog.
Step by Step,and the upgrade completed successfully.
3.Open the RPD online using the Admintool succefully.
4. But when I log in to BI and view the dashboard with the 10g user (Administrator), the page shows the error "*Unable to Sign In. An invalid User Name or Password was entered.*"
5.Then I Try to Regenerating User GUIDs
1. Update the FMW_UPDATE_ROLE_AND_USER_REF_GUIDS parameter in NQSConfig.INI:
a. Open NQSConfig.INI for editing at:
b. ORACLE_INSTANCE/config/OracleBIServerComponent/coreapplication_obisn
c. Locate the FMW_UPDATE_ROLE_AND_USER_REF_GUIDS parameter and set it to YES, as follows:
d. FMW_UPDATE_ROLE_AND_USER_REF_GUIDS = YES;
e. Save and close the file.
2. Update the Catalog element in instanceconfig.xml:
a. Open instanceconfig.xml for editing at:
b. ORACLE_INSTANCE/config/OracleBIPresentationServicesComponent/
c. coreapplication_obipsn
d. Locate the Catalog element and update it as follows:
e. <Catalog>
f. <UpgradeAndExit>false</UpgradeAndExit>
g. <UpdateAccountGUIDs>UpdateAndExit</UpdateAccountGUIDs>
h. </Catalog>
i. Save and close the file.
3. Restart the Oracle Business Intelligence system components using opmnctl:
4. cd ORACLE_HOME/admin/instancen/bin
5. ./opmnctl stopall
6. ./opmnctl startall
7. Set the FMW_UPDATE_ROLE_AND_USER_REF_GUIDS parameter in NQSConfig.INI back to NO.
Important: You must perform this step to ensure that your system is secure.
8. Update the Catalog element in instanceconfig.xml to remove the UpdateAccount GUIDs entry.
9. Restart the Oracle Business Intelligence system components again using opmnctl:
10. cd ORACLE_HOME/admin/instancen/bin
11. ./opmnctl stopall
12. ./opmnctl startall
BUT THE ERROR STILL EXISTS!
So, waiting for your help, thanks!
Hi,
If you're using an Oracle DB, please check your DB settings and tnsnames.ora settings.
Also, please try the troubleshooting steps below:
1) please check latest error message and find the root cause,
presentation catalog log path:
obiee installed Drive:\Oracle\Middleware\instances\instance1\diagnostics\logs\OracleBIPresentationServicesComponent\coreapplication_obips1\sawlog0.txt file
also check it nqserver.log file
oracle bi server log path ref:
obiee installed Drive:\Oracle\Middleware\instances\instance1\diagnostics\logs\OracleBIServerComponent\coreapplication_obis1\nqserver.log
2) Can you try to log in to the RPD, EM, and Console using your WebLogic account, then try to log in with some other user?
If that's not working, try to delete those users from the catalog, and check that the OPMN services are in a running state.
e.x:
C:\Oracle\Middleware\instances\instance1\bin>opmnctl startall
opmnctl startall: starting opmn and all managed processes...
C:\Oracle\Middleware\instances\instance1\bin>opmnctl status
Processes in Instance: instance1
--------------------------------------------------------------+---------
ias-component | process-type | pid | status
--------------------------------------------------------------+---------
coreapplication_obiccs1 | OracleBIClusterCo~ | 4992 | Alive
coreapplication_obisch1 | OracleBIScheduler~ | 2420 | Alive
coreapplication_obijh1 | OracleBIJavaHostC~ | 1856 | Alive
coreapplication_obips1 | OracleBIPresentat~ | 5684 | Alive
coreapplication_obis1 | OracleBIServerCom~ | 5232 | Alive
3) Refresh the GUIDs and restart the WebLogic and OPMN services, then try again. If you are still getting the same issue, try to
check the DB (try to log in and confirm) and check the RPD --> physical layer connection pool settings, then try to
view physical data.
Also paste your latest error log messages (nqserver.log and sawlog01.log) here.
Thanks
Deva -
Migrating with RMAN from 10g to 11g
Hi gurus,
I am following the following procedure to migrate database from 10g to 11g using rman
Source side:
RMAN>connect target
RMAN>backup database;
RMAN>backup archivelog all;
RMAN>backup current controlfile;
SQL> create pfile from spfile;
Copied datafile, archivelog backup files and pfile and password files to target side. i.e. on 11g server side
Target side:
Set proper parameters for 11g
SQL>startup nomount;
RMAN>connect target
RMAN>set dbid=<source database id>
RMAN>catalog start with '<rman backup file location';
RMAN>restore controlfile;
RMAN>run
set newname for datafile 1 to '<target datafiles location with file name';
restore database;
switch datafile all;
finished
upto now it's success full when i am trying to recover
RMAN>recover database;
it is saying
rman 00571
rman 00569
rman 00571
rman 03002
ora - 19698
can you please suggest the solution for this.
Thanks a lot.
I'm not sure what you're doing is supported.
You are taking a 10g database and restoring and recovering it using 11g software.
I think you are allowed to do that with 10g software only.
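If the restore and recovery are redone under matching 10g binaries, the usual follow-up (sketched here; the exact steps are version-specific, so check the upgrade guide) is to open the database under the 11g home in UPGRADE mode and run the catalog upgrade script:

```sql
-- Sketch: after a clean restore/recover with 10g software,
-- switch ORACLE_HOME to the 11g installation, then:
STARTUP UPGRADE
@?/rdbms/admin/catupgrd.sql   -- catalog upgrade; can run for a long time
-- after restarting the database, recompile invalid objects:
@?/rdbms/admin/utlrp.sql
```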
Hi,
I am going to upgrade CRS from 10g to 11g. Can i upgrade CRS only from 10g to 11g?
Thanks,
user2017273 wrote:
Hi,
I am going to upgrade CRS from 10g to 11g. Can i upgrade CRS only from 10g to 11g?
Thanks,
You mean 11gR1, right?
If yes, you can... The Oracle Clusterware version must be >= the database software version.
Is it possible to perform network data encryption between Oracle 11g databases without the advance security option?
We are not licensed for the Oracle Advanced Security Option, and I have been tasked with using Oracle network data encryption to encrypt network traffic between Oracle instances that reside on remote servers. From what I have read, and per my prior understanding, this is not possible without ASO. Can someone confirm or disprove my research? Thanks.
Hi, Srini Chavali-Oracle
As for http://www.oracle.com/technetwork/database/options/advanced-security/advanced-security-ds-12c-1898873.pdf?ssSourceSiteId… ASO is mentioned as TDE and Redacting Sensitive Data to Display. Network encryption is excluded.
As for Network Encryption - Oracle FAQ (of course this is not Oracle official) "Since June 2013, Net Encryption is now licensed with Oracle Enterprise Edition and doesn't require Oracle Advanced Security Option." Could you clarify this? Thanks. -
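For reference, native network encryption is configured in sqlnet.ora on both ends of the connection; a minimal sketch (the algorithm choice here is just an example):

```
# server-side sqlnet.ora
SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)

# client-side sqlnet.ora
SQLNET.ENCRYPTION_CLIENT = REQUIRED
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)
```

Whether this requires an ASO license depends on the database version and license terms in effect, so verify with Oracle licensing before relying on it.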
Oracle Upgrade from 10g to 11g [BRANCHED BY MODERATOR]
Hi Deepak/Folks,
Another question I have: while doing the Oracle upgrade on an EP server, the patches were not installed properly, and I had to shut down the patch installation after it did not do anything for a while.
Now when I try to install the patches, it fails, telling me that the installed patches cannot be verified. I had written to SAP, and they told me to follow
SAP note 1862446 - Inventory
load failed... OPatch cannot load inventory for the given Oracle Home
and re-create the oracle inventory.
This has also not helped in anyway.
Is there a solution to this problem.
Following is the error that I am getting when I try to Install the patches
Getting pre-run patch inventory...
Getting pre-run patch inventory...failed.
Cannot get pre-run patch inventory.
Refer to log file
$ORACLE_HOME/cfgtoollogs/mopatch/mopatch-2014_08_06-14-52-51.log
when I open the log file specified here I get the following
more mopatch-2014_08_06-14-52-51.log
more /oracle/<SID>/11203/cfgtoollogs/mopatch/mopatch-2014_08_06-15-01-51.log
MOPatch - Install Multiple Oracle Patches in One Run - 2.1.15.
Copyright (c) 2007, 2013, Oracle and/or its affiliates. All rights reserved.
Version: 2.1.15
Revision: 5.1.2.26
Command-line: /oracle/<SID>/11203/MOPatch/mopatch.sh -v -s SAP11203P_1312-20009978.zip
Oracle Home: /oracle/<SID>/11203
RDBMS version: 11.2.0.3.0
OPatch version:11.2.0.3.3
Clean-up: supported
PSUs: supported
Log file: $ORACLE_HOME/cfgtoollogs/mopatch/mopatch-2014_08_06-15-01-51.log
Patch base: .
Patch source: SAP11203P_1312-20009978.zip
Link script: ./link.mts<SID>ua.sh
Readmes: <none>
Strpd. Readmes:<none>
make utility: /usr/ccs/bin/make
unzip utility: /oracle/<SID>/11203/bin/unzip
User name: ora<SID>
Working dir: /oracle/stage
System: HP-UX mts<SID>ua B.11.31 U ia64 2468369872 unlimited-user license
Disk free: 11734549 KBytes on /oracle/<SID>
Disk required: 886496 KBytes on /oracle/<SID>
Getting pre-run patch inventory...
executing: "/oracle/<SID>/11203/OPatch/opatch" lsinventory -retry 0 -xml "./mopatch-187-21696-tmpdir/preinv.xml"
========================================================
GENERIC OPATCH VERSION - FOR USE IN SAP ENVIRONMENT ONLY
========================================================
Oracle Interim Patch Installer version 11.2.0.3.3
Copyright (c) 2012, Oracle Corporation. All rights reserved.
Oracle Home : /oracle/<SID>/11203
Central Inventory : /oracle/<SID>/oraInventory
from : /oracle/<SID>/11203/oraInst.loc
OPatch version : 11.2.0.3.3
OUI version : 11.2.0.3.0
Log file location : /oracle/<SID>/11203/cfgtoollogs/opatch/opatch2014-08-06_15-01-56PM_1.log
List of Homes on this system:
Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
Oracle Home dir. path does not exist in Central Inventory
Oracle Home is a symbolic link
Oracle Home inventory is corrupted
LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo
OPatch failed with error code 73
Getting pre-run patch inventory...failed.
Cannot get pre-run patch inventory. Exiting.
I would appreciate if you folks can help me out on this
Thanks
APS
FOLLOW-UP QUESTION BRANCHED:
Oracle Upgrade from 10g to 11g [BRANCHED BY MODERATOR] -
Images after upgrade from 10g to 11g
We recently upgraded from 10g to 11g and all the images are broke in the dashboard.
Can anyone provide the steps needs to be done in order to fix the broken images?
Thanks.
Put the image files at:
Drive:\Oracle\Middleware\Oracle_BI1\bifoundation\web\app\res\s_blafp\images
Drive:\Oracle\Middleware\user_projects\domains\bifoundation_domain\servers\bi_server1\tmp\_WL_user\analytics_11.1.1\7dezjl\war\res\s_blafp\images
If required
Drive:\Oracle\Middleware\user_projects\domains\bifoundation_domain\servers\AdminServer\tmp\.appmergegen_1291264099332\analytics.ear\ukjjdc\res\s_blafp\images
Call them using fmap
Syntax: fmap:images/imagename.imageformat
for e.g: fmap:images/geograph.jpg
Pls mark correct/helpful if helps