Wrong results after upgrading 10g database to 11.2.0.2.6
Hi,
Does anyone know why the following queries return different results?
Non-working query:
sql1:
select col1 from tab1
where col1 = (select '123' from dual)
Working query:
sql2:
select col1 from tab1
where col1 = '123';
Both sql1 and sql2 return the same results in the 10g database, but not in 11g.
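For what it's worth, the two forms are logically equivalent in standard SQL, so any engine should return the same rows for both; a quick SQLite sketch (purely illustrative, not Oracle) of the expected equivalence:

```python
# Illustration only (SQLite, not Oracle): comparing a column against a
# scalar subquery vs. against the literal directly must give the same rows.
# A difference between them after an upgrade points at an optimizer bug.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab1 (col1 TEXT)")
conn.executemany("INSERT INTO tab1 VALUES (?)", [("123",), ("456",)])

# sql1: compare against a scalar subquery (SQLite has no dual table)
sql1 = "SELECT col1 FROM tab1 WHERE col1 = (SELECT '123')"
# sql2: compare against the literal directly
sql2 = "SELECT col1 FROM tab1 WHERE col1 = '123'"

rows1 = conn.execute(sql1).fetchall()
rows2 = conn.execute(sql2).fetchall()
assert rows1 == rows2 == [("123",)]
```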
Please post OS details along with sample outputs and explain plans for the sql1 statement from the two databases. These MOS Docs may also help:
Things to Consider Before Upgrading to 11.2.0.2 to Avoid Poor Performance or Wrong Results [ID 1320966.1]
Wrong Results on 11.2.0.2 with Function-Based Index and OR Expansion [ID 1264550.1]
Wrong Results/No Rows for Sql Involving Functions in 11.2.0.2. [ID 1380679.1]
HTH
Srini
Similar Messages
-
Problem with ApEx after upgrade of database to 10.2.0.2
I could have posted this in the general Database forum, since I am not sure whether the problem is directly related to the HTMLDB installation; perhaps it's only the place where the symptoms show. But here it is.
One of the customers I am currently developing HTMLDB applications for recently upgraded their Oracle database to version 10.2.0.2. The HTMLDB installation is 2.0 (not sure which version exactly; I can't check it at the moment as I'm at another customer right now). Everything seemed to work fine (just like it did before the upgrade), but about two weeks ago, the HTMLDB application that is already in production (and used quite a lot) suddenly started spawning 404 errors.
The problem is that this behaviour cannot always be reproduced. The errors appeared last Friday as well, after which the database was restarted, which stopped them. This week, it ran without any problems on Monday and on Tuesday, until around 4pm. Then, the search functionality of one page (a very basic query) and switching back and forth using the tabs would give 404 errors.
The customer has been in touch with Oracle Support about this (Severity 1 TAR) but so far they have not been able to come up with anything that could lead to a solution. I will, in short, describe what information we DO have at the moment, and I hope that maybe one of you has experienced the same (or similar) problem, or could help me find where to look for a solution. At the moment, I am wondering whether it is the HTMLDB installation that became corrupted (and may require a reinstall) But, since it works fine at times, there may be something else that is causing these problems.
Here we go (sorry for the long introduction ;))
The problem:
Sudden 404 errors on specific pages, in a working application, but also in the HTMLDB environment itself. Yesterday I tried to import a new version of the application - it would keep giving the error, and also simple tasks such as adding a new Tab would give me the error. In the development environment, which runs on a different database (previous version), everything works fine.
If the errors show up, they will keep on coming, until the database is restarted. At one point (last week) they seemed to stop by themselves around lunch time, but that only occurred at one time, as far as I know.
The error:
The error message that is shown in the Apache log file is:
[Fri Mar 17 15:44:10 2006] [error] [client 10.100.60.2] [ecid: 1142606650:10.100.60.26:4432:4836:63,0] mod_plsql: /pls/vta/wwv_flow.accept HTTP-404 ORA-06550: line 22, column 3:\nPLS-00306: wrong number or types of arguments in call to 'ACCEPT'\nORA-06550: line 22, column 3:\nPL/SQL: Statement ignored\n
Additional notes:
So far it is unclear what is causing these errors, but the customer has been in contact with Allround Automations (of PL/SQL Developer) who experienced something similar (a known bug afaik, but I have no documentation) with the reuse of a parsed representation of a cursor, which could cause access violations. This should however have been fixed in the patchset used to upgrade to 10.2.0.2.
Also, the number of invalid database objects seems to fluctuate a lot. At times there may be 6, or as many as 23 or 24; then, when queried again a short time later, only 4. No idea whether this is related to anything else described here.
In short:
To me, it seems like something is influencing the database, causing HTMLDB and its applications to crash. It does not seem to come from HTMLDB itself, because it works fine the rest of the time, and without changing anything in the DAD, the application or the procedures used, the errors start showing up. But, I might be wrong of course.
Any help would be greatly appreciated.
John,
Thanks for your help with this. So far everyone's pretty stumped.
It seems to be a problem that grows in the database, like something is leaking. We've been able to track it down to seeing that it starts happening on one particular call on a page we use for lookups, though if it is left alone it seems to grow until it encompasses anything on the HTMLDB server. We've got a SR open with Oracle, just waiting on some movement.
The keepalives may be unrelated, but it's worth a look while we're waiting for a break. Thanks for the suggestion.
I am still hoping that either Oracle can find the earlier SR and use it as a starting point, or someone involved from earlier checks back in.
Thanks again for your help, I'll let you know what happens.
Thanks,
Justin -
Deploy to experimental instance deploys to wrong location after upgrade to Vs2013
Hi,
I have an extension that is deployed against Vs2005/Vs2008/Vs2010/Vs2012 and I am planning to extend this to deploy against Vs2013.
After upgrading my solution to Vs2013 the Vs2012 VSIX is no longer deployed to the experimental instance during the build.
It appears the upgrade has caused the deployment to target "%APPDATA%\...\12.0EXP\.." instead of "%APPDATA%\...\11.0EXP\.."
Setup:
1) Visual Studio 2012 Solution which generates a Visual Studio Package Extension.
2) Upgrade the solution to VS2013
3) Attempt to build.
Build fails with Error:
"C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\VSSDK\Microsoft.VsSDK.targets(503,5): error VSSDK1031: Extension '<myGuid>' could not be found. Please make sure the extension has been installed."
4) Go to project properties and un-check "Deploy VSIX content to experimental instance for debugging"
5) Attempt to build.
Build Succeeds.
Some additional log output showing the incorrect path resolution.
===========================
2>Using "GetDeploymentPathFromVsixManifest" task from assembly "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\VSSDK\Microsoft.VsSDK.Build.Tasks.dll".
2>Task "GetDeploymentPathFromVsixManifest"
2>Done executing task "GetDeploymentPathFromVsixManifest".
2>Done building target "GetVsixDeploymentPath" in project "Vs2012.csproj".
2>Target "GetVsixDeploymentPath" skipped. Previously built successfully.
2>Target "FindExistingDeploymentPath" in file "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\VSSDK\Microsoft.VsSDK.targets" from project "C:\cygwin\home\cpedlar\w\git\desktop\vs\Trunk\vs2012\Vs2012.csproj" (target
"DeployVsixExtensionFiles" depends on it):
2>Using "FindInstalledExtension" task from assembly "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\VSSDK\Microsoft.VsSDK.Build.Tasks.dll".
2>Task "FindInstalledExtension"
2>Done executing task "FindInstalledExtension".
2>Done building target "FindExistingDeploymentPath" in project "Vs2012.csproj".
2>Target "GetVsixSourceItems" skipped. Previously built successfully.
2>Target "DeployVsixExtensionFiles" in file "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\VSSDK\Microsoft.VsSDK.targets" from project "C:\cygwin\home\user\w\git\desktop\vs\Trunk\vs2012\Vs2012.csproj" (target "PrepareForRun"
depends on it):
2>Task "UninstallExtension" skipped, due to false condition; ('$(ExistingDeploymentPath)' != '$(VsixDeploymentPath)' AND '$(ExistingDeploymentPath)' != '') was evaluated as ('' != 'C:\Users\cpedlar\AppData\Local\Microsoft\VisualStudio\12.0Exp\Extensions\myExtension'
AND '' != '').
2>Task "Message"
2> VsixID = myGuid
2>Done executing task "Message".
2>Task "Message"
2> VsixVersion = 1000.0.0
2>Done executing task "Message".
2>Task "Message"
2> VsixDeploymentPath = C:\Users\user\AppData\Local\Microsoft\VisualStudio\12.0Exp\Extensions\myExtension
===========================
Any help or suggestions are appreciated.
UPDATE:
This is only occurring on my win7 machine. I pulled my repo to a win8 VM and things are working as expected. I'm not sure if this is a win7 issue or an error on my machine. I think I'll attempt to re-install my Visual Studio instances and/or
try on another win7 machine in the near future and see if I can narrow down the issue. I will update again if I find any additional information.
Thanks,
Colin
I just spent half a day trying to troubleshoot this problem, assuming it was due to a change in the project settings. The error message is extremely misleading, and it would be nice if Microsoft could provide a fix for this issue!
C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\
VSSDK\Microsoft.VsSDK.targets(503,5):
error VSSDK1031: Extension 'GUID of VSIX Package' could not be found.
Please make sure the extension has been installed.
Build FAILED.
Looking at the block of code in Microsoft.VsSDK.targets was no help either:
<!--Enable this extension via Extension Manager-->
<EnableExtension
VsixIdentifier="$(VsixID)"
RootSuffix="$(VSSDKTargetPlatformRegRootSuffix)"
FailIfNotInstalled="true" />
It appears that an extension is only marked for deletion, and you must exit and restart Visual Studio before the files are actually deleted. Take a look in this folder under your user profile:
C:\Users\Xxxx\AppData\Local\Microsoft\VisualStudio\12.0Exp
Here you will find the extensions that are deployed to the experimental instance. During development I regularly delete the \Nn.Exp folders.
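If you clean these folders often, the cleanup can be scripted; a small sketch (the base path and the version list are assumptions for illustration, adjust for your machine):

```python
# Sketch: remove experimental-instance extension folders so the next
# build redeploys from scratch. The version list is an assumption.
import os
import shutil

def clear_experimental_instances(base_dir, versions=("11.0", "12.0")):
    """Delete <base_dir>/<version>Exp folders if present; return deleted paths."""
    deleted = []
    for v in versions:
        path = os.path.join(base_dir, v + "Exp")
        if os.path.isdir(path):
            shutil.rmtree(path)
            deleted.append(path)
    return deleted

# Example (hypothetical path):
# clear_experimental_instances(r"C:\Users\me\AppData\Local\Microsoft\VisualStudio")
```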
David Schwartz -
Filter expression producing different results after upgrade to 11.1.1.7
Hello,
We recently did an upgrade and noticed that on a number of reports where we're using the FILTER expression, the numbers are very inflated. Where we are not using the FILTER expression, the numbers are as expected. In the example below we ran the 'Bookings' report in 10g and came up with one number, then ran the same report in 11g (11.1.1.7.0) after the upgrade and got a different result. The data source is the same database for each environment. Also, when running the physical SQL generated by the 10g and 11g versions of the report, we get the inflated numbers from the 11g SQL. Any ideas on what might be happening or causing the issue?
10g report: 2016-Q3......Bookings..........72,017
11g report: 2016-Q3......Bookings..........239,659
This is the simple FILTER expression that is being used in the column formula on the report itself for this particular scenario which produces different results in 10g and 11g.
FILTER("Fact - Opportunities"."Won Opportunity Amount" USING ("Opportunity Attributes"."Business Type" = 'New Business'))
-------------- Physical SQL created by 10g report -------- results as expected --------------------------------------------
WITH
SAWITH0 AS (select sum(case when T33142.OPPORTUNITY_STATUS = 'Won-closed' then T33231.USD_LINE_AMOUNT else 0 end ) as c1,
T28761.QUARTER_YEAR_NAME as c2,
T28761.QUARTER_RANK as c3
from
XXFI.XXFI_GL_FISCAL_MONTHS_V T28761 /* Dim_Periods */ ,
XXFI.XXFI_OSM_OPPTY_HEADER_ACCUM T33142 /* Fact_Opportunity_Headers(CloseDate) */ ,
XXFI.XXFI_OSM_OPPTY_LINE_ACCUM T33231 /* Fact_Opportunity_Lines(CloseDate) */
where ( T28761.PERIOD_NAME = T33142.CLOSE_PERIOD_NAME and T28761.QUARTER_YEAR_NAME = '2012-Q3' and T33142.LEAD_ID = T33231.LEAD_ID and T33231.LINES_BUSINESS_TYPE = 'New Business' and T33142.OPPORTUNITY_STATUS <> 'Duplicate' )
group by T28761.QUARTER_YEAR_NAME, T28761.QUARTER_RANK)
select distinct SAWITH0.c2 as c1,
'Bookings10g' as c2,
SAWITH0.c1 as c3,
SAWITH0.c3 as c5,
SAWITH0.c1 as c7
from
SAWITH0
order by c1, c5
-------------- Physical SQL created by the same report as above but in 11g (11.1.1.7.0) -------- results much higher --------------------------------------------
WITH
SAWITH0 AS (select sum(case when T33142.OPPORTUNITY_STATUS = 'Won-closed' then T33142.TOTAL_OPPORTUNITY_AMOUNT_USD else 0 end ) as c1,
T28761.QUARTER_YEAR_NAME as c2,
T28761.QUARTER_RANK as c3
from
XXFI.XXFI_GL_FISCAL_MONTHS_V T28761 /* Dim_Periods */ ,
XXFI.XXFI_OSM_OPPTY_HEADER_ACCUM T33142 /* Fact_Opportunity_Headers(CloseDate) */ ,
XXFI.XXFI_OSM_OPPTY_LINE_ACCUM T33231 /* Fact_Opportunity_Lines(CloseDate) */
where ( T28761.PERIOD_NAME = T33142.CLOSE_PERIOD_NAME and T28761.QUARTER_YEAR_NAME = '2012-Q3' and T33142.LEAD_ID = T33231.LEAD_ID and T33231.LINES_BUSINESS_TYPE = 'New Business' and T33142.OPPORTUNITY_STATUS <> 'Duplicate' )
group by T28761.QUARTER_YEAR_NAME, T28761.QUARTER_RANK),
SAWITH1 AS (select distinct 0 as c1,
D1.c2 as c2,
'Bookings2' as c3,
D1.c3 as c4,
D1.c1 as c5
from
SAWITH0 D1),
SAWITH2 AS (select D1.c1 as c1,
D1.c2 as c2,
D1.c3 as c3,
D1.c4 as c4,
D1.c5 as c5,
sum(D1.c5) as c6
from
SAWITH1 D1
group by D1.c1, D1.c2, D1.c3, D1.c4, D1.c5)
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4, D1.c5 as c5, D1.c6 as c6 from ( select D1.c1 as c1,
D1.c2 as c2,
D1.c3 as c3,
D1.c4 as c4,
D1.c5 as c5,
sum(D1.c6) over () as c6
from
SAWITH2 D1
order by c1, c4, c3 ) D1 where rownum <= 2000001
Thank you,
Mike
Edited by: Mike Jelen on Jun 7, 2013 2:05 PM
Thank you for the info. They are definitely different values, since one is on the header and the other is on the lines. As the "Won Opportunity" logical column is mapped to multiple LTSs, it appears OBI 11 uses a different algorithm than 10g to determine the most efficient table to use in query generation. I'll need to spend some time researching the impact of adding a 'sort' to the LTS. I'm hoping there's a way to get OBI to use logic similar to 10g in how it prioritizes tables.
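A tiny sketch (with made-up numbers) of the fan-out effect described above: summing a header-level amount over a header-to-lines join counts it once per line, which is exactly the kind of inflation seen in the 11g numbers:

```python
# Made-up illustration of header/line fan-out, not the actual OBIEE data.
# One opportunity with a header amount of 100 and three line rows.
header = {"LEAD_1": 100}                 # opportunity -> header amount (USD)
lines = ["LEAD_1", "LEAD_1", "LEAD_1"]   # three line rows for the same opportunity

# 10g-style plan: sum line-level amounts (assumed to add up to the header total)
line_amounts = [40, 35, 25]
sum_lines = sum(line_amounts)            # 100 -- as expected

# 11g-style plan: sum the header amount over the joined (fanned-out) rows
sum_fanned = sum(header[lead] for lead in lines)  # 100 * 3 = 300 -- inflated

assert sum_lines == 100 and sum_fanned == 300
```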
Thx again,
Mike -
NULL and Unspecified in Dashboard prompts after upgrading 10g to 11g OBIEE
Hi All,
We are working on an OBI upgrade project from 10g to 11.1.1.6.5. We are facing one issue at the dashboard prompt level: when we create a dashboard prompt on a column, it additionally shows "NULL" and "Unspecified" values. In 10g there are no such extra values.
The 10g and 11g instances point to the same database, and there are no NULL or Unspecified values at the database level.
Through some blogs and articles we found the solutions below.
To remove NULL:
-->Go to the Physical Column properties in the Physical Layer and disable the Nullable option by unchecking the box.
-->Go to the Database Features in the Physical Layer and disable the NULL_SUPPORTED value.
-->Go to Edit Dashboard Prompt, in Choice List Values drop-down list select SQL Results.
To remove Unspecified:
-->Go to Edit Dashboard Prompt, select SQL Results in the Choice List Values drop-down list, then write a SQL statement restricting the column to values not equal to 'Unspecified' (this way we can remove NULLs as well).
-->Go to Content tab of LTS, in Where clause write an SQL query to restrict Unspecified values.
Note: Check with the ETL team, because they may maintain NULL as the default value when the datatype is character, and 9999 or #### when the datatype is numeric. Also check the physical query and debug it carefully.
But we have a very big repository and a huge number of dashboard prompts, so it's not easy to manage with the above solutions. Correct me if there are any mistakes above.
Any ideas on this...?
Appreciate your help on this..!
Thanks in Advance,
Raghu Nagadasari
Hi Friends,
As of now, the only solution I have found for the above issue, i.e. how to avoid NULL at the dashboard prompt level, is:
Go to the Physical Column properties in Physical Layer and Disable the Nullable option by uncheck the box.
We have done this manually for all tables.
I'd appreciate any other ideas!
Thanks,
Raghu Nagadasari -
Custom metadata properties give no results after upgrade from 2010
Hi,
Although I see my custom managed properties in my Search Schema in Central Admin, I am unable to find results using them e.g. 'VendorName:Marcel'. I have tried changing the column value in a document and performed a full crawl, but still no results. General
searches such as 'Marcel' work fine and even something like 'author:marcel' (which is a built in managed property) works too. What am I missing? How can I determine the issue? Nothing stands out in the ULS logs.
Background:
I recently upgraded my managed metadata service and search service from 2010 to 2013.
I then upgraded the content db holding my root site collection, and had to re-create my search service proxy to get searches to work. I then upgraded the content db which holds my content hub site collection. This includes site columns, many of which
are also Managed Properties.
Crawls are running with 8 errors - none of which are for the document I changed. Warnings are mainly about 'png' files.
In 2010 I had scopes, which I no longer see in Central Admin. Although I have converted the content database without error, I have not converted the Site Collection to 2013 yet.
macrel
I had to set each managed property's 'queryable' flag and then run a full crawl, after which the results were returned. I am surprised that this did not come over from the upgrade of the search service.
macrel -
11.5.10.2(upgraded 10g database)
I just installed 11.5.10.2 and upgraded the database to 10g. Since then I am not able to use ls or any other OS commands. Please help me out.
When I type the ls command I get the following message:
ls: error while loading shared libraries: librt.so.1: cannot open shared object file: No such file or directory
What is the OS?
When I type the ls command I get the following message:
ls: error while loading shared libraries: librt.so.1: cannot open shared object file: No such file or directory
As what user (applmgr, oracle, root)? -
Infoset Query Wrong result after removing document number in the drilldown
Hi Friends,
I have two ODSs, a Billing ODS and a Condition ODS, and I have created an InfoSet query based on them.
From my Billing ODS I need QTY, and from the Condition ODS I need Value, Discount, etc.
Since there is more than one record in the Condition ODS for each document in the Billing ODS, I have divided the qty by the number of records in the query.
I am getting the wrong qty if I run the report by customer or material, but if I drill down by document I get the correct quantity.
Can anyone help me?
Thanks & Regards
Sudhakar
Thanks Oscar and Ganesh for your interest.
FYI, my InfoSet is created based on billing document and item number, which are available in both ODSs.
Here, for every document in the Billing ODS there are multiple records in the Condition ODS, so the billing qty gets added once per record in the Condition ODS.
E.g. Billing ODS: Doc Num 100012, Qty = 8. If there are 10 records in the Condition ODS, then in the InfoSet my QTY becomes:
Doc Num 100012, Qty = 80.
So in the query I divided by the number of records to get the qty, and it comes out correctly for a document-wise report.
The problem comes when I remove the document from the report and drill down to a higher level, say material; then it is calculated wrongly.
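A small sketch (made-up numbers) of why the divide-by-record-count workaround only holds while the document is in the drilldown:

```python
# Made-up data: doc A has qty 8 with 10 condition records,
# doc B has qty 5 with 2 condition records.
docs = {"100012": (8, 10), "100013": (5, 2)}

# Per-document drilldown: the qty is repeated once per condition record,
# so dividing by the record count restores it exactly.
for doc, (qty, recs) in docs.items():
    inflated = qty * recs
    assert inflated / recs == qty

# Higher-level drilldown (e.g. material): the inflated totals and record
# counts are summed across documents BEFORE dividing, which mixes ratios.
total_inflated = sum(q * r for q, r in docs.values())   # 80 + 10 = 90
total_records = sum(r for _, r in docs.values())        # 10 + 2 = 12
wrong = total_inflated / total_records                  # 7.5
correct = sum(q for q, _ in docs.values())              # 8 + 5 = 13
assert wrong != correct
```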
Your suggestions please.
Regards
Sudhakar -
hi,
i am getting this error whenever i open any form or try to query.
ORA-01116: error in opening database file 125
ORA-01110: data file 125: '/oratest/ora/proddata/a_txn_data42.dbf'
ORA-27041: unable to open file
HP-UX Error: 24: Too many open files
Additional information: 3
my OS is HP UX 11.11
There is a Unix kernel parameter or a ulimit parameter that controls the number of files an account can have open at any given time - I suspect that this value is set too low. Please check with your Unix sysadmin and have him/her double or triple this value and then try again. Please see ML Note 555895.1 (this note is for R12, but it lists the various kernel parameters and ulimit values).
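For reference, the limits in effect for a process can be inspected before involving the sysadmin; a sketch using Python's resource module (Unix only), roughly equivalent to checking `ulimit -n` in the shell:

```python
# Sketch: read the per-process open-file limit on a Unix system.
# This is the same limit a sysadmin would raise via ulimit or kernel tuning.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open files: soft limit={soft}, hard limit={hard}")

# HP-UX Error 24 ("Too many open files") means the process hit the soft limit.
assert isinstance(soft, int) and isinstance(hard, int)
```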
HTH
Srini Chavali -
UWL Wrong functioning after upgrade to 7.3
Hi,
We've upgraded the Portal from 7.01 to 7.30 SP09 and UWL stopped functioning:
Clicking on a work item and then clicking on another shows the error “400 Session Not Found”.
When managing substitution rules, the user picker shows the message “No name found for: XXXX”.
Reading the system’s logs we’ve found the exceptions:
Cannot clear ICM server cache by [718ac393010411e3a1fa000000d23c1e] etagIP address
Failed to invalidate cached entry for user 718ac393010411e3a1fa000000d23c1e Exception:java.rmi.RemoteException: Cannot clear ICM server cache by [718ac393010411e3a1fa000000d23c1e] etag.; nested exception is: com.sap.bc.proj.jstartup.icmadm.IcmAdmException: interface disabled
Searching SCN posts and OSS notes, we read and applied SAP Note 1635058 - J2EE crash in IcmAdm.getInstance, but the error persists. We also upgraded UWL COLL PROCESS ENGINE 7.30 to the latest patch and it still doesn't work.
The system alias configured on the WebFlowConnector tests OK; we have already re-registered the connector, cleared the cache and restarted the UWL service.
Regards,
Gregory.
Hi,
We are getting the "cannot connect to provider" warning in the portal frontend while a user is trying to view his/her inbox. Though this issue is intermittent, we have the following entries in the trace files:
1. Cannot clear ICM server cache by [718ac393010411e3a1fa000000d23c1e] etagIP address.
2. Empty or null time zone value from Backend.
3.Attempting to create outgoing ssl connection without trusted certificates.
But we aren't sure whether the warning message is due to the above-mentioned log entries.
Can any one help us regarding this?
Are the three log entries mentioned above the reason for the warning "cannot connect to provider"?
thanks and regards
Arghya -
STMS shows wrong release after upgrade
I've just upgraded our DEV box (also our transport domain controller) from R/3 4.6C to ECC 6.0 and am trying to get STMS to show the new release version on the system overview screen.
I try to "update configuration" but the release remains the same with value "46C".
Any ideas?
Thanks.
Hi All,
I also upgraded my development system from 4.6C to ECC 6.0. Now I want to reconfigure STMS. Can you please tell me how to reconfigure it?
Hemanth -
Admin group has wrong gid after upgrading from Tiger to Leopard
Under Tiger I unfortunately made my login an admin. This worked OK under Tiger, but created confusion under Leopard. The end result is that the admin group has the gid 501 and the gid 80 has an (unknown) name. Repairing permissions under Leopard caused all of Apple's applications and system-type folders and files to have the group 80 (unknown). I can see from discussion forums and other native Leopard machines that gid 80 is supposed to be named admin and is for, well, Admins.
I created a new user, gave him administrative rights and migrated the old "admin" user's files, folders, etc. to the new user and removed the admin user through the Accounts GUI. That new user and all other administrative users are members of group 501(admin), but NOT of 80(unknown).
How can I remove the current admin group (501), give the admin group name to 80 and have the administrative users join group 80?
Capture streaming does not work after upgrade of the source database.
Hello,
We have a complex system with 2 X RAC databases 10.2.0.4 (source) and 2 X single databases (target) 11.2.0.2
Streaming is running only from source to target.
After upgrading the RAC databases to 11.2.0.2, streaming works only from one RAC to one single database.
The first RAC streams to the first single database only, and the second RAC to the second single database only.
The first source-target pair is streaming fine; the second capture aborts just after starting, with the following errors:
Streams CAPTURE CP05 for STREAMS started with pid=159, OS id=21174
Wed Mar 28 10:41:55 2012
Propagation Sender/Receiver (CCA) for Streams Capture and Apply STREAMS with pid=189, OS id=21176 started.
Wed Mar 28 10:43:05 2012
Streams APPLY AP05 for STREAMS started with pid=134, OS id=21696
Wed Mar 28 10:43:06 2012
Streams Apply Reader for STREAMS started AS0G with pid=191 OS id=21709
Wed Mar 28 10:43:06 2012
Streams Apply Server for STREAMS started AS04 with pid=192 OS id=21711
Wed Mar 28 10:43:30 2012
Streams CAPTURE CP05 for STREAMS with pid=159, OS id=21174 is in combined capture and apply mode.
Capture STREAMS is handling 1 applies.
Streams downstream capture STREAMS uses downstream_real_time_mine: TRUE
Starting persistent Logminer Session with sid = 621 for Streams Capture STREAMS
LOGMINER: Parameters summary for session# = 621
LOGMINER: Number of processes = 3, Transaction Chunk Size = 1
LOGMINER: Memory Size = 10M, Checkpoint interval = 1000M
LOGMINER: SpillScn 0, ResetLogScn 7287662065313
LOGMINER: summary for session# = 621
LOGMINER: StartScn: 12620843936763 (0x0b7a.84eb6bfb)
LOGMINER: EndScn: 0
LOGMINER: HighConsumedScn: 12620843936763 (0x0b7a.84eb6bfb)
LOGMINER: session_flag 0x1
LOGMINER: LowCkptScn: 12620843920280 (0x0b7a.84eb2b98)
LOGMINER: HighCkptScn: 12620843920281 (0x0b7a.84eb2b99)
LOGMINER: SkipScn: 12620843920280 (0x0b7a.84eb2b98)
Wed Mar 28 10:44:53 2012
LOGMINER: session#=621 (STREAMS), reader MS00 pid=198 OS id=22578 sid=1148 started
Wed Mar 28 10:44:53 2012
LOGMINER: session#=621 (STREAMS), builder MS01 pid=199 OS id=22580 sid=1338 started
Wed Mar 28 10:44:53 2012
LOGMINER: session#=621 (STREAMS), preparer MS02 pid=200 OS id=22582 sid=1519 started
LOGMINER: Begin mining logfile for session 621 thread 1 sequence 196589, /opt/app/oracle/admin/singledb/stdbyarch/singledb_1_196589_569775692.arc
Errors in file /opt/app/oracle/diag/rdbms/singledb/singledb/trace/singledb_ms00_22578.trc (incident=113693):
ORA-00600: internal error code, arguments: [krvxruts004], [11.2.0.0.0], [10.2.0.4.0], [], [], [], [], [], [], [], [], []
Incident details in: /opt/app/oracle/diag/rdbms/singledb/singledb/incident/incdir_113693/singledb_ms00_22578_i113693.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
krvxerpt: Errors detected in process 198, role reader.
We have 5 streaming processes running.
When we rebuilt one of them, everything worked fine, but the others are too big to rebuild.
Has anybody seen such behaviour?
Oracle development is already working on it, but we need a faster solution.
Thanks
Jurrai
wwn wrote: I got this after a former kernel update and I can give you only a typical Windows advice: reinstall all the Bumblebee stuff after uninstallation and after a reboot. Sounds strange but worked for me.
What exactly did you reinstall? I am experiencing the same problem. -
ORA-12709: error while loading create database character set after upgrade
Dear All
I am getting "ORA-12709: error while loading create database character set" after upgrading the database from 10.2.0.3 to 11.2.0.3 in an E-Business Suite environment.
The current application version is 12.0.6.
Please help me to resolve it.
SQL> startup;
ORACLE instance started.
Total System Global Area 1.2831E+10 bytes
Fixed Size 2171296 bytes
Variable Size 2650807904 bytes
Database Buffers 1.0133E+10 bytes
Redo Buffers 44785664 bytes
ORA-12709: error while loading create database character set
-bash-3.00$ echo $ORA_NLS10
/u01/oracle/PROD/db/teche_st/11.2.0/nls/data/9idata
export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=/u01/oracle/PROD/db/tech_st/11.2.0
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/perl/bin:$PATH
export PERL5LIB=$ORACLE_HOME/perl/lib/5.10.0:$ORACLE_HOME/perl/site_perl/5.10.0
export ORA_NLS10=/u01/oracle/PROD/db/teche_st/11.2.0/nls/data/9idata
export ORACLE_SID=PROD
-bash-3.00$ pwd
/u01/oracle/PROD/db/tech_st/11.2.0/nls/data/9idata
-bash-3.00$ ls -lh |more
total 56912
-rw-r--r-- 1 oracle oinstall 951 Jan 15 16:05 lx00001.nlb
-rw-r--r-- 1 oracle oinstall 957 Jan 15 16:05 lx00002.nlb
-rw-r--r-- 1 oracle oinstall 959 Jan 15 16:05 lx00003.nlb
-rw-r--r-- 1 oracle oinstall 984 Jan 15 16:05 lx00004.nlb
-rw-r--r-- 1 oracle oinstall 968 Jan 15 16:05 lx00005.nlb
-rw-r--r-- 1 oracle oinstall 962 Jan 15 16:05 lx00006.nlb
-rw-r--r-- 1 oracle oinstall 960 Jan 15 16:05 lx00007.nlb
-rw-r--r-- 1 oracle oinstall 950 Jan 15 16:05 lx00008.nlb
-rw-r--r-- 1 oracle oinstall 940 Jan 15 16:05 lx00009.nlb
-rw-r--r-- 1 oracle oinstall 939 Jan 15 16:05 lx0000a.nlb
-rw-r--r-- 1 oracle oinstall 1006 Jan 15 16:05 lx0000b.nlb
-rw-r--r-- 1 oracle oinstall 1008 Jan 15 16:05 lx0000c.nlb
-rw-r--r-- 1 oracle oinstall 998 Jan 15 16:05 lx0000d.nlb
-rw-r--r-- 1 oracle oinstall 1005 Jan 15 16:05 lx0000e.nlb
-rw-r--r-- 1 oracle oinstall 926 Jan 15 16:05 lx0000f.nlb
-rw-r--r-- 1 oracle oinstall 1.0K Jan 15 16:05 lx00010.nlb
-rw-r--r-- 1 oracle oinstall 958 Jan 15 16:05 lx00011.nlb
-rw-r--r-- 1 oracle oinstall 956 Jan 15 16:05 lx00012.nlb
-rw-r--r-- 1 oracle oinstall 1005 Jan 15 16:05 lx00013.nlb
-rw-r--r-- 1 oracle oinstall 970 Jan 15 16:05 lx00014.nlb
-rw-r--r-- 1 oracle oinstall 950 Jan 15 16:05 lx00015.nlb
-rw-r--r-- 1 oracle oinstall 1.0K Jan 15 16:05 lx00016.nlb
-rw-r--r-- 1 oracle oinstall 957 Jan 15 16:05 lx00017.nlb
-rw-r--r-- 1 oracle oinstall 932 Jan 15 16:05 lx00018.nlb
-rw-r--r-- 1 oracle oinstall 932 Jan 15 16:05 lx00019.nlb
-rw-r--r-- 1 oracle oinstall 951 Jan 15 16:05 lx0001a.nlb
-rw-r--r-- 1 oracle oinstall 944 Jan 15 16:05 lx0001b.nlb
-rw-r--r-- 1 oracle oinstall 953 Jan 15 16:05 lx0001c.nlb
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options.
ORACLE_HOME = /u01/oracle/PROD/db/tech_st/11.2.0
System name: SunOS
Node name: proddb3.zakathouse.org
Release: 5.10
Version: Generic_147440-19
Machine: sun4u
Using parameter settings in server-side spfile /u01/oracle/PROD/db/tech_st/11.2.0/dbs/spfilePROD.ora
System parameters with non-default values:
processes = 200
sessions = 400
timed_statistics = TRUE
event = ""
shared_pool_size = 416M
shared_pool_reserved_size= 40M
nls_language = "american"
nls_territory = "america"
nls_sort = "binary"
nls_date_format = "DD-MON-RR"
nls_numeric_characters = ".,"
nls_comp = "binary"
nls_length_semantics = "BYTE"
memory_target = 11G
memory_max_target = 12G
control_files = "/u01/oracle/PROD/db/apps_st/data/cntrl01.dbf"
control_files = "/u01/oracle/PROD/db/tech_st/10.2.0/dbs/cntrl02.dbf"
control_files = "/u01/oracle/PROD/db/apps_st/data/cntrl03.dbf"
db_block_checksum = "TRUE"
db_block_size = 8192
compatible = "11.2.0.0.0"
log_archive_dest_1 = "LOCATION=/u01/oracle/PROD/db/apps_st/data/archive"
log_archive_format = "%t_%s_%r.dbf"
log_buffer = 14278656
log_checkpoint_interval = 100000
log_checkpoint_timeout = 1200
db_files = 512
db_file_multiblock_read_count= 8
db_recovery_file_dest = "/u01/oracle/fast_recovery_area"
db_recovery_file_dest_size= 14726M
log_checkpoints_to_alert = TRUE
dml_locks = 10000
undo_management = "AUTO"
undo_tablespace = "APPS_UNDOTS1"
db_block_checking = "FALSE"
session_cached_cursors = 500
utl_file_dir = "/usr/tmp"
utl_file_dir = "/usr/tmp"
utl_file_dir = "/u01/oracle/PROD/db/tech_st/10.2.0/appsutil/outbound"
utl_file_dir = "/u01/oracle/PROD/db/tech_st/10.2.0/appsutil/outbound/PROD_proddb3"
utl_file_dir = "/usr/tmp"
plsql_code_type = "INTERPRETED"
plsql_optimize_level = 2
job_queue_processes = 2
cursor_sharing = "EXACT"
parallel_min_servers = 0
parallel_max_servers = 8
core_dump_dest = "/u01/oracle/PROD/db/tech_st/10.2.0/admin/PROD_proddb3/cdump"
audit_file_dest = "/u01/oracle/admin/PROD/adump"
db_name = "PROD"
open_cursors = 600
pga_aggregate_target = 1G
workarea_size_policy = "AUTO"
optimizer_secure_view_merging= FALSE
aq_tm_processes = 1
olap_page_pool_size = 4M
diagnostic_dest = "/u01/oracle"
max_dump_file_size = "20480"
Tue Jan 15 16:16:02 2013
PMON started with pid=2, OS id=18608
Tue Jan 15 16:16:02 2013
PSP0 started with pid=3, OS id=18610
Tue Jan 15 16:16:03 2013
VKTM started with pid=4, OS id=18612 at elevated priority
VKTM running at (10)millisec precision with DBRM quantum (100)ms
Tue Jan 15 16:16:03 2013
GEN0 started with pid=5, OS id=18616
Tue Jan 15 16:16:03 2013
DIAG started with pid=6, OS id=18618
Tue Jan 15 16:16:03 2013
DBRM started with pid=7, OS id=18620
Tue Jan 15 16:16:03 2013
DIA0 started with pid=8, OS id=18622
Tue Jan 15 16:16:03 2013
MMAN started with pid=9, OS id=18624
Tue Jan 15 16:16:03 2013
DBW0 started with pid=10, OS id=18626
Tue Jan 15 16:16:03 2013
LGWR started with pid=11, OS id=18628
Tue Jan 15 16:16:03 2013
CKPT started with pid=12, OS id=18630
Tue Jan 15 16:16:03 2013
SMON started with pid=13, OS id=18632
Tue Jan 15 16:16:04 2013
RECO started with pid=14, OS id=18634
Tue Jan 15 16:16:04 2013
MMON started with pid=15, OS id=18636
Tue Jan 15 16:16:04 2013
MMNL started with pid=16, OS id=18638
DISM started, OS id=18640
ORACLE_BASE from environment = /u01/oracle
Tue Jan 15 16:16:08 2013
ALTER DATABASE MOUNT
ORA-12709 signalled during: ALTER DATABASE MOUNT...

Do you have any trace files generated at the time you get this error?
Please see these docs.
ORA-12709: WHILE STARTING THE DATABASE [ID 1076156.6]
Upgrading from 9i to 10gR2 Fails With ORA-12709 : Error While Loading Create Database Character Set [ID 732861.1]
Ora-12709 While Trying To Start The Database [ID 311035.1]
ORA-12709 when Mounting the Database [ID 160478.1]
How to Move From One Database Character Set to Another at the Database Level [ID 1059300.6]
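ORA-12709 at mount time means the instance could not load its character-set boot files (the lx*.nlb files listed at the top of this alert log). A minimal shell sketch of the sanity check those notes describe, using a scratch directory as a stand-in for the real NLS data directory (the default location $ORACLE_HOME/nls/data, overridable via ORA_NLS10, is an assumption about your layout):

```shell
# Stand-in for $ORACLE_HOME/nls/data; in a real check you would use
# NLS_DIR="${ORA_NLS10:-$ORACLE_HOME/nls/data}" instead.
NLS_DIR=$(mktemp -d)
touch "$NLS_DIR/lx00017.nlb"   # stand-in for a real NLS boot file

# The instance needs to find at least the lx*.nlb files here at mount time.
if ls "$NLS_DIR"/lx*.nlb >/dev/null 2>&1; then
  NLS_STATUS=ok
else
  NLS_STATUS=missing
fi
echo "nlb boot files: $NLS_STATUS"
```

If the real directory is missing those files, or ORA_NLS10 still points at the old 10.2.0 home after the upgrade, that alone is enough to reproduce ORA-12709 at mount.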
Thanks,
Hussein -
TNS error after upgrading database to 10.2.0.2.0
Hi,
After upgrading my database from 10.2.0.1.0 to 10.2.0.2.0,
I got a TNS error and couldn't connect to the database anymore.
Here is the test output.
C:\Documents and Settings\Administrator>tnsping malecare
TNS Ping Utility for 32-bit Windows: Version 10.2.0.2.0 - Production on 09-JAN-2008 10:32:41
Copyright (c) 1997, 2005, Oracle. All rights reserved.
Used parameter files:
C:\oracle\product\10.2.0\db_1\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 127.0.0.1)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = malecare))
OK (30 msec)
C:\Documents and Settings\Administrator>sqlplus /nolog
SQL*Plus: Release 10.2.0.2.0 - Production on Wed Jan 9 10:33:42 2008
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> connect / as sysdba
ERROR:
ORA-12560: TNS:protocol adapter error
SQL> connect sys/xxxxx@malecare as sysdba
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Any suggestion would be appreciated.

C:\>lsnrctl
LSNRCTL for 32-bit Windows: Version 10.2.0.2.0 - Production on 09-JAN-2008 11:37:18
Copyright (c) 1991, 2005, Oracle. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL> reload listener
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
The command completed successfully
LSNRCTL> status listener
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
Alias LISTENER
Version TNSLSNR for 32-bit Windows: Version 10.2.0.2.0 - Production
Start Date 09-JAN-2008 11:36:16
Uptime 0 days 0 hr. 1 min. 14 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File C:\oracle\product\10.2.0\db_1\network\admin\listener.ora
Listener Log File C:\oracle\product\10.2.0\db_1\network\log\listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1ipc)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521)))
Services Summary...
Service "MALECARE" has 1 instance(s).
Instance "MALECARE", status UNKNOWN, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
LSNRCTL> exit
C:\>sqlplus /nolog
SQL*Plus: Release 10.2.0.2.0 - Production on Wed Jan 9 11:37:41 2008
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> connect / as sysdba
ERROR:
ORA-12560: TNS:protocol adapter error
SQL> connect sys/xxxxx@malecare as sysdba
ERROR:
ORA-12518: TNS:listener could not hand off client connection
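The two local symptoms differ: ORA-12560 on "connect / as sysdba" means the local (bequeath) connection failed before the listener was ever involved (on Windows, usually ORACLE_SID not set or the OracleService<SID> not started), while ORA-12514/ORA-12518 concern the services the listener knows about. If dynamic registration is unreliable after the upgrade, a static entry keeps the service known to the listener regardless of PMON. A sketch only; the SID, GLOBAL_DBNAME, and ORACLE_HOME path are assumptions based on the output above:

```
# listener.ora -- static service registration (values are assumptions)
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = malecare)
      (ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
      (SID_NAME = malecare)
    )
  )
```

After editing, reload with "lsnrctl reload" and compare the Services Summary against the SERVICE_NAME in the tnsnames.ora connect descriptor; a statically registered instance shows "status UNKNOWN", exactly as in the lsnrctl status output above.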
Maybe you are looking for
-
AT&T iPhone 6 intermittently says no SIM card installed, but works the rest of the time. Thoughts?
-
Rx Mail Adapter configuration for attachment sending and dynamic From/To
Hi, I have a scenario in which I have to send file content as an attachment and then take the From/To/CC values dynamically from the XML and post. I am able to do it without the dynamic part. Do we need to add some modules to do the same? Rgds, Aditya
-
Using Nano as flash drive...
I used my 2nd gen nano as a flash drive to transfer music from my PC to my new Mac. I deleted the files once they were transferred, but when my iPod is connected all the empty space still shows up full as "other". What is the deal with that? Is there a
-
I have a question regarding good practice with SL server setup on single-server networks/domains. Normally, in past lives with Linux, I would have set up services to respond as "mail.mybiz.com", "vpn.mybiz.com", "ical.mybiz.com", "ichat.mybiz.com". I w
-
(--Show List of Values--) does not return prompts, just spins forever
We are using BO XI 3.1 on Windows 2003/IIS. If we create a simple report with a prompt, save the report, and try to rerun it, the prompts do not show up. Instead an hourglass spins until the report is closed. We get the error: BO XI 3.1 assert failur