ORA-39126 when attempting expdp with network_link
I'm trying to use expdp to get a dump of a db on another box so I can duplicate it on my dev machine.
I created a database link using:
create public database link preprodlink connect to "TESTUSER" identified by "testpass" using '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=my.machine.eu)(PORT=1521)))(CONNECT_DATA=(SID=PREPRODSID)))';
I have verified that this seems OK by running:
select * from dual@preprodlink;
which runs successfully.
I then created a local user to match the schema with :
create user TESTUSER identified by testpass;
grant all privileges to TESTUSER;
I then attempt the actual export using:
expdp TESTUSER/testpass directory=oratmp network_link=preprodlink dumpfile=TESTUSER.dmp schemas=TESTUSER exclude=statistics
But I get error ORA-39126 and PLS-00201.
I can't figure out what the problem is.
The full export log is below:
Export: Release 11.2.0.2.0 - Production on Wed Sep 12 04:08:58 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
Starting "TESTUSER"."SYS_EXPORT_SCHEMA_02": TESTUSER/******** directory=oratmp network_link=preprodlink dumpfile=TESTUSER.dmp schemas=TESTUSER exclude=statistics
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.CONFIGURE_METADATA_UNLOAD [ESTIMATE_PHASE]
ORA-06550: line 1, column 13:
PLS-00201: identifier 'DBMS_METADATA.NETWORK_OPEN@PREPRODLINK' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 8358
----- PL/SQL Call Stack -----
object line object
handle number name
4B5C9DC0 19208 package body SYS.KUPW$WORKER
4B5C9DC0 8385 package body SYS.KUPW$WORKER
4B5C9DC0 6628 package body SYS.KUPW$WORKER
4B5C9DC0 12605 package body SYS.KUPW$WORKER
4B5C9DC0 2546 package body SYS.KUPW$WORKER
4B5C9DC0 9054 package body SYS.KUPW$WORKER
4B5C9DC0 1688 package body SYS.KUPW$WORKER
5113DAC8 2 anonymous block
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.CONFIGURE_METADATA_UNLOAD [ESTIMATE_PHASE]
ORA-06550: line 1, column 13:
PLS-00201: identifier 'DBMS_METADATA.NETWORK_OPEN@PREPRODLINK' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 8358
----- PL/SQL Call Stack -----
object line object
handle number name
4B5C9DC0 19208 package body SYS.KUPW$WORKER
4B5C9DC0 8385 package body SYS.KUPW$WORKER
4B5C9DC0 6628 package body SYS.KUPW$WORKER
4B5C9DC0 12605 package body SYS.KUPW$WORKER
4B5C9DC0 2546 package body SYS.KUPW$WORKER
4B5C9DC0 9054 package body SYS.KUPW$WORKER
4B5C9DC0 1688 package body SYS.KUPW$WORKER
5113DAC8 2 anonymous block
Job "TESTUSER"."SYS_EXPORT_SCHEMA_02" stopped due to fatal error at 04:09:13
Oh well,
SQL> desc dbms_metadata
FUNCTION ADD_TRANSFORM RETURNS NUMBER
Other than that, the point is what Dean says: check the remote db (invalid packages, etc.) and involve the remote db's DBA. Is it possible to run the export locally on the remote db?
For invalid objects, try something like
select object_name, object_type from dba_objects where status != 'VALID';
edit:
forum ate not equal
Edited by: orafad on Sep 13, 2012 3:49 PM
Edited by: orafad on Sep 13, 2012 7:38 PM
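To run that same check against the remote side without immediately involving the DBA, the query can be pointed across the database link itself. A sketch, assuming TESTUSER can see the relevant rows in ALL_OBJECTS on the remote database (DBA_OBJECTS needs elevated privileges):

```sql
-- Sketch: check the remote database for invalid objects over the link.
-- ALL_OBJECTS is visible to any user; DBA_OBJECTS needs DBA privileges.
select owner, object_name, object_type
  from all_objects@preprodlink
 where status <> 'VALID';

-- The PLS-00201 points at DBMS_METADATA on the remote side, so verify
-- it is valid and that TESTUSER can see it there:
select owner, object_name, status
  from all_objects@preprodlink
 where object_name = 'DBMS_METADATA';
```

If DBMS_METADATA does not show up at all over the link, the remote account likely lacks execute privilege on it, which could produce exactly the PLS-00201 above.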
Similar Messages
-
Expdp with network_link and different character sets for source and target?
DB versions 10.2 and 10.1
Is there any limitation for implementing expdp with network_link but target and source have different character sets?
I tried with many combinations, but only had success when target and source have the same character sets. In other combinations there was invalid character set conversion (loss of some characters in VARCHAR2 and CLOB fields).
I didn't find anything on the internet, forums or Oracle documentation. There was only limitation for transportable tablespace mode export (Database Utilities).
Thanks

Hi DBA-One,
This link
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_overview.htm#CEGFCFFI
is from Database Utilities 10g Release 2 (10.2) (I read it many times)
There is only one thing about different character sets.
I quote:
Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in which character set conversions are not supported under the Data Pump is when using transportable tablespaces.
The VERSION parameter also plays no role here, because the behaviour (character set) is the same whether target and source are both 10.2, or 10.2 and 10.1 respectively. -
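For anyone comparing the two databases, the character sets on each end can be read from NLS_DATABASE_PARAMETERS. A small sketch ("srclink" is a placeholder for the actual database link to the source):

```sql
-- Character sets of the local (target) database
select parameter, value
  from nls_database_parameters
 where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

-- Character sets of the remote (source) database, via a db link
select parameter, value
  from nls_database_parameters@srclink
 where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```

If the target character set is not a superset of the source, lossy conversion of the kind described above is expected.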
Getting ORA-1013 when attempting to expand tables within connections
Getting ORA-1013 when attempting to expand tables within connections on new install of Oracle SQL developer 3.1.06.
Any ideas?

Hi Steve,
The ORA-01013 usually means a timeout or, more typically on this forum, an intentional user cancellation. Searching the forum, I do not see any prior references to your exact situation. Some recommendations:
1. SQL Developer 3.1.0.6 was an early adopter release. Upgrade to the latest and greatest: 3.2.10.09.57
2. Review this: Re: [SQL Developer 3.0.04] Cannot Drill Down
3. If it is a true timeout, your privileges might make a difference. If you have access to the DBA views, SQL Developer does look-ups using those and performance is better.
So for [3], having the SELECT_CATALOG_ROLE privilege may help. I even read of one case in this forum where the poster finally discovered his database's recycle bin had an issue and the DBA needed to do some clean-up to correct the performance issue.
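A sketch of [3], in case it helps (run by a DBA; the grantee name is a placeholder):

```sql
-- Give the SQL Developer login account read access to the DBA views
grant select_catalog_role to some_user;
```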
Hope this helps,
Gary
SQL Developer Team -
ORA-39126 when exporting with expdp
Hi there,
I'm getting a crash on 11g 11.1.0.7 when exporting a schema using expdp:
expdp
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 3.096 GB
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA [TABLE_DATA:"NSNPL"."SYS_EXPORT_SCHEMA_02"]
ORA-22813: operand value exceeds system limits
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 7839
----- PL/SQL Call Stack -----
object line object
handle number name
0x14d81a160 18237 package body SYS.KUPW$WORKER
0x14d81a160 7866 package body SYS.KUPW$WORKER
0x14d81a160 2744 package body SYS.KUPW$WORKER
0x14d81a160 8504 package body SYS.KUPW$WORKER
0x14d81a160 1545 package body SYS.KUPW$WORKER
0x14d81db88 2 anonymous block
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA []
ORA-22813: operand value exceeds system limits
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPW$WORKER", line 7834
----- PL/SQL Call Stack -----
object line object
handle number name
0x14d81a160 18237 package body SYS.KUPW$WORKER
0x14d81a160 7866 package body SYS.KUPW$WORKER
0x14d81a160 2744 package body SYS.KUPW$WORKER
0x14d81a160 8504 package body SYS.KUPW$WORKER
0x14d81a160 1545 package body SYS.KUPW$WORKER
0x14d81db88 2 anonymous block
Job "NSNPL"."SYS_EXPORT_SCHEMA_03" stopped due to fatal error at 00:57:23
This look awfully similar to bug 6991626 (cf ID 737618.1), however the database has already been successfully patched (and in fact even repatched) for this bug:
$ORACLE_HOME/OPatch/opatch lsinventory
Invoking OPatch 11.1.0.6.2
Oracle Interim Patch Installer version 11.1.0.6.2
Copyright (c) 2007, Oracle Corporation. All rights reserved.
Oracle Home : /opt/oracle/db11g
Central Inventory : /opt/oracle/inventory
from : /etc/oraInst.loc
OPatch version : 11.1.0.6.2
OUI version : 11.1.0.7.0
OUI location : /opt/oracle/db11g/oui
Log file location : /opt/oracle/db11g/cfgtoollogs/opatch/opatch2010-06-02_08-23-53AM.log
Lsinventory Output file location : /opt/oracle/db11g/cfgtoollogs/opatch/lsinv/lsinventory2010-06-02_08-23-53AM.txt
Installed Top-level Products (2):
Oracle Database 11g 11.1.0.6.0
Oracle Database 11g Patch Set 1 11.1.0.7.0
There are 2 products installed in this Oracle Home.
Interim patches (6) :
Patch 6991626 : applied on Tue Jun 01 22:35:32 WET 2010
Created on 14 Oct 2008, 23:25:07 hrs PST8PDT
Bugs fixed:
6991626
[...]

Does anyone have an idea on what might be the culprit here?
Thanks for your help,
Chris

Hi Prathmesh,
in fact I saw this very thread before and made sure that both solutions were applied. Moreover, as I said, patch 6991626 had already been applied earlier, precisely to fix this problem, and I had been able to successfully export other, albeit somewhat smaller, schemas (500M instead of 3GB) over the last few months. This is why I was so puzzled to see that exact bug raise its ugly head again. As far as I can tell I have made no modification to the DB since that last patch in Nov. 2009. In fact the DB has been running pretty much untouched since then.
I even tried yesterday to reinstall the patch; opatch does the operation gracefully, first rolling back the patch then reapplying it, with only a warning about the patch being already present. However the problem does not get fixed.
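One workaround sometimes suggested for ORA-22813 during a schema export (an assumption worth testing here, not a confirmed fix for this bug) is to skip the statistics objects and gather them again after import, e.g. via a parameter file:

```
# expdp parameter file (sketch); schema name taken from the log above,
# directory and file names are placeholders
schemas=NSNPL
directory=DATA_PUMP_DIR
dumpfile=nsnpl.dmp
logfile=nsnpl_exp.log
exclude=statistics
```

Statistics objects are a known source of oversized collections inside the Data Pump workers, so excluding them narrows down whether they are the trigger.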
Thanks a lot for your help,
Chris -
ORA-3297 when attempting to resize a datafile to a small size
Hello,
I have a tablespace tbspace_A which has 3 datafiles. There are numerous segments in tbspace_A. This tablespace grew to 80 gigs while lots of inserts were performed on tables. Those tables have since either been dropped or had most of their records deleted. Now I want to reduce the tablespace from 80 gigs to 50 gigs so I can return the unused space to the O/S. I am getting error ORA-3297 when trying to shrink the datafile /oradata/tbspace_A03.dbf belonging to tbspace_A.
My task now is to:
1) Find the segments that occupy /oradata/tbspace_A03.dbf datafile
2) alter table table_name99 move tablespace scratch_tbspace;
3) alter table table_name99 move tbspace_A;
4) Attempt again to resize (shrink) /oradata/tbspace_A03.dbf,
so the unused disk space is returned to the O/S.
Is this the correct method?
Is there a better method?
What are the gotchas?
Thank you.

This is the procedure to follow; the gotchas I see are:
1. Watch out for relational constraints
2. Rebuild indexes afterwards
3. You may use the move command, unless the table has LONG/LONG RAW columns.
You could use Enterprise Manager and/or oracle expdp/impdp to perform the remapping tasks (10g).
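Step 1 of the plan (finding the segments that occupy /oradata/tbspace_A03.dbf) can be done with a query along these lines (a sketch; requires access to the DBA views):

```sql
-- Which segments have extents in the datafile to be shrunk?
select e.owner, e.segment_name, e.segment_type,
       sum(e.bytes)/1024/1024 as mb
  from dba_extents    e,
       dba_data_files f
 where f.file_id   = e.file_id
   and f.file_name = '/oradata/tbspace_A03.dbf'
 group by e.owner, e.segment_name, e.segment_type
 order by mb desc;
```

Moving only the segments this returns, then resizing, avoids touching tables that never lived in that file.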
~ Madrid -
Getting an ORA-39126 when importing
When I run the following import:
Connected to: Oracle Database 10g Release 10.2.0.3.0 - Production
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_02" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_02": system/********@o10g directory=my_imp_dir CONTENT=ALL TABLE_EXISTS_ACTION=REPLACE dumpfile=*****.dmp logfile=my_log_dir:imp.log schemas=e5_database REMAP_TABLESPACE=******:ERGO_USR REMAP_TABLESPACE=*****:ERGO_INDX
Processing object type DATABASE_EXPORT/SCHEMA/USER
I get the following error.
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PUT_DDL [FUNCTION:"E5_DATABASE"."GET_DB_VERSION"]
ORA-44001: invalid schema
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 6266
----- PL/SQL Call Stack -----
object line object
handle number name
22BD2830 14916 package body SYS.KUPW$WORKER
22BD2830 6293 package body SYS.KUPW$WORKER
22BD2830 12689 package body SYS.KUPW$WORKER
22BD2830 11969 package body SYS.KUPW$WORKER
22BD2830 3278 package body SYS.KUPW$WORKER
22BD2830 6882 package body SYS.KUPW$WORKER
22BD2830 1259 package body SYS.KUPW$WORKER
229C03C4 2 anonymous block
Job "SYSTEM"."SYS_IMPORT_SCHEMA_02" stopped due to fatal error at 15:00:52
How can I fix this?
Is it possible that the export was done on a different version and that causes the problem?

Welcome to the forums!
You may be running into a bug mentioned in MOS Doc 742343.1 (Ora-39126: Worker Unexpected Fatal Error In Kupw$Worker.Put_ddl and ORA-44001)
HTH
Srini -
XMLAGG giving ORA-19011 when creating CDATA with large embedded XML
What I'm trying to achieve is to embed XML (XMLTYPE return type) inside a CDATA block. However, I'm receiving "ORA-19011: Character string buffer too small" when generating large amounts of information within the CDATA block using XMLCDATA within an XMLAGG function.
Allow me to give a step by step explanation through the thought process.
h4. Creating the inner XML element
For example, suppose I have the subquery below
select
XMLELEMENT("InnerElement",DUMMY) as RESULT
from dual
;
I would get the following.
RESULT
<InnerElement>X</InnerElement>

h4. Creating outer XML element, embedding inner XML element in CDATA
Now, if I my desire were to embed XML inside a CDATA block, that's within another XML element, I can achieve it by doing so
select
XMLELEMENT("OuterElement",
XMLCDATA(XML_RESULT)
) XML_IN_CDATA_RESULT
FROM
(select
XMLELEMENT("InnerElement",DUMMY) as XML_RESULT
from dual)
;
This gets exactly what I want: XML embedded in a CDATA block, and the CDATA block inside an XML element.
XML_IN_CDATA_RESULT
<OuterElement><![CDATA[<InnerElement>X</InnerElement>]]></OuterElement>

So far so good. But the real-world dataset naturally isn't that tiny. We'd have more than one record. For reporting, I'd like to put all the <OuterElement> elements under an XML root.
h4. Now, I want to put that data in XML root element called <Root>, and aggregate all the <OuterElement> under it.
select
  XMLELEMENT("Root",
    XMLAGG(
      XMLELEMENT("OuterElement",
        XMLCDATA(INNER_XML_RESULT)
      )
    )
  )
FROM
  (select
     XMLELEMENT("InnerElement",DUMMY) as INNER_XML_RESULT
   from dual)
;
And to my excitement, I get what I want:
<Root>
<OuterElement><![CDATA[<InnerElement>X</InnerElement>]]></OuterElement>
<OuterElement><![CDATA[<InnerElement>Y</InnerElement>]]></OuterElement>
<OuterElement><![CDATA[<InnerElement>Z</InnerElement>]]></OuterElement>
</Root> But... like the real world again... the content of <InnerElement> isn't always so small and simple.
h4. The problem comes when <InnerElement> contains lots and lots of data.
When attempting to generate large XML, XMLAGG complains the following:
ORA-19011: Character string buffer too small

The challenge is to keep the XML formatting of <InnerElement> within CDATA. A particular testing tool I'm using parses XML out of a CDATA block. I'm hoping to use [Oracle] SQL to generate a test suite to be imported to the testing tool.
I would appreciate any help and insight I could receive, and hopefully overcome this roadblock.
Edited by: user6068303 on Jan 11, 2013 12:33 PM
Edited by: user6068303 on Jan 11, 2013 12:34 PM

That's an expected error.
XMLCDATA takes a string as input, but you're passing it an XMLType instance, therefore an implicit conversion occurs from XMLType to VARCHAR2 which is, as you know, limited to 4000 bytes.
This indeed gives an error :
SQL> select xmlelement("OuterElement", xmlcdata(inner_xml))
2 from (
3 select xmlelement("InnerElement", rpad(to_clob('X'),8000,'X')) as inner_xml
4 from dual
5 ) ;
ERROR:
ORA-19011: Character string buffer too small
no rows selected

The solution is to serialize the XMLType to CLOB before passing it to XMLCDATA:
SQL> select xmlelement("OuterElement",
2 xmlcdata(xmlserialize(document inner_xml))
3 )
4 from (
5 select xmlelement("InnerElement", rpad(to_clob('X'),8000,'X')) as inner_xml
6 from dual
7 ) ;
XMLELEMENT("OUTERELEMENT",XMLCDATA(XMLSERIALIZE(DOCUMENTINNER_XML)))
<OuterElement><![CDATA[<InnerElement>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

(Use the getClobVal method if your version doesn't support XMLSerialize.) -
Getting ORA-01031 when attempting to Create Database
I am attempting to issue a CREATE DATABASE statement via JDBC but I am getting a "ORA-01031: insufficient privileges" error. I am logged in as an administrator and I have no trouble creating tables. I am also receiving the same error when attempting this from SQLPlus. I'm stumped.
Thanks,
PinoPino,
I haven't verified this, but only the SYS user can create a DATABASE in Oracle, and you also need to use a special login (when going via SQL*Plus) which JDBC cannot handle. Hence your failures both via JDBC and SQL*Plus.
By the way, if you are new to Oracle, but have experience in another database, then the Oracle idea of a DATABASE is probably different than what you're used to. That's probably why you're stumped. If you haven't already done so, I suggest reading the Oracle Database Concepts volume of the Oracle documentation, which is available from:
http://tahiti.oracle.com
Good Luck,
Avi. -
Ora-28509 and ora-28511 when attempting to join oracle with postgresql in query
I have attempted to run the following statement.
select a."uri", b.person_id from "sources"@postgresql a, ct_person_src_ref_store b where decoder(b.source_uri) = a."id";
This is the error I get.
ERROR at line 1:
ORA-28511: lost RPC connection to heterogeneous remote agent using
SID=ORA-28511: lost RPC connection to heterogeneous remote agent using
SID=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DAT
A=(SID=posgre133)))
ORA-28509: unable to establish a connection to non-Oracle system
ORA-02063: preceding line from POSTGRESQL
Process ID: 116908
Session ID: 5197 Serial number: 49491
I have a resulting trace file from having HS_FDS_TRACE_LEVEL = DEBUG. It's kind of long (490 lines). I'm hesitant to paste the whole log into this posting. Is there a certain part of the trace file that I can post that would assist in helping me determine the cause of the error?
Thanks

That works now....
When we ask the Postgres driver which columns/data types are in that table it reports:
Entered hgopcda at 2014/04/16-20:59:52
Column:27(uri): dtype:-1 (LONGVARCHAR), prc/scl:8190/0, nullbl:1, octet:0, sign:1, radix:10
Exiting hgopcda, rc=0 at 2014/04/16-20:59:52
Which then leads to:
SQL text from hgopars, id=1, len=32 ...
00: 53454C45 43542022 6964222C 22757269 [SELECT "id","uri]
10: 22204652 4F4D2022 736F7572 63657322 [" FROM "sources"]
Column:1(id): dtype:-5 (BIGINT), prc/scl:19/0, nullbl:0, octet:0, sign:1, radix:0
Exiting hgopcda, rc=0 at 2014/04/16-21:05:15
Entered hgopcda at 2014/04/16-21:05:15
Column:2(uri): dtype:-1 (LONGVARCHAR), prc/scl:8190/0, nullbl:1, octet:8190, sign:1, radix:0
Exiting hgopcda, rc=0 at 2014/04/16-21:05:15
hgodscr, line 464: Printing hoada @ 0x1ab8820
MAX:2, ACTUAL:2, BRC:1, WHT=5 (SELECT_LIST)
hoadaMOD bit-values found (0x20:NEGATIVE_HOADADTY,0x200:TREAT_AS_CHAR)
DTY NULL-OK LEN MAXBUFLEN PR/SC CST IND MOD NAME
-5 BIGINT N 8 8 0/ 0 0 0 20 id
-1 LONGVARCHAR Y 0 0 0/ 0 0 0 220 uri
Exiting hgodscr, rc=0 at 2014/04/16-21:05:15
=> so far so good. The gateway exits from that routine, and here it can't locate the database process anymore:
Exiting hgodscr, rc=0 at 2014/04/16-21:05:15
HS Agent received unexpected RPC disconnect
Network error 1004: NCR-01004: NCRS: Write error.
Could you please upload the alert.log details matching that timestamp?
- Klaus -
ORA-03206 when attempting to AUTOEXTEND system tablespace
Hello.
We're running 11.2.0.2.0 on 64-bit Linux.
A week or so ago, I noticed my SYSTEM tablespace was getting close to its limit of 32G, so I set it to autoextend by 1G, with a max size of UNLIMITED.
This seemed to work. However, OEM shows it as still having a limit of 32G. I'm guessing the limit of 32G is treated as UNLIMITED.
So, I tried extending to 50G. This failed with ORA-03206. Presumably as the SYSTEM tablespace isn't a BIGFILE tablespace.
The majority of suggestions mention reducing the datafile maxsize, which isn't an option for us.
As far as I can see I'll have to either:
1. Add a new datafile (can we do this on SYSTEM?)
2. Change SYSTEM to a BIGFILE tablespace
Can anyone advise on the best course of action here? I'm guessing 1) is safer than 2) and 2) would need downtime, but would also be future proof.
Any help/suggestions would be much appreciated, as always,
Thanks,
Ray

Many thanks for the quick reply. I also had an inkling that the tablespace was growing unusually fast... it turns out it was the FGA_LOG$ table, which lives in the SYSTEM tablespace and had grown to a massive 25GB!
Another thread deals with a similar issue:
Extents Issue
Truncated FGA_LOG$ and all's well again. We need to examine our FGA policy but for now the problem is solved.
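For anyone hitting the same symptom, a sketch of the check and cleanup described above (the truncate destroys the fine-grained auditing trail, so export it first if you need it):

```sql
-- How much of SYSTEM is the audit trail using?
select segment_name, round(bytes/1024/1024/1024, 1) as gb
  from dba_segments
 where segment_name = 'FGA_LOG$';

-- As SYS, reclaim the space (irreversibly removes FGA audit records):
-- truncate table sys.fga_log$;
```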
Ray -
ORA-31020 when using XML with external DTD or entities
I'd like to parse XML documents against a modular DTD that references other DTDs. This works fine with Oracle 9i. But after upgrading to 11g, the parsing of XML-instances fails and DBMS_XMLPARSER.parseClob produces ORA-31020.
The same error occurs even if I simply try to store XML with a reference to an external DTD as xmltype:
SQL> select xmltype('<?xml version="1.0" encoding="iso-8859-1"?><!DOCTYPE ewl-artikel SYSTEM "http://www.foo.com/example.dtd"><test>123</test>') from dual;
ERROR:
ORA-31020: The operation is not allowed, reason: For security reasons, ftp
and http access over XDB repository is not allowed on server side
ORA-06512: at "SYS.XMLTYPE", line 310
ORA-06512: at line 1
How can I use external DTDs on remote servers in order to parse XML in an 11g database? Any ideas for a workaround? Thanks in advance!

This is my PL/SQL validation procedure:
procedure validatexml (v_id in number default 0) is
  PARSER       DBMS_XMLPARSER.parser;
  DTD_SOURCE   clob;
  DTD_DOCUMENT xmldom.DOMDocumentType;
  XML_INSTANCE xmltype;
BEGIN
  -- load DTD from XDB repository
  SELECT httpuritype('http://example.foo.de/app1/DTD1.dtd').getclob() into DTD_SOURCE from dual;
  -- load XML instance
  select co_xml into XML_INSTANCE from tb_xmltab where co_id = v_id;
  -- parse XML instance
  PARSER := DBMS_XMLPARSER.newParser;
  xmlparser.setValidationMode( PARSER, false );
  xmlparser.parseDTDClob( PARSER, DTD_SOURCE, 'myfirstnode' );
  DTD_DOCUMENT := xmlparser.getDoctype( PARSER );
  xmlparser.setValidationMode( PARSER, true );
  xmlparser.setDoctype( PARSER, DTD_DOCUMENT );
  -- parseClob expects a CLOB, so serialize the XMLType first
  DBMS_XMLPARSER.parseClob( PARSER, XML_INSTANCE.getClobVal() );
  DBMS_XMLPARSER.freeParser( PARSER );
  htp.print('<P>XML instance successfully validated!<P>');
end validatexml;
JDBC error when attempting SQL with date offsets
I am working on an issue with I am having with a JDBC program connecting to Oracle 8i. The program reads statements from configuration files, and for this particular statement, I need to get all the inactive records older then 2 weeks.
I tried using the statement
SELECT MAINTLOGID FROM MAINTLOG WHERE ACTIVE = 'F' AND UPDATETIME < (SYSDATE - 14)
That returns with an SQLException of "invalid relational operator". This statement works fine in Toad, and it also works fine if the UPDATETIME comparison is removed. So at this point, I figure it doesn't like the datetime calculation. So I wrote a function DaysAgo which returns a date. But any of the statements below still returns the same error.
SELECT MAINTLOGID FROM MAINTLOG WHERE ACTIVE = 'F' AND UPDATETIME < DaysAgo(14)
SELECT MAINTLOGID FROM MAINTLOG WHERE ACTIVE = 'F' AND UPDATETIME < { DaysAgo(14) }
SELECT MAINTLOGID FROM MAINTLOG WHERE ACTIVE = 'F' AND UPDATETIME < { fn DaysAgo(14) }
Does anyone know the correct syntax to get what I want done? Any help would be appreciated.

Never mind, everyone. I found my issue. The SQL statements are in an XML configuration file, and the less-than sign was being treated as the start of a new tag. Doing
<![CDATA[ SELECT MAINTLOGID FROM MAINTLOG WHERE ACTIVE = 'F' AND UPDATETIME < DaysAgo(14) ]]>
works. -
ORA-03237 when creating table with BLOB columns on Oracle 9i
I am playing with my new Oracle 9i database. I am trying to copy
tables from my 8.1.6 db. Three tables won't create due to this
new message. The 9i documentation says it's something to do with
freelist groups but my DBA says he hasn't set up any.
The tablespace is one of these newfangled locally managed
tablespaces (predetermined size 8k).
I can create these tables if I change the BLOB column to datatype
VARCHAR2(4000).
I have raised this through the usual channels, but I thought I'd
try this site as well.
Cheers, APC

victor_shostak wrote:
Figured, Virtual Columns work only for Binary XML.

They are only supported, currently, for Binary XML. -
Expdp with schemas containing special charecters
Hi!
I need to export a schema whose name contains the character "-".
When using expdp with the parameter schemas="Test1-test2" I get this error:
ORA-39039: Schema expression "IN ('Test1')" contains no valid schemas.
How do I escape the "-"?
Thanks,
Kenn

I have created the function below to replace the special characters before using XMLElement. Try it and see if it works for you. Good luck.
FUNCTION NameofFunction(cNUMBER_STRING VARCHAR2) RETURN VARCHAR2
AS
BEGIN
RETURN(
REPLACE(
REPLACE(
--REPLACE(
TRANSLATE(cNUMBER_STRING,
'oAAAAAAECEEEEIIIIDNOOOOOxOUUUUYpbaaaaaaaceeeeiiiionooooouuuuypy'
--' & ','&'||'amp;'),
CHR(26),''),
CHR(27),'')
END NameofFunction;
Regards,
Rajesh Kanakagiri
Edited by: user2899093 on Jan 20, 2010 1:16 PM -
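For the original question, the usual way to handle a special character like "-" in a schema name is to make it a quoted identifier, so Data Pump takes it literally rather than parsing it as an expression. A sketch using a parameter file (directory and file names are placeholders):

```
# expdp parameter file (sketch)
schemas="Test1-test2"
directory=DUMP_DIR
dumpfile=test1_test2.dmp
logfile=test1_test2.log
```

Then run expdp user/password parfile=expdp_schema.par. On the command line the double quotes usually need escaping from the shell (e.g. schemas=\"Test1-test2\"), which is why a parfile is the less error-prone route.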
Expdp fail with ORA-39126 error
When I try to export a partitioned table it throws the following error. I have tried it with a newly created table as well, but it still fails. My DB version is: 11.1.0.6.0 (RAC)
[oracle@db2 ORADATA8]$
[oracle@db2 ORADATA8]$
[oracle@db2 ORADATA8]$ ls -l
total 8
drwxr-xr-x 2 oracle oinstall 8192 Jul 1 11:16 CRESTELDATA
drwxr-xr-x 2 oracle oinstall 3896 Jul 3 11:21 DATAPUMP
drwxr-xr-x 2 oracle oinstall 3896 Feb 18 11:23 lost+found
[oracle@db2 ORADATA8]$
[oracle@db2 ORADATA8]$
[oracle@db2 ORADATA8]$ expdp CRESTELMEDIATIONPRD501/CRESTELMEDIATIONPRD501 tables=TBLINVALIDCDR:PSINVALIDCDR18APR2010 DIRECTORY=ORADATA8 PARALLEL=5 COMPRESSION=ALL DUMPFILE=INVALIDCDR_%U.dmp LOGFILE=expdpINVALIDCDR.log
Export: Release 11.1.0.6.0 - 64bit Production on Saturday, 03 July, 2010 11:24:34
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting "CRESTELMEDIATIONPRD501"."SYS_EXPORT_TABLE_03": CRESTELMEDIATIONPRD501/******** tables=TBLINVALIDCDR:PSINVALIDCDR18APR2010 DIRECTORY=ORADATA8 PARALLEL=5 COMPRESSION=ALL DUMPFILE=INVALIDCDR_%U.dmp LOGFILE=expdpINVALIDCDR.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 3.917 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.CREATE_OBJECT_ROWS [TABLE:"CRESTELMEDIATIONPRD501"."TBLINVALIDCDR"]
ORA-01114: IO error writing block to file 6 (block # 79240)
ORA-29701: unable to connect to Cluster Manager
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 7709
----- PL/SQL Call Stack -----
object line object
handle number name
0x21ac5fc60 18051 package body SYS.KUPW$WORKER
0x21ac5fc60 7736 package body SYS.KUPW$WORKER
0x21ac5fc60 6945 package body SYS.KUPW$WORKER
0x21ac5fc60 2519 package body SYS.KUPW$WORKER
0x21ac5fc60 8342 package body SYS.KUPW$WORKER
0x217190f58 1 anonymous block
0x217687e98 1501 package body SYS.DBMS_SQL
0x21ac5fc60 8201 package body SYS.KUPW$WORKER
0x21ac5fc60 1477 package body SYS.KUPW$WORKER
0x21ac60350 2 anonymous block
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "CRESTELMEDIATIONPRD501"."TBLINVALIDCDR":"PSINVALIDCDR18APR2010"."TER418APR2010" 36.27 MB 723142 rows
. . exported "CRESTELMEDIATIONPRD501"."TBLINVALIDCDR":"PSINVALIDCDR18APR2010"."TER218APR2010" 16.13 MB 325123 rows
. . exported "CRESTELMEDIATIONPRD501"."TBLINVALIDCDR":"PSINVALIDCDR18APR2010"."TER318APR2010" 512.3 KB 10467 rows
. . exported "CRESTELMEDIATIONPRD501"."TBLINVALIDCDR":"PSINVALIDCDR18APR2010"."TER118APR2010" 556.2 MB 12261974 rows
Master table "CRESTELMEDIATIONPRD501"."SYS_EXPORT_TABLE_03" successfully loaded/unloaded
Dump file set for CRESTELMEDIATIONPRD501.SYS_EXPORT_TABLE_03 is:
/ORADATA8/DATAPUMP/INVALIDCDR_01.dmp
/ORADATA8/DATAPUMP/INVALIDCDR_02.dmp
/ORADATA8/DATAPUMP/INVALIDCDR_03.dmp
/ORADATA8/DATAPUMP/INVALIDCDR_04.dmp
/ORADATA8/DATAPUMP/INVALIDCDR_05.dmp
/ORADATA8/DATAPUMP/INVALIDCDR_06.dmp
/ORADATA8/DATAPUMP/INVALIDCDR_07.dmp
/ORADATA8/DATAPUMP/INVALIDCDR_08.dmp
Job "CRESTELMEDIATIONPRD501"."SYS_EXPORT_TABLE_03" completed with 3 error(s) at 11:26:36
Please reply.
Edited by: Yuvraj on Jul 3, 2010 1:13 PM

Hi,
It seems that you have got a problematic disk there. There might be a problem with the disk the datafile resides on, since you are receiving I/O errors. Moreover, there might be a problem with the CSS; check that all the cluster processes are running.
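Since the log blames file 6 (ORA-01114 writing block 79240), a hedged first step is to identify that file and verify it; the dbv path below is a placeholder for whatever V$DATAFILE returns:

```sql
-- Which datafile is "file 6"?
select file#, name, status from v$datafile where file# = 6;

-- Then, from the OS shell, scan it for corrupt blocks (path is a placeholder):
--   dbv file=/ORADATA8/CRESTELDATA/somefile.dbf blocksize=8192
```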
regards