ORA-30036 when modifying column in large table
Oracle version 10.2.0.4
Afternoon everyone!
I have a large table that I need to modify a column in - increasing from CHAR(3) to CHAR(6)
On altering the table I'm getting an ORA-30036: unable to extend segment by 8 in undo tablespace
Increasing undo tbs size isn't really an option, and I don't really want to go copying the table elsewhere either - again due to space limitations.
Is there a way to avoid this undo exhaustion? Will disabling logging for this table solve my issue? Or is there another way similar to the 'checkpoint' clause you can use when dropping columns?
Many thanks!
Adam M
Just in case nothing better appears and you can't increase the UNDO ...
1. Create a new table with the correct datatype
2. Insert data from the old table into the new table, in batches if necessary. Divide the data by key values if possible, or process ranges of rowids
3. Make sure dependent objects are created for the new table
4. Drop the old table
5. Rename the new table to the old table's name
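The batch-copy approach above can be sketched in SQL (table, column, and key names are hypothetical; committing after each batch keeps undo usage small):

```sql
-- 1. New table with the widened column (structure copied, no rows)
CREATE TABLE big_table_new AS
  SELECT * FROM big_table WHERE 1 = 0;
ALTER TABLE big_table_new MODIFY (code_col CHAR(6));

-- 2. Copy in key-range batches, committing after each to release undo
INSERT INTO big_table_new
  SELECT * FROM big_table WHERE id BETWEEN 1 AND 100000;
COMMIT;
INSERT INTO big_table_new
  SELECT * FROM big_table WHERE id BETWEEN 100001 AND 200000;
COMMIT;
-- ... repeat for the remaining key ranges ...

-- 3. Recreate indexes, constraints, grants, triggers on big_table_new
-- 4./5. Swap the tables
DROP TABLE big_table;
RENAME big_table_new TO big_table;
```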
Similar Messages
-
Product: ORACLE SERVER
Date written: 2004-05-03
(V9I) ORA-2095 WHEN MODIFYING LARGE_POOL_SIZE OR JAVA_POOL_SIZE
===============================================================
PURPOSE
This document explains how to resolve the ORA-2095 error that occurs when
attempting to dynamically change the LARGE_POOL_SIZE or JAVA_POOL_SIZE
parameter in Oracle 9i.
Problem Description
When attempting to change the LARGE_POOL_SIZE or JAVA_POOL_SIZE parameter
dynamically from SQL*Plus (i.e., using the Dynamic SGA feature), an ORA-2095
error occurs.
SQL> alter system set large_pool_size=32m scope=memory;
alter system set large_pool_size=32m scope=memory
ERROR at line 1:
ORA-02095: specified initialization parameter cannot be modified
SQL> alter system set large_pool_size=32m scope=spfile;
alter system set large_pool_size=32m scope=spfile
ERROR at line 1:
ORA-02095: specified initialization parameter cannot be modified
SQL> alter system set large_pool_size=32m scope=both;
alter system set large_pool_size=32m scope=both
ERROR at line 1:
ORA-02095: specified initialization parameter cannot be modified
The same situation as in the examples above also occurs for the JAVA_POOL_SIZE parameter.
Workaround
none
Solution Description
When specifying the new parameter value, enclose it in single quotes.
SQL> alter system set LARGE_POOL_SIZE='32M' SCOPE=spfile;
System altered.
SQL> alter system set JAVA_POOL_SIZE='50M' SCOPE=spfile;
System altered.
Explanation
1. The LARGE_POOL_SIZE and JAVA_POOL_SIZE parameters are static parameters
in Oracle 9i.
These two parameters can only be changed within the SPFILE scope.
2. The values stored in the LARGE_POOL_SIZE and JAVA_POOL_SIZE parameters
are STRING values, not NUMERIC values. Therefore, to change these parameter
values within the SPFILE scope, the value must be specified as a
character string.
Reference Documents
<Note:152269.1>
<Note:148495.1>
Hi Froylan,
This is the Portal Content Management forum. For your database question please use the database forums:
http://forums.oracle.com/forums/index.jsp?cat=18
thanks,
christian -
ORA-22856: cannot add columns to object tables
Oracle 9i
==========
I tried to alter a table using a simple script.
ALTER TABLE tablename ADD col VARCHAR(50);
And it gave me the error: -
ORA-22856: cannot add columns to object tables
Can someone give me some direction on how to resolve this? The script executes fine on a test env.
Thanks in advance
Thanks for replying...
Name Null? Type
BUS NUMBER
REP VARCHAR2(60)
COS NUMBER
REP VARCHAR2(50)
ACC NUMBER
ADJ VARCHAR2(2000)
BAS NUMBER
BIL VARCHAR2(360)
BIL VARCHAR2(50)
BIL VARCHAR2(3)
BIL VARCHAR2(50)
BLP VARCHAR2(240)
BLP NUMBER
BOO DATE
COM NUMBER
COM NUMBER
COM NUMBER(15)
COM NUMBER(15)
COM VARCHAR2(4000)
COM VARCHAR2(30)
CUR NUMBER
CUS VARCHAR2(240)
DEA VARCHAR2(240)
EVE VARCHAR2(240)
HEA VARCHAR2(240)
HEA VARCHAR2(240)
HEA VARCHAR2(240)
HEA VARCHAR2(240)
INC VARCHAR2(30)
INV DATE
MAN VARCHAR2(360)
ORD NUMBER
ORD VARCHAR2(240)
PAY VARCHAR2(240)
PAY NUMBER
HEL NUMBER
PEO VARCHAR2(150)
PER NUMBER
PER VARCHAR2(30)
PER NUMBER(15)
PRO VARCHAR2(240)
PRO VARCHAR2(240)
QUA NUMBER(15)
QUO NUMBER
QUO DATE
QUO DATE
QUO VARCHAR2(80)
RED VARCHAR2(240)
REP VARCHAR2(360)
REP VARCHAR2(30)
REP VARCHAR2(30)
REP VARCHAR2(150)
REP VARCHAR2(3)
REP VARCHAR2(150)
REP VARCHAR2(50)
ROL VARCHAR2(60)
SHI VARCHAR2(360)
SPL VARCHAR2(240)
STA DATE
TER DATE
TOT VARCHAR2(240)
TRX NUMBER
TRX VARCHAR2(240)
TRX VARCHAR2(20)
TRX VARCHAR2(30)
WAI VARCHAR2(240)
YEA NUMBER
MAN VARCHAR2(30)
BUF NUMBER
BUF VARCHAR2(60)
EMC NUMBER
EMC VARCHAR2(60)
INT NUMBER
INT VARCHAR2(60)
SUP NUMBER
SUP VARCHAR2(60)
BRM NUMBER
BRM VARCHAR2(60)
SUP NUMBER
SUP VARCHAR2(60)
REP NUMBER
REP VARCHAR2(60)
DIV NUMBER
DIV VARCHAR2(60)
SUP NUMBER
SUP VARCHAR2(60)
REG NUMBER
REG VARCHAR2(60)
SUP NUMBER
SUP VARCHAR2(60)
ARE NUMBER
ARE VARCHAR2(60)
DIS NUMBER
DIS VARCHAR2(60)
ROL VARCHAR2(240)
ACC NUMBER
BON NUMBER
COM VARCHAR2(240)
COM VARCHAR2(240)
REP NUMBER
BIL NUMBER
BAS NUMBER
TOT NUMBER
TOT NUMBER
OVE NUMBER
BLP NUMBER
QUO VARCHAR2(30)
FN_ NUMBER
FN_ VARCHAR2(10)
SAL NUMBER
RES NUMBER
CRE NUMBER
MAN VARCHAR2(100)
PER NUMBER
PLA NUMBER
PLA NUMBER
REV VARCHAR2(30)
REP VARCHAR2(150)
OU_ NUMBER
OU_ NUMBER
EXC VARCHAR2(1)
MAN NUMBER
INV NUMBER
REP NUMBER
UPL VARCHAR2(1)
COM NUMBER
SEQ NUMBER
QUO NUMBER
PRO VARCHAR2(10)
PRO NUMBER
PRO NUMBER
BI_ NUMBER
CUR NUMBER
YTD NUMBER
PAY NUMBER
PAY DATE
PAY VARCHAR2(1000)
PAY VARCHAR2(80)
PAI VARCHAR2(1)
HOL VARCHAR2(1)
SRP NUMBER
WAI VARCHAR2(1)
WAI VARCHAR2(1)
GBK VARCHAR2(10)
TRX DATE
PAY NUMBER(15)
FIX NUMBER
TER DATE
ADJ VARCHAR2(240)
PAY NUMBER
PRO DATE
OIC DATE
OIC NUMBER
OIC VARCHAR2(30)
OIC NUMBER
HEL NUMBER
COM NUMBER
TRA NUMBER
HDR VARCHAR2(30)
LIN VARCHAR2(30)
LIN DATE
SRC DATE
EM_ DATE
EM_ DATE
ORD VARCHAR2(30)
REP VARCHAR2(150)
BIL VARCHAR2(300)
PER VARCHAR2(240)
Excuse the incomplete column names. All datatypes are basic ones and there are no constraints defined on any of the columns (dw env). The table is partitioned. -
Urgent!!! Modify column of a table having records
Dear all,
I have a table with a column VARCHAR2(12) and I need to modify this column to VARCHAR2(9) without losing any data! Is there any workaround?
Thanx a lot!!
This code was posted in one of the previous Oracle magazines:
The PL/SQL procedure to rename the column must be compiled into the database in which you want to rename columns. One note: I would run this as SYS:
create or replace procedure RenameColumn
(pUserName varchar2,
pTableName varchar2,
pOldColumnName varchar2,
pNewColumnName varchar2
)
is
vUserName dba_users.userName%type :=
upper(ltrim(rtrim(pUserName)));
vTableName dba_tables.table_name%type :=
upper(ltrim(rtrim(pTableName)));
vOldColumnName dba_tab_columns.column_name%type :=
upper(ltrim(rtrim(pOldColumnName)));
vNewColumnName dba_tab_columns.column_name%type :=
upper(ltrim(rtrim(pNewColumnName)));
vErrorMessage varchar2(4000);
eNotAuthorizedUser exception; /* -20101 */
eInvalidUser exception; /* -20102 */
eInvalidTable exception; /* -20103 */
eInvalidOldColumn exception; /* -20104 */
eInvalidNewColumn exception; /* -20105 */
cursor csrCheckUser
(pUser dba_users.userName%type)
is
select '1'
from dba_users
where userName = pUser;
cursor csrCheckTable
(pUser dba_tables.owner%type,
pTable dba_tables.table_name%type)
is
select '1'
from dba_tables
where owner = pUser
and table_name = pTable;
cursor csrCheckExistingColumn
(pUser dba_tables.owner%type,
pTable dba_tables.table_name%type,
pColumn dba_tab_columns.column_name%type)
is
select '1'
from dba_tab_columns
where owner = pUser
and table_name = pTable
and column_name = pColumn;
vDummy char(1);
begin
if user <> 'SYS'
then
raise eNotAuthorizedUser;
end if;
/* Check the value of vUserName */
if vUserName is null
then
raise eInvalidUser;
end if;
open csrCheckUser(vUserName);
fetch csrCheckUser into vDummy;
if csrCheckUser%notfound
then
close csrCheckUser;
raise eInvalidUser;
end if;
close csrCheckUser;
/* Check the value of vTableName */
if vTableName is null
then
raise eInvalidTable;
end if;
open csrCheckTable(vUserName, vTableName);
fetch csrCheckTable into vDummy;
if csrCheckTable%notfound
then
close csrCheckTable;
raise eInvalidTable;
end if;
close csrCheckTable;
/* Check the value of vOldColumnName */
if vOldColumnName is null
then
raise eInvalidOldColumn;
end if;
open csrCheckExistingColumn(vUserName, vTableName, vOldColumnName);
fetch csrCheckExistingColumn into vDummy;
if csrCheckExistingColumn%notfound
then
close csrCheckExistingColumn;
raise eInvalidOldColumn;
end if;
close csrCheckExistingColumn;
/* Check the value of vNewColumnName */
if vNewColumnName is null
then
raise eInvalidNewColumn;
end if;
open csrCheckExistingColumn(vUserName, vTableName, vNewColumnName);
fetch csrCheckExistingColumn into vDummy;
if csrCheckExistingColumn%found
then
close csrCheckExistingColumn;
raise eInvalidNewColumn;
end if;
close csrCheckExistingColumn;
/* Update the row in col$ Oracle dictionary */
update col$
set name = vNewColumnName
where (obj#, col#) in
(select obj#,
col#
from col$
where name = vOldColumnName
and obj# = (select obj#
from obj$
where name = vTableName
and owner# = (select user_id
from dba_users
where username = vUserName)));
commit;
exception
when eNotAuthorizedUser
then
vErrorMessage := 'User ' || user ||
' is not authorized to run this procedure.';
raise_application_error(-20101, vErrorMessage);
when eInvalidUser
then
vErrorMessage := 'Invalid user name: ' ||
pUserName || '.';
raise_application_error(-20102, vErrorMessage);
when eInvalidTable
then
vErrorMessage := 'Invalid table name: ' ||
pTableName || '.';
raise_application_error(-20103, vErrorMessage);
when eInvalidOldColumn
then
vErrorMessage := 'Invalid old column name: ' ||
pOldColumnName || '.';
raise_application_error(-20104, vErrorMessage);
when eInvalidNewColumn
then
vErrorMessage := 'Invalid new column name: ' ||
pNewColumnName || '.';
raise_application_error(-20105, vErrorMessage);
end RenameColumn;
Once you have the above compiled okay, you can rename your columns with:
begin
RenameColumn('SCOTT', 'EMPTEST', 'SAL', 'SALARY');
end;
Run this while still connected as SYS. In the parameters, SCOTT is the schema owner that contains the table, EMPTEST is the table that contains the column, SAL is the old column name, and SALARY is the new column name.
Have fun. -
I followed the example provided in
https://jtabadero.wordpress.com/2011/04/13/modifying-sync-framework-scope-definition-part-3-workarounds-addingremoving-columns/ to add columns to a table in a provisioned database. The sample code replaces the stored procedures triggered when the
table is updated with new stored procedures which include the additional columns
However, the sample does not update scope_config. When I am finished, the new columns do not appear in the scope config. Looking at the SQL code it is immediately obvious why. Here is an excerpt from the beginning of the query
DROP PROCEDURE [PATIENT_insertmetadata];
DROP PROCEDURE [PATIENT_updatemetadata];
DROP PROCEDURE [PATIENT_deletemetadata];
-- BEGIN Enable Snapshot Isolation on Database 'PatScan' if needed
IF EXISTS (SELECT NAME FROM sys.databases where NAME = N'PatScan' AND [snapshot_isolation_state] = 0)
BEGIN
ALTER DATABASE [PatScan] SET ALLOW_SNAPSHOT_ISOLATION ON
END
GO
-- END Enable Snapshot Isolation on Database 'PatScan' if needed
-- BEGIN Create Scope Info Table named [scope_info]
IF NOT EXISTS (SELECT t.name FROM sys.tables t JOIN sys.schemas s ON s.schema_id = t.schema_id WHERE t.name = N'scope_info' AND s.name = N'dbo')
BEGIN
CREATE TABLE [scope_info] ([scope_local_id] int IDENTITY(1,1) NOT NULL, [scope_id] uniqueidentifier DEFAULT NEWID() NOT NULL, [sync_scope_name] nvarchar(100) NOT NULL, [scope_sync_knowledge] varbinary(max) NULL, [scope_tombstone_cleanup_knowledge] varbinary(max)
NULL, [scope_timestamp] timestamp NULL, [scope_config_id] uniqueidentifier NULL, [scope_restore_count] int DEFAULT 0 NOT NULL, [scope_user_comment] nvarchar(max) NULL)
ALTER TABLE [scope_info] ADD CONSTRAINT [PK_scope_info] PRIMARY KEY ([sync_scope_name])
END
GO
-- END Create Scope Info Table named [scope_info]
-- BEGIN Create Scope Config Table named [scope_config]
IF NOT EXISTS (SELECT t.name FROM sys.tables t JOIN sys.schemas s ON s.schema_id = t.schema_id WHERE t.name = N'scope_config' AND s.name = N'dbo')
BEGIN
CREATE TABLE [scope_config] ([config_id] uniqueidentifier NOT NULL, [config_data] xml NOT NULL, [scope_status] char NULL)
ALTER TABLE [scope_config] ADD CONSTRAINT [PK_scope_config] PRIMARY KEY ([config_id])
END
GO
-- END Create Scope Config Table named [scope_config]
In an already provisioned database, the scope_info and scope_config tables already exist. Hence the portions of the script to create the scope_info and scope_config tables are skipped. I tried modifying the query to drop these tables. When I do so, new tables are created but they only define the modified table.
Is it necessary to update scope_info and scope_config? Are there are side effects (such as deprovisioning) if I drop and then recreate these tables? Can I just update the existing scope_config, replacing the portion of the XML which defines the table
being modified? Do I have to make changes in scope_info?
Thanks
Howard Weiss
Howard P. Weiss -
OutOfMemory error when trying to display large tables
We use JDeveloper 10.1.3. Our project uses ADF Faces + EJB3 Session Facade + TopLink.
We have a large table (over 100K rows) which we try to show to the user via an ADF Read-only Table. We build the page by dragging the facade findAllXXX method's result onto the page and choosing "ADF Read-only Table".
The problem is that during execution we get an OutOfMemory error. The Facade method attempts to extract the whole result set and to transfer it to a List. But the result set is simply too large. There's not enough memory.
Initially, I was under the impression that the table iterator would be running queries that automatically fetch just a chunk of the db table data at a time. Sadly, this is not the case. Apparently, all the data gets fetched. And then the iterator simply iterates through a List in memory. This is not what we needed.
So, I'd like to ask: is there a way for us to show a very large database table inside an ADF Table? And when the user clicks on "Next", to have the iterator automatically execute queries against the database and fetch the next chunk of data, if necessary?
If that is not possible with ADF components, it looks like we'll have to either write our own component or simply use the old code that we have which supports paging for huge tables by simply running new queries whenever necessary. Alternatively, each time the user clicks on "Next" or "Previous", we might have to intercept the event and manually send range information to a facade method which would then fetch the appropriate data from the database. I don't know how easy or difficult that would be to implement.
Naturally, I'd prefer to have that functionality available in ADF Faces. I hope there's a way to do this. But I'm still a novice and I would appreciate any advice.
Hi Shay,
We do use search pages and we do give the users the opportunity to specify search criteria.
The trouble comes when the search criteria are not specific enough and the result set is huge. Transferring the whole result set into memory will be disastrous, especially for servers used by hundreds of users simultaneously. So, we'll have to limit the number of rows fetched at a time. We should do this either by setting the Maximum Rows option for the TopLink query (or using rownum<=XXX inside the SQL), or through using a data provider that supports paging.
I don't like the first approach very much because I don't have a good recipe for calculating the optimum number of Maximum Rows for each query. By specifying some average number of, say, 500 rows, I risk fetching too many rows at once and I also risk filling the TopLink cache with objects that are not necessary. I can use methods like query.dontMaintainCache() but in my case this is a workaround, not a solution.
I would prefer fetching relatively small chunks of data at a time and not limiting the user to a certain number of maximum rows. Furthermore, this way I won't fetch large amounts of data at the very beginning and I won't be forced to turn off the caching for the query.
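For reference, the rownum-based chunking mentioned above is usually written as a nested query (the table and ordering column are hypothetical; :first_row and :last_row are bind variables supplied per page):

```sql
-- fetch one page of an ordered result set (classic Oracle 10g pagination)
SELECT *
  FROM (SELECT t.*, ROWNUM rn
          FROM (SELECT * FROM orders ORDER BY order_id) t
         WHERE ROWNUM <= :last_row)
 WHERE rn >= :first_row;
```

The innermost query fixes the ordering, the middle query caps the scan at :last_row, and the outer filter trims rows before :first_row.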
Regarding the "ADF Developer's Guide", I read there that "To create a table using a data control, you must bind to a method on the data control that returns a collection. JDeveloper allows you to do this declaratively by dragging and dropping a collection from the Data Control Palette."
So, it looks like I'll have to implement a collection which, in turn, implements the paging functionality that I need. Is the TopLink object you are referring to some type of collection? I know that I can specify a collection class that TopLink should use for queries through the query.useCollectionClass(...) method. But if TopLink doesn't provide the collection I need, I will have to write that collection myself. I still haven't found the section in the TopLink documentation that says what types of Collections are natively provided by TopLink. I can see other collections like oracle.toplink.indirection.IndirectList, for example. But I have not found a specific discussion on large result sets with the exception of Streams and Cursors and I feel uneasy about maintaining cursors between client requests.
And I completely agree with you about reading the docs first and doing the programming afterwards. Whenever time permits, I always do that. I have already read the "ADF Developer's Guide" with the exception of chapters 20 and 21. And I switched to the "TopLink Developer's Guide" because it seems that we must focus on the model. Unfortunately, because of the circumstances, I've spent a lot of time reading and not enough time practicing what I read. So, my knowledge is kind of shaky at the moment and perhaps I'm not seeing things that are obvious to you. That's why I tried using this forum -- to ask the experts for advice on the best method for implementing paging. And I'm thankful to everyone who replied to my post so far. -
XMLAGG giving ORA-19011 when creating CDATA with large embedded XML
What I'm trying to achieve is to embed XML (XMLTYPE return type) inside a CDATA block. However, I'm receiving "ORA-19011: Character string buffer too small" when generating large amounts of information within the CDATA block using XMLCDATA within an XMLAGG function.
Allow me to give a step by step explanation through the thought process.
h4. Creating the inner XML element
For example, suppose I have the subquery below
select
XMLELEMENT("InnerElement",DUMMY) as RESULT
from dual
;
I would get the following.
RESULT
<InnerElement>X</InnerElement>
h4. Creating outer XML element, embedding inner XML element in CDATA
Now, if I my desire were to embed XML inside a CDATA block, that's within another XML element, I can achieve it by doing so
select
XMLELEMENT("OuterElement",
XMLCDATA(XML_RESULT)
) XML_IN_CDATA_RESULT
FROM
(select
XMLELEMENT("InnerElement",DUMMY) as XML_RESULT
from dual)
;
This gets exactly what I want: XML embedded in a CDATA block, with the CDATA block inside an XML element.
XML_IN_CDATA_RESULT
<OuterElement><![CDATA[<InnerElement>X</InnerElement>]]></OuterElement>
So far so good. But the real-world dataset naturally isn't that tiny. We'd have more than one record. For reporting, I'd like to put all the <OuterElement> elements under an XML root.
h4. Now, I want to put that data in XML root element called <Root>, and aggregate all the <OuterElement> under it.
select
XMLELEMENT("Root",
XMLAGG(
XMLELEMENT("OuterElement",
XMLCDATA(INNER_XML_RESULT)
)
)
)
FROM
(select
XMLELEMENT("InnerElement",DUMMY) as INNER_XML_RESULT
from dual)
;
And to my excitement, I get what I want..
<Root>
<OuterElement><![CDATA[<InnerElement>X</InnerElement>]]></OuterElement>
<OuterElement><![CDATA[<InnerElement>Y</InnerElement>]]></OuterElement>
<OuterElement><![CDATA[<InnerElement>Z</InnerElement>]]></OuterElement>
</Root> But... like the real world again... the content of <InnerElement> isn't always so small and simple.
h4. The problem comes when <InnerElement> contains lots and lots of data.
When attempting to generate large XML, XMLAGG complains the following:
ORA-19011: Character string buffer too small
The challenge is to keep the XML formatting of <InnerElement> within CDATA. A particular testing tool I'm using parses XML out of a CDATA block. I'm hoping to use [Oracle] SQL to generate a test suite to be imported to the testing tool.
I would appreciate any help and insight I could receive, and hopefully overcome this roadblock.
Edited by: user6068303 on Jan 11, 2013 12:33 PM
Edited by: user6068303 on Jan 11, 2013 12:34 PM
That's an expected error.
XMLCDATA takes a string as input, but you're passing it an XMLType instance, therefore an implicit conversion occurs from XMLType to VARCHAR2 which is, as you know, limited to 4000 bytes.
This indeed gives an error :
SQL> select xmlelement("OuterElement", xmlcdata(inner_xml))
2 from (
3 select xmlelement("InnerElement", rpad(to_clob('X'),8000,'X')) as inner_xml
4 from dual
5 ) ;
ERROR:
ORA-19011: Character string buffer too small
no rows selected
The solution is to serialize the XMLType to CLOB before passing it to XMLCDATA :
SQL> select xmlelement("OuterElement",
2 xmlcdata(xmlserialize(document inner_xml))
3 )
4 from (
5 select xmlelement("InnerElement", rpad(to_clob('X'),8000,'X')) as inner_xml
6 from dual
7 ) ;
XMLELEMENT("OUTERELEMENT",XMLCDATA(XMLSERIALIZE(DOCUMENTINNER_XML)))
<OuterElement><![CDATA[<InnerElement>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
(use the getClobVal method if your version doesn't support XMLSerialize)
Ora-20001 when creating a form on table with report (bug?)
Having some trouble creating a "Form on table with report".
1) I pick my table
2) take most of the defaults on the page where you pick the report type (interactive) and the page number (I changed it to 950). next->
3) Do not use tabs. Next->
4) Select all columns for the report. THEN (here's the problem) set an optional where clause of system_role_name like 'ODPSPOPUP%'. Next->
5) choose standard edit link. next->
6) Specify a page of 951 for the form (leave others defaults). next->
7) Set the form primary key (defined in table). next->
8) use existing trigger. next->
9) choose all columns for the form. next->
10) Leave actions to insert, update, delete. next->
11) Get to the summary page and click Finish
Then I get an error page saying:
ORA-20001: Unable to create query and update page. ORA-20001: Unable to create query and update page. ORA-00933: SQL command not properly ended
If I go back to step 4 and erase my where clause the wizard completes successfully.
Also if I change the report type in step 2 from the default of "Interactive" to "Classic" the wizard completes successfully. However upon running the report I get a query parse error. Looks like the where clause in the report sql is: system_role_name like ''ODPSPOPUP%'' (two single quotes on each side).
It looks as if you cannot specify a where clause with a quoted string. The wizard is expecting a bind variable.
Workaround(s):
1) Don't specify a where clause when report type = Interactive in "create form on table with report" wizard.
or
2) Specify a bogus where clause using bind variable syntax such as "system_role_name like :BOGUSVARIABLE". Then edit the report query once the wizard finishes and change the where clause to the constant string you wanted to use in the wizard (e.g. "system_role_name like 'MYSYSTEM%'")
Apex: 3.2.0.00.27
Database: Oracle Database 11g Enterprise Edition 11.1.0.7.0 64bit Production (Oracle EL5)
Andy,
It's a bug, all right. Thanks for the detailed problem description. We'll fix it when we can.
Scott -
Ora-01440 on modifying column - urgent needed
Hi
we have following table
SQL> desc VULNERABILITY_REG_PERIODS
Name Null? Type
CLIENT_ID NOT NULL NUMBER(10)
ASSOCIATED_FAMILY NUMBER(7)
HCP VARCHAR2(8)
STAFF_TYPE VARCHAR2(2)
DATE_ON_PROTECTION NOT NULL DATE
DATE_OFF_PROTECTION NOT NULL DATE
SECTOR VARCHAR2(2)
AUDIT_TIMESTAMP DATE
AUDIT_ID NUMBER
AUDIT_ORIGIN VARCHAR2(30)
RECORD_STATUS CHAR(1)
I am trying to modify the column:
alter table VULNERABILITY_REG_PERIODS modify audit_id number(10,0);
The maximum audit_id value is only 4 digits long:
SQL> select max(audit_id) from VULNERABILITY_REG_PERIODS;
MAX(AUDIT_ID)
9936
What is the reason for the above error (ORA-01440) when modifying this column?
thanx
kedar
Hi,
I think that internally, Oracle understands NUMBER as a synonym for NUMBER(38). The precision can range from 1 to 38, so modifying the column to NUMBER(10,0) would decrease its precision while it contains data, which is why this error (ORA-01440) is raised.
You can try this below:
SGMS@ORACLE10> create table x (cod number);
Table created.
SGMS@ORACLE10> insert into x values (1000);
1 row created.
SGMS@ORACLE10> commit;
Commit complete.
SGMS@ORACLE10> desc x
Name Null? Type
COD NUMBER
SGMS@ORACLE10> alter table x add (temp number);
Table altered.
SGMS@ORACLE10> update x set temp = cod;
1 row updated.
SGMS@ORACLE10> update x set cod = null;
1 row updated.
SGMS@ORACLE10> alter table x modify cod number (10,0);
Table altered.
SGMS@ORACLE10> update x set cod = temp;
1 row updated.
SGMS@ORACLE10> alter table x drop column temp;
Table altered.
SGMS@ORACLE10> desc x
Name Null? Type
COD NUMBER(10)
SGMS@ORACLE10>
Cheers -
ORA-00904 when use column alias in Record Group Query
Is it possible to use column aliases in Record Group Queries?
I have a query that runs fine in SQL*Developer, but gives me runtime errors when I use it as a Record Group Query.
When I use it as a Record Group Query, the Form compiles, but at runtime I receive the following errors:
FRM-40502: ORACLE error: unable to read list of values
when I use Help - Display Error, I see:
ORA-00904:"CHILDNAME":invalid identifier
The query is something like this
select decode(complex stuff here) as "childname" ....
I've tried it with and without the double quotes surrounding the alias name, and have also tried it without using the "as" keyword.
I would appreciate any suggestions or insights. I'm using Forms 9.0.4.
Thanks.
It looks like this is caused by bug 725059:
"FILTER BEFORE DISPLAY" DOESN'T WORK IF LOV HAS COLUMN ALIASES (TRIAGE1098)
My LOV does have the Filter Before Display turned on. Here's the text of the bug:
IF an LOV is created with column aliases in the select statement, (eg: select ename emp_name from emp) and the LOV property "Filter Before Display" is "Yes", THEN when you attempt to filter the LOV at runtime, (eg: type '%' then press the 'Find' button) the internal WHERE clause that forms sends to the database is: WHERE column_alias LIKE '%%' This is incorrect syntax. A client-side sqlnet trace shows this. The correct syntax should be: WHERE column LIKE '%%' . The incorrect syntax results in no rows returned. However no error is displayed by forms to the user. -
Internal Error ORA-0600 when creating multiple consumer queue table
Hi,
I tried to create a multiple consumer queue table with the following statements:
exec DBMS_AQADM.GRANT_TYPE_ACCESS ('system');
create type Change_History_Trigger_Data as object(Col1 VARCHAR2(255), Col2 VARCHAR2(128), Col3 VARCHAR2(255), Col4 TIMESTAMP, Col5 VARCHAR2(64), Col6 VARCHAR2(64), Col7 NUMBER(8));
Works fine till this stage. But the following statement produces an ORA-0600 internal error message.
EXEC DBMS_AQADM.CREATE_QUEUE_TABLE ('change_history_queue_tbl','Change_History_Trigger_Data', 'tablespace my_tblspace','ENQ_TIME',TRUE,DBMS_AQADM.TRANSACTIONAL);
ERROR at line 1:
ORA-00600: internal error code, arguments: [kcbgtcr_4], [14392], [0], [1], [], [], [], []
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 2224
ORA-06512: at "SYS.DBMS_AQADM", line 58
ORA-06512: at line 1
I tried creating the same queue table with Multiple consumer = FALSE, and it works fine. But not with multiple consumer = TRUE
I'm running on Oracle9i Enterprise Edition Release 9.2.0.6.0
Any possible solutions?
Problem solved.
The queue name was too long. Found a post with the same problem.
Re: Create Queue Table ORA-00600 while dbms_aqadm.create_queue_table
thanks anyway -
Problems of having a large table (columns and rows).
hi people,
Can anyone give a list of problems that I will face when I have a large table (columns/rows)? My table generates 5 lakh (500,000) records in a year and it keeps growing.
If the answer is laborious, please give the link to a website where I can download it.
How to overcome it?
Thanks in advance
Ganapathyhi justin
i understand u problem too.
10 lakhs in Indian money = 1 million in the US.
Iam trying to understand a system where there will be millions of record over a period of years. I felt that i need to address the problems that should be forseen before the system is developed(some thing like a priliminary investigation or feasibility study before taking up the project). So as of now i have no idea of the system, but do know that there will be millions of records. Iam trying to prepare a document that addresses these issues and how we are going to circumvent the issues and arrive at a solution.
Thanks
Ganapathy -
Modifying datatype of columns across multiple tables
Hi,
I have a requirement where-in I have to modify the datatypes of columns across multiple tables. Is there any direct way to do this? I mean does oracle has any function or in-built functionality to achieve this.
Example;
Table1 -> col1 datatype needs to be changed to varchar2(10)
Table2 -> col2 datatype needs to be changed to varchar2(30)
Table3 -> col3 datatype needs to be changed to number.
and so on....
Do we have such functionality?
Thanks,
Ishan
Hi Aman,
Seeing the replies, I think I was unclear in the requirements. I guess you understood it fully, but still I would like to restate my question once more, just to be 100% sure.
What I actually want is to be able to modify columns of multiple tables in one shot.
eg, table1-> col1 changed to varchar2(20);
table2->col2 changed to varchar2(10)
table3-> col3 changed to number;
I know how to do it individually, but just wanted to check whether a single command can modify the datatypes across multiple tables.
If not, I have already written half the script, but just for knowledge's sake wanted to check if some feature is available in Oracle for that.
Regards,
Ishan -
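There is no single ALTER statement that covers several tables, but the individual statements can be generated and executed from PL/SQL. A hedged sketch, with the table/column names and target types hard-coded as placeholders (a real script might drive this from a lookup table or from DBA_TAB_COLUMNS instead):

```sql
-- Hypothetical driver list: one ALTER TABLE per (table, column, new type).
BEGIN
  FOR r IN (SELECT 'TABLE1' t, 'COL1' c, 'VARCHAR2(20)' d FROM dual UNION ALL
            SELECT 'TABLE2', 'COL2', 'VARCHAR2(10)' FROM dual UNION ALL
            SELECT 'TABLE3', 'COL3', 'NUMBER'       FROM dual)
  LOOP
    EXECUTE IMMEDIATE
      'ALTER TABLE ' || r.t || ' MODIFY (' || r.c || ' ' || r.d || ')';
  END LOOP;
END;
/
```

Note that the usual ALTER TABLE restrictions still apply per column (e.g. changing a populated column to NUMBER only works if the existing data converts).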
HS connection to MySQL fails for large table
Hello,
I have set up an HS connection to a MySQL database using the MySQL ODBC 3.51 driver and an ODBC DSN. My Oracle box runs version 10.2.0.1 on Windows 2003 R2. The MySQL version is 4.1.22, running on a different machine with the same OS.
I completed the connection through a database link, which works fine in SQL*Plus when selecting small MySQL tables. However, I keep getting an out-of-memory error when selecting certain large tables from the MySQL database. Previously, I had tested the DSN and run the same SELECT in Access, and it didn't give any error. This is the error thrown in SQL*Plus:
SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
select * from progressnotes@mysql_rmg where "encounterID" = 224720
ERROR at line 1:
ORA-00942: table or view does not exist
[Generic Connectivity Using ODBC][MySQL][ODBC 3.51
Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL server during query
(SQL State: S1T00; SQL Code: 2013)
ORA-02063: preceding 2 lines from MYSQL_RMG
I traced the HS connection and here is the result from the .trc file:
Oracle Corporation --- THURSDAY JUN 12 2008 11:19:51.809
Heterogeneous Agent Release
10.2.0.1.0
(0) [Generic Connectivity Using ODBC] version: 4.6.1.0.0070
(0) connect string is: defTdpName=MYSQL_RMG;SYNTAX=(ORACLE8_HOA, BASED_ON=ORACLE8,
(0) IDENTIFIER_QUOTE_CHAR="",
(0) CASE_SENSITIVE=CASE_SENSITIVE_QUOTE);BINDING=<navobj><binding><datasources><da-
(0) tasource name='MYSQL_RMG' type='ODBC'
(0) connect='MYSQL_RMG'><driverProperties/></datasource></datasources><remoteMachi-
(0) nes/><environment><optimizer noFlattener='true'/><misc year2000Policy='-1'
(0) consumerApi='1' sessionBehavior='4'/><queryProcessor parserDepth='2000'
(0) tokenSize='1000' noInsertParameterization='true'
noThreadedReadAhead='true'
(0) noCommandReuse='true'/></environment></binding></navobj>
(0) ORACLE GENERIC GATEWAY Log File Started at 2008-06-12T11:19:51
(0) hoadtab(26); Entered.
(0) Table 1 - PROGRESSNOTES
(0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]MySQL client ran out of
(0) memory (SQL State: S1T00; SQL Code: 2008)
(0) (Last message occurred 2 times)
(0)
(0) hoapars(15); Entered.
(0) Sql Text is:
(0) SELECT * FROM "PROGRESSNOTES"
(0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL
(0) server during query (SQL State: S1T00; SQL Code: 2013)
(0) (Last message occurred 2 times)
(0)
(0) [A00D] Failed to open table MYSQL_RMG:PROGRESSNOTES
(0)
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
(0) (Last message occurred 2 times)
(0)
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
(0) (Last message occurred 2 times)
(0)
(0) [S1000] [9013]General error in nvITrans_Commit - rc = -1. Please refer to the
(0) log file for details.
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
(0) (Last message occurred 2 times)
(0)
(0) [S1000] [9013]General error in nvITrans_Rollback - rc = -1. Please refer to
(0) the log file for details.
(0) Closing log file at THU JUN 12 11:20:38 2008.
I have read the MySQL documentation and apparently there's a "Don't Cache Result (forward only cursors)" parameter in the ODBC DSN that needs to be checked in order to keep the result set on the MySQL server side instead of caching it on the driver side, but checking that parameter doesn't help the HS connection. Instead, the SQL*Plus session throws the following message when selecting the same large table:
SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
select * from progressnotes@mysql_rmg where "encounterID" = 224720
ERROR at line 1:
ORA-02068: following severe error from MYSQL_RMG
ORA-28511: lost RPC connection to heterogeneous remote agent using
SID=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.0.120)(PORT=1521))(CONNECT_DATA=(SID=MYSQL_RMG)))
Curiously enough, after checking the parameter, the Access connection through the ODBC DSN seems to improve!
Is there an additional parameter that needs to be set in inithsodbc.ora, perhaps? These are the current HS parameters:
# HS init parameters
HS_FDS_CONNECT_INFO = MYSQL_RMG
HS_FDS_TRACE_LEVEL = ON
My SID_LIST_LISTENER entry is:
(SID_DESC =
(PROGRAM = HSODBC)
(SID_NAME = MYSQL_RMG)
(ORACLE_HOME = D:\oracle\product\10.2.0\db_1)
Finally, here is my TNSNAMES.ORA entry for the HS connection:
MYSQL_RMG =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.0.120)(PORT = 1521))
(CONNECT_DATA =
(SID = MYSQL_RMG)
(HS = OK)
Your advice will be greatly appreciated,
Thanks,
Luis
Message was edited by: lmconsite
First of all, please be aware that HSODBC V10 has been desupported; DG4ODBC should be used instead.
The root cause of the problem you describe could be a timeout in the ODBC driver (especially taking into account your comment that it happens only for larger tables):
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
indicates that the driver or the database dropped the connection due to a timeout.
Check the wait_timeout MySQL variable on the server and increase it.
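For example, on the MySQL server (the value shown is illustrative; 28800 seconds is the usual MySQL default, so pick something appropriate for your workload):

```sql
-- Inspect the current timeout, then raise it globally.
SHOW VARIABLES LIKE 'wait_timeout';
SET GLOBAL wait_timeout = 28800;  -- seconds
```

If the disconnects persist, the related net_read_timeout / net_write_timeout variables may also be worth checking, since long transfers of large result sets can trip those as well.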
Large tables truncated or withheld from webhelp
I'm running into a major issue trying to include a large table in my WebHelp build. I'm using RoboHelp 8 in Word. When I include a large table (6 columns x 180 rows) the table is either truncated or withheld from the compiled WebHelp.
I've tried several things to resolve it, but they all end with the same result. I've tried importing the table from its original Word file. I've tried breaking it up into many smaller tables. I've tried building a new table in Word, then copying the data. Oddly enough, if I build the table blank and compile, the table appears. But once I copy data into the table, it disappears.
RoboHelp seems unable to process the table; even when I break the single table into several smaller tables, it chokes and doesn't include the table, or even put the topic in the TOC, even though it is in the source file.
Any ideas? I've not been able to find anything in the forums or anywhere else online.
Many thanks!
Can you tell us what you mean by "using RoboHelp in Word"? Do you mean you are using Word as your editor, or that you are using the RoboHelp for Word application? If the latter, is there a reason why you can't use the RoboHelp HTML application? It is much better suited to producing WebHelp. Personally I wouldn't touch the HTML that Word creates with a bargepole.