Regarding lobs...
I am not that familiar with LOBs, and was hoping someone could shed some light for me.
I am running Oracle 11.2.0.2 EE and have made an interesting discovery in this new database that I am responsible for.
First, I found a table that is about 7.4G, but it has two LOB columns, and when I query dba_lobs I find they contain 365G of LOBs, while the table itself holds 22G of LOBs - I am not sure what accounts for the difference.
SQL> select segment_name, round(sum(bytes)/1024/1024/1024,1) as "SIZE", segment_type
from dba_segments
where owner = 'ARADMIN'
group by segment_name, segment_type
having round(sum(bytes)/1024/1024/1024,1) > 1
order by 2;
SEGMENT_NAME                  SIZE   SEGMENT_TYPE
SYS_LOB0000077517C00027$$      4.2   LOBSEGMENT
SYS_LOB0000210343C00029$$      4.4   LOBSEGMENT
SYS_LOB0000077480C00002$$      4.6   LOBSEGMENT
T465                             5   TABLE
T2052                          8.3   TABLE
T2115                         12.4   TABLE
T2444                         13.4   TABLE
T2179                         14.8   TABLE
T2192                         21.8   TABLE
SYS_LOB0000077549C00015$$      182   LOBSEGMENT  <=== (related to table T2192)
SYS_LOB0000077549C00016$$    184.4   LOBSEGMENT  <=== (related to table T2192)
30 rows selected.

Now, let's look at which tables these LOBs belong to...
SQL> select table_name, column_name, segment_name
from dba_lobs
where segment_name in (
  select segment_name from dba_segments
  where owner = 'ARADMIN'
  group by segment_name
  having round(sum(bytes)/1024/1024/1024,1) > 1
);
TABLE_NAME       COLUMN_NAME  SEGMENT_NAME
B1947C536880923  C536880923   SYS_LOB0000077310C00002$$
T2051            C536870998   SYS_LOB0000077426C00041$$
T2052            C536870987   SYS_LOB0000077440C00063$$
T2115            C536870913   SYS_LOB0000077463C00009$$
B2125C536880912  C536880912   SYS_LOB0000077480C00002$$
B2125C536880913  C536880913   SYS_LOB0000077483C00002$$
T2179            C536870936   SYS_LOB0000077517C00027$$
T2192            C456         SYS_LOB0000077549C00015$$  <====
T2192            C459         SYS_LOB0000077549C00016$$  <====
T2444            C536870936   SYS_LOB0000210343C00029$$
T1990            C536870937   SYS_LOB0000250271C00026$$
11 rows selected.

So, from the above, I noticed in the first query that table T2192 itself contains 21.8G, and that columns C456 and C459 of the same table contain a total of (181.7 + 183.9) = 365.6G of LOBs.
My first question is: how can the table be only 21.8G, while the LOB segments for its columns hold 365.6G of LOBs? It seems some LOBs must be stored out of line, while others are part of the actual table.
Next, I am wondering: if a row is deleted from the table, are the LOBs associated with that row through columns C456 and C459 also deleted? Discussing this with our Sr. Developer, he says the table is purged of rows older than 6 months, but my question is whether the LOBs are actually purged along with the rows.
Any ideas?
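For what it's worth, the two dictionary queries above can be combined, mapping each large LOB segment straight to its table and column in one pass - a sketch, assuming the same ARADMIN owner:

```sql
-- Sketch: join dba_segments to dba_lobs so each large LOB segment is
-- reported together with the table and column it belongs to.
select l.table_name,
       l.column_name,
       s.segment_name,
       round(sum(s.bytes)/1024/1024/1024, 1) as gb
from   dba_segments s
       join dba_lobs l
         on l.owner = s.owner
        and l.segment_name = s.segment_name
where  s.owner = 'ARADMIN'
group  by l.table_name, l.column_name, s.segment_name
having sum(s.bytes) > 1024*1024*1024
order  by 4;
```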
Edited by: 974632 on Dec 27, 2012 8:05 AM
Hi John,
Reading note 386341.1, LOB handling is pretty messed up.
First, the UNDO data for a LOB segment is kept within the LOB segment's own space, e.g. when LOBs are deleted. Yuck!
So, you are right about the space eventually being returned to the database, but surely we can do better than that!
Then, when we check the size of the LOBs using dbms_lob.getlength (since we are using AL32UTF8), it returns the length in characters instead of bytes.
So we have to convert - ref. note 790886.1. An enhancement request (Bug 7156454) has been filed to get this functionality and is under consideration by development.
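To illustrate the character-vs-byte point: one generic way to measure a CLOB's byte size is to convert a temporary copy to a BLOB and measure that. This is only a sketch of that technique (not the method from note 790886.1), reusing the T2192/C456 names from the earlier post:

```sql
-- Sketch: dbms_lob.getlength on a CLOB counts characters; converting a
-- temporary copy to a BLOB lets us count bytes instead.
declare
  v_clob clob;
  v_blob blob;
  v_doff integer := 1;
  v_soff integer := 1;
  v_ctx  integer := dbms_lob.default_lang_ctx;
  v_warn integer;
begin
  select c456 into v_clob from aradmin.t2192 where rownum = 1;
  dbms_lob.createtemporary(v_blob, true);
  dbms_lob.converttoblob(v_blob, v_clob,
                         dbms_lob.lobmaxsize, v_doff, v_soff,
                         dbms_lob.default_csid, v_ctx, v_warn);
  dbms_output.put_line('chars: '||dbms_lob.getlength(v_clob)||
                       ', bytes: '||dbms_lob.getlength(v_blob));
  dbms_lob.freetemporary(v_blob);
end;
/
```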
So, how does one (safely) clean up LOBs that have been deleted in the database?
It seems that an ALTER TABLE ... MOVE LOB might work, as might an ALTER TABLE ... MODIFY LOB (...) (SHRINK SPACE [CASCADE]).
But with this being production, I'm very concerned about all the related bugs, even though I am on 11.2.0.2.
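For concreteness, the two statements I mean would look roughly like this - T2192/C456 from the queries above, and the target tablespace name is made up:

```sql
-- Option 1: rebuild the LOB segment (locks the table, needs scratch space).
-- ARADMIN_LOBS is a hypothetical tablespace name.
alter table aradmin.t2192 move lob (c456) store as (tablespace aradmin_lobs);

-- Option 2: shrink the LOB segment in place (BASICFILE; see the bug
-- warnings that follow).
alter table aradmin.t2192 modify lob (c456) (shrink space cascade);
```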
WARNING: shrinking / reorganizing BASICFILE LOBs can cause performance problems due to "enq: HW contention" waits.
#1. Serious LOB corruption can occur after an
ALTER TABLE <table> MODIFY LOB ("<lob_column>") (STORAGE (freelists <n>));
has been issued on a LOB column which resides in a manual space managed tablespace. Subsequent use of the LOB can fail with various internal errors such as:
ORA-600 [ktsbvmap1]
ORA-600 [25012]
For more information, please refer to bug 4450606.
#2. Be aware of the following bug before using the SHRINK option in releases which are <=10.2.0.3:
Bug: 5636728 LOB corruption / ORA-1555 when reading LOBs after a SHRINK operation
Please check:
Note.5636728.8 Ext/Pub Bug 5636728 - LOB corruption / ORA-1555 when reading LOBs after a SHRINK operation
for details on it.
#3. Be aware that, sometimes, the shrink operation may need to be performed twice, in order to avoid:
Bug 5565887 - SHRINK SPACE IS REQUIRED TWICE FOR RELEASING SPACE
which is fixed in 10.2.

From looking at note 1451124.1, it seems the best options are:
1) ALTER TABLE ... MOVE (locks the table, and requires additional free space of at least double the size of the table).
2) Export the data, drop the table, and reimport - again, downtime required.
Neither option is possible in our environment.
Similar Messages
-
Hi,
I have a SQL*Loader mapping in which I am loading data from a CSV file into an Oracle target table.
My target table has a CLOB column, and because of this a LOB index has also been created for that column.
My CSV has 1 million records, and when I execute the mapping it hangs and does not load anything.
I think the execution is taking so long because of that LOB index. Is there any way to disable this index at run time? I tried, but it gave me the error: ORA-22864: cannot ALTER or DROP LOB indexes.
Can anyone suggest an alternative?
Thanks!!!!

Hey Friend,
We have a secondary index to shorten the search time and thus improve efficiency when selecting the right records from the numerous fields available.
That means a secondary index acts as an extension of the primary key fields, to shorten the search time.
But if you use an 'OR' condition in the SELECT query of your report, it will have the reverse effect and the secondary index will not help.
Instead, you can create the secondary index including the key fields plus some extension of the same.
Means For example,
Table 1:
Field1 [x] CHAR 10
Field2 [x] CHAR 10
Field3 [ ] NUMC 10
Field4 [ ] CHAR 10
Field5 [ ] CHAR 10
Field6 [ ] NUMC 10
Field7 [ ] CHAR 10
Field8 [ ] CHAR 10
Field9 [ ] NUMC 10
Field10 [ ] CHAR 10
Field11 [ ] CHAR 10
Field12 [ ] NUMC 10
Field13 [ ] CHAR 10
Field14 [ ] CHAR 10
Field15 [ ] NUMC 10
Field16 [ ] CHAR 10
Field17 [ ] CHAR 10
Field18 [ ] NUMC 10
Field19 [ ] CHAR 10
Field20 [ ] CHAR 10
Field21 [ ] NUMC 10
Field22 [ ] CHAR 10
Field23 [ ] CHAR 10
Field24 [ ] NUMC 10
So you can have Field3, Field4, Field5, and Field6 as your secondary index.
That's it,
Thank you,
Inspire If Needful,
Warm Regards,
Abhi -
Gateway or Heterogeneous Support for BLOBS
I have some geo information stored in a SQL Server database as a BLOB, and I want to use heterogeneous services or gateway capabilities as a passthrough to read it. Looking at the 10g documentation, it appears that this is NOT supported. Any ideas on how to achieve this?
(B)LOB data is supported by Generic Connectivity (HSODBC) and gateway.
With HSODBC (B)LOB data maps to either Oracle datatype 'LONG RAW' or 'LONG' based on the ODBC-Driver mapping (SQL_LONGVARBINARY or SQL_LONGVARCHAR). The gateway supports SQL Server LOB datatypes: BINARY, IMAGE, NTEXT, TEXT and VARBINARY.
HSODBC has the following restrictions regarding LOB data:
- A table including a BLOB column must have a separate column that serves as a primary key.
- BLOB and CLOB data cannot be read by passthrough queries.
- Updating LONG columns with bind variables is not supported.
This information comes from the following manual:
Oracle® Database
Heterogeneous Connectivity Administrator's Guide
10g Release 2 (10.2)
B14232-01
For restrictions with the gateway for MS SQL Server regarding LOBs see the following manual:
Oracle® Transparent Gateway for Microsoft SQL Server
Administrator’s Guide
10g Release 2 (10.2) for Microsoft Windows (32-bit)
B14270-01
Chapter 3 - Microsoft SQL Server Gateway Features and Restrictions
Documentation can be downloaded from Oracle at: http://docs.oracle.com -
Error while crawling LOB contents SharePoint 2013
I have configured the BDC Service Application using SQL external content. The connection was successful and I am able to see the external content in the list "BDC Demo". But when I search the BDC Demo site, it returns nothing.
So I checked the crawl logs and saw that it shows "1" under errors. To drill down further, I clicked on the "1" and saw the error message: Error while crawling LOB contents.
I have created an external DB named BCSDemo_DB, for which I have granted my search service account read & write permission.
I have added the same account under administrators for both secure store and BCS service applications.
I have done an index reset and a full crawl, but the error still occurs.
Can someone please advise if I am missing something.
Regards

Hi Aravinda,
According to your description, my understanding is that you got an error when you crawled SQL database table in SharePoint 2013.
This error is caused by the default content access account not having any rights to access the metadata store in the Business Data Connectivity Service Application, or by that account having no rights on the SQL database.
To fix it, you need to grant the default content access account permission on the metadata store in the Business Data Connectivity Service Application and on the SQL database. You can refer to the link below:
http://www.sharepointinspiration.com/Lists/Posts/Post.aspx?ID=5
After that, do a full crawl for the content source.
Best Regards,
Wendy
Wendy Li
TechNet Community Support -
Hi,
I have two databases with sids' R2SRVR5.WORLD and R2SRVR3.WORLD
Now I have executed the following in the r2srvr5 database.
create table kou_test_1 (
  roll integer,
  clob_data clob
);
begin
insert into kou_test_1(roll,clob_data) values (2,empty_clob());
insert into kou_test_1(roll,clob_data) values (3,empty_clob());
commit;
end;
declare
v_clob clob := 'dfhgfjudhgjfhjdhgdhjghdjrjgjgjhfhjjgfyugfygfjgfjgfjgjygjyfgwfg
gfjygfjygfjuyuiryugyubgryuwbgrjyubgrjgrjywgerjgrgw
eukwgfuweyjyegfjgfjhrjujuhwruihri';
begin
update kou_test_1 set clob_data=v_clob where roll=3;
commit;
end;
Now I have created a table in the r2srvr3 database:
create table kou_test_2 (
  roll integer,
  clob_data clob
);
and also created a database link from r2srvr3 to r2srvr5.
Now, when I execute the following code, it gives the error "ORA-22992: cannot use LOB locators selected from remote tables
ORA-06512: at line 2"
begin
insert into kou_test_2(roll,clob_data)
select roll,decode(roll,3,clob_data,empty_clob())
from [email protected];
commit;
end;
Now, can anybody provide me a solution other than two separate inserts, like:
begin
insert into kou_test_2(roll,clob_data)
select roll,clob_data
from [email protected]
where roll=3;
commit;
end;

begin
insert into kou_test_2(roll,clob_data)
select roll,null
from [email protected]
where roll<>3;
commit;
end;
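One possible single-pass alternative, assuming a plain INSERT ... SELECT of the LOB column over the link is permitted on your version (it is the DECODE applied to the remote LOB that raises ORA-22992): copy all rows as-is first, then blank out the unwanted CLOBs locally.

```sql
-- Sketch: avoid applying functions to the remote LOB; pull it unchanged,
-- then fix up the local copies where roll <> 3.
begin
  insert into kou_test_2(roll, clob_data)
  select roll, clob_data
  from   [email protected];

  update kou_test_2
  set    clob_data = empty_clob()
  where  roll <> 3;

  commit;
end;
/
```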
Waiting for reply,
Regards,
Koushik -
Sharepoint 2013 vs Exchange 2010 SP3 search (Error while crawling LOB contents)
Hi there:
We are trying to solve the problem: ERROR CRAWLING LOB CONTENTS when we wish to search Exchange 2010 SP3 public folder content on Sharepoint 2013 Foundation.
Quick briefing:
Followed this instructions:
http://technet.microsoft.com/en-us/library/jj591608(v=office.15).aspx
* Created CRAWL RULE
- Used Domain Admin for content access ---> IS THIS WRONG?
- Domain Admin can access public folder thru Outlook Web Access (checked)
- Included all items in this path
PRINTSCREEN 1
* Added a content source for Exchange Server public folders
- Logged to Outlook Web Access with domain admin, expanded Public folders and opened 1st subfolder in new window and copied the address
- Logged to Outlook Web Access with domain admin, expanded Public folders and opened 2nd subfolder in new window and copied the address
PRINTSCREEN2
* Did a FULL CRAWL
PROBLEM:
- Search results do not return the "correct data". Some items are not being found.
CRAWL LOG is reporting: Error while crawling LOB contents
Detailed error message:
https://mail.domain.com/OWA/?ae=Folder&id=PSF.LgAAAAAaRHOQqmYRzZvIAKoAL8RaAwAnt2ed15IATLg8XoXLNj4EAAAAXsN8AAAB&t=IPF.Note
Error while crawling LOB contents.
Error caused by exception: Microsoft.BusinessData.Infrastructure.BdcException
The shim execution failed unexpectedly - Exception has been thrown by the target of an invocation..:
System.InvalidOperationException An internal server error occurred.
Try again later.; SearchID = 4E8542D3-48EF-404E-8025-8D9AAEFE777A )
We thought it's a throttling issue / found possible solution:
http://powersearching.wordpress.com/2013/07/23/exchange-public-folders-search-fail-error-while-crawling-lob-contents/
Tried it, still same Error messages, problem not resolved.
Any hints? Please advise.
With best regards
bostjanc

Hi Bostjan,
From the error message, the issue might be caused by the throttling policy on the Exchange side. The article you posted provides the right solution; with some modifications to it, please try again.
For throttling policy part
1.Execute the command for Set-ThrottlingPolicy
Set-ThrottlingPolicy SharePoint -RCAMaxConcurrency Unlimited -EWSMaxConcurrency Unlimited -EWSMaxSubscriptions Unlimited -CPAMaxConcurrency Unlimited -EwsCutoffBalance Unlimited -EwsMaxBurst Unlimited -EwsRechargeRate Unlimited
2.Execute the command Get-ThrottlingPolicy SharePoint to double confirm the policy setting has been successfully executed
For registry key part
1. Start Registry Editor (regedit).
2. Navigate to the following registry subkey:
\\HKEY_LOCAL_MACHINE \SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem
3. Right-click ParametersSystem, point to New, and then click Key.
A new key is created in the console tree.
4. Rename the key MaxObjsPerMapiSession, and then press Enter.
5. Right-click MaxObjsPerMapiSession, point to New, and then click DWORD (32-bit) Value.
The new value is created in the result pane.
6. Rename the key to <Object_type>, where <Object_type> is the name of the registry object type that you're modifying. For example, to modify the number of messages that can be opened, use objtMessage. Press Enter.
7. Right-click the newly created key, and then click Modify.
8. In the Value data box, type the number of objects that you want to limit this entry to, and then click OK. For example, type 350 to increase the value for the object.
9. Restart the Microsoft Exchange Information Store service.
If it still doesn’t help, please check ULS log for related error message.
Regards,
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
[email protected] .
Rebecca Tu
TechNet Community Support -
AR Open Balances ~java.sql.SQLException: No corresponding LOB data found
Dear all,
Please guide me to solve this error.
OPP.log file
[10/24/12 12:16:18 PM] [UNEXPECTED] [82244:RT2787964] java.sql.SQLException: No corresponding LOB data found :SELECT L.FILE_DATA FILE_DATA,DBMS_LOB.GETLENGTH(L.FILE_DATA) FILE_LENGTH, L.LANGUAGE LANGUAGE, L.TERRITORY TERRITORY, B.DEFAULT_LANGUAGE DEFAULT_LANGUAGE, B.DEFAULT_TERRITORY DEFAULT_TERRITORY,B.TEMPLATE_TYPE_CODE TEMPLATE_TYPE_CODE, B.USE_ALIAS_TABLE USE_ALIAS_TABLE, B.START_DATE START_DATE, B.END_DATE END_DATE, B.TEMPLATE_STATUS TEMPLATE_STATUS, B.USE_ALIAS_TABLE USE_ALIAS_TABLE, B.DS_APP_SHORT_NAME DS_APP_SHORT_NAME, B.DATA_SOURCE_CODE DATA_SOURCE_CODE, L.LOB_TYPE LOB_TYPE FROM XDO_LOBS L, XDO_TEMPLATES_B B WHERE L.APPLICATION_SHORT_NAME= :1 AND L.LOB_CODE = :2 AND L.APPLICATION_SHORT_NAME = B.APPLICATION_SHORT_NAME AND L.LOB_CODE = B.TEMPLATE_CODE AND (L.LOB_TYPE = 'TEMPLATE' OR L.LOB_TYPE = 'MLS_TEMPLATE') AND ( (L.LANGUAGE = :3 AND L.TERRITORY = :4) OR (L.LANGUAGE = :5 AND L.TERRITORY = :6) OR (L.LANGUAGE= B.DEFAULT_LANGUAGE AND L.TERRITORY= B.DEFAULT_TERRITORY ))
at oracle.apps.xdo.oa.schema.server.TemplateInputStream.initStream(TemplateInputStream.java:403)
at oracle.apps.xdo.oa.schema.server.TemplateInputStream.<init>(TemplateInputStream.java:236)
at oracle.apps.xdo.oa.schema.server.TemplateHelper.getTemplateFile(TemplateHelper.java:1164)
Regards
Dharma

What can't you understand? OPP.log tells you that there is no LOB for the template (the layout, as RTF or PDF) in tables XDO_LOBS and XDO_TEMPLATES_B, so you have the query for checking.
>
Please guide me to solve this error.
>
For AR Open Balances, get the appl_short_name,
find the template by the query from opp.log or via the XML Publisher responsibility,
and check the existence of the template file. -
How to find out if LOB is stored "IN ROW"?
Hi,
If I have a compressed SECUREFILE LOB defined as "enable storage in row", is there any way to find out how many LOBs are stored in the table segment and how many were moved out into the lobsegment due to exceeding the approx. 4000-byte limit?
If I haven't defined the question clearly enough, please let me know.
Thanks in advance for any answer and regards,
Jure

Maybe I didn't explain the question well. I didn't ask how to find out from the data dictionary whether the LOB is defined as "enable storage in row" or "disable storage in row". I asked how to find out how many LOB instances are stored in row (in the table segment) and how many in the lobsegment, given that the LOB is defined as ENABLE STORAGE IN ROW with compression enabled.
If I write the question in another way:
Suppose I have a heap table with a LOB column defined as SECUREFILE COMPRESS HIGH, e.g.:
CREATE TABLE test1 (id INTEGER, test_b BLOB)
TABLESPACE USERS
LOB (test_b) STORE AS SECUREFILE (
TABLESPACE USERS
COMPRESS HIGH
ENABLE STORAGE IN ROW
CHUNK 8192
)
There are 1000 rows of data in the TEST1 table. The data length in the test_b BLOB varies from 100 bytes to 1MB, so some of those BLOBs are stored in the table segment (those smaller than approx. 4000 bytes, as described here: http://download.oracle.com/docs/cd/E11882_01/appdev.112/e18294/adlob_tables.htm#sthref320) and some are stored in the lobsegment. My question is: how do I find how many LOBs are stored in the table segment and how many are stored in the lobsegment due to exceeding approximately 4000 bytes (the limit at which a LOB is moved out of the table segment)?
The obvious answer would be to check the size of each LOB, and if it is less than 4000 bytes (actually, less than the size reported by the DBMS_LOB.GETCHUNKSIZE function), conclude from that where the LOB data is stored. But there are two problems with that approach:
- how to account for data compression
- even if the LOB is defined as ENABLE STORAGE IN ROW and is less than 4000 bytes in size, it could still be stored out of the table segment: http://download.oracle.com/docs/cd/E11882_01/appdev.112/e18294/adlob_tables.htm#ADLOB45273 : "If you update a LOB that is stored out-of-line and the resulting LOB is less than approximately 4000 bytes, it is still stored out-of-line."
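For reference, the rough GETCHUNKSIZE-based estimate described above - subject to exactly the two caveats just listed - would look something like this:

```sql
-- Rough estimate only: compares each LOB's *uncompressed* length to the
-- chunk-size threshold. Compression and post-update out-of-line storage
-- make this an approximation, as noted above.
select guess, count(*) as cnt
from (
  select case
           when dbms_lob.getlength(test_b) < dbms_lob.getchunksize(test_b)
           then 'probably in row'
           else 'out of line'
         end as guess
  from test1
)
group by guess;
```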
I hope I asked clearly this time.
Regards,
Jure -
R3load export of table REPOSRC with lob col - error ora-1555 and ora-22924
Hello,
I have tried to export data from our production system for a system copy and subsequent upgrade test. During the export, the R3load job reported an error on table REPOSRC, which has the LOB column DATA. I have pasted below the conversation in which I requested help from SAP; they said it comes under consulting support. The problem is in 2 rows of the table.
But I would like to know: if I delete these 2 rows and then copy them from our development system to the production system at the Oracle level, will there be any problem with the upgrade or operation of these programs, and will it have any license implications if I do so?
Regards
Ramakrishna Reddy
__________________________ SAP SUPPORT COnveration_____________________________________________________
Hello,
we have are performing Data Export for System copy of our Production
system, during the export, R3load Job gave error as
R3LOAD Log----
Compiled Aug 16 2008 04:47:59
/sapmnt/DB1/exe/R3load -datacodepage 1100 -
e /dataexport/syscopy/SAPSSEXC.cmd -l /dataexport/syscopy/SAPSSEXC.log -stop_on_error
(DB) INFO: connected to DB
(DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): WE8DEC
(DB) INFO: Export without hintfile
(NT) Error: TPRI_PAR: normal NameTab from 20090828184449 younger than
alternate NameTab from 20030211191957!
(SYSLOG) INFO: k CQF :
TPRI_PAR&20030211191957&20090828184449& rscpgdio 47
(CNV) WARNING: conversion from 8600 to 1100 not possible
(GSI) INFO: dbname = "DB120050205010209
(GSI) INFO: vname = "ORACLE "
(GSI) INFO: hostname
= "dbttsap "
(GSI) INFO: sysname = "AIX"
(GSI) INFO: nodename = "dbttsap"
(GSI) INFO: release = "2"
(GSI) INFO: version = "5"
(GSI) INFO: machine = "00C8793E4C00"
(GSI) INFO: instno = "0020111547"
(DBC) Info: No commits during lob export
DbSl Trace: OCI-call 'OCILobRead' failed: rc = 1555
DbSl Trace: ORA-1555 occurred when reading from a LOB
(EXP) ERROR: DbSlLobGetPiece failed
rc = 99, table "REPOSRC"
(SQL error 1555)
error message returned by DbSl:
ORA-01555: snapshot too old: rollback segment number with name "" too
small
ORA-22924: snapshot too old
(DB) INFO: disconnected from DB
/sapmnt/DB1/exe/R3load: job finished with 1 error(s)
/sapmnt/DB1/exe/R3load: END OF LOG: 20100816104734
END of R3LOAD Log----
Then, as per note 500340, I changed the PCTVERSION of the LOB column DATA of table REPOSRC to 30, but I still get the error.
I have also added more space to PSAPUNDO and PSAPTEMP - still the same error.
Then I ran the export as:
exp SAPDB1/sap file=REPOSRC.dmp log=REPOSRC.log tables=REPOSRC
exp log----
dbttsap:oradb1 5> exp SAPDB1/sap file=REPOSRC.dmp log=REPOSRC.log
tables=REPOSRC
Export: Release 9.2.0.8.0 - Production on Mon Aug 16 13:40:27 2010
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit
Production
With the Partitioning option
JServer Release 9.2.0.8.0 - Production
Export done in WE8DEC character set and UTF8 NCHAR character set
About to export specified tables via Conventional Path ...
. . exporting table REPOSRC
EXP-00056: ORACLE error 1555 encountered
ORA-01555: snapshot too old: rollback segment number with name "" too
small
ORA-22924: snapshot too old
Export terminated successfully with warnings.
SQL> select table_name, segment_name, cache,
       nvl(to_char(pctversion),'NULL') pctversion,
       nvl(to_char(retention),'NULL') retention
from dba_lobs
where table_name = 'REPOSRC';
TABLE_NAME | SEGMENT_NAME |CACHE | PCTVERSION | RETENTION
REPOSRC SYS_LOB0000014507C00034$$ NO 30 21600
please help to solve this problem.
Regards
Ramakrishna Reddy
Dear customer,
Thank you very much for contacting us at SAP global support.
Regarding your issue would you please attach your ORACLE alert log and
trace file to this message?
Thanks and regards.
Hello,
Thanks for helping,
I attached the alert log file. I have gone through it, but I could not find the corresponding ORA-01555 for table REPOSRC.
Regards
Ramakrishna Reddy
+66 85835-4272
Dear customer,
I have found some previous issues with symptoms similar to your system's. I think this symptom is described in note 983230.
As you can see, this symptom is mainly caused by ORACLE bug 5212539, and it should be fixed in 9.2.0.8, which is exactly your version. But although the fix for 5212539 is implemented, only the occurrence of new corruptions is avoided; already existing corruptions stay in the system regardless of the patch.
The reason metalink note 452341.1 was created is bug 5212539, since this is the most common software-caused LOB corruption in recent times. Basically, any system that ran without a patch for bug 5212539 at some time in the past could potentially be affected by the problem.
In order to be sure about bug 5212539, can you please verify whether the affected LOB really is a NOCACHE lob? You can do this as described in the mentioned note 983230. If yes, then there are basically only two options left:
-> Apply a backup to the system that does not contain these corruptions.
-> In case a good backup is not available, it would be possible to rebuild the table, including the LOB segment, with possible data loss. Since this is beyond the scope of support, it would have to be done via remote consulting.
Any further question, please contact us freely.
Thanks and regards.
Hello,
Thanks for the Help and support,
I have gone through note 983230 and metalink note 452341.1, ran the script, and found that there are 2 corrupted rows in the table REPOSRC. These rows belong to the standard SAP programs MABADRFENTRIES & SAPFGARC.
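The kind of scan such scripts perform can be sketched as follows - this is only the generic pattern, not the actual script from note 983230, and it assumes DATA is a BLOB and the SAPDB1 schema from the exp command above:

```sql
-- Sketch: read every LOB end-to-end; a row whose LOB raises an error
-- (e.g. ORA-1555 / ORA-22924 on a corrupt segment) gets reported.
declare
  v_buf raw(32767);
  v_amt integer;
  v_off integer;
begin
  for r in (select rowid rid, progname, data from sapdb1.reposrc) loop
    begin
      v_off := 1;
      loop
        v_amt := 32767;
        dbms_lob.read(r.data, v_amt, v_off, v_buf);
        v_off := v_off + v_amt;
      end loop;
    exception
      when no_data_found then null;   -- clean end of LOB: row is OK
      when others then
        dbms_output.put_line('corrupt: '||r.progname||' ('||r.rid||')');
    end;
  end loop;
end;
/
```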
To reconfirm, I tried to display them in our development system and our production system. The development system shows the source code in SE38, but in the production system it goes to short dump DBIF_REPO_SQL_ERROR.
So is it possible to delete these 2 rows and update them ourselves from our development system at the Oracle level? Will it have any impact on SAP operation or future upgrades?
Regards
Ramakrishna Reddy

Hello, we have solved the problem.
To help someone with the same error, what we have done is:
1.- wait until all the processes has finished and the export is stopped.
2.- startup SAP
3.- Use SE14 to look up the tables, and create the tables in the database.
4.- Stop SAP.
5.- Retry the export (if you did all the steps with sapinst running and the dialogue window still on screen), or start sapinst again with the option "continue with the old options".
Regards to all. -
EXP/IMP..of table having LOB column to export and import using expdp/impdp
we have one table in that having colum LOB now this table LOB size is approx 550GB.
as per our knowldge LOB space can not be resused.so we have alrady rasied SR on that
we are come to clusion that we need to take backup of this table then truncate this table and then start import
we need help on bekow ponts.
1)we are taking backup with expdp using parellal parameter=4 this backup will complete sussessfully? any other parameter need to keep in expdp while takig backup.
2)once truncate done,does import will complete successfully..?
any SGA PGA or undo tablespace size or undo retention we need to increase.. to completer susecfully import? because its production Critical database?
current SGA 2GB
PGA 398MB
undo retention 1800
undo tbs 6GB
please any one give suggestion to perform activity without error...also suggest parameter need to keep during expdp/impdp
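For reference, a parameter file for the expdp step might look roughly like this - the directory, dump-file, and owner/table names are made up, and this is only a sketch, not a recommendation:

```
# expdp sketch: export just the one LOB table, in parallel.
# Usage: expdp system/... parfile=exp_lobtab.par
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=lobtab_%U.dmp
LOGFILE=lobtab_exp.log
TABLES=APPOWNER.LOB_TABLE
PARALLEL=4
FILESIZE=32G
```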
Thanks in advance.

Hi,
From my experience, be prepared for a long outage to do this - expdp is pretty quick at getting LOBs out but very slow at getting them back in again; a lot of the speed optimizations that make datapump so excellent for normal objects are not available for LOBs. You really need to test this somewhere first - can you not expdp from live and load into some test area? You don't need the whole database, just the table/LOB in question. You don't want to find out after you truncate the table that it's going to take 3 days to import back in...
You might want to consider DBMS_REDEFINITION instead?
Here you precreate an interim table (with the same definition as the existing one), load the data into it from the existing table, and then do a dictionary switch to swap them over - giving you minimal downtime. I think this should work fine with LOBs at 10g, but you should do some research and confirm. You'll need a lot of extra tablespace (temporarily) for this approach, though.
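A rough outline of that approach - hypothetical names (APP / BIG_LOB_TAB / BIG_LOB_INT), and a sketch under the assumption that LOB columns redefine cleanly on your version, which is exactly what you'd test first:

```sql
-- Sketch of online redefinition. BIG_LOB_INT must be pre-created with the
-- same shape as BIG_LOB_TAB (and the desired LOB storage clause).
declare
  v_errors pls_integer;
begin
  dbms_redefinition.can_redef_table('APP', 'BIG_LOB_TAB',
                                    dbms_redefinition.cons_use_pk);
  dbms_redefinition.start_redef_table('APP', 'BIG_LOB_TAB', 'BIG_LOB_INT');
  -- copy indexes, triggers, constraints, and grants onto the interim table
  dbms_redefinition.copy_table_dependents('APP', 'BIG_LOB_TAB', 'BIG_LOB_INT',
                                          dbms_redefinition.cons_orig_params,
                                          true, true, true, false, v_errors);
  dbms_redefinition.finish_redef_table('APP', 'BIG_LOB_TAB', 'BIG_LOB_INT');
end;
/
```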
Regards,
Harry -
Buffer busy waits after changing LOB storage to Oracle SecureFiles
Hi Everyone,
I need help resolving a problem with buffer busy waits for a LOB segment using SecureFiles storage.
During the load, the application inserts a record into a table with the LOB segment and afterwards updates the record, populating the LOB data. The block size of the tablespace holding the LOB is 8KB, and the chunk size on the LOB segment is set to 8KB. The average size of a LOB record is 6KB and the minimum size is 4.03KB. The problem occurs only when running a job with a big number of relatively small inserts (4.03KB) into the LOB column. The table definition allows in-row storage and pctfree is set to 10%. The same job runs without problems when using BASICFILE storage for the LOB column.
According to [oracle white paper|http://www.oracle.com/technetwork/database/options/compression/overview/securefiles-131281.pdf], SecureFiles have a number of performance enhancements. I was particularly interested in testing the Write Gather Cache, as our application does a lot of relatively small inserts into a LOB segment.
Below is a fragment of the AWR report. It looks like all the buffer busy waits belong to the free list class. The LOB segment is located in an ASSM tablespace, so I cannot increase freelists.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning option
Host Name Platform CPUs Cores Sockets Memory(GB)
DB5 Microsoft Windows x86 64-bit 8 2 31.99
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 1259 01-Apr-11 14:40:45 135 5.5
End Snap: 1260 01-Apr-11 15:08:59 155 12.0
Elapsed: 28.25 (mins)
DB Time: 281.55 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 2,496M 2,832M Std Block Size: 8K
Shared Pool Size: 1,488M 1,488M Log Buffer: 11,888K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 10.0 0.1 0.01 0.00
DB CPU(s): 2.8 0.0 0.00 0.00
Redo size: 1,429,862.3 9,390.5
Logical reads: 472,459.0 3,102.8
Block changes: 9,849.7 64.7
Physical reads: 61.1 0.4
Physical writes: 98.6 0.7
User calls: 2,718.8 17.9
Parses: 669.8 4.4
Hard parses: 2.2 0.0
W/A MB processed: 1.1 0.0
Logons: 0.1 0.0
Executes: 1,461.0 9.6
Rollbacks: 0.0 0.0
Transactions: 152.3
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
buffer busy waits 1,002,549 8,951 9 53.0 Concurrenc
DB CPU 4,724 28.0
latch: cache buffers chains 11,927,297 1,396 0 8.3 Concurrenc
direct path read 121,767 863 7 5.1 User I/O
enq: DW - contention 209,278 627 3 3.7 Other
Host CPU (CPUs: 8 Cores: 2 Sockets: )
~~~~~~~~ Load Average
Begin End %User %System %WIO %Idle
38.7 3.5 57.9
Instance CPU
~~~~~~~~~~~~
% of total CPU for Instance: 40.1
% of busy CPU for Instance: 95.2
%DB time waiting for CPU - Resource Mgr: 0.0
Memory Statistics
~~~~~~~~~~~~~~~~~ Begin End
Host Mem (MB): 32,762.6 32,762.6
SGA use (MB): 4,656.0 4,992.0
PGA use (MB): 318.4 413.5
% Host Mem used for SGA+PGA: 15.18 16.50
Avg
%Time Total Wait wait Waits % DB
Event Waits -outs Time (s) (ms) /txn time
buffer busy waits 1,002,549 0 8,951 9 3.9 53.0
latch: cache buffers chain 11,927,297 0 1,396 0 46.2 8.3
direct path read 121,767 0 863 7 0.5 5.1
enq: DW - contention 209,278 0 627 3 0.8 3.7
log file sync 288,785 0 118 0 1.1 .7
SQL*Net more data from cli 1,176,770 0 103 0 4.6 .6
Buffer Wait Statistics DB/Inst: ORA11G/ora11g Snaps: 1259-1260
-> ordered by wait time desc, waits desc
Class Waits Total Wait Time (s) Avg Time (ms)
free list 818,606 8,780 11
undo header 512,358 141 0
2nd level bmb 105,816 29 0
-> Total Logical Reads: 800,688,490
-> Captured Segments account for 19.8% of Total
Tablespace Subobject Obj. Logical
Owner Name Object Name Name Type Reads %Total
EAG50NSJ EAG50NSJ SYS_LOB0000082335C00 LOB 127,182,208 15.88
SYS SYSTEM TS$ TABLE 7,641,808 .95
Segments by Physical Reads DB/Inst: ORA11G/ora11g Snaps: 1259-1260
-> Total Physical Reads: 103,481
-> Captured Segments account for 224.4% of Total
Tablespace Subobject Obj. Physical
Owner Name Object Name Name Type Reads %Total
EAG50NSJ EAG50NSJ SYS_LOB0000082335C00 LOB 218,858 211.50
...
Best regards
Yuri Kogun
Hi Jonathan,
I was puzzled by the number of logical reads as well. This didn't happen when the LOB was stored as a BasicFile, and I assumed the database was able to store the records in-row when we switched to SecureFiles. With regard to ASSM, according to the documentation it is the only option when using SecureFiles.
We did have a high number of HW enqueue waits in the database when running the test with BasicFiles, and had to set event 44951:
alter system set EVENTS '44951 TRACE NAME CONTEXT FOREVER, LEVEL 1024'
There are 2 application servers running 16 jobs each, so we should not have more than 32 sessions inserting data at the same time, but I need to check whether the jobs can be broken into smaller pieces. In that case the number of concurrent sessions may be bigger. Each session is configured with a bundle size of 30, so it issues a commit every 30 inserts.
I am not sure exactly how the code does the insert; I've been told it should be a straight insert and update. I will be able to check this on Monday.
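One quick sanity check (assuming SELECT privileges on DBA_LOBS) is to confirm how the column is actually defined; a sketch:

```sql
-- Confirm whether the LOB column is a SecureFile and whether
-- in-row storage is enabled (all standard DBA_LOBS columns).
SELECT table_name,
       column_name,
       securefile,   -- YES for SecureFiles
       in_row,       -- YES if ENABLE STORAGE IN ROW
       chunk,
       cache
FROM   dba_lobs
WHERE  segment_name = 'SYS_LOB0000082335C00011$$';
```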
Below is the extract from the AWR reports with the top SQL; I could not find any SQL related to the TS$ table in the report. The query against V$SEGMENT_STATISTICS was executed by me during the job run.
SQL ordered by Elapsed Time DB/Inst: ORA11G/ora11g Snaps: 1259-1260
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
-> %Total - Elapsed Time as a percentage of Total DB time
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Captured SQL account for 91.3% of Total DB Time (s): 16,893
-> Captured PL/SQL account for 0.1% of Total DB Time (s): 16,893
Elapsed Elapsed Time
Time (s) Executions per Exec (s) %Total %CPU %IO SQL Id
7,837.5 119,351 0.07 46.4 28.3 .7 2zrh6mw372asz
Module: JDBC Thin Client
update JS_CHANNELDESTS set CHANNELID=:1, DESTID=:2, CHANNELDESTSTATUSDATE=:3, ST
ATUS=:4, BINOFFSET=:5, BINNAME=:6, PAGECOUNT=:7, DATA=:8, SORTORDER=:9, PRINTFOR
MAT=:10, ENVELOPEID=:11, DOCID=:12, CEENVELOPEID=:13, CHANNELTYPE=:14 where ID=:
15
7,119.0 115,997 0.06 42.1 23.1 .2 3vjx93vur4dw1
Module: JDBC Thin Client
insert into JS_CHANNELDESTS (CHANNELID, DESTID, CHANNELDESTSTATUSDATE, STATUS, B
INOFFSET, BINNAME, PAGECOUNT, DATA, SORTORDER, PRINTFORMAT, ENVELOPEID, DOCID, C
EENVELOPEID, CHANNELTYPE, ID) values (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :
11, :12, :13, :14, :15)
85.6 2 42.80 .5 98.3 .0 cc19qha9pxsa4
Module: SQL Developer
select object_name, statistic_name, value from V$SEGMENT_STATISTICS
where object_name = 'SYS_LOB0000082335C00011$$'
35.0 111,900 0.00 .2 74.3 7.6 c5q15mpnbc43w
Module: JDBC Thin Client
insert into JS_ENVELOPES (BATCHID, TRANSACTIONNO, SPOOLID, JOBSETUPID, JOBSETUPN
AME, SPOOLNAME, STEPNO, MASTERCHANNELJOBID, SORTKEY1, SORTKEY2, SORTKEY3, ID) va
lues (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12)
34.9 111,902 0.00 .2 63.0 2.6 a0hmmbjwgwh1k
Module: JDBC Thin Client
insert into JS_CHANNELJOBPROPERTIES (NAME, VALUE, CHANNELJOBID, ID) values (:1,
:2, :3, :4)
29.2 950 0.03 .2 95.9 .1 du0hgjbn9vw0v
Module: JDBC Thin Client
SELECT * FROM JS_BATCHOVERVIEW WHERE BATCHID = :1
SQL ordered by Executions DB/Inst: ORA11G/ora11g Snaps: 1259-1260
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Executions: 2,476,038
-> Captured SQL account for 96.0% of Total
Elapsed
Executions Rows Processed Rows per Exec Time (s) %CPU %IO SQL Id
223,581 223,540 1.0 22.4 63.7 .0 gz7n75pf57c
Module: JDBC Thin Client
SELECT SQ_CHANNELJOBPROPERTIES.NEXTVAL FROM DUAL
120,624 120,616 1.0 8.1 99.0 .0 6y3ayqzubcb
Module: JDBC Thin Client
select batch0_.BATCHID as BATCHID0_0_, batch0_.BATCHNAME as BATCHNAME0_0_, batch
0_.STARTDATE as STARTDATE0_0_, batch0_.PARFINDATE as PARFINDATE0_0_, batch0_.PRO
CCOMPDATE as PROCCOMP5_0_0_, batch0_.BATCHSTATUS as BATCHSTA6_0_0_, batch0_.DATA
FILE as DATAFILE0_0_, batch0_.BATCHCFG as BATCHCFG0_0_, batch0_.FINDATE as FINDA
119,351 227,878 1.9 7,837.5 28.3 .7 2zrh6mw372a
Module: JDBC Thin Client
update JS_CHANNELDESTS set CHANNELID=:1, DESTID=:2, CHANNELDESTSTATUSDATE=:3, ST
ATUS=:4, BINOFFSET=:5, BINNAME=:6, PAGECOUNT=:7, DATA=:8, SORTORDER=:9, PRINTFOR
MAT=:10, ENVELOPEID=:11, DOCID=:12, CEENVELOPEID=:13, CHANNELTYPE=:14 where ID=:
15
116,033 223,892 1.9 8.0 92.2 .0 406wh6gd9nk
Module: JDBC Thin Client
select m_jobprope0_.CHANNELJOBID as CHANNELJ4_1_, m_jobprope0_.ID as ID1_, m_job
prope0_.NAME as formula0_1_, m_jobprope0_.ID as ID4_0_, m_jobprope0_.NAME as NAM
E4_0_, m_jobprope0_.VALUE as VALUE4_0_, m_jobprope0_.CHANNELJOBID as CHANNELJ4_4
_0_ from JS_CHANNELJOBPROPERTIES m_jobprope0_ where m_jobprope0_.CHANNELJOBID=:1
115,997 115,996 1.0 7,119.0 23.1 .2 3vjx93vur4d
Module: JDBC Thin Client
insert into JS_CHANNELDESTS (CHANNELID, DESTID, CHANNELDESTSTATUSDATE, STATUS, B
INOFFSET, BINNAME, PAGECOUNT, DATA, SORTORDER, PRINTFORMAT, ENVELOPEID, DOCID, C
EENVELOPEID, CHANNELTYPE, ID) values (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :
11, :12, :13, :14, :15)
115,996 115,996 1.0 15.9 75.0 4.5 3h58syyk145
Module: JDBC Thin Client
insert into JS_DOCJOBS (CREATEDATE, EFFDATE, JURIST, LANG, IDIOM, DD, DDVID, USE
RKEY1, USERKEY2, USERKEY3, USERKEY4, USERKEY5, USERKEY6, USERKEY7, USERKEY8, USE
RKEY9, USERKEY10, USERKEY11, USERKEY12, USERKEY13, USERKEY14, USERKEY15, USERKEY
16, USERKEY17, USERKEY18, USERKEY19, USERKEY20, REVIEWCASEID, ID) values (:1, :2
115,440 115,422 1.0 11.5 63.3 .0 2vn581q83s6
Module: JDBC Thin Client
SELECT SQ_CHANNELDESTS.NEXTVAL FROM DUAL
...The tablespace holding the LOB segment uses system extent allocation, and the number of blocks for the LOB segment is roughly the same as the number of blocks in the allocated extents.
select segment_name, blocks, count (*)
from dba_extents where segment_name = 'SYS_LOB0000082335C00011$$'
group by segment_name, blocks
order by blocks
SEGMENT_NAME BLOCKS COUNT(*)
SYS_LOB0000082335C00011$$ 8 1
SYS_LOB0000082335C00011$$ 16 1
SYS_LOB0000082335C00011$$ 128 158
SYS_LOB0000082335C00011$$ 256 1
SYS_LOB0000082335C00011$$ 1024 120
SYS_LOB0000082335C00011$$ 2688 1
SYS_LOB0000082335C00011$$ 8192 117
SELECT
sum(ceil(dbms_lob.getlength(data)/8000))
from EAG50NSJ.JS_CHANNELDESTS
SUM(CEIL(DBMS_LOB.GETLENGTH(DATA)/8000))
993216
select sum (blocks) from dba_extents where segment_name = 'SYS_LOB0000082335C00011$$'
SUM(BLOCKS)
1104536
Below are the instance activity stats related to SecureFiles from the AWR report:
Statistic Total per Second per Trans
securefile allocation bytes 3,719,995,392 2,195,042.4 14,415.7
securefile allocation chunks 380,299 224.4 1.5
securefile bytes non-transformed 2,270,735,265 1,339,883.4 8,799.6
securefile direct read bytes 1,274,585,088 752,089.2 4,939.3
securefile direct read ops 119,725 70.7 0.5
securefile direct write bytes 3,719,995,392 2,195,042.4 14,415.7
securefile direct write ops 380,269 224.4 1.5
securefile number of non-transfo 343,918 202.9 1.3
Best regards
Yuri
Edited by: ykogun on 02-Apr-2011 13:33 -
ORA-22990 error while using LOBs
"ErrMsg: ORA-22990: LOB locators cannot span transactions, Code: -22990"
Can somebody help me understand what this error means and how to resolve it?
Regards,
Mani
ORA-22990: LOB locators cannot span transactions
Cause: A LOB locator selected in one transaction cannot be used in a different transaction.
Action: Re-select the LOB locator and retry the operation.
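In other words, the failure pattern typically looks like this (a minimal sketch with a hypothetical table T holding a BLOB column B):

```sql
DECLARE
  l_lob BLOB;
BEGIN
  SELECT b INTO l_lob FROM t WHERE id = 1 FOR UPDATE;
  COMMIT;                                  -- ends the transaction...
  -- DBMS_LOB.WRITEAPPEND(l_lob, 3, UTL_RAW.CAST_TO_RAW('abc'));
  --   ...so writing through the old locator would raise ORA-22990.

  -- Fix: re-select the locator in the new transaction before writing.
  SELECT b INTO l_lob FROM t WHERE id = 1 FOR UPDATE;
  DBMS_LOB.WRITEAPPEND(l_lob, 3, UTL_RAW.CAST_TO_RAW('abc'));
  COMMIT;
END;
/
```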
For future reference, you can look up error messages at
http://tahiti.oracle.com/pls/db102/db102.error_search? -
ORA-22275: invalid LOB locator specified in a function
Hello all!!!
I am having a little problem with a function that returns a BLOB... When I call the function, I get that error. Here is the function (I removed all the exception-handling code to clear it up a little):
<CODE>
FUNCTION f_getfileblob (p_id IN NUMBER,
p_application IN VARCHAR2,
p_subject IN VARCHAR2)
RETURN BLOB
IS
v_table_name VARCHAR2(50);
v_sql_string VARCHAR2(1000);
lobfile BLOB := empty_blob();
v_error NUMBER;
BEGIN
SELECT TABLE_NAME INTO v_table_name FROM ORACLE_TEXT_FILE WHERE APPLICATION = p_application AND SUBJECT = p_subject;
v_sql_string := 'SELECT FILE_BLOB FROM ' || v_table_name || ' WHERE id = :1';
EXECUTE IMMEDIATE v_sql_string INTO lobfile USING p_id;
RETURN lobfile;
END;
</CODE>
So, in this function, the first SELECT finds the name of the table in which I store my BLOBs (I'm trying to do something generic and cross-application). Once I have that name, I can select the BLOB itself. I have to use dynamic SQL because the table name is not known in advance.
I tried this function with
DBMS_LOB.CREATETEMPORARY(LOBFILE, TRUE, DBMS_LOB.CALL);
to create the LOB at the beginning, but that returns another error (I tried with and without the empty_blob() initialisation of the BLOB):
ORA-24801: illegal parameter value in OCI lob function
But I don't even know if it would help...
Can somebody please help me?
Thanks and best regards
Neil.
Sorry about that, the error came from elsewhere...
Thanks anyway
Best regards
Neil. -
ORA-22275: invalid LOB locator specified
Hello,
I use Oracle 11.2.0.3, APEX 4.2.2, Listener 2.0.3, and GlassFish Server 4.0.
When I run this procedure (which is used in this tutorial),
I get ORA-22275: invalid LOB locator specified.
The error persists with GlassFish 3.0.2 and Listener 2.0.1 and 2.0.2.
I also installed patch 16803775, but to no avail.
declare
v_mime VARCHAR2(48);
v_length NUMBER;
v_file_name VARCHAR2(2000);
Lob_loc BLOB;
BEGIN
SELECT MIMETYPE, CONTENT, filename,DBMS_LOB.GETLENGTH(content)
INTO v_mime,lob_loc,v_file_name,v_length
FROM image
WHERE id = 70;
htp.init;
-- set up HTTP header
-- use an NVL around the mime type and
-- if it is a null set it to application/octect
-- application/octect may launch a download window from windows
owa_util.mime_header( nvl(v_mime,'application/octet'), FALSE );
-- set the size so the browser knows how much to download
htp.p('Content-length: ' || v_length);
-- the filename will be used by the browser if the users does a save as
htp.p('Content-Disposition: attachment; filename="'||replace(replace(substr(v_file_name,instr(v_file_name,'/')+1),chr(10),null),chr(13),null)|| '"');
-- close the headers
owa_util.http_header_close;
-- download the BLOB
wpg_docload.download_file( Lob_loc );
end ;
Any help pls, in getting that procedure works ?
Regards,
Fateh
Replace this statement
select empty_clob() into c_xml from dual for update;
with
dbms_lob.createtemporary(c_xml, TRUE); -
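A minimal sketch of the full temporary-LOB lifecycle this implies (variable name follows the statement above; the written content is illustrative):

```sql
DECLARE
  c_xml CLOB;
BEGIN
  DBMS_LOB.CREATETEMPORARY(c_xml, TRUE);    -- allocate in temp space
  DBMS_LOB.WRITEAPPEND(c_xml, 5, '<a/> ');  -- build up the content
  DBMS_LOB.FREETEMPORARY(c_xml);            -- release when done
END;
/
```

Unlike `SELECT empty_clob() ... FOR UPDATE`, the temporary LOB needs no backing table row, which is why it avoids the invalid-locator error here.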
I've tried to append a couple of BLOB fields from one table and then update a record in another table with the result. I got the following error. Does anyone know why?
CHECK POINT 1
begin
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: invalid LOB locator specified:
ORA-22275
ORA-06512: at "SYS.DBMS_LOB", line 753
ORA-06512: at "RBSSDEV.PG_TB_COMMENT_DETAIL", line 57
ORA-06512: at line 2
My stored procedure is:
PROCEDURE SP_Insert_Comment_Detail
( v_CommentOID IN TB_COMMENT_DETAIL.COMMENTOID%TYPE ,
v_DBName IN TB_COMMENT_DETAIL.DBNAME%TYPE ,
v_ApplicationOID IN TB_COMMENT_DETAIL.APPLICATIONOID%TYPE ,
v_Comments IN TB_COMMENT_DETAIL.COMMENTS%TYPE ,
v_Created IN TB_COMMENT_DETAIL.CREATED%TYPE ,
v_UserID IN TB_COMMENT_DETAIL.USERID%TYPE )
IS
lv_CommentsBlob BLOB := EMPTY_BLOB();
lv_CommentBlob BLOB := EMPTY_BLOB();
lv_NewComment VARCHAR2(1) := 'N';
CURSOR CommentDetail_cur IS
SELECT Comments
FROM TB_COMMENT_DETAIL
WHERE CommentOID = v_CommentOID
ORDER BY RecDate, DBName;
CURSOR Comment1_cur IS
SELECT Comments
FROM COMMENT1
WHERE CommentOID = v_CommentOID FOR UPDATE;
BEGIN
DBMS_LOB.CreateTemporary(lv_CommentBlob, TRUE, DBMS_LOB.CALL);
OPEN CommentDetail_cur;
FETCH CommentDetail_cur INTO lv_CommentBlob;
IF CommentDetail_cur%NOTFOUND THEN
lv_NewComment := 'Y';
END IF;
CLOSE CommentDetail_cur;
INSERT INTO TB_COMMENT_DETAIL
(CommentOID, RecDate, DBName, ApplicationOID,
Comments, Created, UserID)
VALUES
(v_CommentOID, SYSDATE, v_DBName, v_ApplicationOID,
v_Comments, v_Created, v_UserID);
IF lv_NewComment = 'Y' THEN
INSERT INTO Comment1
VALUES (v_CommentOID, v_ApplicationOID, v_Comments, v_Created, v_UserID);
COMMIT;
ELSE
DBMS_LOB.CreateTemporary(lv_CommentBlob, TRUE, DBMS_LOB.CALL);
DBMS_LOB.CreateTemporary(lv_CommentsBlob, TRUE, DBMS_LOB.CALL);
OPEN Comment1_cur;
FETCH Comment1_cur INTO lv_CommentsBlob;
-- Empty the Comments field of the Comment1 table
IF Comment1_cur%FOUND THEN
DBMS_OUTPUT.PUT_LINE('CHECK POINT 1');
DBMS_LOB.TRIM(lv_CommentsBlob, 0);
DBMS_OUTPUT.PUT_LINE('CHECK POINT 2');
END IF;
OPEN CommentDetail_cur;
LOOP
FETCH CommentDetail_cur INTO lv_CommentBlob;
EXIT WHEN CommentDetail_cur%NOTFOUND;
DBMS_OUTPUT.PUT_LINE('CHECK POINT 3');
DBMS_LOB.APPEND(lv_CommentsBlob, lv_CommentBlob);
DBMS_OUTPUT.PUT_LINE('CHECK POINT 4');
END LOOP;
COMMIT;
CLOSE Comment1_cur;
CLOSE CommentDetail_cur;
END IF;
END SP_Insert_Comment_Detail;
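One possible cause of the ORA-22275 here: each FETCH INTO lv_CommentsBlob overwrites the temporary locator created by CREATETEMPORARY with whatever the cursor returns (NULL when the Comments column is empty), so the later DBMS_LOB.TRIM and APPEND calls operate on an invalid locator. A minimal sketch of a safer pattern, assuming the same tables and an illustrative key value:

```sql
DECLARE
  l_target BLOB;
BEGIN
  -- Build the aggregate in a temporary LOB so we never TRIM or APPEND
  -- a possibly-NULL locator fetched from the table.
  DBMS_LOB.CREATETEMPORARY(l_target, TRUE, DBMS_LOB.CALL);
  FOR r IN (SELECT comments
            FROM   tb_comment_detail
            WHERE  commentoid = 123            -- illustrative key
            AND    comments IS NOT NULL)       -- skip NULL locators
  LOOP
    DBMS_LOB.APPEND(l_target, r.comments);
  END LOOP;
  UPDATE comment1
  SET    comments = l_target                   -- stores a copy of the temp LOB
  WHERE  commentoid = 123;
  DBMS_LOB.FREETEMPORARY(l_target);
  COMMIT;
END;
/
```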