Archiving a ztable with string/lob column
Hello,
We have a Z-table that is quite big.
Our functional team would like to preserve this information,
even though they rarely access it.
The table is independent of other tables.
The table contains a field with the ABAP data type STRING.
This is stored as a LOB column in the Oracle database.
1. Is it possible to archive that kind of table with SARA?
2. Where can I find examples of ADK ABAP delete/write/read/reload programs
for Z-tables?
Thanks in advance for your answers.
Thanks in advance for your answers.
Give help.sap.com a chance:
http://help.sap.com/saphelp_sm40/helpdata/EN/2a/fa042d493111d182b70000e829fbfe/frameset.htm
It is pretty well explained there and even mentions the example programs SBOOKA and SFLIGHTA.
Similar Messages
-
Conflict resolution for a table with LOB column ...
Hi,
I was hoping for some guidance or advice on how to handle conflict resolution for a table with a LOB column.
Basically, I had intended to handle the conflict resolution using the MAXIMUM prebuilt update conflict handler. I also store
the 'update' transaction time in the same table and was planning to use this as the resolution column to resolve the conflict.
I see, however, that these prebuilt conflict handlers do not support LOB columns. I assume I therefore need to code a custom handler
to do this for me. I'm not sure exactly what my custom handler needs to do, though! Any guidance or links to similar examples would
be very much appreciated.
Hi,
I have been unable to make any progress on this issue. I have made use of prebuilt update handlers with no problems
before but I just don't know how to resolve these conflicts for LOB columns using custom handlers. I have some questions
which I hope make sense and are relevant:
1. Does an apply process detect update conflicts on LOB columns?
2. If I need to create a custom update/error handler to resolve this, should I create a prebuilt update handler for the non-LOB columns
in the table and then a separate one for the LOB columns, OR is it best just to code a single custom handler for ALL columns?
3. In my custom handler, I assume I will need to use the resolution column to decide whether or not to resolve the conflict in favour of the LCR,
but how do I compare the new value in the LCR with that in the destination database? I mean, how do I access the current value in the destination
database from the custom handler?
4. Finally, if I need to resolve in favour of the LCR, do I need to call something specific for LOB-related columns compared to non-LOB columns?
Any help with these would be very much appreciated or even if someone can direct me to documentation or other links that would be good too.
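For what it's worth, the skeleton of a custom DML handler usually looks something like the following sketch (all procedure, table, and column names here are invented, and the comparison against the destination row is the part you would have to write yourself):

```sql
-- Sketch of a custom apply (DML) handler for Streams; names are hypothetical.
CREATE OR REPLACE PROCEDURE my_lob_dml_handler (in_any IN ANYDATA) IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  rc := in_any.GETOBJECT(lcr);
  -- Here you would SELECT the current resolution column (e.g. LAST_UPDATE_TS)
  -- from the destination table and compare it with the value in the LCR,
  -- e.g. lcr.GET_VALUE('NEW', 'LAST_UPDATE_TS'); if the LCR wins, apply it:
  lcr.EXECUTE(TRUE);  -- TRUE = resolve remaining conflicts with configured handlers
END;
/
-- Registered with something like:
-- DBMS_APPLY_ADM.SET_DML_HANDLER(object_name => 'SCOTT.T',
--   object_type => 'TABLE', operation_name => 'UPDATE',
--   error_handler => FALSE, user_procedure => 'MY_LOB_DML_HANDLER');
```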
Thanks again. -
Large Block Chunk Size for LOB column
Oracle 10.2.0.4:
We have a table with 2 LOB columns. The average BLOB size of one of the columns is 122K and of the other 1K, so I am planning to move the column with the big BLOB size to a 32K chunk size. Some of the questions I have:
1. Do I need to create a new tablespace with a 32K block size and then create the table with a 32K chunk size for that LOB column, or can I just create the table with a 32K chunk size in the existing tablespace, which has an 8K block size? What are the advantages or disadvantages of one approach over the other?
2. Currently db_cache_size is set to "0"; do I need to adjust some parameters for the large chunk/block size?
3. If I create a 32K chunk, is that chunk shared with other rows? For example, if I insert a 2K BLOB, would the remaining 30K be available for other rows? The following link says the 30K will be wasted space:
[LOB performance|http://www.oracle.com/technology/products/database/application_development/pdf/lob_performance_guidelines.pdf]
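To illustrate question 1, here is a hedged sketch of the separate-tablespace approach (the cache size, file path, tablespace, and table names are all invented); a 32K-blocksize tablespace can only be created once a 32K buffer cache exists:

```sql
-- Sketch: 32K buffer cache, 32K-blocksize tablespace, LOB with CHUNK 32K
ALTER SYSTEM SET db_32k_cache_size = 256M SCOPE=BOTH;

CREATE TABLESPACE lob32k
  DATAFILE '/u01/oradata/DB/lob32k01.dbf' SIZE 10G
  BLOCKSIZE 32K;

CREATE TABLE doc_store (
  id   NUMBER PRIMARY KEY,
  body BLOB
)
LOB (body) STORE AS (TABLESPACE lob32k CHUNK 32K);
```

Note that CHUNK only has to be a multiple of the tablespace block size (up to a maximum of 32K), so a 32K chunk is also legal in the existing 8K tablespace, where it simply spans four blocks.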
Below is the output of v$db_cache_advice:
select size_for_estimate c1,
       buffers_for_estimate c2,
       estd_physical_read_factor c3,
       estd_physical_reads c4
from v$db_cache_advice
where name = 'DEFAULT'
and block_size = (select value from v$parameter
                  where name = 'db_block_size')
and advice_status = 'ON';
C1 C2 C3 C4
2976 368094 1.2674 150044215
5952 736188 1.2187 144285802
8928 1104282 1.1708 138613622
11904 1472376 1.1299 133765577
14880 1840470 1.1055 130874818
17856 2208564 1.0727 126997426
20832 2576658 1.0443 123639740
23808 2944752 1.0293 121862048
26784 3312846 1.0152 120188605
29760 3680940 1.0007 118468561
29840 3690835 1 118389208
32736 4049034 0.9757 115507989
35712 4417128 0.93 110102568
38688 4785222 0.9062 107284008
41664 5153316 0.8956 106034369
44640 5521410 0.89 105369366
47616 5889504 0.8857 104854255
50592 6257598 0.8806 104258584
53568 6625692 0.8717 103198830
56544 6993786 0.8545 101157883
59520 7361880 0.8293 98180125
With only a 1K LOB you are going to want to use an 8K chunk size; per the reference in the thread above to the Oracle document on LOBs, the chunk size is the allocation unit.
Each LOB column has its own LOB table so each column can have its own LOB chunk size.
The LOB data type is not known for being space efficient.
There are major changes in 11g, where Secure Files are available to replace traditional LOBs, now called Basic Files. The differences appear to be mostly in how the LOB data and segments are managed by Oracle.
HTH -- Mark D Powell -- -
ORA-02348: cannot create VARRAY column with embedded LOB
Hi
I get this error message when I try to create a table from my schema file, which has a (sub-)element of type CLOB.
In my XML document I have an element which needs to be declared a CLOB (because it's > 4000 bytes); in my schema I define its element node like:
<xs:element name="MocovuState" xdb:SQLType="CLOB">
I can register this Schema file but when I create the table, I get the error:
ORA-02348: cannot create VARRAY column with embedded LOB
Does anybody know how to handle this ?
Marcel
You need to use the xdb:storeVarrayAsTable="true" schema annotation so that unbounded elements are created as nested tables at schema registration time. Varrays cannot contain CLOBs/BLOBs. Use the schema annotation xdb:SQLType="CLOB" to tell Oracle XML DB to use CLOB storage for the element. See your schema below:
P.S. XMLSPY is invaluable as it supports Oracle XML Schema annotations.
<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xdb="http://xmlns.oracle.com/xdb" targetNamespace="http://www.yourregisteredschemanamespace.com" elementFormDefault="qualified" attributeFormDefault="unqualified" xdb:storeVarrayAsTable="true">
<xs:element name="nRootNode">
<xs:complexType>
<xs:all>
<xs:element name="nID" type="xs:long"/>
<xs:element name="nStringGroup" type="nStringGroup" minOccurs="0"/>
</xs:all>
</xs:complexType>
</xs:element>
<xs:complexType name="nStringGroup">
<xs:sequence>
<xs:element name="nString" type="nString" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
<xs:complexType name="nString" xdb:SQLType="CLOB">
<xs:sequence>
<xs:element name="nValue" type="nValue" minOccurs="0" xdb:SQLType="CLOB"/>
</xs:sequence>
<xs:attribute name="id" type="xs:long" use="required"/>
</xs:complexType>
<xs:simpleType name="nValue">
<xs:restriction base="xs:string">
<xs:minLength value="1"/>
</xs:restriction>
</xs:simpleType>
</xs:schema> -
Hello,
I have a table with 1.000.000 BLOB records. I updated almost a half of the records with NULL. Now I try to reclaim the free space using:
ALTER TABLE table MODIFY LOB (column) (SHRINK SPACE);
It has been running for some time now, but what I am surprised about is that this operation generates a lot of redo (the full table was 30 GB; after the update it should be 15 GB, and by now I already have about 8 GB of generated archive logs).
Do you know why this operation generates redo logs?
Thank you,
Adrian
The redo stream that Oracle generates is full of physical addresses (i.e. ROWIDs). If you run an update statement
UPDATE some_table
SET some_column = 4
WHERE some_key = 12345;
Oracle actually records in the redo the logical equivalent of
UPDATE some_table
SET some_column = 4
WHERE ROWID = <<some ROWID>>
That is, Oracle converts your logical SQL statement into a series of updates to a series of physical addresses. That's a really helpful thing if the redo has to be re-applied at a later date, because Oracle doesn't have to do all the work of processing the logical SQL statement again (this would be particularly useful if your UPDATE statement were running a bunch of queries that took minutes or hours to return).
But that means that if you are physically moving rows around, you have to record that fact in the redo stream. Otherwise, if you had to re-apply the redo information (or undo information) in the future, the physical addresses stored in the redo logs may not match the physical addresses in the database. That is, if you move the row with SOME_KEY = 12345 from ROWID A to ROWID B and move the row with SOME_KEY = 67890 from ROWID C to ROWID A, you have to record both of those moves in the redo stream so that the statement
UPDATE some_table
SET some_column = 4
WHERE ROWID = <<ROWID A>>
updates the correct row.
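If you want to watch the redo a shrink produces, one hedged sketch (using the standard v$mystat/v$statname views; the table and column names are invented) is to bracket the operation with the session's 'redo size' statistic:

```sql
-- Redo bytes generated so far by this session
SELECT ms.value AS redo_bytes
  FROM v$mystat ms
  JOIN v$statname sn ON sn.statistic# = ms.statistic#
 WHERE sn.name = 'redo size';

-- The operation under test
ALTER TABLE t MODIFY LOB (blob_col) (SHRINK SPACE);

-- Re-run the first query; the delta is the redo the shrink generated.
```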
Justin -
When using "Database diff" and selecting only other schemas for compare, own objects are shown too!
Hi!
For tables with LOB columns (CLOB, BLOB, etc.), indexes with system-generated names are automatically created per LOB column.
If I compare different database instances (e.g. dev/test), these system names can differ and are shown as differences, but this is a false positive.
Unfortunately there is no way to influence the index names.
Any chance to fix this in SQL Developer?
Best regards
Torsten
Only the SQL Developer team can respond to that question.
Such indexes should ONLY be created by Oracle and should NOT be part of any DDL that you, the user, maintains outside the database since they will be created by Oracle when the table is created and will be named at that time.
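As background, the system-generated LOB segment and index names can be listed from the data dictionary (the standard USER_LOBS view), which shows why the names differ between instances:

```sql
-- LOB segment and index names Oracle generated for your tables
SELECT table_name, column_name, segment_name, index_name
  FROM user_lobs
 ORDER BY table_name, column_name;
```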
It is up to the Sql Dev team to decide whether to deal with that issue and how to deal with it. -
Oracle 11.2 - Perform parallel DML on a non partitioned table with LOB column
Hi,
Since I wanted to demonstrate new Oracle 12c enhancements on SecureFiles, I tried to use PDML statements on a non partitioned table with LOB column, in both Oracle 11g and Oracle 12c releases. The Oracle 11.2 SecureFiles and Large Objects Developer's Guide of January 2013 clearly says:
Parallel execution of the following DML operations on tables with LOB columns is supported. These operations run in parallel execution mode only when performed on a partitioned table. DML statements on non-partitioned tables with LOB columns continue to execute in serial execution mode.
INSERT AS SELECT
CREATE TABLE AS SELECT
DELETE
UPDATE
MERGE (conditional UPDATE and INSERT)
Multi-table INSERT
So I created and populated a simple table with a BLOB column:
SQL> CREATE TABLE T1 (A BLOB);
Table created.
Then, I tried to see the execution plan of a parallel DELETE:
SQL> EXPLAIN PLAN FOR
2 delete /*+parallel (t1,8) */ from t1;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3718066193
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | DELETE STATEMENT | | 2048 | 2 (0)| 00:00:01 | | | |
| 1 | DELETE | T1 | | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM)| :TQ10000 | 2048 | 2 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 4 | PX BLOCK ITERATOR | | 2048 | 2 (0)| 00:00:01 | Q1,00 | PCWC | |
| 5 | TABLE ACCESS FULL| T1 | 2048 | 2 (0)| 00:00:01 | Q1,00 | PCWP | |
PLAN_TABLE_OUTPUT
Note
- dynamic sampling used for this statement (level=2)
And I finished by executing the statement.
SQL> commit;
Commit complete.
SQL> alter session enable parallel dml;
Session altered.
SQL> delete /*+parallel (t1,8) */ from t1;
2048 rows deleted.
As we can see, the statement has been run as parallel:
SQL> select * from v$pq_sesstat;
STATISTIC LAST_QUERY SESSION_TOTAL
Queries Parallelized 1 1
DML Parallelized 0 0
DDL Parallelized 0 0
DFO Trees 1 1
Server Threads 5 0
Allocation Height 5 0
Allocation Width 1 0
Local Msgs Sent 55 55
Distr Msgs Sent 0 0
Local Msgs Recv'd 55 55
Distr Msgs Recv'd 0 0
11 rows selected.
Is this normal? It is not supposed to be supported on Oracle 11g with a non-partitioned table containing a LOB column...
Thank you for your help.
Michael
Yes, I did. I tried with force parallel DML, and these are the results on my 12c DB, with the non-partitioned table and SecureFiles LOB column.
SQL> explain plan for delete from t1;
Explained.
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | DELETE STATEMENT | | 4 | 2 (0)| 00:00:01 | | | |
| 1 | DELETE | T1 | | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM)| :TQ10000 | 4 | 2 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 4 | PX BLOCK ITERATOR | | 4 | 2 (0)| 00:00:01 | Q1,00 | PCWC | |
| 5 | TABLE ACCESS FULL| T1 | 4 | 2 (0)| 00:00:01 | Q1,00 | PCWP | |
The DELETE is not performed in Parallel.
I tried with another statement :
SQL> explain plan for
2 insert into t1 select * from t1;
Here are the results:
11g
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | INSERT STATEMENT | | 4 | 8008 | 2 (0)| 00:00:01 | | | |
| 1 | LOAD TABLE CONVENTIONAL | T1 | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 4 | PX BLOCK ITERATOR | | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | PCWC | |
| 5 | TABLE ACCESS FULL | T1 | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | PCWP | |
12c
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | INSERT STATEMENT | | 4 | 8008 | 2 (0)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 3 | LOAD AS SELECT | T1 | | | | | Q1,00 | PCWP | |
| 4 | OPTIMIZER STATISTICS GATHERING | | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4 | 8008 | 2 (0)| 00:00:01 | Q1,00 | PCWC | |
It seems that the DELETE statement has problems, but the INSERT AS SELECT does not! -
ASSM and table with LOB column
I have a tablespace created with the ASSM option. I've heard that tables with LOB columns can't take advantage of ASSM.
I made a test : create a table T with BLOB column in a ASSM tablespace. I succeeded!
Now I have some questions:
1. Since the segments of table T can't use ASSM to manage their blocks, what's the actual approach? The traditional freelists?
2. Will there be bad impacts on the usage of the tablespace if table T becomes larger and larger and is used frequently?
Thanks in advance.
Can you explain what you mean by #1? I believe it is incorrect and it does not make sense in my personal opinion. You can create a table that has a LOB column in an ASSM tablespace from 9iR2 on, I believe (could be wrong). LOBs don't follow the traditional PCTFREE/PCTUSED scenario; they allocate data in what are called "chunks" that you can define at the time you create the table. In fact, I think the new SECUREFILE LOBs actually require ASSM tablespaces.
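Both points can be sketched in SQL (a hedged example; the tablespace, file, and table names are invented):

```sql
-- An ASSM tablespace: SEGMENT SPACE MANAGEMENT AUTO replaces freelists
CREATE TABLESPACE assm_ts
  DATAFILE '/u01/oradata/DB/assm01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- A table whose LOB lives in that tablespace, with an explicit chunk size
CREATE TABLE t (
  id  NUMBER,
  doc BLOB
)
LOB (doc) STORE AS (TABLESPACE assm_ts CHUNK 8192);
```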
HTH! -
Hi!
I have to export a table with a LOB column (the LOB segment is 3 GB in size) and then drop that LOB column from the table. The table has about 350k rows.
(I was thinking) - I have to:
1. create new tablespace
2. create copy of my table with CTAS in new tablespace
3. alter new table to be NOLOGGING
4. insert all rows from original table with APPEND hint
5. export copy of table using transport tablespace feature
6. drop newly created tablespace
7. drop lob column and rebuild original table
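Steps 2-4 of that plan might look like the following sketch (table, column, and tablespace names are invented; note that a direct-path insert only avoids logging the LOB data if the LOB itself is NOCACHE NOLOGGING):

```sql
-- Steps 2-3: copy the structure into the new tablespace, NOLOGGING
CREATE TABLE t_copy TABLESPACE export_ts NOLOGGING
  AS SELECT * FROM t WHERE 1 = 0;
ALTER TABLE t_copy MODIFY LOB (lob_col) (NOCACHE NOLOGGING);

-- Step 4: direct-path load
INSERT /*+ APPEND */ INTO t_copy SELECT * FROM t;
COMMIT;
```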
DB is Oracle 9.2.0.6.0.
UNDO tablespace limited on 2GB with retention 10800 secs.
When I tried to insert rows into the new table with the /*+ APPEND */ hint, the operation was very slow, so I canceled it.
How much time should I expect this operation to take?
Is my UNDO sufficient to avoid a "snapshot too old" error?
What do you think?
Thanks for your answers!
Regards,
Marko Sutic
I've seen that document before I posted this question.
I still don't know what I should do. Look at this document: Doc ID 281461.1.
From that document:
FIX
Although the performance of the export cannot be improved directly, possible
alternative solutions are:
+1. If not required, do not use LOB columns.+
or:
+2. Use Transport Tablespace export instead of full/user/table level export.+
or:
+3. Upgrade to Oracle10g and use Export DataPump and Import DataPump.+
I just have to speed up the CTAS a little more somehow (maybe using parallel processing).
Anyway thanks for suggestion.
Regards,
Marko -
Add columns to a table with lob column
Hi,
Just a quick question: is there a performance penalty after adding columns to a table with a LOB field? The LOB field is currently the last column in the table, and I was told second-hand that adding columns will badly impact I/O performance on the table if the LOB field is no longer the last column. The table is on Oracle 10.2.0.3.
thanks. regards
Ivan
I haven't heard of performance degradation specifically due to a LOB column not being the last column in a table (although there are several issues with just having a LOB column in a table).
You may want to build a test database to try it out. It should be easy to run tests comparing a table with the additional columns against the original to prove or refute it. The results would be interesting to learn; please post them if you do test it out. -
Hi,
I have a proble. How to move a table with LOB colum? How to create a table with LOB column by specifying another tablespace for LOB column?
Please help me.
Regards,
Mathew
What is it that you are not able to find?
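Both operations from the question can be sketched as follows (a hedged example; tablespace, table, column, and index names are invented):

```sql
-- Move an existing table and relocate its LOB segment at the same time
ALTER TABLE t MOVE TABLESPACE data_ts
  LOB (doc) STORE AS (TABLESPACE lob_ts);

-- Or specify a separate tablespace for the LOB at creation time
CREATE TABLE t2 (
  id  NUMBER,
  doc CLOB
)
TABLESPACE data_ts
LOB (doc) STORE AS (TABLESPACE lob_ts);

-- A MOVE marks the table's indexes UNUSABLE; rebuild them afterwards
ALTER INDEX t_pk REBUILD;
```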
The link that I provided was the answer to your second question. -
Protected memory exception during bulkcopy of table with LOB columns
Hi,
I'm using ADO BulkCopy to transfer data from a SqlServer database to Oracle. In some cases, and it seems to only happen on some tables with LOB columns, I get the following exception:
System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
at Oracle.DataAccess.Client.OpsBC.Load(IntPtr opsConCtx, OPOBulkCopyValCtx* pOPOBulkCopyValCtx, IntPtr pOpsErrCtx, Int32& pBadRowNum, Int32& pBadColNum, Int32 IsOraDataReader, IntPtr pOpsDacCtx, OpoMetValCtx* pOpoMetValCtx, OpoDacValCtx* pOpoDacValCtx)
at Oracle.DataAccess.Client.OracleBulkCopy.PerformBulkCopy()
at Oracle.DataAccess.Client.OracleBulkCopy.WriteDataSourceToServer()
at Oracle.DataAccess.Client.OracleBulkCopy.WriteToServer(IDataReader reader)
I'm not sure exactly what conditions trigger this exception; perhaps only when the LOB data is large enough?
I'm using Oracle 11gR2.
Has anyone seen this or have an idea how to solve it?
If I catch the exception and attempt row-by-row copying, I then get "ILLEGAL COMMIT" exceptions.
Thanks,
Ben
From the doc:
Data Types Supported by Bulk Copy
The data types supported by Bulk Copy are:
ORA_SB4
ORA_VARNUM
ORA_FLOAT
ORA_CHARN
ORA_RAW
ORA_BFLOAT
ORA_BDOUBLE
ORA_IBDOUBLE
ORA_IBFLOAT
ORA_DATE
ORA_TIMESTAMP
ORA_TIMESTAMP_TZ
ORA_TIMESTAMP_LTZ
ORA_INTERVAL_DS
ORA_INTERVAL_YM
I can't find any documentation on these datatypes (I'm guessing these are external datatype constants used by OCI??). This list suggests ADO.NET bulk copy of LOBs isn't supported at all (although it works fine most of the time), unless I'm misreading it.
The remaining paragraphs don't appear to apply to me.
Thanks,
Ben -
Does JDBC work with LOB columns?
Hi,
I would like to know if the JDBC drivers (thin and/or OCI) work
with LOB columns.
Browsing the generic Server 804 docs on the Oracle Technet site, I
read that the JDBC thin drivers don't, but that the OCI drivers do.
But the jodbc.htm that came with the 8.0.5 production release
for Linux doesn't mention LOB columns, only LONG and LONG RAW.
Was it just an oversight in the Oracle2Linux doc, or does the JDBC
OCI driver for Linux really not work with LOBs? (Or are the
generic docs wrong and none of the OCI drivers work with LOBs?)
Does anybody know?
Any info would be greatly appreciated.
Leandro
null
The amount of data is not really material in this type of decision; it is the way you would most often access the data that is important. If the LOBs are larger than about 4K they will be stored out of line (in a separate segment) anyway, with only LOB locators in the actual table. You could look at partitioning (probably a hash partition would work best) to deal with the amount of data.
If you mostly query for a single attribute (i.e. You access a persons photo hundreds of times a day, but only look for a finger print once or twice a day), then it may make sense to split the table into three.
However, if you most often need to pull all three attributes, then storing in a single table makes most sense. Particularly since you say you do not always have all three attributes for a given person. With three tables, you will always be outer joining.
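Those two ideas can be combined in one definition (a hedged sketch; the table and column names are invented). DISABLE STORAGE IN ROW forces every LOB out of line regardless of size, and hash partitioning spreads the volume:

```sql
CREATE TABLE person_media (
  person_id NUMBER,
  photo     BLOB,
  print     BLOB,
  signature BLOB
)
LOB (photo, print, signature) STORE AS (DISABLE STORAGE IN ROW)
PARTITION BY HASH (person_id) PARTITIONS 8;
```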
HTH
John -
How to replace strings in a column value with values from other columns in the same table
I have a temp table which contains
Id Name CTC Address Content
1 Ross $200 6th block Dear #Name your CTC is #CTC and your address is #address
2 Jhon $300 1oth cross Dear #Name your CTC is #CTC and your address is #address
Now I want to select Content so that the placeholders get replaced with the respective columns, and the final output should look like this:
Dear Ross your CTC is 200 and your address is 6th block
Dear Jhon your CTC is 300 and your address is 10th cross
Kindly suggest.
I think RSingh's suggestion is OK... What do you mean by another way? Maybe something more generic?
Maybe build a table with the list of columns you need to "replace" and dynamically build the REPLACE query:
declare @colList table(colName varchar(100))
insert into @colList
select 'name'
union all select 'ctc'
union all select 'address'
declare @cmd varchar(2000)
-- builds: select replace(replace(replace(Content,'#name', name), ...) from ...
select @cmd = 'select '
    + (select 'replace(' from @colList for xml path(''))
    + 'Content'
    + (select ',''#' + colName + ''', ' + colName + ')' from @colList for xml path(''))
    + ' from YOURTABLENAME'
exec (@cmd)
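For comparison, the non-dynamic version of the same idea is just nested REPLACE calls over the placeholder columns (column names taken from the sample data above; the table name is a placeholder):

```sql
select replace(replace(replace(Content,
         '#Name', Name),
         '#CTC', CTC),
         '#address', Address) as resolved_content
from YOURTABLENAME
```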
or your request was different ? -
Importing multiple jpeg files from local folder into database LOB column
I have to programmatically save multiple pictures (JPEG) from a folder on my PC into an Oracle table LOB column. I have to be able to choose a local folder on my PC where the pictures are, and press a button in Oracle Forms to save the pictures into a LOB column in the database.
I'm using Forms 6i and Oracle 10g Rel2 database.
Is this possible with Oracle Forms, or is the only way to do it to use the CREATE DIRECTORY database command and the DBMS_LOB package? I shouldn't do that, because an Oracle database directory is not allowed to see my local folder.
As I said, I don't know how to use the object data type; I just gave it a shot as below. I know the following code has errors. Can you please correct it for me?
Public Sub Main()
    ' Scan each file in the list for its "DATE" line and capture the date value
    Dim s1 As StreamReader
    Dim rline As String
    Dim date1(2) As String
    Dim Filelist(1) As String
    Dim FileName As String
    Dim i As Integer

    i = 1
    Filelist(0) = "XYZ"
    Filelist(1) = "123"

    For Each FileName In Filelist
        s1 = File.OpenText(FileName)
        rline = s1.ReadLine
        While Not rline Is Nothing
            If Left(rline, 4) = "DATE" Then
                ' Characters 7..14 of the line hold the 8-character date
                date1(i) = Mid(rline, 7, 8)
                i = i + 1
                Exit While
            End If
            rline = s1.ReadLine
        End While
        s1.Close()
    Next

    Dts.Variables("date").Value = date1(1)
    Dts.Variables("date1").Value = date1(2)
    Dts.TaskResult = ScriptResults.Success
End Sub