NOLOGGING for LOBs
Oracle 11gR2 rhel5 64bit
Hi all,
We are trying to figure out a way to reduce the amount of redo generated when we insert LOB data into a table. Our database is in ARCHIVELOG mode and we have also set the table to NOLOGGING. However, each time we see an increase in redo generation compared to when the table was in LOGGING mode. I'm monitoring "redo size" through OEM for the specific session. What could I be doing wrong? Are there any other factors I need to be aware of?
Thanks.
NOLOGGING is supported in only a subset of the locations that support LOGGING. Only the operations listed at the URL below support NOLOGGING mode:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/clauses005.htm#i999782
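In practice, for LOB columns the attribute has to be set on the LOB segment itself and the load has to be direct-path. A minimal sketch, assuming a hypothetical table lob_tab with BLOB column doc_data and a staging table lob_tab_stage:

```sql
-- NOLOGGING on the table alone is not enough for a LOB: the LOB
-- segment carries its own CACHE/LOGGING attributes, and a CACHE LOB
-- generates redo regardless of the table-level setting.
ALTER TABLE lob_tab MODIFY LOB (doc_data) (NOCACHE NOLOGGING);

-- Conventional-path inserts still generate full redo for the table
-- blocks; a direct-path (APPEND) load is what honours NOLOGGING.
INSERT /*+ APPEND */ INTO lob_tab
SELECT * FROM lob_tab_stage;
COMMIT;
```

Also note that the FORCE LOGGING setting at the database or tablespace level, if enabled, silently overrides any NOLOGGING attribute, which would explain seeing no reduction in redo.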
Similar Messages
-
Should we use LOGGING or NOLOGGING for table, lob segment, and indexes?
We have some DML performance issues from CF contention on tables that also include LOB segments. In this case, should we define LOGGING on the tables, LOB segments, and/or indexes?
Based on the Metalink note <Performance Degradation as a Result of 'enq: CF - contention' [ID 1072417.1]>, it looks like we need to turn on logging for at least the table and LOB segment. What about the indexes?
Thanks!
These tables that have NOLOGGING likely came from the application team. Yes, we need to switch from NOLOGGING to LOGGING for the tables and LOB segments. What about the indexes?
>
Indexes only get modified when the underlying table is modified. When you need recovery you don't want to do things that can interfere with Oracle's ability to perform its normal recovery. For indexes there will never be loss of data that can't be recovered by rebuilding the index.
But use of NOLOGGING means that NO RECOVERY is possible. For production objects you should ALWAYS use LOGGING. Even for those use cases where NOLOGGING is appropriate for a table (loading a large amount of data into a staging table), the indexes are typically dropped (or at least disabled) before the load and then rebuilt afterward, with NOLOGGING used during the rebuild. Normal index operations will be logged anyway, so for these 'offline' staging tables the setting on the indexes doesn't really matter. Still, as a rule of thumb, you only use NOLOGGING during the specific load (for a table) or rebuild (for an index), and then you ALTER the setting back to LOGGING.
This is from Tom Kyte in his AskTom blog from over 10 years ago and it still applies today.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:5280714813869
>
NO NO NO -- it does not make sense to leave objects in NOLOGGING mode in a production
instance!!!! it should be used CAREFULLY, and only in close coordination with the guys
responsible for doing backups -- every non-logged operation performed makes media
recovery for that segment IMPOSSIBLE until you back it up.
>
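One way to see the impact of unlogged operations on recoverability is to query v$datafile, which tracks the most recent unrecoverable change per datafile:

```sql
-- Datafiles containing segments changed by NOLOGGING operations;
-- these files need a backup taken after unrecoverable_time before
-- full media recovery is possible again.
SELECT file#, unrecoverable_change#, unrecoverable_time
FROM   v$datafile
WHERE  unrecoverable_time IS NOT NULL;
```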
Use of NOLOGGING is a special-case operation. It is mainly used in data warehouse (OLAP) systems during truncate-and-load operations on staging tables. Those are background or even offline operations and the tables are NOT accessible by end users; they are work tables used to prepare the data that will be merged into the production tables.
1. TRUNCATE a table
2. load the table with data
3. process the data in the table
In those operations the loaded table is seldom backed up and rarely needs recovery. So use of NOLOGGING enhances the performance of the data load, and the data can be recovered, if necessary, from the source it was loaded from to begin with.
NOLOGGING is rarely, if ever, used for OLTP systems, since that data needs to be recoverable. -
JPA - lazy loading for LOB field
Hi all,
JPA 1.0 specification mandates that all JPA-compliant implementation support lazy loading for certain kind of entity field.
For LOB fields lazy loading is OPTIONAL.
I am experiencing odd runtime behaviors on my custom software which would point to this feature not being supported.
Can anyone please tell me if SAP JPA 1.0 implementation on NW CE 711 implements this feature or not?
Thanks in advance
Regards
Vincenzo

Hi Vincenzo,
I am sure that this is the same as with single-valued relationships (@OneToOne, @ManyToOne): Lazy loading would require bytecode manipulation/generation, so SAP JPA does not support it in 7.20 (and of course not in 7.11)
See tulsi jiddimani's elaborate answer here: Re: JPA: Documentation on LazyLoad.
In the 7.30 enhancements you can indeed find lazy loading support for single-valued relationships via getReference:
http://help.sap.com/saphelp_nw73/helpdata/en/68/f676ef36094f4381467a308a98fd2a/content.htm
but @Lob and @Basic are not mentioned.
If you need lazy loading in 7.11, you have two alternatives:
1. Put the LOB fields into separate entities and work around the missing feature in SAP JPA with ugly @OneToMany relations.
2. Use another persistence provider like EclipseLink; read Sabine Heider's blogs about integration of EclipseLink in SAP NetWeaver and static bytecode weaving for lazy loading. /people/sabine.heider/blog
Regards
Rolf -
Nologging for insert statement
Hello,
Is it possible to use NOLOGGING with INSERT statement?
For example:
INSERT /*+ APPEND PARALLEL(test,4)*/INTO test NOLOGGING (select .....)
Will it take NOLOGGING as an alias name for the table, or what is the correct way to use NOLOGGING in an INSERT statement?

There is no such thing as a NOLOGGING hint (or INSERT keyword, as you've tried it).
You must use ALTER TABLE table_name NOLOGGING; -
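Putting the answer together with the original statement, a sketch of the working equivalent (the INSERT is from the question; the source query is illustrative):

```sql
-- Set the attribute on the table first; it is not part of the INSERT syntax.
ALTER TABLE test NOLOGGING;

-- The APPEND hint requests a direct-path insert, which is what
-- actually avoids redo on a NOLOGGING segment.
INSERT /*+ APPEND PARALLEL(test, 4) */ INTO test
SELECT * FROM test_source;
COMMIT;

-- Switch back once the load is complete.
ALTER TABLE test LOGGING;
```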
XML log: Error during temp file creation for LOB Objects
Hi All,
I got this exception in the concurrent log file:
[122010_100220171][][EXCEPTION] !!Error during temp file creation for LOB Objects
[122010_100220172][][EXCEPTION] java.io.FileNotFoundException: null/xdo-dt-lob-1292864540169.tmp (No such file or directory (errno:2))
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:98)
at oracle.apps.xdo.dataengine.LOBList.initLOB(LOBList.java:39)
at oracle.apps.xdo.dataengine.LOBList.<init>(LOBList.java:30)
at oracle.apps.xdo.dataengine.XMLPGEN.updateMetaData(XMLPGEN.java:1051)
at oracle.apps.xdo.dataengine.XMLPGEN.processSQLDataSource(XMLPGEN.java:511)
at oracle.apps.xdo.dataengine.XMLPGEN.writeData(XMLPGEN.java:445)
at oracle.apps.xdo.dataengine.XMLPGEN.writeGroup(XMLPGEN.java:1121)
at oracle.apps.xdo.dataengine.XMLPGEN.writeGroup(XMLPGEN.java:1144)
at oracle.apps.xdo.dataengine.XMLPGEN.processSQLDataSource(XMLPGEN.java:558)
at oracle.apps.xdo.dataengine.XMLPGEN.writeData(XMLPGEN.java:445)
at oracle.apps.xdo.dataengine.XMLPGEN.writeGroupStructure(XMLPGEN.java:308)
at oracle.apps.xdo.dataengine.XMLPGEN.processData(XMLPGEN.java:273)
at oracle.apps.xdo.dataengine.XMLPGEN.processXML(XMLPGEN.java:215)
at oracle.apps.xdo.dataengine.XMLPGEN.writeXML(XMLPGEN.java:254)
at oracle.apps.xdo.dataengine.DataProcessor.processDataStructre(DataProcessor.java:390)
at oracle.apps.xdo.dataengine.DataProcessor.processData(DataProcessor.java:355)
at oracle.apps.xdo.oa.util.DataTemplate.processData(DataTemplate.java:348)
at oracle.apps.xdo.oa.cp.JCP4XDODataEngine.runProgram(JCP4XDODataEngine.java:293)
at oracle.apps.fnd.cp.request.Run.main(Run.java:161)
I have this query defined in my data template:
<![CDATA[
SELECT lt.long_text inv_comment
FROM apps.fnd_attached_docs_form_vl ad,
apps.fnd_documents_long_text lt
WHERE ad.media_id = lt.media_id
AND ad.category_description = 'Draft Invoice Comments'
AND ad.pk1_value = :project_id
AND ad.pk2_value = :draft_invoice_num
]]>
Issue: The inv_comment is not printing on the PDF output.
I had the temp directory defined under the Admin tab.
I'm guessing it's the LONG datatype of the long_text field that's causing the issue.
Does anybody know how this can be fixed? Any help or advice is appreciated.
Thanks.
SW
Edited by: user12152845 on Dec 20, 2010 11:48 AM -
User_lobs and user_segments view shows different tablespace for lob segment
Hi,
I am trying to move a LOB to a different tablespace for a partitioned table.
I used:
alter table <table_name> move partition <partition_name> lob(lob_column) store as (tablespace <tablespace_name>);
alter index <index_name> rebuild partition <partition_name> tablespace <tablespace_name>
ALTER TABLE <table_name> MODIFY DEFAULT ATTRIBUTES TABLESPACE <tablespace_name>
ALTER INDEX <index_name> modify default ATTRIBUTES TABLESPACE <tablespace_name>
Database - 10.2.0.5
OS- HP Itanium 11.31
In user_lob_partitions, user_segments, and user_part_tables I can see the new tablespace information, whereas user_lobs and user_part_indexes show different information regarding the tablespace.
I checked some documents on Metalink but they didn't help me.
I think that I am missing something or doing some step wrong.
Please help.
SQL> select partition_name, lob_partition_name, tablespace_name from user_lob_partitions where table_name in ('TRB1_PUB_LOG','TRB1_SUB_ERRS','TRB1_PUB_LOG') ;
PARTITION_NAME LOB_PARTITION_NAME TABLESPACE_NAME
S2000 SYS_LOB_P8585 USAGE_REORG_TBS
S2001 SYS_LOB_P8587 USAGE_REORG_TBS
S2003 SYS_LOB_P8589 USAGE_REORG_TBS
S2004 SYS_LOB_P8591 USAGE_REORG_TBS
S2005 SYS_LOB_P8593 USAGE_REORG_TBS
S2006 SYS_LOB_P8595 USAGE_REORG_TBS
S2007 SYS_LOB_P8597 USAGE_REORG_TBS
S2008 SYS_LOB_P8599 USAGE_REORG_TBS
S2010 SYS_LOB_P8601 USAGE_REORG_TBS
S2011 SYS_LOB_P8603 USAGE_REORG_TBS
S2012 SYS_LOB_P8605 USAGE_REORG_TBS
S2013 SYS_LOB_P8607 USAGE_REORG_TBS
S2014 SYS_LOB_P8609 USAGE_REORG_TBS
S2015 SYS_LOB_P8611 USAGE_REORG_TBS
S2888 SYS_LOB_P8613 USAGE_REORG_TBS
S2999 SYS_LOB_P8615 USAGE_REORG_TBS
S3000 SYS_LOB_P8617 USAGE_REORG_TBS
S3001 SYS_LOB_P8619 USAGE_REORG_TBS
S3004 SYS_LOB_P8621 USAGE_REORG_TBS
S3005 SYS_LOB_P8623 USAGE_REORG_TBS
S3006 SYS_LOB_P8625 USAGE_REORG_TBS
S3007 SYS_LOB_P8627 USAGE_REORG_TBS
S3008 SYS_LOB_P8629 USAGE_REORG_TBS
S3009 SYS_LOB_P8631 USAGE_REORG_TBS
S3010 SYS_LOB_P8633 USAGE_REORG_TBS
S3011 SYS_LOB_P8635 USAGE_REORG_TBS
S3012 SYS_LOB_P8637 USAGE_REORG_TBS
S3013 SYS_LOB_P8639 USAGE_REORG_TBS
S3014 SYS_LOB_P8641 USAGE_REORG_TBS
S3015 SYS_LOB_P8643 USAGE_REORG_TBS
S3050 SYS_LOB_P8645 USAGE_REORG_TBS
SMAXVALUE SYS_LOB_P8647 USAGE_REORG_TBS
32 rows selected.
SQL> select TABLE_NAME,COLUMN_NAME,SEGMENT_NAME,TABLESPACE_NAME,INDEX_NAME,PARTITIONED from user_lobs where TABLE_NAME in ('TRB1_PUB_LOG','TRB1_SUB_ERRS','TRB1_SUB_LOG') ;
TABLE_NAME COLUMN_NAME SEGMENT_NAME TABLESPACE_NAME INDEX_NAME PAR
TRB1_SUB_ERRS GENERAL_DATA_C SYS_LOB0006703055C00017$$ TRBDBUS1LNN1 SYS_IL0006703055C00017$$ YES
TRB1_SUB_LOG GENERAL_DATA_C SYS_LOB0006703157C00017$$ TRBDBUS1LNN1 SYS_IL0006703157C00017$$ YES
TRB1_PUB_LOG GENERAL_DATA_C SYS_LOB0006702987C00014$$ TRBDBUS1LNN1 SYS_IL0006702987C00014$$ YES
SQL> SQL> select unique segment_name ,tablespace_name from user_Segments where segment_name in (select SEGMENT_NAME from user_lobs where TABLE_NAME in('TRB1_PUB_LOG','TRB1_SUB_ERRS','TRB1_SUB_LOG') );
SEGMENT_NAME TABLESPACE_NAME
SYS_LOB0006702987C00014$$ USAGE_REORG_TBS
SYS_LOB0006703055C00017$$ USAGE_REORG_TBS
SYS_LOB0006703157C00017$$ USAGE_REORG_TBS
SQL> select unique segment_name ,tablespace_name from user_Segments where segment_name in (select INDEX_NAME from user_lobs where TABLE_NAME in('TRB1_PUB_LOG','TRB1_SUB_ERRS','TRB1_SUB_LOG') );
SEGMENT_NAME TABLESPACE_NAME
SYS_IL0006702987C00014$$ USAGE_REORG_TBS
SYS_IL0006703055C00017$$ USAGE_REORG_TBS
SYS_IL0006703157C00017$$ USAGE_REORG_TBS
SQL> select unique index_name,def_tablespace_name from user_part_indexes where table_name in ('TRB1_PUB_LOG','TRB1_SUB_ERRS','TRB1_SUB_LOG');
INDEX_NAME DEF_TABLESPACE_NAME
SYS_IL0006702987C00014$$ TRBDBUS1LNN1
SYS_IL0006703055C00017$$ TRBDBUS1LNN1
SYS_IL0006703157C00017$$ TRBDBUS1LNN1
TRB1_PUB_LOG_PK USAGE_REORG_IX_TBS
TRB1_SUB_ERRS_1IX USAGE_REORG_IX_TBS
TRB1_SUB_ERRS_1UQ USAGE_REORG_IX_TBS
TRB1_SUB_ERRS_2IX USAGE_REORG_IX_TBS
TRB1_SUB_LOG_1IX USAGE_REORG_IX_TBS
TRB1_SUB_LOG_PK USAGE_REORG_IX_TBS
SQL> select unique def_tablespace_name from user_part_tables where table_name in ('TRB1_PUB_LOG','TRB1_SUB_ERRS','TRB1_SUB_LOG');
DEF_TABLESPACE_NAME
USAGE_REORG_TBS
Please let me know if more details are required.

>whereas user_lobs and user_part_indexes shows me different information regarding tablespace.
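For reference, the dictionary check to repeat from a fresh session (table names taken from the post above):

```sql
-- Re-run after reconnecting, once the ALTER ... MOVE statements
-- have completed:
SELECT table_name, tablespace_name
FROM   user_lobs
WHERE  table_name IN ('TRB1_PUB_LOG', 'TRB1_SUB_ERRS', 'TRB1_SUB_LOG');
```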
do you see different results after starting a new session after the ALTER statements were issued? -
Large Block Chunk Size for LOB column
Oracle 10.2.0.4:
We have a table with 2 LOB columns. Avg blob size of one of the columns is 122K and the other column is 1K. so I am planning to move column with big blob size to 32K chunk size. Some of the questions I have is:
1. Do I need to create a new tablespace with a 32K block size and then create the table with a 32K chunk size for that LOB column, or can I just create the table with a 32K chunk size on the existing tablespace, which has an 8K block size? What are the advantages or disadvantages of one approach over the other?
2. Currently db_cache_size is set to "0", do I need to adjust some parameters for large chunk/block size?
3. If I create a 32K chunk, is that chunk shared with other rows? For example, if I insert a 2K block, would the remaining 30K be available for other rows? The following link says the 30K would be wasted space:
[LOB performance|http://www.oracle.com/technology/products/database/application_development/pdf/lob_performance_guidelines.pdf]
Below is the output of v$db_cache_advice:
select
size_for_estimate c1,
buffers_for_estimate c2,
estd_physical_read_factor c3,
estd_physical_reads c4
from
v$db_cache_advice
where
name = 'DEFAULT'
and
block_size = (SELECT value FROM V$PARAMETER
WHERE name = 'db_block_size')
and
advice_status = 'ON';
C1 C2 C3 C4
2976 368094 1.2674 150044215
5952 736188 1.2187 144285802
8928 1104282 1.1708 138613622
11904 1472376 1.1299 133765577
14880 1840470 1.1055 130874818
17856 2208564 1.0727 126997426
20832 2576658 1.0443 123639740
23808 2944752 1.0293 121862048
26784 3312846 1.0152 120188605
29760 3680940 1.0007 118468561
29840 3690835 1 118389208
32736 4049034 0.9757 115507989
35712 4417128 0.93 110102568
38688 4785222 0.9062 107284008
41664 5153316 0.8956 106034369
44640 5521410 0.89 105369366
47616 5889504 0.8857 104854255
50592 6257598 0.8806 104258584
53568 6625692 0.8717 103198830
56544 6993786 0.8545 101157883
59520 7361880 0.8293 98180125

With only a 1K LOB you are going to want to use an 8K chunk size; per the Oracle LOB document referenced in the thread above, the chunk size is the allocation unit.
Each LOB column has its own LOB segment, so each column can have its own LOB chunk size.
The LOB data type is not known for being space efficient.
There are major changes in 11g, with SecureFiles available to replace traditional LOBs (now called BasicFiles). The differences are mostly in how the LOB data and segments are managed by Oracle.
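A hedged sketch of a 32K-chunk LOB, which requires a tablespace created with a 32K block size and a matching db_32k_cache_size (all names are illustrative):

```sql
-- The 32K-blocksize tablespace big_lob_ts must already exist, and
-- db_32k_cache_size must be set before it can be created.
CREATE TABLE doc_store (
  id        NUMBER,
  small_lob BLOB,  -- ~1K average: leave at the default 8K chunk
  big_lob   BLOB   -- ~122K average: candidate for a 32K chunk
)
LOB (big_lob) STORE AS (
  TABLESPACE big_lob_ts
  CHUNK 32768
);
```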
HTH -- Mark D Powell -- -
ADD_SUBSET_RULES for LOB
Oracle 11g:
I need to add a subset rule, e.g. (column * (some formula)) = result. But the problem is that the table has LOB columns, and the subset rule documentation says:
Also, the specified table cannot have any LOB, LONG, or LONG RAW columns currently or in the future.
I am currently stuck on what I can do to get rid of that row in the capture process.

Just a quick feedback on this "empty array".
The documentation is not crystal clear, but I am convinced it refers to the array of LCRs inside the ANYDATA and not to an array of ANYDATA. Nevertheless, I have explored the array of ANYDATA using an action context associated with STREAMS$_ARRAY_TRANS_FUNCTION (as opposed to STREAMS$_TRANS_FUNCTION), which allows the return of many ANYDATA for a single ANYDATA input, and managed to produce empty ANYDATA.
I did not find any reference or example to help me on Google or Metalink, so I am publishing my findings here, as they may serve other people searching for info about the usage of STREAMS$_ARRAY_TRANS_FUNCTION.
Here is the type of function that could allow this.
The action context must be associated with this _ARRAY function:
declare
v_dml_rule_name VARCHAR2(30);
v_ddl_rule_name VARCHAR2(30);
action_ctx sys.re$nv_list;
ac_name varchar2(30) := 'STREAMS$_ARRAY_TRANS_FUNCTION'; -- note the '_ARRAY'
BEGIN
action_ctx := sys.re$nv_list(sys.re$nv_array());
action_ctx.add_pair( ac_name, sys.anydata.convertvarchar2('strmadmin.DML_TRANSFORM_FUNCT'));
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'STRMADMIN.TEST_DROP_COL',
streams_type => 'capture',
streams_name => 'TEST_CAPTURE',
queue_name => 'STRMADMIN.TEST_CAPTURE_Q',
include_dml => true,
include_ddl => false,
include_tagged_lcr => false,
inclusion_rule => true,
dml_rule_name => v_dml_rule_name ,
ddl_rule_name => v_ddl_rule_name );
dbms_rule_adm.alter_rule( rule_name => v_dml_rule_name, action_context => action_ctx);
END;
/

This creates a variation of the Streams transformation function, which is reported as a 'ONE TO MANY' transformation in DBA_STREAMS_TRANSFORM_FUNCTION:
col TRANSFORM_FUNCTION_NAME for a50
col VALUE_TYPE for a30
select RULE_OWNER, RULE_NAME, VALUE_TYPE, TRANSFORM_FUNCTION_NAME, CUSTOM_TYPE
from DBA_STREAMS_TRANSFORM_FUNCTION;
RULE_OWNER RULE_NAME VALUE_TYPE TRANSFORM_FUNCTION_NAME CUSTOM_TYPE
STRMADMIN TEST_DROP_COL44 SYS.VARCHAR2 strmadmin.DML_TRANSFORM_FUNCT ONE TO MANY
The function becomes more complicated, to adapt to this multi-ANYDATA dimension:
CREATE OR REPLACE function DML_TRANSFORM_FUNCT( inAnyData in SYS.AnyData)
RETURN sys.STREAMS$_ANYDATA_ARRAY IS
ret pls_integer;
typelcr VARCHAR2(61);
lcrOut SYS.LCR$_ROW_RECORD;
var_any anydata ;
v_num number ;
v_lcr SYS.LCR$_ROW_RECORD;
v_arr SYS.STREAMS$_ANYDATA_ARRAY;
BEGIN
v_arr:=SYS.STREAMS$_ANYDATA_ARRAY();
typelcr := inAnyData.getTypeName();
IF typelcr = 'SYS.LCR$_ROW_RECORD' THEN
-- Typecast AnyData to LCR$_ROW_RECORD
ret := inAnyData.GetObject(lcrOut);
IF lcrOut.get_object_owner() = 'STRMADMIN' THEN
IF lcrOut.get_object_name() = 'TEST_DROP_COL' THEN
lcrOut.delete_column(column_name=>'DATA_LONG');
-- check if we don't need to discard this LCR
var_any := lcrOut.get_Value('NEW','MYKEY') ;
if var_any is not null then
ret:=var_any.getnumber(v_num);
if v_num = 4 then
-- We do nothing but return a null ANYDATA
RETURN v_arr ;
end if;
end if;
-- this LCR is not to be discarded, then let's apply our transformation
lcrOut.set_value('new','DATA_SHORT',anydata.convertvarchar2('PP CONVERTED') );
v_arr.extend;
v_arr(1) :=SYS.AnyData.ConvertObject(lcrOut);
RETURN v_arr;
END IF;
END IF;
END IF;
-- if we are here then the LCR is not a row
-- or the row was not one bound for transformation,
-- so we alter nothing
v_arr.extend;
v_arr(1) :=inAnyData ;
RETURN v_arr;
END;
/

Alas, when the value of MYKEY is null the function produces a null ANYDATA, which is explicitly forbidden:
http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/d_streams_adm.htm#CDEJFBHD
Logminer Captured Capture
ID Capture user Start scn CHANGE_TIME Type Rule set Name Neg rule set Status
41 STRMADMIN 110500477205 02-07 13:54:48 LOCAL RULESET$_28 ABORTED
Last remote
Last system Last scn Delay Last scn confirmed Delay
Capture name QUEUE_NAME scn Scanned Scanned enqueued scn Applied Enq-Applied
TEST_CAPTURE TEST_CAPTURE_Q 110503647881
no rows selected
ORA-26747: The one-to-many transformation function strmadmin.DML_TRANSFORM_FUNCT
encountered the following error: return of NULL anydata array element not allowed

Of course, I could generate the header:
v_lcr SYS.LCR$_ROW_RECORD;
ret:=var_any.getnumber(v_num);
if v_num = 4 then
-- lcrOut.set_value('new','DATA_SHORT',anydata.convertvarchar2('v_num = '||to_char(v_num)) );
v_lcr := SYS.LCR$_ROW_RECORD.CONSTRUCT (
source_database_name=>SYS_CONTEXT('USERENV','DB_NAME'),
command_type=>lcrOut.get_command_type(),
object_owner=>lcrOut.GET_OBJECT_OWNER(),
object_name=>lcrOut.GET_OBJECT_NAME() );
v_arr.extend;
v_arr(1) := SYS.AnyData.ConvertObject(v_lcr);
RETURN v_arr ;
end if;

But the LCR is created and sent, not discarded as expected, despite the fact that there are no columns, and it fails miserably at the apply site.
That's all, and that's a failure. Maybe somebody else will have a better inspiration.
Edited by: bpolarsk on Jul 2, 2009 5:57 AM -
Hi
I am trying to use the createTemporary() method provided by the Oracle OCI driver to insert a BLOB into the database. I have the Oracle client installed on my PC. While executing the above method, an "UnsatisfiedLinkError" is reported, saying "lob_createTemporary" not found. I am of the opinion that this could be a .dll that gets installed with the Oracle client. Do we need to do any custom installation to install DLLs related to LOBs when we install the Oracle client?
Regards
Ramkumar -
How to increase no of properties for LOB in BCS code
HI Team,
I have BCS code in visual studio. which is working fine and now i have a requirement to add few more properties. After adding new properties into .bdcm xml file, it is giving error that
the no of properties exceeds 50 and it is not allowing to create one more property.
Let me know, where can we modify no of properties per LOB system in BCS code. currently we have already 50 properties, and now i want to add few more properties.
Thanks & Regards, Neerubee -
Help on error : OIP-04902 - Invalid buffer length for LOB write op
We have a bit of VBA in Excel 2003 writing a BLOB to an 11.2.0.2 DB; for certain users it throws the above message. I can't find any useful info on it; does anyone know the causes/resolution, please?
Resolved: a zero-byte object was being passed, which caused the error.
-
Nologging for a force refresh MV
Hi ,
I am using Oracle 10g.
I have created a complex MV, thus the need for a force refresh, and I intend to put this MV into a refresh group.
How can I actually get Oracle to truncate the data during the force refresh and to insert without logging?
I have tried to hide the complex view that uses an analytic function behind another view, but it still complains that it is a complex query.
Please advise.
Thanks & regards

MVs are powerful but aren't as customizable as we would sometimes like. They function as they will, and sometimes variants like the ones you need are difficult to implement, if possible at all.
On our project we wanted to use a MV that needed to be completely refreshed daily, using SYSDATE to swap effectivity dates in and out, but kept current through the day too. SYSDATE prevented the possibility of a fast refresh. We ended up creating two tables of our own (to use one while the other was being refreshed) to implement the functionality we wanted. You could consider doing something similar.
I hope someone else can give you a more encouraging answer.
Good luck! -
Conflict resolution for a table with LOB column ...
Hi,
I was hoping for some guidance or advice on how to handle conflict resolution for a table with a LOB column.
Basically, I had intended to handle the conflict resolution using the MAXIMUM prebuilt update conflict handler. I also store the 'update' transaction time in the same table and was planning to use this as the resolution column to resolve the conflict.
I see, however, that these prebuilt conflict handlers do not support LOB columns. I assume therefore that I need to code a custom handler to do this for me. I'm not sure exactly what my custom handler needs to do, though! Any guidance or links to similar examples would be very much appreciated.

Hi,
I have been unable to make any progress on this issue. I have made use of prebuilt update handlers with no problems before, but I just don't know how to resolve these conflicts for LOB columns using custom handlers. I have some questions which I hope make sense and are relevant:
1. Does an apply process detect update conflicts on LOB columns?
2. If I need to create a custom update/error handler to resolve this, should I create a prebuilt update handler for the non-LOB columns in the table and then a separate one for the LOB columns, OR is it best just to code a single custom handler for ALL columns?
3. In my custom handler, I assume I will need to use the resolution column to decide whether or not to resolve the conflict in favour of the LCR, but how do I compare the new value in the LCR with that in the destination database? I mean, how do I access the current value in the destination database from the custom handler?
4. Finally, if I need to resolve in favour of the LCR, do I need to call something specific for LOB-related columns compared to non-LOB columns?
Any help with these would be very much appreciated or even if someone can direct me to documentation or other links that would be good too.
Thanks again. -
Hello everyone!!!
I'm using for testing an ORACLE XE on my own XP machine (which has a cluster of 4K), and I've created a tablespace for LOBs with those settings:
CREATE TABLESPACE lobs_tablespace
DATAFILE '.....lobs_datafile.dbf'
SIZE 20M
AUTOEXTEND ON
NEXT 20M
MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
BLOCKSIZE 16K;
so I created a test table with a LOB field:
CREATE TABLE test_table (
ID NUMBER,
FIELD1 VARCHAR2(50),
FIELD2 BLOB,
CONSTRAINT......
LOB (FIELD2) STORE AS LOB_FIELD2 (
TABLESPACE lobs_tablespace
STORAGE (INITIAL 5M NEXT 5M PCTINCREASE 0 MAXEXTENTS 99)
CHUNK 16384
NOCACHE NOLOGGING
INDEX LOB_FIELD2_IDX (
TABLESPACE lobs_tablespace_idxs));
where lobs_tablespace_idxs is created with blocksize of 16K
So at this point, because I'm doing some tests on functions, I tried to insert into this table with:
FOR i IN 1..10000 LOOP
fn_insert_into_table('description', 'filename');
END LOOP;
trying to insert a Word file of almost 5MB, and the datafile lobs_datafile.dbf increased from its starting 50M to almost 5GB...
I have some parameters settled as:
db_16K_cache_size=1028576
db_block_checking = false
undo_management = auto
db_block_size = 8192
sga_target = 128M
sga_max_size = 256M
So the question is: doing some arithmetic, 5MB of a file * 10000 should be at most 60MB... not 5GB... so why did the datafile increase as much as it did? Do I have to check something else that I've missed?
Thanks a lot to everyone! :-)

Hi,
I'm guessing that you'll need to do a bit of a re-org in order to free up the space.
You may well be able to do that just at the LOB level, rather than rebuilding the entire table.
There's stuff about that in Chapter 3 of the 10G App Developers Guide: Large Objects.
Of course, if the table is now empty, then you might as well just drop it and recreate it.
After that, you should be able to resize the datafile. -
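A sketch of those re-org and resize steps (names follow the post; the target size is illustrative):

```sql
-- Release unused space from the LOB segment (10g+, ASSM tablespace).
ALTER TABLE test_table MODIFY LOB (FIELD2) (SHRINK SPACE);

-- Then shrink the datafile back down.
ALTER DATABASE DATAFILE '.....lobs_datafile.dbf' RESIZE 100M;
```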
Hi all,
I'm working with the 11gR2 version in a RAC environment with 2 nodes. I'm creating the schemas on my new database with empty tablespaces and 100MB of initial free space. I'm getting an error on the creation of this table:
CREATE TABLE "IODBODB1"."DOCUMENTOS"
( "ID_DOC" NUMBER NOT NULL ENABLE,
"ID_APP" NUMBER(5,0) NOT NULL ENABLE,
"ID_CLS" NUMBER(6,0) NOT NULL ENABLE,
"CREATE_FEC" DATE NOT NULL ENABLE,
"LAST_MODIFIED_FEC" DATE NOT NULL ENABLE,
"DOC_NAME" VARCHAR2(100 BYTE) NOT NULL ENABLE,
"DOC_SIZE" NUMBER NOT NULL ENABLE,
"DOC_DATA" BLOB NOT NULL ENABLE,
"CLASE_FORMATO_FORMATO" VARCHAR2(150 BYTE),
"REGULAR_NAME" VARCHAR2(100 BYTE) NOT NULL ENABLE,
"USUARIO" VARCHAR2(25 BYTE),
"USUARIO_LEVEL" VARCHAR2(5 BYTE)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ODBODB_TBD_DADES"
LOB ("DOC_DATA") STORE AS BASICFILE (ENABLE STORAGE IN ROW CHUNK 8K PCTVERSION 10 NOCACHE LOGGING STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
PARTITION BY LIST ("ID_APP")
SUBPARTITION BY RANGE ("ID_DOC")
(PARTITION "PTL_DOCUMENTOS_FICTICIO" VALUES (0) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ODBODB_TBD_DADES"
LOB ("DOC_DATA") STORE AS BASICFILE (ENABLE STORAGE IN ROW CHUNK 8K PCTVERSION 10 NOCACHE LOGGING STORAGE(BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
(SUBPARTITION "PTR_DOCUMENTOS_FICTICIO" VALUES LESS THAN (MAXVALUE) LOB ("DOC_DATA") STORE AS (TABLESPACE "ODBODB_TBL_LOBS" ) TABLESPACE "ODBODB_TBD_DADES" NOCOMPRESS ),
PARTITION "PTL_DOCUMENTOS_ALFIMG" VALUES (2) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ODBODB_TBD_DADES"
LOB ("DOC_DATA") STORE AS BASICFILE (ENABLE STORAGE IN ROW CHUNK 32K PCTVERSION 10 NOCACHE LOGGING STORAGE(BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
(SUBPARTITION "STR_DOCUMENTOS_ALFIMG" VALUES LESS THAN (MAXVALUE) LOB ("DOC_DATA") STORE AS (TABLESPACE "BEALFIMG_TBL_ODB" ) TABLESPACE "BEALFIMG_TBD_ODB" NOCOMPRESS ) );
The error is:
ORA-03252: initial extent size not enough for LOB segment
I can't understand why Oracle can't allocate enough extents to create the LOB segment, and I don't know which LOB segment it refers to, because SQL*Plus doesn't mark it with the error.
Any ideas about this issue?
Thanks in advance!
dbajug

Please forget this post, I feel dumb... The LOB segment needs to have the same size as the rest of the partition...