ADD_SUBSET_RULES for LOB
Oracle 11g:
I need to add a subset rule, e.g. (column * (some formula)) = result. The problem is that the table has LOB columns, and the subset rule documentation says:
Also, the specified table cannot have any LOB, LONG, or LONG RAW columns currently or in the future.
I am currently at a loss as to what I can do to get rid of that row in the capture process.
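For reference, this is the kind of call that documentation restriction rules out on a table with LOB columns (the table, queue, and condition here are hypothetical placeholders):

```sql
BEGIN
  -- Declarative row filter: only rows satisfying dml_condition are captured.
  -- This raises an error if the table has LOB, LONG, or LONG RAW columns.
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name    => 'STRMADMIN.TEST_DROP_COL',
    dml_condition => 'mykey * 2 = 8',   -- the (column * formula) = result filter
    streams_type  => 'capture',
    streams_name  => 'TEST_CAPTURE',
    queue_name    => 'STRMADMIN.TEST_CAPTURE_Q');
END;
/
```

Since this declarative route is blocked, the thread below explores a custom rule-based transformation instead.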
Just some quick feedback on this "empty array".
The documentation is not crystal clear, but I am convinced it refers to the array of LCRs
inside the ANYDATA and not to an array of ANYDATA. Nevertheless, I have explored the array of ANYDATA
using an action context associated with STREAMS$_ARRAY_TRANS_FUNCTION (as opposed
to STREAMS$_TRANS_FUNCTION), which allows the return of many ANYDATA instances for a single
ANYDATA input, and I managed to produce an empty ANYDATA.
I did not find any reference or example to help me on Google or Metalink,
so I am publishing my findings here, as they may help other people in search of information about the usage of
STREAMS$_ARRAY_TRANS_FUNCTION.
Here is the kind of function that allows this.
The action context must be associated with the _ARRAY variant of the function name:
declare
  v_dml_rule_name VARCHAR2(30);
  v_ddl_rule_name VARCHAR2(30);
  action_ctx      sys.re$nv_list;
  ac_name         varchar2(30) := 'STREAMS$_ARRAY_TRANS_FUNCTION';  -- note the '_ARRAY'
BEGIN
  action_ctx := sys.re$nv_list(sys.re$nv_array());
  action_ctx.add_pair(ac_name,
      sys.anydata.convertvarchar2('strmadmin.DML_TRANSFORM_FUNCT'));
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'STRMADMIN.TEST_DROP_COL',
    streams_type       => 'capture',
    streams_name       => 'TEST_CAPTURE',
    queue_name         => 'STRMADMIN.TEST_CAPTURE_Q',
    include_dml        => true,
    include_ddl        => false,
    include_tagged_lcr => false,
    inclusion_rule     => true,
    dml_rule_name      => v_dml_rule_name,
    ddl_rule_name      => v_ddl_rule_name);
  dbms_rule_adm.alter_rule(rule_name => v_dml_rule_name, action_context => action_ctx);
END;
/
This creates a variant of a Streams transformation function, which is reported as a 'ONE TO MANY' transformation in DBA_STREAMS_TRANSFORM_FUNCTION:
col TRANSFORM_FUNCTION_NAME for a50
col VALUE_TYPE for a30
select RULE_OWNER, RULE_NAME, VALUE_TYPE, TRANSFORM_FUNCTION_NAME, CUSTOM_TYPE
from DBA_STREAMS_TRANSFORM_FUNCTION;
RULE_OWNER RULE_NAME VALUE_TYPE TRANSFORM_FUNCTION_NAME CUSTOM_TYPE
STRMADMIN TEST_DROP_COL44 SYS.VARCHAR2 strmadmin.DML_TRANSFORM_FUNCT ONE TO MANY
The function becomes more complicated to adapt to this multi-ANYDATA dimension:
CREATE OR REPLACE FUNCTION DML_TRANSFORM_FUNCT(inAnyData IN SYS.AnyData)
  RETURN sys.STREAMS$_ANYDATA_ARRAY
IS
  ret     pls_integer;
  typelcr VARCHAR2(61);
  lcrOut  SYS.LCR$_ROW_RECORD;
  var_any anydata;
  v_num   number;
  v_lcr   SYS.LCR$_ROW_RECORD;
  v_arr   SYS.STREAMS$_ANYDATA_ARRAY;
BEGIN
  v_arr   := SYS.STREAMS$_ANYDATA_ARRAY();
  typelcr := inAnyData.getTypeName();
  IF typelcr = 'SYS.LCR$_ROW_RECORD' THEN
    -- Typecast the AnyData to LCR$_ROW_RECORD
    ret := inAnyData.GetObject(lcrOut);
    IF lcrOut.get_object_owner() = 'STRMADMIN' THEN
      IF lcrOut.get_object_name() = 'TEST_DROP_COL' THEN
        lcrOut.delete_column(column_name => 'DATA_LONG');
        -- Check whether we need to discard this LCR
        var_any := lcrOut.get_value('NEW', 'MYKEY');
        IF var_any IS NOT NULL THEN
          ret := var_any.getnumber(v_num);
          IF v_num = 4 THEN
            -- Discard this LCR: return the still-empty array
            RETURN v_arr;
          END IF;
        END IF;
        -- This LCR is not to be discarded, so apply our transformation
        lcrOut.set_value('new', 'DATA_SHORT', anydata.convertvarchar2('PP CONVERTED'));
        v_arr.extend;
        v_arr(1) := SYS.AnyData.ConvertObject(lcrOut);
        RETURN v_arr;
      END IF;
    END IF;
  END IF;
  -- If we get here, the LCR is not a row LCR,
  -- or the row was not one bound for transformation,
  -- so we alter nothing.
  v_arr.extend;
  v_arr(1) := inAnyData;
  RETURN v_arr;
END;
/
Alas, when the value of MYKEY is null, the function produces
a null ANYDATA, which is explicitly forbidden:
http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/d_streams_adm.htm#CDEJFBHD
The capture process aborts:
Logminer ID 41, capture user STRMADMIN, start SCN 110500477205, change time 02-07 13:54:48, type LOCAL, rule set RULESET$_28, status ABORTED
Capture TEST_CAPTURE, queue TEST_CAPTURE_Q, last SCN scanned 110503647881
no rows selected
ORA-26747: The one-to-many transformation function strmadmin.DML_TRANSFORM_FUNCT
encountered the following error: return of NULL anydata array element not allowed
Of course, I could generate just the LCR header:
  v_lcr SYS.LCR$_ROW_RECORD;
  ...
  ret := var_any.getnumber(v_num);
  if v_num = 4 then
    -- lcrOut.set_value('new','DATA_SHORT',anydata.convertvarchar2('v_num = '||to_char(v_num)));
    v_lcr := SYS.LCR$_ROW_RECORD.CONSTRUCT(
      source_database_name => SYS_CONTEXT('USERENV','DB_NAME'),
      command_type         => lcrOut.get_command_type(),
      object_owner         => lcrOut.GET_OBJECT_OWNER(),
      object_name          => lcrOut.GET_OBJECT_NAME());
    v_arr.extend;
    v_arr(1) := SYS.AnyData.ConvertObject(v_lcr);
    RETURN v_arr;
  end if;
But that LCR is created and sent, not discarded as expected, even though it has no columns, and it fails miserably at the apply site.
That's all, and that's a failure. Maybe somebody else will have better inspiration.
Edited by: bpolarsk on Jul 2, 2009 5:57 AM
Similar Messages
-
JPA - lazy loading for LOB field
Hi all,
JPA 1.0 specification mandates that all JPA-compliant implementation support lazy loading for certain kind of entity field.
For LOB fields lazy loading is OPTIONAL.
I am experiencing odd runtime behaviors on my custom software which would point to this feature not being supported.
Can anyone please tell me if SAP JPA 1.0 implementation on NW CE 711 implements this feature or not?
Thanks in advance
Regards
Vincenzo
Hi Vincenzo,
I am sure that this is the same as with single-valued relationships (@OneToOne, @ManyToOne): lazy loading would require bytecode manipulation/generation, so SAP JPA does not support it in 7.20 (and of course not in 7.11).
See tulsi jiddimani's elaborate answer here: Re: JPA: Documentation on LazyLoad.
In the 7.30 enhancements, you can indeed find lazy loading support for single-valued relationships with getReference:
http://help.sap.com/saphelp_nw73/helpdata/en/68/f676ef36094f4381467a308a98fd2a/content.htm
but @Lob and @Basic is not mentioned.
If you need lazy loading in 7.11, you have two alternatives:
1. Put the LOB fields into separate entities, working around the missing feature in SAP JPA with ugly @OneToMany relations.
2. Use another persistence provider like EclipseLink; read Sabine Heider's blogs about integration of EclipseLink in SAP NetWeaver and static bytecode weaving for lazy loading. /people/sabine.heider/blog
Regards
Rolf -
XML log: Error during temp file creation for LOB Objects
Hi All,
I got this exception in the concurrent log file:
[122010_100220171][][EXCEPTION] !!Error during temp file creation for LOB Objects
[122010_100220172][][EXCEPTION] java.io.FileNotFoundException: null/xdo-dt-lob-1292864540169.tmp (No such file or directory (errno:2))
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:98)
at oracle.apps.xdo.dataengine.LOBList.initLOB(LOBList.java:39)
at oracle.apps.xdo.dataengine.LOBList.<init>(LOBList.java:30)
at oracle.apps.xdo.dataengine.XMLPGEN.updateMetaData(XMLPGEN.java:1051)
at oracle.apps.xdo.dataengine.XMLPGEN.processSQLDataSource(XMLPGEN.java:511)
at oracle.apps.xdo.dataengine.XMLPGEN.writeData(XMLPGEN.java:445)
at oracle.apps.xdo.dataengine.XMLPGEN.writeGroup(XMLPGEN.java:1121)
at oracle.apps.xdo.dataengine.XMLPGEN.writeGroup(XMLPGEN.java:1144)
at oracle.apps.xdo.dataengine.XMLPGEN.processSQLDataSource(XMLPGEN.java:558)
at oracle.apps.xdo.dataengine.XMLPGEN.writeData(XMLPGEN.java:445)
at oracle.apps.xdo.dataengine.XMLPGEN.writeGroupStructure(XMLPGEN.java:308)
at oracle.apps.xdo.dataengine.XMLPGEN.processData(XMLPGEN.java:273)
at oracle.apps.xdo.dataengine.XMLPGEN.processXML(XMLPGEN.java:215)
at oracle.apps.xdo.dataengine.XMLPGEN.writeXML(XMLPGEN.java:254)
at oracle.apps.xdo.dataengine.DataProcessor.processDataStructre(DataProcessor.java:390)
at oracle.apps.xdo.dataengine.DataProcessor.processData(DataProcessor.java:355)
at oracle.apps.xdo.oa.util.DataTemplate.processData(DataTemplate.java:348)
at oracle.apps.xdo.oa.cp.JCP4XDODataEngine.runProgram(JCP4XDODataEngine.java:293)
at oracle.apps.fnd.cp.request.Run.main(Run.java:161)
I have this query defined in my data template:
<![CDATA[
SELECT lt.long_text inv_comment
FROM apps.fnd_attached_docs_form_vl ad,
apps.fnd_documents_long_text lt
WHERE ad.media_id = lt.media_id
AND ad.category_description = 'Draft Invoice Comments'
AND ad.pk1_value = :project_id
AND ad.pk2_value = :draft_invoice_num
]]>
Issue: the inv_comment is not printed in the PDF output.
I have the temp directory defined under the Admin tab.
I'm guessing it's the LONG datatype of the long_text field that's causing the issue.
Does anybody know how this can be fixed? Any help or advice is appreciated.
Thanks.
SW
Edited by: user12152845 on Dec 20, 2010 11:48 AM -
User_lobs and user_segments view shows different tablespace for lob segment
Hi,
I am trying to move lob in different tablespace for partitioned table.
I used
alter table <table_name> move partition <partition_name> lob(lob_column) store as (tablespace <tablespace_name>);
alter index <index_name> rebuild partition <partition_name> tablespace <tablespace_name>
ALTER TABLE <table_name> MODIFY DEFAULT ATTRIBUTES TABLESPACE <tablespace_name>
ALTER INDEX <index_name> modify default ATTRIBUTES TABLESPACE <tablespace_name>
Database - 10.2.0.5
OS- HP Itanium 11.31
I can see in user_lob_partitions, user_segments and user_part_tables shows me new tablespace information
whereas user_lobs and user_part_indexes shows me different information regarding tablespace.
I checked some documents in Metalink, but they didn't help me.
I think I am missing something or doing some step wrong.
Please help.
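One thing worth checking (a hedged guess, since for a partitioned table USER_LOBS reports the LOB's default attributes rather than the per-partition segments): the default attributes of the LOB column itself may still point at the old tablespace. Moving the partitions does not change them; something like this should:

```sql
-- Update the DEFAULT tablespace attribute of the LOB column itself.
-- This affects future partitions and what USER_LOBS reports;
-- the already-moved partition segments are untouched.
ALTER TABLE trb1_sub_log
  MODIFY DEFAULT ATTRIBUTES LOB (general_data_c) (TABLESPACE usage_reorg_tbs);
```

The same would apply to the LOB index reported by USER_PART_INDEXES, since it tracks the LOB's default attributes.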
SQL> select partition_name, lob_partition_name, tablespace_name from user_lob_partitions where table_name in ('TRB1_PUB_LOG','TRB1_SUB_ERRS','TRB1_PUB_LOG') ;
PARTITION_NAME LOB_PARTITION_NAME TABLESPACE_NAME
S2000 SYS_LOB_P8585 USAGE_REORG_TBS
S2001 SYS_LOB_P8587 USAGE_REORG_TBS
S2003 SYS_LOB_P8589 USAGE_REORG_TBS
S2004 SYS_LOB_P8591 USAGE_REORG_TBS
S2005 SYS_LOB_P8593 USAGE_REORG_TBS
S2006 SYS_LOB_P8595 USAGE_REORG_TBS
S2007 SYS_LOB_P8597 USAGE_REORG_TBS
S2008 SYS_LOB_P8599 USAGE_REORG_TBS
S2010 SYS_LOB_P8601 USAGE_REORG_TBS
S2011 SYS_LOB_P8603 USAGE_REORG_TBS
S2012 SYS_LOB_P8605 USAGE_REORG_TBS
S2013 SYS_LOB_P8607 USAGE_REORG_TBS
S2014 SYS_LOB_P8609 USAGE_REORG_TBS
S2015 SYS_LOB_P8611 USAGE_REORG_TBS
S2888 SYS_LOB_P8613 USAGE_REORG_TBS
S2999 SYS_LOB_P8615 USAGE_REORG_TBS
S3000 SYS_LOB_P8617 USAGE_REORG_TBS
S3001 SYS_LOB_P8619 USAGE_REORG_TBS
S3004 SYS_LOB_P8621 USAGE_REORG_TBS
S3005 SYS_LOB_P8623 USAGE_REORG_TBS
S3006 SYS_LOB_P8625 USAGE_REORG_TBS
S3007 SYS_LOB_P8627 USAGE_REORG_TBS
S3008 SYS_LOB_P8629 USAGE_REORG_TBS
S3009 SYS_LOB_P8631 USAGE_REORG_TBS
S3010 SYS_LOB_P8633 USAGE_REORG_TBS
S3011 SYS_LOB_P8635 USAGE_REORG_TBS
S3012 SYS_LOB_P8637 USAGE_REORG_TBS
S3013 SYS_LOB_P8639 USAGE_REORG_TBS
S3014 SYS_LOB_P8641 USAGE_REORG_TBS
S3015 SYS_LOB_P8643 USAGE_REORG_TBS
S3050 SYS_LOB_P8645 USAGE_REORG_TBS
SMAXVALUE SYS_LOB_P8647 USAGE_REORG_TBS
32 rows selected.
SQL> select TABLE_NAME,COLUMN_NAME,SEGMENT_NAME,TABLESPACE_NAME,INDEX_NAME,PARTITIONED from user_lobs where TABLE_NAME in ('TRB1_PUB_LOG','TRB1_SUB_ERRS','TRB1_SUB_LOG') ;
TABLE_NAME COLUMN_NAME SEGMENT_NAME TABLESPACE_NAME INDEX_NAME PAR
TRB1_SUB_ERRS GENERAL_DATA_C SYS_LOB0006703055C00017$$ TRBDBUS1LNN1 SYS_IL0006703055C00017$$ YES
TRB1_SUB_LOG GENERAL_DATA_C SYS_LOB0006703157C00017$$ TRBDBUS1LNN1 SYS_IL0006703157C00017$$ YES
TRB1_PUB_LOG GENERAL_DATA_C SYS_LOB0006702987C00014$$ TRBDBUS1LNN1 SYS_IL0006702987C00014$$ YES
SQL> SQL> select unique segment_name ,tablespace_name from user_Segments where segment_name in (select SEGMENT_NAME from user_lobs where TABLE_NAME in('TRB1_PUB_LOG','TRB1_SUB_ERRS','TRB1_SUB_LOG') );
SEGMENT_NAME TABLESPACE_NAME
SYS_LOB0006702987C00014$$ USAGE_REORG_TBS
SYS_LOB0006703055C00017$$ USAGE_REORG_TBS
SYS_LOB0006703157C00017$$ USAGE_REORG_TBS
SQL> select unique segment_name ,tablespace_name from user_Segments where segment_name in (select INDEX_NAME from user_lobs where TABLE_NAME in('TRB1_PUB_LOG','TRB1_SUB_ERRS','TRB1_SUB_LOG') );
SEGMENT_NAME TABLESPACE_NAME
SYS_IL0006702987C00014$$ USAGE_REORG_TBS
SYS_IL0006703055C00017$$ USAGE_REORG_TBS
SYS_IL0006703157C00017$$ USAGE_REORG_TBS
SQL> select unique index_name,def_tablespace_name from user_part_indexes where table_name in ('TRB1_PUB_LOG','TRB1_SUB_ERRS','TRB1_SUB_LOG');
INDEX_NAME DEF_TABLESPACE_NAME
SYS_IL0006702987C00014$$ TRBDBUS1LNN1
SYS_IL0006703055C00017$$ TRBDBUS1LNN1
SYS_IL0006703157C00017$$ TRBDBUS1LNN1
TRB1_PUB_LOG_PK USAGE_REORG_IX_TBS
TRB1_SUB_ERRS_1IX USAGE_REORG_IX_TBS
TRB1_SUB_ERRS_1UQ USAGE_REORG_IX_TBS
TRB1_SUB_ERRS_2IX USAGE_REORG_IX_TBS
TRB1_SUB_LOG_1IX USAGE_REORG_IX_TBS
TRB1_SUB_LOG_PK USAGE_REORG_IX_TBS
SQL> select unique def_tablespace_name from user_part_tables where table_name in ('TRB1_PUB_LOG','TRB1_SUB_ERRS','TRB1_SUB_LOG');
DEF_TABLESPACE_NAME
USAGE_REORG_TBS
Please let me know if some more details are required.
> whereas user_lobs and user_part_indexes shows me different information regarding tablespace.
Do you see different results after starting a new session, after the ALTER statements were issued? -
Large Block Chunk Size for LOB column
Oracle 10.2.0.4:
We have a table with 2 LOB columns. The average blob size of one of the columns is 122K and of the other 1K, so I am planning to move the column with the big blob size to a 32K chunk size. Some of the questions I have:
1. Do I need to create a new tablespace with a 32K block size and then create the table with a 32K chunk size for that LOB column, or can I just create a table with a 32K chunk size on the existing tablespace, which has an 8K block size? What are the advantages or disadvantages of one approach over the other?
2. Currently db_cache_size is set to "0"; do I need to adjust some parameters for the large chunk/block size?
3. If I create a 32K chunk, is that chunk shared with other rows? For example, if I insert 2K, is the remaining 30K available to other rows? The following link says the 30K will be wasted space:
[LOB performance|http://www.oracle.com/technology/products/database/application_development/pdf/lob_performance_guidelines.pdf]
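On question 1: a LOB's CHUNK cannot exceed the block size of the tablespace holding the LOB segment, so a 32K chunk does require a 32K-blocksize tablespace, which in turn requires a dedicated 32K buffer cache. A hedged sketch, with hypothetical names and sizes:

```sql
-- A 32K chunk requires a tablespace with a 32K block size,
-- which in turn requires a non-zero 32K buffer cache.
ALTER SYSTEM SET db_32k_cache_size = 256M;

CREATE TABLESPACE lob32k_tbs
  DATAFILE '/u01/oradata/lob32k01.dbf' SIZE 1G
  BLOCKSIZE 32K;

CREATE TABLE big_blob_tab (
  id   NUMBER PRIMARY KEY,
  data BLOB)
  LOB (data) STORE AS (
    TABLESPACE lob32k_tbs
    CHUNK 32768
    DISABLE STORAGE IN ROW);
```

The table segment itself can stay in the existing 8K tablespace; only the LOB storage clause points at the 32K one.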
Below is the output of v$db_cache_advice:
select
size_for_estimate c1,
buffers_for_estimate c2,
estd_physical_read_factor c3,
estd_physical_reads c4
from
v$db_cache_advice
where
name = 'DEFAULT'
and
block_size = (SELECT value FROM V$PARAMETER
WHERE name = 'db_block_size')
and
advice_status = 'ON';
C1 C2 C3 C4
2976 368094 1.2674 150044215
5952 736188 1.2187 144285802
8928 1104282 1.1708 138613622
11904 1472376 1.1299 133765577
14880 1840470 1.1055 130874818
17856 2208564 1.0727 126997426
20832 2576658 1.0443 123639740
23808 2944752 1.0293 121862048
26784 3312846 1.0152 120188605
29760 3680940 1.0007 118468561
29840 3690835 1 118389208
32736 4049034 0.9757 115507989
35712 4417128 0.93 110102568
38688 4785222 0.9062 107284008
41664 5153316 0.8956 106034369
44640 5521410 0.89 105369366
47616 5889504 0.8857 104854255
50592 6257598 0.8806 104258584
53568 6625692 0.8717 103198830
56544 6993786 0.8545 101157883
59520 7361880 0.8293 98180125
With only a 1K LOB you are going to want to use an 8K chunk size: as per the reference in the thread above to the Oracle document on LOBs, the chunk size is the allocation unit.
Each LOB column has its own LOB segment, so each column can have its own LOB chunk size.
The LOB data type is not known for being space efficient.
There are major changes available in 11g, with SecureFiles being available to replace traditional LOBs, now called BasicFiles. The differences appear to be mostly in how the LOB data segments are managed by Oracle.
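The 11g SecureFiles option mentioned above would look something like this (hypothetical names; SecureFiles need an ASSM tablespace, and COMPRESS/DEDUPLICATE additionally require the Advanced Compression option):

```sql
-- 11g SecureFile LOB: improved space management compared with BasicFiles,
-- plus optional compression and deduplication.
CREATE TABLE docs (
  id  NUMBER PRIMARY KEY,
  doc BLOB)
  LOB (doc) STORE AS SECUREFILE (
    TABLESPACE users
    COMPRESS MEDIUM
    DEDUPLICATE);
```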
HTH -- Mark D Powell -- -
Hi
I am trying to use the createTemporary() method provided by the Oracle OCI driver to insert a BLOB into the database. I have the Oracle client installed on my PC. While executing the above method, an "UnsatisfiedLinkError" is reported: it says "lob_createTemporary" not found.
I am of the opinion that this could be a .dll that gets installed with the Oracle client. Do we need to do any custom installation to install the DLLs related to LOBs when we install the Oracle client?
Regards
Ramkumar -
Oracle 11gR2 rhel5 64bit
Hi all,
We are trying to figure out a way to reduce the amount of redo that is generated when we insert data (LOBs) into a table. Our database is in ARCHIVELOG mode, and we set the table to NOLOGGING mode as well. However, we see an increase in redo generation (compared to when it was in LOGGING mode) each time. I'm monitoring the "redo size" through OEM for the specific session. What could I be doing wrong? Are there any other factors that I need to be aware of?
Thanks.
NOLOGGING is supported in only a subset of the locations that support LOGGING. Only the operations listed in the URL below support the NOLOGGING mode:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/clauses005.htm#i999782 -
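To expand on that: conventional-path INSERTs generate redo regardless of the table's NOLOGGING attribute; NOLOGGING is honored only for direct-path operations, and for a LOB the attribute must be set on the LOB storage clause itself. A hedged sketch with hypothetical names:

```sql
-- NOLOGGING on the LOB segment itself (NOCACHE NOLOGGING is the
-- combination allowed for BasicFile LOBs).
ALTER TABLE doc_tab MODIFY LOB (doc_data) (NOCACHE NOLOGGING);

-- Only a direct-path insert honors NOLOGGING; a conventional
-- INSERT ... VALUES still generates full redo.
INSERT /*+ APPEND */ INTO doc_tab
SELECT * FROM staging_doc_tab;
COMMIT;
```

Note also that data loaded NOLOGGING is not recoverable from the archived logs, so a backup after the load is usually advised.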
How to increase no of properties for LOB in BCS code
HI Team,
I have BCS code in Visual Studio which is working fine, and now I have a requirement to add a few more properties. After adding new properties to the .bdcm XML file, it gives an error that
the number of properties exceeds 50, and it does not allow me to create one more property.
Let me know where we can modify the number of properties per LOB system in the BCS code. We currently have 50 properties already, and now I want to add a few more.
Thanks & Regards, Neerubee -
Help on error : OIP-04902 - Invalid buffer length for LOB write op
We have a bit of VBA in Excel 2003 writing a BLOB to an 11.2.0.2 DB; for certain users it throws up the above message. I can't find any useful info on it; does anyone know the causes/resolution, please?
Resolved: a 0-byte object was being passed, which caused the error.
-
Conflict resolution for a table with LOB column ...
Hi,
I was hoping for some guidance or advice on how to handle conflict resolution for a table with a LOB column.
Basically, I had intended to handle the conflict resolution using the MAXIMUM prebuilt update conflict handler. I also store
the 'update' transaction time in the same table and was planning to use this as the resolution column to resolve the conflict.
I see, however, that these prebuilt conflict handlers do not support LOB columns. I assume therefore that I need to code a custom handler
to do this for me. I'm not sure exactly what my custom handler needs to do, though! Any guidance or links to similar examples would
be very much appreciated.
Hi,
I have been unable to make any progress on this issue. I have made use of prebuilt update handlers with no problems
before but I just don't know how to resolve these conflicts for LOB columns using custom handlers. I have some questions
which I hope make sense and are relevant:
1.Does an apply process detect update conflicts on LOB columns?
2.If I need to create a custom update/error handler to resolve this, should I create a prebuilt update handler for non-LOB columns
in the table and then a separate one for the LOB columns OR is it best just to code a single custom handler for ALL columns?
3.In my custom handler, I assume I will need to use the resolution column to decide whether or not to resolve the conflict in favour of the LCR
but how do I compare the new value in the LCR with that in the destination database? I mean how do I access the current value in the destination
database from the custom handler?
4.Finally, if I need to resolve in favour of the LCR, do I need to call something specific for LOB related columns compared to non-LOB columns?
Any help with these would be very much appreciated or even if someone can direct me to documentation or other links that would be good too.
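Not an answer to the LOB-specific part, but for questions 2 and 3 the usual starting point is a procedure DML handler registered with DBMS_APPLY_ADM.SET_DML_HANDLER: inside it you can query the destination row yourself and decide whether to execute the LCR. A rough, untested skeleton (all object, column, and key names hypothetical; note that an update LCR's NEW values contain only the changed columns, so the key may need to be read from the OLD values):

```sql
CREATE OR REPLACE PROCEDURE lob_update_handler(in_any IN SYS.AnyData) IS
  lcr     SYS.LCR$_ROW_RECORD;
  rc      PLS_INTEGER;
  v_any   SYS.AnyData;
  src_ts  DATE;
  dest_ts DATE;
  v_pk    NUMBER;
BEGIN
  rc := in_any.GetObject(lcr);
  -- Resolution column and key from the LCR (names hypothetical)
  v_any := lcr.get_value('NEW', 'UPD_TIME');
  rc    := v_any.GetDate(src_ts);
  v_any := lcr.get_value('OLD', 'PK');
  rc    := v_any.GetNumber(v_pk);
  -- Current value at the destination
  SELECT upd_time INTO dest_ts
    FROM strmadmin.my_table
   WHERE pk = v_pk;
  IF src_ts >= dest_ts THEN
    lcr.execute(TRUE);  -- apply the LCR: conflict resolved in its favour
  END IF;               -- else discard: the destination row wins
END;
/
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'STRMADMIN.MY_TABLE',
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => FALSE,
    user_procedure => 'STRMADMIN.LOB_UPDATE_HANDLER');
END;
/
```

How LOB chunks arrive at such a handler (as separate LOB WRITE LCRs) is a further wrinkle this sketch does not cover.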
Thanks again. -
Hi all,
I'm working with 11gR2 in a RAC environment with 2 nodes. I'm creating the schemas on my new database with empty tablespaces and 100 MB of initial free space. I'm getting an error on the creation of this table:
CREATE TABLE "IODBODB1"."DOCUMENTOS"
( "ID_DOC" NUMBER NOT NULL ENABLE,
"ID_APP" NUMBER(5,0) NOT NULL ENABLE,
"ID_CLS" NUMBER(6,0) NOT NULL ENABLE,
"CREATE_FEC" DATE NOT NULL ENABLE,
"LAST_MODIFIED_FEC" DATE NOT NULL ENABLE,
"DOC_NAME" VARCHAR2(100 BYTE) NOT NULL ENABLE,
"DOC_SIZE" NUMBER NOT NULL ENABLE,
"DOC_DATA" BLOB NOT NULL ENABLE,
"CLASE_FORMATO_FORMATO" VARCHAR2(150 BYTE),
"REGULAR_NAME" VARCHAR2(100 BYTE) NOT NULL ENABLE,
"USUARIO" VARCHAR2(25 BYTE),
"USUARIO_LEVEL" VARCHAR2(5 BYTE)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ODBODB_TBD_DADES"
LOB ("DOC_DATA") STORE AS BASICFILE (ENABLE STORAGE IN ROW CHUNK 8K PCTVERSION 10 NOCACHE LOGGING STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
PARTITION BY LIST ("ID_APP")
SUBPARTITION BY RANGE ("ID_DOC")
PARTITION "PTL_DOCUMENTOS_FICTICIO" VALUES (0) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ODBODB_TBD_DADES"
LOB ("DOC_DATA") STORE AS BASICFILE (ENABLE STORAGE IN ROW CHUNK 8K PCTVERSION 10 NOCACHE LOGGING STORAGE(BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
(SUBPARTITION "PTR_DOCUMENTOS_FICTICIO" VALUES LESS THAN (MAXVALUE) LOB ("DOC_DATA") STORE AS (TABLESPACE "ODBODB_TBL_LOBS" ) TABLESPACE "ODBODB_TBD_DADES" NOCOMPRESS ),
PARTITION "PTL_DOCUMENTOS_ALFIMG" VALUES (2) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ODBODB_TBD_DADES"
LOB ("DOC_DATA") STORE AS BASICFILE (ENABLE STORAGE IN ROW CHUNK 32K PCTVERSION 10 NOCACHE LOGGING STORAGE(BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
(SUBPARTITION "STR_DOCUMENTOS_ALFIMG" VALUES LESS THAN (MAXVALUE) LOB ("DOC_DATA") STORE AS (TABLESPACE "BEALFIMG_TBL_ODB" ) TABLESPACE "BEALFIMG_TBD_ODB" NOCOMPRESS ));
The error is:
ORA-03252: initial extent size not enough for LOB segment
I can't understand why Oracle can't allocate enough extents to create the LOB segment, and I don't know which LOB segment it refers to, because SQL*Plus doesn't flag it in the statement.
Any ideas about this issue?
Thanks in advance!
dbajug
Please forget this post, I feel dumb... The LOB segment needs to have the same initial extent size as the rest of the partition...
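For the record, ORA-03252 typically means the target tablespace's extent size is too small to hold the LOB segment's minimum initial allocation. Two hedged workarounds, with hypothetical names and sizes:

```sql
-- Option 1: give the LOB tablespace larger uniform extents.
CREATE TABLESPACE odbodb_tbl_lobs
  DATAFILE '/u01/oradata/odbodb_lobs01.dbf' SIZE 2G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 4M;

-- Option 2: size the LOB segment's initial extent explicitly
-- inside the CREATE TABLE's LOB clause:
-- LOB ("DOC_DATA") STORE AS BASICFILE (
--   TABLESPACE "ODBODB_TBL_LOBS"
--   CHUNK 8K
--   STORAGE (INITIAL 4M NEXT 4M));
```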
-
EXP/IMP..of table having LOB column to export and import using expdp/impdp
We have one table with a LOB column; the LOB size is approx. 550 GB.
As far as we know, LOB space cannot be reused, so we have already raised an SR on that.
We have come to the conclusion that we need to take a backup of this table, then truncate it, and then start the import.
We need help on the points below:
1) We are taking the backup with expdp using the parallel parameter = 4. Will this backup complete successfully? Are there any other parameters we need to use in expdp while taking the backup?
2) Once the truncate is done, will the import complete successfully?
Do we need to increase the SGA, PGA, undo tablespace size, or undo retention to complete the import successfully? This is a production-critical database.
current SGA 2GB
PGA 398MB
undo retention 1800
undo tbs 6GB
Please can anyone give suggestions on how to perform this activity without errors, and also suggest parameters to use during expdp/impdp.
Thanks in advance.
Hi,
From my experience, be prepared for a long outage to do this: expdp is pretty quick at getting LOBs out but very slow at getting them back in again, as a lot of the speed optimizations that make Data Pump so excellent for normal objects are not available for LOBs. You really need to test this somewhere first. Can you not expdp from live and load into some test area? You don't need the whole database, just the table/LOB in question. You don't want to find out after you truncate the table that it's going to take 3 days to import back in....
You might want to consider DBMS_REDEFINITION instead?
Here you precreate a temporary table (with the same definition as the existing one), load the data into it from the existing table, and then do a dictionary switch to swap them over, giving you minimal downtime. I think this should work fine with LOBs at 10g, but you should do some research and check. You'll need a lot of extra tablespace (temporarily) for this approach, though.
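The DBMS_REDEFINITION route sketched above would look roughly like this (table and column names hypothetical; check LOB support on your exact 10g patch level first):

```sql
-- 1. Check the table can be redefined (by primary key here).
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'BIG_LOB_TAB', DBMS_REDEFINITION.CONS_USE_PK);

-- 2. Pre-create the interim table with the desired LOB storage.
CREATE TABLE scott.big_lob_tab_interim (
  id   NUMBER PRIMARY KEY,
  data BLOB)
  LOB (data) STORE AS (TABLESPACE new_lob_tbs);

-- 3. Start, (optionally) sync, and finish the redefinition.
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'BIG_LOB_TAB', 'BIG_LOB_TAB_INTERIM');
EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'BIG_LOB_TAB', 'BIG_LOB_TAB_INTERIM');
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'BIG_LOB_TAB', 'BIG_LOB_TAB_INTERIM');
```

Dependent objects (indexes, grants, triggers) would also need copying, e.g. via COPY_TABLE_DEPENDENTS, before finishing.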
Regards,
Harry -
LoB Sideload Requirements on Windows 8.1 Pro
Hi All,
According to the documentation on sideloading requirements for LoB apps, a certain Group Policy needs to be enabled on domain-joined devices.
The company for which we are developing the app has the devices connected to their AD. However, their AD schema is not at the correct functional level to push out Group Policies to domain-joined devices.
The Group Policy Allow all trusted apps to install was enabled using the Local Group Policy Editor. I tried enabling the
Allow development of Windows Store apps without installing a developer license using the same. But, while installing the app package, it still asks me to 'Acquire a developer license'.
Am I missing something here? I read elsewhere, that the certificate which was used to sign the app, should also be made available on the device. If so, where should I store this certificate on the device?
The app was developed using Visual Studio (C#).
Your input would be much appreciated.
Thank you,
AB
Hello Diramoh,
Thank you for the feedback.
Please help me understand this. It seems to me that irrespective of whether a device is domain-joined, it needs a Sideloading Product Key (to not go through Windows Store for app distribution)?
Would the following be applicable for us to sideload our app:
1) Configure PC to enable Sideloading. In case of domain-joined devices, this is enabled (only if it is pushed as an Enterprise Group Policy).
In our case, since the AD schema is not up to date; this was done using Local Group Policy Editor.
2) Configure PCs for developing Windows Store Apps:
http://msdn.microsoft.com/en-us/library/hh852635.aspx#BKMK_DeveloperLicense
The above link asks us to enable another Group Policy - Allow development of Windows Store apps without installing a developer license.
In our case, again, this was done using Local Group Policy Editor.
3) Have Sideloading Product Keys.
From http://msdn.microsoft.com/en-us/library/hh852635.aspx#SideloadingRequirements, for domain-joined devices, it does not mention that you need sideloading product keys.
This is confusing.
4) Unknown factor - Is there anything extra to be enabled in the code itself? While signing the App Package, are there extra steps that need to be taken?
I appreciate your help with this.
Thank you,
AB -
Guys,
We did some testing on a non-partitioned table with a LOB and it all worked. Here is the case.
Check for LOB segments to be shrunk:
SQL> select table_name, segment_name from dba_lobs where owner='STR';
TABLE_NAME SEGMENT_NAME
STR_INBOUND_CDM_LOG SYS_LOB0000033379C00002$$
STR_OUTBOUND_CDM_LOG SYS_LOB0000033383C00002$$
TRS_REPORT_LINE SYS_LOB0000033392C00006$$
STR_EXCEPTIONS SYS_LOB0000033376C00005$$
Find the space occupied by the LOB segment for a table ( In MB ).
SQL> select segment_name,bytes/1048576 from dba_segments where segment_name='SYS_LOB0000033392C00006$$';
SEGMENT_NAME BYTES/1048576
SYS_LOB0000033392C00006$$ 464
Describe the table to see the column name on which the lob segment is defined
SQL> desc STR.TRS_REPORT_LINE
Name Null? Type
MESSAGE_ID NOT NULL VARCHAR2(100)
FSA_ACTION NOT NULL VARCHAR2(30)
TRANSACTION_ID NOT NULL VARCHAR2(100)
SUBMITTING_DEPT NOT NULL VARCHAR2(100)
REPORT_FILE_ID NUMBER
TRS_XML CLOB
TRANSACTION_STATUS VARCHAR2(50)
ACTION VARCHAR2(30)
RECEIVED_TIMESTAMP DATE
UPDATED_TIMESTAMP DATE
TRS_XML_VERSION VARCHAR2(30)
OWNER VARCHAR2(30)
CHANNEL VARCHAR2(30)
TXN_REF_TYPE VARCHAR2(30)
TRANSACTION_VER NOT NULL VARCHAR2(30)
Update the CLOB column to NULL
SQL> update STR.TRS_REPORT_LINE set TRS_XML=NULL;
62334 rows updated.
SQL> commit;
Commit complete.
Shrink the LOB segment
SQL> alter table STR.TRS_REPORT_LINE modify lob(TRS_XML) (shrink space);
Table altered.
We can now see that the space is freed up ! Less than 1 Mb is occupied now.
SQL> select segment_name,bytes/1048576 from dba_segments where segment_name='SYS_LOB0000033392C00006$$';
SEGMENT_NAME BYTES/1048576
SYS_LOB0000033392C00006$$ .0625
I am noticing that this works differently with partitioned tables, where the LOBs get partitioned too. As a result, the following command isn't working. Please can someone tell me how to shrink them? It's urgent as we plan to go live in a day or two.
TABLE_NAME LOB_PARTITION_NAME PARTITION_NAME
IRS_REPORT_LINE SYS_LOB_P21 PRT2007_10
IRS_REPORT_LINE SYS_LOB_P22 PRT2007_11
IRS_REPORT_LINE SYS_LOB_P23 PRT2007_12
IRS_REPORT_LINE SYS_LOB_P24 PRT2008_01
IRS_REPORT_LINE SYS_LOB_P25 PRT2008_02
IRS_REPORT_LINE SYS_LOB_P26 PRT2008_03
IRS_REPORT_LINE SYS_LOB_P27 PRT2008_04
IRS_REPORT_LINE SYS_LOB_P28 PRT2008_05
IRS_REPORT_LINE SYS_LOB_P29 PRT2008_06
IRS_REPORT_LINE SYS_LOB_P30 PRT2008_07
IRS_REPORT_LINE SYS_LOB_P31 PRT2008_08
IRS_REPORT_LINE SYS_LOB_P32 PRT2008_09
I see now that the LOBs are not getting shrunk after this command:
alter table IRS_REPORT_LINE modify lob (irs_xml) (shrink space);
Thanks
G
If it's that urgent, ask Oracle Support; otherwise you have to wait, like everybody else.
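One option worth trying (a sketch only, not tested against this schema; the table, column, and partition names are taken from the listing above) is to shrink each LOB partition individually, or to shrink the whole table with the CASCADE option so dependent segments, including LOB partitions, are shrunk too:

```sql
-- Shrink the LOB segment of a single partition
-- (partition name PRT2007_10 from the listing above).
ALTER TABLE IRS_REPORT_LINE
  MODIFY PARTITION PRT2007_10
  LOB (IRS_XML) (SHRINK SPACE);

-- Or enable row movement and shrink the table plus all
-- dependent segments (including LOB partitions) in one go.
ALTER TABLE IRS_REPORT_LINE ENABLE ROW MOVEMENT;
ALTER TABLE IRS_REPORT_LINE SHRINK SPACE CASCADE;
```

Note that SHRINK SPACE requires the segments to live in an ASSM (automatic segment space management) tablespace; if they do not, a partition-by-partition move is the usual fallback.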
C. -
Oracle 8i array DML operations with LOB objects
Hi all,
I have a question about Oracle 8i array DML operations with LOB objects, both CLOB and BLOB. With the following statement in mind:
INSERT INTO TABLEX (COL1, COL2) VALUES (:1, :2)
where COL1 is a NUMBER and COL2 is a BLOB, I want to use OCI's array DML functionality to insert multiple records with a single statement execution. I have allocated an array of LOB locators, initialized them with OCIDescriptorAlloc(), and bound them to COL2 with mode set to OCI_DATA_AT_EXEC and dty (IN) set to SQLT_BLOB. It is after this where I am getting confused.
To send the LOB data, I have tried using the user-defined callback method, registering the callback function via OCIBindDynamic(). I initialize the icbfp callback's arguments as I would if I were dealing with RAW/LONG RAW data. When execution passes from the callback function, I encounter a memory exception within an Oracle DLL. Where dvoid **indpp equals 0 and the object is of type RAW/LONG RAW, the function works fine. Is this not a valid methodology for CLOB/BLOB objects?
Next, I tried performing piecewise INSERTs using OCIStmtGetPieceInfo() and OCIStmtSetPieceInfo(). When using this method, I use OCILobWrite() along with a user-defined callback designed for LOBs to send LOB data to the database. Here everything works fine until I exit the user-defined LOB write callback function where an OCI_INVALID_HANDLE error is encountered. I understand that both OCILobWrite() and OCIStmtExecute() return OCI_NEED_DATA. And it does seem to me that the two statements work separately rather than in conjunction with each other. So I rather doubt this is the proper methodology.
As you can see, the correct method has evaded me. I have looked through the OCI LOB samples, but have not found any code that helps answer my question. Oracle's OCI documentation has not been of much help either. So if anyone could offer some insight I would greatly appreciate it.
Chris Simms
[email protected]
Before 9i, you will have to first insert empty locators using EMPTY_CLOB() (or EMPTY_BLOB() for BLOB columns) inlined in the SQL, using a RETURNING clause to return the locators. Then use OCILobWrite() to write to the locators in a streamed fashion.
From 9i, you can actually bind a long buffer to each LOB position without first inserting an empty locator, retrieving it, and then writing to it.
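A minimal sketch of the pre-9i pattern, using the example statement from the question above (the bind name :lob is illustrative; the OCI side is outlined in the comments):

```sql
-- Insert an empty locator and hand it back to the client.
-- On the OCI side: bind :1 and :lob with OCIBindByName()
-- (:lob as SQLT_BLOB against an OCIDescriptorAlloc'd locator),
-- execute, then stream the data into the returned locator
-- with OCILobWrite().
INSERT INTO TABLEX (COL1, COL2)
VALUES (:1, EMPTY_BLOB())
RETURNING COL2 INTO :lob;
```

For array DML this has to be repeated per row (or per iteration), since each row needs its own locator returned before OCILobWrite() can stream into it.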