Split CLOB column to improve performance
Hi All,
We have a transactional table with 3 columns, one of which is a CLOB holding XML data. Inserts arrive at 35K/hr, and rows are deleted as soon as the job completes, so at any time the table holds fewer than 1000 records.
The XML data contains binary image info; each XML document ranges anywhere between 200KB and 600KB, and the elapsed time for each insert varies from 1 to 2 secs depending on the concurrency. As we need to achieve 125K/hour soon, we are planning a few modifications at the table level.
1. Increase the CHUNK size from 8KB to 32KB.
2. Disable logging for the table, CLOB and index.
3. Disable flashback for database.
4. Move the table to a non-default block size of 32KB (default is 8KB).
5. Increase the SDU value.
6. Split the XML data and store it on multiple CLOB columns.
We don't do any updates to this table; it is only INSERT, SELECT and DELETE operations.
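To put the 125K/hour target in perspective, here is a rough back-of-the-envelope concurrency estimate (plain Python arithmetic on the figures quoted above; the 1.5 s service time is an assumed midpoint of the 1-2 s range, not a measurement):

```python
import math

def sessions_needed(inserts_per_hour: int, secs_per_insert: float) -> int:
    """Little's law sketch: concurrency = arrival rate x service time."""
    inserts_per_sec = inserts_per_hour / 3600.0
    return math.ceil(inserts_per_sec * secs_per_insert)

current = sessions_needed(35_000, 1.5)   # today's load at ~1.5 s per insert
target  = sessions_needed(125_000, 1.5)  # the 125K/hr goal at the same latency
```

In other words, unless the per-insert time comes down, sustaining 125K/hr needs roughly 3.5x the concurrent sessions of today's load, which is why shaving the insert elapsed time matters as much as the table-level changes.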
The major wait events I'm seeing during the inserts are:
1. direct path read
2. direct path write
3. flashback logfile sync
4. SQL*Net more data from client
5. Buffer busy wait
My questions here are:
1. If I allocate 2G of memory for the non-default block size and change the CLOB to CACHE, will my other objects in the buffer cache be affected or aged out faster?
2. Will moving this table from BASICFILE to SECUREFILE help?
3. Will splitting the XML data across different columns in the same table give a performance boost?
Oracle EE 11.2.0.1,ASM
Thanks,
Arun
Thanks to all for the replies
@Sybrand
Please answer first whether the column is stored in a separate lobsegment.
No. The table, index, LOB and LOB index all use the same tablespace. I missed adding this point (moving to a separate tablespace) as part of the table modifications.
@Hemant
There's a famous paper / blog post about CLOBs and Database Flashback. If I find it, I'll post the URL.
Is this the one you are referring to?
http://laimisnd.wordpress.com/2011/03/25/lobs-and-flashback-database-performance/
By moving the CLOB column to a different block size, I will test the performance improvement it gives and share the results.
We don't need any data from this table afterwards. The XML contains fingerprint details, and once the application server completes the job, the XML data is deleted from this table.
So there is no need for backup/recovery operations on this table. The client will be able to replay the transactions if any problem occurs.
@Billy
We are not performing XML parsing on the DB side. We get the XML data from the client -> insert into the table -> the client selects from the table -> upon successful completion of the job on the client side, the XML data gets deleted.
Regarding binding of the LOB from the client side, I will check on that as well to reduce round trips.
By changing the block size, I can set db_32K_cache_size=2G and keep this table in CACHE. If I put the table straight into the default buffer cache, it will age out everything else, which makes things worse for us.
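As a sanity check on that 2G figure, a quick sketch (plain Python arithmetic; the block and LOB sizes are the ones discussed in this thread, and this deliberately ignores the LOB index, inline storage and metadata overhead):

```python
# Will the hot LOB data fit in a dedicated 2G buffer cache?
# Figures from the thread: < 1000 rows, each CLOB 200KB-600KB, 32K blocks.
CACHE_BYTES = 2 * 1024**3     # db_32K_cache_size = 2G
BLOCK = 32 * 1024             # non-default 32K block size
MAX_LOB = 600 * 1024          # worst-case CLOB size

blocks_per_lob = -(-MAX_LOB // BLOCK)               # ceiling division
lobs_in_cache = CACHE_BYTES // (blocks_per_lob * BLOCK)
```

Even assuming every CLOB is the worst-case 600KB, a 2G cache holds a few thousand of them, comfortably more than the < 1000 rows ever resident in the table, so the separate cache should indeed keep this table fully cached without evicting other objects.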
This insert is part of a transaction (registration of a fingerprint), and it is the only statement taking significant time at the moment compared to the other statements in the transaction.
Thanks,
Arun
Similar Messages
-
Split of Cubes to improve the performance of reports
Hello Friends. We are now implementing Finance GL Line Items for global automobile operations in BMW, and services outsourced to Japan have increased the data volume to 300 million records over the 2 years since we went live. We have 200 company codes.
How To Improve performance
1. Please suggest whether I should split the cubes based on year and on company codes, which are region based. That means Europeans would run the report from one cube and the same report for America would run against another cube.
But the question here is: if I make 8 cubes (2 for each year: 1 for current-year company code ABC and 1 for current-year DEF), (2 for each year: 1 for previous-year company code ABC and 1 for previous-year DEF),
(2 for each year: 1 for archive-year company code ABC and 1 for archive-year DEF),
then how can I tell the query which cube to read the data from? Since company code is an authorization variable, picking up that value and building a customer-exit variable for the InfoProvider will add a lot of work.
Is there any good way to do this? Does splitting cubes by company code make sense, or should it be done only by year?
Please suggest a sound, step-by-step approach to splitting cubes holding 60 million records over 2 years; growth will be the same for the
next 4 years since more company codes are coming.
2. Please suggest whether splitting the cube will improve report performance, or make it worse, since the query now needs to go through 5-6 different cubes.
Thanks
Regards
Soniya
Hi Soniya,
There are two ways in which you can split your cube: either based on year or based on company code (i.e. region). While loading the data, write code in the start routine which will filter the data. For example, if you are loading data for three regions, say 1, 2 and 3, your code will be something like
DELETE SOURCE_PACKAGE WHERE REGION EQ '2' OR
REGION EQ '3'.
This will load data to your cube corresponding to region 1.
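For anyone not reading ABAP, the same start-routine filter can be sketched in Python (field names are illustrative, mirroring the DELETE above, which keeps only region-1 records):

```python
# Mirror of the ABAP start-routine DELETE: drop every record whose
# REGION is not the one this cube is being loaded for.
def filter_source_package(records, keep_region):
    return [r for r in records if r["REGION"] == keep_region]

data = [{"REGION": "1", "amount": 10},
        {"REGION": "2", "amount": 20},
        {"REGION": "3", "amount": 30}]

region1 = filter_source_package(data, "1")  # only region-1 rows survive
```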
you can build your reports either on these cubes or you can have a multiprovider above these cubes and build the report.
Thanks..
Shambhu -
Searching CLOB column of XML documents with leading wildcard - Performance
Hi, our table has a text-indexed CLOB column of XML documents, and when performing a search with a leading wildcard we never retrieve any results.
The query looks like this:
select id from <table> where contains(columnname, '(%12345)') > 0;
I can't even generate an explain plan for this query. I killed it after 39 minutes.
If the query changes to:
select id from <table> where contains(columnname, '(12345%)') > 0;
I get an explain plan immediately with a cost=2 and when I execute the query, I get results in less than a second.
I'd appreciate any thoughts of what I should check or what the problem might be.
Thanks! Doug
Can you provide a script that reproduces the case? I am unable to reproduce the problem with just some small sample data, as shown below. That means there is nothing wrong with the syntax, but you may be having problems due to the size of your data or other parameters that have not been mentioned.
SCOTT@10gXE> CREATE TABLE your_table (id NUMBER, columnname CLOB)
2 /
Table created.
SCOTT@10gXE> insert into your_table
2 select 1, dbms_xmlgen.getxml
3 ('select deptno, dname,
4 cursor (select empno, ename
5 from emp
6 where emp.deptno = dept.deptno ) employee
7 from dept
8 where deptno = 10')
9 from dual
10 /
1 row created.
SCOTT@10gXE> SELECT * FROM your_table
2 /
ID COLUMNNAME
1 <?xml version="1.0"?>
<ROWSET>
<ROW>
<DEPTNO>10</DEPTNO>
<DNAME>ACCOUNTING</DNAME>
<EMPLOYEE>
<EMPLOYEE_ROW>
<EMPNO>7782</EMPNO>
<ENAME>CLARK</ENAME>
</EMPLOYEE_ROW>
<EMPLOYEE_ROW>
<EMPNO>7839</EMPNO>
<ENAME>KING</ENAME>
</EMPLOYEE_ROW>
<EMPLOYEE_ROW>
<EMPNO>7934</EMPNO>
<ENAME>MILLER</ENAME>
</EMPLOYEE_ROW>
</EMPLOYEE>
</ROW>
</ROWSET>
SCOTT@10gXE> CREATE INDEX your_idx ON your_table (columnname)
2 INDEXTYPE IS CTXSYS.CONTEXT
3 /
Index created.
SCOTT@10gXE> EXEC DBMS_STATS.GATHER_TABLE_STATS ('SCOTT', 'YOUR_TABLE')
PL/SQL procedure successfully completed.
SCOTT@10gXE> SET AUTOTRACE ON EXPLAIN
SCOTT@10gXE> select id from your_table where contains (columnname, '(%839)') > 0
2 /
ID
1
Execution Plan
Plan hash value: 2832585188
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 888 | 0 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| YOUR_TABLE | 1 | 888 | 0 (0)| 00:00:01 |
|* 2 | DOMAIN INDEX | YOUR_IDX | | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("CTXSYS"."CONTAINS"("COLUMNNAME",'(%839)')>0)
SCOTT@10gXE> SET AUTOTRACE OFF -
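A way to see why the leading wildcard is so much slower: a text index behaves like a sorted token list, so a trailing wildcard ('12345%') can seek straight to its matches, while a leading wildcard ('%12345') has no prefix to seek on and must examine every token. A toy sketch in Python (the token list is made up):

```python
import bisect

tokens = sorted(["12345", "12399", "67812345", "99999"])  # toy sorted "index"

def prefix_match(pat):
    """'12345%' style: binary-search to the first candidate, then walk."""
    i = bisect.bisect_left(tokens, pat)
    out = []
    while i < len(tokens) and tokens[i].startswith(pat):
        out.append(tokens[i])
        i += 1
    return out

def suffix_match(pat):
    """'%12345' style: no seek possible -- every token must be scanned."""
    return [t for t in tokens if t.endswith(pat)]
```

On four tokens the difference is invisible, but on millions of indexed tokens the full scan is exactly the 39-minute behaviour Doug saw. (Oracle Text also offers a SUBSTRING_INDEX preference for this kind of query, at the cost of a larger index.)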
FI-CA events to improve performance
Hello experts,
Does anybody use the FI-CA events to improve the extraction performance for datasources 0FC_OP_01 and 0FC_CI_01 (open and cleared items)?
It seems that these specific exits associated with BW events have been developed especially to improve performance.
Any documentation or guide would be appreciated.
Thanks.
Thibaud. -
How to run query in parallel to improve performance
I am using ALDSP 2.5. My data tables are split 12 ways, based on a hash of a particular column. I have a query to get the piece of data I am looking for; however, this data is split across the 12 tables, so even though the query is the same I need to run it against 12 tables instead of 1. I want to run all 12 queries in parallel instead of one by one, collapse the returned datasets and send the result back to the caller. How can I do this in ALDSP?
To be specific, I will call below operation to get data:
declare function ds:SOA_1MIN_POOL_METRIC() as element(tgt:SOA_1MIN_POOL_METRIC_00)*
src0:SOA_1MIN_POOL_METRIC(),
src1:SOA_1MIN_POOL_METRIC(),
src2:SOA_1MIN_POOL_METRIC(),
src3:SOA_1MIN_POOL_METRIC(),
src4:SOA_1MIN_POOL_METRIC(),
src5:SOA_1MIN_POOL_METRIC(),
src6:SOA_1MIN_POOL_METRIC(),
src7:SOA_1MIN_POOL_METRIC(),
src8:SOA_1MIN_POOL_METRIC(),
src9:SOA_1MIN_POOL_METRIC(),
src10:SOA_1MIN_POOL_METRIC(),
src11:SOA_1MIN_POOL_METRIC()
This method acts as a proxy, it aggregates data from 12 data tables
src0:SOA_1MIN_POOL_METRIC() get data from SOA_1MIN_POOL_METRIC_00 table
src1:SOA_1MIN_POOL_METRIC() get data from SOA_1MIN_POOL_METRIC_01 table and so on.
The data source of each table is different (src0, src1 etc); how can I run these queries in parallel to improve performance?
Thanks Mike.
The async function works, from the log, I could see the queries are executed in parallel.
but the behavior is confusing: with the same input it sometimes gives me the right result, and sometimes (especially when a few other applications are running on the machine) it throws the exception below:
java.lang.IllegalStateException
at weblogic.xml.query.iterators.BasicMaterializedTokenStream.deRegister(BasicMaterializedTokenStream.java:256)
at weblogic.xml.query.iterators.BasicMaterializedTokenStream$MatStreamIterator.close(BasicMaterializedTokenStream.java:436)
at weblogic.xml.query.runtime.core.RTVariable.close(RTVariable.java:54)
at weblogic.xml.query.runtime.core.RTVariableSync.close(RTVariableSync.java:74)
at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
at weblogic.xml.query.runtime.core.IfThenElse.close(IfThenElse.java:99)
at weblogic.xml.query.runtime.core.CountMapIterator.close(CountMapIterator.java:222)
at weblogic.xml.query.runtime.core.LetIterator.close(LetIterator.java:140)
at weblogic.xml.query.runtime.constructor.SuperElementConstructor.prepClose(SuperElementConstructor.java:183)
at weblogic.xml.query.runtime.constructor.PartMatElemConstructor.close(PartMatElemConstructor.java:251)
at weblogic.xml.query.runtime.querycide.QueryAssassin.close(QueryAssassin.java:65)
at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
at weblogic.xml.query.runtime.core.QueryIterator.close(QueryIterator.java:146)
at com.bea.ld.server.QueryInvocation.getResult(QueryInvocation.java:462)
at com.bea.ld.EJBRequestHandler.executeFunction(EJBRequestHandler.java:346)
at com.bea.ld.ServerBean.executeFunction(ServerBean.java:108)
at com.bea.ld.Server_ydm4ie_EOImpl.executeFunction(Server_ydm4ie_EOImpl.java:262)
at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invokeFunction(XmlDataServiceBase.java:312)
at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invoke(XmlDataServiceBase.java:231)
at com.ebay.rds.dao.SOAMetricDAO.getMetricAggNumber(SOAMetricDAO.java:502)
at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:199)
at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:174)
at RDSWS.getMetricAggNumber(RDSWS.jws:240)
at jrockit.reflect.VirtualNativeMethodInvoker.invoke(Ljava.lang.Object;[Ljava.lang.Object;)Ljava.lang.Object;(Unknown Source)
at java.lang.reflect.Method.invoke(Ljava.lang.Object;[Ljava.lang.Object;I)Ljava.lang.Object;(Unknown Source)
at com.bea.wlw.runtime.core.dispatcher.DispMethod.invoke(DispMethod.java:371)
Below is my code example. First I get data from all 12 queries, each enclosed in the fn-bea:async function; finally I do a group-by aggregation over the whole data set. Is it possible that the exception occurs because some threads have not returned their data yet, but the aggregation has already started?
The metricName, serviceName, opName and $soaDbRequest are simply passed in as operation parameters.
let $METRIC_RESULT :=
fn-bea:async(
for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
for $SOA_POOL_METRIC in src0:SOA_1MIN_POOL_METRIC()
where
$SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
and $SOA_POOL_METRIC/CAL_CUBE_ID ge fn-bea:fence($soaDbRequest/ns16:StartTime)
and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
return
$SOA_POOL_METRIC
fn-bea:async(for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
for $SOA_POOL_METRIC in src1:SOA_1MIN_POOL_METRIC()
where
$SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
and $SOA_POOL_METRIC/CAL_CUBE_ID ge fn-bea:fence($soaDbRequest/ns16:StartTime)
and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
return
$SOA_POOL_METRIC
... //12 similar queries
for $Metric_data in $METRIC_RESULT
group $Metric_data as $Metric_data_Group
by $Metric_data/ROLE_TYPE as $role_type_id
return
<ns0:RawMetric>
<ns0:endTime?></ns0:endTime>
<ns0:target?>{$role_type_id}</ns0:target>
<ns0:value0>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE0)}</ns0:value0>
<ns0:value1>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE1)}</ns0:value1>
<ns0:value2>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE2)}</ns0:value2>
<ns0:value3>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE3)}</ns0:value3>
</ns0:RawMetric>
could you tell me why the result is unstable? thanks! -
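The fan-out/aggregate pattern above, including the need to block until every branch has returned before the group-by starts, can be sketched with a thread pool (Python, with made-up stand-ins for the 12 data sources; this illustrates the pattern, not ALDSP internals):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for src0..src11:SOA_1MIN_POOL_METRIC().
def make_source(shard):
    def fetch():
        return [{"ROLE_TYPE": shard % 2, "VALUE0": shard}]
    return fetch

sources = [make_source(i) for i in range(12)]

def query_all_parallel():
    # Fan out all 12 shard queries, then BLOCK on every future before
    # aggregating -- aggregating before all branches return is exactly
    # the kind of race that produces unstable results.
    with ThreadPoolExecutor(max_workers=12) as pool:
        futures = [pool.submit(src) for src in sources]
        rows = [row for f in futures for row in f.result()]
    # Group by ROLE_TYPE and sum VALUE0, mirroring the XQuery group-by.
    totals = {}
    for r in rows:
        totals[r["ROLE_TYPE"]] = totals.get(r["ROLE_TYPE"], 0) + r["VALUE0"]
    return totals
```

The key design point is that `f.result()` is a barrier: every shard's data is fully materialized before the aggregation begins, so the result is deterministic regardless of machine load.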
How to read/write .CSV file into CLOB column in a table of Oracle 10g
I have a requirement involving a table with two columns:
create table emp_data (empid number, report clob)
Here the REPORT column is of CLOB data type, which is used to hold the data from the .csv file.
The requirement here is
1) How to load data from the .CSV file into the CLOB column, along with empid, using the DBMS_LOB utility.
2) How to read the report column so it returns all the columns present in the .CSV file (dynamically, because every csv file may have a different number of columns) along with the primary key empid.
eg: empid report_field1 report_field2
1 x y
Any help would be appreciated.
If I understand you right, you want each row in your table to contain an emp_id and the complete text of a multi-record .csv file.
It's not clear how you relate emp_id to the appropriate file to be read. Is the emp_id stored in the csv file?
To read the file, you can use functions from [UTL_FILE|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/u_file.htm#BABGGEDF] (as long as the file is in a directory accessible to the Oracle server):
declare
lt_report_clob CLOB;
l_max_line_length integer := 1024; -- set as high as the longest line in your file
l_infile UTL_FILE.file_type;
l_buffer varchar2(1024);
l_emp_id report_table.emp_id%type := 123; -- not clear where emp_id comes from
l_filename varchar2(200) := 'my_file_name.csv'; -- get this from somewhere
begin
-- open the file; we assume an Oracle directory has already been created
l_infile := utl_file.fopen('CSV_DIRECTORY', l_filename, 'r', l_max_line_length);
-- initialise the empty clob
dbms_lob.createtemporary(lt_report_clob, TRUE, DBMS_LOB.session);
loop
begin
utl_file.get_line(l_infile, l_buffer);
dbms_lob.append(lt_report_clob, l_buffer);
exception
when no_data_found then
exit;
end;
end loop;
insert into report_table (emp_id, report)
values (l_emp_id, lt_report_clob);
-- free the temporary lob
dbms_lob.freetemporary(lt_report_clob);
-- close the file
UTL_FILE.fclose(l_infile);
end;
This simple line-by-line approach is easy to understand, and gives you an opportunity (if you want) to take each line in the file and transform it (for example, into a nested table, or into XML). However it can be rather slow if there are many records in the csv file - the lob append operation is not particularly efficient. I was able to improve the efficiency by caching the lines in a VARCHAR2 up to a maximum cache size, and only then appending to the LOB - see [three posts on my blog|http://preferisco.blogspot.com/search/label/lob].
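The caching idea Nigel describes - batching many small line appends into one big LOB append - can be sketched like this (Python, with a list standing in for the CLOB; the 32767 cache size mirrors the PL/SQL VARCHAR2 limit, and all names are illustrative):

```python
class BufferedClobWriter:
    """Batch small line appends into one large append, mirroring the
    trick of caching lines in a VARCHAR2 before calling dbms_lob.append."""

    def __init__(self, cache_size=32767):
        self.cache_size = cache_size
        self.buffer = ""
        self.clob = []          # stand-in for the target CLOB
        self.lob_appends = 0    # count of the "expensive" appends

    def write_line(self, line):
        # Flush first if this line would overflow the cache.
        if len(self.buffer) + len(line) > self.cache_size:
            self.flush()
        self.buffer += line

    def flush(self):
        if self.buffer:
            self.clob.append(self.buffer)   # the "dbms_lob.append" call
            self.lob_appends += 1
            self.buffer = ""

    def value(self):
        self.flush()
        return "".join(self.clob)

writer = BufferedClobWriter()
for _ in range(1000):
    writer.write_line("x" * 80)     # 1000 short "csv lines"
report_clob = writer.value()
```

Writing 1000 lines of 80 characters this way performs only 3 large appends instead of 1000 small ones, which is where the speedup comes from.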
There is at least one other possibility:
- you could use [DBMS_LOB.loadclobfromfile|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_lob.htm#i998978]. I've not tried this before myself, but I think the procedure is described [here in the 9i docs|http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96591/adl12bfl.htm#879711]. This is likely to be faster than UTL_FILE (because it is all happening in the underlying DBMS_LOB package, possibly in a native way).
That's all for now. I haven't yet answered your question on how to report data back out of the CLOB. I would like to know how you associate employees with files; what happens if there is > 1 file per employee, etc.
HTH
Regards Nigel
Edited by: nthomas on Mar 2, 2009 11:22 AM - don't forget to fclose the file... -
How to retreive soap xml data from clob column in a table
Hi,
I am trying to retrieve XML tag values from a clob column.
Table name = xyz, column = abc (clob datatype)
The data stored in the abc column is as below:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:head="http://www.abc.com/gcgi/shared/system/header" xmlns:v6="http://www.abc.com/gcgi/services/v6_0_0_0" xmlns:sys="http://www.abc.com/gcgi/shared/system/systemtypes">
<soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
<RqHeader soapenv:mustUnderstand="0" xmlns="http://www.abc.com/gcgi/shared/system/header">
<DateAndTimeStamp>2011-12-20T16:02:36.677+08:00</DateAndTimeStamp>
<UUID>1000002932</UUID>
<Version>6_0_0_0</Version>
<ClientDetails>
<Org>ABC</Org>
<OrgUnit>ABC</OrgUnit>
<ChannelID>HBK</ChannelID>
<TerminalID>0000</TerminalID>
<SrcCountryCode>SG</SrcCountryCode>
<DestCountryCode>SG</DestCountryCode>
<UserGroup>HBK</UserGroup>
</ClientDetails>
</RqHeader>
<wsa:Action>/SvcImpl/bank/
SOAPEndpoint/AlertsService.serviceagent/OpEndpointHTTP/AlertDeleteInq</wsa:Action></soapenv:Header>
<soapenv:Body>
<v6:AlertDeleteInqRq>
<v6:Base>
<v6:VID>20071209013112</v6:VID>
<!--Optional:-->
<v6:Ref>CTAA00000002644</v6:Ref>
</v6:Base>
</v6:AlertDeleteInqRq>
</soapenv:Body>
</soapenv:Envelope>
And I want to retrieve the values of the tags
<ChannelID> and <v6:VID>
Can somebody help? I have tried with extractvalue but am not able to get the values.
I have used the two queries below but am not able to get the expected results. Both queries return no values.
select xmltype(MED_REQ_PAYLOAD).extract('//ClientDetails/Org','xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" || xmlns="http://www.abc.com/gcgi/shared/system/header"').getStringValue() from ESB_OUTPUT_TEMP where SOAPACTION = '/SvcImpl/bank/alerts/v6_0_0_0/SOAPEndpoint/AlertsService.serviceagent/OpEndpointHTTP/AlertDeleteInq'
select EXTRACTVALUE(xmltype(MED_REQ_PAYLOAD),'/RqHeader/) from ESB_OUTPUT_TEMP where SOAPACTION = '/SvcImpl/bank/SOAPEndpoint/AlertsService.serviceagent/OpEndpointHTTP/AlertDeleteInq'
Well, for starters, both queries are syntactically wrong :
- non terminated string
- incorrect namespace mapping declaration
- unknown XMLType method "getStringValue()"
Secondly, all those functions are deprecated now.
Here's an up-to-date example using XMLTable. It will retrieve the two tag values you mentioned :
SQL> select x.*
2 from esb_output_temp t
3 , xmltable(
4 xmlnamespaces(
5 'http://schemas.xmlsoap.org/soap/envelope/' as "soap"
6 , 'http://www.abc.com/gcgi/shared/system/header' as "head"
7 , 'http://www.abc.com/gcgi/services/v6_0_0_0' as "v6"
8 )
9 , '/soap:Envelope'
10 passing xmlparse(document t.med_req_payload)
11 columns ChannelID varchar2(10) path 'soap:Header/head:RqHeader/head:ClientDetails/head:ChannelID'
12 , VID varchar2(30) path 'soap:Body/v6:AlertDeleteInqRq/v6:Base/v6:VID'
13 ) x
14 ;
CHANNELID VID
HBK 20071209013112
You may also want to store XML in XMLType columns for both performance and storage optimizations. -
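For comparison, the same two values can be pulled out client-side with Python's standard-library ElementTree, using the same namespace mappings as the XMLTable query (a trimmed-down copy of the envelope is inlined for the sketch):

```python
import xml.etree.ElementTree as ET

NS = {
    "soap": "http://schemas.xmlsoap.org/soap/envelope/",
    "head": "http://www.abc.com/gcgi/shared/system/header",
    "v6":   "http://www.abc.com/gcgi/services/v6_0_0_0",
}

doc = """<soapenv:Envelope
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:v6="http://www.abc.com/gcgi/services/v6_0_0_0">
  <soapenv:Header>
    <RqHeader xmlns="http://www.abc.com/gcgi/shared/system/header">
      <ClientDetails><ChannelID>HBK</ChannelID></ClientDetails>
    </RqHeader>
  </soapenv:Header>
  <soapenv:Body>
    <v6:AlertDeleteInqRq>
      <v6:Base><v6:VID>20071209013112</v6:VID></v6:Base>
    </v6:AlertDeleteInqRq>
  </soapenv:Body>
</soapenv:Envelope>"""

root = ET.fromstring(doc)
# The namespaces= mapping plays the role of XMLNAMESPACES in the SQL.
channel_id = root.findtext(".//head:ClientDetails/head:ChannelID", namespaces=NS)
vid = root.findtext(".//v6:Base/v6:VID", namespaces=NS)
```

Note that <RqHeader> declares a default namespace, so ChannelID must be addressed with the head: prefix even though the source XML shows no prefix - the same trap that made the original EXTRACTVALUE attempts return nothing.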
Trying to Insert an XML Element into XML data stored in CLOB column
Hi all,
My ORACLE DB version is:
('Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production');
('PL/SQL Release 11.2.0.2.0 - Production');
('CORE 11.2.0.2.0 Production');
('TNS for Linux: Version 11.2.0.2.0 - Production');
('NLSRTL Version 11.2.0.2.0 - Production');
I have this XML data stored in a CLOB column:
<Activity>
<Changes>
</Changes>
<Inserts>
</Inserts>
<Definition>
</Definition>
<Assignment TYPE="Apply">
</Assignment>
<Spawned>
<Activity>576D8CD9-57A1-8608-1563-8F6DC74BDF3C</Activity>
<Activity>11226E79-5D24-02EB-A950-D34A9CCFB3FF</Activity>
<Activity>DAA68DC0-CA9A-BB15-DE31-9596E19513EE</Activity>
<Activity>93F667D6-966A-7EAD-9B70-630D9BEFDDD2</Activity>
<Activity>FA63D9D3-86BB-3FF0-BE69-17EAA7581637</Activity>
</Spawned>
<SpawnedBy>AFC49BD4-5AA7-38C0-AE27-F59D16EE1B1C</SpawnedBy>
</Activity>
I am in need of some assistance in creating an update that will insert another <Activity>SomeGUID</Activity> into the <Spawned> parent.
Any help is greatly appreciated.
Thanks.
Edited by: 943783 on Dec 14, 2012 12:58 PM
See the XML updating functions: http://docs.oracle.com/cd/E11882_01/appdev.112/e23094/xdb04cre.htm#i1032611
For example :
UPDATE my_table t
SET t.my_clob =
      XMLSerialize(document
        insertChildXML(
          XMLParse(document t.my_clob)
        , '/Activity/Spawned'
        , 'Activity'
        , XMLElement("Activity", 'Some GUID')
        )
      )
WHERE ... ;
Although it works, there's overhead introduced by parsing the CLOB, then serializing it again.
Is there any chance you can change the CLOB to SECUREFILE binary XMLType storage instead?
You would then be able to benefit from optimized piecewise update of the XML and improved storage. -
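If changing the storage isn't an option, the insert can also be done client-side: read the CLOB out, add the child, write it back. A sketch with Python's ElementTree (the document is a trimmed copy of the one above; in a real round-trip you would fetch and update through your Oracle driver):

```python
import xml.etree.ElementTree as ET

doc = """<Activity>
  <Spawned>
    <Activity>576D8CD9-57A1-8608-1563-8F6DC74BDF3C</Activity>
  </Spawned>
  <SpawnedBy>AFC49BD4-5AA7-38C0-AE27-F59D16EE1B1C</SpawnedBy>
</Activity>"""

root = ET.fromstring(doc)
spawned = root.find("Spawned")
new_child = ET.SubElement(spawned, "Activity")   # appended as last child
new_child.text = "SomeGUID"
updated = ET.tostring(root, encoding="unicode")  # serialize back for the UPDATE
```

This trades two network round trips for avoiding in-database parsing; the SECUREFILE binary XMLType route above is still the better long-term fix.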
Creation of view with clob column in select and group by clause.
Hi,
We are trying to migrate a view from SQL Server 2005 to Oracle 10g. It has a CLOB column which is used in the GROUP BY clause. How can the same be achieved in Oracle 10g?
Below is the SQL statement used to create the view, along with its column datatypes.
CREATE OR REPLACE FORCE VIEW "TEST" ("CONTENT_ID", "TITLE", "KEYWORDS", "CONTENT", "ISPOPUP", "CREATED", "SEARCHSTARTDATE", "SEARCHENDDATE", "HITS", "TYPE", "CREATEDBY", "UPDATED", "ISDISPLAYED", "UPDATEDBY", "AVERAGERATING", "VOTES") AS
SELECT content_ec.content_id,
content_ec.title,
content_ec.keywords,
content_ec.content content ,
content_ec.ispopup,
content_ec.created,
content_ec.searchstartdate,
content_ec.searchenddate,
COUNT(contenttracker_ec.contenttracker_id) hits,
contenttypes_ec.type,
users_ec_1.username createdby,
Backup_Latest.created updated,
Backup_Latest.isdisplayed,
users_ec_1.username updatedby,
guideratings.averagerating,
guideratings.votes
FROM users_ec users_ec_1
JOIN Backup_Latest
ON users_ec_1.USER_ID = Backup_Latest.USER_ID
RIGHT JOIN content_ec
JOIN contenttypes_ec
ON content_ec.contenttype_id = contenttypes_ec.contenttype_id
ON Backup_Latest.content_id = content_ec.content_id
LEFT JOIN guideratings
ON content_ec.content_id = guideratings.content_id
LEFT JOIN contenttracker_ec
ON content_ec.content_id = contenttracker_ec.content_id
LEFT JOIN users_ec users_ec_2
ON content_ec.user_id = users_ec_2.USER_ID
GROUP BY content_ec.content_id,
content_ec.title,
content_ec.keywords,
to_char(content_ec.content) ,
content_ec.ispopup,
content_ec.created,
content_ec.searchstartdate,
content_ec.searchenddate,
contenttypes_ec.type,
users_ec_1.username,
Backup_Latest.created,
Backup_Latest.isdisplayed,
users_ec_1.username,
guideratings.averagerating,
guideratings.votes;
Column Name Data Type
CONTENT_ID NUMBER(10,0)
TITLE VARCHAR2(50)
KEYWORDS VARCHAR2(100)
CONTENT CLOB
ISPOPUP NUMBER(1,0)
CREATED TIMESTAMP(6)
SEARCHSTARTDATE TIMESTAMP(6)
SEARCHENDDATE TIMESTAMP(6)
HITS NUMBER
TYPE VARCHAR2(50)
CREATEDBY VARCHAR2(20)
UPDATED TIMESTAMP(6)
ISDISPLAYED NUMBER(1,0)
UPDATEDBY VARCHAR2(20)
AVERAGERATING NUMBER
VOTES NUMBER
Any help really appreciated.
Thanks in advance
Edited by: user512743 on Dec 10, 2008 10:46 PM -
A64 Tweaker and Improving Performance
I noticed a little utility called "A64 Tweaker" being mentioned in an increasing number of posts, so I decided to track down a copy and try it out...basically, it's a memory tweaking tool, and it actually is possible to get a decent (though not earth-shattering by any means) performance boost with it. It also lacks any real documentation as far as I can find, so I decided to make a guide type thing to help out users who would otherwise just not bother with it.
Anyways, first things first, you can get a copy of A64 Tweaker here: http://www.akiba-pc.com/download.php?view.40
Now that that's out of the way, I'll walk through all of the important settings, minus Tcl, Tras, Trcd, and Trp, as these are the typical RAM settings that everyone is always referring to when they go "CL2.5-3-3-7", so information on them is widely available, and everyone knows that for these settings, lower always = better. Note that for each setting, I will list the measured change in my SiSoft Sandra memory bandwidth score over the default setting. If a setting produces a change of < 10 MB/sec, its effects will be listed as "negligible" (though note that it still adds up, and a setting that has a negligible impact on throughput may still have an important impact on memory latency, which is just as important). As for the rest of the settings (I'll do the important things on the left hand side first, then the things on the right hand side...the things at the bottom are HTT settings that I'm not going to muck with):
Tref - I found this setting to have the largest impact on performance out of all the available settings. In a nutshell, this setting controls how your RAM refreshes are timed...basically, RAM can be thought of as a vast series of leaky buckets (except in the case of RAM, the buckets hold electrons and not water), where a bucket filled beyond a certain point registers as a '1' while a bucket with less than that registers as a '0', so in order for a '1' bucket to stay a '1', it must be periodically refilled (i.e. "refreshed"). The way I understand this setting, the frequency (100 MHz, 133 MHz, etc.) controls how often the refreshes happen, while the time parameter (3.9 microsecs, 1.95 microsecs, etc.) controls how long the refresh cycle lasts (i.e. how long new electrons are pumped into the buckets). This is important because while the RAM is being refreshed, other requests must wait. Therefore, intuitively it would seem that what we want are short, infrequent refreshes (the 100 MHz, 1.95 microsec option). Experimentation almost confirms this, as my sweet spot was 133 MHz, 1.95 microsecs...I don't know why I had better performance with this setting, but I did. Benchmark change from default setting of 166 MHz, 3.9 microsecs: + 50 MB/sec
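The trade-off can be put in rough numbers: the fraction of time the RAM is unavailable is approximately refresh duration divided by refresh interval. The interval figures below are purely hypothetical, chosen just to show the ratio (they are not actual A64 refresh schedules):

```python
# Toy model: if the RAM spends t_refresh out of every t_interval
# refreshing, that slice of time is unavailable to real requests.
def refresh_overhead(t_refresh_us, t_interval_us):
    return t_refresh_us / t_interval_us

# Hypothetical numbers, only to illustrate the shape of the trade-off:
short_infrequent = refresh_overhead(1.95, 15.6)  # short bursts, long gaps
long_frequent    = refresh_overhead(3.9, 7.8)    # long bursts, short gaps
```

Whatever the real A64 timings are, the principle is the same: shorter and/or less frequent refresh cycles shrink this busy fraction, which is why the "short, infrequent" end of the Tref range tends to benchmark better.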
Trfc - This setting offered the next largest improvement...I'm not sure exactly what this setting controls, but it is doubtless similar to the above setting. Again, lower would seem to be better, but although I was stable down to '12' for the setting, the sweet spot here for my RAM was '18'. Selecting '10' caused a spontaneous reboot. Benchmark change from the default setting of 24: +50 MB/sec
Trtw - This setting specifies how long the system must wait after it reads a value before it tries to overwrite the value. This is necessary due to various technical aspects related to the fact that we run superscalar, multiple-issue CPUs that I don't feel like getting into, but basically, smaller numbers are better here. I was stable at '2'; selecting '1' resulted in a spontaneous reboot. Benchmark change from default setting of 4: +10 MB/sec
Twr - This specifies how much delay is applied after a write occurs before the new information can be accessed. Again, lower is better. I could run as low as 2, but didn't see a huge change in benchmark scores as a result. It is also not too likely that this setting affects memory latency in an appreciable way. Benchmark change from default setting of 3: negligible
Trrd - This controls the delay between a row address strobe (RAS) and a second row address strobe. Basically, think of memory as a two-dimensional grid: to access a location in the grid, you need both a row and a column number. The way memory accesses work is that the system first asserts the row that it wants (the row address strobe, or RAS), and then asserts the column that it wants (the column address strobe, or CAS). Because of a number of factors (prefetching, block addressing, the way data gets laid out in memory), the system will often access multiple columns from the same row at once to improve performance (so you get one RAS, followed by several CAS strobes). I was able to run stably with a setting of 1 for this value, although I didn't get an appreciable increase in throughput. It is likely however that this setting has a significant impact on latency. Benchmark change from default setting of 2: negligible
Trc - I'm not completely sure what this setting controls, although I found it had very little impact on my benchmark score regardless of what values I specified. I would assume that lower is better, and I was stable down to 8 (lower than this caused a spontaneous reboot), and I was also stable at the max possible setting. It is possible that this setting has an effect on memory latency even though it doesn't seem to impact throughput. Benchmark change from default setting of 12: negligible
Dynamic Idle Cycle Counter - I'm not sure what this is, and although it sounds like a good thing, I actually posted a better score when running with it disabled. No impact on stability either way. Benchmark change from default setting of enabled: +10 MB/sec
Idle Cycle Limit - Again, not sure exactly what this is, but testing showed that both extremely high and extremely low settings degrade performance by about 20 MB/sec. Values in the middle offer the best performance. I settled on 32 clks as my optimal setting, although the difference was fairly minimal over the default setting. This setting had no impact on stability. Benchmark change from default setting of 16 clks: negligible
Read Preamble - As I understand it, this is basically how much of a "grace period" is given to the RAM when a read is asserted before the results are expected. As such, lower values should offer better performance. I was stable down to 3.5 ns, lower than that and I would get freezes/crashes. This did not change my benchmark scores much, though in theory it should have a significant impact on latency. Benchmark change from default setting of 6.0 ns: negligible
Read Write Queue Bypass - Not sure what it does, although there are slight performance increases as the value gets higher. I was stable at 16x, though the change over the 8x default was small. It is possible, though I think unlikely, that this improves latency as well. Benchmark change from default setting of 8x: negligible
Bypass Max - Again not sure what this does, but as with the above setting, higher values perform slightly better. Again I feel that it is possible, though not likely, that this improves latency as well. I was stable at the max of 7x. Benchmark change from the default setting of 4x: negligible
Asynch latency - A complete mystery. Trying to run *any* setting other than default results in a spontaneous reboot for me. No idea how it affects anything, though presumably lower would be better, if you can select lower values without crashing.
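The grid analogy used for Trrd above can be made concrete with a tiny sketch (Python, purely illustrative; the geometry is made up, and in actual DRAM the row is asserted first, then one or more columns within it):

```python
# Toy model of two-dimensional memory addressing: a flat address is
# split into a (row, column) pair. Consecutive addresses fall in the
# same row, which is why one row activation can serve several column
# accesses. COLS_PER_ROW is an invented size, not real DRAM geometry.
COLS_PER_ROW = 1024

def decode(addr):
    """Split a flat address into its (row, column) coordinates."""
    return addr // COLS_PER_ROW, addr % COLS_PER_ROW

# Eight consecutive addresses all land in the same row:
rows = {decode(a)[0] for a in range(4096, 4096 + 8)}
assert len(rows) == 1
```

This is only the addressing idea, not a model of any particular timing parameter.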
...and there you have it. With the tweaks mentioned above, I was able to gain +160 MB/sec on my Sandra score, +50 on my PCMark score, and +400 on my 3DMark 2001 SE score. Like I said, not earth-shattering, but a solid performance boost, and it's free to boot. Settings that I felt had no use in tweaking the RAM for added performance, or which are self-explanatory, have been left out. The above tests were performed on Corsair XMS PC4000 RAM @ 264 MHz, CL2.5-3-4-6 1T.
Hm...I wonder which one is telling the truth, the BIOS or A64 tweaker.
I've wondered this myself. From my understanding it's the next logical step from the WCREDIT programs. I understand how a clock gen can misreport frequency, because it's probably not measuring frequency itself but rather computing a mathematical representation from a few numbers it's gathered and one clock frequency (HTT maybe?), and the unsupported dividers mess up the math...but I think the tweaker just extracts hex values straight from the registers and displays them in "English". I mean, it could be wrong, but seeing how I watched the BIOS on the SLI Plat change the memory timings in the POST screen to values other than SPD while on Auto with aggressive timings disabled, I actually want to side with the A64 tweaker in this case.
Hey, anyone know what Tref in A64 relates to in the BIOS? i.e. 200 1.95us = what in the BIOS? 1x4028, 1x4000 - I'm just making up numbers here, but it's different than 200 1.95. Last time I searched I didn't find anything. Well, I found a lot, but not what I wanted. -
Slow delete on a table with one CLOB column
Hi,
I have a table which has one CLOB column, and even deleting a single row from it takes approx. 16 seconds. Since UNDO isn't generated for CLOBs (at least not in the UNDO tablespace), I can't figure out why this is so. The CLOB is defined with a RETENTION clause, so it depends on UNDO_RETENTION, which is set to 900. There wasn't any lock from another session present on this table.
The table currently contains only 6 rows, but it used to be bigger in the past, so I thought that maybe a full table scan was going on during the delete. But even if I limit the DELETE statement with a ROWID (to avoid a FTS), it doesn't help:
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
SQL> select count(*) from scott.packet;
COUNT(*)
6
SQL> column segment_name format a30
SQL> select segment_name
2 from dba_lobs
3 where owner = 'SCOTT'
4 and table_name = 'PACKET';
SEGMENT_NAME
SYS_LOB0000081487C00002$$
SQL> select segment_name, bytes/1024/1024 MB
2 from dba_segments
3 where owner = 'SCOTT'
4 and segment_name in ('PACKET', 'SYS_LOB0000081487C00002$$');
SEGMENT_NAME MB
PACKET ,4375
SYS_LOB0000081487C00002$$ 576
SQL> -- packet_xml is the CLOB column
SQL> select sum(dbms_lob.getlength (packet_xml))/1024/1024 MB from scott.packet;
MB
19,8279037
SQL> column rowid new_value rid
SQL> select rowid from scott.packet where rownum=1;
ROWID
AAAT5PAAEAAEEDHAAN
SQL> set timing on
SQL> delete from scott.packet where rowid = '&rid';
old 1: delete from scott.packet where rowid = '&rid'
new 1: delete from scott.packet where rowid = 'AAAT5PAAEAAEEDHAAN'
1 row deleted.
Elapsed: 00:00:15.64
From another session I monitored v$session.event for the session performing the DELETE and the reported wait event was 'db file scattered read'.
Someone asked Jonathan Lewis a similar-looking question (under comment #5) here: http://jonathanlewis.wordpress.com/2007/05/11/lob-sizing/ but unfortunately I couldn't find whether he wrote any answer/note about that.
So if anyone has any suggestion, I'd appreciate it very much.
Regards,
Jure
After reviewing the tkprof as suggested by user503699, I noticed that the DELETE itself is instantaneous. The problem is another statement:
select /*+ all_rows */ count(1)
from
"SCOTT"."MESSAGES" where "PACKET_ID" = :1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 2 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 1.40 16.93 125012 125128 0 1
total 3 1.40 16.93 125012 125128 2 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 SORT AGGREGATE (cr=125128 pr=125012 pw=125012 time=0 us)
0 TABLE ACCESS FULL MESSAGES (cr=125128 pr=125012 pw=125012 time=0 us cost=32900 size=23056 card=5764)
I checked if there was any "ON DELETE" trigger, and since there wasn't, I suspected this might be a problem of unindexed foreign keys. As soon as I created an index on SCOTT.MESSAGES.PACKET_ID the DELETE executed immediately. The "funny" thing is that the table SCOTT.MESSAGES is empty, but it has 984MB of extents allocated (since it was never truncated), so a time-consuming full table scan was occurring on it.
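For illustration only, here is a small Python sketch of why that index matters: checking child rows for a parent key is a full scan without an index, and a single lookup with one (the column name mirrors the thread; nothing here is Oracle-specific):

```python
# Illustrative sketch: deleting a parent row forces a check that no
# child row references it. Without an index on the FK column, that
# check examines every child row (a full table scan); with one, it is
# a single keyed lookup.
child_rows = [{"id": i, "packet_id": i % 1000} for i in range(100_000)]

def child_count_full_scan(packet_id):
    # No index on PACKET_ID: every child row is examined.
    return sum(1 for r in child_rows if r["packet_id"] == packet_id)

# Build the equivalent of an index on PACKET_ID once.
index = {}
for r in child_rows:
    index.setdefault(r["packet_id"], []).append(r["id"])

def child_count_indexed(packet_id):
    # Indexed: one lookup instead of scanning all rows.
    return len(index.get(packet_id, []))

assert child_count_full_scan(42) == child_count_indexed(42)
```

In the real case Oracle issued the equivalent check recursively for every parent-row delete, which is why the unindexed 984MB child segment dominated the elapsed time.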
Thanks for pointing me to the 10046 trace which solved the problem.
Regards,
Jure -
Exporting Table with CLOB Columns
Hello All,
I am trying to export a table with CLOB columns, with no luck. It fails with EXP-00011 (table does not exist).
I can query the table, and the owner is the same schema I am exporting from.
Please let me know.
An 8.0.6 client definitely changes things. Other posters have already posted links to information on which versions of exp and imp can be used to move data between versions.
I will just add that if you were using a client to do the export, then if the client version is less than the target database version, you can upgrade the client, or better yet, if possible, use the target database's export utility to perform the export.
I will not criticize the existence of an 8.0.6 system, as we had a parent company dump a brand new 8.0.3 application on us less than two years ago. We have since been allowed to update the database and Pro*C modules to 9.2.0.6.
If the target database is really 8.0.3, then I suggest you consider using dbms_metadata to generate the DDL, if needed, and SQL*Plus to extract the data into delimited files that you can then reload via sqlldr. This would allow you to move the data, with some potential adjustments for any 10g-only features in the code.
HTH -- Mark D Powell -- -
Improve Performance of Dimension and Fact table
Hi All,
Can anyone explain the steps to improve the performance of dimension and fact tables?
Thanks in advance....
redd
Hi!
There is much to be said about performance in general, but I will try to answer your specific question regarding fact and dimension tables.
First of all, try to compress as many requests as possible in the fact table, and do that regularly.
Partition your compressed fact table physically based on for example 0CALMONTH. In the infocube maintenance, in the Extras menu, choose partitioning.
Partition your cube logically into several smaller cubes based on for example 0CALYEAR. Combine the cubes with a multiprovider.
Use constants on infocube level (Extras->Structure Specific Infoobject properties) and/or restrictions on specific cubes in your multiprovider queries if needed.
Create aggregates of subsets of your characteristics based on your query design. Use the debug option in RSRT to investigate which objects you need to include.
To investigate the size of the dimension tables, first use the test in transaction RSRV (Database Information about InfoProvider Tables). It will tell you the relative sizes of your dimensions in comparison to your fact table. Then go to transaction DB02 and conduct a detailed analysis on the large dimension tables. You can choose "table columns" in the detailed analysis screen to see the number of distinct values in each column (characteristic). You also need to understand the "business logic" behind these objects: the ones that have low cardinality, that is, relate closely to each other, should be located together. With this information at hand you can understand which objects contribute the most to the size of the dimension and separate the dimension.
Use line item dimension where applicable, but use the "high cardinality" option with extreme care.
Generate database statistics regularly using process chains or (if you use Oracle) schedule BRCONNECT runs using transaction DB13.
Good luck!
Kind Regards
Andreas -
Improving performance of zreport
Hi,
I have a requirement to improve the performance of a Z report.
(1) Apart from nested loops, SELECTs within loops, and inner joins, are there any other changes I have to consider?
(2) There are 4 nested loops, which go up to 1 level. Is it mandatory to avoid nested loops that go only 1 level deep?
(3) How do I avoid SELECT...ENDSELECT within LOOP...ENDLOOP?
(4) The FM WS_DOWNLOAD, which is obsolete, is called for downloading data into Excel. Which one should I choose to replace this FM?
I will reward if it is useful.
Hi,
Downloading Internal Tables to Excel
Often we face situations where we need to download internal table contents onto an Excel sheet. We are familiar with the function module WS_DOWNLOAD. Though this function module downloads the contents onto the Excel sheet, there cannot be any column headings, and we cannot differentiate the primary keys just by looking at the Excel sheet.
For this purpose, we can use the function module XXL_FULL_API. The Excel sheet generated by this function module contains the column headings, and the key columns are highlighted with a different color. Other options available with this function module are that we can swap two columns or suppress a field from displaying on the Excel sheet. Simple code for the usage of this function module is given below.
Program code :
REPORT Excel.
TABLES:
sflight.
* Header data
DATA :
header1 LIKE gxxlt_p-text VALUE 'Suresh',
header2 LIKE gxxlt_p-text VALUE 'Excel sheet'.
* Internal table for holding the SFLIGHT data
DATA BEGIN OF t_sflight OCCURS 0.
INCLUDE STRUCTURE sflight.
DATA END OF t_sflight.
* Internal table for holding the horizontal key
DATA BEGIN OF t_hkey OCCURS 0.
INCLUDE STRUCTURE gxxlt_h.
DATA END OF t_hkey .
* Internal table for holding the vertical key
DATA BEGIN OF t_vkey OCCURS 0.
INCLUDE STRUCTURE gxxlt_v.
DATA END OF t_vkey .
* Internal table for holding the online text
DATA BEGIN OF t_online OCCURS 0.
INCLUDE STRUCTURE gxxlt_o.
DATA END OF t_online.
* Internal table to hold print text
DATA BEGIN OF t_print OCCURS 0.
INCLUDE STRUCTURE gxxlt_p.
DATA END OF t_print.
* Internal table to hold SEMA data
DATA BEGIN OF t_sema OCCURS 0.
INCLUDE STRUCTURE gxxlt_s.
DATA END OF t_sema.
* Retrieving data from SFLIGHT
SELECT * FROM sflight
INTO TABLE t_sflight.
* Text which will be displayed online is declared here
t_online-line_no = '1'.
t_online-info_name = 'Created by'.
t_online-info_value = 'KODANDARAMI REDDY.S'.
APPEND t_online.
* Text which will be printed out
t_print-hf = 'H'.
t_print-lcr = 'L'.
t_print-line_no = '1'.
t_print-text = 'This is the header'.
APPEND t_print.
t_print-hf = 'F'.
t_print-lcr = 'C'.
t_print-line_no = '1'.
t_print-text = 'This is the footer'.
APPEND t_print.
* Defining the vertical key columns
t_vkey-col_no = '1'.
t_vkey-col_name = 'MANDT'.
APPEND t_vkey.
t_vkey-col_no = '2'.
t_vkey-col_name = 'CARRID'.
APPEND t_vkey.
t_vkey-col_no = '3'.
t_vkey-col_name = 'CONNID'.
APPEND t_vkey.
t_vkey-col_no = '4'.
t_vkey-col_name = 'FLDATE'.
APPEND t_vkey.
* Header text for the data columns
t_hkey-row_no = '1'.
t_hkey-col_no = 1.
t_hkey-col_name = 'PRICE'.
APPEND t_hkey.
t_hkey-col_no = 2.
t_hkey-col_name = 'CURRENCY'.
APPEND t_hkey.
t_hkey-col_no = 3.
t_hkey-col_name = 'PLANETYPE'.
APPEND t_hkey.
t_hkey-col_no = 4.
t_hkey-col_name = 'SEATSMAX'.
APPEND t_hkey.
t_hkey-col_no = 5.
t_hkey-col_name = 'SEATSOCC'.
APPEND t_hkey.
t_hkey-col_no = 6.
t_hkey-col_name = 'PAYMENTSUM'.
APPEND t_hkey.
* Populating the SEMA data
t_sema-col_no = 1.
t_sema-col_typ = 'STR'.
t_sema-col_ops = 'DFT'.
APPEND t_sema.
t_sema-col_no = 2.
APPEND t_sema.
t_sema-col_no = 3.
APPEND t_sema.
t_sema-col_no = 4.
APPEND t_sema.
t_sema-col_no = 5.
APPEND t_sema.
t_sema-col_no = 6.
APPEND t_sema.
t_sema-col_no = 7.
APPEND t_sema.
t_sema-col_no = 8.
APPEND t_sema.
t_sema-col_no = 9.
APPEND t_sema.
t_sema-col_no = 10.
t_sema-col_typ = 'NUM'.
t_sema-col_ops = 'ADD'.
APPEND t_sema.
CALL FUNCTION 'XXL_FULL_API'
EXPORTING
DATA_ENDING_AT = 54
DATA_STARTING_AT = 5
filename = 'TESTFILE'
header_1 = header1
header_2 = header2
no_dialog = 'X'
no_start = ' '
n_att_cols = 6
n_hrz_keys = 1
n_vrt_keys = 4
sema_type = 'X'
SO_TITLE = ' '
TABLES
data = t_sflight
hkey = t_hkey
online_text = t_online
print_text = t_print
sema = t_sema
vkey = t_vkey
EXCEPTIONS
cancelled_by_user = 1
data_too_big = 2
dim_mismatch_data = 3
dim_mismatch_sema = 4
dim_mismatch_vkey = 5
error_in_hkey = 6
error_in_sema = 7
file_open_error = 8
file_write_error = 9
inv_data_range = 10
inv_winsys = 11
inv_xxl = 12
OTHERS = 13.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
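Outside ABAP, the core improvement XXL_FULL_API brings over WS_DOWNLOAD (a header row identifying the columns) is what any delimited export provides; here is a minimal sketch in Python, illustrative only, with column names borrowed from SFLIGHT and made-up sample values:

```python
# Minimal sketch of the same idea outside ABAP: write a table with a
# header row so the columns are identifiable in the spreadsheet.
# Column names mirror the SFLIGHT fields used above; rows are made up.
import csv
import io

HEADER = ["CARRID", "CONNID", "FLDATE", "PRICE", "CURRENCY"]
rows = [
    ["LH", "0400", "2008-12-04", "666.00", "EUR"],
    ["AA", "0017", "2008-12-05", "422.94", "USD"],
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(HEADER)   # the header row, which WS_DOWNLOAD lacked
writer.writerows(rows)
csv_text = buf.getvalue()
```

This does not reproduce the key highlighting or column swapping of XXL_FULL_API; it only shows the header-row concept.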
reward if helpful
raam -
Uploading clob column's data into table
Hello everyone ...
I used webutil to upload data into a BLOB column, then converted the BLOB to a CLOB... what I have now is comma-delimited data that I want to put into a table...
So, if I have the CLOB column like this:
1,adam,11/08/08,accountant
2,john,12/08/09,clerk
and the destination table is :
ID,
NAME,
Hire_date,
Job
how can I put the 1 in ID, adam in NAME, and so on?
I need urgent help, please.
Thanks a lot
Perhaps it's easier to directly read the file using CLIENT_TEXT_IO line by line instead of first transferring it into the database and then extracting it from there again.
Code example for CLIENT_TEXT_IO (not tested):
DECLARE
tf CLIENT_TEXT_IO.FILE_TYPE;
vcBuffer VARCHAR2(32000);
nId NUMBER;
vcName VARCHAR2(4000);
dtHiredate DATE;
vcJob VARCHAR2(4000);
FUNCTION FK_SPLIT(io_vcText IN OUT VARCHAR2)
RETURN VARCHAR2 IS
vcReturn VARCHAR2(4000);
BEGIN
IF INSTR(io_vcText, ',')>0 THEN
vcReturn:=SUBSTR(io_vcText, 1, INSTR(io_vcText, ',')-1);
io_vcText:=SUBSTR(io_vcText, INSTR(io_vcText, ',')+1);
ELSE
vcReturn:=io_vcText;
io_vcText:=NULL;
END IF;
RETURN vcReturn;
END;
BEGIN
tf:=CLIENT_TEXT_IO.FOPEN('filename', 'r');
LOOP
BEGIN
-- read data line by line
CLIENT_TEXT_IO.GET_LINE(tf, vcBuffer);
-- split buffer into vars
nId:=TO_NUMBER(FK_SPLIT(vcBuffer));
vcName:=FK_SPLIT(vcBuffer);
dtHireDate:=TO_DATE(FK_SPLIT(vcBuffer), 'dd/mm/rr');
vcJob:=FK_SPLIT(vcBuffer);
INSERT INTO DEST_TABLE (
ID,
NAME,
HIRE_DATE,
JOB
) VALUES (
nId,
vcName,
dtHireDate,
vcJob
);
EXCEPTION
WHEN NO_DATA_FOUND THEN
EXIT;
END;
END LOOP;
CLIENT_TEXT_IO.FCLOSE(tf);
END;
(not tested)
If you're importing large files, it might be better to upload the file to the OAS using WEBUTIL_FILE.CLIENT_TO_AS and then use TEXT_IO instead of CLIENT_TEXT_IO.
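The FK_SPLIT idea ports directly to other languages; as a rough Python sketch of the same consume-one-field-at-a-time parsing (the field names and the dd/mm/yy date interpretation are assumptions based on the sample rows above):

```python
# Rough Python equivalent of the FK_SPLIT approach: consume one
# comma-delimited field at a time from a line. Field names and the
# dd/mm/yy date format are assumptions taken from the sample data.
from datetime import datetime

def split_field(text):
    """Return (first_field, rest_of_line), mirroring FK_SPLIT."""
    if "," in text:
        head, _, rest = text.partition(",")
        return head, rest
    return text, ""

def parse_line(line):
    raw_id, line = split_field(line)
    name, line = split_field(line)
    raw_date, line = split_field(line)
    job, _ = split_field(line)
    return {
        "id": int(raw_id),
        "name": name,
        "hire_date": datetime.strptime(raw_date, "%d/%m/%y").date(),
        "job": job,
    }

row = parse_line("1,adam,11/08/08,accountant")
```

Each call to split_field consumes one field and returns the remainder, just as FK_SPLIT shortens io_vcText in place.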
Edited by: Andreas Weiden on 04.12.2008 20:47