SELECT query performance: one big table vs. many small tables
Hello,
We are using BDB 11g with SQLite support. I have a question about SELECT query performance when we have one huge table vs. multiple small tables.
In our application we need to run a SELECT query many times, and today we have one huge table. Do you think breaking it into
multiple small tables will help?
For test purposes we tried creating multiple tables, but SELECT performance was more or less the same. Could that be because all tables map to a single key/value database in the backend, so a lookup (SELECT query) against a small table or a big table makes no difference?
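One quick way to sanity-check this is a toy benchmark. The sketch below uses Python's bundled sqlite3 (table names, sizes, and the 4-way split are made up for illustration); a keyed lookup against one big table and against a per-group small table are both single B-tree descents, so the timings come out close, matching what the poster observed:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE big(id INTEGER PRIMARY KEY, grp INTEGER, val TEXT)")
for g in range(4):
    con.execute(f"CREATE TABLE small{g}(id INTEGER PRIMARY KEY, val TEXT)")

# same 40,000 rows stored once in 'big' and split 4 ways into 'small0..3'
rows = [(i, i % 4, f"v{i}") for i in range(40_000)]
con.executemany("INSERT INTO big VALUES (?, ?, ?)", rows)
for i, g, v in rows:
    con.execute(f"INSERT INTO small{g} VALUES (?, ?)", (i, v))
con.commit()

def bench(query, args, reps=2_000):
    """Time repeated point lookups of the same key."""
    t0 = time.perf_counter()
    for _ in range(reps):
        con.execute(query, args).fetchone()
    return time.perf_counter() - t0

# both lookups are one B-tree descent each; expect similar numbers
t_big = bench("SELECT val FROM big WHERE id = ?", (1234,))
t_small = bench("SELECT val FROM small2 WHERE id = ?", (1234,))  # 1234 % 4 == 2
print(f"big: {t_big:.4f}s  small: {t_small:.4f}s")
```

Timings are machine-dependent; the point is that splitting by rows does not change the cost of an indexed point lookup, only (at best) the depth of the B-tree.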
Thanks.
Hello,
There is some information on this topic in the FAQ at:
http://www.oracle.com/technology/products/berkeley-db/faq/db_faq.html#9-63
If this does not address your question, please just let me know.
Thanks,
Sandra
Similar Messages
-
Using case when statement in the select query to create physical table
Hello,
I have a requirement where I have to execute a CASE WHEN statement with a session variable while creating a physical table using a SELECT query. Let me explain with an example.
I have a physical table based on a select query with one column:
SELECT 'VALUEOF(NQ_SESSION.NAME_PARAMETER)' AS NAME_PARAMETER FROM DUAL. Let me call this the NAME_PARAMETER table.
I also have a customer table.
My dashboard has two pages. Page 1 contains a table on the customer table, with column navigation to my second dashboard page.
On my second dashboard page I created a report based on the NAME_PARAMETER table, and a prompt based on the customer table that sets the NAME_PARAMETER request variable.
EXECUTION
When I click on a particular customer, the prompt sets the variable NAME_PARAMETER and the NAME_PARAMETER table shows the appropriate customer.
Everything works as expected. Yes!
Now I created another table called NAME_PARAMETER1 with a small modification to the earlier table. The query is as follows.
SELECT CASE WHEN 'VALUEOF(NQ_SESSION.NAME_PARAMETER)'='Customer 1' THEN 'TEST_MART1' ELSE TEST_MART2' END AS NAME_PARAMETER
FROM DUAL
Now I pull in this table into the second dashboard page along with the NAME_PARAMETER table report.
Surprisingly, the NAME_PARAMETER table report executes as is, but the report based on the NAME_PARAMETER1 table fails with the following error.
Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 16001] ODBC error state: S1000 code: 1756 message: [Oracle][ODBC][Ora]ORA-01756: quoted string not properly terminated. [nQSError: 16014] SQL statement preparation failed. (HY000)
SQL Issued: SET VARIABLE NAME_PARAMETER='Novartis';SELECT NAME_PARAMETER.NAME_PARAMETER saw_0 FROM POC_ONE_DOT_TWO ORDER BY saw_0
If anyone has any explanation to this error and how we can achieve the same, please help.
Thanks.
Hello,
Update :) Sorry, the error was a silly one. I resolved it, and now I am stuck at my next step.
I am creating a physical table using a select query. But I am trying to obtain the name of the table dynamically.
Here is what I am trying to do. the select query of the physical table is as follows.
SELECT CUSTOMER_ID AS CUSTOMER_ID, CUSTOMER_NAME AS CUSTOMER_NAME FROM 'VALUEOF(NQ_SESSION.SCHEMA_NAME)'.CUSTOMER.
The idea behind this is to obtain the data from the same table in different schemas dynamically, based on a session variable. Please let me know if there is a way to achieve this; if not, please let me know if this can be achieved any other way in OBIEE.
Thanks. -
How to improve Query performance on large table in MS SQL Server 2008 R2
I have a table with 20 million records. What is the best option to improve query performance on this table: partitioning the table into filegroups, or splitting the table into multiple smaller tables?
Hi bala197164,
First, I want to point out that both partitioning the table into filegroups and splitting the table into multiple smaller tables can improve query performance; they fit different situations.
For example, suppose our table has one hundred columns and some of them are not directly related to the table's subject: a table named userinfo that stores user information might carry address_street, address_zip, and address_province columns. In that case we can create a new table named Address
and add a foreign key in the userinfo table that references it. Under this situation, by splitting a large table into smaller, individual tables, queries that access only a fraction of the data can run faster because there is less data to scan.
Another situation is when the table's records can be grouped easily, for example a year column storing the product release date; then we can partition the table into filegroups to improve query performance. Usually we apply
both methods together. Additionally, we can add an index to the table to improve query performance. For more detail, please refer to the following documents:
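The vertical split plus index described above can be sketched with a toy example. The sketch below uses Python's sqlite3 for portability (table and column names follow the userinfo/Address example; the data and the indexed column are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- address columns moved out of userinfo into their own table
CREATE TABLE address(
    address_id INTEGER PRIMARY KEY,
    address_street TEXT, address_zip TEXT, address_province TEXT);
CREATE TABLE userinfo(
    user_id INTEGER PRIMARY KEY,
    name TEXT,
    address_id INTEGER REFERENCES address(address_id));
-- index supporting the common lookup
CREATE INDEX ix_userinfo_name ON userinfo(name);
""")
con.execute("INSERT INTO address VALUES (1, 'Main St', '90210', 'CA')")
con.execute("INSERT INTO userinfo VALUES (10, 'bala', 1)")

# a query touching only userinfo no longer drags the address columns along
row = con.execute("SELECT user_id FROM userinfo WHERE name = ?",
                  ("bala",)).fetchone()
# and the plan confirms the index is used instead of a full scan
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT user_id FROM userinfo WHERE name = ?",
    ("bala",)).fetchall()
```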
Partitioning:
http://msdn.microsoft.com/en-us/library/ms178148.aspx
CREATE INDEX (Transact-SQL):
http://msdn.microsoft.com/en-us/library/ms188783.aspx
Allen Li
TechNet Community Support -
Hello! I can't open an IDML file. The ID file was created in CC (10). It is a 100-page (50 spreads) doc that is one big table. It was created in CC (10) and saved as an IDML file. I have CS6, and when I try to open it, InDesign shuts down almost instantly. The file was created on a Mac and I am trying to open it on a Mac. Any/all advice is greatly appreciated, as I am up against a deadline with this unopened file! Many thanks in advance, Diane
There's a good chance the file is corrupt. Ask whoever sent it to you to verify that it opens on their end.
-
Select query failing on a table that has heavy per-second insertions
Hi
Problem statement
1- We are using 11g as the database.
2- We have a table that is range-partitioned on the date.
3- The insertion rate is very high, i.e. several hundred records per second into the current partition.
4- Data goes continuously into the current partition whenever the buffer is full or the per-second timer expires.
5- We also have to run a SELECT query on the same table, against the current partition, say for the latest 500 records.
6- Efficient indexes have been created on the table.
Solutions tried:
1- After analyzing with tkprof, we observed that parse and execute are fine, but the fetch takes far too long to produce output. Say it takes 1 hour.
2- Using the 11g SQL advisor and SPM, several baselines were created, but their success rate was also observed to be too low.
Please suggest any solution to this issue:
1- e.g. a redesign of the table.
2- Any better way to write the query to fix the fetch issue.
3- Any Oracle settings or parameter changes to fix the fetch issue.
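One common cause of this fetch pattern, visible in the TKPROF output below where an INDEX FULL SCAN feeds ~169 million rows into the ROWNUM stopkey, is an index whose leading columns do not match the filter. A toy sketch (Python/sqlite3; the names loosely follow the trace, data and row counts are invented) of how a composite index matching both the filter and the ORDER BY lets a top-N query stop early:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE almevt(
    almevtid INTEGER PRIMARY KEY, customerid INTEGER, ts INTEGER)""")
con.executemany("INSERT INTO almevt VALUES (?, ?, ?)",
                [(i, i % 50, i) for i in range(10_000)])

# composite index whose leading column matches the equality filter and
# whose second column matches the ORDER BY direction
con.execute("CREATE INDEX ix_cust_ts ON almevt(customerid, ts DESC)")

# top-N newest events for one customer: the index is walked in order and
# the scan stops after LIMIT rows instead of sorting the whole partition
rows = con.execute("""SELECT almevtid FROM almevt
                      WHERE customerid = 0
                      ORDER BY ts DESC LIMIT 500""").fetchall()
plan = con.execute("""EXPLAIN QUERY PLAN SELECT almevtid FROM almevt
                      WHERE customerid = 0
                      ORDER BY ts DESC LIMIT 500""").fetchall()
```

The same idea in Oracle terms would be an index led by the filter columns (customerid / parentcustid) followed by the timestamp, rather than one that forces a full index scan plus filter.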
Thanks in advance.
Regards
Vishal Sharma
I am uploading the latest stats. Please let me know how I can improve this, as it is taking 25 minutes.
####TKPROF output#########
SQL ID : 2j5w6bv437cak
select almevttbl.AlmEvtId, almevttbl.AlmType, almevttbl.ComponentId,
almevttbl.TimeStamp, almevttbl.Severity, almevttbl.State,
almevttbl.Category, almevttbl.CauseCode, almevttbl.UnitType,
almevttbl.UnitId, almevttbl.UnitName, almevttbl.ServerName,
almevttbl.StrParam, almevttbl.ExtraStrParam, almevttbl.ExtraStrParam2,
almevttbl.ExtraStrParam3, almevttbl.ParentCustId, almevttbl.ExtraParam1,
almevttbl.ExtraParam2, almevttbl.ExtraParam3,almevttbl.ExtraParam4,
almevttbl.ExtraParam5, almevttbl.SRCIPADDRFAMILY,almevttbl.SrcIPAddress11,
almevttbl.SrcIPAddress12,almevttbl.SrcIPAddress13,almevttbl.SrcIPAddress14,
almevttbl.DESTIPADDRFAMILY,almevttbl.DestIPAddress11,
almevttbl.DestIPAddress12,almevttbl.DestIPAddress13,
almevttbl.DestIPAddress14, almevttbl.DestPort, almevttbl.SrcPort,
almevttbl.SessionDir, almevttbl.CustomerId, almevttbl.ProfileId,
almevttbl.ParentProfileId, almevttbl.CustomerName, almevttbl.AttkDir,
almevttbl.SubCategory, almevttbl.RiskCategory, almevttbl.AssetValue,
almevttbl.IPSAction, almevttbl.l4Protocol,almevttbl.ExtraStrParam4 ,
almevttbl.ExtraStrParam5,almevttbl.username,almevttbl.ExtraStrParam6,
IpAddrFamily1,IPAddrValue11,IPAddrValue12,IPAddrValue13,IPAddrValue14,
IpAddrFamily2,IPAddrValue21,IPAddrValue22,IPAddrValue23,IPAddrValue24
FROM
AlmEvtTbl PARTITION(ALMEVTTBLP20100323) WHERE AlmEvtId IN ( SELECT * FROM
( SELECT /*+ FIRST_ROWS(1000) INDEX (AlmEvtTbl AlmEvtTbl_Index) */AlmEvtId
FROM AlmEvtTbl PARTITION(ALMEVTTBLP20100323) where ((AlmEvtTbl.Customerid
= 0 or AlmEvtTbl.ParentCustId = 0)) ORDER BY AlmEvtTbl.TIMESTAMP DESC)
WHERE ROWNUM < 602) order by timestamp desc
call count cpu elapsed disk query current rows
Parse 1 0.10 0.17 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 42 1348.25 1521.24 1956 39029545 0 601
total 44 1348.35 1521.41 1956 39029545 0 601
Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 82
Rows Row Source Operation
601 PARTITION RANGE SINGLE PARTITION: 24 24 (cr=39029545 pr=1956 pw=1956 time=11043 us cost=0 size=7426 card=1)
601 TABLE ACCESS BY LOCAL INDEX ROWID ALMEVTTBL PARTITION: 24 24 (cr=39029545 pr=1956 pw=1956 time=11030 us cost=0 size=7426 card=1)
601 INDEX FULL SCAN ALMEVTTBL_INDEX PARTITION: 24 24 (cr=39029377 pr=1956 pw=1956 time=11183 us cost=0 size=0 card=1)(object id 72557)
601 FILTER (cr=39027139 pr=0 pw=0 time=0 us)
169965204 COUNT STOPKEY (cr=39027139 pr=0 pw=0 time=24859073 us)
169965204 VIEW (cr=39027139 pr=0 pw=0 time=17070717 us cost=0 size=13 card=1)
169965204 PARTITION RANGE SINGLE PARTITION: 24 24 (cr=39027139 pr=0 pw=0 time=13527031 us cost=0 size=48 card=1)
169965204 TABLE ACCESS BY LOCAL INDEX ROWID ALMEVTTBL PARTITION: 24 24 (cr=39027139 pr=0 pw=0 time=10299895 us cost=0 size=48 card=1)
169965204 INDEX FULL SCAN ALMEVTTBL_INDEX PARTITION: 24 24 (cr=1131414 pr=0 pw=0 time=3222624 us cost=0 size=0 card=1)(object id 72557)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 42 0.00 0.00
SQL*Net message from client 42 11.54 133.54
db file sequential read 1956 0.20 28.00
latch free 21 0.00 0.01
latch: cache buffers chains 9 0.01 0.02
SQL ID : 0ushr863b7z39
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0)
FROM
(SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("PLAN_TABLE") FULL("PLAN_TABLE")
NO_PARALLEL_INDEX("PLAN_TABLE") */ 1 AS C1, CASE WHEN
"PLAN_TABLE"."STATEMENT_ID"=:B1 THEN 1 ELSE 0 END AS C2 FROM
"SYS"."PLAN_TABLE$" "PLAN_TABLE") SAMPLESUB
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.01 1 3 0 1
total 3 0.00 0.01 1 3 0 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 82 (recursive depth: 1)
Rows Row Source Operation
1 SORT AGGREGATE (cr=3 pr=1 pw=1 time=0 us)
0 TABLE ACCESS FULL PLAN_TABLE$ (cr=3 pr=1 pw=1 time=0 us cost=29 size=138856 card=8168)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 1 0.01 0.01
SQL ID : bjkdb51at8dnb
EXPLAIN PLAN SET STATEMENT_ID='PLUS30350011' FOR select almevttbl.AlmEvtId,
almevttbl.AlmType, almevttbl.ComponentId, almevttbl.TimeStamp,
almevttbl.Severity, almevttbl.State, almevttbl.Category,
almevttbl.CauseCode, almevttbl.UnitType, almevttbl.UnitId,
almevttbl.UnitName, almevttbl.ServerName, almevttbl.StrParam,
almevttbl.ExtraStrParam, almevttbl.ExtraStrParam2, almevttbl.ExtraStrParam3,
almevttbl.ParentCustId, almevttbl.ExtraParam1, almevttbl.ExtraParam2,
almevttbl.ExtraParam3,almevttbl.ExtraParam4,almevttbl.ExtraParam5,
almevttbl.SRCIPADDRFAMILY,almevttbl.SrcIPAddress11,almevttbl.SrcIPAddress12,
almevttbl.SrcIPAddress13,almevttbl.SrcIPAddress14,
almevttbl.DESTIPADDRFAMILY,almevttbl.DestIPAddress11,
almevttbl.DestIPAddress12,almevttbl.DestIPAddress13,
almevttbl.DestIPAddress14, almevttbl.DestPort, almevttbl.SrcPort,
almevttbl.SessionDir, almevttbl.CustomerId, almevttbl.ProfileId,
almevttbl.ParentProfileId, almevttbl.CustomerName, almevttbl.AttkDir,
almevttbl.SubCategory, almevttbl.RiskCategory, almevttbl.AssetValue,
almevttbl.IPSAction, almevttbl.l4Protocol,almevttbl.ExtraStrParam4 ,
almevttbl.ExtraStrParam5,almevttbl.username,almevttbl.ExtraStrParam6,
IpAddrFamily1,IPAddrValue11,IPAddrValue12,IPAddrValue13,IPAddrValue14,
IpAddrFamily2,IPAddrValue21,IPAddrValue22,IPAddrValue23,IPAddrValue24 FROM
AlmEvtTbl PARTITION(ALMEVTTBLP20100323) WHERE AlmEvtId IN ( SELECT * FROM
( SELECT /*+ FIRST_ROWS(1000) INDEX (AlmEvtTbl AlmEvtTbl_Index) */AlmEvtId
FROM AlmEvtTbl PARTITION(ALMEVTTBLP20100323) where ((AlmEvtTbl.Customerid
= 0 or AlmEvtTbl.ParentCustId = 0)) ORDER BY AlmEvtTbl.TIMESTAMP DESC)
WHERE ROWNUM < 602) order by timestamp desc
call count cpu elapsed disk query current rows
Parse 1 0.28 0.26 0 0 0 0
Execute 1 0.01 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.29 0.27 0 0 0 0
Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 82
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 13 0.71 0.96 3 10 0 0
Execute 14 0.20 0.29 4 304 26 21
Fetch 92 2402.17 2714.85 3819 70033708 0 1255
total 119 2403.09 2716.10 3826 70034022 26 1276
Misses in library cache during parse: 10
Misses in library cache during execute: 6
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 49 0.00 0.00
SQL*Net message from client 48 29.88 163.43
db file sequential read 1966 0.20 28.10
latch free 21 0.00 0.01
latch: cache buffers chains 9 0.01 0.02
latch: session allocation 1 0.00 0.00
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 940 0.51 0.73 1 2 38 0
Execute 3263 1.93 2.62 7 1998 43 23
Fetch 6049 1.32 4.41 214 12858 36 13724
total 10252 3.78 7.77 222 14858 117 13747
Misses in library cache during parse: 172
Misses in library cache during execute: 168
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 88 0.04 0.62
latch: shared pool 8 0.00 0.00
latch: row cache objects 2 0.00 0.00
latch free 1 0.00 0.00
latch: session allocation 1 0.00 0.00
34 user SQL statements in session.
3125 internal SQL statements in session.
3159 SQL statements in session.
Trace file: ora11g_ora_2064.trc
Trace file compatibility: 11.01.00
Sort options: default
6 sessions in tracefile.
98 user SQL statements in trace file.
9111 internal SQL statements in trace file.
3159 SQL statements in trace file.
89 unique SQL statements in trace file.
30341 lines in trace file.
6810 elapsed seconds in trace file.
###################################### AutoTrace Output#################
Statistics
3901 recursive calls
0 db block gets
39030275 consistent gets
1970 physical reads
140 redo size
148739 bytes sent via SQL*Net to client
860 bytes received via SQL*Net from client
42 SQL*Net roundtrips to/from client
73 sorts (memory)
0 sorts (disk)
601 rows processed -
How to insert select columns from one internal table to another
Hi,
How do I copy selected columns from one internal table to another based on a condition, as we do from a standard table to an internal table?
regards,
Sriram
Hi,
If your question is about copying data from one internal table to another, we can use:
APPEND LINES OF it_1 TO it_2.
Or, if they have different columns, then:
LOOP AT it_1 INTO wa_it1.
  MOVE wa_it1-data TO wa_it2-d1.
  APPEND wa_it2 TO it_2.
  CLEAR wa_it2.
ENDLOOP.
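Outside ABAP, the same selective copy is just a filtered column projection. A sketch in Python (field names `data`, `d1`, and the `extra >= 20` condition are invented for illustration):

```python
# the source "internal table" as a list of rows
it_1 = [
    {"id": 1, "data": "alpha", "extra": 10},
    {"id": 2, "data": "beta",  "extra": 20},
    {"id": 3, "data": "gamma", "extra": 30},
]

# like LOOP AT ... MOVE ... APPEND: keep only rows meeting the condition,
# copying column 'data' into target column 'd1'
it_2 = [{"d1": row["data"]} for row in it_1 if row["extra"] >= 20]
```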
thnxz -
Select query performance improvement - Index on EDIDC table
Hi Experts,
I have a scenario where in I have to select data from the table EDIDC. The select query being used is given below.
SELECT docnum
direct
mestyp
mescod
rcvprn
sndprn
upddat
updtim
INTO CORRESPONDING FIELDS OF TABLE t_edidc
FROM edidc
FOR ALL ENTRIES IN t_error_idoc
WHERE
upddat GE gv_date1 AND
upddat LE gv_date2 AND
updtim GE p_time AND
status EQ t_error_idoc-status.
As the volume of data is very high, our client asked us to add an index, or use an existing one, to improve the performance of the data-selection query.
Question:
1. How do we identify the index to be used?
2. On which fields should the index be created to improve performance (if the available indexes don't cover our case)?
3. What will be the impact on the table's performance if we create a new index?
Regards ,
Raghav
Question:
1. How do we identify the index to be used.
Generally the index is selected automatically by SAP (the DB optimizer). (You can still specify the index name in your SELECT query by changing the syntax.)
For your SELECT query the second index will be chosen automatically by the optimizer, because the query lists 'upddat' and 'updtim' in sequence before 'status'.
2. On which fields should the index be created to improve performance (if the available indexes don't cover our case)?
(Create a new index with MANDT and the 4 fields that appear in the WHERE clause, in sequence.)
3. What will be the impact on the table's performance if we create a new index?
(Since the newly created index will be only the 4th index on the table, there shouldn't be any side effects.)
After creating the index, check the change in performance of the current program, and also of other programs that have SELECT queries on EDIDC (with various types of WHERE clauses, preferably), to verify that the new index does not have a negative impact on their performance. Additionally, if possible, check whether you can avoid INTO CORRESPONDING FIELDS.
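The effect of an index ordered like the WHERE clause can be sketched with a toy example (Python/sqlite3; the table is a simplified stand-in for EDIDC and the data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE edidc(
    docnum INTEGER PRIMARY KEY, upddat TEXT, updtim TEXT, status TEXT)""")
con.executemany("INSERT INTO edidc VALUES (?, ?, ?, ?)",
    [(i, f"2023-01-{i % 28 + 1:02d}", "12:00:00",
      "51" if i % 7 == 0 else "03")
     for i in range(5_000)])

# index columns ordered like the WHERE clause: date range, time, status
con.execute("CREATE INDEX ix_edidc_upd ON edidc(upddat, updtim, status)")

query = """SELECT docnum FROM edidc
           WHERE upddat >= ? AND upddat <= ?
             AND updtim >= ? AND status = ?"""
args = ("2023-01-10", "2023-01-12", "00:00:00", "03")
rows = con.execute(query, args).fetchall()
# the plan shows a range search on the new index, not a full table scan
plan = con.execute("EXPLAIN QUERY PLAN " + query, args).fetchall()
```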
Regards ,
Seth -
Modify a SELECT Query on ISU DB tables to improve performance
Hi Experts,
I have a SELECT query in a Program which is hitting 6 DB tables by means of 5 inner joins.
The outcome is that the program takes an exceptionally long time to execute, the SELECT statement being the main time consumer.
Need your expertise on how to split the Query without affecting functionality -
The Query :
SELECT fkkvkp~gpart eabl~ablbelnr eabl~adat eabl~istablart
FROM eabl
INNER JOIN eablg ON eablg~ablbelnr = eabl~ablbelnr
INNER JOIN egerh ON egerh~equnr = eabl~equnr
INNER JOIN eastl ON eastl~logiknr = egerh~logiknr
INNER JOIN ever ON ever~anlage = eastl~anlage
INNER JOIN fkkvkp ON fkkvkp~vkont = ever~vkonto
INTO TABLE itab
WHERE eabl~adat GT [date which is (sy-datum - 3 years)]
Thanks in advance,
PD
Hi Prajakt,
There are a couple of issues with the code provided by Aviansh:
1) Higher Memory consumption by extensive use of internal tables (possible shortdump TSV_NEW_PAGE_ALLOC_FAILED)
2) In many instances multiple SELECT ... FOR ALL ENTRIES... are not faster than a single JOIN statement
3) In the given code the timeslice tables are limited to records active today, which is not the same as your select (taking into account that you select the last three years, you probably want historical meter/installation relationships as well*)
4) Use of sorted/hashed internal tables instead of standard ones could also improve the runtime (in case you stick to all the internal tables)
Did you create an index on EABL including columns MANDT, ADAT?
Did you check the execution plan of your original JOIN Select statement?
Yep
Jürgen
You should review your selection, because you probably want business partner that was linked to the meter reading at the time of ADAT, while your select doesn't take the specific Contract / Device Installation of the time of ADAT into account.
Example your meter reading is from 16.02.2010
Meter 00001 was in Installation 3000001 between 01.02.2010 and 23.08.2010
Meter 00002 was in Installation 3000001 between 24.08.2010 and 31.12.9999
Installation 3000001 was linked to Account 4000001 between 01.01.2010 and 23.01.2011
Installation 3000001 was linked to Account 4000002 between 24.01.2011 and 31.12.9999
This means your select returns four lines, and you probably want only one.
To achieve that you have to limit all timeslices to the date of EABL-ADAT (selects from EGERH, EASTL, EVER).
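The effect of restricting every validity interval to EABL-ADAT can be sketched with toy tables (Python/sqlite3; simplified stand-ins for the reading/installation/account timeslices, dates adapted from the example above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE reading(meter TEXT, adat TEXT);
CREATE TABLE install(meter TEXT, inst TEXT, ab TEXT, bis TEXT);
CREATE TABLE account(inst TEXT, acct TEXT, ab TEXT, bis TEXT);
INSERT INTO reading VALUES ('00001', '2010-02-16');
INSERT INTO install VALUES ('00001', '3000001', '2010-02-01', '2010-08-23');
INSERT INTO install VALUES ('00002', '3000001', '2010-08-24', '9999-12-31');
INSERT INTO account VALUES ('3000001', '4000001', '2010-01-01', '2011-01-23');
INSERT INTO account VALUES ('3000001', '4000002', '2011-01-24', '9999-12-31');
""")

# without timeslice limits, the join multiplies rows across all slices
unrestricted = con.execute("""
    SELECT r.adat, i.inst, a.acct
    FROM reading r
    JOIN install i ON i.meter = r.meter
    JOIN account a ON a.inst = i.inst""").fetchall()

# restricting every validity interval to the reading date ADAT keeps
# exactly the slice that was active at reading time
sliced = con.execute("""
    SELECT r.adat, i.inst, a.acct
    FROM reading r
    JOIN install i ON i.meter = r.meter AND r.adat BETWEEN i.ab AND i.bis
    JOIN account a ON a.inst = i.inst  AND r.adat BETWEEN a.ab AND a.bis
    """).fetchall()
```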
Update:
Coming back to point one and the memory consumption:
What are you planning to do with the output of the select statement?
Did you get a shortdump TSV_NEW_PAGE_ALLOC_FAILED with three years meter reading history?
Or did you never run on production like volumes yet?
Dependent on this you might want to redesign your program anyway.
Edited by: sattlerj on Jun 24, 2011 10:38 AM -
Need help in optimisation for a select query on a large table
Hi Gurus
Please help me optimise this code. It takes 1 hour for 3,000-4,000 records; it's very slow.
My SELECT reads from a table which contains 10 million records.
I am running the select on the large table and retrieving values by comparing against my table, which has 3,000-4,000 records.
I am pasting the code below. Please help.
Data: wa_i_tab1 type tys_tg_1 .
DATA: i_tab TYPE STANDARD TABLE OF tys_tg_1.
Data : wa_result_pkg type tys_tg_1,
wa_result_pkg1 type tys_tg_1.
SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1 FROM
/BIC/PZREB_SDAT                    " this table contains 10 million records
INTO CORRESPONDING FIELDS OF TABLE i_tab
FOR ALL ENTRIES IN RESULT_PACKAGE  " contains 3,000-4,000 records
where
/bic/ZREB_SDAT = RESULT_PACKAGE-/BIC/ZREB_SDAT
AND
AGREEMENT = RESULT_PACKAGE-AGREEMENT
AND /BIC/ZLITEM1 = RESULT_PACKAGE-/BIC/ZLITEM1.
sort RESULT_PACKAGE by AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
sort i_tab by AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
loop at RESULT_PACKAGE into wa_result_pkg.
read TABLE i_tab INTO wa_i_tab1 with key
/BIC/ZREB_SDAT =
wa_result_pkg-/BIC/ZREB_SDAT
AGREEMENT = wa_result_pkg-AGREEMENT
/BIC/ZLITEM1 = wa_result_pkg-/BIC/ZLITEM1.
IF SY-SUBRC = 0.
move wa_i_tab1-/BIC/ZSETLRUN to
wa_result_pkg-/BIC/ZSETLRUN.
wa_result_pkg1-/BIC/ZSETLRUN = wa_result_pkg-/BIC/ZSETLRUN.
modify RESULT_PACKAGE from wa_result_pkg1
TRANSPORTING /BIC/ZSETLRUN.
ENDIF.
CLEAR: wa_i_tab1,wa_result_pkg1,wa_result_pkg.
endloop.
Hi,
1) Check whether the RESULT_PACKAGE internal table contains duplicate records with respect to the WHERE condition below.
2) Remove INTO CORRESPONDING FIELDS OF TABLE and use INTO TABLE instead.
Refer to the code below:
RESULT_PACKAGE1[] = RESULT_PACKAGE[].
SORT RESULT_PACKAGE1 BY /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
DELETE ADJACENT DUPLICATES FROM RESULT_PACKAGE1 COMPARING /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1
from /BIC/PZREB_SDAT
into table i_tab
FOR ALL ENTRIES IN RESULT_PACKAGE1
where
/bic/ZREB_SDAT = RESULT_PACKAGE1-/BIC/ZREB_SDAT
AND
AGREEMENT = RESULT_PACKAGE1-AGREEMENT
AND /BIC/ZLITEM1 = RESULT_PACKAGE1-/BIC/ZLITEM1.
And one more thing: since you are fetching from 10 million records, use PACKAGE SIZE in your select query.
Also refer to the following thread: For All Entries for 1 Million Records
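The dedupe-then-batch pattern described above (SORT + DELETE ADJACENT DUPLICATES, then fetching in packages) can be sketched outside ABAP. The sketch below uses Python/sqlite3; table and field names loosely follow the /BIC/ names, and the data, key values, and package size are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE big(sdat TEXT, agreement TEXT, litem TEXT, setlrun TEXT)")
con.executemany("INSERT INTO big VALUES (?, ?, ?, ?)",
    [(f"d{i % 5}", f"a{i % 3}", f"l{i % 2}", f"s{i}") for i in range(1_000)])
con.execute("CREATE INDEX ix_big ON big(sdat, agreement, litem)")

# the driver table, with a duplicate key (like an undeduped RESULT_PACKAGE)
result_package = [("d1", "a1", "l1"), ("d2", "a2", "l0"), ("d1", "a1", "l1")]

# 1) drop duplicate keys, like SORT + DELETE ADJACENT DUPLICATES
keys = sorted(set(result_package))

# 2) fetch in packages instead of one huge disjunction, like PACKAGE SIZE
PACKAGE_SIZE = 100
rows = []
for start in range(0, len(keys), PACKAGE_SIZE):
    chunk = keys[start:start + PACKAGE_SIZE]
    preds = " OR ".join("(sdat=? AND agreement=? AND litem=?)" for _ in chunk)
    args = [v for key in chunk for v in key]
    rows += con.execute(
        f"SELECT sdat, agreement, litem, setlrun FROM big WHERE {preds}",
        args).fetchall()
```

Deduplicating first means each key is looked up once instead of once per duplicate, which is exactly why the advice above removes duplicates before FOR ALL ENTRIES.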
Regards,
Dhina..
Edited by: Dhina DMD on Sep 15, 2011 7:17 AM -
The table I have:
create table test of xmltype;
and the XML that I have loaded is:
<root>
<company>
<department>
<id>10</id>
<name>Accounting</name>
</department>
<department>
<id>11</id>
<name>Billing</name>
</department>
</company>
</root>
The select query using XMLTABLE is:
select id, name
from test,
     xmltable('/root/company/department'
              passing object_value
              columns
                id   number       path 'id',
                name varchar2(20) path 'name');
The query works fine, but the issue is performance.
I have implemented an index using the extract() and extractstringval() functions, but as I have multiple
occurrences of the data, those two do not work. I have a non-schema-based XMLType table.
I need help creating an index on a multiply-occurring element.
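For reference, the shredding that the XMLTable query performs can be reproduced outside the database to verify the expected rows. This is only a cross-check of the expected result, not a performance aid (Python stdlib, using the XML from the post):

```python
import xml.etree.ElementTree as ET

xml = """<root><company>
<department><id>10</id><name>Accounting</name></department>
<department><id>11</id><name>Billing</name></department>
</company></root>"""

# same rows the xmltable('/root/company/department') query produces:
# one (id, name) tuple per department element
rows = [(int(d.findtext("id")), d.findtext("name"))
        for d in ET.fromstring(xml).iter("department")]
```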
Any help is appreciated.
First of all, "XMLOptimizationCheck" AFAIK is not yet explained. I haven't checked support.oracle.com for a while though.
It is currently more or less internal, but for the public it is a fast method to detect that Oracle's internal XQuery/XPath optimization rewrites towards SQL methods (shortcuts) are not working properly. SYS_XQEXVAL probably means something like XQuery Element XML Value/validation (??? towards SQL value) isn't producing a simple construct with a predicate validation. The reasons section gives insight, just like a 10053 trace, on what attempts/rules were applied and failed or worked. I am guessing that the overall cost of using the normal PK index is so high because it cannot be properly matched and/or optimized against the global index structure supporting the partitions.
In all, a bit more info regarding the table/partition structure and its used index regime/structure would be helpful.
Besides that: THIS IS A BUG and should be reported, with a request for help, via support.oracle.com.
Edited by: Marco Gralike on Mar 23, 2011 9:35 PM -
SELECT Query performance tunning
Hi All,
Our objective is to read values from three DSO tables; for that we have written three SELECT queries.
We have used three internal tables for this.
The code is written in the END routine.
A model SELECT statement for reading the values in the DSO, and the MOVE statement, are given below.
For 175,000 records the DTP takes about 8 hours to run.
Usually it should take just 20 minutes.
Can anybody help with this, please?
SELECT logsys
doc_num
doc_item
comp_code
/bic/gpusiteid
/bic/gpumtgrid
/bic/gpuspntyp
/bic/gpuspndid
/bic/gpuprocmt
/bic/gpubufunc
co_area
order_quan
po_unit
entry_date
/bic/gpuitmddt
/bic/gpuovpoc
currency
/bic/gpudel_in
BT8695*
costcenter
/bic/gpuordnum
/bic/gpupostxt
BT8695*
FROM (c_poadm_det)
INTO TABLE t_podetails
FOR ALL ENTRIES IN result_package
WHERE logsys EQ result_package-logsys
AND doc_num EQ result_package-doc_num
AND doc_item EQ result_package-doc_item.
LOOP AT result_package
ASSIGNING <result_fields>.
UNASSIGN <fs_podetails>.
READ TABLE t_podetails
ASSIGNING <fs_podetails>
WITH KEY logsys = <result_fields>-logsys
doc_num = <result_fields>-doc_num
doc_item = <result_fields>-doc_item.
IF sy-subrc EQ 0.
MOVE <fs_podetails>-/bic/gpusiteid TO <result_fields>-/bic/gpusiteid.
MOVE <fs_podetails>-/bic/gpumtgrid TO <result_fields>-/bic/gpumtgrid.
MOVE <fs_podetails>-/bic/gpuspntyp TO <result_fields>-/bic/gpuspntyp.
IF <result_fields>-order_quan NE ' '.
MOVE c_true TO <result_fields>-/bic/gpucount.
ENDIF.
ENDIF.
Hi,
In the READ statement just use BINARY SEARCH; it will improve the performance. Before using BINARY SEARCH, the
internal table must be sorted by the fields you give in the READ statement's key:
sort t_podetails by logsys doc_num doc_item. "add this line
LOOP AT result_package
ASSIGNING <result_fields>.
UNASSIGN <fs_podetails>. "why UNASSIGN here? It can dump: the field symbol is only assigned by the READ below.
READ TABLE t_podetails
ASSIGNING <fs_podetails>
WITH KEY logsys = <result_fields>-logsys
doc_num = <result_fields>-doc_num
doc_item = <result_fields>-doc_item. " use BINARY SEARCH here
IF sy-subrc EQ 0.
MOVE <fs_podetails>-/bic/gpusiteid TO <result_fields>-/bic/gpusiteid.
MOVE <fs_podetails>-/bic/gpumtgrid TO <result_fields>-/bic/gpumtgrid.
MOVE <fs_podetails>-/bic/gpuspntyp TO <result_fields>-/bic/gpuspntyp.
IF <result_fields>-order_quan NE ' '.
MOVE c_true TO <result_fields>-/bic/gpucount.
ENDIF.
ENDIF.
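The SORT + READ ... BINARY SEARCH pattern corresponds to sorting once and then doing bisection lookups instead of linear scans. A sketch in Python (field names follow the ABAP above; the data and the `siteid` payload are invented):

```python
from bisect import bisect_left

t_podetails = [
    {"logsys": "S2", "doc_num": 1, "doc_item": 20, "siteid": "C"},
    {"logsys": "S1", "doc_num": 2, "doc_item": 10, "siteid": "B"},
    {"logsys": "S1", "doc_num": 1, "doc_item": 10, "siteid": "A"},
]

# SORT t_podetails BY logsys doc_num doc_item
t_podetails.sort(key=lambda r: (r["logsys"], r["doc_num"], r["doc_item"]))
keys = [(r["logsys"], r["doc_num"], r["doc_item"]) for r in t_podetails]

def read_binary(logsys, doc_num, doc_item):
    """READ TABLE ... WITH KEY ... BINARY SEARCH: O(log n) per lookup."""
    key = (logsys, doc_num, doc_item)
    i = bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return t_podetails[i]      # sy-subrc = 0
    return None                    # sy-subrc <> 0
```

Inside a loop over 175,000 result rows, the difference between a linear READ and a binary one is the difference between O(n·m) and O(n·log m), which is where the hours go.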
Regards,
Dhina.. -
SELECT query performance issue
Hello experts!!!
I am facing a performance issue with the SELECT query below. It takes a long time to execute.
Please suggest how I can improve the performance of this query.
SELECT MBLNR MATNR LIFNR MENGE WERKS BUKRS LGORT BWART INTO CORRESPONDING FIELDS OF TABLE IT_MSEG
FROM MSEG
WHERE MATNR IN S_MATNR
AND LIFNR IN S_LIFNR
AND WERKS IN S_WERKS
AND BUKRS IN S_BUKRS
AND XAUTO = ''
AND BWART IN ('541' , '542' , '543' , '544', '105' , '106').
Thanks in advance.
Regards
Ankur
Hi Ankur,
the MSEG index for material is
Index MSEG~M
MANDT
MATNR
WERKS
LGORT
BWART
SOBKZ
It can be used very efficiently if you supply values for MATNR, WERKS and LGORT.
There is no index on LIFNR. If you want the data for specific vendor(s), you should select from EKKO first; it has index EKKO~1:
MANDT
LIFNR
EKORG
EKGRP
BEDAT
You can JOIN EKKO and EKBE to get the BSEG key fields GJAHR BELNR BUZEI directly.
I don't know your details, but I think you can get all you need from EKKO and EKBE. You may also consider EKPO, as it has a material index EKPO~1:
MANDT
MATNR
WERKS
BSTYP
LOEKZ
ELIKZ
MATKL
Do you really need the (much bigger) MSEG?
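The leftmost-prefix rule this advice relies on can be sketched with a toy example (Python/sqlite3; a simplified stand-in for MSEG with invented data): supplying the leading index columns gives an index search, while filtering only on an unindexed column like LIFNR forces a full scan.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE mseg(
    mblnr INTEGER PRIMARY KEY, matnr TEXT, werks TEXT, lgort TEXT,
    bwart TEXT, lifnr TEXT)""")
# index modeled on MSEG~M: MATNR, WERKS, LGORT, BWART
con.execute("CREATE INDEX ix_m ON mseg(matnr, werks, lgort, bwart)")
con.executemany("INSERT INTO mseg VALUES (?, ?, ?, ?, ?, ?)",
    [(i, f"M{i % 10}", "1000", "0001", "541", f"V{i % 4}")
     for i in range(400)])

def plan(sql, args=()):
    """Return the query plan as one string."""
    return " ".join(str(r) for r in con.execute("EXPLAIN QUERY PLAN " + sql,
                                                args))

# leading index columns supplied -> index search
p1 = plan("SELECT mblnr FROM mseg WHERE matnr=? AND werks=? AND lgort=?",
          ("M3", "1000", "0001"))
# only LIFNR supplied -> no usable index, full table scan
p2 = plan("SELECT mblnr FROM mseg WHERE lifnr=?", ("V1",))
```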
Regards,
Clemens -
Using DB Links - Improving SELECT query performance
Hi there,
I am using dblink in the following query:
I would like to improve the performance of the query by using hints as described in this link: http://www.experts-exchange.com/Database/Oracle/9.x/Q_23640348.html. However, I am not sure how to include this in my select query.
Details are:
Oracle - 9i Database Terminal Release .8
DB Link: TCPROD
Could someone please explain with an example how to use hints to get the query to select data on the remote database and then return the results to the target database?
Many Thanks.
SELECT ec.obid AS prObid,
ec.b2ProgramName AS program,
ec.projectName AS project,
ec.wbsID AS prNo,
ec.wbsName AS title,
ec.revision AS revision,
ec.superseded AS revisionSuperseded,
ec.lifeCycleState AS lifeCycleState,
ec.b2ChangeType AS type,
ec.b2Complexity AS subType,
ec.r1SsiCode AS ssi,
ec.b2disposition as disposition,
ec.wbsOriginator AS requestor,
ec.wbsAdministrator AS administrator,
ec.changepriority as priority,
ec.r1tsc as tsc,
ec.t1comments as tenixComments,
ec.b2securityclass as securityClassification,
ec.t1changesafety as safety,
ec.t1actionofficer as actionOfficer,
ec.t1changereason as changeReason,
ec.t1wbsextchangenumber as extChangeNo,
ec.creator as creator,
to_date(substr(ec.creationdate,
0,
instr(ec.creationdate, ':', 1, 3) - 1),
'YYYY/MM/DD-HH24:MI:SS') as creationdate,
to_date(ec.originatorassigndate, 'YYYY/MM/DD') as originatorassigndate,
zbd.description as description,
zbc.comments as comments
FROM (SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM awdbt1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM mart1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM mpsdt1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM nondt1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnast1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnlht1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnolt1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rzptt1m4.cmPrRpIt@TCPROD) ec
It's the table name in the hint, not the column name.
Something like:
SELECT ec.obid AS prObid,
ec.b2ProgramName AS program,
ec.projectName AS project,
ec.wbsID AS prNo,
ec.wbsName AS title,
ec.revision AS revision,
ec.superseded AS revisionSuperseded,
ec.lifeCycleState AS lifeCycleState,
ec.b2ChangeType AS type,
ec.b2Complexity AS subType,
ec.r1SsiCode AS ssi,
ec.b2disposition as disposition,
ec.wbsOriginator AS requestor,
ec.wbsAdministrator AS administrator,
ec.changepriority as priority,
ec.r1tsc as tsc,
ec.t1comments as tenixComments,
ec.b2securityclass as securityClassification,
ec.t1changesafety as safety,
ec.t1actionofficer as actionOfficer,
ec.t1changereason as changeReason,
ec.t1wbsextchangenumber as extChangeNo,
ec.creator as creator,
to_date(substr(ec.creationdate,
0,
instr(ec.creationdate, ':', 1, 3) - 1),
'YYYY/MM/DD-HH24:MI:SS') as creationdate,
to_date(ec.originatorassigndate, 'YYYY/MM/DD') as originatorassigndate
FROM (SELECT /*+ DRIVING_SITE(awdbt1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM awdbt1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(mart1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM mart1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(mpsdt1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM mpsdt1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(nondt1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM nondt1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(rnast1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnast1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(rnlht1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnlht1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(rnolt1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnolt1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(rzptt1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rzptt1m4.cmPrRpIt@TCPROD) ec
(Not tested, of course.)
-
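An aside on the repeated-branch pattern above (not part of the original reply): because every branch projects the same column list, plain UNION forces a distinct sort across the combined result, while UNION ALL simply concatenates the branches and skips that work when duplicates are impossible or acceptable. A minimal sqlite3 sketch, with hypothetical table names standing in for the site schemas, shows the difference:

```python
# Sketch: UNION vs UNION ALL over identically-structured tables.
# Table and column names here are illustrative, not the schemas from the thread.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE site_a (obid INTEGER, wbsid TEXT);
    CREATE TABLE site_b (obid INTEGER, wbsid TEXT);
    INSERT INTO site_a VALUES (1, 'PR-1'), (2, 'PR-2');
    INSERT INTO site_b VALUES (2, 'PR-2'), (3, 'PR-3');
""")

# UNION de-duplicates (an extra distinct step); UNION ALL just concatenates.
union_rows = conn.execute(
    "SELECT obid, wbsid FROM site_a UNION SELECT obid, wbsid FROM site_b"
).fetchall()
union_all_rows = conn.execute(
    "SELECT obid, wbsid FROM site_a UNION ALL SELECT obid, wbsid FROM site_b"
).fetchall()

print(len(union_rows))      # 3 distinct rows
print(len(union_all_rows))  # 4 rows; duplicate (2, 'PR-2') retained
```

If the obid values are unique per site, UNION ALL returns the same rows with one less sort over every remote site's data.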
Select query for 6 different tables with vbeln as the same selection criteria
Hi,
I have a query.
I am using 6 different tables with vbeln being the same primary key, on the basis of which I have to match the data.
I have assigned vbeln a different name, but in the select query it gives me the error that vbeln2 is not a valid field.
Can anyone please suggest how I can use a different field name and still read the data from the table?
Hi,
Use alias names for the fields/tables in the select query; that will solve the problem.
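To illustrate the aliasing idea (not from the original reply, and using sqlite3 with hypothetical header/item tables rather than the poster's six tables): when the same column name appears in several joined tables, an AS alias gives each occurrence a distinct, selectable name.

```python
# Sketch: aliasing a column that exists in more than one joined table.
# vbak/vbap and the sample values are illustrative stand-ins.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE vbak (vbeln TEXT, erdat TEXT);
    CREATE TABLE vbap (vbeln TEXT, posnr TEXT);
    INSERT INTO vbak VALUES ('0001', '20240101');
    INSERT INTO vbap VALUES ('0001', '000010');
""")

# Without aliases the two vbeln columns collide; AS disambiguates them.
row = conn.execute("""
    SELECT h.vbeln AS vbeln_header,
           i.vbeln AS vbeln_item,
           i.posnr
    FROM vbak AS h
    JOIN vbap AS i ON i.vbeln = h.vbeln
""").fetchone()
print(row)  # ('0001', '0001', '000010')
```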
Regards,
Praveen Savanth.N -
How to select data from one nested table and into another nested table
create or replace TYPE ctxt_code_rec as object (
ctxt_header varchar2(10),
header_description varchar2(300),
status varchar2(30),
adjacent_code varchar2(300),
adjacent_desc varchar2(400),
adjacent_flag varchar2(4000),
adjacent_text_href varchar2(4000)
);
create or replace type ctxt_code_table as table of ctxt_code_rec;
d_table ctxt_code_table ;
v_tab ctxt_code_table ;
I am trying to select data from d_table into v_tab
using a BULK COLLECT INTO:
select m.*
bulk collect into p_code_result
from table(l_loop_diag_code_table1)m
order by 1;
Receiving error:
ORA-00947: not enough values
Could you please let me know how to solve it?
Thanks,
in advance
>
ORA-00947: not enough values
Could you please let me know how to solve it?
>
Not unless you provide the code you are actually using.
There is no definition of 'p_code_result' in your post, and you say you are 'trying to select data from d_table', but there is no code that loads 'd_table' in what you posted.
And the SELECT query you posted actually selects from an object named 'l_loop_diag_code_table1', which isn't mentioned in your code.
Post the actual code you are using and all of the structures being used.
Also explain why you even need to use nested tables and PL/SQL for whatever it is you are really doing.