Simple query takes 18 minutes to retrieve data....
Hi,
I am facing this problem at the customer site where a simple query on a table takes 18 minutes or more. Please find below the details.
Table Structure
CREATE TABLE dsp_data (
quantum_id NUMBER(11) NOT NULL,
src NUMBER(11) NOT NULL,
call_status NUMBER(11) NOT NULL,
dst NUMBER(11) NOT NULL,
measurement_id NUMBER(11) NOT NULL,
is_originating NUMBER(1) NOT NULL,
measurement_value NUMBER(15,4) NOT NULL,
data_type_id NUMBER(3) NOT NULL,
data VARCHAR2(200) NOT NULL
)
TABLESPACE dsp_data_tspace
STORAGE (PCTINCREASE 0 INITIAL 100K NEXT 1024K)
PARTITION BY RANGE (quantum_id)
(PARTITION dsp_data_default VALUES LESS THAN (100));
CREATE INDEX dsp_data_idx ON dsp_data (
quantum_id,
src,
call_status,
dst,
measurement_id,
is_originating,
measurement_value,
data_type_id
)
TABLESPACE dsp_data_idx_tspace
LOCAL;
CREATE INDEX dsp_data_src_idx ON dsp_data (src)
TABLESPACE dsp_data_idx_tspace
LOCAL;
CREATE INDEX dsp_data_dst_idx ON dsp_data (dst)
TABLESPACE dsp_data_idx_tspace
LOCAL;
ALTER TABLE dsp_data
ADD CONSTRAINT fk_dsp_data_1
FOREIGN KEY (quantum_id)
REFERENCES mds_measurement_intervals (quantum_id);
ALTER TABLE dsp_data
ADD CONSTRAINT fk_dsp_data_2
FOREIGN KEY (data_type_id)
REFERENCES mds_drilldown_types (type_id);
ALTER TABLE dsp_data
ADD CONSTRAINT pk_dsp_data
PRIMARY KEY (
quantum_id,
src,
call_status,
dst,
measurement_id,
is_originating,
measurement_value,
data_type_id,
data
)
USING INDEX
TABLESPACE dsp_data_idx_tspace
LOCAL;
Table Space Creation
All tablespaces are created using the following command:
CREATE TABLESPACE [tablespaceName]
DATAFILE [tablespaceDatafile] SIZE 500M REUSE
AUTOEXTEND ON NEXT 10240K
DEFAULT STORAGE ( INITIAL 1024K
NEXT 1024K
MINEXTENTS 10
MAXEXTENTS UNLIMITED
PCTINCREASE 0 );
Server Configuration on Customer Site
(1) 2 x Dual PA8900 Proc = 4GHz
(2) RAM = 16GB
(3) 3 x Internal HDDs
(4) 1 x External MSA-30 storage array (oracle db)
Record Information On Customer Site
select count(*) from dsp_data;
COUNT(*)
181931197
select min (quantum_id) from dsp_data where dst=2;
This takes 18 minutes or more....
SQL> explain plan for select min (quantum_id) from dsp_data where dst=2;
Explained.
SQL> @?/rdbms/admin/utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 999040277
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 14 | 1 (0)| 00:00:01 | | |
| 1 | SORT AGGREGATE | | 1 | 14 | | | | |
| 2 | FIRST ROW | | 92 | 1288 | 1 (0)| 00:00:01 | | |
| 3 | PARTITION RANGE ALL | | 92 | 1288 | 1 (0)| 00:00:01 | 1 | 29 |
|* 4 | INDEX FULL SCAN (MIN/MAX)| DSP_DATA_IDX | 92 | 1288 | 1 (0)| 00:00:01 | 1 | 29 |
As mentioned above, the query takes 18 minutes or more. This is a critical issue at the customer site. Can you please suggest how to improve the situation and reduce the query time? Thanks in advance.
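Before changing anything, it would help to see where the 18 minutes actually go: the plan above only shows optimizer estimates, and a cost of 1 for a MIN/MAX scan probed across 29 partitions clearly does not match an 18-minute runtime. A sketch of one way to capture actual runtime statistics, assuming the database is 10g or later so dbms_xplan.display_cursor is available:

```sql
-- Run the statement once with runtime statistics enabled
SELECT /*+ gather_plan_statistics */ MIN(quantum_id)
FROM dsp_data
WHERE dst = 2;

-- Then show actual rows, buffer gets and elapsed time per plan step
SELECT *
FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```

Comparing the A-Rows/A-Time columns against the estimates would show which step (most likely the per-partition MIN/MAX probes) is burning the time.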
Hi,
I made the following changes to the indexes on the table.
drop index DSP_DATA_IDX;
create index DSP_DATA_MEASUREMENT_ID_IDX on DSP_DATA (MEASUREMENT_ID) TABLESPACE dsp_data_idx_tspace LOCAL;
After that I ran an explain plan:
explain plan for select min(QUANTUM_ID) from mds.DSP_DATA where SRC=11;
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU
| 0 | SELECT STATEMENT | | 1 | 11 | 3 (0
| 1 | SORT AGGREGATE | | 1 | 11 |
| 2 | FIRST ROW | | 430K| 4626K| 3 (0
| 3 | PARTITION RANGE ALL | | 430K| 4626K| 3 (0
| 4 | INDEX FULL SCAN (MIN/MAX)| PK_DSP_DATA | 430K| 4626K| 3 (0
Note
- 'PLAN_TABLE' is old version
14 rows selected
SELECT table_name, index_name, monitoring, used FROM v$object_usage;
TABLE_NAME INDEX_NAME MONITORING USED
DSP_DATA DSP_DATA_SRC_IDX YES NO
It seems that DSP_DATA_SRC_IDX is not being used by the query. What changes do I need to make so that the DSP_DATA_SRC_IDX index gets used?
Also, you suggested creating global indexes on src and dst. How do I create them?
Thanks in Advance.
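For reference, a global index on a partitioned table is simply one created without the LOCAL keyword. A sketch of what such indexes could look like here (the index names are made up); adding quantum_id as the second column means MIN(quantum_id) for a given src or dst can be answered from the index alone, with a single probe instead of one probe per partition:

```sql
-- Global (non-partitioned) indexes: one index probe instead of
-- 29 partition probes when the predicate has no quantum_id filter
CREATE INDEX dsp_data_src_gidx ON dsp_data (src, quantum_id)
  TABLESPACE dsp_data_idx_tspace;

CREATE INDEX dsp_data_dst_gidx ON dsp_data (dst, quantum_id)
  TABLESPACE dsp_data_idx_tspace;
```

Note that, unlike local indexes, global indexes are marked UNUSABLE by partition maintenance operations such as DROP PARTITION unless UPDATE GLOBAL INDEXES is specified, so that trade-off should be weighed first.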
Edited by: 780707 on Jul 8, 2010 11:58 PM
Similar Messages
-
How can I take minutes from mysql date format
how can I take minutes from mysql date format??
example: 10:30:00 is stored in MySQL and I want to create 3 variables which will store hours, minutes and seconds.
Cheers.
"use application date format" is the choice you want.
Denes Kubicek
http://deneskubicek.blogspot.com/
http://www.opal-consulting.de/training
http://apex.oracle.com/pls/otn/f?p=31517:1
http://www.amazon.de/Oracle-APEX-XE-Praxis/dp/3826655494
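For the MySQL side of the question, the three parts of a stored time can be pulled out directly with the built-in time functions; this is a sketch in which the table and column names are placeholders:

```sql
-- Split a TIME value such as '10:30:00' into its three parts
SELECT HOUR(t.time_col)   AS hrs,   -- e.g. 10
       MINUTE(t.time_col) AS mins,  -- e.g. 30
       SECOND(t.time_col) AS secs   -- e.g. 0
FROM my_table t;
```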
-
This simple query takes 2 hrs. How to improve it??
This is a simple query. It takes 2 hours to run this query. Tables have over 100,000 rows.
SELECT
TO_CHAR(BC_T_ARRIVALS.ARR_FLIGHT_DATE,'DD/MM/YYYY') ARR_FLIGHT_DATE
FROM
BC_T_ARRIVALS a, BC_M_FLIGHTS f
WHERE
a.ARR_FLT_SEQ_NO = f.FLT_SEQ_NO AND
f.FLT_LOC_CODE = PK_BC_R_LOCATIONS.FN_SEL_LOC_CODE('BANDARANAYAKE INTERNATIONAL AIRPORT') AND TO_CHAR(a.ARR_FLIGHT_DATE,'YYYY/MM/DD') >= TO_CHAR(:P_FROM_DATE,'YYYY/MM/DD')
AND TO_CHAR(a.ARR_FLIGHT_DATE,'YYYY/MM/DD') <= TO_CHAR(:P_TO_DATE,'YYYY/MM/DD')
UNION
SELECT
TO_CHAR(BC_T_DEPARTURES.DEP_FLIGHT_DATE,'DD/MM/YYYY') DEP_FLIGHT_DATE
FROM
BC_T_DEPARTURES d, BC_M_FLIGHTS f
WHERE
d.DEP_FLT_SEQ_NO = BC_M_FLIGHTS.FLT_SEQ_NO AND
f.FLT_LOC_CODE = PK_BC_R_LOCATIONS.FN_SEL_LOC_CODE('BANDARANAYAKE INTERNATIONAL AIRPORT') AND TO_CHAR(d.DEP_FLIGHT_DATE,'YYYY/MM/DD') >= TO_CHAR(:P_FROM_DATE,'YYYY/MM/DD')
AND TO_CHAR(d.DEP_FLIGHT_DATE,'YYYY/MM/DD') <= TO_CHAR(:P_TO_DATE,'YYYY/MM/DD')
As I see it, this query will not make the DB engine use any indexes since expressions are used in the 'WHERE' clause. Am I correct?
How can we improve the performance of this query?
Maybe (do you really need to convert dates to chars? That might prevent index use ...)
select f.BC_M_FLIGHTS,
TO_CHAR(BC_T_DEPARTURES.DEP_FLIGHT_DATE,'DD/MM/YYYY') DEP_FLIGHT_DATE,
TO_CHAR(BC_T_ARRIVALS.ARR_FLIGHT_DATE,'DD/MM/YYYY') ARR_FLIGHT_DATE
from (select BC_M_FLIGHTS,
FLT_LOC_CODE
from BC_M_FLIGHTS
where FLT_LOC_CODE = PK_BC_R_LOCATIONS.FN_SEL_LOC_CODE('BANDARANAYAKE INTERNATIONAL AIRPORT')
) f,
BC_T_ARRIVALS a,
BC_T_DEPARTURES d
where f.BC_M_FLIGHTS = a.ARR_FLT_SEQ_NO
and f.BC_M_FLIGHTS = d.DEP_FLT_SEQ_NO
and (TO_CHAR(a.ARR_FLIGHT_DATE,'YYYY/MM/DD') between TO_CHAR(:P_FROM_DATE,'YYYY/MM/DD') and TO_CHAR(:P_TO_DATE,'YYYY/MM/DD')
or TO_CHAR(d.DEP_FLIGHT_DATE,'YYYY/MM/DD') between TO_CHAR(:P_FROM_DATE,'YYYY/MM/DD') and TO_CHAR(:P_TO_DATE,'YYYY/MM/DD')
)
Regards
Etbin
Edited by: Etbin on 2.3.2012 18:44
select column list altered -
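Etbin's point about the TO_CHAR calls can be taken further: comparing the DATE columns directly keeps them indexable, and string comparison of 'YYYY/MM/DD' text only happens to sort correctly by accident. A sketch of the predicate rewrite, using the bind names from the post:

```sql
-- Compare dates as dates; an index on ARR_FLIGHT_DATE stays usable
WHERE a.ARR_FLIGHT_DATE >= TRUNC(:P_FROM_DATE)
  AND a.ARR_FLIGHT_DATE <  TRUNC(:P_TO_DATE) + 1
```

The half-open upper bound also keeps rows with a time-of-day component on the last day, which the TO_CHAR comparison would have handled differently.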
Simple query takes time to run
Hi,
I have a simple query which takes about 20 mins to run. Here is the TKPROF for it:
SELECT
SY2.QBAC0,
sum(decode(SALES_ORDER.SDCRCD,'USD', SALES_ORDER.SDAEXP,'CAD', SALES_ORDER.SDAEXP /1.0452))
FROM
JDE.F5542SY2 SY2,
JDE.F42119 SALES_ORDER,
JDE.F0116 SHIP_TO,
JDE.F5542SY1 SY1,
JDE.F4101 PRODUCT_INFO
WHERE
( SHIP_TO.ALAN8=SALES_ORDER.SDSHAN )
AND ( SY1.QANRAC=SY2.QBNRAC and SY1.QAOTCD=SY2.QBOTCD )
AND ( PRODUCT_INFO.IMITM=SALES_ORDER.SDITM )
AND ( SY2.QBSHAN=SALES_ORDER.SDSHAN )
AND ( SALES_ORDER.SDLNTY NOT IN ('H ','HC','I ') )
AND ( PRODUCT_INFO.IMSRP1 Not In (' ','000','689') )
AND ( SALES_ORDER.SDDCTO IN ('CO','CR','SA','SF','SG','SP','SM','SO','SL','SR') )
AND (
( SY1.QACTR=SHIP_TO.ALCTR )
AND ( PRODUCT_INFO.IMSRP1=SY1.QASRP1 )
GROUP BY
SY2.QBAC0
call count cpu elapsed disk query current rows
Parse 1 0.07 0.07 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 10 92.40 929.16 798689 838484 0 131
total 12 92.48 929.24 798689 838484 0 131
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 62
Rows Row Source Operation
131 SORT GROUP BY
3535506 HASH JOIN
4026100 HASH JOIN
922 TABLE ACCESS FULL OBJ#(187309)
3454198 HASH JOIN
80065 INDEX FAST FULL SCAN OBJ#(30492) (object id 30492)
3489670 HASH JOIN
65192 INDEX FAST FULL SCAN OBJ#(30457) (object id 30457)
3489936 PARTITION RANGE ALL PARTITION: 1 9
3489936 TABLE ACCESS FULL OBJ#(30530) PARTITION: 1 9
97152 TABLE ACCESS FULL OBJ#(187308)
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.07 0.07 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 10 92.40 929.16 798689 838484 0 131
total 13 92.48 929.24 798689 838484 0 131
Misses in library cache during parse: 1
Kindly suggest how to resolve this...
The OS is Windows and it's a 9i DB...
Thanks
> ... you want to get rid of the IN statements. They prevent Oracle from using the index.
SQL> create table mytable (id,num,description)
2 as
3 select level
4 , case level
5 when 0 then 0
6 when 1 then 1
7 else 2
8 end
9 , 'description ' || to_char(level)
10 from dual
11 connect by level <= 10000
12 /
Table created.
SQL> create index i1 on mytable(num)
2 /
Index created.
SQL> exec dbms_stats.gather_table_stats(user,'mytable')
PL/SQL procedure successfully completed.
SQL> set autotrace on explain
SQL> select id
2 , num
3 , description
4 from mytable
5 where num in (0,1)
6 /
ID NUM DESCRIPTION
1 1 description 1
1 row selected.
Execution Plan
Plan hash value: 2172953059
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5001 | 112K| 2 (0)| 00:00:01 |
| 1 | INLIST ITERATOR | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| MYTABLE | 5001 | 112K| 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | I1 | 5001 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access("NUM"=0 OR "NUM"=1)
Regards,
Rob. -
Can I retrieve data from a multiple select query?
I recently have been able to consolidate many queries into one large one, or into large compound queries. I found an instance where I am using a select query in a for loop that may execute up to 400 times. I tried building a huge compound query, and using DB Tools Execute Query vi, followed by the DB Tools Fetch Recordset Data vi. The query executes at the database without an error, but the Fetch Recordset Data vi only returns the first instance. Is there a way to retrieve all the data without having to send up to 400 separate select queries?
Sorry I didn't reply earlier, I was on vacation. The query I am using is to check serial numbers and determine if they are all valid. The program's purpose is to assign a serial number to a pre-existing part number. Our company makes inclinometers and accelerometers, and this entire series of LabVIEW programs is designed to automate the calibration and testing of these units. The part number definitions can contain 3 or 4 hundred parameters, so the database itself consists of 44 tables with potentially several hundred columns per table. It is designed not only to provide definitions for every part number, but also to store all potential raw unit data to be calculated and formed into a report at any time. The logistics of getting that much data in and out of the database have forced me to do things more efficiently.
The actual query in question takes each serial number, either manually entered or automatically picked, and sees if it already exists with the part number being defined. If there are any duplicates, the program alerts the operator that serial numbers x, y, and z, for instance, have already been assigned to the part number in question. Currently I run a simple query once for each serial number. This works, but there may be 200 serial numbers assigned. Also, the serial numbers can contain upper- or lower-case letters. By making all the serial number letters into capitals, then into lower case, it could mean up to 400 individual queries going out over the LAN. This is a bandwidth hog, and time consuming. I started experimenting with compound queries. The actual query used is below.
SELECT SERIALNO FROM "maintable" WHERE PARTNO = '475196-001' AND SERIALNO = '3000005';SELECT SERIALNO FROM "maintable" WHERE PARTNO = '475196-001' AND SERIALNO = '3000006';SELECT SERIALNO FROM "maintable" WHERE PARTNO = '475196-001' AND SERIALNO = '3000007';SELECT SERIALNO FROM "maintable" WHERE PARTNO = '475196-001' AND SERIALNO = '3000008';SELECT SERIALNO FROM "maintable" WHERE PARTNO = '475196-001' AND SERIALNO = '3000009'
When I execute this query, SQL Server 2000 has no problem with it, but the DB Tools Fetch Recordset Data vi only returns the first match. I think my answer may lie with OR statements. Rather than sending what amounts to potentially dozens of individual queries, I should be able to chain them into one query with a lot of OR statements. As long as the OR is not an exclusive OR, I think it should work. I haven't tried it yet, and it may take some time to get the syntax right. The query is built in a for loop with the number of iterations equal to the number of serial numbers being defined. Once I get this working, I will alter it to include both the upper- and lower-case letter forms in the query. Any suggestions on how the query should be structured would be most helpful, or another way to achieve what I am trying to accomplish.
SciManStev -
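The single-round-trip idea described above could be sketched with an IN list rather than chained ORs; the serial number values are taken from the post, and UPPER() handles the case-insensitivity instead of doubling the value list. (Caveat: applying a function to SERIALNO can prevent an ordinary index on that column from being used unless a function-based index exists.)

```sql
-- One query instead of up to 400 round trips over the LAN
SELECT SERIALNO
FROM "maintable"
WHERE PARTNO = '475196-001'
  AND UPPER(SERIALNO) IN ('3000005', '3000006', '3000007',
                          '3000008', '3000009');
```

The result set then contains every duplicate serial number in one recordset, which the Fetch Recordset Data vi can return in a single pass.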
Query Takes Longer time as the Data Increases.
Hi ,
We have the below query, which takes around 4 to 5 minutes to retrieve data, and it appears to get slower as the data grows.
DB Version=10.2.0.4
OS=Solaris 10
tst_trd_owner@MIFEX3> explain plan for select * from TIBEX_OrderBook as of scn 7785234991 where meid='ME4';
Explained.
tst_trd_owner@MIFEX3> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));
PLAN_TABLE_OUTPUT
Plan hash value: 3096779986
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 303 | | 609K (1)| 01:46:38 |
|* 1 | HASH JOIN SEMI | | 1 | 303 | 135M| 609K (1)| 01:46:38 |
|* 2 | HASH JOIN | | 506K| 129M| | 443K (1)| 01:17:30 |
| 3 | TABLE ACCESS BY INDEX ROWID| TIBEX_ORDERSTATUSENUM | 1 | 14 | | 2 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | TIBEX_ORDERSTAT_ID_DESC | 1 | | | 1 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | TIBEX_ORDER | 3039K| 736M| | 443K (1)| 01:17:30 |
| 6 | VIEW | VW_NSO_1 | 7931K| 264M| | 159K (1)| 00:27:53 |
| 7 | HASH GROUP BY | | 7931K| 378M| 911M| 159K (1)| 00:27:53 |
|* 8 | HASH JOIN RIGHT ANTI | | 7931K| 378M| | 77299 (1)| 00:13:32 |
|* 9 | VIEW | index$_join$_004 | 2 | 28 | | 2 (50)| 00:00:01 |
|* 10 | HASH JOIN | | | | | | |
| 11 | INLIST ITERATOR | | | | | | |
|* 12 | INDEX RANGE SCAN | TIBEX_ORDERSTAT_ID_DESC | 2 | 28 | | 2 (0)| 00:00:01 |
| 13 | INDEX FAST FULL SCAN | XPKTIBEX_ORDERSTATUSENUM | 2 | 28 | | 1 (0)| 00:00:01 |
| 14 | INDEX FAST FULL SCAN | IX_ORDERBOOK | 11M| 408M| | 77245 (1)| 00:13:31 |
Predicate Information (identified by operation id):
1 - access("A"."MESSAGESEQUENCE"="$nso_col_1" AND "A"."ORDERID"="$nso_col_2")
2 - access("A"."ORDERSTATUS"="ORDERSTATUS")
4 - access("SHORTDESC"='ORD_OPEN')
5 - filter("MEID"='ME4')
8 - access("ORDERSTATUS"="ORDERSTATUS")
9 - filter("SHORTDESC"='ORD_NOTFND' OR "SHORTDESC"='ORD_REJECT')
10 - access(ROWID=ROWID)
12 - access("SHORTDESC"='ORD_NOTFND' OR "SHORTDESC"='ORD_REJECT')
33 rows selected.
The View Query TIBEX_OrderBook.
SELECT ORDERID, USERORDERID, ORDERSIDE, ORDERTYPE, ORDERSTATUS,
BOARDID, TIMEINFORCE, INSTRUMENTID, REFERENCEID,
PRICETYPE, PRICE, AVERAGEPRICE, QUANTITY, MINIMUMFILL,
DISCLOSEDQTY, REMAINQTY, AON, PARTICIPANTID, ACCOUNTTYPE,
ACCOUNTNO, CLEARINGAGENCY, 'OK' AS LASTINSTRESULT,
LASTINSTMESSAGESEQUENCE, LASTEXECUTIONID, NOTE, TIMESTAMP,
QTYFILLED, MEID, LASTINSTREJECTCODE, LASTEXECPRICE, LASTEXECQTY,
LASTINSTTYPE, LASTEXECUTIONCOUNTERPARTY, VISIBLEQTY,
STOPPRICE, LASTEXECCLEARINGAGENCY, LASTEXECACCOUNTNO,
LASTEXECCPCLEARINGAGENCY, MESSAGESEQUENCE, LASTINSTUSERALIAS,
BOOKTIMESTAMP, ParticipantIDMM, MarketState, PartnerExId,
LastExecSettlementCycle, LastExecPostTradeVenueType,
PriceLevelPosition, PrevReferenceID, EXPIRYTIMESTAMP, matchType,
lastExecutionRole, a.MDEntryID, a.PegOffset, a.haltReason,
a.LastInstFixSequence, A.COMPARISONPRICE, A.ENTEREDPRICETYPE
FROM tibex_Order A
WHERE (A.MessageSequence, A.OrderID) IN (
SELECT max(B.MessageSequence), B.OrderID
FROM tibex_Order B
WHERE orderStatus NOT IN (
SELECT orderStatus
FROM tibex_orderStatusEnum
WHERE ShortDesc in ('ORD_REJECT', 'ORD_NOTFND')
)
GROUP By B.OrderID
)
AND A.OrderStatus IN (
SELECT OrderStatus
FROM tibex_orderStatusEnum
WHERE ShortDesc IN ('ORD_OPEN')
)
/
Any helpful suggestions.
Regards
NM
Hi Centinul,
I tried your modified version of the query on the test machine. It used quite a lot of temp space, around 9 GB, and finally ran out of disk space.
On the test machine I generated stats and executed the queries, but in production our stats will always be stale. The reason is:
In the morning we have 3,000 records in Tibex_Order, and as the day progresses the data grows to about 20 million records by the end of the day. We then gather stats and truncate the transaction tables (Tibex_Order = 20 million records), so the next day our stats are stale, and if a user runs any query it takes ages to retrieve. An example is the one below.
tst_trd_owner@MIFEX3>
tst_trd_owner@MIFEX3> CREATE OR REPLACE VIEW TIBEX_ORDERBOOK_TEMP
2 (ORDERID, USERORDERID, ORDERSIDE, ORDERTYPE, ORDERSTATUS,
3 BOARDID, TIMEINFORCE, INSTRUMENTID, REFERENCEID, PRICETYPE,
4 PRICE, AVERAGEPRICE, QUANTITY, MINIMUMFILL, DISCLOSEDQTY,
5 REMAINQTY, AON, PARTICIPANTID, ACCOUNTTYPE, ACCOUNTNO,
6 CLEARINGAGENCY, LASTINSTRESULT, LASTINSTMESSAGESEQUENCE, LASTEXECUTIONID, NOTE,
7 TIMESTAMP, QTYFILLED, MEID, LASTINSTREJECTCODE, LASTEXECPRICE,
8 LASTEXECQTY, LASTINSTTYPE, LASTEXECUTIONCOUNTERPARTY, VISIBLEQTY, STOPPRICE,
9 LASTEXECCLEARINGAGENCY, LASTEXECACCOUNTNO, LASTEXECCPCLEARINGAGENCY, MESSAGESEQUENCE, LASTINSTUSERALIAS,
10 BOOKTIMESTAMP, PARTICIPANTIDMM, MARKETSTATE, PARTNEREXID, LASTEXECSETTLEMENTCYCLE,
11 LASTEXECPOSTTRADEVENUETYPE, PRICELEVELPOSITION, PREVREFERENCEID, EXPIRYTIMESTAMP, MATCHTYPE,
12 LASTEXECUTIONROLE, MDENTRYID, PEGOFFSET, HALTREASON, LASTINSTFIXSEQUENCE,
13 COMPARISONPRICE, ENTEREDPRICETYPE)
14 AS
15 SELECT orderid
16 , MAX(userorderid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
17 , MAX(orderside) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
18 , MAX(ordertype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
19 , MAX(orderstatus) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
20 , MAX(boardid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
21 , MAX(timeinforce) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
22 , MAX(instrumentid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
23 , MAX(referenceid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
24 , MAX(pricetype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
25 , MAX(price) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
26 , MAX(averageprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
27 , MAX(quantity) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
28 , MAX(minimumfill) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
29 , MAX(disclosedqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
30 , MAX(remainqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
31 , MAX(aon) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
32 , MAX(participantid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
33 , MAX(accounttype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
34 , MAX(accountno) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
35 , MAX(clearingagency) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
36 , 'ok' as lastinstresult
37 , MAX(lastinstmessagesequence) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
38 , MAX(lastexecutionid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
39 , MAX(note) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
40 , MAX(timestamp) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
41 , MAX(qtyfilled) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
42 , MAX(meid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
43 , MAX(lastinstrejectcode) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
44 , MAX(lastexecprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
45 , MAX(lastexecqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
46 , MAX(lastinsttype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
47 , MAX(lastexecutioncounterparty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
48 , MAX(visibleqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
49 , MAX(stopprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
50 , MAX(lastexecclearingagency) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
51 , MAX(lastexecaccountno) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
52 , MAX(lastexeccpclearingagency) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
53 , MAX(messagesequence) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
54 , MAX(lastinstuseralias) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
55 , MAX(booktimestamp) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
56 , MAX(participantidmm) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
57 , MAX(marketstate) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
58 , MAX(partnerexid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
59 , MAX(lastexecsettlementcycle) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
60 , MAX(lastexecposttradevenuetype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
61 , MAX(pricelevelposition) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
62 , MAX(prevreferenceid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
63 , MAX(expirytimestamp) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
64 , MAX(matchtype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
65 , MAX(lastexecutionrole) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
66 , MAX(mdentryid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
67 , MAX(pegoffset) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
68 , MAX(haltreason) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
69 , MAX(lastinstfixsequence) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
70 , MAX(comparisonprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
71 , MAX(enteredpricetype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
72 FROM tibex_order
73 WHERE orderstatus IN (
74 SELECT orderstatus
75 FROM tibex_orderstatusenum
76 WHERE shortdesc IN ('ORD_OPEN')
77 )
78 GROUP BY orderid
79 /
View created.
tst_trd_owner@MIFEX3> SELECT /*+ gather_plan_statistics */ * FROM TIBEX_OrderBook_TEMP as of scn 7785234991 where meid='ME4';
SELECT /*+ gather_plan_statistics */ * FROM TIBEX_OrderBook_TEMP as of scn 7785234991 where meid='ME4'
ERROR at line 1:
ORA-01114: IO error writing block to file %s (block # %s)
ERROR:
ORA-03114: not connected to ORACLE
Any suggestion will be helpful.
Regards
NM -
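As an aside on the view definition above: the MAX(...) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC) pattern repeats the same window for every one of roughly 50 columns. The same "latest row per orderid" idea can be sketched more compactly with ROW_NUMBER(); the column list here is abbreviated, and the placement of the status filter differs slightly from the original view, so the semantics would need checking before use:

```sql
-- One pass over tibex_order; rn = 1 keeps the newest row per orderid
SELECT orderid, orderstatus, meid, messagesequence  -- etc.
FROM (
  SELECT o.*,
         ROW_NUMBER() OVER (PARTITION BY o.orderid
                            ORDER BY o.messagesequence DESC) rn
  FROM tibex_order o
)
WHERE rn = 1
  AND orderstatus IN (SELECT orderstatus
                      FROM tibex_orderstatusenum
                      WHERE shortdesc = 'ORD_OPEN');
```

This avoids the large HASH GROUP BY over all columns, though on a 20-million-row table it still needs a full scan unless a supporting index exists.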
Query Report:To Retrieve Data from A/R Invoice and A/P Invoice
Hi Experts,
I am a new SAP B1 trainee. I am facing a problem when retrieving data from A/R Invoice and A/P Invoice in order to track expenditure and revenue by business partner.
I am using a union to retrieve the information, but it fails with the error "Error converting varchar to numeric". I would also like to know how I can show the final total payment by reflecting down payments in the A/R Invoice and A/P Invoice.
With Regards
Cherry.
Hi.
My SAP B1 version is 8.8.1 and the patch level is 20. Actually I need a scenario where I can show both the expenditure and revenue of a particular business partner, and the profit/loss, in a single query report.
I need some tips regarding this. When I do the union, I get the error "converting varchar to numeric" when I take Sum(Line Total) from OINV and Sum(Line Total) from OPCH grouped by DocDate, DocEntry and BP.
Another scenario is how to deduct A/P down payments or A/R down payments from A/P invoices and A/R invoices to get the final revenue and expenditure.
Thanks & Regards
Cherry -
Retrieving data from a BW Query runs endlessly in Crystal
Hi,
I have installed Crystal Reports 2008 SP2 with the SAP BO Integration Kit 3.1. When I create a new file from a SAP BW 7 query, everything works fine (field selection etc.) until I run it and try to extract data from the database. I can enter the parameters and the login data for the BW connection. After that I see the entry "retrieving data from the database" in the status bar, and that's it. The screen freezes and I don't get any result; I can wait for 2 hrs with no result. In the end I cannot even get back to the screen and have to close it through Task Manager.
I have tested the query with transaction RSRT using the same parameter values I chose in Crystal Reports, and there I get a result. Therefore the entered values should be fine.
Did I install anything wrong? Did somebody else encounter this problem as well?
Thanks a lot in advance for any help provided
Kind Regards,
Melanie
Hi,
I entered the SQL statement in transaction MDXTEST, but it has been running for 7 minutes.
>> Transaction MDXTEST has an option to generate a test sequence and is not there to test SQL statements. You need to enter a valid MDX statement.
Function module "/Crystal/MDX_GET_STREAM_INFO" not found
>> then I would suggest you make sure that all the Transports from the SAP Integration Kit have been properly imported - see the Installation Guide for the SAP Integration Kit for details.
Ingo -
Retrieve data partially without re-execution of query
I am using Visual Basic.NET for programming.
Database is Oracle 9i Enterprise Edition Release 9.0.1.1.1
Platform is Windows XP Professional.
I am going to execute an Oracle Query (or Stored Procedure) from my VB.NET environment.
This query returns 1,000,000 records and takes lots of time to be executed.
I found a way in .NET to fill DataSet with specific number of records. For Example I can show 100 records from position 10 to position 110 by this code:
MyDataAdapter.Fill (MyDataset, 10, 100,"TEST")
But my problem happens when I want to show 100 records from position 900,000 to position 900,100 by this code:
MyDataAdapter.Fill (MyDataset, 900000, 100,"TEST")
This line takes lots of times to be executed.
I think each time I run the above line in VB.NET, the query executes once again. And this is not what I expect.
Besides I used Microsoft.NET Oracle Client too, but still have problem.
Would you please kindly help me to find a way to retrieve data partially without re-execution of query?
Thanks for co-operation in advance.
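One way to avoid re-executing the full million-row statement for each window is to page inside the database instead of in the DataAdapter, so each call only reads its own slice. A 9i-compatible ROWNUM sketch, in which the table, column, and bind names are placeholders:

```sql
-- Returns only rows first_row+1 .. last_row of the ordered set;
-- each page is a separate, cheap query rather than a client-side
-- skip over 900,000 rows
SELECT *
FROM (SELECT t.*, ROWNUM rn
      FROM (SELECT * FROM my_table ORDER BY id) t
      WHERE ROWNUM <= :last_row)
WHERE rn > :first_row;
```

From VB.NET, each Fill call would then pass the page bounds as parameters and fetch the whole (small) result, instead of asking the adapter to skip 900,000 rows of a re-run query.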
Error while trying to retrieve data from BW BEx query
The following error occurs while trying to retrieve data from a BW BEx query (on an ODS) when more than 50 characteristics are used.
In a BEx report there is a limitation, but is it also a limitation in a WebI report?
Is there any other solution for this scenario, so that it is possible to retrieve more than 50 characteristics?
A database error occured. The database error text is: The MDX query SELECT { [Measures].[3OD1RJNV2ZXI7XOC4CY9VXLZI], [Measures].[3P71KBWTVNGY9JTZP9FTP6RZ4], [Measures].[3OEAEUW2WTYJRE2TOD6IOFJF4] } ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [ZHOST_ID2].[LEVEL01].MEMBERS, [ZHOST_ID3].[LEVEL01].MEMBERS ), [ZHOST_ID1].[LEVEL01].MEMBERS ), [ZREVENDDT__0CALDAY].[LEVEL01].MEMBERS ) ........................................................ failed to execute with the error Invalid MDX command with UNSUPPORTED: > 50 CHARACT.. (WIS 10901)
Hi,
That warning / error message will be coming from the MDX interface on the BW server. It does not originate from BOBJ.
This question would be better asked to support component BW-BEX-OT-MDX
Similar discussion can be found using search: Limitation of Number of Objects used in Webi with SAP BW Universe as Source
Regards,
Henry -
Web Analysis Error -- Error while executing query and retrieving data
Regarding Web Analysis:
We have a number of reports that we created. yesterday they were all working fine. today we try to open them and most are generating an error.
The error is:
Error while executing query and retrieving data.; nested exception is:
com.hyperion.ap.APException: [1033] Native:
1013033[Thu Oct 22 09:08:17
2009]server name/application name/database name/user name/Error91013033)
ReportWriter exit abnormally
Does anyone have any insight into what is going on?
My version information is:
Hyperion System 9 BI+ Analytic Administration Services 9.3.0.1.1 Build 2
Web Analysis 9.3.0.0.0.286
Thanks in advance for your help.
DaveW
Hi,
Also, click on the Check option after opening the query in Query Designer, as Mr. Arun suggested.
If you get any error during the check, see the long message (detail).
With rgds,
Anil Kumar Sharma .P -
When I enable iTunes Match on my iPhone 4S, it takes approximately 30 minutes before "Other" data fills the 13.2 GB of usable storage on the phone. This problem does not occur when I manually sync music to my phone, only when I use iTunes Match. Is this a common bug, and if so, is there a fix?
Yes it is. You can sign out of your iTunes account, then sign back in. Use http://support.apple.com/kb/ht1311 to sign out.
-
Query to retrieve date depending on specific date
Hi all,
I have a question regarding dates.
I have a table with the following data:
with table1 as (
select 123 id, 'text' txt, to_date('9/16/2010','mm/dd/yyyy') date1 from dual union all
select 111 id, 'text2' txt, to_date('9/16/2010','mm/dd/yyyy') date1 from dual union all
select 222 id, 'text3' txt, to_date('9/16/2010','mm/dd/yyyy') date1 from dual union all
select 333 id, 'text4' txt, to_date('9/16/2010','mm/dd/yyyy') date1 from dual union all
select 444 id, 'text5' txt, to_date('9/20/2010','mm/dd/yyyy') date1 from dual union all
select 555 id, 'text6' txt, to_date('9/20/2010','mm/dd/yyyy') date1 from dual union all
select 666 id, 'text7' txt, to_date('9/20/2010','mm/dd/yyyy') date1 from dual union all
select 777 id, 'text8' txt, to_date('9/20/2010','mm/dd/yyyy') date1 from dual
)
I am creating a procedure that will insert data into a table, and this procedure will run every day.
So for example, if the procedure runs today, it will check today's date against table1, see that we have
4 rows for 9/16/2010, and insert them into a table. Then tomorrow, 9/17/2010, it will see if there is any 9/17/2010 data.
Since there is no data, the 4 rows with 9/16/2010 should be picked up and inserted with the 9/17/2010 date.
Now let's say that today is 9/20/2010; then the query should pick up the 9/20/2010 data set.
For 9/21/2010, the same data set as for 9/20/2010 should be picked, since that is the latest date less than or equal to 9/21/2010.
Here is sample output. Let's say we run the query today; then the output should be:
id txt date1
================================
123 text 09.16.2010
111 text2 09.16.2010
222 text3 09.16.2010
333 text4 09.16.2010 now lets say today is 9/17/2010 then query will look at table for the 9/17/2010, since it is not found
it will be pick the latest less than or equal to 9/17/2010
output should be
id txt date1
================================
123 text  09.17.2010
111 text2 09.17.2010
222 text3 09.17.2010
333 text4 09.17.2010
Now let's say today is 9/20/2010. The query will look at the table for 9/20/2010, find that there is an entry, and pick up that data set. So the output should be:
444 text5 09.20.2010
555 text6 09.20.2010
666 text7 09.20.2010
777 text8 09.20.2010
If you run it for 9/21/2010, the query should pick up the latest date less than or equal to 9/21/2010, which is 9/20/2010, so the output should be:
444 text5 09.21.2010
555 text6 09.21.2010
666 text7 09.21.2010
777 text8 09.21.2010
Can someone help write a query that, given a date, can retrieve the output above? Thanks.
with table1 as (
select 123 id, 'text' txt, to_date('9/16/2010','mm/dd/yyyy') date1 from dual union all
select 111 id, 'text2' txt, to_date('9/16/2010','mm/dd/yyyy') date1 from dual union all
select 222 id, 'text3' txt, to_date('9/16/2010','mm/dd/yyyy') date1 from dual union all
select 333 id, 'text4' txt, to_date('9/16/2010','mm/dd/yyyy') date1 from dual union all
select 444 id, 'text5' txt, to_date('9/20/2010','mm/dd/yyyy') date1 from dual union all
select 555 id, 'text6' txt, to_date('9/20/2010','mm/dd/yyyy') date1 from dual union all
select 666 id, 'text7' txt, to_date('9/20/2010','mm/dd/yyyy') date1 from dual union all
select 777 id, 'text8' txt, to_date('9/20/2010','mm/dd/yyyy') date1 from dual
)
/* substitute trunc(sysdate) with your parameter */
select t.id, t.txt, trunc(sysdate) as date1 from table1 t
where t.date1 = (select max(t2.date1) from table1 t2 where t2.date1 <= trunc(sysdate))
-
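The pick-the-latest-effective-date logic from the thread above (take the rows whose date is the latest one less than or equal to the run date, and restamp them with the run date) can be sketched in Python; the sample ids, texts, and dates are the ones from the thread:

```python
from datetime import date

# Sample rows from the thread: (id, txt, date1)
rows = [
    (123, "text",  date(2010, 9, 16)),
    (111, "text2", date(2010, 9, 16)),
    (222, "text3", date(2010, 9, 16)),
    (333, "text4", date(2010, 9, 16)),
    (444, "text5", date(2010, 9, 20)),
    (555, "text6", date(2010, 9, 20)),
    (666, "text7", date(2010, 9, 20)),
    (777, "text8", date(2010, 9, 20)),
]

def effective_rows(run_date):
    """Return the rows whose date1 is the latest date <= run_date,
    restamped with run_date (mirrors the max() subquery in the SQL)."""
    candidates = [d for (_, _, d) in rows if d <= run_date]
    if not candidates:
        return []
    latest = max(candidates)
    return [(i, t, run_date) for (i, t, d) in rows if d == latest]
```

Running it for 9/17/2010 returns the four 9/16 rows stamped with 9/17, and for 9/21/2010 it returns the 9/20 rows stamped with 9/21, matching the expected output above.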
QUERY TAKES MORE THAN 30 MINUTES AND IT'S CANCELED
Hi
I have a workbook that sometimes takes more than 30 minutes to execute, but Discoverer cancels it. Does anybody know how to change this? If the query takes more than 30 minutes, I still need it to finish.
Any help will be appreciated.
Best Regards
Yuri López
Hi
You need to alter the timeout settings and these are located in multiple places.
Discoverer Plus / Viewer using this workflow:
1. Edit the pref.txt file on the server located here: $ORACLE_HOME\discoverer\util
2. Locate the preference called Timeout and change it to your desired value in seconds. The default is 1800, which means 30 minutes
3. Save pref.txt
4. Execute applypreferences.bat if running on Windows or applypreferences.sh if running on Linux or Unix
5. Stop and restart the Discoverer server
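For illustration, the Timeout entry in pref.txt (step 2 above) is a simple name = value line; the 3600 below is an example value (one hour), not a recommendation, and the exact placement within pref.txt varies by Discoverer version:

```text
Timeout = 3600
```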
Discoverer Administrator using this workflow:
1. Launch Discoverer Administrator
2. From the menu bar, select Tools | Privileges
3. Select the user
4. Open the Query Governor tab
5. If "Prevent queries from running longer than" is checked, increase the limit
6. Click OK
Note: if "Prevent queries from running longer than" is not checked, then the Timeout in pref.txt controls how long queries run before they are stopped. If it is checked and its value is lower than Timeout, you need to increase it.
Let me know if this helps
Best wishes
Michael Armstrong-Smith
URL: http://learndiscoverer.com
Blog: http://learndiscoverer.blogspot.com -
Consider this situation:
Two or more ProductIDs are acquired by the same CustomerID, via the same ShipVia, on the same day of the week of the shipped date. I want a simple query for this.
My tables are from Northwind:
[Orders] = OrderID, CustomerID, EmployeeID, OrderDate, RequiredDate, ShippedDate, ShipVia, Freight, ShipName, ShipAddress, ShipCity, ShipRegion, ShipPostalCode, ShipCountry.
[Order Details] = OrderID, ProductID, UnitPrice, Quantity, Discount.
I tried the query below, but it is not exact; it gives wrong results.
select pd.CustomerID, pd.ProductID, pd.no_of_time_purchased, sd.ShipVia, sd.same_ship_count, sd.shipped_day from
(select od.ProductID, o.CustomerID, COUNT(od.ProductID) as no_of_time_purchased
from orders o join [Order Details] od on o.OrderID = od.OrderID
group by od.ProductID, o.CustomerID
having count(od.ProductID) > 1) pd
join
(select customerid, shipvia, count(shipvia) as same_ship_count, DATENAME(DW, ShippedDate) as shipped_day from orders
group by customerid, ShipVia, ShippedDate
having COUNT(ShipVia) > 1) sd
on sd.CustomerID = pd.CustomerID
Hi,
I think I have a solution that will at least give you a clue how to go about it. I have simplified the tables you mentioned and created them as temporary tables on my side, with some fake data to test with. I have included the generation of these temporary tables for your review.
In my example I have included:
1. A customer which has purchased the same product on the same day, using the same ship 3 times,
2. Another example the same as the first but the third purchase was on a different day
3. Another example the same as the first but the third purchase was a different product
4. Another example the same as the first but the third purchase was using a different "ShipVia".
You should be able to see that by grouping on all of the columns that you wish to return, you should not need to perform any subselects.
Please let me know if I have missed any requirements.
Hope this helps:
CREATE TABLE #ORDERS
(
OrderID INT,
CustomerID INT,
OrderDate DATETIME,
ShipVia VARCHAR(5)
)
CREATE TABLE #ORDERS_DETAILS
(
OrderID INT,
ProductID INT
)
INSERT INTO #ORDERS
VALUES
(1, 1, GETDATE(), 'ABC'),
(2, 1, GETDATE(), 'ABC'),
(3, 1, GETDATE(), 'ABC'),
(4, 2, GETDATE() - 4, 'DEF'),
(5, 2, GETDATE() - 4, 'DEF'),
(6, 2, GETDATE() - 5, 'DEF'),
(7, 3, GETDATE() - 10, 'GHI'),
(8, 3, GETDATE() - 10, 'GHI'),
(9, 3, GETDATE() - 10, 'GHI'),
(10, 4, GETDATE() - 10, 'JKL'),
(11, 4, GETDATE() - 10, 'JKL'),
(12, 4, GETDATE() - 10, 'MNO')
INSERT INTO #ORDERS_DETAILS
VALUES
(1, 1),
(2, 1),
(3, 1),
(4, 2),
(5, 2),
(6, 2),
(7, 3),
(8, 3),
(9, 4),
(10, 5),
(11, 5),
(12, 5)
SELECT * FROM #ORDERS
SELECT * FROM #ORDERS_DETAILS
SELECT
O.CustomerID,
OD.ProductID,
O.ShipVia,
COUNT(O.ShipVia),
DATENAME(DW, O.OrderDate) AS [Shipped Day]
FROM #ORDERS O
JOIN #ORDERS_DETAILS OD ON O.orderID = OD.OrderID
GROUP BY OD.ProductID, O.CustomerID, O.ShipVia, DATENAME(DW, O.OrderDate) HAVING COUNT(OD.ProductID) > 1
DROP TABLE #ORDERS
DROP TABLE #ORDERS_DETAILS
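As a cross-check of the grouping approach above, here is a small self-contained sketch using Python's sqlite3, with the same fake data as the answer. The table names orders/order_details, the fixed dates (in place of GETDATE()), and strftime('%w', ...) standing in for T-SQL's DATENAME(DW, ...) are all assumptions made for this sketch:

```python
import sqlite3

# In-memory reconstruction of the answer's temp tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (OrderID INT, CustomerID INT, OrderDate TEXT, ShipVia TEXT);
CREATE TABLE order_details (OrderID INT, ProductID INT);
""")
conn.executemany("INSERT INTO orders VALUES (?,?,?,?)", [
    (1, 1, '2024-01-01', 'ABC'), (2, 1, '2024-01-01', 'ABC'),
    (3, 1, '2024-01-01', 'ABC'),
    (4, 2, '2024-01-02', 'DEF'), (5, 2, '2024-01-02', 'DEF'),
    (6, 2, '2024-01-03', 'DEF'),   # same ship, different day
    (7, 3, '2024-01-04', 'GHI'), (8, 3, '2024-01-04', 'GHI'),
    (9, 3, '2024-01-04', 'GHI'),   # same day, but different product below
    (10, 4, '2024-01-05', 'JKL'), (11, 4, '2024-01-05', 'JKL'),
    (12, 4, '2024-01-05', 'MNO'),  # same day, different ship
])
conn.executemany("INSERT INTO order_details VALUES (?,?)", [
    (1, 1), (2, 1), (3, 1), (4, 2), (5, 2), (6, 2),
    (7, 3), (8, 3), (9, 4), (10, 5), (11, 5), (12, 5),
])

# Group on every attribute that must match; no subselects needed.
results = conn.execute("""
    SELECT o.CustomerID, od.ProductID, o.ShipVia, COUNT(*) AS n,
           strftime('%w', o.OrderDate) AS ship_dow
    FROM orders o
    JOIN order_details od ON o.OrderID = od.OrderID
    GROUP BY od.ProductID, o.CustomerID, o.ShipVia, strftime('%w', o.OrderDate)
    HAVING COUNT(*) > 1
    ORDER BY o.CustomerID
""").fetchall()
```

Only the repeated (customer, product, ship, weekday) combinations survive the HAVING clause; the "different day", "different product", and "different ship" rows each fall into their own single-row group and are excluded, mirroring examples 1 through 4 in the answer.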