Use Secondary Index for VBFA for fast execution.
Hi all,
How can I use a secondary index on the VBFA table to improve the performance of my SELECT statement
in an ALV report? The statement below executes very slowly.
SELECT vbelv
       posnv
       vbtyp_v
       vbeln
       posnn
       vbtyp_n
  FROM vbfa
  INTO TABLE it_ty_sd_flow_do
  FOR ALL ENTRIES IN it_inv
  WHERE vbeln   = it_inv-vbeln
    AND posnv   = it_inv-posnr
    AND vbtyp_n = 'M'
    AND vbtyp_v = 'J'.
Your query will never be optimized if you do not first check whether the internal table used in the FOR ALL ENTRIES clause has at least one row.
If that table is empty, the conditions that reference it are ignored and you end up with a full table scan, which cannot be your objective.
Have fun.
Similar Messages
-
Allow indexing service to index this disk for fast file searching
Hello,
I'm using Oracle on Win 2000. I have lately found out that the option "Compress drive to save disk space"
is not supported by Oracle.
When I've opened the drive specifications on the properties dialog, I found also another option called "Allow indexing service to index this disk for fast file searching".
My questions are:
1. Should I turn it off?
2. Same question when the indexed directories do not include Oracle files?
3. Same question when the indexing service is set to off?
Thanks,
Tal Olier ([email protected])
If this is a production machine, I'd definitely turn this off. Hopefully, no one is going to be searching for particular files on a production server-- they'll know where to go. You probably don't want any unnecessary background processes using up CPU or RAM either.
If this is a development machine, the developer may be searching for files with some frequency. If that's the case, it may be worth it to keep this service enabled.
I'm not aware of any Oracle issues when this service is running, but I'm paranoid enough not to trust it on a production box. If you're running on a dev box, you shouldn't have problems, particularly if you're not indexing Oracle files.
Justin -
Why are Yahoo/Gmail retrieval and the Facebook app so slow on WiFi, when checking email and Facebook through the browser is fast on the same WiFi connection?
Hi SandyS_VZW,
Yes, I tried resetting the WiFi connection and the problem still persists.
Here it is...to make it clear. Connected thru the same wifi at home...
-> emails (yahoo/gmail) and facebook WEBSITES are working fine and fast when using/accessing thru a browser (chrome/samsung browser) - no problem with this.
-> emails (yahoo/gmail) and facebook APP are soooooo sloooww (through the app). Slow, I mean, compared to using their browsers/websites... news feeds/emails refresh quickly there, but not when using the APP installed on the Samsung Galaxy Note 4. Slow like: it will take around 5-10 minutes just to get your emails and news feed refreshed.
THIS HAPPENS ONLY WHEN CONNECTED THROUGH WIFI, which has a speed of 10-20 Mbps. It does not happen when connected to mobile data.
My wife has the same Samsung Galaxy Note 4 (coming from a different provider, AT&T) - same setup (emails, FB app), same WiFi connection, but she's not experiencing anything like it.
Not sure why. I don't want to believe that while connected to WiFi, Verizon is restricting anything and ******* me off to make me switch to my data plan connection every time - which is unfair!
Is there a known issue similar to this case?
thanks, -
How to use the index method for the PathPoints object in Illustrator through JavaScript
Hi...
I am using Illustrator CS2 with JavaScript...
How do I use the index method for the PathPoints object in Illustrator through JavaScript?
Hi, what are you trying to do with path points?
CarlosCanto -
I am using ethernet through a Linksys SE1500 5-port fast ethernet switch. I can't seem to sign in on the Apple TV for anything, although I have accounts set up in iTunes/Apple and Netflix. Any suggestions PLEASE - I have spent 3 hours trying everything I can think of. Any help will be appreciated!
I have a linksys router but it isn't strong enough to get to this computer, so we are using ethernet.
Thank you
Colleen
Welcome to the Apple Community.
Are you getting any messages about date and time?
Have you tried restarting the Apple TV by removing ALL the cables for 30 seconds?
Are you connected to the network correctly? -
Why is the Star Transformation using two indexes for the same dimension?
Hi,
Recently, I have made an investigation about the Star Transformation feature. I have found a strange test case, which plays an important role in my strategy for our overall DWH architecture. Here it is:
The Strategy:
I would like to have the classical Star Transformation approach (single column Bitmap Indexes for each dimension foreign key column in the fact table), together with additional Bitmap Join Indexes for some of the dimension attributes, which would benefit from the materialization of the join (bitmap merge operation will be skipped/optimized).
The query:
select dp.brand, ds.region_name, dc.region_name
, count(*), sum(f.extended_price)
from fact_line_item f
, dim_part dp
, dim_supplier ds
, dim_customer dc
where dp.mfgr = 10 -- dimension selectivity = 1/10 --> actual/fact selectivity = 6/10
and f.part_dk = dp.dk
and ds.region_name = 'REGION #1' -- dimension selectivity = 1/9
and f.supplier_dk = ds.dk
and dc.region_name = 'REGION #1' -- dimension selectivity = 1/11
and f.customer_dk = dc.dk
group by dp.brand, ds.region_name, dc.region_name
The actual plan:
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
| 0 | SELECT STATEMENT | | 1 | | 3247 (100)| 1 |00:01:42.05 | 264K| 220K|
| 1 | HASH GROUP BY | | 1 | 2 | 3247 (1)| 1 |00:01:42.05 | 264K| 220K|
|* 2 | HASH JOIN | | 1 | 33242 | 3037 (1)| 217K|00:01:29.67 | 264K| 220K|
|* 3 | TABLE ACCESS FULL | DIM_SUPPLIER | 1 | 1112 | 102 (0)| 1112 |00:00:00.01 | 316 | 4 |
|* 4 | HASH JOIN | | 1 | 33245 | 2934 (1)| 217K|00:01:29.10 | 264K| 220K|
|* 5 | TABLE ACCESS FULL | DIM_CUSTOMER | 1 | 910 | 102 (0)| 910 |00:00:00.08 | 316 | 8 |
|* 6 | HASH JOIN | | 1 | 33248 | 2831 (1)| 217K|00:01:28.57 | 264K| 220K|
|* 7 | TABLE ACCESS FULL | DIM_PART | 1 | 10 | 3 (0)| 10 |00:00:00.01 | 6 | 0 |
| 8 | PARTITION RANGE ALL | | 1 | 36211 | 2827 (1)| 217K|00:01:28.01 | 264K| 220K|
| 9 | TABLE ACCESS BY LOCAL INDEX ROWID| FACT_LINE_ITEM | 6 | 36211 | 2827 (1)| 217K|00:01:33.85 | 264K| 220K|
| 10 | BITMAP CONVERSION TO ROWIDS | | 6 | | | 217K|00:00:07.09 | 46980 | 3292 |
| 11 | BITMAP AND | | 6 | | | 69 |00:00:08.33 | 46980 | 3292 |
| 12 | BITMAP MERGE | | 6 | | | 193 |00:00:02.09 | 2408 | 1795 |
| 13 | BITMAP KEY ITERATION | | 6 | | | 4330 |00:00:04.66 | 2408 | 1795 |
| 14 | BUFFER SORT | | 6 | | | 60 |00:00:00.01 | 6 | 0 |
|* 15 | TABLE ACCESS FULL | DIM_PART | 1 | 10 | 3 (0)| 10 |00:00:00.01 | 6 | 0 |
|* 16 | BITMAP INDEX RANGE SCAN | FACT_LI__P_PART_DIM_KEY_BIX | 60 | | | 4330 |00:00:02.11 | 2402 | 1795 |
|* 17 | BITMAP INDEX SINGLE VALUE | FACT_LI__P_PART_MFGR_BJX | 6 | | | 1747 |00:00:06.65 | 890 | 888 |
| 18 | BITMAP MERGE | | 6 | | | 169 |00:00:02.78 | 16695 | 237 |
| 19 | BITMAP KEY ITERATION | | 6 | | | 5460 |00:00:01.56 | 16695 | 237 |
| 20 | BUFFER SORT | | 6 | | | 5460 |00:00:00.02 | 316 | 0 |
|* 21 | TABLE ACCESS FULL | DIM_CUSTOMER | 1 | 910 | 102 (0)| 910 |00:00:00.01 | 316 | 0 |
|* 22 | BITMAP INDEX RANGE SCAN | FACT_LI__P_CUST_DIM_KEY_BIX | 5460 | | | 5460 |00:00:02.07 | 16379 | 237 |
| 23 | BITMAP MERGE | | 6 | | | 170 |00:00:03.65 | 26987 | 372 |
| 24 | BITMAP KEY ITERATION | | 6 | | | 6672 |00:00:02.23 | 26987 | 372 |
| 25 | BUFFER SORT | | 6 | | | 6672 |00:00:00.01 | 316 | 0 |
|* 26 | TABLE ACCESS FULL | DIM_SUPPLIER | 1 | 1112 | 102 (0)| 1112 |00:00:00.01 | 316 | 0 |
|* 27 | BITMAP INDEX RANGE SCAN | FACT_LI__S_SUPP_DIM_KEY_BIX | 6672 | | | 6672 |00:00:02.74 | 26671 | 372 |
The Question:
Why is the Star Transformation using both indexes FACT_LI__P_PART_DIM_KEY_BIX and FACT_LI__P_PART_MFGR_BJX for the same dimension criteria (dp.mfgr = 10)? The introduction of the additional Bitmap Join Index actually makes Oracle do the work twice!
Anybody, any idea?
Dom, here is the plan with the predicates:
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
| 0 | SELECT STATEMENT | | 1 | | 3638 (100)| 1 |00:06:41.17 | 445K| 236K|
| 1 | HASH GROUP BY | | 1 | 2 | 3638 (1)| 1 |00:06:41.17 | 445K| 236K|
|* 2 | HASH JOIN | | 1 | 33242 | 3429 (1)| 217K|00:08:18.02 | 445K| 236K|
|* 3 | TABLE ACCESS FULL | DIM_SUPPLIER | 1 | 1112 | 102 (0)| 1112 |00:00:00.03 | 319 | 313 |
|* 4 | HASH JOIN | | 1 | 33245 | 3326 (1)| 217K|00:08:17.47 | 445K| 236K|
|* 5 | TABLE ACCESS FULL | DIM_CUSTOMER | 1 | 910 | 102 (0)| 910 |00:00:00.01 | 319 | 313 |
|* 6 | HASH JOIN | | 1 | 33248 | 3223 (1)| 217K|00:08:16.63 | 445K| 236K|
|* 7 | TABLE ACCESS FULL | DIM_PART | 1 | 10 | 3 (0)| 10 |00:00:00.01 | 6 | 0 |
| 8 | PARTITION RANGE ALL | | 1 | 36211 | 3219 (1)| 217K|00:08:16.30 | 445K| 236K|
| 9 | TABLE ACCESS BY LOCAL INDEX ROWID| FACT_LINE_ITEM | 6 | 36211 | 3219 (1)| 217K|00:08:40.89 | 445K| 236K|
| 10 | BITMAP CONVERSION TO ROWIDS | | 6 | | | 217K|00:00:32.00 | 46919 | 19331 |
| 11 | BITMAP AND | | 6 | | | 69 |00:00:34.50 | 46919 | 19331 |
| 12 | BITMAP MERGE | | 6 | | | 193 |00:00:00.58 | 2353 | 1 |
| 13 | BITMAP KEY ITERATION | | 6 | | | 4330 |00:00:00.10 | 2353 | 1 |
| 14 | BUFFER SORT | | 6 | | | 60 |00:00:00.01 | 6 | 0 |
|* 15 | TABLE ACCESS FULL | DIM_PART | 1 | 10 | 3 (0)| 10 |00:00:00.01 | 6 | 0 |
|* 16 | BITMAP INDEX RANGE SCAN | FACT_LI__P_PART_DIM_KEY_BIX | 60 | | | 4330 |00:00:00.07 | 2347 | 1 |
|* 17 | BITMAP INDEX SINGLE VALUE | FACT_LI__P_PART_MFGR_BJX | 6 | | | 1747 |00:01:23.64 | 882 | 565 |
| 18 | BITMAP MERGE | | 6 | | | 169 |00:00:09.14 | 16697 | 7628 |
| 19 | BITMAP KEY ITERATION | | 6 | | | 5460 |00:00:02.19 | 16697 | 7628 |
| 20 | BUFFER SORT | | 6 | | | 5460 |00:00:00.01 | 316 | 0 |
|* 21 | TABLE ACCESS FULL | DIM_CUSTOMER | 1 | 910 | 102 (0)| 910 |00:00:00.01 | 316 | 0 |
|* 22 | BITMAP INDEX RANGE SCAN | FACT_LI__P_CUST_DIM_KEY_BIX | 5460 | | | 5460 |00:00:08.78 | 16381 | 7628 |
| 23 | BITMAP MERGE | | 6 | | | 170 |00:00:21.46 | 26987 | 11137 |
| 24 | BITMAP KEY ITERATION | | 6 | | | 6672 |00:00:10.29 | 26987 | 11137 |
| 25 | BUFFER SORT | | 6 | | | 6672 |00:00:00.01 | 316 | 0 |
|* 26 | TABLE ACCESS FULL | DIM_SUPPLIER | 1 | 1112 | 102 (0)| 1112 |00:00:00.01 | 316 | 0 |
|* 27 | BITMAP INDEX RANGE SCAN | FACT_LI__S_SUPP_DIM_KEY_BIX | 6672 | | | 6672 |00:00:20.94 | 26671 | 11137 |
Predicate Information (identified by operation id):
2 - access("F"."SUPPLIER_DK"="DS"."DK")
3 - filter("DS"."REGION_NAME"='REGION #1')
4 - access("F"."CUSTOMER_DK"="DC"."DK")
5 - filter("DC"."REGION_NAME"='REGION #1')
6 - access("F"."PART_DK"="DP"."DK")
7 - filter("DP"."MFGR"=10)
15 - filter("DP"."MFGR"=10)
16 - access("F"."PART_DK"="DP"."DK")
17 - access("F"."SYS_NC00017$"=10)
21 - filter("DC"."REGION_NAME"='REGION #1')
22 - access("F"."CUSTOMER_DK"="DC"."DK")
26 - filter("DS"."REGION_NAME"='REGION #1')
27 - access("F"."SUPPLIER_DK"="DS"."DK")
Note
- star transformation used for this statement -
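One way to check whether the extra bitmap join index really is the culprit is to hide it from the optimizer and re-run the query. This is a sketch, assuming Oracle 11g or later (where invisible indexes are available); on earlier versions you would have to drop and recreate the index instead:

```sql
-- Hide the bitmap join index from the optimizer without dropping it.
ALTER INDEX fact_li__p_part_mfgr_bjx INVISIBLE;

-- Re-run the star query here and compare the plans: the BITMAP AND
-- branch for DIM_PART should now use only FACT_LI__P_PART_DIM_KEY_BIX.

-- Restore the index when done.
ALTER INDEX fact_li__p_part_mfgr_bjx VISIBLE;
```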
Which event classes should I use for finding good indexes and statistics for queries in an SP?
Dear all,
I am trying to use Profiler to create a trace, so that it can be used as the workload in
"Database Engine Tuning Advisor" for optimization of one stored procedure.
Please tell me about the event classes which I should use in the trace.
The stored proc contains three insert queries which insert data into a table variable.
Finally a select query is used on the same table variable, with one union of the same table variable, to generate a sequence for records based on certain conditions on a few columns.
There are three cases where I am using the above structure of the SP, so there are three SPs; out of the three, I will choose one based on their performance.
1) There is only one table with three inserts which go into a table variable with a final sequence creation block.
2) There are 15 tables with 45 inserts, which go into a table variable with a final sequence creation block.
3) There are 3 tables with 9 inserts, which go into a table variable with a final sequence creation block.
In all the above cases the number of records will be around 5 lakh (500,000).
The purpose is optimization of the queries in the SP,
i.e. which event classes I should use for finding good indexes and statistics for the queries in the SP.
Yours sincerely
"Database Engine Tuning Advisor" for optimization of one stored procedure.
Please tell me about the event classes which I should use in the trace.
You can use the "Tuning" template to capture the workload to a trace file that can be used by the DETA. See
http://technet.microsoft.com/en-us/library/ms190957(v=sql.105).aspx
If you are capturing the workload of a production server, I suggest you not do that directly from Profiler as that can impact server performance. Instead, start/stop the Profiler Tuning template against a test server and then script the trace
definition (File-->Export-->Script Trace Definition). You can then customize the script (e.g. file name) and run the script against the prod server to capture the workload to the specified file. Stop and remove the trace after the workload
is captured with sp_trace_setstatus:
DECLARE @TraceID int = <trace id returned by the trace create script>
EXEC sp_trace_setstatus @TraceID, 0; --stop trace
EXEC sp_trace_setstatus @TraceID, 2; --remove trace definition
Dan Guzman, SQL Server MVP, http://www.dbdelta.com -
Total space used by indexes for specific schema
Hi all.
Running Oracle 9.2.0.4.
Is there a way I can see how much index space is used for a specific schema? I query index_stats and it returns 0 rows but my stats were updated this morning.
Any input is apreciated.
Thanks
Hi,
Index space and index stats are two different things.
What are you interested to know?
1) How much space is used by indexes for a particular schema:
select sum(bytes)/1024/1024 from dba_segments
where (owner, segment_name) in
(select owner, index_name from dba_indexes
where owner = '<schema you are interested in>');
2) Are the stats gathered on the indexes?
select index_name, last_analyzed, num_rows from dba_indexes
where owner = '<schema you are interested in>';
Regards
Anurag -
I'd been using my iPhone 5 16GB for fifteen days already and I'm experiencing overheating every time I'm using the 3G internet and when I'm turning on my personal hotspot. The battery also drains so fast. I thought this phone would make me satisfied, but why?
My god, what a bible up above...
I've just been thinking this might be a good replacement so the iPhone 5 can stay alive longer?
http://www.ebay.co.uk/itm/360731576101?ssPageName=STRK:MEWAX:IT&_trksid=p3984.m1 423.l2649 -
Is RAID good for anything, or is it faster using 2 different hard disks?
I have RAID on my MSI KT3 Ultra2. Should I make use of it? Is it any faster if I put my second HD on it?
Both HDs are split into two partitions and each serves its purpose, so I have no intention of running two identical systems.
Hi Danielsan,
RAID only becomes faster when you set 2 identical HDDs as an array for RAID 0. If you're using 2 different HDDs, the performance difference between them may drag the RAID 0 array down to worse than single-drive performance.
If you have critical data and want instant backup, RAID 1 serves you well: the data is 100% mirrored on the 2nd HDD. RAID 1 does not provide the performance edge of RAID 0, though.
You can also use the IDE3 and IDE4 for single drive operation with some workaround.
Some would suggest to use modded BIOS (which void the warranty) to flash Ultra ATA133 controller to the BIOS and lose RAID function, then you can use both IDE3 and IDE4 for single drive and even optical drives.
Basically that's all you can get from the RAID IDE3 and IDE4 on the board. -
Index (or not) for excluding NULL values in a query
Hello,
I have table that can become very large. The table has a varchar2 column (let's call it TEXT) that can contain NULL values. I want to process only the records that have a value (NOT NULL). Also, the table is continuously expanded with newly inserted records. The inserts should suffer as little performance loss as possible.
My question: should I use an index on the column and if so, what kind of index?
I have done a little test with a function based index (inspired by this Tom Kyte article: http://tkyte.blogspot.com/2006/01/something-about-nothing.html):
create index text_isnull_idx on my_table(text,0);
I notice that if I use the clause WHERE TEXT IS NULL, the index is used. But if I use a clause WHERE TEXT IS NOT NULL (which is the clause I want to use), a full table scan is performed. Is this bad? Can I somehow improve the speed of this selection?
Thanks in advance,
Frans
I built a test case with a very simple table with 2 columns, and it shows that a FTS is better than index access even when the above ratio is <= 0.01 (1%):
DROP TABLE T1;
CREATE TABLE T1 (
C1 VARCHAR2(100)
,C2 NUMBER
);
INSERT INTO T1 (SELECT TO_CHAR(OBJECT_ID), ROWNUM FROM USER_OBJECTS);
BEGIN
FOR I IN 1..100 LOOP
INSERT INTO T1 (SELECT NULL, ROWNUM FROM USER_OBJECTS);
END LOOP;
END;
/
CREATE INDEX T1_IDX ON T1(C1);
ANALYZE TABLE T1 COMPUTE STATISTICS
FOR TABLE
FOR ALL INDEXES
FOR ALL INDEXED COLUMNS;
SET AUTOTRACE TRACEONLY
SELECT
C1, C2
FROM T1 WHERE C1 IS NOT NULL;
3864 rows selected.
real: 1344
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=59 Card=3864 Bytes=30912)
1 0 TABLE ACCESS (FULL) OF 'T1' (Cost=59 Card=3864 Bytes=30912)
Statistics
0 recursive calls
0 db block gets
2527 consistent gets
3864 rows processed
BUT
SELECT
--+ FIRST_ROWS
C1, C2
FROM T1 WHERE C1 IS NOT NULL;
3864 rows selected.
real: 1296
Execution Plan
0 SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=35 Card=3864 Bytes=30912)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=35 Card=3864 Bytes=30912)
2 1 INDEX (FULL SCAN) OF 'T1_IDX' (NON-UNIQUE) (Cost=11 Card=3864)
Statistics
0 recursive calls
0 db block gets
5052 consistent gets
3864 rows processed
and just for comparison:
SELECT * FROM T1 WHERE C1 IS NULL;
386501 rows selected.
real: 117878
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=59 Card=386501 Bytes=3092008)
1 0 TABLE ACCESS (FULL) OF 'T1' (Cost=59 Card=386501 Bytes=3092008)
Statistics
0 recursive calls
0 db block gets
193850 consistent gets
386501 rows processed
Hence you have to benchmark your queries with and without index[es]. -
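If only a small fraction of the rows ever has a value in TEXT, one common workaround (a sketch, not tested against the poster's data; my_table and text are the names from the question) is a function-based index that contains entries only for the non-NULL rows, queried through the same expression:

```sql
-- NULL results of the CASE expression create no entries in this index,
-- so inserts of rows with a NULL text column pay almost no maintenance cost.
CREATE INDEX text_notnull_idx
  ON my_table (CASE WHEN text IS NOT NULL THEN text END);

-- The query must repeat the indexed expression so the optimizer can match it:
SELECT *
  FROM my_table
 WHERE CASE WHEN text IS NOT NULL THEN text END IS NOT NULL;
```

Whether this beats a full table scan still depends on the fraction of non-NULL rows, so it needs to be benchmarked against your own data.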
Auto-indexing is slow for arrays with >1 dimensions
Hi,
I was looking at the performance of operations on all individual elements in 3D arrays, especially the difference between auto-indexing (left image) and manual-indexing (right image, calling "Index array" and "Replace array subset" in the innermost loop). I'm a bit puzzled by the results and post them here for discussion and hopefully somebody's benefit in the future.
Left: auto-indexing; right: manual-indexing
In the tests I added a different random number to all individual elements in a 3D array. I found that auto-indexing is 1.2 to 1.5 times slower than manual-indexing. I also found that the performance of auto-indexing is much more dependent on the size the dimensions: an array with 1000x200x40 elements is 20% slower than an array with 1000x40x200 elements. For manual-indexing there is hardly any difference. The detailed results can be found at the end of this post.
I was under the impression that auto-indexing was the preferred method for this kind of operation: it achieves exactly the same result and it is much clearer in the block diagram. I also expected that auto-indexing would have been optimized in LabView, but the tests show this is clearly not the case.
What is auto-indexing doing?
With two tests I looked at how auto-index works.
First, I looked if auto-indexing reorganizes the data in an inefficient way. To do this I made a method "fake-auto-indexing" which calls "Array subset" and "Reshape array" (or "Index array" for a 1D-array) when it enters _every_ loop and calls "Replace array subset" when exiting _every_ loop (as opposed to manual-indexing, where I do this only in the inner loop). I replicated this in a test (calling it fake-auto-indexing) and found that the performance is very similar to auto-indexing, especially looking at the trend for the different array lengths.
Fake-auto-indexing
Second, I looked if Locality of reference (how the 3D array is stored in memory and how efficiently you can iterate over that) may be an issue. Auto-indexing loops over pages-rows-columns (i.e. pages in the outer for-loop, rows in the middle for-loop, columns in the inner for-loop). This can't be changed for auto-indexing, but I did change it for manual and fake-indexing. The performance of manual-indexing is now similar to auto-indexing, except for two cases that I can't explain. Fake-auto-indexing performs way worse in all cases.
It seems that the performance problem with auto-indexing is due to inefficient data organization.
Other tests
I did the same tests for 1D and 2D arrays. For 1D arrays the three methods perform identical. For 2D arrays auto-indexing is 15% slower than manual-indexing, while fake-auto-indexing is 8% slower than manual-indexing. I find it interesting that auto-indexing is the slowest of the three methods.
Finally, I tested the performance of operating on entire columns (instead of every single element) of a 3D array. In all cases it is a lot faster than iterating over individual elements. Auto-indexing is more than 1.8 to 3.4 times slower than manual-indexing, while fake-auto-indexing is about 1.5 to 2.7 times slower. Because of the number of iterations that has to be done, the effect of the size of the column is important: an array with 1000x200x40 elements is in all cases much slower than an array with 1000x40x200 elements.
Discussion & conclusions
In all the cases I tested, auto-indexing is significantly slower than manual-indexing. Because auto-indexing is the standard way of indexing arrays in LabView I expected the method to be highly optimized. Judging by the tests I did, that is not the case. I find this puzzling.
Does anybody know any best practices when it comes to working with >1D arrays? It seems there is a lack of documentation about the performance, surprising given the significant differences I found.
It is of course possible that I made mistakes. I tried to keep the code as straightforward as possible to minimize that risk. Still, I hope somebody can double-check the work I did.
Results
I ran the tests on a computer with a i5-4570 @ 3.20 GHz processor (4 cores, but only 1 is used), 8 GB RAM running Windows 7 64-bit and LabView 2013 32-bit. The tests were averaged 10 times. The results are in milliseconds.
3D-arrays, iterate pages-rows-columns
pages x rows x cols : auto manual fake
40 x 200 x 1000 : 268.9 202.0 268.8
40 x 1000 x 200 : 276.9 204.1 263.8
200 x 40 x 1000 : 264.6 202.8 260.6
200 x 1000 x 40 : 306.9 205.0 300.0
1000 x 40 x 200 : 253.7 203.1 259.3
1000 x 200 x 40 : 307.2 206.2 288.5
100 x 100 x 100 : 36.2 25.7 33.9
3D-arrays, iterate columns-rows-pages
pages x rows x cols : manual fake
40 x 200 x 1000 : 277.6 457
40 x 1000 x 200 : 291.6 461.5
200 x 40 x 1000 : 277.4 431.9
200 x 1000 x 40 : 452.5 572.1
1000 x 40 x 200 : 298.1 460.4
1000 x 200 x 40 : 460.5 583.8
100 x 100 x 100 : 31.7 51.9
2D-arrays, iterate rows-columns
rows x cols : auto manual fake
200 x 20000 : 123.5 106.1 113.2
20000 x 200 : 119.5 106.1 115.0
10000 x 10000 : 3080.25 2659.65 2835.1
1D-arrays, iterate over columns
cols : auto manual fake
500000 : 11.5 11.8 11.6
3D-arrays, iterate pages-rows, operate on columns
pages x rows x cols : auto manual fake
40 x 200 x 1000 : 83.9 23.3 62.9
40 x 1000 x 200 : 89.6 31.9 69.0
200 x 40 x 1000 : 74.3 27.6 62.2
200 x 1000 x 40 : 135.6 76.2 107.1
1000 x 40 x 200 : 75.3 31.2 68.6
1000 x 200 x 40 : 133.6 71.7 108.7
100 x 100 x 100 : 13.0 5.4 9.9
VI's
I attached the VI's I used for testing. "ND_add_random_number.vi" (where N is 1, 2 or 3) is where all the action happens, taking a selector with a method and an array with the N dimensions as input. It returns the result and time in milliseconds. Using "ND_add_random_number_automated_test.vi" I run this for a few different situations (auto/manual/fake-indexing, interchanging the dimensions). The VI's starting with "3D_locality_" are used for the locality tests. The VI's starting with "3D_norows_" are used for the iterations over pages and columns only.
Attachments:
3D_arrays_2.zip 222 KB
Robert,
the copy-thing is not specific for auto-indexing. It is common for all tunnels. A tunnel is first of all a unique data space.
This sounds hard, but there is an optimization in the compiler trying to reduce the number of copies the VI will create. This optimization is called "in-placeness".
The in-placeness algorithm checks if the wire passing data to the tunnel is connected to anything else (a "branch"). If there is no other connection than the tunnel, chances are high that the tunnel won't create an additional copy.
Speaking of loops, tunnels always copy. The in-placeness algorithm cannot opt that out.
You can do a small test to elaborate on this: simply wire "0" (or anything less than the array size of the input array) to the 'N' terminal.......
Norbert
PS: Auto-indexing is perfect for a "by element" operation of analysis without modification of the element in the array. If you want to modify values, use shift registers and Replace Array Subset.
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it. -
SQL - Need Tunning tips for group by [LATEST EXECUTION PLAN IS ATTACHED]
Hi All Experts,
My SQL is taking a lot of time to execute. If I remove the group by clause it runs within a minute, but as soon as I put in the sum() and group by clause it takes ages to run. The number of records without the group by clause is almost 85 lakh (about 8.5 million). Is the huge dataset killing this? Is there any way to tune the group by clause? Below are my select hints and execution plan. Please help.
SQL
SELECT /*+ CURSOR_SHARING_EXACT gather_plan_statistics all_rows no_index(atm) no_expand
leading (src cpty atm)
index(bk WBKS_PK) index(src WSRC_UK1) index(acct WACC_UK1)
use_nl(acct src ccy prd cpty grate sb) */
EXECUTION PLAN
PLAN_TABLE_OUTPUT
SQL_ID 1y5pdhnb9tks5, child number 0
SELECT /*+ CURSOR_SHARING_EXACT gather_plan_statistics all_rows no_index(atm) no_expand leading (src cpty atm) index(bk
WBKS_PK) index(src WSRC_UK1) index(acct WACC_UK1) use_nl(acct src ccy prd cpty grate sb) */ atm.business_date,
atm.entity legal_entity, TO_NUMBER (atm.set_of_books) setofbooksid, atm.source_system_id sourcesystemid,
ccy.ccy_currency_code ccy_currency_code, acct.acct_account_code, 0 gl_bal, SUM (atm.amount)
atm_bal, 0 gbp_equ, ROUND (SUM (atm.amount * grate.rate), 4) AS
atm_equ, prd.prd_product_code, glacct.parentreportingclassification parentreportingclassification,
cpty_counterparty_code FROM wh_sources_d src,
Plan hash value: 4193892926
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 1 | HASH GROUP BY | | 1 | 1 | 471 |00:31:38.26 | 904M| 76703 | 649K| 649K| 1149K (0)|
| 2 | NESTED LOOPS | | 1 | 1 | 8362K|00:47:06.06 | 904M| 76703 | | | |
| 3 | NESTED LOOPS | | 1 | 1 | 10M|00:28:48.84 | 870M| 17085 | | | |
| 4 | NESTED LOOPS | | 1 | 1 | 10M|00:27:56.05 | 849M| 17084 | | | |
| 5 | NESTED LOOPS | | 1 | 8 | 18M|00:14:10.93 | 246M| 17084 | | | |
| 6 | NESTED LOOPS | | 1 | 22 | 18M|00:11:58.96 | 189M| 17084 | | | |
| 7 | NESTED LOOPS | | 1 | 22 | 18M|00:10:24.69 | 152M| 17084 | | | |
| 8 | NESTED LOOPS | | 1 | 1337 | 18M|00:06:00.74 | 95M| 17083 | | | |
| 9 | NESTED LOOPS | | 1 | 1337 | 18M|00:02:52.20 | 38M| 17073 | | | |
|* 10 | HASH JOIN | | 1 | 185K| 18M|00:03:46.38 | 1177K| 17073 | 939K| 939K| 575K (0)|
| 11 | NESTED LOOPS | | 1 | 3 | 3 |00:00:00.01 | 11 | 0 | | | |
| 12 | TABLE ACCESS BY INDEX ROWID | WH_SOURCES_D | 1 | 3 | 3 |00:00:00.01 | 3 | 0 | | | |
|* 13 | INDEX RANGE SCAN | WSRC_UK1 | 1 | 3 | 3 |00:00:00.01 | 2 | 0 | | | |
|* 14 | TABLE ACCESS BY INDEX ROWID | WH_COUNTERPARTIES_D | 3 | 1 | 3 |00:00:00.01 | 8 | 0 | | | |
|* 15 | INDEX UNIQUE SCAN | WCPY_U1 | 3 | 1 | 3 |00:00:00.01 | 5 | 0 | | | |
| 16 | PARTITION RANGE SINGLE | | 1 | 91M| 91M|00:00:00.08 | 1177K| 17073 | | | |
|* 17 | TABLE ACCESS FULL | WH_ATM_BALANCES_F | 1 | 91M| 91M|00:00:00.04 | 1177K| 17073 | | | |
|* 18 | TABLE ACCESS BY INDEX ROWID | WH_PRODUCTS_D | 18M| 1 | 18M|00:01:43.88 | 37M| 0 | | | |
|* 19 | INDEX UNIQUE SCAN | WPRD_UK1 | 18M| 1 | 18M|00:00:52.13 | 18M| 0 | | | |
|* 20 | TABLE ACCESS BY GLOBAL INDEX ROWID| WH_BOOKS_D | 18M| 1 | 18M|00:02:53.01 | 56M| 10 | | | |
|* 21 | INDEX UNIQUE SCAN | WBKS_PK | 18M| 1 | 18M|00:01:08.32 | 37M| 10 | | | |
|* 22 | TABLE ACCESS BY INDEX ROWID | T_SDM_SOURCEBOOK | 18M| 1 | 18M|00:03:43.66 | 56M| 1 | | | |
|* 23 | INDEX RANGE SCAN | TSSB_N5 | 18M| 2 | 23M|00:01:11.50 | 18M| 1 | | | |
|* 24 | TABLE ACCESS BY INDEX ROWID | WH_CURRENCIES_D | 18M| 1 | 18M|00:01:51.21 | 37M| 0 | | | |
|* 25 | INDEX UNIQUE SCAN | WCUR_PK | 18M| 1 | 18M|00:00:49.26 | 18M| 0 | | | |
| 26 | TABLE ACCESS BY INDEX ROWID | WH_GL_DAILY_RATES_F | 18M| 1 | 18M|00:01:55.84 | 56M| 0 | | | |
|* 27 | INDEX UNIQUE SCAN | WGDR_U2 | 18M| 1 | 18M|00:01:10.89 | 37M| 0 | | | |
| 28 | INLIST ITERATOR | | 18M| | 10M|00:22:40.03 | 603M| 0 | | | |
|* 29 | TABLE ACCESS BY INDEX ROWID | WH_ACCOUNTS_D | 150M| 1 | 10M|00:20:19.05 | 603M| 0 | | | |
|* 30 | INDEX UNIQUE SCAN | WACC_UK1 | 150M| 5 | 150M|00:10:16.81 | 452M| 0 | | | |
| 31 | TABLE ACCESS BY INDEX ROWID | T_SDM_GLACCOUNT | 10M| 1 | 10M|00:00:50.64 | 21M| 1 | | | |
|* 32 | INDEX UNIQUE SCAN | TSG_PK | 10M| 1 | 10M|00:00:26.17 | 10M| 0 | | | |
|* 33 | TABLE ACCESS BY INDEX ROWID | WH_COMMON_TRADES_D | 10M| 1 | 8362K|00:18:52.56 | 33M| 59618 | | | |
|* 34 | INDEX UNIQUE SCAN | WCTD_PK | 10M| 1 | 10M|00:03:06.56 | 21M| 5391 | | | |
Edited by: user535789 on Mar 17, 2011 9:45 PM
Edited by: user535789 on Mar 20, 2011 8:33 PM
user535789 wrote:
My SQL is taking so much time to execute. If I remove the group by clause it is running within a minute but as soon as I am putting sum() and group by clause it is taking ages to run the sql. Number of records are wihout group by clause is almost 85 Lachs (*8 Million*). Is hugh dataset is killing this? Is there any way to tune the data on Group by clause. Below is my Select hints and execution plan. Please help.
I doubt that your 8 million records are shown within minutes.
I guess that the output started after a few minutes. But this does not mean that the full result set is there. It just means the database is able to return the first few records to you after a few minutes.
Once you add a group by (or an order by), all the rows need to be fetched before the database can start showing them to you.
But maybe you could run some tests to compare the full output. I find it useful to SET AUTOTRACE TRACEONLY for such a purpose (in SQL*Plus). This avoids printing the selection to the screen. -
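The suggested comparison can be scripted in SQL*Plus; a minimal sketch (the two statements below are stand-ins for the poster's query with and without the GROUP BY):

```sql
SET TIMING ON
SET AUTOTRACE TRACEONLY   -- rows are fully fetched but not displayed

-- variant 1: plain select (all rows really are fetched now)
SELECT owner, object_type FROM all_objects;

-- variant 2: with an aggregate and GROUP BY
SELECT owner, object_type, COUNT(*) FROM all_objects GROUP BY owner, object_type;

SET AUTOTRACE OFF
```

With TRACEONLY both timings include the complete fetch, so the comparison is fair.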
Function module for VBFA table
Hi Gurus,
Please help me with this issue.
I want to retrieve data from the VBFA table. While querying VBFA, the statement causes a performance issue on the production server.
SELECT vbelv
vbeln
INTO TABLE i_ref_data
FROM vbfa FOR ALL ENTRIES IN i_billing_main
WHERE
vbelv EQ i_billing_main-vbeln AND
vbtyp_n EQ c_vbtyp_n.
So I tried to retrieve the data using function module "RV_ORDER_FLOW_INFORMATION". I was not able to pass multiple document numbers to this function module, so I put it inside a loop, but this option also causes a performance issue:
LOOP AT i_billing_main_temp INTO wa_billing_main.
CLEAR : wa_comwa,wa_vbfa.
REFRESH i_vbfa.
wa_comwa-vbeln = wa_billing_main-vbeln.
* This function module is used for retrieving document flow data from VBFA
CALL FUNCTION 'RV_ORDER_FLOW_INFORMATION'
EXPORTING
comwa = wa_comwa
TABLES
vbfa_tab = i_vbfa.
SORT i_vbfa BY vbelv vbeln vbtyp_n.
DELETE ADJACENT DUPLICATES FROM i_vbfa COMPARING vbelv vbeln vbtyp_n.
SORT i_vbfa BY vbtyp_n.
READ TABLE i_vbfa
INTO wa_vbfa
WITH KEY vbtyp_n = c_vbtyp_n
BINARY SEARCH.
IF sy-subrc EQ 0.
wa_ref_data-vbeln = wa_vbfa-vbeln.
wa_ref_data-vbelv = wa_billing_main-vbeln.
APPEND wa_ref_data TO i_ref_data.
ENDIF.
ENDLOOP.
So kindly give me a solution for improving the performance here.
Is there any function module that accepts multiple document numbers as input?
Regards
P.Senthil Kumar
Edited by: senthil kumar on Mar 23, 2009 12:23 PM
First, add a check that the internal table used in the WHERE clause is not empty:
IF NOT i_billing_main[] IS INITIAL.
  SELECT vbelv
         vbeln
    INTO TABLE i_ref_data
    FROM vbfa
    FOR ALL ENTRIES IN i_billing_main
    WHERE vbelv   EQ i_billing_main-vbeln
      AND vbtyp_n EQ c_vbtyp_n.
ENDIF.
This is the best possible way to retrieve data from the VBFA table.
The other method you adopted will take more time, since you are calling the FM in a loop.
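As an untested sketch of that advice (reusing the names from this thread; lt_driver is a hypothetical helper table introduced here for illustration), one could combine the emptiness check with a deduplicated FOR ALL ENTRIES driver table, since duplicate document numbers make the database resolve the same key more than once:

```abap
* Work on a copy so the original billing table stays intact.
* lt_driver is a hypothetical name; its type is copied from i_billing_main.
DATA lt_driver LIKE i_billing_main.

lt_driver = i_billing_main.
SORT lt_driver BY vbeln.
DELETE ADJACENT DUPLICATES FROM lt_driver COMPARING vbeln.

* FOR ALL ENTRIES against an empty table ignores the FOR ALL ENTRIES
* condition and can read the whole table, so guard against it explicitly.
IF lt_driver[] IS NOT INITIAL.
  SELECT vbelv
         vbeln
    INTO TABLE i_ref_data
    FROM vbfa
    FOR ALL ENTRIES IN lt_driver
    WHERE vbelv   EQ lt_driver-vbeln
      AND vbtyp_n EQ c_vbtyp_n.
ENDIF.
```

The smaller and duplicate-free the driver table, the fewer database round trips the FOR ALL ENTRIES clause generates.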
Please check an ST05 trace for the above query to see whether the primary index is being used. Otherwise contact Basis to help you out. -
Select statement is taking a lot of time for the first execution?
Hi Experts,
I am facing the following issue. I am using one SELECT statement to retrieve all the contracts from the table CACS_CTRTBU, restricted with FOR ALL ENTRIES.
IF p_lt_zcacs[] IS NOT INITIAL.
  SELECT appl ctrtbu_id version gpart
         busi_begin busi_end tech_begin tech_end
         flg_cancel_obj flg_cancel_vers int_title
    FROM cacs_ctrtbu
    INTO TABLE lt_cacs
    FOR ALL ENTRIES IN p_lt_zcacs
    WHERE appl EQ gv_appl
      AND ctrtbu_id EQ p_lt_zcacs-ctrtbu_id
      AND ( flg_cancel_vers EQ '' OR version EQ '000000' )
      AND flg_cancel_obj EQ ''
      AND busi_begin LE p_busbegin
      AND busi_end GT p_busbegin.
ENDIF.
The WHERE condition is in the order of the available index. The index has APPL, CTRTBU_ID, FLG_CANCEL_VERS and FLG_CANCEL_OBJ.
The technical settings of table CACS_CTRTBU says that the "Buffering is not allowed"
Now the problem is: for the first execution of this SELECT statement, with 1.5 lakh (150,000) entries in P_LT_ZCACS, the statement takes 3 minutes.
If I execute it again in another run, with exactly the same parameter values and number of entries in P_LT_ZCACS (i.e. 1.5 lakh entries), it is executed in 3-4 seconds.
What can be the issue in this case? Why does the first execution take longer? Or is there any way to modify the SELECT statement to get better performance?
Thanks in advance
Sreejith A P
Hi,
>
sree jith wrote:
> What can be the issue in this case? Why does the first execution take longer?
> Sreejith A P
Sounds like caching or buffering in some layer down the I/O stack. Your first execution
seems to do the physical I/O, while your following executions can use the caches/buffers
that were filled by your first execution.
>
sree jith wrote:
> Or is there any way to modify the SELECT statement to get better performance?
> Sreejith A P
Whether modifying your SELECT statement or your indexes could help depends on your access details:
does your internal table P_LT_ZCACS contain duplicates?
what do your indexes look like?
what does your execution plan look like?
what are your execution figures in ST05 - Statement Summary?
(nr. of executions, records in total, total time, time per execution, records per execution, time per record, ...)
Kind regards,
Hermann
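On the first of those points (duplicates in P_LT_ZCACS), a minimal untested sketch, reusing the names from this thread and assuming CTRTBU_ID is the relevant key, would be to deduplicate the driver table before the FOR ALL ENTRIES:

```abap
* Duplicates in a FOR ALL ENTRIES table make the database resolve the
* same key more than once; removing them up front shrinks the work.
* lt_zcacs is a hypothetical helper table typed after p_lt_zcacs.
DATA lt_zcacs LIKE p_lt_zcacs.

lt_zcacs = p_lt_zcacs.
SORT lt_zcacs BY ctrtbu_id.
DELETE ADJACENT DUPLICATES FROM lt_zcacs COMPARING ctrtbu_id.

IF lt_zcacs[] IS NOT INITIAL.
  SELECT appl ctrtbu_id version gpart
         busi_begin busi_end tech_begin tech_end
         flg_cancel_obj flg_cancel_vers int_title
    FROM cacs_ctrtbu
    INTO TABLE lt_cacs
    FOR ALL ENTRIES IN lt_zcacs
    WHERE appl      EQ gv_appl
      AND ctrtbu_id EQ lt_zcacs-ctrtbu_id
      AND ( flg_cancel_vers EQ '' OR version EQ '000000' )
      AND flg_cancel_obj EQ ''
      AND busi_begin LE p_busbegin
      AND busi_end   GT p_busbegin.
ENDIF.
```

This does not change the first-execution caching effect described above, but it reduces the number of key lookups the database has to perform on every run.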