Optimize calculation
What are the ways to do Calculation script optimization?
All your posts seem to follow the same idea: how to..., how to....
Unfortunately you won't get much help with questions like that on here.
You should spend some time reading the docs and formulating useful questions that will be worth answering.
Cheers
John
http://john-goodwin.blogspot.com/
Similar Messages
-
Optimize calculation (Agg)
I have a cube that is about 75 GB now. I am running a consolidation on this cube. The issue is that the consolidation takes more than an hour to complete. I am trying to see if I can optimize it and bring the time down. I played with the cache size and data file size; it improved a little, but not much. Any suggestions on how to go about optimizing the calc will be highly appreciated. Thanks
Details ----
Essbase 7.1.5, on an 8-quad machine with 14 GB RAM. There are 6 other cubes running on the same box,
2 with more than 50 GB in size.
Direct I/O, LRE
Index size - 3.5 GB
Page Size - 75GB
Index Cache - 300 MB
Data File Cache - 650 MB
Code format is as below -
SET UPDATECALC OFF;
set msg summary;
set notice default;
set cache high;
set calchashtbl on;
set aggmissg on;
SET FRMLBOTTOMUP ON;
SET CALCPARALLEL 3;
FIX(&budgetYear, "CY Budget")
agg (Sparse1,Sparse2,Sparse3,Sparse4,Sparse5,Sparse6 );
ENDFIX
All your posts seem to follow the same idea: how to..., how to....
Unfortunately you won't get much help with questions like that on here.
You should spend some time reading the docs and formulating useful questions that will be worth answering.
Cheers
John
http://john-goodwin.blogspot.com/ -
Hi all,
I am trying to tune some statements that worked fine in Oracle 9 and are a pain in Oracle 10.
I have one big select (200 rows) that joins a handful of tables (each with 80 thousand or so records; one is bigger). The execution plan for this select looks fine, and data is returned in a reasonable time (a few seconds). The estimated cost is okay (5 digits long).
If I take this same select and put an "insert into target_table (...) as" before it, the execution plan looks totally different. In Oracle 9 the execution plan looks the same with the "insert into target_table (...) as" and the cost is also the same. But in Oracle 10 the cost has 13 digits ... and it does not finish within days.
The execution plan for the select alone uses many hash joins; with the insert wrapped around it, a lot of them are replaced with nested loops. I am a noob at reading execution plans, but even I can see that the system is approaching this totally differently.
Why does the optimizer calculate a totally different execution plan when it pulls the data for inserting instead of a simple select?
As a workaround, I will probably save the stuff in a materialized view ... but still I am curious why this is happening.
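For reference, the materialized view workaround mentioned above could be sketched like this (hypothetical object and column names; the actual select would be the big join described above):

```sql
-- Sketch only: persist the expensive SELECT once, then load from the result.
-- big_select_mv, table_one, table_two and the columns are hypothetical names.
CREATE MATERIALIZED VIEW big_select_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT t1.col_a, t2.col_b          -- the big join would go here
FROM   table_one t1
JOIN   table_two t2 ON t2.t1_id = t1.id;

INSERT INTO target_table (col_a, col_b)
SELECT col_a, col_b FROM big_select_mv;
```

This way the optimizer plans the expensive query as a standalone SELECT, and the final insert only copies the precomputed rows.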
Best regards,
Steff
PS: I tried hints (APPEND and ALL_ROWS) but they did not change the execution plan for the better; it looks different, but it has the same horrifying cost and does not return within a few minutes either.
Hi,
I somehow forgot about this issue, but I'm still curious what the reason is behind this ...
Good plan when creating ... :
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
CREATE TABLE STATEMENT Optimizer Mode=ALL_ROWS 48 K 11681
LOAD AS SELECT WINA_BESITZER.TEST_ODAP_BTSSM
VIEW WINA_BESITZER.V_ODAP_BTSSM 48 K 17 M 11089
HASH UNIQUE 48 K 19 M 10422
NESTED LOOPS OUTER 48 K 19 M 6024
HASH JOIN RIGHT OUTER 48 K 18 M 6024
VIEW 355 26 K 920
MERGE JOIN 355 35 K 918
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.AUFBAUZUSTAND 6 114 2
INDEX FULL SCAN WINA_BESITZER.AZST_PK 6 1
SORT JOIN 355 28 K 916
TABLE ACCESS FULL WINA_BESITZER.FE_BTS_SM 355 28 K 915
HASH JOIN RIGHT OUTER 48 K 15 M 5103
VIEW 14 896 1122
HASH JOIN 14 1 K 1121
VIEW SYS.VW_SQ_1 1 K 33 K 222
HASH GROUP BY 1 K 45 K 220
INDEX FAST FULL SCAN WINA_BESITZER.IDX_FESM_01 1 K 46 K 219
MERGE JOIN 1 K 137 K 899
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.AUFBAUZUSTAND 6 114 2
INDEX FULL SCAN WINA_BESITZER.AZST_PK 6 1
SORT JOIN 1 K 107 K 897
TABLE ACCESS FULL WINA_BESITZER.FE_BTS_SM 1 K 107 K 896
HASH JOIN RIGHT OUTER 48 K 12 M 3980
VIEW WINA_BESITZER.V_MANAGING_OMC_B 1 15 12
NESTED LOOPS OUTER 1 105 11
NESTED LOOPS 1 79 7
NESTED LOOPS 1 58 5
NESTED LOOPS 1 53 4
INDEX FAST FULL SCAN WINA_BESITZER.IDX_FOMC_01 1 24 2
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 29 2
INDEX UNIQUE SCAN WINA_BESITZER.IDX_FE_04 1 1
INDEX RANGE SCAN WINA_BESITZER.IDX_STO_01 1 5 1
INDEX RANGE SCAN WINA_BESITZER.IDX_BZ_11 1 21 2
VIEW PUSHED PREDICATE 1 26 4
NESTED LOOPS 1 63 3
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 28 3
INDEX UNIQUE SCAN WINA_BESITZER.FE_PK 1 2
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FE_NE_KLASSE_VIP 1 35 0
INDEX RANGE SCAN WINA_BESITZER.FNKV_FE_FK_I 1 0
HASH JOIN RIGHT OUTER 48 K 11 M 3967
VIEW WINA_BESITZER.V_BTSSM_PARENT_BSC_B 1 21 53
HASH UNIQUE 1 153 52
NESTED LOOPS OUTER 1 153 51
NESTED LOOPS 1 127 50
NESTED LOOPS OUTER 1 101 48
NESTED LOOPS 1 75 47
HASH JOIN 19 798 9
INDEX RANGE SCAN WINA_BESITZER.IDX_BZ_11 186 3 K 3
INDEX RANGE SCAN WINA_BESITZER.IDX_BZ_11 719 14 K 5
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 33 2
INDEX UNIQUE SCAN WINA_BESITZER.IDX_FE_04 1 1
INDEX RANGE SCAN WINA_BESITZER.IDX_STO_01 1 26 1
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 26 2
INDEX UNIQUE SCAN WINA_BESITZER.FE_PK 1 1
INDEX RANGE SCAN WINA_BESITZER.IDX_STO_01 1 26 1
HASH JOIN RIGHT OUTER 48 K 10 M 3913
VIEW WINA_BESITZER.V_MANAGING_OMC_P 1 15 13
NESTED LOOPS OUTER 1 115 12
NESTED LOOPS 1 89 8
NESTED LOOPS 1 58 5
NESTED LOOPS 1 53 4
INDEX FAST FULL SCAN WINA_BESITZER.IDX_FOMC_01 1 24 2
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 29 2
INDEX UNIQUE SCAN WINA_BESITZER.IDX_FE_04 1 1
INDEX RANGE SCAN WINA_BESITZER.IDX_STO_01 1 5 1
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.BEZIEHUNG 1 31 3
INDEX RANGE SCAN WINA_BESITZER.IDX_BZ_03 1 2
VIEW PUSHED PREDICATE 1 26 4
NESTED LOOPS 1 63 3
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 28 3
INDEX UNIQUE SCAN WINA_BESITZER.FE_PK 1 2
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FE_NE_KLASSE_VIP 1 35 0
INDEX RANGE SCAN WINA_BESITZER.FNKV_FE_FK_I 1 0
HASH JOIN RIGHT OUTER 48 K 9 M 3900
VIEW WINA_BESITZER.V_BTSSM_PARENT_BSC_P 1 21 68
HASH UNIQUE 1 173 67
NESTED LOOPS OUTER 1 173 66
NESTED LOOPS 1 147 65
NESTED LOOPS OUTER 1 121 63
NESTED LOOPS 1 95 62
HASH JOIN 19 1 K 24
INDEX RANGE SCAN WINA_BESITZER.IDX_BZ_12 186 5 K 6
INDEX RANGE SCAN WINA_BESITZER.IDX_BZ_12 719 21 K 18
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 33 2
INDEX UNIQUE SCAN WINA_BESITZER.IDX_FE_04 1 1
INDEX RANGE SCAN WINA_BESITZER.IDX_STO_01 1 26 1
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 26 2
INDEX UNIQUE SCAN WINA_BESITZER.FE_PK 1 1
INDEX RANGE SCAN WINA_BESITZER.IDX_STO_01 1 26 1
HASH JOIN 48 K 8 M 3831
INDEX FAST FULL SCAN WINA_BESITZER.IDX_STO_01 64 K 1 M 97
HASH JOIN RIGHT OUTER 48 K 7 M 3205
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FE_NE_TYP 238 8 K 11
INDEX RANGE SCAN WINA_BESITZER.FNT_IDX_STATUSDATES 8 6
HASH JOIN RIGHT OUTER 48 K 5 M 3193
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FE_NE_KLASSE 1 K 28 K 16
INDEX RANGE SCAN WINA_BESITZER.FNK_IDX_DATES 39 10
HASH JOIN RIGHT OUTER 48 K 4 M 3177
TABLE ACCESS FULL WINA_BESITZER.SYSTEM_HERSTELLER 171 2 K 3
HASH JOIN RIGHT OUTER 48 K 4 M 3173
INDEX FULL SCAN WINA_BESITZER.IDX_NL_02 12 72 1
HASH JOIN 48 K 4 M 3171
TABLE ACCESS FULL WINA_BESITZER.NETZWERK 7 63 3
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 48 K 3 M 3166
NESTED LOOPS 48 K 3 M 3167
INDEX RANGE SCAN WINA_BESITZER.IDX_FET_01 1 11 1
INDEX RANGE SCAN WINA_BESITZER.FE_FET_FK_I 48 K 41
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.SW_VERSION 1 18 0
INDEX UNIQUE SCAN WINA_BESITZER.SWV_PK 1 0
Bad plan when inserting:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
INSERT STATEMENT Optimizer Mode=ALL_ROWS 48 K 12 M
VIEW WINA_BESITZER.V_ODAP_BTSSM 48 K 16 M 12 M
SORT UNIQUE 48 K 26 M 12 M
HASH JOIN RIGHT OUTER 48 K 26 M 12 M
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FE_NE_KLASSE 1 K 28 K 16
INDEX RANGE SCAN WINA_BESITZER.FNK_IDX_DATES 39 10
NESTED LOOPS OUTER 48 K 25 M 12 M
HASH JOIN RIGHT OUTER 48 K 25 M 12 M
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FE_NE_TYP 238 8 K 11
INDEX RANGE SCAN WINA_BESITZER.FNT_IDX_STATUSDATES 8 6
HASH JOIN RIGHT OUTER 48 K 23 M 12 M
TABLE ACCESS FULL WINA_BESITZER.SYSTEM_HERSTELLER 171 2 K 3
HASH JOIN 48 K 22 M 12 M
TABLE ACCESS FULL WINA_BESITZER.NETZWERK 7 63 3
HASH JOIN 48 K 22 M 12 M
INDEX FAST FULL SCAN WINA_BESITZER.IDX_STO_01 64 K 1 M 97
HASH JOIN RIGHT OUTER 48 K 20 M 12 M
INDEX FULL SCAN WINA_BESITZER.IDX_NL_02 12 72 1
NESTED LOOPS OUTER 48 K 20 M 12 M
NESTED LOOPS OUTER 48 K 19 M 11 M
HASH JOIN RIGHT OUTER 48 K 19 M 11 M
VIEW WINA_BESITZER.V_BTSSM_PARENT_BSC_P 1 30 67
SORT UNIQUE 1 173 67
NESTED LOOPS OUTER 1 173 66
NESTED LOOPS 1 147 65
NESTED LOOPS OUTER 1 121 63
NESTED LOOPS 1 95 62
HASH JOIN 19 1 K 24
INDEX RANGE SCAN WINA_BESITZER.IDX_BZ_12 186 5 K 6
INDEX RANGE SCAN WINA_BESITZER.IDX_BZ_12 719 21 K 18
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 33 2
INDEX UNIQUE SCAN WINA_BESITZER.IDX_FE_04 1 1
INDEX RANGE SCAN WINA_BESITZER.IDX_STO_01 1 26 1
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 26 2
INDEX UNIQUE SCAN WINA_BESITZER.FE_PK 1 1
INDEX RANGE SCAN WINA_BESITZER.IDX_STO_01 1 26 1
HASH JOIN RIGHT OUTER 48 K 17 M 11 M
VIEW WINA_BESITZER.V_BTSSM_PARENT_BSC_B 1 30 52
SORT UNIQUE 1 153 52
NESTED LOOPS OUTER 1 153 51
NESTED LOOPS 1 127 50
NESTED LOOPS OUTER 1 101 48
NESTED LOOPS 1 75 47
HASH JOIN 19 798 9
INDEX RANGE SCAN WINA_BESITZER.IDX_BZ_11 186 3 K 3
INDEX RANGE SCAN WINA_BESITZER.IDX_BZ_11 719 14 K 5
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 33 2
INDEX UNIQUE SCAN WINA_BESITZER.IDX_FE_04 1 1
INDEX RANGE SCAN WINA_BESITZER.IDX_STO_01 1 26 1
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 26 2
INDEX UNIQUE SCAN WINA_BESITZER.FE_PK 1 1
INDEX RANGE SCAN WINA_BESITZER.IDX_STO_01 1 26 1
NESTED LOOPS OUTER 48 K 16 M 11 M
NESTED LOOPS OUTER 48 K 10 M 196170
NESTED LOOPS 48 K 4 M 3167
INDEX RANGE SCAN WINA_BESITZER.IDX_FET_01 1 11 1
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 48 K 3 M 3166
INDEX RANGE SCAN WINA_BESITZER.FE_FET_FK_I 48 K 41
VIEW PUSHED PREDICATE 1 139 4
NESTED LOOPS 1 107 4
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FE_BTS_SM 1 88 3
INDEX RANGE SCAN WINA_BESITZER.IDX_FESM_01 1 2
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.AUFBAUZUSTAND 1 19 1
INDEX UNIQUE SCAN WINA_BESITZER.AZST_PK 1 0
VIEW PUSHED PREDICATE 1 126 225
HASH JOIN 1 116 225
NESTED LOOPS 1 94 5
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FE_BTS_SM 1 75 4
INDEX RANGE SCAN WINA_BESITZER.IDX_FESM_01 1 3
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.AUFBAUZUSTAND 1 19 1
INDEX UNIQUE SCAN WINA_BESITZER.AZST_PK 1 0
VIEW SYS.VW_SQ_1 1 K 33 K 220
SORT GROUP BY 1 K 45 K 220
INDEX FAST FULL SCAN WINA_BESITZER.IDX_FESM_01 1 K 46 K 219
VIEW PUSHED PREDICATE WINA_BESITZER.V_MANAGING_OMC_B 1 16 10
NESTED LOOPS OUTER 1 89 10
NESTED LOOPS 1 85 7
NESTED LOOPS 1 61 6
NESTED LOOPS 1 56 5
INDEX RANGE SCAN WINA_BESITZER.IDX_BZ_11 1 27 3
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 29 2
INDEX UNIQUE SCAN WINA_BESITZER.IDX_FE_04 1 1
INDEX RANGE SCAN WINA_BESITZER.IDX_STO_01 1 5 1
INDEX RANGE SCAN WINA_BESITZER.IDX_FOMC_01 1 24 1
VIEW PUSHED PREDICATE 1 4 3
NESTED LOOPS 1 63 3
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 28 3
INDEX UNIQUE SCAN WINA_BESITZER.FE_PK 1 2
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FE_NE_KLASSE_VIP 1 35 0
INDEX RANGE SCAN WINA_BESITZER.FNKV_FE_FK_I 1 0
VIEW PUSHED PREDICATE WINA_BESITZER.V_MANAGING_OMC_P 1 16 11
NESTED LOOPS OUTER 1 99 11
NESTED LOOPS 1 95 8
NESTED LOOPS 1 71 7
NESTED LOOPS 1 66 6
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.BEZIEHUNG 1 37 4
INDEX RANGE SCAN WINA_BESITZER.IDX_BZ_02 1 3
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 29 2
INDEX UNIQUE SCAN WINA_BESITZER.IDX_FE_04 1 1
INDEX RANGE SCAN WINA_BESITZER.IDX_STO_01 1 5 1
INDEX RANGE SCAN WINA_BESITZER.IDX_FOMC_01 1 24 1
VIEW PUSHED PREDICATE 1 4 3
NESTED LOOPS 1 63 3
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FUNKTIONSEINHEIT 1 28 3
INDEX UNIQUE SCAN WINA_BESITZER.FE_PK 1 2
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.FE_NE_KLASSE_VIP 1 35 0
INDEX RANGE SCAN WINA_BESITZER.FNKV_FE_FK_I 1 0
TABLE ACCESS BY INDEX ROWID WINA_BESITZER.SW_VERSION 1 18 0
INDEX UNIQUE SCAN WINA_BESITZER.SWV_PK 1 0
Same statistics, same view ... totally different plan. -
Index not picked when bind parameter can be null?
Hi,
In my query i have a primary key say deptid and has an unique index defined.
when i do a query as
select * from dept where ((deptid = :pDeptId) or (:pDeptId is null))
index is not picked up and a full table scan happens.
Can anyone explain why?
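As an aside, a common rewrite for this bind-or-null pattern (a sketch, not taken from the thread) is to split the two cases with UNION ALL so that the branch with the equality predicate can use the unique index:

```sql
-- Only one branch returns rows for any given bind value:
-- if :pDeptId is set, the first branch can use the unique index on deptid;
-- if :pDeptId is null, only the second branch (a full scan) produces rows.
SELECT * FROM dept WHERE deptid = :pDeptId
UNION ALL
SELECT * FROM dept WHERE :pDeptId IS NULL;
```

With the single OR condition, the optimizer has to pick one plan that works for both cases, and a full table scan is the only plan that does.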
Thanks
Manish
Solomon Yakobson wrote:
I purposely chose a small table. Yes, the optimizer's calculated costs are such that the FTS cost is the same as the combined cost of index fast scan + table access by rowid. And that is exactly what I am questioning. Setting aside what you call "boundary" cases where rows are sparse enough (e.g. your example with lots of deleted rows) and similar scenarios (e.g. pctfree is very large, etc.), FTS is always less costly than index fast scan + table access by rowid by at least one I/O. I understand I am splitting hairs a bit, since the difference is negligible, but still it is a sub-optimal plan.
Solomon,
and yes, in the circumstances you describe the FTS will be chosen, so what is your point? Your test case represents such an "odd" scenario (sparse, large pctfree, etc., fewer rows than blocks, a clustering factor of the index less than the number of blocks of the table), so what do you want to prove using this test case? It doesn't represent the "normal" circumstances where the FTS is less costly.
In passing, the cost of the FTS and the combined operation are not the same in my default 10.2.0.4 database, the FTS is more expensive, as I've outlined previously.
The reason: It takes at least 1 multi-block I/O request to read the 5 blocks of the table. The optimizer assumes that the time it takes to perform a multi-block read request is 26ms when using default NOWORKLOAD system statistics with 8KB default block size. It assumes further it takes 12ms to perform a single-block read request, so the cost (expressed in single-block reads) of performing a single multi-block read request is 2.16. Since Oracle 9i, there is a hidden parameter "_table_scan_cost_plus_one" set to "true", it used to be "false" in 8i, making the cost 3.16, so we end up with a rounded cost of 3 for the FTS.
The cost of the combined index + table access is 2, one I/O for the index, plus one I/O for the table access. So from an optimizer perspective this plan is not sub-optimal, but the superior one, mainly due to the fact that one gets added to the table scan cost from 9i on, and you have the unusual clustering factor of 1.
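The arithmetic above can be written out explicitly (assuming the NOWORKLOAD defaults ioseektim = 10 ms and iotfrspeed = 4096 bytes/ms, an 8 KB block, and a multiblock read count of 8):

```latex
\begin{aligned}
\text{sreadtim} &= \text{ioseektim} + \tfrac{\text{blocksize}}{\text{iotfrspeed}}
                 = 10 + \tfrac{8192}{4096} = 12\ \text{ms}\\
\text{mreadtim} &= \text{ioseektim} + \tfrac{\text{mbrc}\cdot\text{blocksize}}{\text{iotfrspeed}}
                 = 10 + \tfrac{8 \cdot 8192}{4096} = 26\ \text{ms}\\
\text{cost of one multiblock read} &= \tfrac{\text{mreadtim}}{\text{sreadtim}}
                 = \tfrac{26}{12} \approx 2.16\\
\text{cost}_{\text{FTS}} &= 2.16 + 1 = 3.16 \;\Rightarrow\; \text{rounded to } 3
\end{aligned}
```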
As you can see, it takes a lot of "boundary" conditions for this to happen, and it's all down to the fact that the table is so small that it has only 4 rows in 5 blocks, and the clustering factor of the index is less than the number of blocks of the table, which is quite unusual. An "optimal" clustering factor usually corresponds to the number of blocks of the table; a "bad" clustering factor corresponds to the number of rows.
Therefore I suggested "adjusting" the clustering factor to a more usual value, but there is actually nothing usual about this test case, since you have fewer rows than blocks.
So I guess you can say whatever you want about this being a "sub-optimal" plan, but I really don't see the point it makes, given the flawed test case. Run it with a more reasonable setup and the FTS will be used in those cases where it is reasonable according to the inputs the CBO gets.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
How to utilize index in selection statement
hi
How do I utilize an index in a SELECT statement? How does it affect performance, and is there another alternative to improve performance?
Thanks
Hi Suresh,
For each SQL statement, the database optimizer determines the strategy for accessing data records. Access can be with database indexes (index access) or without database indexes (full table scan). The cost-based database optimizer determines the access strategy on the basis of:
*Conditions in the WHERE clause of the SQL statement
*Database indexes of the table(s) affected
*Selectivity of the table fields contained in the database indexes
*Size of the table(s) affected
The table and index statistics supply information about the selectivity of table fields, the selectivity of combinations of table fields, and the table size. Before a database access is performed, the database optimizer cannot calculate the exact cost of the access; it uses the information described above to estimate it. The aim of the optimization is to reduce the number of data blocks to be read (logical read accesses). Data blocks define the level of detail in which data is written to or read from the hard disk.
Introduction to Database Indexes
When you create a database table in the ABAP Dictionary, you must specify the combination of fields that enable an entry within the table to be clearly identified. Position these fields at the top of the table field list, and define them as key fields.
After activating the table, an index is created (for Oracle, Informix, DB2) that consists of all key fields. This index is called a primary index. The primary index is unique by definition. As well as the primary index, you can define one or more secondary indexes for a table in the ABAP Dictionary, and create them on the database. Secondary indexes can be unique or non-unique. Index records and table records are organized in data blocks.
If you dispatch an SQL statement from an ABAP program to the database, the program searches for the data records requested either in the database table itself (full table scan) or by using an index (index unique scan or index range scan). If all fields requested are found in the index using an index scan, the table records do not need to be accessed.
A data block shows the level of detail in which data is written to the hard disk or read from the hard disk. Data blocks may contain multiple data records, but a single data record may be spread across several data blocks.
Data blocks can be index blocks or table blocks. The database organizes the index blocks in the form of a multi-level B* tree. There is a single index block at root level, which contains pointers to the index blocks at branch level. The branch blocks contain either some of the index fields and pointers to index blocks at leaf level, or all index fields and a pointer to the table records organized in table blocks. The index blocks at leaf level contain all index fields and pointers to the table records from the table blocks.
The pointer that identifies one or more table records has a specific name. It is called, for example, ROWID for Oracle databases. The ROWID consists of the number of the database file, the number of the table block, and the row number within the table block.
The index records are stored in the index tree and sorted according to index field. This enables accelerated access using the index. The table records in the table blocks are not sorted.
An index should not consist of too many fields. Having a few very selective fields increases the chance of reusability, and reduces the chance of the database optimizer selecting an unsuitable access path.
Index Unique Scan
If, for all fields in a unique index (primary index or unique secondary index), WHERE conditions are specified with '=' in the WHERE clause, the database optimizer selects the access strategy index unique scan.
For the index unique scan access strategy, the database usually needs to read a maximum of four data blocks (three index blocks and one table block) to access the table record.
select * from VVBAK where vbeln = '00123'. ... endselect.
In the SELECT statement shown above, the table VVBAK is accessed. The fields MANDT and VBELN form the primary key, and are specified with '=' in the WHERE clause. The database optimizer therefore selects the index unique scan access strategy, and only needs to read four data blocks to find the table record requested.
Index Range Scan
select * from VVBAP where vbeln = '00123'. ... endselect.
In the example above, not all fields in the primary index of the table VVBAP (key fields MANDT, VBELN, POSNR) are specified with '=' in the WHERE clause. The database optimizer checks a range of index records and deduces the table records from these index records. This access strategy is called an index range scan.
To execute the SQL statement, the DBMS first reads a root block (1) and a branch block (2). The branch block contains pointers to two leaf blocks (3 and 4). In order to find the index records that fulfill the criteria in the WHERE clause of the SQL statement, the DBMS searches through these leaf blocks sequentially. The index records found point to the table records within the table blocks (5 and 6).
If index records from different index blocks point to the same table block, this table block must be read more than once. In the example above, an index record from index block 3 and an index record from index block 4 point to table records in table block 5. This table block must therefore be read twice. In total, seven data blocks (four index blocks and three table blocks) are read.
The index search string is determined by the concatenation of the WHERE conditions for the fields contained in the index. To ensure that as few index blocks as possible are checked, the index search string should be specified starting from the left, without placeholders ('_' or %). Because the index is stored and sorted according to the index fields, a connected range of index records can be checked, and fewer index blocks need to be read.
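For illustration (assuming an index with VBELN as a leading field), a search string anchored on the left can use a connected index range, while a leading placeholder cannot:

```sql
-- Anchored on the left: a connected range of index records can be checked.
SELECT * FROM vvbap WHERE vbeln LIKE '0012%';

-- Leading placeholder: the index search string is not anchored,
-- so far more index blocks (or the whole table) must be read.
SELECT * FROM vvbap WHERE vbeln LIKE '%123';
```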
All index blocks and table blocks read during an index range scan are stored in the data buffer at the top of an LRU (least recently used) list. This can lead to many other data blocks being forced out of the data buffer. Consequently, more physical read accesses become necessary when other SQL statements are executed.
DB Indexes: Concatenation
In the concatenation access strategy, one index is reused. Therefore, various index search strings also exist. An index unique scan or an index range scan can be performed for the various index search strings. Duplicate entries in the results set are filtered out when the search results are concatenated.
Select * from vvbap where vbeln in ('00123', '00133', '00134').
endselect.
In the SQL statement above, a WHERE condition with an IN operation is specified over field VBELN. The fields MANDT and VBELN are shown on the left of the primary index. Various index search strings are created, and an index range scan is performed over the primary index for each index search string. Finally, the result is concatenated.
Full Table Scan
select * from vvbap where matnr = '00015'.
endselect
If the database optimizer selects the full table scan access strategy, the table is read sequentially. Index blocks do not need to be read.
For a full table scan, the read table blocks are added to the end of an LRU list. Therefore, no data blocks are forced out of the data buffer. As a result, in order to process a full table scan, comparatively little memory space is required within the data buffer.
The full table scan access strategy is very effective if a large part of a table (for example, 5% of all table records) needs to be read. In the example above, a full table scan is more efficient than access using the primary index.
In Brief
Index unique scan: The index selected is unique (primary index or unique secondary index) and fully specified. One or no table record is returned. This type of access is very effective, because a maximum of four data blocks needs to be read.
Index range scan: The index selected is unique or non-unique. For a non-unique index, this means that not all index fields are specified in the WHERE clause. A range of the index is read and checked. An index range scan may not be as effective as a full table scan. The table records returned can range from none to all.
Full table scan: The whole table is read sequentially. Each table block is read once. Since no index is used, no index blocks are read. The table records returned can range from none to all.
Concatenation: An index is used more than once. Various areas of the index are read and checked. To ensure that the application receives each table record only once, the search results are concatenated to eliminate duplicate entries. The table records returned can range from none to all.
Regards,
Balaji Reddy G
***Rewards if answers are helpful -
Universe design on Database Views Vs. on Tables
Hi,
I am building a universe over an Oracle database. The warehouse has both tables and views. Some of the reports we will be creating are based on the tables for which views are created. It would be my call to choose the reporting tool (WebI or Crystal).
I want to know what the best approach will be. Does creating a universe on database views affect report performance (as it adds an additional layer: Universe > View > Actual table)? If yes, would it be feasible to go for Crystal Reports instead of WebI?
Regards,
Chetashri
There are two points you have to consider.
1. Either Crystal or WebI
a) Generally, we prefer Crystal Reports when business users want pixel-perfect reporting. For enterprise reports such as general ledger, transactions, and so on, we prefer Crystal Reports.
Crystal Reports has a lot of options that help us create reports that are impossible in other tools.
b) WebI reports are very flexible and easy to create. WebI is not as flexible as Crystal, but most of the reporting capabilities are available in it. Usually I first try to build the report in WebI; if the tool is not capable of creating that report, I go for Crystal.
But WebI reports are preferable.
2. Peformance with Table or View
There is now no big difference between a view and a table when the view refers to only a single table, because of the optimization strategies available in the Oracle database.
But when a view refers to multiple tables in its definition, the view has a slight performance edge, because Oracle calculates the view optimizations in advance.
But you could use tables or views interchangeably if there is less data.
The steps for creating reports from views or tables are the same: instead of a view you could use tables, and instead of tables you could use views.
Hope this helps
Regards
Gowtham -
Good afternoon
We are looking at implementing Oracle Planning. We had it 5 years ago, but found it incredibly cumbersome and slow. My questions are as follows:
Does it still sit on a block storage Essbase cube? Or have they figured out how to use ASO cubes with it?
If it is on a BSO, that would mean calculations - has the speed improved?
Is there still a size limitation on the input forms?
How do you report out of it? Can you use the Excel Essbase add in? Or is Smartview exclusively used?
Thanks as always
user3055639 wrote:
Good afternoon
we are looking at implementing ORACLE Planning. We had it 5 years ago, but found it incredibly cumbersome and slow. My questions are as follows:
Does it still sit on a block storage essbase cube? or have they figured out how to use ASO cubes with it?
It still sits on a BSO database, but if you own Essbase as well, there is an option to spin off an ASO reporting cube.
If it is on a BSO, that would mean calculations - has the speed improved?
It may or may not have, depending on what you are calculating. I will say a good Planning consultant has many ways to optimize calculations.
Is there still a size limitation on the input forms?
There is no official limit on form size, but if forms get too big, they will be slower and cumbersome. The newest version, 11.1.2.2, has redesigned the forms using ADF controls, so they should be faster and more flexible. With forms you can now also allow users to do ad hoc queries from them.
How do you report out of it? Can you use the Excel Essbase add-in? Or is Smart View exclusively used?
While you can use the add-in, you are better off using Smart View. Smart View will do almost everything the Planning web forms will do (you can bring up the forms in Smart View), load from there, run business rules, etc.
Thanks as always -
Hi All,
I wanted to know about the statistics checks on cubes or ODS objects that should be carried out in the system (BW 3.5).
Which statistics checks should I carry out daily (or otherwise), and with what frequency?
Also, I wanted to know about the significance of these statistics checks.
Thank and Best Regards,
Sharmishtha
Hi,
For an overview of your table statistics, go to the DBSTATT* tables; for instance DBSTATTORA if your DB is Oracle. They will show you how a table was analyzed and when it was last analyzed.
DB20 is the central transaction for checking statistics.
You should refresh your statistics via the performance tab of an infoprovider and/or via process chains.
Statistics must be updated regularly in order for your RDBMS optimizer to calculate the right execution plan for a particular query.
The frequency with which to update statistics will depend on the number of records changed/inserted/deleted in comparison with the total (well shown in DB20).
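At the Oracle level, refreshing the statistics of a single table could look like the following sketch (hypothetical schema and table names; in a BW system you would normally use DB20, BRCONNECT, or the InfoProvider performance tab instead):

```sql
-- Sketch: gather fresh optimizer statistics for one table.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SAPR3',          -- schema owner (assumption)
    tabname => '/BIC/FZCUBE01'   -- InfoCube fact table (hypothetical name)
  );
END;
/
```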
hope this helps,
Olivier. -
Hi All,
I have two dimensions that sort of relate to each other
- Products
- SKU
Each product (or service) is a combination of multiple SKU's. One SKU may be used across multiple products as well.
In order to optimize calculation I want to do something like this:
FIX ("TR1100" /*Product Code*/ or I can use @Descendants("All Products",0)
FIX( @UDA("SKU", @NAME(@CURRMBR("PRODUCT")) ) )
//Do calculation here!
ENDFIX
ENDFIX
I hope to get only the relevant SKUs for each product, thus optimizing the calculation. However, I get the following error:
Error: 1200315 Error parsing formula for [FIX STATEMENT] (line 15): invalid object type
If I remove the @NAME function, I get the following error (which is quite obvious this should happen):
Error: 1200354 Error parsing formula for [FIX STATEMENT] (line 16): expected string [STRING] found [MEMBER] ([@CURRMBR]) in function [@UDA]
Can anyone help me figure out how to get this to work? If not, any alternate strategies to achieve this would be great!
Shehzad
It looks like your syntax should work.
Do you have any "PRODUCT" members that don't resolve to a UDA?
I wonder if maybe the "Invalid Object Type" error could be because there is a "PRODUCT" member that isn't a UDA.
Robert -
Hi all
I am wondering: in the Metalink document "Case Study: Analyzing 10053 Trace Files" there is a SQL statement whose plan is considered bad with cost 20762, while the same SQL with a NO_INDEX hint is considered to have a good plan with cost 58201.
The difference is more than 50%. How can that be? Can the CBO choose a plan that has a higher cost instead of the lower-cost one? Am I thinking wrong, or did I misunderstand the document?
Best Regards
The optimizer's calculated cost is based on a large number of statistics and parameters. According to the section "D) Calculate the multiblock read divisor", the "Mdivisor" (MBRC?) is 1.07 - which would suggest to the optimizer that when it requests a full table scan or fast full index scan, Oracle will be able to perform multiblock reads of an average of only 1.07 blocks. This will lead the optimizer to believe that a full table scan or fast full index scan will operate very slowly - possibly reading 8KB in each read request, rather than a more efficient 512KB to 1MB. The document also suggests that someone may have inappropriately changed other parameters, which would affect the calculated costs. Consider what might happen if someone adjusts the OPTIMIZER_INDEX_COST_ADJ parameter to a value of 1 - suddenly index access paths will have calculated costs that are 0.01 times their original value, making index access paths appear to the optimizer to be very cheap, yet such a change will not make index access paths complete 100 times faster than before.
In short, the optimizer is capable of being fooled by poorly set statistics and parameters.
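To make the two effects above concrete, here is a deliberately simplified sketch with made-up numbers (my own illustrative model, not Oracle's actual costing formulas and not the figures from the 10053 trace):

```python
import math

def index_cost(base_cost, optimizer_index_cost_adj=100):
    """Index access path cost scaled by OPTIMIZER_INDEX_COST_ADJ / 100.
    A setting of 1 makes the calculated cost about 0.01x the default."""
    return max(1, math.ceil(base_cost * optimizer_index_cost_adj / 100))

def full_scan_read_requests(table_blocks, mbrc):
    """Approximate read requests for a full scan: with an effective
    multiblock read count of only 1.07, nearly every block becomes
    its own read request, so the scan looks very expensive."""
    return math.ceil(table_blocks / mbrc)

print(index_cost(5000, 1))                   # 50 instead of 5000
print(full_scan_read_requests(10000, 1.07))  # 9346 read requests
print(full_scan_read_requests(10000, 64))    # only 157 read requests
```

The point is only that small, wrongly set inputs shift the calculated costs by orders of magnitude without changing actual I/O speed at all.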
Charles Hooper
Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Optimize MRP material package calculation in parallel MRP
Dear all,
How do I optimize the MRP material package calculation in parallel MRP?
How and where can I see and change the package build-up for a parallel MRP run?
Thanks
RAGHU
Dear,
Please run report program RMMDMONI
Important points:
-Materials
-Runtime
-Read situation
-Save result
-MRP list
On the last page of the MRP spool you can see how the work processes (WPs) were used.
Depending on the WP usage, you need to think about the packet size used for parallel planning.
Also refer to OSS Note 568593 for details.
Also please test BAdI MD_MRP_RUN_PARALLEL to set the package.
OPP1 -> Business Add-Ins -> 'Package size in parallel planning' contains detailed information on this topic.
Hope this is clear.
Regards,
R.Brahmankar -
Regarding calculation script optimization
Hi,
I have gone through the calculation optimization section in the DBAG.
I have a calc script that needs to be optimized.
So are there any particular practices to follow to optimize a calc script in order to reduce its run time?
Thanks,
Ram
Hi,
1. When you say it needs to be optimized: is it taking too long, or what exactly is the issue?
2. In addition to the standard recommendations in the DBAG, you need to look at your calculation script and find which lines are taking the time (with the help of the logs).
3. Not every standard recommendation will work; that's why consultants come in handy.
Sandeep Reddy Enti
HCC
http://hyperionconsultancy.com/ -
Hi Experts
I have an issue with a business rule that runs a long time; here is the case.
The calculation is taking almost 2 hours to run, and it needs to be optimized.
The maximum time it should take is 5-10 minutes to execute.
Please suggest ASAP.
Hi, this is the macro:
FIX("00011")
FIX("No SubLedger", "Amount")
/********* Get Actual Margin % ***********/
FIX(@RELATIVE("Existing FL",0))
FIX([parm1]:[Parm2])
"Margin %"(
IF("500000"->"AllSubLedger"->"Actual"->"Final"->&ACTYR==#MISSING OR
"500000"->"AllSubLedger"->"Actual"->"Final"->&ACTYR==0)
"Margin %"="Margin %"->"No SubLedger"->"No Management Entity"->"ActualJDE"->"Final"->&ACTYR;
"COGS"="500000"->"AllSubLedger" * (1 - "Margin %");
"Initial Margin"="500000"->"AllSubLedger" - "COGS";
ELSE
"Margin %"="Margin"->"AllSubLedger"->"Actual"->"Final"->&ACTYR /
"500000"->"AllSubLedger"->"Actual"->"Final"->&ACTYR;
"COGS"="500000"->"AllSubLedger" * (1 - "Margin %");
"Initial Margin"="500000"->"AllSubLedger" - "COGS";
ENDIF)
ENDFIX
FIX([parm3]:"Dec")
"Margin %"(
IF("500000"->"AllSubLedger"->"Actual"->"Final"->&PRIORYRACT==#MISSING OR
"500000"->"AllSubLedger"->"Actual"->"Final"->&PRIORYRACT==0)
"Margin %"="Margin %"->"No SubLedger"->"No Management Entity"->"ActualJDE"->"Final"->&PRIORYRACT;
"COGS"="500000"->"AllSubLedger" * (1 - "Margin %");
"Initial Margin"="500000"->"AllSubLedger" - "COGS";
ELSE
"Margin %"="Margin"->"AllSubLedger"->"Actual"->"Final"->&PRIORYRACT /
"500000"->"AllSubLedger"->"Actual"->"Final"->&PRIORYRACT;
"COGS"="500000"->"AllSubLedger" * (1 - "Margin %");
"Initial Margin"="500000"->"AllSubLedger" - "COGS";
ENDIF)
ENDFIX
ENDFIX
FIX(@RELATIVE("New FL",0))
FIX([parm1]:[Parm2])
"Margin %"="Margin %"->"No SubLedger"->"No Management Entity"->"ActualJDE"->"Final"->&ACTYR;
"COGS"="500000"->"AllSubLedger" * (1 - "Margin %");
"Initial Margin"="500000"->"AllSubLedger" -"COGS";
ENDFIX
FIX([parm3]:"Dec")
"Margin %"="Margin %"->"No SubLedger"->"No Management Entity"->"ActualJDE"->"Final"->&PRIORYRACT;
"COGS"="500000"->"AllSubLedger" * (1 - "Margin %");
"Initial Margin"="500000"->"AllSubLedger" -"COGS";
ENDFIX
ENDFIX
/************* Margin by Total Category ****************/
FIX([parm4]:"Dec")
AGG("ManagementEntity");
FIX("No Management Entity")
"Margin %"=("500000"->"AllSubLedger"->"FL" - "COGS"->"FL")/"500000"->"AllSubLedger"->"FL";
"Target Margin %"(
IF("Target Margin %"==#MISSING)
"Target Margin %"="Margin %";
ENDIF)
"Target Margin"="500000"->"AllSubLedger"->"FL" * "Target Margin %";
"Target COGS"="500000"->"AllSubLedger"->"FL" * (1 - "Target Margin %");
"Margin Improvement"="Target Margin" - "Initial Margin"->"FL";
"Improvement Rate %"="Margin Improvement" / "500000"->"AllSubLedger"->"FL";
ENDFIX
FIX(@RELATIVE("FL",0))
"New COGS"="COGS" - ("Improvement Rate %"->"No Management Entity" * "500000"->"AllSubLedger");
"New Margin"="500000"->"AllSubLedger" - "New COGS";
"New Margin %"="New Margin" / "500000"->"AllSubLedger";
"New COGS"(
IF("New Margin Override %">0)
"New COGS"="500000"->"AllSubLedger" * (1 - "New Margin Override %");
"New Margin"="500000"->"AllSubLedger" - "New COGS";
"New Margin %"="New Margin" / "500000"->"AllSubLedger";
ENDIF)
ENDFIX
AGG("ManagementEntity");
FIX("Margin Plug Store")
"611010.1000"="Target COGS"->"No Management Entity" - "New COGS"->"FL";
ENDFIX
AGG("ManagementEntity");
ENDFIX
ENDFIX
/******** Allocate to Category Level **********/
FIX(@RELATIVE("ProductCategory",0))
FIX(@RELATIVE("Existing FL",0))
FIX(@RELATIVE("610000",0), @RELATIVE("620000",0), @RELATIVE("630000",0), @RELATIVE("670000",0))
FIX([parm1]:[Parm2])
"Amount"(
IF("Amount"->"COS"->"AllSubLedger"->"Actual"->"Final"->&ACTYR==#MISSING OR "Amount"->"COS"->"AllSubLedger"->"Actual"->"Final"->&ACTYR==0)
"Amount"="Amount"->"New COGS"->"No SubLedger" * "Amount"->"No Management Entity"->"ActualJDE"->"Final"->&ACTYR;
ELSE
"Amount"="Amount"->"New COGS"->"No SubLedger" * ("Amount"->"Actual"->"Final"->&ACTYR
/"Amount"->"COS"->"AllSubLedger"->"Actual"->"Final"->&ACTYR);
ENDIF)
ENDFIX
FIX([Parm3]:"Dec")
"Amount"(
IF("Amount"->"COS"->"AllSubLedger"->"Actual"->"Final"->&PRIORYRACT==#MISSING
OR "Amount"->"COS"->"AllSubLedger"->"Actual"->"Final"->&PRIORYRACT==0)
"Amount"="Amount"->"New COGS"->"No SubLedger" *
"Amount"->"No Management Entity"->"ActualJDE"->"Final"->&PRIORYRACT;
ELSE
"Amount"="Amount"->"New COGS"->"No SubLedger" *
("Amount"->"Actual"->"Final"->&PRIORYRACT /"Amount"->"COS"->"AllSubLedger"->"Actual"->"Final"->&PRIORYRACT);
ENDIF)
ENDFIX
ENDFIX
FIX([parm4]:"Dec")
/* Shrink Rate*/
"640010"="500000"*"Shrink Rate %"->"BegBalance"->"No Management Entity";
ENDFIX
ENDFIX
FIX( @RELATIVE("New FL",0))
FIX(@RELATIVE("610000",0), @RELATIVE("620000",0),
@RELATIVE("630000",0), @RELATIVE("670000",0))
FIX([parm1]:[Parm2])
"Amount"="Amount"->"New COGS"->"No SubLedger" *
"Amount"->"No Management Entity"->"ActualJDE"->"Final"->&ACTYR;
ENDFIX
FIX([Parm3]:"Dec")
"Amount"="Amount"->"New COGS"->"No SubLedger" * "Amount"->"No Management Entity"->"ActualJDE"->"Final"->&PRIORYRACT;
ENDFIX
ENDFIX
FIX([parm4]:"Dec")
"640010"="500000"*"Shrink Rate %"->"BegBalance"->"No Management Entity";
ENDFIX
ENDFIX
ENDFIX
ENDFIX
CALC DIM("Account");
AGG("SubLedger","Company","ManagementEntity");
FIX([parm4]:"Dec","00011", "No SubLedger", "Amount", "No Management Entity")
"New Margin %"=("500000"->"AllSubLedger"->"FL" -
"COS"->"AllSubLedger"->"FL")/ "500000"->"AllSubLedger"->"FL";
ENDFIX -
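As a side note for anyone tracing the logic in the rule above: the arithmetic repeated in almost every FIX block is plain margin/COGS algebra. A minimal Python sketch with hypothetical numbers (purely illustrative; "500000" plays the role of the sales account):

```python
def margin_pct(sales, cogs):
    """Actual margin %: (sales - COGS) / sales."""
    return (sales - cogs) / sales

def apply_margin(sales, margin):
    """Derive COGS and initial margin from sales and a margin %."""
    cogs = sales * (1 - margin)
    return cogs, sales - cogs

m = margin_pct(1000.0, 600.0)                   # 0.4
cogs, initial_margin = apply_margin(1000.0, m)
print(cogs, initial_margin)                     # 600.0 400.0
```

Checking the formulas outside Essbase like this can make it much easier to spot which FIX block is producing unexpected numbers before you start tuning for speed.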
Which is faster - Member formula or Calculation script?
Hi,
I have a very basic question, though I am not sure if there is a definite right or wrong answer.
To keep the calculation scripts to a minimum, I have put all the calculations in member formula.
Which is faster: member formulas or calculation scripts? Because, if I am not mistaken, FIX cannot be used in member formulas, so I need to resort to using IF, which is not index-driven!
Though in the calculation script, while aggregating members that have member formulas, I have tried to FIX on as many members as I can.
What is the best way to optimize member formulas?
I am using Hyperion Planning and Essbase 11.1.2.1.
Thanks.
Re the mostly "free" comment: if the block is in memory (qualification #1), and the formula is within the block (qualification #2), then the expensive bit was reading the block off of disk and expanding it into memory. Once that is done, I typically think of the dynamic calcs as free, as the amount of data being moved about is very, very small. That goes out the window if the formula pulls lots of blocks to value and they get cycled in and out of the cache. Then they are not free and are potentially slower. And yes, I have personally shot myself in the foot with this: I wrote a calc that did @PRIORs against a bunch of years. It was a dream when I pulled 10 cells. And then I found out that the client had reports that pulled 5,000. Performance went right down the drain at that point. That one was 100% my fault for not forcing the client to show me what they were reporting.
I think your reference to stored formulas being 10-15% faster than calc script formulas applies when the formulas are executed from within the default calc. When the default calc is used, it precompiles the formulas and handles many two-pass calculations in a single pass. Perhaps that is what you are thinking of.
^^^I guess that must be it. I think I remember you talking about this technique at one of your Kscope sessions and realizing that I had never tried that approach. Isn't there something funky about not being able to turn off the default calc if a user has calc access? I sort of think so. I typically assign a ; to the default calc so it can't do anything.
Regards,
Cameron Lackpour -
Which is faster - Member formula or Calculation scripts?
Hi,
I have a very basic question, though I am not sure if there is a definite right or wrong answer.
To keep the calculation scripts to a minimum, I have put all the calculations in member formula.
Which is faster: member formulas or calculation scripts? Because, if I am not mistaken, FIX cannot be used in member formulas, so I need to resort to using IF, which is not index-driven!
Though in the calculation script, while aggregating members that have member formulas, I have tried to FIX on as many members as I can.
What is the best way to optimize member formulas?
I am using Hyperion Planning and Essbase 11.1.2.1.
Thanks.The idea that you can't reference a member formula in a FIX is false. Here's an example:
- Assume you have an account that has a data storage of Stored or Never Share.
- This account is called Account_A and it has a member formula of Account_B * Account_C;.
- You would calculate this account within a FIX (inside of a business rule) something like this:
FIX(whatever . . . )
"Account_A";
ENDFIX
If you simply place the member name followed by a semi-colon within a business rule, the business rule will execute the code in that member's member formula.
Why would you want to do this instead of just putting ALL of the logic inside the business rule? Perhaps that logic gets referenced in a LOT of different business rules, and you want to centralize the code in the outline? This way, if the logic changes, you only need to update it in one location. The downside to this is that it can make debugging a bit harder. When something doesn't work, you can find yourself searching for the code a bit.
Most of my applications end up with a mix of member formulas and business rules. I find that performance isn't the main driving force behind where I put my code. (The performance difference is usually not that significant when you're talking about stored members.) What typically drives my decision is the organization of code and future maintenance. It's more art than science.
Hope this helps,
- Jake