Partition by statement
I am getting the error - ORA-00923: FROM keyword not found where expected at P while executing the following statement.
(DIFF) OVER (PARTITION BY ITEM_CODE ORDER BY ITEM_CODE). Kindly advise.
Sanjay
Ah, of course!
... slaps forehead
DIFF is a column name and you forgot to put a function in front of it!
I interpreted DIFF as being a function ;)
Thanks for the feedback, I now know it's time for an extra coffee
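For reference, the fix is simply to put an aggregate or analytic function in front of DIFF before the OVER clause. A minimal, self-contained sketch of the corrected syntax (using SQLite through Python's sqlite3 here purely for illustration; the table and column names are made up, not from the original statement):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (item_code TEXT, diff INTEGER)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [("A", 10), ("A", 20), ("B", 5)])

# DIFF on its own is just a column; OVER (...) is only legal
# after a function, e.g. SUM(diff) OVER (...).
rows = conn.execute("""
    SELECT item_code,
           SUM(diff) OVER (PARTITION BY item_code ORDER BY item_code) AS total
    FROM items
""").fetchall()
print(rows)
```

The same shape (`SUM(DIFF) OVER (PARTITION BY ITEM_CODE ORDER BY ITEM_CODE)`) parses cleanly in Oracle as well.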
Similar Messages
-
Explain plan change after partitioning - Bitmap conversion to rowid
hi gurus,
before partitioning the table using range partitioning,
for the query,
SELECT MEDIUMID
FROM MEDIUM_TB
WHERE CMREFERENCEID =8
AND CONTENTTYPEID = 8
AND CMSTATUSID = 5
AND SUBTYPEID = 99
A. before partitioning
SELECT STATEMENT, GOAL = ALL_ROWS 2452 882 16758
SORT ORDER BY 2452 882 16758
TABLE ACCESS BY INDEX ROWID DBA1 MEDIUM_TB 2451 882 16758
BITMAP CONVERSION TO ROWIDS
BITMAP AND
BITMAP CONVERSION FROM ROWIDS
INDEX RANGE SCAN DBA1 MEDIUM_TB_IX07 242 94423
BITMAP CONVERSION FROM ROWIDS
INDEX RANGE SCAN DBA1 MEDIUM_TB_IX02 1973 94423
B. after partitioning
the explain plan changed to
SELECT STATEMENT, GOAL = ALL_ROWS 33601 796 15124
TABLE ACCESS BY GLOBAL INDEX ROWID DBA1 MEDIUM_TB 33601 796 15124
INDEX RANGE SCAN DBA1 MEDIUM_TB_IX07 300 116570
As you can see, the plan cost is very high after partitioning and the query is taking more time.
Index MEDIUM_TB_IX02 is not used in the second plan, and the BITMAP CONVERSION method is gone as well.
FYI, all the indexes are b-tree based global indexes.
What could be the reason for the plan change?
Please help.
thanks,
charles
user570138 wrote:
SELECT STATEMENT, GOAL = ALL_ROWS 2452 882 16758
SORT ORDER BY 2452 882 16758
TABLE ACCESS BY INDEX ROWID DBA1 MEDIUM_TB 2451 882 16758
BITMAP CONVERSION TO ROWIDS
BITMAP AND
BITMAP CONVERSION FROM ROWIDS
INDEX RANGE SCAN DBA1 MEDIUM_TB_IX07 242 94423
BITMAP CONVERSION FROM ROWIDS
INDEX RANGE SCAN DBA1 MEDIUM_TB_IX02 1973 94423
If you supplied proper execution plans we might be able to offer some advice.
Use dbms_xplan.
A list of the index definitions might also help
Regards
Jonathan Lewis -
How to create MIN/MAX limitations in SELECT statement ??
Hey,
I have a table with rank90 (city population ranked from 1 onward) and state_abrv, which holds the corresponding state for each city rank.
Is there a way to select only the maximum AND minimum ranks for each state ??
I realise there is a max and min function, but I need to do it for EACH state_abrv.
For example, say Los Angeles is ranked 2, San Diego is ranked 6, and San Francisco is ranked 14 (all of these cities are in California (CA)). How do I display a table which lists only Los Angeles (highest rank) and San Francisco (lowest rank) but DOESN'T list San Diego?
Thanks, you guys are helping me heaps and I'm starting to learn a lot more :P
Message was edited by:
user495524
SQL> create table t (state varchar2(2), city varchar2(20), n number);
Table created.
SQL> insert into t values ('CA','San Francisco',14);
1 row created.
SQL> insert into t values ('CA','San Diego',6);
1 row created.
SQL> insert into t values ('CA','Los Angeles',2);
1 row created.
SQL> insert into t values ('NY','Buffalo',4);
1 row created.
SQL> insert into t values ('NY','Syracuse',7);
1 row created.
SQL> insert into t values ('NY','Mt Kisco',2);
1 row created.
SQL> insert into t values ('NY','Albany',5);
1 row created.
SQL> select * from t order by state, n desc;
ST CITY N
CA San Francisco 14
CA San Diego 6
CA Los Angeles 2
NY Syracuse 7
NY Albany 5
NY Buffalo 4
NY Mt Kisco 2
7 rows selected.
SQL> select state, city, n from
2 (
3 select t.*, min(n) over (partition by state) min_n,
4 max(n) over (partition by state) max_n from t
5 )
6 where n in (min_n, max_n) order by state, n desc;
ST CITY N
CA San Francisco 14
CA Los Angeles 2
NY Syracuse 7
NY Mt Kisco 2
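The min/max-per-partition pattern above is not Oracle-specific; here is a hedged, self-contained re-run of the same idea using SQLite through Python (data and names mirror the session above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (state TEXT, city TEXT, n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("CA", "San Francisco", 14), ("CA", "San Diego", 6), ("CA", "Los Angeles", 2),
    ("NY", "Buffalo", 4), ("NY", "Syracuse", 7), ("NY", "Mt Kisco", 2), ("NY", "Albany", 5),
])

# Compute per-state min/max with window functions in an inline view,
# then keep only the rows whose n matches either extreme.
rows = conn.execute("""
    SELECT state, city, n FROM (
        SELECT t.*, MIN(n) OVER (PARTITION BY state) AS min_n,
                    MAX(n) OVER (PARTITION BY state) AS max_n
        FROM t
    )
    WHERE n IN (min_n, max_n)
    ORDER BY state, n DESC
""").fetchall()
for r in rows:
    print(r)
```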
-
The partition attributes change from read/write to read only: is there any way to monitor this in SCOM 2012?
Hi Jensen,
According to your description, you want to get the measure group partition process state and last processed date through DMV, right?
As mentioned above, there is no query other than the one Olaf posted. Another option for you is to use SSAS AMO to programmatically get the state and last processed date/time. The cube, measure group, and partition classes of AMO have two properties that might help: State and LastProcessed. You can refer to the link below for more information on AMO.
http://msdn.microsoft.com/en-US/library/ms124924(v=sql.90)
Regards,
Charlie Liao
TechNet Community Support -
Tuning SQL | Partition Table | MJC
All good hearted people -
Problem: This SQL runs forever and returns nothing when stats are stale. If I collect table-level stats (dbms_stats) on these partitioned tables, it runs normally again (< 2 minutes).
I see a Merge Join Cartesian in the explain plan when it runs badly. After the stats are gathered, this MJC disappears from the plan and things go back to normal.
Also, if I convert one of those partitions into a regular table (amms partition 2010-03-16) and join it to the other partitioned table's (cust) partition, this works fine.
Note : After every load we run partition level stats on these tables (not table level stats).
My question is why am I getting MJC? How to solve this issue?
<code>
select aln.acct_no as acct_no, aln.as_of_dt, max(acm.appno) as appno, count( * )
from amr.amms aln, acr.cust acm <================= both tables are range partitioned by date
where acm.acctno = aln.acct_no
and acm.acctno > 0
and acm.as_of_dt = date '2010-03-16' <============ partition key on cust table < 2M rows
and aln.as_of_dt = date '2010-03-12' < ============= partition key on amms table < 2M rows
group by aln.acct_no, aln.as_of_dt
having count( * ) = 1
</code>
Env: Oracle 10g | 10.2.0.4 | ASM | 2 node RAC | Linux x86 | Archivelog | Partition | Optimizer Choose |
and acm.as_of_dt = date '2010-03-16'
and aln.as_of_dt = date '2010-03-12'
is not valid syntax! -
Setting up semantic partitions ASE
Running ASE 15.5
We have a non-partitioned table with 640 million rows. We are looking at setting up range semantic partitions by create date
There are 200 + million rows for 2013 and 380 + million rows for 2014
I am thinking about setting up following partitions by create date:
Partitions 1 - 4 : For 2015 By quarter
Partition 5 - For year 2014
Partition 6 - 2013 and earlier
Add new partitions for each new year ...
Only updating current data -- i.e. any data more than month old is no longer updated ..
Is this a viable breakdown ?
1st attempt at partitioning ...
Actually, I would like to comment that there are some nuances with partitioning and stats to be aware of... but as far as your question goes, a lot depends on your version of ASE. For pre-ASE 15.7, sampling works, but the best practice taught in classes was to do a full non-sampled stats first, then do 2-3 updates with sampling, then a full non-sampled stats in a cycle - so if doing update stats weekly, the first run of the month would be full non-sampled and the other weeks of the month would be sampled. However, what this is doing is trying to help you determine whether stats sampling works similarly to non-sampled stats, by virtue of the fact that you may have performance issues in the latter weeks of the month using sampled vs. non-sampled stats. How well this works often depends on how values are added to the different columns - e.g. scattered around evenly or monotonically increasing. I personally have found that in later revs of 15.7 (e.g. sp100+) running stats with hashing is much faster than stats with sampling and generates more accurate stats. I know Mark has seen some issues - not sure where/why - but then I have seen problems with just update stats generically, in which we have had to delete stats before re-running update stats... so I am not sure if the problem was caused by starting with non-hashed stats and then trying to update with hashed stats or not (I have always started with hashed stats).
Now, there is an interesting nuance of update stats with partitioning. Yes, you can run update stats on partition basis.......but it doesn't update table stats (problem #1) and it can also lead to stats explosion. I am not saying don't run update stats on a partition basis - I actually encourage it - but suggest you know what is going to happen. For example, partitioning - especially range partitioning - works best as far as maintenance commands - when you get in the 30+ partition range - and especially in the 50-100 partition range - assuming evenly distributed partitions. In your case, you will likely get the same effect on 2014 and 2015 partitions as they will be much smaller. When you run update stats on each partition, (assuming the default histogram steps) you will get 20 steps PER partition.....which can mean 1000+ for the entire table (if 50 partitions). Not necessarily a problem unless the query needs to hit all the partitions (or some significant number of them) at which point the query will need considerable proc cache to load those stats. Sooo......when using partitions, keep in mind that you may need to increase proc cache to handle the increase use during optimization. On the table stats perspective, what it means is that periodically you might want to run update statistics (not update index statistics) on the table.....however, in my experience this hasn't been as necessary as one would think....and might only be necessary if you see the optimizer picking a table/partition scan when you think it should be choosing an index.
In your case, you might only have 20 steps for the whonking huge partition and then 20 steps for 2014 and 20 steps for each of the 2015 quarterly partition. You might want to run the update stats for the 2013 and before partition with a larger step count (e.g. 100) and then run it with 20 or so for the other partitions.
Using partitions the way you are doing is interesting in a different perspective. The current data is extremely small and therefore fast access (fewer index levels) and you don't get quite the penalty for queries that span a lot of partitions - e.g. a 5 year query doesn't have to hit 20 partitions the way it would for complete quarterly partitions. However, this assumes the scheme is:
Partition 1 = data 2+ previous years
Partition 2 = data for previous year
Partitions 3-6 = data for current year by quarter
Which means at the end (or so) of each year, you will need to merge partitions. Whenever you merge partitions, you will then need to run update stats again.
If the scheme instead is just to have historical partitions, but going forward each year will simply have data by quarter, you might want to see what the impact on queries is - especially reports on non-unique indexes where the date range spans a lot of partitions, or where the date range is not part of the query. -
How to copy statistics of a partition to another??
Hi,
I have a partitioned table (Oracle 9.2).
We will be creating new partitions for increasing data and I would like to copy the statistics of the existing partition for the newly created partition.
Whats the best way to do this?
Regards
Hi,
1. Create a table to hold statistics with the CREATE_STAT_TABLE procedure.
2. Use the EXPORT_TABLE_STATS procedure to export partition-level stats.
3. Update your own stattab (created in step 1) to change the partition name.
4. Use the IMPORT_TABLE_STATS procedure to import the stats into a partition.
Here is a simple example at the table-stats level (for a partition it's the same thing; just check which column holds the partition name and update that):
SQL> create table teststat (col number);
Table created.
--rows insert
SQL> select * from teststat;
COL
1
SQL> select num_rows from dba_tables where table_name = 'TESTSTAT'
SQL> /
NUM_ROWS
SQL> exec DBMS_STATS.GATHER_TABLE_STATS('SYSTEM','TESTSTAT')
PL/SQL procedure successfully completed.
SQL> select num_rows from dba_tables where table_name = 'TESTSTAT'
2 /
NUM_ROWS
1
SQL> exec DBMS_STATS.CREATE_STAT_TABLE ('SYSTEM','MYSTATS')
PL/SQL procedure successfully completed.
SQL> desc mystats
Name Null? Type
STATID VARCHAR2(30)
TYPE CHAR(1)
VERSION NUMBER
FLAGS NUMBER
C1 VARCHAR2(30)
C2 VARCHAR2(30)
C3 VARCHAR2(30)
C4 VARCHAR2(30)
C5 VARCHAR2(30)
N1 NUMBER
N2 NUMBER
N3 NUMBER
N4 NUMBER
N5 NUMBER
N6 NUMBER
N7 NUMBER
N8 NUMBER
N9 NUMBER
N10 NUMBER
N11 NUMBER
N12 NUMBER
D1 DATE
R1 RAW(32)
R2 RAW(32)
CH1 VARCHAR2(1000)
SQL> exec DBMS_STATS.EXPORT_TABLE_STATS ('SYSTEM','TESTSTAT',null,'MYSTATS')
PL/SQL procedure successfully completed.
SQL>
SQL> select c1 from mystats; --c1 holds the table name; check which column holds the partition name
C1
TESTSTAT
TESTSTAT
SQL> update mystats set c1 = 'TESTSTAT2' where c1 = 'TESTSTAT';
2 rows updated.
SQL> commit;
Commit complete.
SQL> create table TESTSTAT2 (col number);
Table created.
SQL> select * from teststat2;
no rows selected
SQL> select num_rows from dba_tables where table_name = 'TESTSTAT2'
SQL> /
NUM_ROWS
SQL> exec DBMS_STATS.IMPORT_TABLE_STATS ('SYSTEM','TESTSTAT2',null,'MYSTATS')
PL/SQL procedure successfully completed.
SQL> select table_name,num_rows
from dba_tables
where table_name in ('TESTSTAT','TESTSTAT2');
TABLE_NAME NUM_ROWS
TESTSTAT 1
TESTSTAT2 1
SQL>
SQL>
HTH,
Nicolas.
Message was edited by: N. Gasparotto
I changed my example a little. -
How to create report condition based on "total"
Hello,
I have a Discoverer report that shows revenue by city and sub-totals revenue by state.
I need to modify this report so that only cities in states with revenue (sub-total) more than one million are pulled.
Example:
Pittsburgh - 100,000
Harrisburg - 200,000
Erie - 300,000
------State:PA 600,000
Los Angeles 500,000
San Fransisco 600,000
Oakland 200,000
-----State:CA 1,300,000
In this example, the report should show only the cities in California, as the revenue sum is over 1 million:
Los Angeles 500,000
San Fransisco 600,000
Oakland 200,000
---State:CA 1,300,000
Is this possible?
I'm using Discoverer version 10.1.2.2.
Thank you.
Edited by: [email protected] on Dec 11, 2009 3:03 PM
Hello
You need to do two things to solve this problem.
1. You need to create an analytic calculation that returns 1 when the sum of the revenue across all of the cities in a state is more than one million
2. You need to create a condition such that you only include states when the above is 1
Let's call the calculation, OneMillionStates and I will assume your fields are called State, City and Revenue
Here's the formula:
CASE WHEN SUM(REVENUE) OVER(PARTITION BY STATE) > 1000000 THEN 1 ELSE 0 END
Now your condition should be: ONEMILLIONSTATES = 1
You don't even need to display the calculation on screen, and notice how the city does not even come into the calculation.
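Outside Discoverer, the same calculation can be sanity-checked against any engine with analytic functions. A minimal sketch using SQLite through Python (the table and column names here are assumptions for illustration, not Discoverer's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revenue (state TEXT, city TEXT, amount INTEGER)")
conn.executemany("INSERT INTO revenue VALUES (?, ?, ?)", [
    ("PA", "Pittsburgh", 100000), ("PA", "Harrisburg", 200000), ("PA", "Erie", 300000),
    ("CA", "Los Angeles", 500000), ("CA", "San Francisco", 600000), ("CA", "Oakland", 200000),
])

# Flag each city with 1 when its state's total revenue exceeds one million,
# mirroring CASE WHEN SUM(...) OVER (PARTITION BY state) > 1000000,
# then keep only the flagged rows.
rows = conn.execute("""
    SELECT state, city, amount FROM (
        SELECT state, city, amount,
               CASE WHEN SUM(amount) OVER (PARTITION BY state) > 1000000
                    THEN 1 ELSE 0 END AS one_million_state
        FROM revenue
    )
    WHERE one_million_state = 1
""").fetchall()
print(rows)
```

Only the California cities survive the filter, since PA's 600,000 total is under the threshold.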
Best wishes
Michael
URL: http://ascbi.com
Blog: http://learndiscoverer.blogspot.com -
Kind of loop in sql? Any alternative?
Hi,
We have the following table
create table orders (
order_id NUMBER(10),
vehicle_id NUMBER(10),
customer_id NUMBER(10),
data VARCHAR(10)
);
order_id, customer_id and vehicle_id are indexed.
In this table are stored multiple orders for multiple vehicles.
I need an sql-statements which returns me the last 5 orders for each truck.
For only one vehicle it's no problem:
select * from orders
where vehicle_id = <ID>
and rownum <=5
order by order_id desc;
But I need something like a loop to perform this statement for each vehicle_id.
Or is there any way to put it into a subselect?
Any ideas are welcome ;-)
Thanks in advance,
Andreas
Hello
Effectively, by having the bind variable in there you are partitioning by customer and vehicle id, so by adding customer_id into the partition statement, the optimiser should be able to push the bind variable right down to the innermost view...
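As an aside, the rank-in-an-inline-view pattern itself is easy to demonstrate end to end. A small self-contained sketch using SQLite through Python (schema simplified from the thread, and top 2 per vehicle rather than 5 just to keep the data short):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, vehicle_id INTEGER)")
# 12 orders spread across vehicles 0, 1, 2 (four orders each).
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 3) for i in range(1, 13)])

# Rank orders per vehicle, newest (highest order_id) first,
# then keep only the top 2 ranks in each partition.
rows = conn.execute("""
    SELECT vehicle_id, order_id FROM (
        SELECT o.*, RANK() OVER (PARTITION BY vehicle_id
                                 ORDER BY order_id DESC) AS rk
        FROM orders o
    )
    WHERE rk <= 2
    ORDER BY vehicle_id, order_id DESC
""").fetchall()
print(rows)
```

Note this avoids the pitfall of the plain `rownum <= 5 ... order by` approach, where ROWNUM is assigned before the sort.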
XXX> CREATE TABLE dt_orders
2 ( order_id NUMBER NOT NULL,
3 customer_id NUMBER NOT NULL,
4 vehicle_id NUMBER NOT NULL,
5 some_padding VARCHAR2(100) NOT NULL
6 )
7 /
Table created.
Elapsed: 00:00:00.23
XXX> INSERT INTO dt_orders SELECT ROWNUM ID, MOD(ROWNUM,100),MOD(ROWNUM,100), lpad(
2 /
10000 rows created.
Elapsed: 00:00:00.43
XXX> CREATE INDEX dt_orders_i1 ON dt_orders(customer_id)
2 /
Index created.
Elapsed: 00:00:00.17
XXX> select *
2 from (
3 select o.*, rank() over(partition by vehicle_id order by order_id desc) rk
4 from dt_orders o
5 where customer_id = :var_cust_id
6 )
7 where rk <= 5;
5 rows selected.
Elapsed: 00:00:00.11
Execution Plan
Plan hash value: 3174093828
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 107 | 11128 | 22 (5)| 00:00:01 |
|* 1 | VIEW | | 107 | 11128 | 22 (5)| 00:00:01 |
|* 2 | WINDOW SORT PUSHED RANK | | 107 | 9737 | 22 (5)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DT_ORDERS | 107 | 9737 | 21 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | DT_ORDERS_I1 | 43 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RK"<=5)
2 - filter(RANK() OVER ( PARTITION BY "VEHICLE_ID" ORDER BY
INTERNAL_FUNCTION("ORDER_ID") DESC )<=5)
4 - access("CUSTOMER_ID"=TO_NUMBER(:VAR_CUST_ID)) <----
Note
- dynamic sampling used for this statement
Statistics
36 recursive calls
0 db block gets
247 consistent gets
2 physical reads
0 redo size
518 bytes sent via SQL*Net to client
239 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
5 rows processed
Your original statement shows that the bind variable has been applied to access the dt_orders table via the index (predicate 4).
If I change the statement to put the bind variable outside the inline view, we now do a full scan and you can see from predicate 1 that the customer id is being filtered at the highest level.
XXX> select *
2 from (
3 select o.*, rank() over(partition by vehicle_id order by order_id desc) rk
4 from dt_orders o
5 )
6 where rk <= 5
7 AND customer_id = :var_cust_id ;
5 rows selected.
Elapsed: 00:00:00.32
Execution Plan
Plan hash value: 3560032888
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10696 | 1086K| | 268 (2)| 00:00:04 |
|* 1 | VIEW | | 10696 | 1086K| | 268 (2)| 00:00:04 |
|* 2 | WINDOW SORT PUSHED RANK| | 10696 | 950K| 2216K| 268 (2)| 00:00:04 |
| 3 | TABLE ACCESS FULL | DT_ORDERS | 10696 | 950K| | 39 (3)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RK"<=5 AND "CUSTOMER_ID"=TO_NUMBER(:VAR_CUST_ID)) <---
2 - filter(RANK() OVER ( PARTITION BY "VEHICLE_ID" ORDER BY
INTERNAL_FUNCTION("ORDER_ID") DESC )<=5)
Note
- dynamic sampling used for this statement
Statistics
4 recursive calls
0 db block gets
240 consistent gets
0 physical reads
0 redo size
519 bytes sent via SQL*Net to client
239 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
5 rows processed
But those two statements are really the same. By applying the filter inside the view, as in your original, it's only going to calculate the rank for those customers. So we can add the customer id to the partition by clause, which means the optimiser can safely push the predicate back down to the access of the orders table...
XXX> select *
2 from (
3 select o.*, rank() over(partition by customer_id,vehicle_id order by order_id desc) rk
4 from dt_orders o
5 )
6 where rk <= 5
7 AND customer_id = :var_cust_id ;
5 rows selected.
Elapsed: 00:00:00.04
Execution Plan
Plan hash value: 3174093828
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 107 | 11128 | 22 (5)| 00:00:01 |
|* 1 | VIEW | | 107 | 11128 | 22 (5)| 00:00:01 |
|* 2 | WINDOW SORT PUSHED RANK | | 107 | 9737 | 22 (5)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DT_ORDERS | 107 | 9737 | 21 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | DT_ORDERS_I1 | 43 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RK"<=5)
2 - filter(RANK() OVER ( PARTITION BY "CUSTOMER_ID","VEHICLE_ID" ORDER BY
INTERNAL_FUNCTION("ORDER_ID") DESC )<=5)
4 - access("O"."CUSTOMER_ID"=TO_NUMBER(:VAR_CUST_ID)) <----
Note
- dynamic sampling used for this statement
Statistics
9 recursive calls
0 db block gets
244 consistent gets
0 physical reads
0 redo size
519 bytes sent via SQL*Net to client
239 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
5 rows processed
HTH
David -
Query Degradation--Hash Join Degraded
Hi All,
I found one query degradation issue. I am on 10.2.0.3.0 (Sun OS) with optimizer_mode=ALL_ROWS.
This is a dataware house db.
All 3 tables involved are partitioned tables (with daily partitions). Partitions are created in advance and ELT jobs load bulk data into the daily partitions.
I have checked that the CBO is not using the local indexes created on them, which I believe is appropriate, because when I used an INDEX hint the elapsed time increased.
I tried giving an index hint for each table one by one but didn't get any performance improvement.
Partitions are loaded daily, and after loading, partition-level stats are gathered with dbms_stats.
We are collecting stats at the partition level (granularity=>'PARTITION'). Even after collecting global stats, there is no change in the access pattern. The stats gather command is given below.
PROCEDURE gather_table_part_stats(i_owner_name,i_table_name,i_part_name,i_estimate:= DBMS_STATS.AUTO_SAMPLE_SIZE, i_invalidate IN VARCHAR2 := 'Y',i_debug:= 'N')
Only SOT_KEYMAP.IPK_SOT_KEYMAP is GLOBAL.Rest all indexes are LOCAL.
Earlier, we were having a BIND PEEKING issue, which I fixed by introducing NO_INVALIDATE=>FALSE in the stats gather job.
Here, the partition name (20090219) is being passed through bind variables.
SELECT a.sotrelstg_sot_ud sotcrct_sot_ud,
b.sotkey_ud sotcrct_orig_sot_ud, a.ROWID stage_rowid
FROM (SELECT sotrelstg_sot_ud, sotrelstg_sys_ud,
sotrelstg_orig_sys_ord_id, sotrelstg_orig_sys_ord_vseq
FROM sot_rel_stage
WHERE sotrelstg_trd_date_ymd_part = '20090219'
AND sotrelstg_crct_proc_stat_cd = 'N'
AND sotrelstg_sot_ud NOT IN(
SELECT sotcrct_sot_ud
FROM sot_correct
WHERE sotcrct_trd_date_ymd_part ='20090219')) a,
(SELECT MAX(sotkey_ud) sotkey_ud, sotkey_sys_ud,
sotkey_sys_ord_id, sotkey_sys_ord_vseq,
sotkey_trd_date_ymd_part
FROM sot_keymap
WHERE sotkey_trd_date_ymd_part = '20090219'
AND sotkey_iud_cd = 'I'
--not to select logical deleted rows
GROUP BY sotkey_trd_date_ymd_part,
sotkey_sys_ud,
sotkey_sys_ord_id,
sotkey_sys_ord_vseq) b
WHERE a.sotrelstg_sys_ud = b.sotkey_sys_ud
AND a.sotrelstg_orig_sys_ord_id = b.sotkey_sys_ord_id
AND NVL(a.sotrelstg_orig_sys_ord_vseq, 1) = NVL(b.sotkey_sys_ord_vseq, 1);
During normal business hours, I found that the query takes 5-7 min (which is also not acceptable), but during high-load business hours it is taking 30-50 min.
I found that most of the time is spent on the HASH JOIN (direct path write temp). We have sufficient RAM (64 GB total / 41 GB available).
Below is the execution plan i got during normal business hr.
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | Writes | OMem | 1Mem | Used-Mem | Used-Tmp|
| 1 | HASH GROUP BY | | 1 | 1 | 7844K|00:05:28.78 | 16M| 217K| 35969 | | | | |
|* 2 | HASH JOIN | | 1 | 1 | 9977K|00:04:34.02 | 16M| 202K| 20779 | 580M| 10M| 563M (1)| 650K|
| 3 | NESTED LOOPS ANTI | | 1 | 6 | 7855K|00:01:26.41 | 16M| 1149 | 0 | | | | |
| 4 | PARTITION RANGE SINGLE| | 1 | 258K| 8183K|00:00:16.37 | 25576 | 1149 | 0 | | | | |
|* 5 | TABLE ACCESS FULL | SOT_REL_STAGE | 1 | 258K| 8183K|00:00:16.37 | 25576 | 1149 | 0 | | | | |
| 6 | PARTITION RANGE SINGLE| | 8183K| 326K| 327K|00:01:10.53 | 16M| 0 | 0 | | | | |
|* 7 | INDEX RANGE SCAN | IDXL_SOTCRCT_SOT_UD | 8183K| 326K| 327K|00:00:53.37 | 16M| 0 | 0 | | | | |
| 8 | PARTITION RANGE SINGLE | | 1 | 846K| 14M|00:02:06.36 | 289K| 180K| 0 | | | | |
|* 9 | TABLE ACCESS FULL | SOT_KEYMAP | 1 | 846K| 14M|00:01:52.32 | 289K| 180K| 0 | | | | |
I will attach the same for high-load business hours once the query returns results. It has been executing for the last 50 mins.
INDEX STATS (INDEXES ARE LOCAL INDEXES)
TABLE_NAME INDEX_NAME COLUMN_NAME COLUMN_POSITION NUM_ROWS DISTINCT_KEYS CLUSTERING_FACTOR
SOT_REL_STAGE IDXL_SOTRELSTG_SOT_UD SOTRELSTG_SOT_UD 1 25461560 25461560 184180
SOT_REL_STAGE SOTRELSTG_TRD_DATE 2 25461560 25461560 184180
_YMD_PART
TABLE_NAME INDEX_NAME COLUMN_NAME COLUMN_POSITION NUM_ROWS DISTINCT_KEYS CLUSTERING_FACTOR
SOT_KEYMAP IDXL_SOTKEY_ENTORDSYS_UD SOTKEY_ENTRY_ORD_S 1 1012306940 3 38308680
YS_UD
SOT_KEYMAP IDXL_SOTKEY_HASH SOTKEY_HASH 1 1049582320 1049582320 1049579520
SOT_KEYMAP SOTKEY_TRD_DATE_YM 2 1049582320 1049582320 1049579520
D_PART
SOT_KEYMAP IDXL_SOTKEY_SOM_ORD SOTKEY_SOM_UD 1 1023998560 268949136 559414840
SOT_KEYMAP SOTKEY_SYS_ORD_ID 2 1023998560 268949136 559414840
SOT_KEYMAP IPK_SOT_KEYMAP SOTKEY_UD 1 1030369480 1015378900 24226580
TABLE_NAME INDEX_NAME COLUMN_NAME COLUMN_POSITION NUM_ROWS DISTINCT_KEYS CLUSTERING_FACTOR
SOT_CORRECT IDXL_SOTCRCT_SOT_UD SOTCRCT_SOT_UD 1 412484756 412484756 411710982
SOT_CORRECT SOTCRCT_TRD_DATE_Y 2 412484756 412484756 411710982
MD_PART
INDEX partiton stas (from dba_ind_partitions)
INDEX_NAME PARTITION_NAME STATUS BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR NUM_ROWS SAMPLE_SIZE LAST_ANALYZ GLO
IDXL_SOTCRCT_SOT_UD P20090219 USABLE 1 372 327879 216663 327879 327879 20-Feb-2009 YES
IDXL_SOTKEY_ENTORDSYS_UD P20090219 USABLE 2 2910 3 36618 856229 856229 19-Feb-2009 YES
IDXL_SOTKEY_HASH P20090219 USABLE 2 7783 853956 853914 853956 119705 19-Feb-2009 YES
IDXL_SOTKEY_SOM_ORD P20090219 USABLE 2 6411 531492 157147 799758 132610 19-Feb-2009 YES
IDXL_SOTRELSTG_SOT_UD P20090219 USABLE 2 13897 9682052 45867 9682052 794958 20-Feb-2009 YES
Thanks in advance.
Bhavik Desai
Hi Randolf,
Thanks for the time you spent on this issue. I appreciate it.
Please see my comments below:
1. You've mentioned several times that you're passing the partition name as bind variable, but you're obviously testing the statement with literals rather than bind
variables. So your tests obviously don't reflect what is going to happen in case of the actual execution. The cardinality estimates are potentially quite different when
using bind variables for the partition key.
Yes, I intentionally used literals in my tests. I found a couple of times that the plan used by the application and the plan generated by the AUTOTRACE/EXPLAIN PLAN command were the same, and caused hour-long elapsed times.
As I pointed out earlier, last month we solved a couple of bind peeking issues by introducing NO_INVALIDATE=>FALSE in the stats gather procedure, which we execute just after data load into such daily partitions and before the start of the jobs which execute this query.
Execution plans from AWR (with parallelism on at table level, DEGREE>1). This plan is the one the CBO used when the degradation occurred; it is used most of the time.
ELAPSED_TIME_DELTA BUFFER_GETS_DELTA DISK_READS_DELTA CURSOR(SELECT*FROMTA
1918506000 46154275 918 CURSOR STATEMENT : 4
CURSOR STATEMENT : 4
PLAN_TABLE_OUTPUT
SQL_ID 39708a3azmks7
SELECT A.SOTRELSTG_SOT_UD SOTCRCT_SOT_UD, B.SOTKEY_UD SOTCRCT_ORIG_SOT_UD, A.ROWID STAGE_ROWID FROM (SELECT SOTRELSTG_SOT_UD,
SOTRELSTG_SYS_UD, SOTRELSTG_ORIG_SYS_ORD_ID, SOTRELSTG_ORIG_SYS_ORD_VSEQ FROM SOT_REL_STAGE WHERE SOTRELSTG_TRD_DATE_YMD_PART = :B1 AND
SOTRELSTG_CRCT_PROC_STAT_CD = 'N' AND SOTRELSTG_SOT_UD NOT IN( SELECT SOTCRCT_SOT_UD FROM SOT_CORRECT WHERE SOTCRCT_TRD_DATE_YMD_PART =
:B1 )) A, (SELECT MAX(SOTKEY_UD) SOTKEY_UD, SOTKEY_SYS_UD, SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ, SOTKEY_TRD_DATE_YMD_PART FROM
SOT_KEYMAP WHERE SOTKEY_TRD_DATE_YMD_PART = :B1 AND SOTKEY_IUD_CD = 'I' GROUP BY SOTKEY_TRD_DATE_YMD_PART, SOTKEY_SYS_UD,
SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ) B WHERE A.SOTRELSTG_SYS_UD = B.SOTKEY_SYS_UD AND A.SOTRELSTG_ORIG_SYS_ORD_ID =
B.SOTKEY_SYS_ORD_ID AND NVL(A.SOTRELSTG_ORIG_SYS_ORD_VSEQ, 1) = NVL(B.SOTKEY_SYS_ORD_VSEQ, 1)
Plan hash value: 1213870831
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | | | 19655 (100)| | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10003 | 1 | 116 | 19655 (1)| 00:05:54 | | | Q1,03 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 1 | 116 | 19655 (1)| 00:05:54 | | | Q1,03 | PCWP | |
| 4 | PX RECEIVE | | 1 | 116 | 19655 (1)| 00:05:54 | | | Q1,03 | PCWP | |
| 5 | PX SEND HASH | :TQ10002 | 1 | 116 | 19655 (1)| 00:05:54 | | | Q1,02 | P->P | HASH |
| 6 | HASH GROUP BY | | 1 | 116 | 19655 (1)| 00:05:54 | | | Q1,02 | PCWP | |
| 7 | NESTED LOOPS ANTI | | 1 | 116 | 19654 (1)| 00:05:54 | | | Q1,02 | PCWP | |
| 8 | HASH JOIN | | 1 | 102 | 19654 (1)| 00:05:54 | | | Q1,02 | PCWP | |
| 9 | PX JOIN FILTER CREATE| :BF0000 | 13M| 664M| 2427 (3)| 00:00:44 | | | Q1,02 | PCWP | |
| 10 | PX RECEIVE | | 13M| 664M| 2427 (3)| 00:00:44 | | | Q1,02 | PCWP | |
| 11 | PX SEND HASH | :TQ10000 | 13M| 664M| 2427 (3)| 00:00:44 | | | Q1,00 | P->P | HASH |
| 12 | PX BLOCK ITERATOR | | 13M| 664M| 2427 (3)| 00:00:44 | KEY | KEY | Q1,00 | PCWC | |
| 13 | TABLE ACCESS FULL| SOT_REL_STAGE | 13M| 664M| 2427 (3)| 00:00:44 | KEY | KEY | Q1,00 | PCWP | |
| 14 | PX RECEIVE | | 27M| 1270M| 17209 (1)| 00:05:10 | | | Q1,02 | PCWP | |
| 15 | PX SEND HASH | :TQ10001 | 27M| 1270M| 17209 (1)| 00:05:10 | | | Q1,01 | P->P | HASH |
| 16 | PX JOIN FILTER USE | :BF0000 | 27M| 1270M| 17209 (1)| 00:05:10 | | | Q1,01 | PCWP | |
| 17 | PX BLOCK ITERATOR | | 27M| 1270M| 17209 (1)| 00:05:10 | KEY | KEY | Q1,01 | PCWC | |
| 18 | TABLE ACCESS FULL| SOT_KEYMAP | 27M| 1270M| 17209 (1)| 00:05:10 | KEY | KEY | Q1,01 | PCWP | |
| 19 | PARTITION RANGE SINGLE| | 16185 | 221K| 0 (0)| | KEY | KEY | Q1,02 | PCWP | |
| 20 | INDEX RANGE SCAN | IDXL_SOTCRCT_SOT_UD | 16185 | 221K| 0 (0)| | KEY | KEY | Q1,02 | PCWP | |
Other Execution plan from AWR
ELAPSED_TIME_DELTA BUFFER_GETS_DELTA DISK_READS_DELTA CURSOR(SELECT*FROMTA
1053251381 0 2925 CURSOR STATEMENT : 4
CURSOR STATEMENT : 4
PLAN_TABLE_OUTPUT
SQL_ID 39708a3azmks7
SELECT A.SOTRELSTG_SOT_UD SOTCRCT_SOT_UD, B.SOTKEY_UD SOTCRCT_ORIG_SOT_UD, A.ROWID STAGE_ROWID FROM (SELECT SOTRELSTG_SOT_UD,
SOTRELSTG_SYS_UD, SOTRELSTG_ORIG_SYS_ORD_ID, SOTRELSTG_ORIG_SYS_ORD_VSEQ FROM SOT_REL_STAGE WHERE SOTRELSTG_TRD_DATE_YMD_PART = :B1 AND
SOTRELSTG_CRCT_PROC_STAT_CD = 'N' AND SOTRELSTG_SOT_UD NOT IN( SELECT SOTCRCT_SOT_UD FROM SOT_CORRECT WHERE SOTCRCT_TRD_DATE_YMD_PART =
:B1 )) A, (SELECT MAX(SOTKEY_UD) SOTKEY_UD, SOTKEY_SYS_UD, SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ, SOTKEY_TRD_DATE_YMD_PART FROM
SOT_KEYMAP WHERE SOTKEY_TRD_DATE_YMD_PART = :B1 AND SOTKEY_IUD_CD = 'I' GROUP BY SOTKEY_TRD_DATE_YMD_PART, SOTKEY_SYS_UD,
SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ) B WHERE A.SOTRELSTG_SYS_UD = B.SOTKEY_SYS_UD AND A.SOTRELSTG_ORIG_SYS_ORD_ID =
B.SOTKEY_SYS_ORD_ID AND NVL(A.SOTRELSTG_ORIG_SYS_ORD_VSEQ, 1) = NVL(B.SOTKEY_SYS_ORD_VSEQ, 1)
Plan hash value: 3434900850
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | | | 1830 (100)| | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10003 | 1 | 131 | 1830 (2)| 00:00:33 | | | Q1,03 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 1 | 131 | 1830 (2)| 00:00:33 | | | Q1,03 | PCWP | |
| 4 | PX RECEIVE | | 1 | 131 | 1830 (2)| 00:00:33 | | | Q1,03 | PCWP | |
| 5 | PX SEND HASH | :TQ10002 | 1 | 131 | 1830 (2)| 00:00:33 | | | Q1,02 | P->P | HASH |
| 6 | HASH GROUP BY | | 1 | 131 | 1830 (2)| 00:00:33 | | | Q1,02 | PCWP | |
| 7 | NESTED LOOPS ANTI | | 1 | 131 | 1829 (2)| 00:00:33 | | | Q1,02 | PCWP | |
| 8 | HASH JOIN | | 1 | 117 | 1829 (2)| 00:00:33 | | | Q1,02 | PCWP | |
| 9 | PX JOIN FILTER CREATE| :BF0000 | 1010K| 50M| 694 (1)| 00:00:13 | | | Q1,02 | PCWP | |
| 10 | PX RECEIVE | | 1010K| 50M| 694 (1)| 00:00:13 | | | Q1,02 | PCWP | |
| 11 | PX SEND HASH | :TQ10000 | 1010K| 50M| 694 (1)| 00:00:13 | | | Q1,00 | P->P | HASH |
| 12 | PX BLOCK ITERATOR | | 1010K| 50M| 694 (1)| 00:00:13 | KEY | KEY | Q1,00 | PCWC | |
| 13 | TABLE ACCESS FULL| SOT_KEYMAP | 1010K| 50M| 694 (1)| 00:00:13 | KEY | KEY | Q1,00 | PCWP | |
| 14 | PX RECEIVE | | 11M| 688M| 1129 (3)| 00:00:21 | | | Q1,02 | PCWP | |
| 15 | PX SEND HASH | :TQ10001 | 11M| 688M| 1129 (3)| 00:00:21 | | | Q1,01 | P->P | HASH |
| 16 | PX JOIN FILTER USE | :BF0000 | 11M| 688M| 1129 (3)| 00:00:21 | | | Q1,01 | PCWP | |
| 17 | PX BLOCK ITERATOR | | 11M| 688M| 1129 (3)| 00:00:21 | KEY | KEY | Q1,01 | PCWC | |
| 18 | TABLE ACCESS FULL| SOT_REL_STAGE | 11M| 688M| 1129 (3)| 00:00:21 | KEY | KEY | Q1,01 | PCWP | |
| 19 | PARTITION RANGE SINGLE| | 5209 | 72926 | 0 (0)| | KEY | KEY | Q1,02 | PCWP | |
| 20 | INDEX RANGE SCAN | IDXL_SOTCRCT_SOT_UD | 5209 | 72926 | 0 (0)| | KEY | KEY | Q1,02 | PCWP | |
EXECUTION PLAN AFTER SETTING DEGREE=1 (performance was also degraded)
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 129 | | 42336 (2)| 00:12:43 | | |
| 1 | HASH GROUP BY | | 1 | 129 | | 42336 (2)| 00:12:43 | | |
| 2 | NESTED LOOPS ANTI | | 1 | 129 | | 42335 (2)| 00:12:43 | | |
|* 3 | HASH JOIN | | 1 | 115 | 51M| 42334 (2)| 00:12:43 | | |
| 4 | PARTITION RANGE SINGLE| | 846K| 41M| | 8241 (1)| 00:02:29 | 81 | 81 |
|* 5 | TABLE ACCESS FULL | SOT_KEYMAP | 846K| 41M| | 8241 (1)| 00:02:29 | 81 | 81 |
| 6 | PARTITION RANGE SINGLE| | 8161K| 490M| | 12664 (3)| 00:03:48 | 81 | 81 |
|* 7 | TABLE ACCESS FULL | SOT_REL_STAGE | 8161K| 490M| | 12664 (3)| 00:03:48 | 81 | 81 |
| 8 | PARTITION RANGE SINGLE | | 6525K| 87M| | 1 (0)| 00:00:01 | 81 | 81 |
|* 9 | INDEX RANGE SCAN | IDXL_SOTCRCT_SOT_UD | 6525K| 87M| | 1 (0)| 00:00:01 | 81 | 81 |
Predicate Information (identified by operation id):
3 - access("SOTRELSTG_SYS_UD"="SOTKEY_SYS_UD" AND "SOTRELSTG_ORIG_SYS_ORD_ID"="SOTKEY_SYS_ORD_ID" AND
NVL("SOTRELSTG_ORIG_SYS_ORD_VSEQ",1)=NVL("SOTKEY_SYS_ORD_VSEQ",1))
5 - filter("SOTKEY_TRD_DATE_YMD_PART"=20090219 AND "SOTKEY_IUD_CD"='I')
7 - filter("SOTRELSTG_CRCT_PROC_STAT_CD"='N' AND "SOTRELSTG_TRD_DATE_YMD_PART"=20090219)
9 - access("SOTRELSTG_SOT_UD"="SOTCRCT_SOT_UD" AND "SOTCRCT_TRD_DATE_YMD_PART"=20090219)
2. Why are you passing the partition key as a bind variable? A statement executing 5 minutes at best and more than 2 hours at worst obviously doesn't suffer from hard-parsing issues, and therefore doesn't need to (and shouldn't) share execution plans. So I strongly suggest using literals instead of bind variables. This also avoids any potential issues caused by bind variable peeking.
This is a custom application which uses bind variables to extract data from daily partitions. So the extract from the daily partitions is automated, running after the daily load and ELT process.
The value of the bind variable is passed in through a procedure parameter, so it would be difficult to use literals in such an application.
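If the partition key only arrives as a procedure parameter, one hedged workaround is to embed it as a literal via dynamic SQL, so each day's extract is parsed and optimized against that day's partition. This is only a sketch under assumptions: the procedure name and the simplified statement are invented for illustration, and concatenating the value is only safe here because it is a validated numeric date key.

```sql
-- Hypothetical wrapper: embeds the daily partition key as a literal
-- so the optimizer can prune and estimate against that one partition.
CREATE OR REPLACE PROCEDURE extract_daily (p_trd_date IN NUMBER) IS
  v_cnt NUMBER;
BEGIN
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM sot_rel_stage ' ||
    'WHERE sotrelstg_trd_date_ymd_part = ' || TO_CHAR(p_trd_date) ||
    ' AND sotrelstg_crct_proc_stat_cd = ''N'''
    INTO v_cnt;
  DBMS_OUTPUT.PUT_LINE('Rows to process: ' || v_cnt);
END;
/
```

Because the literal is part of the SQL text, each daily value gets its own cursor and its own optimization, at the cost of one hard parse per day, which is negligible for a statement running minutes to hours.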
3. All your posted plans suffer from bad cardinality estimates. The NO_MERGE hint suggested by Timur only achieved (significant) damage limitation by reducing the row source size with the GROUP BY operation before joining, but the optimizer is still way off. Apart from the obviously wrong join order (larger row set first), it is in particular the NESTED LOOPS operation that causes the main trouble due to excessive logical I/O, as already pointed out by Timur.
Can I ask for alternatives to the NESTED LOOPS operation?
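For what it's worth, the NESTED LOOPS ANTI in the plan comes from the NOT IN subquery, and a hash anti-join is the usual alternative. A hedged sketch, heavily simplified from the posted statement (the hint placement is illustrative, and HASH_AJ is an older-style hint that the optimizer may ignore):

```sql
-- Hypothetical: request a hash anti-join for the NOT IN subquery
-- instead of probing the index once per driving row.
SELECT a.sotrelstg_sot_ud
FROM   sot_rel_stage a
WHERE  a.sotrelstg_trd_date_ymd_part = 20090219
AND    a.sotrelstg_sot_ud NOT IN
       (SELECT /*+ HASH_AJ */ sotcrct_sot_ud
        FROM   sot_correct
        WHERE  sotcrct_trd_date_ymd_part = 20090219);
```

A hash anti-join builds the subquery result once and probes it in memory, which avoids the excessive logical I/O of a per-row index lookup, provided the cardinality estimates make the optimizer willing to use it.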
4. Your PLAN_TABLE seems to be old (you should see a corresponding note at the bottom of the DBMS_XPLAN.DISPLAY output), because none of the operations have filter/access predicate information attached. Since your main issue is the bad cardinality estimates, I strongly suggest dropping any existing PLAN_TABLEs in any non-Oracle-owned schemas, because 10g already provides one in the SYS schema (the global temporary table PLAN_TABLE$) exposed via a public synonym, so that the EXPLAIN PLAN output includes the "Predicate Information" section below the plan covering the filter/access predicates.
Please post a revised explain plan output including this crucial information so that we get a clue why the cardinality estimates are way off.
I have dropped the old plan table and got the execution plan above (listed in the first point) with predicate information.
"As already mentioned the usage of bind variables for the partition name makes this issue potentially worse."
Is there any workaround without replacing the bind variable? I am on 10g, so 11g features will not help.
How are you gathering the statistics daily, can you post the exact command(s) used?
gather_table_part_stats(i_owner_name, i_table_name, i_part_name, i_estimate := DBMS_STATS.AUTO_SAMPLE_SIZE, i_invalidate := 'Y', i_debug := 'N')
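That looks like a site-specific wrapper. Underneath, partition-level statistics are typically gathered with DBMS_STATS; a hedged sketch of what such a wrapper might call (the owner, table, and partition names are placeholders, not taken from the thread):

```sql
-- Hypothetical underlying call for a single daily partition.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'DBA1',
    tabname          => 'SOT_KEYMAP',
    partname         => 'P_20090219',         -- placeholder partition name
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    granularity      => 'PARTITION',
    cascade          => TRUE,                 -- also gather index statistics
    no_invalidate    => FALSE);               -- invalidate dependent cursors
END;
/
```

The granularity setting matters here: with PARTITION only partition-level statistics are refreshed, so stale global statistics can still mislead the optimizer when the partition is addressed through a bind variable.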
Thanks & Regards,
Bhavik Desai -
How to maintain bitmap index on a large table in DW?
Hi all,
We have many tables which are constantly doing either FULL or INCREMENTAL loading.
And we have created many BITMAP indexes and several B*Tree index (caused by PRIMARY KEY or UNIQUE key constraints) on those tables.
So, what I want to know is, how to maintain those BITMAP (and B*Tree) indexes for different loading mode?
like, should I drop the index before the full load and re-create it after that?
and do nothing in INCREMENTAL loading? I am aware that it will take more time to load with indexes.
any links, books, articles or opinions would be highly appreciated.
Thanks
Just to reiterate and add to what Adam said, from the Oracle documentation:
http://download.oracle.com/docs/cd/E11882_01/server.112/e17120/indexes002.htm#CIHJIDJG
Unusable indexes
An unusable index is ignored by the optimizer and is not maintained by DML. One reason to make an index unusable is to improve bulk load performance. (Bulk loads go more quickly if the database does not need to maintain indexes when inserting rows.) Instead of dropping the index and later re-creating it, which requires you to recall the exact parameters of the CREATE INDEX statement, you can make the index unusable, and then rebuild it.
You can create an index in the unusable state, or you can mark an existing index or index partition unusable. In some cases the database may mark an index unusable, such as when a failure occurs while building the index. When one partition of a partitioned index is marked unusable, the other partitions of the index remain valid.
An unusable index or index partition must be rebuilt, or dropped and re-created, before it can be used. Truncating a table makes an unusable index valid.
Beginning with Oracle Database 11g Release 2, when you make an existing index unusable, its index segment is dropped.
The functionality of unusable indexes depends on the setting of the SKIP_UNUSABLE_INDEXES initialization parameter. When SKIP_UNUSABLE_INDEXES is TRUE (the default), then:
•DML statements against the table proceed, but unusable indexes are not maintained.
•DML statements terminate with an error if there are any unusable indexes that are used to enforce the UNIQUE constraint.
•For nonpartitioned indexes, the optimizer does not consider any unusable indexes when creating an access plan for SELECT statements. The only exception is when an index is explicitly specified with the INDEX() hint.
•For a partitioned index where one or more of the partitions are unusable, the optimizer does not consider the index if it cannot determine at query compilation time if any of the index partitions can be pruned. This is true for both partitioned and nonpartitioned tables. The only exception is when an index is explicitly specified with the INDEX() hint.
When SKIP_UNUSABLE_INDEXES is FALSE, then:
•If any unusable indexes or index partitions are present, any DML statements that would cause those indexes or index partitions to be updated are terminated with an error.
•For SELECT statements, if an unusable index or unusable index partition is present but the optimizer does not choose to use it for the access plan, the statement proceeds. However, if the optimizer does choose to use the unusable index or unusable index partition, the statement terminates with an error.
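The mark-unusable-then-rebuild cycle the documentation describes can be sketched like this (the table, index, and partition names are placeholders, not from the thread):

```sql
-- Hypothetical bulk-load cycle for one partition of a fact table.
ALTER INDEX sales_bix MODIFY PARTITION p_2024_q1 UNUSABLE;

-- ... run the bulk load into partition p_2024_q1 here ...

-- Rebuild only the affected index partition afterwards.
ALTER INDEX sales_bix REBUILD PARTITION p_2024_q1;
```

Marking only the loaded partition unusable leaves the other index partitions valid for queries, and the rebuild afterwards avoids having to recall the original CREATE INDEX parameters.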
Whether to maintain indexes during an incremental load really depends on the volume, and on whether the new data just goes into new partitions or subpartitions. If my incremental data goes all over the place and/or I am only touching a few thousand rows, then yes, you might want to keep the indexes valid and let Oracle maintain them. If millions of rows are added, or the incremental data goes only into new partitions/subpartitions, then marking the indexes unusable for those partitions/subpartitions and rebuilding them later may yield better results. -
Which ssd is compatible with my macbook pro 13 "late 2011?
I have a macbook pro late 2011, 13 "intel core i5 2.4GHz, my hd is a TOSHIBA MK5065GSXF
Model Name: MacBook Pro
Model Identifier: MacBookPro8,1
Processor Name: Intel Core i5
Processor Speed: 2.4 GHz
Number of Processors: 1
Total Number of Cores: 2
L2 Cache (per Core): 256 KB
L3 Cache: 3 MB
Memory: 8 GB
Boot ROM Version: MBP81.0047.B27
SMC Version (system): 1.68f99
Sudden Motion Sensor:
State: Enabled
My HD is
TOSHIBA MK5065GSXF:
Capacity: 500.11 GB (500,107,862,016 bytes)
Model: TOSHIBA MK5065GSXF
Revision: GV108B
Serial Number: 126OCF5QT
Native Command Queuing: Yes
Queue Depth: 32
Removable Media: No
Removable Drive: No
BSD Name: disk0
Rotational Rate: 5400
Media Type: Rotational
Partition Map Type: GPT (GUID Partition Table)
S.M.A.R.T. Status: Verified
volumes:
EFI:
Capacity: 209.7 MB (209,715,200 bytes)
BSD Name: disk0s1
Content: EFI
Macintosh HD:
Capacity: 419 GB (418,999,992,320 bytes)
Available: 103.46 GB (103,456,337,920 bytes)
Recordable: Yes
File System: HFS+ (Journaled)
BSD Name: disk0s2
Mount Point: /
Content: Apple_HFS
Volume UUID: BF3C8A89-5C0C-32D3-9152-FA2C8024DF57
Recovery HD:
Capacity: 650 MB (650,002,432 bytes)
BSD Name: disk0s3
Content: Apple_Boot
Volume UUID: 389848BE-6EBF-350F-BE40-8776ECF756B4
BOOTCAMP:
Capacity: 80.25 GB (80,247,521,280 bytes)
Available: 31.39 GB (31,388,819,456 bytes)
Writable: no
File system: NTFS
BSD Name: disk0s4
Mount Point: /Volumes/BOOTCAMP
Content: Microsoft Basic Data
Volume UUID: 34186CAA-1B13-4DFE-900B-E437E44E1B1D
I could not decipher which SATA version my MacBook supports (1, 2 or 3), or what I should buy.
Your machine has a SATA III connection, meaning you can get a negotiated link speed of up to 6.0 Gbps.
You may want to take a look at my user tip -> Upgrading Your MacBook Pro with a Solid State Drive.
There are dozens of SSDs that would work in your MacBook Pro. You first need to decide what size you want (or need): they're available from 128GB - 1Terabyte.
Personally, I recommend either the Crucial M550 series or the Samsung EVO series. You can get either from a wealth of different sources. I have 4 Crucials now -> 2 mSATA and two standard 2.5" drives. There were some problems with Macs and the EVO series when they were first introduced but they work just fine with Macs now.
Google both drives and find the best prices for the storage that you need. You can't go wrong with either.
Good luck,
Clinton
MacBook Pro (15-inch Late 2011), OS Mavericks 10.9.4, 16GB Crucial RAM, Crucial M500 960GB SSD, 27” Apple Thunderbolt Display -
Unable to get correct sort order for subquery
Hi,
I have this complex subquery and I am sort of stuck on how to get the correct sort order. Here is the query
select *
from
(select r.ResultsId, r.TestName, p.ProjectName, h.PhaseName,
t.TypeText, s.StationName,
to_char(max(r.ExecutionStartDate) over
(partition by r.TestName, b.ConfigValue, c.ConfigValue,
d.ConfigValue),
'DD MON YYYY HH24:MI:SS'), r.Owner, u.Status,
b.ConfigValue Language, c.ConfigValue Telemetry,
d.ConfigValue Flex
from Results r, Projects p, Phase h, Type t, Station s, Status u,
ConfigResults b, ConfigResults c, ConfigResults d
where
r.resultsId = b.resultsId and
r.resultsId = c.resultsId and
r.resultsId = d.resultsId and
b.configurationid = 1 and
c.configurationid = 2 and
d.configurationid = 3 and
r.projectid = p.projectid and
r.statusid = u.statusid and
h.PhaseId = r.PhaseId and
t.TypeId = r.TestTypeId and
s.StationId = r.StationId and %s
Order By
r.TestName, b.ConfigValue, c.ConfigValue, d.ConfigValue)
order by resultsid
My results are sorted by TestName and ConfigValue, but I am trying to get the results sorted by resultsid.
Any assistance would be greatly appreciated.
Thanks,
Jeff
What happens if you add an order by r.resultsid to your order by statement directly in your subquery, rather than doing an order by later?
It will not work because I need to specify the exact fields that I use in
the partition by statement.
Jeff -
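For reference, the final ORDER BY of a query is independent of the columns named in an analytic PARTITION BY clause, so the outer sort can use any selected column. A minimal self-contained sketch (the table and column names echo the thread but are simplified for illustration):

```sql
-- Hypothetical: partition by one column, sort the final output by another.
SELECT resultsid,
       testname,
       MAX(executionstartdate)
         OVER (PARTITION BY testname) AS last_run
FROM   results
ORDER  BY resultsid;   -- the outer sort need not match the PARTITION BY
```

The PARTITION BY only controls how the analytic function groups rows when computing its value; the ORDER BY at the end of the query controls the order of the result set, and the two never have to agree.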
ORA-01502 error in case of unusable unique index and bulk dml
Hi, all.
The db is 11.2.0.3 on a linux machine.
I made a unique index unusable, and issued a dml on the table.
However, Oracle gave me an ORA-01502 error.
In order to avoid the ORA-01502 error, do I have to drop the unique index, do the bulk DML, and re-create the index?
Or Is there any other solution without re-creating the unique index?
create table hoho.abcde as
select level col1 from dual connect by level <=1000
10:09:55 HOHO@PD1MGD>create unique index hoho.abcde_dx1 on hoho.abcde (col1);
Index created.
10:10:23 HOHO@PD1MGD>alter index hoho.abcde_dx1 unusable;
Index altered.
Elapsed: 00:00:00.03
10:11:27 HOHO@PD1MGD>delete from hoho.abcde where rownum < 11;
delete from hoho.abcde where rownum < 11
ERROR at line 1:
ORA-01502: index 'HOHO.ABCDE_DX1' or partition of such index is in unusable state
Thanks in advance.
Best Regards.
Hi, all.
The following is from "http://docs.oracle.com/cd/E14072_01/server.112/e10595/indexes002.htm#CIHJIDJG".
Is there anyone who can show me a tip to avoid the following without dropping and re-creating an unique index?
•DML statements terminate with an error if there are any unusable indexes that are used to enforce the UNIQUE constraint.
Unusable indexes
An unusable index is ignored by the optimizer and is not maintained by DML. One reason to make an index unusable is if you want to improve the performance of bulk loads. (Bulk loads go more quickly if the database does not need to maintain indexes when inserting rows.) Instead of dropping the index and later recreating it, which requires you to recall the exact parameters of the CREATE INDEX statement, you can make the index unusable, and then just rebuild it. You can create an index in the unusable state, or you can mark an existing index or index partition unusable. The database may mark an index unusable under certain circumstances, such as when there is a failure while building the index. When one partition of a partitioned index is marked unusable, the other partitions of the index remain valid.
An unusable index or index partition must be rebuilt, or dropped and re-created, before it can be used. Truncating a table makes an unusable index valid.
Beginning with Oracle Database 11g Release 2, when you make an existing index unusable, its index segment is dropped.
The functionality of unusable indexes depends on the setting of the SKIP_UNUSABLE_INDEXES initialization parameter.
When SKIP_UNUSABLE_INDEXES is TRUE (the default), then:
•DML statements against the table proceed, but unusable indexes are not maintained.
•DML statements terminate with an error if there are any unusable indexes that are used to enforce the UNIQUE constraint.
•For non-partitioned indexes, the optimizer does not consider any unusable indexes when creating an access plan for SELECT statements. The only exception is when an index is explicitly specified with the INDEX() hint.
•For a partitioned index where one or more of the partitions are unusable, the optimizer does not consider the index if it cannot determine at query compilation time if any of the index partitions can be pruned. This is true for both partitioned and non-partitioned tables. The only exception is when an index is explicitly specified with the INDEX() hint.
When SKIP_UNUSABLE_INDEXES is FALSE, then:
•If any unusable indexes or index partitions are present, any DML statements that would cause those indexes or index partitions to be updated are terminated with an error.
•For SELECT statements, if an unusable index or unusable index partition is present but the optimizer does not choose to use it for the access plan, the statement proceeds. However, if the optimizer does choose to use the unusable index or unusable index partition, the statement terminates with an error.
Thanks in advance.
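In practice, since DML cannot proceed while a unique index is unusable, the shortest path that avoids re-creating the index from scratch is a rebuild before the DML; a sketch using the names from the example above:

```sql
-- Rebuilding the unusable unique index makes DML possible again,
-- without having to remember the original CREATE INDEX parameters.
ALTER INDEX hoho.abcde_dx1 REBUILD;

-- This DELETE failed with ORA-01502 while the index was unusable.
DELETE FROM hoho.abcde WHERE ROWNUM < 11;
```

The defer-maintenance trick only works for indexes that do not enforce uniqueness; for a unique index the choice is rebuild first, or drop it (losing the uniqueness guarantee) for the duration of the load.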
Best Regards. -
Compare Dates and select the max date ?
Hello,
I am trying to write a script that will compare the dates in "eff_startdt" and give me the latest date as the outcome.
I have data where some service locations have more than one contract date, and I need to get the latest-dated contract rows to work on. I have tried many things, but I always get errors. When I run the script below I get a "missing expression" error, and I don't see anything missing.
Also, MAX() keeps giving me errors when I do something like [ ON service_locs = vmeterid WHERE SERVICE_LOCS = SERVICE_LOCS AND EFF_STARTDT = MAX(EFF_STARTDT) ].
Can someone please give me advice on this. Thanks
SELECT DISTINCT Broker, customer_name, service_locs, fee_kwh, qtr_monthly, eff_startdt, eff_enddt
FROM VMETER
INNER JOIN BROKER_DATA
ON service_locs = vmeterid WHERE SERVICE_LOCS = SERVICE_LOCS AND (SELECT MAX(EFF_STARTDT) FROM VMETER)
-----------------------------------------------------------
Hi,
I will try to explain on my example. I have got a table:
DESC SOLD_ITEMS;
Name          Null?    Type
COMPONENT              VARCHAR2(255)
SUBCOMPONENT           VARCHAR2(255)
YEAR                   NUMBER(4)
MONTH                  NUMBER(2)
DAY                    NUMBER(2)
DEFECTS                NUMBER(10)
DESCRIPTION            VARCHAR2(200)
SALE_DATE              DATE
COMP_ID                NUMBER
I have inserted example data into my table:
select component, subcomponent, sale_date,comp_id
2 from sold_items;
COMPONENT SUBCOMPONENT SALE_DAT COMP_ID
graph bar 06/04/03 1
graph bar 06/04/01 2
search user search 06/04/02 3
search user search 06/04/01 4
search product search 06/03/20 5
search product search 06/03/16 6
graph bar 06/05/01 7
graph bar 06/05/02 8
graph bar 06/05/02 9
As you can see there are a few components and subcomponents duplicated with different date and comp_id value.
I want to get component and subcomponent combination with latest date.
SELECT COMPONENT, SUBCOMPONENT, MAX(SALE_DATE)
2 FROM SOLD_ITEMS
3* GROUP BY COMPONENT, SUBCOMPONENT;
Efect:
COMPONENT SUBCOMPONENT MAX(SALE
graph bar 06/05/02
search user search 06/04/02
search product search 06/03/20
For your purpose I will do it using join and subquery. Maybe it will help you resolve your problem:
SELECT COMPONENT, SUBCOMPONENT, SALE_DATE, RANK
2 FROM (SELECT T1.COMPONENT, T1.SUBCOMPONENT, T2.SALE_DATE,
3 ROW_NUMBER() OVER (PARTITION BY T1.COMPONENT, T1.SUBCOMPONENT ORDER BY T2.SALE_DATE DESC) AS RANK
4 FROM SOLD_ITEMS T1, SOLD_ITEMS T2
5 WHERE T1.COMP_ID = T2.COMP_ID)
6* WHERE RANK = 1;
I joined the same table so that it acts as two different tables inside the subquery. The analytic function groups the values (the PARTITION BY clause) and orders each group descending by the t2.sale_date column. As you can see, columns are returned from both tables. If you would like to add some conditions, you can do so after the final WHERE clause.
Results:
COMPONENT SUBCOMPONENT SALE_DAT RANK
graph bar 06/05/02 1
search product search 06/03/20 1
search user search 06/04/02 1
Hope this helps you,
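As an aside, the same latest-row-per-group result can be obtained without the self-join, since ROW_NUMBER can partition a single scan of the table; a hedged sketch against the SOLD_ITEMS table above:

```sql
-- One pass over SOLD_ITEMS: number the rows per (component, subcomponent)
-- from newest to oldest sale_date, then keep only the newest row of each.
SELECT component, subcomponent, sale_date, comp_id
FROM  (SELECT component, subcomponent, sale_date, comp_id,
              ROW_NUMBER() OVER (PARTITION BY component, subcomponent
                                 ORDER BY sale_date DESC) AS rn
       FROM   sold_items)
WHERE  rn = 1;
```

Unlike the plain GROUP BY with MAX(SALE_DATE), this keeps the whole latest row, including non-grouped columns such as comp_id.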
Peter D.