Virtual column partitioning - explain plan takes a lot of time.

I have a problem with a table that is list-partitioned on a virtual column.
Generating the explain plan for a simple select takes 2-5 minutes,
but the same select without part of the WHERE clause explains in seconds.
Both queries produce the same explain plan.
Could someone explain why?
Table:
CREATE TABLE "SUBSCRIPTION" (
"CUSTOMER_ID" VARCHAR2(100 BYTE),
"IDENT_SOURCE_ID" VARCHAR2(20 BYTE),
"ACCOUNT_ID" VARCHAR2(100 BYTE),
"MSISDN" VARCHAR2(500 BYTE),
"IMSI" VARCHAR2(500 BYTE),
"SIM" VARCHAR2(500 BYTE),
"IMEI" VARCHAR2(500 BYTE),
"MEID" VARCHAR2(15 BYTE),
"EMAIL" VARCHAR2(100 BYTE),
"TELCOOP" VARCHAR2(1000 BYTE),
"MSISDN_TYPE" VARCHAR2(20 BYTE),
"GSM" NUMBER(1,0),
"CDMA" NUMBER(1,0),
"VALID_FROM" DATE,
"VALID_TO" DATE,
"MSISDN_HASH" NUMBER(3,0) GENERATED ALWAYS AS (MOD(TO_NUMBER(NVL2(RTRIM(TRANSLATE(NVL(SUBSTR("MSISDN",-3),NVL("MSISDN",'err')),'123456789','000000000'),'0'),'-1',NVL(SUBSTR("MSISDN",-3),"MSISDN"))),125)) VIRTUAL VISIBLE, --generali mod from 3 last digits of msisdn
) PARTITION BY LIST ( "MSISDN_HASH" )
PARTITION "PCHR" VALUES ( -1 )
PARTITION "P000" VALUES ( 0 )
PARTITION "P001" VALUES ( 1 )
... and so on till...
PARTITION "P124" VALUES (124)
PARALLEL 4;
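For reference: for a digits-only MSISDN the error-handling branches of the virtual column expression drop out, so the partition key is simply the last three digits mod 125. A quick check of the value used below (a sketch, assuming SQL*Plus):
SELECT MOD(TO_NUMBER(SUBSTR('600489461', -3)), 125) AS msisdn_hash
FROM dual;
-- returns 86, which is the MSISDN_HASH value in the selects below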
Slow select:
select CUSTOMER_ID, IDENT_SOURCE_ID, ACCOUNT_ID, MSISDN, IMSI, SIM, IMEI, MEID, EMAIL, TELCOOP
from dbident2.subscription
where MSISDN = '600489461'
AND MSISDN_HASH = 86
AND VALID_FROM <=TO_DATE('2012-02-10 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
Fast select:
select CUSTOMER_ID, IDENT_SOURCE_ID, ACCOUNT_ID, MSISDN, IMSI, SIM, IMEI, MEID, EMAIL, TELCOOP
from dbident2.subscription
where MSISDN = '600489461'
AND MSISDN_HASH = 86
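The trace below appears to be a 10053 optimizer trace. A sketch of how such a trace can be captured, assuming the explain is run from SQL*Plus with ALTER SESSION privilege (an assumption, not necessarily how this particular dump was produced):
ALTER SESSION SET tracefile_identifier = 'vcol_explain';
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
explain plan for
select CUSTOMER_ID, IDENT_SOURCE_ID, ACCOUNT_ID, MSISDN, IMSI, SIM, IMEI, MEID, EMAIL, TELCOOP
from dbident2.subscription
where MSISDN = '600489461'
AND MSISDN_HASH = 86
AND VALID_FROM <= TO_DATE('2012-02-10 00:00:00', 'YYYY-MM-DD HH24:MI:SS');
ALTER SESSION SET EVENTS '10053 trace name context off';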

--Slow select trace:
Registered qb: SEL$1 0xf4ea2a20 (PARSER)
QUERY BLOCK SIGNATURE
signature (): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=4 objn=848731 hint_alias="SUBSCRIPTION"@"SEL$1"
SPM: statement not found in SMB
SPM: statement not a candidate for auto-capture
Dynamic sampling level auto-adjusted from 6 to 6
Automatic degree of parallelism (ADOP)
Automatic degree of parallelism is disabled: Parameter.
PM: Considering predicate move-around in query block SEL$1 (#0)
Predicate Move-Around (PM)
OPTIMIZER INFORMATION
----- Current SQL Statement for this session (sql_id=afjvvjmx6tqgr) -----
explain plan for
select CUSTOMER_ID, IDENT_SOURCE_ID, ACCOUNT_ID, MSISDN, IMSI, SIM, IMEI, MEID, EMAIL, TELCOOP
from subscription
where MSISDN = '600600600' AND MSISDN_HASH = 86
AND VALID_FROM <=TO_DATE('2012-02-10 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
Legend
The following abbreviations are used by optimizer trace.
CBQT - cost-based query transformation
JPPD - join predicate push-down
OJPPD - old-style (non-cost-based) JPPD
FPD - filter push-down
PM - predicate move-around
CVM - complex view merging
SPJ - select-project-join
SJC - set join conversion
SU - subquery unnesting
OBYE - order by elimination
OST - old style star transformation
ST - new (cbqt) star transformation
CNT - count(col) to count(*) transformation
JE - Join Elimination
JF - join factorization
SLP - select list pruning
DP - distinct placement
qb - query block
LB - leaf blocks
DK - distinct keys
LB/K - average number of leaf blocks per key
DB/K - average number of data blocks per key
CLUF - clustering factor
NDV - number of distinct values
Resp - response cost
Card - cardinality
Resc - resource cost
NL - nested loops (join)
SM - sort merge (join)
HA - hash (join)
CPUSPEED - CPU Speed
IOTFRSPEED - I/O transfer speed
IOSEEKTIM - I/O seek time
SREADTIM - average single block read time
MREADTIM - average multiblock read time
MBRC - average multiblock read count
MAXTHR - maximum I/O system throughput
SLAVETHR - average slave I/O throughput
dmeth - distribution method
1: no partitioning required
2: value partitioned
4: right is random (round-robin)
128: left is random (round-robin)
8: broadcast right and partition left
16: broadcast left and partition right
32: partition left using partitioning of right
64: partition right using partitioning of left
256: run the join in serial
0: invalid distribution method
sel - selectivity
ptn - partition
PARAMETERS USED BY THE OPTIMIZER
PARAMETERS WITH ALTERED VALUES
Compilation Environment Dump
pgamax_size = 1258280 KB
parallel_query_default_dop = 32
db_file_multiblock_read_count = 16
Bug Fix Control Environment
PARAMETERS IN OPT_PARAM HINT
Column Usage Monitoring is ON: tracking level = 1
Considering Query Transformations on query block SEL$1 (#0)
Query transformations (QT)
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No relevant table found.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for afjvvjmx6tqgr.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
Common Subexpression elimination (CSE)
CSE: CSE not performed on query block SEL$1 (#0).
OBYE: Considering Order-by Elimination from view SEL$1 (#0)
Order-by elimination (OBYE)
OBYE: OBYE bypassed: no order by to eliminate.
CVM: Considering view merge in query block SEL$1 (#0)
query block SEL$1 (#0) unchanged
Considering Query Transformations on query block SEL$1 (#0)
Query transformations (QT)
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No relevant table found.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for afjvvjmx6tqgr.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
Common Subexpression elimination (CSE)
CSE: CSE not performed on query block SEL$1 (#0).
SU: Considering subquery unnesting in query block SEL$1 (#0)
Subquery Unnest (SU)
SJC: Considering set-join conversion in query block SEL$1 (#0)
Set-Join Conversion (SJC)
SJC: not performed
PM: Considering predicate move-around in query block SEL$1 (#0)
Predicate Move-Around (PM)
PM: PM bypassed: Outer query contains no views.
PM: PM bypassed: Outer query contains no views.
query block SEL$1 (#0) unchanged
FPD: Considering simple filter push in query block SEL$1 (#0)
"SUBSCRIPTION"."MSISDN"='600600600' AND "SUBSCRIPTION"."MSISDN_HASH"=86 AND "SUBSCRIPTION"."VALID_FROM"<=TO_DATE(' 2012-02-10 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
try to generate transitive predicate from check constraints for query block SEL$1 (#0)
finally: "SUBSCRIPTION"."MSISDN"='600600600' AND "SUBSCRIPTION"."MSISDN_HASH"=86 AND "SUBSCRIPTION"."VALID_FROM"<=TO_DATE(' 2012-02-10 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
apadrv-start sqlid=12053738773107497463
call(in-use=8176, alloc=32712), compile(in-use=114912, alloc=116848), execution(in-use=175432, alloc=178928)
Peeked values of the binds in SQL statement
Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "SUBSCRIPTION"."CUSTOMER_ID" "CUSTOMER_ID","SUBSCRIPTION"."IDENT_SOURCE_ID" "IDENT_SOURCE_ID","SUBSCRIPTION"."ACCOUNT_ID" "ACCOUNT_ID","SUBSCRIPTION"."MSISDN" "MSISDN","SUBSCRIPTION"."IMSI" "IMSI","SUBSCRIPTION"."SIM" "SIM","SUBSCRIPTION"."IMEI" "IMEI","SUBSCRIPTION"."MEID" "MEID","SUBSCRIPTION"."EMAIL" "EMAIL","SUBSCRIPTION"."TELCOOP" "TELCOOP" FROM "DBIDENT2"."SUBSCRIPTION" "SUBSCRIPTION" WHERE "SUBSCRIPTION"."MSISDN"='600600600' AND "SUBSCRIPTION"."MSISDN_HASH"=86 AND "SUBSCRIPTION"."VALID_FROM"<=TO_DATE(' 2012-02-10 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
kkoqbc: optimizing query block SEL$1 (#0)
call(in-use=8320, alloc=32712), compile(in-use=115880, alloc=116848), execution(in-use=175432, alloc=178928)
kkoqbc-subheap (create addr=0x2b24ebece950)
QUERY BLOCK TEXT
select CUSTOMER_ID, IDENT_SOURCE_ID, ACCOUNT_ID, MSISDN, IMSI, SIM, IMEI, MEID, EMAIL, TELCOOP
from subscription
where MSISDN = '600600600' AND MSISDN_HASH = 86
AND VALID_FROM <=TO_DATE('2012-02-10 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
QUERY BLOCK SIGNATURE
signature (optimizer): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=0 objn=848731 hint_alias="SUBSCRIPTION"@"SEL$1"
SYSTEM STATISTICS INFORMATION
Using NOWORKLOAD Stats
CPUSPEEDNW: 714 millions instructions/sec (default is 100)
IOTFRSPEED: 4096 bytes per millisecond (default is 4096)
IOSEEKTIM: 10 milliseconds (default is 10)
MBRC: -1 blocks (default is 16)
BASE STATISTICAL INFORMATION
Table Stats::
Table: SUBSCRIPTION Alias: SUBSCRIPTION Partition [87]
#Rows: 218104 #Blks: 11008 AvgRowLen: 129.00 ChainCnt: 0.00
#Rows: 218104 #Blks: 11008 AvgRowLen: 129.00 ChainCnt: 0.00
Index Stats::
Index: SUBSCRIPTION_NDX_ACCID Col#: 3
LVLS: 3 #LB: 121036 #DK: 9767936 LB/K: 1.00 DB/K: 1.00 CLUF: 13921256.00
Index: SUBSCRIPTION_NDX_CUSID Col#: 1 2 18
LVLS: 3 #LB: 142123 #DK: 24665396 LB/K: 1.00 DB/K: 1.00 CLUF: 24842146.00
Index: SUBSCRIPTION_NDX_EMAIL Col#: 9
LVLS: 2 #LB: 8365 #DK: 1361827 LB/K: 1.00 DB/K: 1.00 CLUF: 1361798.00
Index: SUBSCRIPTION_NDX_EXT1 Col#: 19
LVLS: 2 #LB: 65756 #DK: 67792 LB/K: 1.00 DB/K: 80.00 CLUF: 5446485.00
Index: SUBSCRIPTION_NDX_IMEI Col#: 7
LVLS: 2 #LB: 44539 #DK: 9199616 LB/K: 1.00 DB/K: 1.00 CLUF: 10413439.00
Index: SUBSCRIPTION_NDX_IMSI Col#: 5
LVLS: 3 #LB: 92914 #DK: 12846080 LB/K: 1.00 DB/K: 1.00 CLUF: 23472821.00
Index: SUBSCRIPTION_NDX_MEID Col#: 8
LVLS: 1 #LB: 132 #DK: 12585 LB/K: 1.00 DB/K: 1.00 CLUF: 18419.00
Index: SUBSCRIPTION_NDX_MSISDN Col#: 4 PARTITION [87]
LVLS: 2 #LB: 1092 #DK: 74848 LB/K: 1.00 DB/K: 2.00 CLUF: 191920.00
LVLS: 2 #LB: 1092 #DK: 74848 LB/K: 1.00 DB/K: 2.00 CLUF: 191920.00
Index: SUBSCRIPTION_NDX_SIM Col#: 6
LVLS: 2 #LB: 88153 #DK: 13169664 LB/K: 1.00 DB/K: 1.00 CLUF: 24727298.00
Index: SUBSCRIPTION_NDX_SRCID Col#: 2 17
LVLS: 2 #LB: 81729 #DK: 4 LB/K: 20432.00 DB/K: 257314.00 CLUF: 1029257.00
Access path analysis for SUBSCRIPTION
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for SUBSCRIPTION[SUBSCRIPTION]
*** 2012-06-12 12:34:53.283
** Performing dynamic sampling initial checks. **
Column (#14):
NewDensity:0.000020, OldDensity:0.000366 BktCnt:254, PopBktCnt:22, PopValCnt:1, NDV:46252
Column (#14):
NewDensity:0.000163, OldDensity:0.000378 BktCnt:254, PopBktCnt:12, PopValCnt:1, NDV:5852
Column (#14): VALID_FROM( Part#: 87
AvgLen: 8 NDV: 5852 Nulls: 2 Density: 0.000163 Min: 2450364 Max: 2456082
Histogram: HtBal #Bkts: 254 UncompBkts: 254 EndPtVals: 244
Column (#14): VALID_FROM(
AvgLen: 8 NDV: 5852 Nulls: 2 Density: 0.000163 Min: 2450364 Max: 2456082
Histogram: HtBal #Bkts: 254 UncompBkts: 254 EndPtVals: 244
Column (#4):
NewDensity:0.000000, OldDensity:0.000000 BktCnt:254, PopBktCnt:0, PopValCnt:0, NDV:9730048
Column (#4):
NewDensity:0.000013, OldDensity:0.000033 BktCnt:254, PopBktCnt:0, PopValCnt:0, NDV:74848
Column (#4): MSISDN( Part#: 87
AvgLen: 10 NDV: 74848 Nulls: 0 Density: 0.000013
Histogram: HtBal #Bkts: 254 UncompBkts: 254 EndPtVals: 255
Column (#4): MSISDN(
AvgLen: 10 NDV: 74848 Nulls: 0 Density: 0.000013
Histogram: HtBal #Bkts: 254 UncompBkts: 254 EndPtVals: 255
** Dynamic sampling initial checks returning TRUE (level = 6).
*** 2012-06-12 12:34:53.284
** Generated dynamic sampling query:
query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("SUBSCRIPTION") FULL("SUBSCRIPTION") NO_PARALLEL_INDEX("SUBSCRIPTION") */ 1 AS C1, CASE WHEN "SUBSCRIPTION"."MSISDN"='600600600' AND "SUBSCRIPTION"."VALID_FROM"<=TO_DATE(' 2012-02-10 00:00:00', 'syyyy-mm-dd hh24:mi:ss') THEN 1 ELSE 0 END AS C2 FROM "DBIDENT2"."SUBSCRIPTION" SAMPLE BLOCK (1.153706 , 1) SEED (1) "SUBSCRIPTION" WHERE "SUBSCRIPTION"."MSISDN"='600600600' AND "SUBSCRIPTION"."VALID_FROM"<=TO_DATE(' 2012-02-10 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) SAMPLESUB
*** 2012-06-12 12:36:44.452
** Executed dynamic sampling query:
level : 6
sample pct. : 1.153706
total partitions : 1
partitions for sampling : 1
actual sample size : 342182
filtered sample card. : 0
orig. card. : 218104
block cnt. table stat. : 11008
block cnt. for sampling: 11008
max. sample block cnt. : 128
sample block cnt. : 127
min. sel. est. : 0.00001260
** Not using dynamic sampling for single table sel. or cardinality.
DS Failed for : ----- Current SQL Statement for this session (sql_id=afjvvjmx6tqgr) -----
explain plan for
select CUSTOMER_ID, IDENT_SOURCE_ID, ACCOUNT_ID, MSISDN, IMSI, SIM, IMEI, MEID, EMAIL, TELCOOP
from subscription
where MSISDN = '600600600' AND MSISDN_HASH = 86
AND VALID_FROM <=TO_DATE('2012-02-10 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
Column (#21):
NewDensity:0.002912, OldDensity:0.000000 BktCnt:14078, PopBktCnt:14078, PopValCnt:126, NDV:126
Column (#21): MSISDN_HASH( Part#: 87
AvgLen: 3 NDV: 1 Nulls: 0 Density: 0.000002 Min: 86 Max: 86
Histogram: Freq #Bkts: 1 UncompBkts: 13365 EndPtVals: 1
Column (#21): MSISDN_HASH(
AvgLen: 3 NDV: 1 Nulls: 0 Density: 0.000002 Min: 86 Max: 86
Histogram: Freq #Bkts: 1 UncompBkts: 13365 EndPtVals: 1
Column (#1):
NewDensity:0.000000, OldDensity:0.000241 BktCnt:254, PopBktCnt:31, PopValCnt:2, NDV:9768960
Column (#1):
NewDensity:0.000009, OldDensity:0.000250 BktCnt:254, PopBktCnt:36, PopValCnt:3, NDV:99208
Column (#1): CUSTOMER_ID( Part#: 87
AvgLen: 11 NDV: 99208 Nulls: 0 Density: 0.000009
Histogram: HtBal #Bkts: 254 UncompBkts: 254 EndPtVals: 222
Column (#1): CUSTOMER_ID(
AvgLen: 11 NDV: 99208 Nulls: 0 Density: 0.000009
Histogram: HtBal #Bkts: 254 UncompBkts: 254 EndPtVals: 222
Column (#2):
NewDensity:0.000639, OldDensity:0.000000 BktCnt:14078, PopBktCnt:14078, PopValCnt:3, NDV:3
Column (#2):
NewDensity:0.000786, OldDensity:0.000002 BktCnt:13365, PopBktCnt:13365, PopValCnt:3, NDV:3
Column (#2): IDENT_SOURCE_ID( Part#: 87
AvgLen: 5 NDV: 3 Nulls: 0 Density: 0.000786
Histogram: Freq #Bkts: 3 UncompBkts: 13365 EndPtVals: 3
Column (#2): IDENT_SOURCE_ID(
AvgLen: 5 NDV: 3 Nulls: 0 Density: 0.000786
Histogram: Freq #Bkts: 3 UncompBkts: 13365 EndPtVals: 3
ColGroup (#1, Index) SUBSCRIPTION_NDX_CUSID
Col#: 1 2 18 CorStregth: -1.00
ColGroup (#2, Index) SUBSCRIPTION_NDX_SRCID
Col#: 2 17 CorStregth: -1.00
ColGroup Usage:: PredCnt: 2 Matches Full: Partial:
***** Virtual column Adjustment ******
Column name MSISDN_HASH
cost_cpu 2300.00
cost_io 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00
***** End virtual column Adjustment ******
Table: SUBSCRIPTION Alias: SUBSCRIPTION
Card: Original: 218104.000000 Rounded: 3 Computed: 2.75 Non Adjusted: 2.75
Access Path: TableScan
Cost: 2420.71 Resp: 672.42 Degree: 0
Cost_io: 2409.00 Cost_cpu: 100334308
Resp_io: 669.17 Resp_cpu: 27870641
kkofmx: index filter:"SUBSCRIPTION"."MSISDN_HASH"=86
***** Virtual column Adjustment ******
Column name MSISDN_HASH
cost_cpu 2300.00
cost_io 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00
***** End virtual column Adjustment ******
Access Path: index (AllEqRange)
Index: SUBSCRIPTION_NDX_MSISDN
resc_io: 6.00 resc_cpu: 36840
ix_sel: 0.000013 ix_sel_with_filters: 0.000013
***** Logdef predicate Adjustment ******
Final IO cst 0.00 , CPU cst 2300.00
***** End Logdef Adjustment ******
Cost: 6.01 Resp: 6.01 Degree: 1
Best:: AccessPath: IndexRange
Index: SUBSCRIPTION_NDX_MSISDN
Cost: 6.01 Degree: 1 Resp: 6.01 Card: 2.75 Bytes: 0
OPTIMIZER STATISTICS AND COMPUTATIONS
GENERAL PLANS
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]: SUBSCRIPTION[SUBSCRIPTION]#0
Best so far: Table#: 0 cost: 6.0051 card: 2.7487 bytes: 291
****** Recost for parallel table scan *******
Access path analysis for SUBSCRIPTION
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for SUBSCRIPTION[SUBSCRIPTION]
*** 2012-06-12 12:36:44.454
** Performing dynamic sampling initial checks. **
** TABLE SUBSCRIPTION Alias: SUBSCRIPTION : reused cached dynamic sampling result (failure).
ColGroup Usage:: PredCnt: 2 Matches Full: Partial:
***** Virtual column Adjustment ******
Column name MSISDN_HASH
cost_cpu 2300.00
cost_io 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00
***** End virtual column Adjustment ******
Table: SUBSCRIPTION Alias: SUBSCRIPTION
Card: Original: 218104.000000 Rounded: 3 Computed: 2.75 Non Adjusted: 2.75
Access Path: TableScan
Cost: 2420.71 Resp: 672.42 Degree: 0
Cost_io: 2409.00 Cost_cpu: 100334308
Resp_io: 669.17 Resp_cpu: 27870641
Best:: AccessPath: TableScan
Cost: 672.42 Degree: 4 Resp: 672.42 Card: 2.75 Bytes: 97
Join order[1]: SUBSCRIPTION[SUBSCRIPTION]#0
Join order aborted: cost > best plan cost
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:2000
Number of join permutations tried: 1
Enumerating distribution method (advanced)
Trying or-Expansion on query block SEL$1 (#0)
Transfer Optimizer annotations for query block SEL$1 (#0)
id=0 frofkks[i] (index start key) predicate="SUBSCRIPTION"."MSISDN"='600600600'
id=0 frofkke[i] (index stop key) predicate="SUBSCRIPTION"."MSISDN"='600600600'
id=0 frofand predicate="SUBSCRIPTION"."VALID_FROM"<=TO_DATE(' 2012-02-10 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
Final cost for query block SEL$1 (#0) - All Rows Plan:
Best join order: 1
Cost: 6.0051 Degree: 1 Card: 3.0000 Bytes: 291
Resc: 6.0051 Resc_io: 6.0000 Resc_cpu: 43740
Resp: 6.0051 Resp_io: 6.0000 Resc_cpu: 43740
kkoqbc-subheap (delete addr=0x2b24ebece950, in-use=21280, alloc=32840)
kkoqbc-end:
call(in-use=252920, alloc=343912), compile(in-use=129048, alloc=133000), execution(in-use=192248, alloc=195240)
kkoqbc: finish optimizing query block SEL$1 (#0)
apadrv-end
call(in-use=252920, alloc=343912), compile(in-use=129960, alloc=133000), execution(in-use=192248, alloc=195240)
Starting SQL statement dump
user_id=115 user_name=xxx module=SQL*Plus action=
sql_id=afjvvjmx6tqgr plan_hash_value=1672204165 problem_type=3
----- Current SQL Statement for this session (sql_id=afjvvjmx6tqgr) -----
explain plan for
select CUSTOMER_ID, IDENT_SOURCE_ID, ACCOUNT_ID, MSISDN, IMSI, SIM, IMEI, MEID, EMAIL, TELCOOP
from subscription
where MSISDN = '600600600' AND MSISDN_HASH = 86
AND VALID_FROM <=TO_DATE('2012-02-10 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
sql_text_length=266
sql=explain plan for
select CUSTOMER_ID, IDENT_SOURCE_ID, ACCOUNT_ID, MSISDN, IMSI, SIM, IMEI, MEID, EMAIL, TELCOOP
from subscription
where MSISDN = '600600600' AND MSISDN_HASH = 86
AND VALID_FROM <=TO_DATE('2012-02-10 00:00:00', 'YYYY-MM-DD HH2
sql=4:MI:SS')
----- Explain Plan Dump -----
----- Plan Table -----
============
Plan Table
============
-----------------------------------------------------------------------------------------------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------+
| 0 | SELECT STATEMENT | | | | 6 | | | |
| 1 | PARTITION LIST SINGLE | | 3 | 291 | 6 | 00:00:01 | 88 | 88 |
| 2 | TABLE ACCESS BY LOCAL INDEX ROWID | SUBSCRIPTION | 3 | 291 | 6 | 00:00:01 | 88 | 88 |
| 3 | INDEX RANGE SCAN | SUBSCRIPTION_NDX_MSISDN| 3 | | 3 | 00:00:01 | 88 | 88 |
-----------------------------------------------------------------------------------------------------------------------+
Predicate Information:
2 - filter("VALID_FROM"<=TO_DATE(' 2012-02-10 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
3 - access("MSISDN"='600600600')
Content of other_xml column
===========================
nodeid/pflags: 1 1 db_version : 11.2.0.2
parse_schema : xxx
plan_hash : 1672204165
plan_hash_2 : 1960934971
Outline Data:
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.2')
DB_VERSION('11.2.0.2')
OPT_PARAM('optimizer_dynamic_sampling' 6)
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX_RS_ASC(@"SEL$1" "SUBSCRIPTION"@"SEL$1" ("SUBSCRIPTION"."MSISDN"))
END_OUTLINE_DATA
Query Block Registry:
SEL$1 0xf4ea2a20 (PARSER) [FINAL]
call(in-use=259392, alloc=343912), compile(in-use=170344, alloc=270888), execution(in-use=344120, alloc=346656)
End of Optimizer State Dump
Dumping Hints
=============
====================== END SQL Statement Dump ======================
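One observation on the trace itself (a guess, not a confirmed answer): the timestamps around the generated dynamic sampling query (12:34:53 to 12:36:44) bracket its execution, so most of the explain time appears to be spent running that sampling query against the partition; the outline also shows OPT_PARAM('optimizer_dynamic_sampling' 6). A sketch of how one might test this, assuming the statement-level DYNAMIC_SAMPLING hint is acceptable here:
explain plan for
select /*+ dynamic_sampling(0) */ CUSTOMER_ID, IDENT_SOURCE_ID, ACCOUNT_ID, MSISDN, IMSI, SIM, IMEI, MEID, EMAIL, TELCOOP
from dbident2.subscription
where MSISDN = '600489461'
AND MSISDN_HASH = 86
AND VALID_FROM <= TO_DATE('2012-02-10 00:00:00', 'YYYY-MM-DD HH24:MI:SS');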
Edited by: user3754081 on 2012-06-12 08:07
