Query spends most time on report CL_RSDM_READ_MASTER_DATA
Hi Experts,
I have created some queries on a custom cube with about 8M rows of data. I have compressed and partitioned the cube (on CALMONTH) and built a relevant aggregate that is being picked up by the query. The OLAP cache also seems to be working properly.
However, the run times are still quite long (over 65 sec) for queries selecting larger data sets (say, 2 months of data). When I watch the query running in SM50, I notice it spends most of its time in report CL_RSDM_READ_MASTER_DATA. In my cube, I have made several attributes of one characteristic (Account#) navigational, and some of the query filtering is done on these nav attributes. The report also displays several of the attributes (nav and display) from this characteristic - I am not sure if that is causing the long reads from the master data P tables. I have played around with creating indexes on the X table for some of these attributes, but that seems to make no difference either.
Any clues? Am I missing something simple?
cheers,
Darryl
This note might be helpful. Not sure if it applies to your environment or not.
https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1237104
Similar Messages
-
How to find out the type of sites that i spend most of my time on
I want to know what types of sites I spend most of my time on, and whether Firefox tracks these sites and can tell me at the end of the day how long I spent on them. For example, how long I spend on shopping sites or news sites, things like that.
Try this extension:
https://addons.mozilla.org/en-US/firefox/addon/aboutme/ -
The last time I did an update I lost most of my music... thousands of songs... and had to reinstall them!!!! I am scared as **** to do another update!! I have radio shows to do and can't afford to spend the time on this. Help, anyone?
A bit late for this advice now, but hopefully you'll know what to do next time.
Empty/corrupt library after upgrade/crash
Hopefully it's not been too long since you last upgraded iTunes, in fact if you get an empty/incomplete library immediately after upgrading then with the following steps you shouldn't lose a thing or need to do any further housekeeping. In the Previous iTunes Libraries folder should be a number of dated iTunes Library files. Take the most recent of these and copy it into the iTunes folder. Rename iTunes Library.itl as iTunes Library (Corrupt).itl and then rename the restored file as iTunes Library.itl. Start iTunes. Should all be good, bar any recent additions to or deletions from your library.
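For anyone who prefers to script the shuffle, the restore steps above can be sketched in shell. Everything here is illustrative: the script builds a throwaway demo folder (the real iTunes folder is typically inside your Music folder), and the dated backup names are made up:

```shell
# Demo fixture: a fake iTunes folder standing in for the real one.
ITUNES="$(mktemp -d)/iTunes"
PREV="$ITUNES/Previous iTunes Libraries"
mkdir -p "$PREV"
echo "current" > "$ITUNES/iTunes Library.itl"
echo "backup1" > "$PREV/iTunes Library 2023-01-01.itl"
echo "backup2" > "$PREV/iTunes Library 2023-06-01.itl"

# 1. Pick the most recent dated backup (ISO dates sort lexicographically).
LATEST=$(ls "$PREV" | sort | tail -n 1)

# 2. Copy it into the iTunes folder.
cp "$PREV/$LATEST" "$ITUNES/"

# 3. Set the possibly-corrupt current library aside.
mv "$ITUNES/iTunes Library.itl" "$ITUNES/iTunes Library (Corrupt).itl"

# 4. Promote the restored backup to be the active library.
mv "$ITUNES/$LATEST" "$ITUNES/iTunes Library.itl"
```

After this, starting iTunes picks up the restored library, and the corrupt one is kept around in case anything went wrong.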
See iTunes Folder Watch for a tool to catch up with any changes since the backup file was created.
When you get it all working make a backup!
tt2 -
I want to upgrade a RAC 10.2.0.2 database (2 instances) to 10.2.0.5. The DB size is 4 TB and contains mostly .pdf files and reports. I do not have a test/upgrade environment. How much time can I expect it to take?
How much does the upgrade process depend on database size, in this case?
I have fast boxes: IBM AIX 5.3, 64-bit. Appreciate your comments...
Yes, it sounds stupid regarding the test environment, but in reality it is not that unusual:
- I do have a "preprod" environment (2-instance RAC) on the same releases of OS, OCW, and RAC... but the DB size is 40 GB!!! It has all the objects production has, but (again, but) it does not have a physical standby, as production does. This is not that unusual, given that the production DB grows very fast and very large, and the business does not want to keep preprod, dev, and test databases at the same HUGE size as production. Savings...
- My DB upgrade depends on whether the application running on top of it is certified for the higher RDBMS release... and it looks like they have some problems there.
- Very encouraging is the comment that a DB upgrade does not depend that much on DB size...
- I do not recall where this info came from, but I have heard that R11.* treats PDF files in a different way than R10 did... and because of that, upgrading big databases with many items of that type (pdf) can take very long. Any idea about this?
- Brian, thanks... I'm trying hard to provision a DB of similar size in the UPG environment and face all possible pitfalls before I go to production... thanks for your advice. I'm really interested in how an upgrade from 10.* to 11.* treats my PDFs... all my TBs come from PDFs. Every piece of advice will be welcome... -
Hello! I am using the JRC. While generating a report for viewing, I noticed one interesting thing: the JRC executes the query two times, and the first time it is executed with the default parameters. If the default parameters are NULL, the query can be invalid (even if the report is marked to convert all database NULLs to defaults in Report Options).
1. Why do you need to execute the query the first time with these default parameters, which are later replaced by the real ones? We have to define valid default parameters just to make everything work, and executing an unnecessary query is not efficient.
2. If I'm wrong, could you please explain? If I'm right, is it a bug, and when will it be fixed?
Waiting for an answer,
Anton Stalnuhhin
Java-developer, Webmedia AS.
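I can't speak to the JRC internals, but the NULL-default symptom Anton describes is easy to reproduce in plain SQL: a comparison against a NULL parameter matches nothing, so a first pass run with NULL defaults returns an empty (and, depending on the query, invalid) result. A small sketch using Python's sqlite3 as a stand-in for the real database:

```python
import sqlite3

# In-memory database with a tiny stand-in table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?)", [(1,), (2,), (2,)])

query = "SELECT count(*) FROM orders WHERE customer_id = ?"

# First pass with a NULL default parameter: "customer_id = NULL" is never true.
empty = conn.execute(query, (None,)).fetchone()[0]

# Second pass with the real parameter value.
real = conn.execute(query, (2,)).fetchone()[0]

print(empty, real)  # 0 2
```

Which is why making the defaults valid values, as Anton did, makes the first pass harmless.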
-
How to get the report's SQL query as generated at run time
There is a standard Payroll report that shows employee transfer information based on location, grade, job, or organization. I want to get the actual query generated at run time in Report Builder, including the single-column parameters and the lexical parameters. The query itself is not complicated, but there are so many parameters and lexical parameters that it is not easy to understand it just by copy-pasting it into Toad or PL/SQL Developer.
Kindly share your experience of getting this kind of query in your work.
Thanks.
Here I try to explain the content of the query.
Parameter
P_DEPTNO = 10
P_WHERE_CLAUSE := ' AND EMPNO IS NOT NULL AND SALARY > 100'
SELECT * FROM EMP
WHERE DEPTNO = P_DEPTNO
&P_WHERE_CLAUSE
THE REPORT WILL GENERATE THE FOLLOWING QUERY AT RUN TIME:
SELECT * FROM EMP
WHERE DEPTNO = 10
AND EMPNO IS NOT NULL AND SALARY > 100
Now I want to get this run-time query out using some Oracle database feature or something similar.
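As a sketch of what happens at run time (this mimics the substitution only, not Report Builder itself; `expand_query` is a made-up helper): a lexical parameter (`&NAME`) is spliced into the SQL text as-is, while a bind parameter (`:NAME`) is replaced by a value.

```python
def expand_query(template: str, binds: dict, lexicals: dict) -> str:
    """Return the run-time SQL: lexical params are textually spliced in,
    bind params are replaced by their literal values (for display only)."""
    sql = template
    for name, text in lexicals.items():
        sql = sql.replace("&" + name, text)
    for name, value in binds.items():
        sql = sql.replace(":" + name, str(value))
    return sql

template = "SELECT * FROM EMP WHERE DEPTNO = :P_DEPTNO &P_WHERE_CLAUSE"
runtime_sql = expand_query(
    template,
    binds={"P_DEPTNO": 10},
    lexicals={"P_WHERE_CLAUSE": "AND EMPNO IS NOT NULL AND SALARY > 100"},
)
print(runtime_sql)
# SELECT * FROM EMP WHERE DEPTNO = 10 AND EMPNO IS NOT NULL AND SALARY > 100
```

In a real report, the same effect can usually be observed by writing out the assembled lexical values before the query runs, but the expansion logic is as simple as above.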
thanks -
Calculations in a query taking a long time to load the target table
Hi,
I am pulling approximately 45 million records using the query below in an SSIS package, which pulls from a DB on one server and loads the results into a target table on another server. In the SELECT I have a calculation for 6 columns. The target table is truncated and loaded every day. Most of the source columns used in the calculations contain 0, and the load took approximately 1 hour 45 minutes. Is there any way to reduce the load time? Alternatively, can I do the calculations after all 47M records are loaded, and then calculate only for the non-zero records?
SELECT T1.Col1,
T1.Col2,
T1.Col3,
T2.Col1,
T2.Col2,
T3.Col1,
convert( numeric(8,5), (convert( numeric,T3.COl2) / 1000000)) AS Colu2,
convert( numeric(8,5), (convert( numeric,T3.COl3) / 1000000)) AS Colu3,
convert( numeric(8,5), (convert( numeric,T3.COl4) / 1000000)) AS Colu4,
convert( numeric(8,5),(convert( numeric, T3.COl5) / 1000000)) AS Colu5,
convert( numeric(8,5), (convert( numeric,T3.COl6) / 1000000)) AS Colu6,
convert( numeric(8,5), (convert( numeric,T3.COl7) / 1000000)) AS Colu7
FROM Tab1 T1
JOIN Tab2 T2
ON (T1.Col1 = T2.Col1)
JOIN Tab3 T3
ON (Tab3.Col9 =Tab3.Col9)
Anand
So 45 or 47? Nevertheless...
This is hardly a heavy calculation; the savings will be minimal. Anything numeric is generally easy on the CPU.
But
convert( numeric(8,5), (convert( numeric,T3.COl7) / 1000000))
is not optimal. Dividing by a decimal literal removes the inner CONVERT, for example:
CONVERT( NUMERIC(8,5), 300 / 1000000.00000 )
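For what it's worth, the two forms produce the same numeric(8,5) value; the point is only that the single-step version spares one CONVERT per row. A quick check with Python's decimal module (standing in for SQL Server's numeric arithmetic, so the rounding detail is an assumption):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_numeric_8_5(value: Decimal) -> Decimal:
    """Round to 5 decimal places, like CONVERT(NUMERIC(8,5), ...)."""
    return value.quantize(Decimal("0.00001"), rounding=ROUND_HALF_UP)

raw = 300  # e.g. an integer value as stored in T3.Col7

# Original form: convert to numeric first, then divide, then convert again.
double_convert = to_numeric_8_5(Decimal(raw) / Decimal(1_000_000))

# Suggested form: divide by a decimal literal, convert once.
single_step = to_numeric_8_5(Decimal(raw) / Decimal("1000000.00000"))

print(double_convert, single_step)  # 0.00030 0.00030
```

Same answer, one fewer conversion per row; as noted above, though, this is not where the bulk of the time goes.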
Now it boils down to how to make the load faster: do it in parallel. Find how many sockets the machine has and split the table into that many chunks. Also profile to find out where the time is actually spent. I have seen cases where the network was the bottleneck, so you may want to play with buffers and packet sizes; for example, if OLE DB is used, double the packet size and see if it runs faster, then double it again, and so forth.
To help you further, we need to know more, e.g. what the source and destination are and how you configured the load.
Please understand that there is no silver bullet or blanket solution, and you need to state your desired load time. E.g., if you tell me it needs to load in 5 minutes, I will have to pass on your request.
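The chunking idea can be sketched outside SSIS (inside SSIS you would use multiple parallel data-flow paths instead; the `load_chunk` body below is a placeholder for a ranged SELECT against a hypothetical key column):

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_ranges(total_rows: int, chunks: int):
    """Split [0, total_rows) into roughly equal half-open ranges."""
    size, rem = divmod(total_rows, chunks)
    ranges, start = [], 0
    for i in range(chunks):
        end = start + size + (1 if i < rem else 0)
        ranges.append((start, end))
        start = end
    return ranges

def load_chunk(rng):
    # Placeholder for "SELECT ... WHERE row_key >= :start AND row_key < :end"
    start, end = rng
    return end - start  # pretend we loaded this many rows

# e.g. 45 million rows split across 8 parallel workers (one per socket/core)
ranges = chunk_ranges(45_000_000, 8)
with ThreadPoolExecutor(max_workers=8) as pool:
    loaded = sum(pool.map(load_chunk, ranges))
print(loaded)  # 45000000
```

Each worker then reads and writes its own disjoint slice, which is what makes the load scale with the number of cores until the disk or network saturates.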
Arthur
MyBlog
Twitter -
Approach to tuning a query in a short time
Hi All,
Oracle 10g. I know this question has been asked a number of times and there are many good replies to it.
But I just want to know how to approach a completely new query (like the task given to me to fine-tune a query in 1 day, when I don't have even the slightest idea how to proceed) when the timeline is very stringent and you have to make a decision just by looking at the explain plan.
I am posting my query here, and what I am looking for is some lead on how to identify the congestion point, i.e. where this query takes a long time (in my case some 15 minutes, as reported to me).
select
"LEGAL ENTITY",
"Legal Entity Description",
"Cluster",
"Sub_Cluster",
"Account",
rownum,
"Moody_Rating",
"Process_Date",
"Merge_Description",
rownum,
"Merge_Description",
"is_id_ic",
"is_n",
"cusip",
"isin",
"credit_spread_PV01",
"amount",
"Market_Value",
"Currency",
"Sensitivity_Type",
"maturity_Date",
"Exception_Flag",
"Base_Security_Id",
DECODE(sign("Market_Value"),-1,DeCode(SigN("Recovery"),-1,"Recovery",('-'||"Recovery")), ABS("Recovery")) as "Recovery"
from
(select
le.name "LEGAL ENTITY",
le.display_name "Legal Entity Description",
mn4.display_name "Cluster",
mn3.display_name "Sub_Cluster",
bookname.display_name "Account",
(SELECT RATING_NAME
FROM moody_rating
where moody_rating_id = i.moody_rating_id) "Moody_Rating",
to_char(to_date(:v_cob_date,'DD-MM-YY'),'YYYYMMDD') "Process_Date",
ss.issuer "Merge_Description",
PART.MARS_ISSUER "is_id_ic",
PART.PARTICIPANT_NAME "is_n",
NULL "cusip",
NULL "isin",
NULL "credit_spread_PV01",
NULL "amount",
sum(mtmsens.sensitivity_value) "Market_Value",
(SELECT distinct cc.CCY
FROM legacy_country CC
INNER JOIN MARSNODE MN ON CC.countryisocode = MN.NAME
and mn.close_date is null
INNER JOIN MARSNODETYPE MNT ON MN.TYPE_ID =
MNT.NODE_TYPE_ID
AND MNT.NAME = 'COUNTRY'
and mnt.close_date is null
where MN.NODE_ID = part.country_domicile_id
and cc.begin_cob_date <= :v_cob_date
and cc.end_cob_date > :v_cob_date
and rownum < 2) "Currency",
'CREDITSPREADMARKETVALUE' "Sensitivity_Type",
NULL "maturity_Date",
NULL "Exception_Flag",
NULL "Base_Security_Id",
sum(ss.sensitivity_value) "Recovery"
from staging_position sp
left JOIN position p on (
p.feed_instance_id = sp.feed_instance_id
AND p.feed_row_id = sp.feed_row_id)
left JOIN staging_instrument si on (si.feed_instance_id =
sp.feed_instance_id AND
si.position_key =
sp.position_key)
left join book b on (b.book_id = p.book_id and
b.begin_cob_date <= :v_cob_date and
b.end_cob_date > :v_cob_date)
left join marsnode bk on (b.book_id = bk.node_id and
bk.close_date is null)
left join marsnode le on (b.leg_ent_id = le.node_id and
le.close_date is null)
left join marsnode bookname on (bookname.node_id = p.book_id and
bookname.close_date is null)
left join marsnodelink mnl on p.book_id = mnl.node_id
and :v_bus_org_hier_id =
mnl.hierarchy_id
and mnl.close_date is null
and :v_cob_date >= mnl.begin_cob_date
and :v_cob_date < mnl.end_cob_date
left join marsnode mn on mn.node_id = mnl.parent_id
and mn.close_date is null
left join marsnodelink mnl2 on mn.node_id = mnl2.node_id
and :v_bus_org_hier_id =
mnl2.hierarchy_id
and mnl2.close_date is null
and :v_cob_date >= mnl2.begin_cob_date
and :v_cob_date < mnl2.end_cob_date
left join marsnode mn2 on mn2.node_id = mnl2.parent_id
and mn2.close_date is null
left join marsnodelink mnl3 on mn2.node_id = mnl3.node_id
and :v_bus_org_hier_id =
mnl3.hierarchy_id
and mnl3.close_date is null
and :v_cob_date >= mnl3.begin_cob_date
and :v_cob_date < mnl3.end_cob_date
left join marsnode mn3 on mn3.node_id = mnl3.parent_id
and mn3.close_date is null
left join marsnodelink mnl4 on mn3.node_id = mnl4.node_id
and :v_bus_org_hier_id =
mnl4.hierarchy_id
and mnl4.close_date is null
and :v_cob_date >= mnl4.begin_cob_date
and :v_cob_date < mnl4.end_cob_date
left join marsnode mn4 on mn4.node_id = mnl4.parent_id
and mn4.close_date is null
--sensitivity data
left JOIN STAGING_SENSITIVITY ss ON (ss.FEED_INSTANCE_ID =
sp.FEED_INSTANCE_ID AND
ss.FEED_ROW_ID =
sp.FEED_ROW_ID)
--sensitivity data
left JOIN STAGING_SENSITIVITY mtmsens ON (mtmsens.FEED_INSTANCE_ID =
sp.FEED_INSTANCE_ID AND
mtmsens.FEED_ROW_ID =
sp.FEED_ROW_ID)
LEFT join xref_domain_value_map XREF on (XREF.Src_Value =
ss.issuer and
XREF.close_action_id is null and
XREF.Begin_Cob_Date <=
:v_cob_date and
XREF.End_Cob_Date >
:v_cob_date AND
xref.domain_map_id = 601 AND
xref.source_system_id = 307 AND xref.ISSUE_ID is not null)
Left join ISSUE i on (i.issue_id = xref.issue_id)
LEFT join participant PART ON (PART.PARTICIPANT_ID =
XREF.TGT_VALUE and
PART.Close_Action_Id is null and
PART.Begin_Cob_Date <= :v_cob_date and
PART.End_Cob_Date > :v_cob_date)
left join moody_rating RATING on (rating.moody_rating_id =
i.MOODY_RATING_ID)
where sp.feed_instance_id in
(select fbi.feed_instance_id
from feed_book_status fbi ,
feed_instance fi
where fbi.cob_date = :v_cob_date
and fbi.feed_instance_id = fi.feed_instance_id
and fi.feed_id in (
select feed_id from feed_group_xref where feed_group_id in (
select feed_group_id from feed_group where description like 'CDO Feeds')
and close_action_id is null))
and sp.Feed_Row_Status_Id = 1
and ss.sensitivity_type = 'CREDITSPREADDEFAULT'
and mtmsens.sensitivity_type = 'MTMVALUE'
and le.name='161'
group by le.name,
le.display_name,
mn3.display_name,
mn4.display_name,
mn.display_name,
i.moody_rating_id,
ss.issuer,
PART.MARS_ISSUER,
PART.PARTICIPANT_NAME,
sp.feed_instance_id,
part.country_domicile_id,
bookname.display_name)
And the explain plan:
SELECT STATEMENT, GOAL = CHOOSE Cost=19365 Cardinality=1 Bytes=731
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MOODY_RATING Cost=1 Cardinality=1 Bytes=9
INDEX UNIQUE SCAN Object owner=MARS Object name=PK_MOODY_RATING Cost=0 Cardinality=1
HASH UNIQUE Cost=77 Cardinality=1 Bytes=488
COUNT STOPKEY
HASH JOIN Cost=76 Cardinality=1 Bytes=488
NESTED LOOPS Cost=68 Cardinality=1 Bytes=460
HASH JOIN Cost=66 Cardinality=1 Bytes=450
HASH JOIN Cost=59 Cardinality=1 Bytes=412
NESTED LOOPS Cost=51 Cardinality=1 Bytes=402
HASH JOIN Cost=49 Cardinality=1 Bytes=392
NESTED LOOPS Cost=42 Cardinality=1 Bytes=359
NESTED LOOPS Cost=40 Cardinality=1 Bytes=349
NESTED LOOPS Cost=37 Cardinality=1 Bytes=300
NESTED LOOPS Cost=34 Cardinality=1 Bytes=251
HASH JOIN Cost=32 Cardinality=1 Bytes=241
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=27
NESTED LOOPS Cost=24 Cardinality=1 Bytes=231
NESTED LOOPS Cost=21 Cardinality=1 Bytes=204
NESTED LOOPS Cost=18 Cardinality=1 Bytes=171
NESTED LOOPS Cost=16 Cardinality=1 Bytes=136
NESTED LOOPS Cost=13 Cardinality=1 Bytes=86
NESTED LOOPS Cost=10 Cardinality=1 Bytes=37
VIEW Object owner=MARS Cost=7 Cardinality=1 Bytes=10
FILTER
CONNECT BY WITH FILTERING
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_PARENT_ID Cost=3 Cardinality=250 Bytes=2500
HASH JOIN Cost=5 Cardinality=1 Bytes=62
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHYROOT Cost=2 Cardinality=5 Bytes=175
NESTED LOOPS
CONNECT BY PUMP
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=7 Cardinality=1 Bytes=39
INDEX RANGE SCAN Object owner=MARS Object name=IDX_MNL_HI_PI_NI Cost=3 Cardinality=4
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=27
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=49
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=50
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODETYPE Cost=2 Cardinality=1 Bytes=35
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODETYPE Cost=1 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=NODE_ASSOC Cost=3 Cardinality=1 Bytes=33
INDEX RANGE SCAN Object owner=MARS Object name=PK_NODE_ASSOC Cost=1 Cardinality=3
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
VIEW Object owner=MARS Cost=7 Cardinality=1 Bytes=10
FILTER
CONNECT BY WITH FILTERING
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_PARENT_ID Cost=3 Cardinality=250 Bytes=2500
HASH JOIN Cost=5 Cardinality=1 Bytes=62
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHYROOT Cost=2 Cardinality=5 Bytes=175
NESTED LOOPS
CONNECT BY PUMP
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=7 Cardinality=1 Bytes=39
INDEX RANGE SCAN Object owner=MARS Object name=IDX_MNL_HI_PI_NI Cost=3 Cardinality=4
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1 Bytes=10
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=NODE_ASSOC Cost=3 Cardinality=1 Bytes=49
INDEX RANGE SCAN Object owner=MARS Object name=PK_NODE_ASSOC Cost=1 Cardinality=3
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=49
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1 Bytes=10
VIEW Object owner=MARS Cost=7 Cardinality=1 Bytes=33
FILTER
CONNECT BY WITH FILTERING
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_PARENT_ID Cost=3 Cardinality=250 Bytes=2500
HASH JOIN Cost=5 Cardinality=1 Bytes=62
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHYROOT Cost=2 Cardinality=5 Bytes=175
NESTED LOOPS
CONNECT BY PUMP
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=7 Cardinality=1 Bytes=39
INDEX RANGE SCAN Object owner=MARS Object name=IDX_MNL_HI_PI_NI Cost=3 Cardinality=4
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1 Bytes=10
VIEW Object owner=MARS Cost=7 Cardinality=1 Bytes=10
FILTER
CONNECT BY WITH FILTERING
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_PARENT_ID Cost=3 Cardinality=250 Bytes=2500
HASH JOIN Cost=5 Cardinality=1 Bytes=62
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHYROOT Cost=2 Cardinality=5 Bytes=175
NESTED LOOPS
CONNECT BY PUMP
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=7 Cardinality=1 Bytes=39
INDEX RANGE SCAN Object owner=MARS Object name=IDX_MNL_HI_PI_NI Cost=3 Cardinality=4
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
VIEW Object owner=MARS Cost=7 Cardinality=1 Bytes=38
FILTER
CONNECT BY WITH FILTERING
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_PARENT_ID Cost=3 Cardinality=250 Bytes=2500
HASH JOIN Cost=5 Cardinality=1 Bytes=62
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHYROOT Cost=2 Cardinality=5 Bytes=175
NESTED LOOPS
CONNECT BY PUMP
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=7 Cardinality=1 Bytes=57
INDEX RANGE SCAN Object owner=MARS Object name=IDX_MNL_HI_PI_NI Cost=3 Cardinality=4
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=36
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1 Bytes=10
VIEW Object owner=MARS Cost=7 Cardinality=1 Bytes=28
FILTER
CONNECT BY WITH FILTERING
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_PARENT_ID Cost=3 Cardinality=250 Bytes=2500
HASH JOIN Cost=5 Cardinality=1 Bytes=62
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHYROOT Cost=2 Cardinality=5 Bytes=175
NESTED LOOPS
CONNECT BY PUMP
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=7 Cardinality=1 Bytes=57
INDEX RANGE SCAN Object owner=MARS Object name=IDX_MNL_HI_PI_NI Cost=3 Cardinality=4
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
COUNT
VIEW Object owner=MARS Cost=19365 Cardinality=1 Bytes=731
HASH GROUP BY Cost=19365 Cardinality=1 Bytes=1112
NESTED LOOPS OUTER Cost=19364 Cardinality=1 Bytes=1112
NESTED LOOPS OUTER Cost=19361 Cardinality=1 Bytes=1040
NESTED LOOPS OUTER Cost=19361 Cardinality=1 Bytes=1037
NESTED LOOPS OUTER Cost=19360 Cardinality=1 Bytes=1019
NESTED LOOPS OUTER Cost=19357 Cardinality=1 Bytes=951
NESTED LOOPS OUTER Cost=19354 Cardinality=1 Bytes=914
NESTED LOOPS OUTER Cost=19351 Cardinality=1 Bytes=877
NESTED LOOPS OUTER Cost=19337 Cardinality=1 Bytes=820
NESTED LOOPS OUTER Cost=19334 Cardinality=1 Bytes=783
NESTED LOOPS OUTER Cost=19320 Cardinality=1 Bytes=726
NESTED LOOPS OUTER Cost=19317 Cardinality=1 Bytes=707
NESTED LOOPS OUTER Cost=19303 Cardinality=1 Bytes=650
NESTED LOOPS OUTER Cost=19300 Cardinality=1 Bytes=613
NESTED LOOPS Cost=19285 Cardinality=1 Bytes=556
NESTED LOOPS Cost=19280 Cardinality=1 Bytes=443
NESTED LOOPS OUTER Cost=19275 Cardinality=1 Bytes=330
HASH JOIN RIGHT SEMI Cost=17457 Cardinality=1 Bytes=248
VIEW Object owner=SYS Object name=VW_NSO_1 Cost=1119 Cardinality=30 Bytes=150
HASH JOIN Cost=1119 Cardinality=30 Bytes=2040
TABLE ACCESS FULL Object owner=MARS Object name=FEED_GROUP Cost=2 Cardinality=5 Bytes=120
HASH JOIN Cost=1116 Cardinality=1607 Bytes=70708
TABLE ACCESS FULL Object owner=MARS Object name=FEED_GROUP_XREF Cost=13 Cardinality=701 Bytes=14721
HASH JOIN Cost=1102 Cardinality=3602 Bytes=82846
INDEX RANGE SCAN Object owner=MARS Object name=IDX_FBS_CD_FII_BI Cost=22 Cardinality=3602 Bytes=46826
TABLE ACCESS FULL Object owner=MARS Object name=FEED_INSTANCE Cost=1024 Cardinality=670264 Bytes=6702640
NESTED LOOPS Cost=16337 Cardinality=324 Bytes=78732
HASH JOIN Cost=14324 Cardinality=1977 Bytes=302481
NESTED LOOPS OUTER Cost=11 Cardinality=1 Bytes=114
NESTED LOOPS Cost=8 Cardinality=1 Bytes=95
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=5 Cardinality=1 Bytes=59
INDEX RANGE SCAN Object owner=MARS Object name=IDX_NODE1 Cost=3 Cardinality=2
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=BOOK Cost=3 Cardinality=2 Bytes=72
INDEX RANGE SCAN Object owner=MARS Object name=IDX_BOOK_LEI_BCD Cost=2 Cardinality=4
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=19
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
PARTITION RANGE ALL Cost=13995 Cardinality=3854299 Bytes=150317661
TABLE ACCESS FULL Object owner=MARS Object name=POSITION Cost=13995 Cardinality=3854299 Bytes=150317661
PARTITION RANGE ITERATOR Cost=2 Cardinality=1 Bytes=90
PARTITION HASH ITERATOR Cost=2 Cardinality=1 Bytes=90
TABLE ACCESS BY LOCAL INDEX ROWID Object owner=MARS Object name=STAGING_POSITION Cost=2 Cardinality=1 Bytes=90
INDEX UNIQUE SCAN Object owner=MARS Object name=PK_STAGINGPOSITON Cost=1 Cardinality=1
PARTITION HASH ITERATOR Cost=1819 Cardinality=1 Bytes=82
TABLE ACCESS BY LOCAL INDEX ROWID Object owner=MARS Object name=STAGING_INSTRUMENT Cost=1819 Cardinality=1 Bytes=82
INDEX RANGE SCAN Object owner=MARS Object name=PK_STAGINGINSTRUMENT Cost=9 Cardinality=2551
PARTITION RANGE ITERATOR Cost=5 Cardinality=1 Bytes=113
PARTITION HASH ITERATOR Cost=5 Cardinality=1 Bytes=113
TABLE ACCESS BY LOCAL INDEX ROWID Object owner=MARS Object name=STAGING_SENSITIVITY Cost=5 Cardinality=1 Bytes=113
INDEX RANGE SCAN Object owner=MARS Object name=IDX_SENSITIVITY_FEED_ROW_ID Cost=3 Cardinality=8
PARTITION RANGE ITERATOR Cost=5 Cardinality=1 Bytes=113
PARTITION HASH ITERATOR Cost=5 Cardinality=1 Bytes=113
TABLE ACCESS BY LOCAL INDEX ROWID Object owner=MARS Object name=STAGING_SENSITIVITY Cost=5 Cardinality=1 Bytes=113
INDEX RANGE SCAN Object owner=MARS Object name=IDX_SENSITIVITY_FEED_ROW_ID Cost=3 Cardinality=8
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=14 Cardinality=1 Bytes=57
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_NODE_ID Cost=2 Cardinality=14
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=37
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=14 Cardinality=1 Bytes=57
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_NODE_ID Cost=2 Cardinality=14
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=19
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=14 Cardinality=1 Bytes=57
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_NODE_ID Cost=2 Cardinality=14
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=37
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=14 Cardinality=1 Bytes=57
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_NODE_ID Cost=2 Cardinality=14
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=37
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=37
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=XREF_DOMAIN_VALUE_MAP Cost=3 Cardinality=1 Bytes=68
INDEX RANGE SCAN Object owner=MARS Object name=IDX_XDVM_DMI_SV_BCD Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=ISSUE Cost=1 Cardinality=1 Bytes=18
INDEX UNIQUE SCAN Object owner=MARS Object name=PK_ISSUE Cost=0 Cardinality=1
INDEX UNIQUE SCAN Object owner=MARS Object name=PK_MOODY_RATING Cost=0 Cardinality=1 Bytes=3
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=PARTICIPANT Cost=3 Cardinality=1 Bytes=72
INDEX RANGE SCAN Object owner=MARS Object name=PK_PARTICIPANT Cost=2 Cardinality=1
Hi,
in your explain plan:
HASH JOIN RIGHT SEMI Cost=17457 Cardinality=1 Bytes=248
VIEW Object owner=SYS Object name=VW_NSO_1 Cost=1119 Cardinality=30 Bytes=150
HASH JOIN Cost=1119 Cardinality=30 Bytes=2040
TABLE ACCESS FULL Object owner=MARS Object name=FEED_GROUP Cost=2 Cardinality=5 Bytes=120
HASH JOIN Cost=1116 Cardinality=1607 Bytes=70708
TABLE ACCESS FULL Object owner=MARS Object name=FEED_GROUP_XREF Cost=13 Cardinality=701 Bytes=14721
HASH JOIN Cost=1102 Cardinality=3602 Bytes=82846
INDEX RANGE SCAN Object owner=MARS Object name=IDX_FBS_CD_FII_BI Cost=22 Cardinality=3602 Bytes=46826
TABLE ACCESS FULL Object owner=MARS Object name=FEED_INSTANCE
This part has the highest cost (which doesn't always mean it is slow). It leads me to the WHERE clause, where feed_group, feed_group_xref and feed_instance are accessed with full scans. Maybe this can be improved, although the cardinality is not that high, so a full table scan may actually be best. So the question is: can indexes help here?
Furthermore there is the full table scan on POSITION:
TABLE ACCESS FULL Object owner=MARS Object name=POSITION Cost=13995 Cardinality=3854299 Bytes=150317661
This also looks like a large table (3.8+ million rows), so is it possible to make this part smaller?
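The question "can indexes help here?" can at least be illustrated. The sketch below uses SQLite's EXPLAIN QUERY PLAN as a stand-in for Oracle's explain plan (the optimizers differ, and a full scan can still win when a large fraction of the rows is needed): adding an index turns a full scan into an index search.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE position (feed_instance_id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO position VALUES (?, 'x')",
                 [(i % 1000,) for i in range(10_000)])

def plan(sql: str) -> str:
    """Concatenate the 'detail' column of EXPLAIN QUERY PLAN output."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM position WHERE feed_instance_id = 42"
before = plan(query)   # full table scan, analogous to TABLE ACCESS FULL

conn.execute("CREATE INDEX idx_pos_fii ON position (feed_instance_id)")
after = plan(query)    # index search, analogous to an INDEX RANGE SCAN

print(before)
print(after)
```

Whether Oracle would actually choose the index for POSITION here depends on the selectivity of the join predicates, which is exactly what the cardinality figures in the plan are estimating.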
Herald ten Dam
http://htendam.wordpress.com -
What's the power of spending more time every day in your IT career ?
When you research the factors behind successful people, you'll find that they ALWAYS spent a lot of time every day on their businesses.
IT technologies change fast and keep exploding. The performance of an IT person with 20 years of experience can EASILY be caught up with by an IT person, maybe you, who has only 6 or 7 years or even less, if you spend 80 hours or more every week on your job and on the latest technologies.
The reasons are listed below:
1. Out-of-date IT technologies are of no use anymore, like out-of-date IT books. The person with 20 years of experience can't rely on his old experience any more.
2. If you spend 80 hours every week, you'll absorb double the knowledge and experience of someone who spends only 40 hours every week. You may catch up with his 20 years of experience with your 10, and most of your knowledge and experience will be newer and more useful than his.
3. Because you spend 80 hours, you have plenty of time to focus on your tasks. When a task comes in, you may finish it many times faster than him, and you may be the only person who still has time for extra or urgent tasks. With more time, you can prepare for the next task and finish it even faster, and you can take the latest IT technologies from learner level to expert level while other people are still doing their tasks. And because you have more spare time to learn the technologies your tasks need, you have much more of the required knowledge and can finish much faster than others.
In my experience, if the ratio of knowledge between two workers is 1 to 5, the ratio of real hours needed to finish a task may be 1 to 10. In IT, if you know how to do something, you need minutes; if you don't, you need months of trial and error or of learning it from scratch. Notice that the knowledge can be reused many times, so the 1-to-5 and 1-to-10 ratios repeat again and again. Without spending more time acquiring knowledge, the 1-to-10 ratio keeps happening.
4. It's like trying to save money: most of his 40 hours will be spent on his tasks, so he may have only 5 hours left to learn new IT technologies. Even if you spend the same time on your tasks as he does, you will still have 45 hours to learn new technologies - you grow 9 times faster than him, 45 hours to his 5. If you keep working and learning new IT technologies 80 hours every week, you'll find that you need much less than his 35 hours for your tasks; you may have 60 or more hours every week to do more tasks, gain more experience, and learn more of the latest technologies.
----- Someone asked me after reading the above: "You're absolutely right, but what motivation can make people do that?" I told him: "Don't ask me. Ask your boss." -----
-
1) LabVIEW is a graphical programming environment and LabWindows/CVI is a C-based programming environment. Both can be used for similar tasks. Users familiar with C may prefer CVI; LabVIEW is typically easier to learn if you are unfamiliar with both.
2)
The Run-Time Engine is a separate component that can be installed to
execute LabWindows/CVI programs and LabVIEW programs. It is free of
charge for both LabVIEW and LabWindows/CVI.
3)
LabWindows/CVI does not convert any LabVIEW programs into C code.
LabVIEW programs are already compiled as you write them, so you won't
need to convert them to C to use them. You should see similar
performance in similar LabVIEW and CVI code.
Allen P.
NI -
How much time does a query take while running? I want to see the time
Hi Folks
I would like to know: while a query is running, how much time does it take? I want to see the time, in WebI XI R2.
Please let me know the answer.

Hi Ravi,
The time a report runs is estimated based on the last time it was run. So you need to run the report once before you can see how long it will take. Also it depends on several factors... the database server could cache some queries so running it a second time immediately after the first time could be quicker. And there is the chance of changing filters to bring back different sets of data.
You could also schedule a report and then check the scheduled instance's status properties and view how long a report actually ran.
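As a side note, if you want to measure how long a query actually runs (rather than relying on the last-run estimate), you can simply time it yourself. Here is a minimal Python sketch, using an in-memory SQLite table as a hypothetical stand-in for the report's data source (table and column names are made up):

```python
import sqlite3
import time

# Hypothetical stand-in for the report's data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 100.0), ("APAC", 250.0), ("EMEA", 75.0)])

def timed_query(sql, params=()):
    """Run a query and return (rows, elapsed_seconds)."""
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    elapsed = time.perf_counter() - start
    return rows, elapsed

rows, secs = timed_query("SELECT region, SUM(amount) FROM sales GROUP BY region")
print(rows)            # the actual result set
print(f"{secs:.6f}s")  # measured wall-clock time, not an estimate
```

The same pattern applies with any DB-API driver; only the connect call changes.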
Good luck -
Stopping a Query taking more time to execute in runtime in Oracle Forms.
Hi,
In the present application, one of the Oracle Forms screens takes a long time to execute a query. The user wants an option to stop the query midway and browse the results (whatever has been fetched before stopping the query).
We have tried three approaches:
1. Set max fetch records at the form and block level.
2. Set max fetch time at the form and block level.
The above two methods did not provide an appropriate solution for us.
3. The third approach we applied was setting the interaction mode to "NON BLOCKING" at the form level.
This seemed to work: while the query took a long time to execute, the Oracle app server prompted a message to press Esc to cancel the query, and it displayed the results fetched up to that point.
But the drawback is that pressing Esc kills the session itself, which causes the entire application to collapse.
Please suggest if there is an alternative approach, or how to overcome this particular scenario.
This kind of facility is already present in TOAD and PL/SQL Developer, where we can stop an executing query and browse the results fetched up to that point. Is a similar facility available in Oracle Forms? Please suggest.
Thanks and Regards,
Suraj
Edited by: user10673131 on Jun 25, 2009 4:55 AM

Hello Friend,
Your query will definitely take more time, or even fail, in PROD because of the way it is written. Here are a few observations that may help:
1. XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V : Never use a view inside such a long query, because a view is just a window onto the records,
and when it is joined to other tables, all the tables used to create the view also become part of the join condition.
First of all, please check whether you really need this view. I guess you are using it to check whether the records have been created as journal entries or not?
Please check the possibility of finding this through other AR tables.
2. Remove the _ALL tables; instead use the corresponding org-specific views (if you are on 11i) or the synonyms (in R12).
For example: for ra_cust_trx_types_all use ra_cust_trx_types.
This will ensure that the query executes only for those ORG_IDs which are assigned to that responsibility.
3. Check with the DBA whether GATHER SCHEMA STATS has been run, at least for the ONT and RA tables.
You can also check the same using:
SELECT LAST_ANALYZED FROM ALL_TABLES WHERE TABLE_NAME = 'RA_CUSTOMER_TRX_ALL';
(Note that table names are stored in uppercase in the data dictionary, so a lowercase literal returns no rows.)
If the tables are not analyzed, the CBO will not be able to tune your query.
4. Try to remove the DISTINCT keyword. This is a MAJOR reason for the problem.
5. If it's a report, try to separate the logic into separate queries (using a procedure), populate the whole data set into a custom table, and use this custom table to generate the
report.
Thanks,
Neeraj Shrivastava
[email protected]
Edited by: user9352949 on Oct 1, 2010 8:02 PM
Edited by: user9352949 on Oct 1, 2010 8:03 PM -
I get the error message in QuickTime "operation stopped the operation is not supported for this media" most times when I try to export an .AVI file as something else (e.g. .m4v). I have not touched the file in any way (no trimming, clipping or other editing); all I want QuickTime to do is export the file in a compressed format. Bizarrely, if I shut down and reopen QuickTime many times, I can occasionally export a clip as another format (maybe one in ten times). I have seen that other users have had a similar problem after clipping files in QuickTime, but this seems to be a slightly different bug, in that all I do is open the file and then try to export it as is. Either way, this is a very annoying bug.
@Z_B-B, thank you for taking the time to respond to my cry for help. However, the link you supplied does not address the problem: I am not trying to export from Final Cut Pro to QuickTime, I am trying to export from QuickTime to the rest of the world (like people's iPhones and iPads) in .m4v format (so I am not emailing my friends such huge files).
If I were to spend hundreds of dollars on a copy of Final Cut Pro I could export directly from there and not have to bother with QuickTime, but I do not take enough video clips to justify the cost. I must say that I never had any of these problems before I decided to switch from Snow Leopard to Mountain Lion. -
SQL query executing for a long time
Hi, the SQL query below is taking a very long time to execute. Please help improve the query's performance. The query runs for more than 24 hours and then fails with a rollback segment error, so we never get the final output. Most of the tables have millions of records.
Select distinct
IBS.ADSL_ACCESS_INFO,
IBS.LIJ ,
regexp_substr(OBVS.REFERENTIE_A,'[[:digit:]]+') as O_NUMBER,
DBS.CKR_NUMMER_CONTRACTANT,
DBS.DNUMBER
FROM CD.IBS,
CD.OIBL,
CD.IH,
CD.ODL,
CD.OH,
CD.DBS,
CD.OBVS
Where IBS.END_DT = To_Date('31129999', 'ddmmyyyy')
AND OIBL.END_DT = to_date('31129999', 'ddmmyyyy')
AND DBS.END_DT = to_date('31129999', 'ddmmyyyy')
AND OBVS.END_DT = to_date('31129999', 'ddmmyyyy')
AND OBVS.REFERENTIE_A LIKE 'OFM%'
AND OIBL.INFRA_KEY = IH.INFRA_KEY
AND OIBL.ORDERS_KEY = OH.ORDERS_KEY
AND IBS.INFH_ID = IH.INFH_ID
AND ODL.ORDH_ID = OH.ORDH_ID
AND DBS.DEBH_ID = ODL.DEBH_ID
AND OBVS.ORDH_ID = ODL.ORDH_ID
Order By IBS.LIJ
All the columns present in the WHERE condition have either an index or a key (primary/unique), except the END_DT column.
Please advise.

Predicate pushing can help when it greatly restricts the number of rows - you must experiment - it might not work with all predicates pushed (as shown here):
select distinct
ibs.adsl_access_info,
ibs.lij,
obvs.o_number,
dbs.ckr_nummer_contractant,
dbs.dnumber
from (select infh_id,adsl_access_info,lij
from cd.ibs
where end_dt = to_date('31129999','ddmmyyyy')
) ibs,
(select infra_key,orders_key
from cd.oibl
where end_dt = to_date('31129999','ddmmyyyy')
) oibl,
(select ordh_id,regexp_substr(obvs.referentie_a,'[[:digit:]]+') as o_number
from cd.obvs
where end_dt = to_date('31129999','ddmmyyyy')
and referentie_a like 'OFM%'
) obvs,
(select debh_id,ckr_nummer_contractant,dnumber
from cd.dbs
where end_dt = to_date('31129999','ddmmyyyy')
) dbs,
cd.ih,
cd.odl,
cd.oh
where oibl.infra_key = ih.infra_key
and oibl.orders_key = oh.orders_key
and ibs.infh_id = ih.infh_id
and odl.ordh_id = oh.ordh_id
and dbs.debh_id = odl.debh_id
and obvs.ordh_id = odl.ordh_id
order by ibs.lij

Regards
Etbin -
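The pre-filtered inline-view pattern above can be tried out in miniature. The following is a toy sketch using SQLite via Python's sqlite3; the table names echo two of the CD tables, but the schema and data are made-up stand-ins, not the real ones:

```python
import sqlite3

# Toy stand-ins for two of the CD.* tables (names and data are hypothetical).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ibs (infh_id INTEGER, lij TEXT, end_dt TEXT);
CREATE TABLE ih  (infh_id INTEGER, infra_key INTEGER);
INSERT INTO ibs VALUES (1, 'A', '31129999'), (2, 'B', '01012020');
INSERT INTO ih  VALUES (1, 10), (2, 20);
""")

# Direct join: the end_dt filter is applied at some point during the join.
direct = conn.execute("""
    SELECT ibs.lij, ih.infra_key
    FROM ibs JOIN ih ON ibs.infh_id = ih.infh_id
    WHERE ibs.end_dt = '31129999'
""").fetchall()

# "Pushed" form: the filter sits inside an inline view, so the join
# only ever sees the current (end_dt = '31129999') rows.
pushed = conn.execute("""
    SELECT f.lij, ih.infra_key
    FROM (SELECT infh_id, lij FROM ibs WHERE end_dt = '31129999') f
    JOIN ih ON f.infh_id = ih.infh_id
""").fetchall()

assert direct == pushed  # same answer either way; only the shape differs
```

On a few rows both forms are instant, of course; the point is that with millions of rows the pushed form can shrink each row source before the joins, which is exactly what you must verify against the execution plan.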
Generate a query connecting two databases at the same time?
Hi
Could you help me? Here is my situation:
I have two databases (RSHPL and RSPL), and both databases have the same table name and the same field name (table name: OITM, field name: ONHAND, PK: ITEMCODE).
First, let me tell you what I want:
I want to generate a single query that connects the two databases and the same table (OITM) to select the TOTAL VALUE of the ONHAND field, keyed by ITEMCODE.
Is it possible? If possible, please write this query and send it to me.

Hi,
I don't think it's possible to write a query from within SAP that pulls data from another SAP database.
I have used SQL Reporting Services to report on data from multiple databases, and this works very well for many of my clients. It depends on what you want to do with the data: do you need to display it on screen in SAP and save it to a field, or will a report suffice?
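If both databases live on the same SQL Server instance and you can query the server directly (outside the SAP query generator), a cross-database join with three-part names (e.g. RSHPL.dbo.OITM joined to RSPL.dbo.OITM) is the usual approach. As a runnable illustration of the idea, here is a toy sketch using SQLite's ATTACH DATABASE via Python's sqlite3; the OITM/ONHAND names mirror the question, but everything here is a made-up stand-in, not a real B1 schema:

```python
import os
import sqlite3
import tempfile

# Two "company databases", each with the same OITM table (toy stand-ins).
tmp = tempfile.mkdtemp()
paths = {name: os.path.join(tmp, f"{name}.db") for name in ("RSHPL", "RSPL")}

for name, onhand in (("RSHPL", 5), ("RSPL", 7)):
    db = sqlite3.connect(paths[name])
    db.execute("CREATE TABLE OITM (ItemCode TEXT PRIMARY KEY, OnHand INTEGER)")
    db.execute("INSERT INTO OITM VALUES ('A001', ?)", (onhand,))
    db.commit()
    db.close()

# One connection, two databases: ATTACH lets a single query join both.
conn = sqlite3.connect(paths["RSHPL"])
conn.execute("ATTACH DATABASE ? AS rspl", (paths["RSPL"],))
total = conn.execute("""
    SELECT m.ItemCode, m.OnHand + r.OnHand AS total_onhand
    FROM OITM m JOIN rspl.OITM r ON m.ItemCode = r.ItemCode
""").fetchall()
print(total)  # combined on-hand per item across both databases
```

The T-SQL equivalent would join RSHPL.dbo.OITM to RSPL.dbo.OITM the same way, summing ONHAND per ITEMCODE.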
Regards,
Adrian -
Issue: When using the BW BEx Query Analyzer, users cannot change reporting queries. Any attempt to change a query results in errors.
Error: BEx Query Designer: Run-time error '-2147221499 (80040005) Fatal Error - Terminating
Impact: Business reporting is currently being negatively impacted because users cannot modify queries, cannot change filters for fiscal period and fiscal year.
OS / MS Office Suite being used: Vista & Office 2007
Backend System: BW 2.0B
Frontend System: Being a large organization, we have a controlled environment wherein all users will have the following applications installed by default:
1. SAP Client Base 7.10
2. SAP BW 3.5 Patch 4
3. SAP BI 7.10 Patch 900
4. SAP GUI 7.10 Patch 12
Does anyone have any idea why we are getting this error? Is it a Vista issue? Is it a front-end issue?

Just a thought - did you apply any Microsoft security patches before this started happening? We had a similar issue in another SAP application due to an MS security update. Raise an OSS message with SAP.