Performance of FIRST_VALUE
Hi,
I want to know which of the two queries is better in terms of performance: using first_value or a subquery.
Let us assume that salary is unique...
1) select first_value(ename ) over (order by sal) from emp;
or
2) select ename from emp where sal = (select min(sal) from emp )
Thanks,
Anand
You are comparing apples with oranges: an analytic function versus an aggregate subquery. They return different results. For example:
SQL> select first_value(ename ) over (order by sal) from emp;
FIRST_VALU
SMITH
SMITH
SMITH
SMITH
SMITH
SMITH
SMITH
SMITH
SMITH
SMITH
SMITH
SMITH
SMITH
SMITH
SMITH
15 rows selected.
SQL> select ename from emp where sal = (select min(sal) from emp);
ENAME
SMITH
For more information, check this URL:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:74525921631614
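Raj's point can be checked outside Oracle as well. Here is a minimal sketch using Python's sqlite3 module (SQLite 3.25+ supports window functions with the same semantics); the three-row EMP stand-in is invented:

```python
import sqlite3

# Tiny invented stand-in for EMP, just to compare the shape of the results.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (ename TEXT, sal INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [("SMITH", 800), ("ALLEN", 1600), ("WARD", 1250)])

# Analytic form: one output row per input row, every row carrying SMITH.
analytic = con.execute(
    "SELECT first_value(ename) OVER (ORDER BY sal) FROM emp").fetchall()

# Subquery form: only the row(s) that actually hold the minimum salary.
subquery = con.execute(
    "SELECT ename FROM emp WHERE sal = (SELECT min(sal) FROM emp)").fetchall()

print(analytic)  # [('SMITH',), ('SMITH',), ('SMITH',)]
print(subquery)  # [('SMITH',)]
```

So the two queries only become interchangeable after deduplication; any timing comparison between them is comparing different result sets.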
Regards
Raj
Similar Messages
-
Use of FIRST_VALUE OVER in a PL/SQL query
Hello,
Here is my problem:
I'm trying to execute a query using FIRST_VALUE OVER in a PL/SQL procedure, e.g.
SELECT FIRST_VALUE (name) OVER (order by birthdate)
FROM birthday_table
WHERE location = 'HOME';
I need to get the value returned. I tried to do it using an INTO clause and
also with EXECUTE IMMEDIATE, but I get an error like "invalid column name".
Thank you,
Olivier.
Assuming the query runs successfully outside of PL/SQL, the EXECUTE IMMEDIATE construct would be:
execute immediate 'select first_value ... where location = :loc' into v_some_variable using 'HOME';
-
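Outside PL/SQL, the same bind-variable shape (USING 'HOME' feeding :loc, with the result fetched into a variable) can be sketched with a parameterized query in Python's sqlite3. The table, columns and data below are invented for illustration; they are not Olivier's real schema:

```python
import sqlite3

# Invented stand-in for birthday_table, just to show the bind/fetch pattern.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE birthday_table (name TEXT, birthdate TEXT, location TEXT)")
con.executemany("INSERT INTO birthday_table VALUES (?, ?, ?)",
                [("Ann", "1980-01-01", "HOME"),
                 ("Bob", "1975-06-15", "HOME"),
                 ("Eve", "1990-03-20", "WORK")])

# The ? placeholder plays the role of the :loc bind variable, and
# fetchone() plays the role of INTO v_some_variable.
row = con.execute(
    "SELECT first_value(name) OVER (ORDER BY birthdate) "
    "FROM birthday_table WHERE location = ?",
    ("HOME",)).fetchone()

print(row[0])  # Bob (the earliest HOME birthdate)
```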
RH6: Extremely Poor Performance with Large WebHelp Project
We recently upgraded to RH6, which has caused performance
problems so severe that we literally cannot work with the project.
Our problems are exclusively with our largest project -- 600+ html
files, 500,000+ words, 35mb (only about 150 small gif images).
We're not having any problems with our smaller help projects.
When we launch the large project, the CPU usage immediately
hits 100% and stays there for the duration of the session. My
longest session is currently 35 minutes, during which I was able to
delete a folder, add another folder, and add 5 files to the system.
The application took almost 8 minutes to close.
Because of the size of this project, it's always had
performance issues on startup. Under X5, it took about five minutes
before you could work freely, but then worked fine.
So far, three of us have experienced the same performance
problems. We're all using identical Thinkpad t43 laptops with 2
gigs of memory and running XP Pro Service pack 2 with all of the
security updates. Someone recommended temporarily disabling
VirusScan, which didn't improve the situation at all.
So, two questions:
1. Have any of you experienced and been able to resolve
serious performance issues of this sort?
2. What operations does RoboHelp perform when you open a
project? It seems to verify folders (& links?), but I can't
find any information about exactly what it does.
EDIT: We are not accessing the project over a network, so
this cannot be the cause of our performance problems.
Thanks for any help you can provide.
Roger Gerbig
Senior Technical Writer
Aprimo, Incorporated
Hi Roger and welcome to the RH community. To be honest, your
project doesn't sound that large at all. At my last job we had
projects 4-5 times the size of that running with no performance
issues at all. That said, check out this link for some pointers about where you can go from here.
-
Replacing Oracle's FIRST_VALUE and LAST_VALUE analytical functions.
Hi,
I am using OBI 10.1.3.2.1 where, I guess, EVALUATE is not available. I would like to know alternatives, especially to replace Oracle's FIRST_VALUE and LAST_VALUE analytic functions.
I want to track some changes. For example, there are four methods of travel: Air, Train, Road and Sea. I would like to know a traveler's first method of travel and last method of travel in a year. If the two match, a certain action is taken; if they do not match, another action is taken.
I tried as under.
1. Get a sequence ID for each travel within a year per traveler, as Sequence_Id.
2. Get the lowest sequence ID (which should be 1) for travels within a year per traveler, as Sequence_LId.
3. Get the highest sequence ID (which could be 1 or greater) for travels within a year per traveler, as Sequence_HId.
4. If Sequence ID = Lowest Sequence ID then display the method of travel as First Method of Travel.
5. If Sequence ID = Highest Sequence ID then display the method of travel as Latest Method of Travel.
6. If First Method of Travel = Latest Method of Travel then display Yes/No as Match.
The issue is that cells in First Method of Travel and Last Method of Travel are blank unless the traveler traveled only once in a year.
Using Oracle's FIRST_VALUE and LAST_VALUE analytical functions, I can get a result like
Traveler | Card Issue Date | Journey Date | Method | First Method of Travel | Last Method of Travel | Match?
ABC | 01/01/2000 | 04/04/2000 | Road | Road | Air | No
ABC | 01/01/2000 | 15/12/2000 | Air | Road | Air | No
XYZ | 01/01/2000 | 04/05/2000 | Train | Train | Train | Yes
XYZ | 01/01/2000 | 04/11/2000 | Train | Train | Train | Yes
Using OBI Answers, I am getting something like this.
Traveler | Card Issue Date | Journey Date | Method | First Method of Travel | Last Method of Travel | Match?
ABC | 01/01/2000 | 04/04/2000 | Road | Road | <BLANK> | No
ABC | 01/01/2000 | 15/12/2000 | Air | <BLANK> | Air | No
XYZ | 01/01/2000 | 04/05/2000 | Train | Train | <BLANK> | No
XYZ | 01/01/2000 | 04/11/2000 | Train | <BLANK> | Train | No
Above, for traveler XYZ the Match? column clearly shows a wrong result (although it happens to be correct for traveler ABC).
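For reference, the Oracle-side table Manoj wants depends on LAST_VALUE seeing the whole partition: its default frame stops at the current row, which is the classic source of unexpected blanks. An explicit full frame fixes that. The sketch below uses Python's sqlite3 with invented travel data matching the shape of the example:

```python
import sqlite3

# Invented travel data matching the shape of the example above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE travel (traveler TEXT, journey_date TEXT, method TEXT)")
con.executemany("INSERT INTO travel VALUES (?, ?, ?)",
                [("ABC", "2000-04-04", "Road"), ("ABC", "2000-12-15", "Air"),
                 ("XYZ", "2000-05-04", "Train"), ("XYZ", "2000-11-04", "Train")])

# ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING makes both
# FIRST_VALUE and LAST_VALUE see the whole per-traveler partition,
# so every row gets a value and no cell is left blank.
rows = con.execute("""
    SELECT traveler, method,
           first_value(method) OVER (
               PARTITION BY traveler ORDER BY journey_date
               ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
           ) AS first_method,
           last_value(method) OVER (
               PARTITION BY traveler ORDER BY journey_date
               ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
           ) AS last_method
    FROM travel
    ORDER BY traveler, journey_date
""").fetchall()

for r in rows:
    print(r)
```

Both ABC rows report (Road, Air) and both XYZ rows report (Train, Train), matching the expected result table above.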
I would appreciate it if someone could guide me on how to resolve the issue.
Many thanks,
Manoj.
Edited by: mandix on 27-Nov-2009 08:43
Edited by: mandix on 27-Nov-2009 08:47
Hi,
Just to recap: in OBI 10.1.3.2.1, I am trying to find an alternative to the FIRST_VALUE and LAST_VALUE analytic functions used in Oracle. I feel it should be achievable. I would like to know the answers to the following questions.
1. Is there any way of referring to a cell value and displaying it in other cells for a reference value?
For example, can I display the First Method of Travel for traveler 'ABC' and 'XYZ' for all the rows returned in the same column, respectively?
2. I tried the RMIN and RMAX functions in the RPD, but they do not accept a "BY" clause (for example, RMIN(Transaction_Id BY Traveler) to define the lowest Sequence Id per traveler). Am I doing something wrong here? Why can a formula with a "BY" clause be defined in Answers but not in the RPD? The idea is to use this in Answers. This relates to my first question.
Could someone please let me know?
I understand that this thread is related to something that can be done outside OBI, but I would still like to know.
If anything is not clear please let me know.
Thanks,
Manoj.
-
Performance of a Materialized View
Hi,
I have created the materialized view below:
CREATE MATERIALIZED VIEW SUMMARY.V_OSFI_FEED_CORE_DATA_ADJ
TABLESPACE RCDW_CAD_SUM_1M_DAT01
PCTUSED 0
PCTFREE 0
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
NOCACHE
LOGGING
NOCOMPRESS
NOPARALLEL
BUILD DEFERRED
REFRESH FORCE ON DEMAND
WITH PRIMARY KEY
AS
/* Formatted on 2010/01/27 14:49 (Formatter Plus v4.8.8) */
SELECT orgn.month_end_date month_end_date,
orgn.ntrl_k_odc_instrument ntrl_k_odc_instrument,
orgn.processing_system processing_system,
orgn.source_system source_system,
DECODE (UPPER (adjstd.ssb_type_indicator),
'NULL', NULL,
NVL (adjstd.ssb_type_indicator, orgn.ssb_type_indicator)
) ssb_type_indicator,
DECODE (UPPER (adjstd.basel_exposure_class),
'NULL', NULL,
NVL (adjstd.basel_exposure_class, orgn.basel_exposure_class)
) basel_exposure_class,
DECODE (UPPER (adjstd.basel_exposure_subclass),
'NULL', NULL,
NVL (adjstd.basel_exposure_subclass,
orgn.basel_exposure_subclass
)
) basel_exposure_subclass,
DECODE (UPPER (TRIM (adjstd.ccis_product_code)),
'NULL', NULL,
NVL (adjstd.ccis_product_code, orgn.ccis_product_code)
) ccis_product_code,
orgn.warehouse_instrument_key warehouse_instrument_key,
DECODE (UPPER (adjstd.instrument_status),
'NULL', NULL,
NVL (adjstd.instrument_status, orgn.instrument_status)
) instrument_status,
DECODE (UPPER (adjstd.instrument_open_flag),
'NULL', NULL,
NVL (adjstd.instrument_open_flag, orgn.instrument_open_flag)
) instrument_open_flag,
DECODE (UPPER (adjstd.instrument_dwo_flag),
'NULL', NULL,
NVL (adjstd.instrument_dwo_flag, orgn.instrument_dwo_flag)
) instrument_dwo_flag,
DECODE (UPPER (adjstd.instrument_closed_flag),
'NULL', NULL,
NVL (adjstd.instrument_closed_flag,
orgn.instrument_closed_flag)
) instrument_closed_flag,
DECODE (UPPER (adjstd.instrument_arrears_flag),
'NULL', NULL,
NVL (adjstd.instrument_arrears_flag,
orgn.instrument_arrears_flag
)
) instrument_arrears_flag,
DECODE (UPPER (adjstd.instrument_npna_flag),
'NULL', NULL,
NVL (adjstd.instrument_npna_flag, orgn.instrument_npna_flag)
) instrument_npna_flag,
DECODE (UPPER (adjstd.account_open_dt),
'NULL', TO_DATE (NULL),
NULL, orgn.account_open_dt,
TO_DATE (adjstd.account_open_dt, 'dd-MON-YYYY')
) account_open_dt,
DECODE (UPPER (adjstd.account_close_dt),
'NULL', TO_DATE (NULL),
NULL, orgn.account_close_dt,
TO_DATE (adjstd.account_close_dt, 'dd-MON-YYYY')
) account_close_dt,
DECODE (UPPER (adjstd.instrument_insured_ind),
'NULL', NULL,
NVL (adjstd.instrument_insured_ind,
orgn.instrument_insured_ind)
) instrument_insured_ind,
DECODE
(UPPER (adjstd.instrument_securitized_ind),
'NULL', NULL,
NVL (adjstd.instrument_securitized_ind,
orgn.instrument_securitized_ind
)
) instrument_securitized_ind,
DECODE (UPPER (adjstd.country_code),
'NULL', NULL,
NVL (adjstd.country_code, orgn.country_code)
) country_code,
DECODE (UPPER (TRIM (adjstd.province)),
'NULL', NULL,
NVL (adjstd.province, orgn.province)
) province,
COALESCE (adjstd.presec_outstanding_bal,
orgn.presec_outstanding_bal,
NULL
) presec_outstanding_bal,
COALESCE (adjstd.postsec_outstanding_bal,
orgn.postsec_outstanding_bal,
NULL
) postsec_outstanding_bal,
COALESCE (adjstd.presec_authorized_limit,
orgn.presec_authorized_limit,
NULL
) presec_authorized_limit,
DECODE (UPPER (adjstd.interest_rate_type),
'NULL', NULL,
NVL (adjstd.interest_rate_type, orgn.interest_rate_type)
) interest_rate_type,
COALESCE (adjstd.ltv, orgn.ltv, NULL) ltv,
DECODE (UPPER (adjstd.arrears_cycle_type),
'NULL', NULL,
NVL (adjstd.arrears_cycle_type, orgn.arrears_cycle_type)
) arrears_cycle_type,
COALESCE (adjstd.recovery_amt, orgn.recovery_amt, NULL) recovery_amt,
COALESCE (adjstd.wrtoff_amt, orgn.wrtoff_amt, NULL) wrtoff_amt,
COALESCE (adjstd.utilization_rate,
orgn.utilization_rate,
NULL
) utilization_rate,
DECODE (UPPER (adjstd.pd_pool),
'NULL', NULL,
NVL (adjstd.pd_pool, orgn.pd_pool)
) pd_pool,
COALESCE (adjstd.pd_value_pct, orgn.pd_value_pct, NULL) pd_value_pct,
DECODE (UPPER (adjstd.lgd_pool),
'NULL', NULL,
NVL (adjstd.lgd_pool, orgn.lgd_pool)
) lgd_pool,
COALESCE (adjstd.lgd_value_pct, orgn.lgd_value_pct,
NULL) lgd_value_pct,
DECODE (UPPER (adjstd.ead_pool),
'NULL', NULL,
NVL (adjstd.ead_pool, orgn.ead_pool)
) ead_pool,
COALESCE (adjstd.presec_precrm_ead,
orgn.presec_precrm_ead,
NULL
) presec_precrm_ead,
COALESCE (adjstd.postsec_precrm_ead,
orgn.postsec_precrm_ead,
NULL
) postsec_precrm_ead,
COALESCE (adjstd.postsec_postcrm_ead,
orgn.postsec_postcrm_ead,
NULL
) postsec_postcrm_ead,
COALESCE (adjstd.expected_loss_amt,
orgn.expected_loss_amt,
NULL
) expected_loss_amt,
COALESCE (adjstd.postsec_rwa, orgn.postsec_rwa, NULL) postsec_rwa,
DECODE (UPPER (adjstd.risk_rating_system),
'NULL', NULL,
NVL (adjstd.risk_rating_system, orgn.risk_rating_system)
) risk_rating_system,
DECODE (UPPER (adjstd.default_ind),
'NULL', NULL,
NVL (adjstd.default_ind, orgn.default_ind)
) default_ind,
DECODE (UPPER (adjstd.LOB),
'NULL', NULL,
NVL (adjstd.LOB, orgn.LOB)
) LOB,
DECODE (UPPER (adjstd.row_source),
'NULL', NULL,
NVL (adjstd.row_source, orgn.row_source)
) row_source,
COALESCE (adjstd.postsec_authorized_limit,
orgn.postsec_authorized_limit,
NULL
) postsec_authorized_limit,
COALESCE (adjstd.recovery_amt_calc,
orgn.recovery_amt_calc,
NULL
) recovery_amt_calc,
adjstd.adj_cycle_id adj_cycle_id,
orgn.curr_warehouse_inst_key curr_warehouse_inst_key,
orgn.curr_ccis_prod curr_ccis_prod, adjstd.load_time load_time,
adjstd.update_time update_time
FROM summary.osfi_feed_core_data orgn, summary.ncr_adj_data adjstd
WHERE orgn.ntrl_k_odc_instrument = adjstd.ntrl_k_odc_instrument
UNION ALL
SELECT orgn.month_end_date, orgn.ntrl_k_odc_instrument,
orgn.processing_system, orgn.source_system, orgn.ssb_type_indicator,
orgn.basel_exposure_class, orgn.basel_exposure_subclass,
orgn.ccis_product_code, orgn.warehouse_instrument_key,
orgn.instrument_status, orgn.instrument_open_flag,
orgn.instrument_dwo_flag, orgn.instrument_closed_flag,
orgn.instrument_arrears_flag, orgn.instrument_npna_flag,
orgn.account_open_dt, orgn.account_close_dt,
orgn.instrument_insured_ind, orgn.instrument_securitized_ind,
orgn.country_code, orgn.province, orgn.presec_outstanding_bal,
orgn.postsec_outstanding_bal, orgn.presec_authorized_limit,
orgn.interest_rate_type, orgn.ltv, orgn.arrears_cycle_type,
orgn.recovery_amt, orgn.wrtoff_amt, orgn.utilization_rate,
orgn.pd_pool, orgn.pd_value_pct, orgn.lgd_pool, orgn.lgd_value_pct,
orgn.ead_pool, orgn.presec_precrm_ead, orgn.postsec_precrm_ead,
orgn.postsec_postcrm_ead, orgn.expected_loss_amt, orgn.postsec_rwa,
orgn.risk_rating_system, orgn.default_ind, orgn.LOB, orgn.row_source,
orgn.postsec_authorized_limit, orgn.recovery_amt_calc, 0,
orgn.curr_warehouse_inst_key curr_warehouse_inst_key,
orgn.curr_ccis_prod curr_ccis_prod, SYSDATE, SYSDATE
FROM osfi_feed_core_data orgn
WHERE NOT EXISTS (SELECT ntrl_k_odc_instrument
FROM ncr_adj_data
WHERE ntrl_k_odc_instrument = orgn.ntrl_k_odc_instrument);
When I refresh the materialized view it takes 5 to 6 minutes. I used DBMS_ADVISOR.TUNE_MVIEW to tune the query, but it raised these errors:
ORA-13600: error encountered in Advisor
QSM-03113: Cannot tune the MATERIALIZED VIEW statement
QSM-02091: mv references a non-repeatable or session-sensitive expression
QSM-02164: the materialized view is BUILD DEFERRED
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.PRVT_ACCESS_ADVISOR", line 202
ORA-06512: at "SYS.PRVT_TUNE_MVIEW", line 1232
ORA-06512: at "SYS.DBMS_ADVISOR", line 753
ORA-06512: at line 221
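Setting the advisor errors aside for a moment: the view's central idiom, DECODE(UPPER(adj.col), 'NULL', NULL, NVL(adj.col, orgn.col)), treats the literal string 'NULL' as "clear the value" and a real NULL as "fall back to the original row". That pattern can be written portably with CASE and COALESCE; the tables and data below are invented, with Python's sqlite3 standing in for Oracle:

```python
import sqlite3

# Invented adjustment/original tables to demonstrate the 'NULL' sentinel.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE adj (id INTEGER, col TEXT)")
con.execute("CREATE TABLE orgn (id INTEGER, col TEXT)")
con.executemany("INSERT INTO adj VALUES (?, ?)",
                [(1, "NULL"), (2, None), (3, "override")])
con.executemany("INSERT INTO orgn VALUES (?, ?)",
                [(1, "base1"), (2, "base2"), (3, "base3")])

rows = con.execute("""
    SELECT o.id,
           CASE WHEN upper(a.col) = 'NULL' THEN NULL   -- literal 'NULL' clears
                ELSE coalesce(a.col, o.col)            -- real NULL falls back
           END AS col
    FROM orgn o JOIN adj a ON a.id = o.id
    ORDER BY o.id
""").fetchall()

print(rows)  # [(1, None), (2, 'base2'), (3, 'override')]
```

This is only a behavioral sketch of the transformation; it says nothing about the refresh time, which is dominated by the two full scans of OSFI_FEED_CORE_DATA visible in the explain plan.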
Can anyone suggest some methods to improve the performance of the view?
Hi,
Here is the explain plan of the query:
Plan
SELECT STATEMENT ALL_ROWSCost: 626 Bytes: 35,803,043 Cardinality: 144,937
18 PX COORDINATOR
17 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ10002
16 BUFFER SORT PARALLEL_COMBINED_WITH_PARENT Cost: 626 Bytes: 35,803,043 Cardinality: 144,937
15 UNION-ALL PARALLEL_COMBINED_WITH_PARENT
7 HASH JOIN PARALLEL_COMBINED_WITH_PARENT Cost: 314 Bytes: 7,803 Cardinality: 17
4 BUFFER SORT PARALLEL_COMBINED_WITH_CHILD
3 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT Cost: 3 Bytes: 3,859 Cardinality: 17
2 PX SEND BROADCAST PARALLEL_FROM_SERIAL SYS.:TQ10000 Cost: 3 Bytes: 3,859 Cardinality: 17
1 TABLE ACCESS FULL TABLE SUMMARY.NCR_ADJ_DATA Cost: 3 Bytes: 3,859 Cardinality: 17
6 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD Cost: 310 Bytes: 33,625,384 Cardinality: 144,937
5 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT SUMMARY.OSFI_FEED_CORE_DATA Cost: 310 Bytes: 33,625,384 Cardinality: 144,937
14 HASH JOIN RIGHT ANTI PARALLEL_COMBINED_WITH_PARENT Cost: 312 Bytes: 35,795,240 Cardinality: 144,920
11 BUFFER SORT PARALLEL_COMBINED_WITH_CHILD
10 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT Cost: 1 Bytes: 255 Cardinality: 17
9 PX SEND BROADCAST PARALLEL_FROM_SERIAL SYS.:TQ10001 Cost: 1 Bytes: 255 Cardinality: 17
8 INDEX FULL SCAN INDEX SUMMARY.INDEX_2 Cost: 1 Bytes: 255 Cardinality: 17
13 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD Cost: 310 Bytes: 33,625,384 Cardinality: 144,937
12 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT SUMMARY.OSFI_FEED_CORE_DATA Cost: 310 Bytes: 33,625,384 Cardinality: 144,937
There is no index on the tables, and they contain lakhs (hundreds of thousands) of records.
-
Performance Tuning for File Download/Upload with SharePoint Audit Enabled
Greetings all, may I ask for your help with some performance issues?
Background:
I tried to create an ASP.NET web page to download/upload/list SharePoint files and deployed it to an IIS website on the same application server (we will NOT use a web part, as some users are NOT allowed direct access to the confidential workspace).
Besides, for audit log recording purposes, the page impersonates (without password) the logged-in user:
SPUserToken userToken = web.AllUsers[user].UserToken;
SPSite s = new SPSite(siteStr, userToken);
For file listing, the web service gives a fast response, and we are using a service A/C (account) for the connection (no auditing is needed for listing, but audit is required for file download/upload).
Several implementation options were tested for file download/upload; the issues and findings are listed below:
Issues
1) SharePoint Object Model
When I open the site (using new SPSite), it is too slow to respond (under 1s for all other operations, but 10~50s to open the SPSite), e.g.
using(SPSite s = new SPSite(siteStr)) // 50s
How can I download/upload a file without opening an SPSite object (still using the SharePoint object model), while keeping the user token so that SharePoint identifies user actions (e.g. "Updated by Tom", NOT the system administrator)?
2) SharePoint default web service
For file download, I tried to use the SharePoint web service; it's quick, but how can SharePoint record the audit log against the user who downloaded, and not the service A/C (e.g. "Viewed by Tom", NOT the system administrator)?
With a Windows SSO solution, please note the system should NOT prompt the user for a password for impersonation.
3) HTTP Request API (for file download)
As mentioned in point 2, if the system cannot get the password from the user, SharePoint also records the service A/C in the audit log... ><
Thank you for your kind attention.
.NET Beginner 3.5
Thank you for your prompt response; please find my replies marked below:
Hi,
Maybe I'm not quite clear about the architecture you have now.
Is your asp.net application deployed in separate IIS site but in the same physical server as SharePoint?
Yes
"we are using service A/C for connection", can you please explain the 'A/C'?
Domain User, Local Admin and also SharePoint Service Admin A/C
Opening SPSite is relatively slower but shouldn't take 50 sec. However it depends on your server hardware configuration. You should meet the minimum hardware requirements for SharePoint.
Assigned double resources based on minimum hardware requirements.
For details, 50s is the load test result, but other SharePoint operations take around/under 3s response time.
Are you using the SharePoint audit log? Exactly. If so, then why don't you just put a hyperlink to the documents in your asp.net page? The user may have to log in once to the SharePoint site, but it depends on your security architecture. For example, if both of your sites are in the local intranet and you are using Windows integrated authentication, SSO will work automatically. The user is NOT allowed to access the SharePoint site/server directly from the Internet (a separate server is not implemented yet, as performance issues occurred for a separate site on the same server); the middle server with the web interface was created for user requests.
What I understand gives me the feeling that you download the file using the HttpWebRequest C# class. Regarding security, it depends on how authentication is set up in the asp.net website and in SharePoint. If both sites use Windows integrated security and they are on the same server, you can use the following code snippet:
using (WebClient webClient = new WebClient())
{
    webClient.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
    webClient.DownloadFile("file url in sharepoint", "download directory");
}
Thanks, will try and reply later
But still, as I've mentioned, not quite clear about the whole architecture.
A) Request Handling
1) User use browser to request file listing and/or download file (hereafter called: File Download Request) from custom ASP.NET page (Let's say In Server 1, IIS 1)
2) ASP.NET page or File Handler ashx (Server 1, IIS 1) call custom web service, SharePoint deault/OOTB web service or using SharePoint Object Model to access SharePoint Document Library (in Server 1,
IIS 2)
3) Both SharePoint and IIS Web Site
(Server 1, IIS 1 & IIS2) using the same service A/C
4) The web service / file handler returns the file object to IIS 1
5) IIS 1 returns the requested file to the user
B) Application Architecture (In testing environment)
1) no load balancing
2) 1 DB server (Server 2)
3) 1 Application Server (with IIS 1 - ASP.NET, IIS 2, SharePoint Web Site with default SharePoint Web Service, IIS 3 SharePoint Admin Site & IIS 4 Custom SharePoint Web Service)
4) Separate AD server (Server 3)
Thanks,
Sohel Rana
http://ranaictiu-technicalblog.blogspot.com
.NET Beginner 3.5
-
Performance is very slow in a BI 7 report
Hi All,
Very Good Morning.........
Performance is very slow in the BEx report... how do we rectify the problem? Please provide solutions...
Please provide step-by-step.
If possible, suggest enhancements to the report.
Advance thanks........
Bye,
Vijaay.
Hi Vijay,
There is a lot you can do about performance in the background (InfoCube level),
but very little at the front end (query level).
To increase performance:
1) Create indexes on the cube - creating indexes increases the read speed of the data.
InfoCube Manage -> Performance tab -> Create DB indexes
2) Roll up data to aggregates - if aggregates are already built, you can roll up the data; otherwise, check the option to create aggregates.
InfoCube Manage -> Rollup tab -> Execute.
3) Compression - compression squeezes all the data into a single request, so the overhead of maintaining multiple requests is reduced, subsequently increasing query performance.
Infocube Manage -> collapse -> execute.
Apart from the above, you also have the option to partition the cube. A quick search on cube partitioning in SDN will give you the details.
Hope this helps,
Sri...
-
Hi,
I am having a problem pulling the right set of data for the situation below.
SELECT
FIRST_VALUE(p1)
OVER (PARTITION BY workorderid ORDER BY NVL(completeddate,createdate) DESC NULLS LAST) pressure
FROM HISTORY
Let's say the result set has values for p1 of 'no' and 'yes' for the same completeddate; then the
query returns 'no' because it sorts the data alphabetically.
All I want is: when the completed date is the same for two records, look at the create date and give me the value of p1 from the row with max(createdate).
How do I do that using an analytic function?
Thanks
Billu
Sorry, I wasn't clear enough earlier.
Here is my entire query:
SELECT
m.loannumber,
w.ordernumber,
o.spikey clientcode,
v.id vendorname,
w.spiworkcode workcode,
w.department_fk department,
SUBSTR(w.loantypetermid, 10) loantype,
TRUNC(w.orderdate) orderdate,
vw_rhist.wcompleteddate Winterization_Completed_Date,
CASE WHEN vw_rhist.wintsystemtype = 'HeatingSystemType.Dry'
THEN 'Dry'
WHEN vw_rhist.wintsystemtype = 'HeatingSystemType.Steam'
THEN 'Steam'
WHEN vw_rhist.wintsystemtype = 'HeatingSystemType.Radiant'
THEN 'Radiant'
ELSE NULL
END System_Type,
CASE WHEN vw_rhist.pressuretestsystem = 'YesNo.Yes' THEN 'Yes'
WHEN vw_rhist.pressuretestsystem = 'YesNo.No' THEN 'No'
ELSE NULL
END pressuretestsystem,
CASE WHEN vw_rhist.holdpressure = 'YesNo.Yes' THEN 'Yes'
WHEN vw_rhist.holdpressure = 'YesNo.No' THEN 'No'
ELSE NULL
END holdpressure,
CASE WHEN vw_rhist.systemwell = 'YesNo.Yes' THEN 'Yes'
WHEN vw_rhist.systemwell = 'YesNo.No' THEN 'No'
ELSE NULL
END systemwell,
a.state state
FROM organizationalrole o,
vendor v,
serviceableasset s,
address a,
mortgage m,
(SELECT *
FROM workorder
WHERE department_fk = 2
AND loantypetermid IN ( 'LoanType.CDG','LoanType.CV','LoanType.FHA','LoanType.FMC','LoanType.FNM',
'LoanType.REO','LoanType.UNK','LoanType.VA'))w,
(SELECT workorderid,rnk,wcompleteddate,pressuretestsystem,
holdpressure,systemwell,wintsystemtype
FROM
((SELECT workorderid,
RANK() OVER (PARTITION BY workorderid ORDER BY oid) rnk,
MIN(COALESCE(winterizationdate, completeddate, createdate))
OVER (PARTITION BY workorderid) wcompleteddate,
FIRST_VALUE(pressuretestsystem)
OVER (PARTITION BY workorderid
ORDER BY NVL(completeddate, createdate) DESC NULLS LAST)
pressuretestsystem, FIRST_VALUE(holdpressure)
OVER (PARTITION BY workorderid
ORDER BY NVL(completeddate, createdate) DESC NULLS LAST)
holdpressure,
FIRST_VALUE(systemwell)
OVER (PARTITION BY workorderid
ORDER BY NVL(completeddate, createdate) DESC NULLS LAST)
systemwell,
FIRST_VALUE(wintsystemtype)
OVER (PARTITION BY workorderid
ORDER BY NVL(completeddate, createdate) DESC NULLS LAST)
wintsystemtype
FROM vwresulthistory
WHERE resulttype = 'OrderUpdate'
AND iswinterized = 'WinterizationCompleted.Yes') vw_rhist1)
WHERE wCompleteddate >=
TO_DATE('10/01/2009','MM/DD/YYYY') AND
wCompleteddate <= TO_DATE('10/6/2009','MM/DD/YYYY')) vw_rhist
WHERE vw_rhist.rnk = 1
AND v.objectid = w.vendor_fk
AND w.ordernumber = vw_rhist.workorderid
AND w.servicingasset_fk = s.objectid
AND s.address_fk = a.objectid
AND o.objectid = w.client_fk
AND m.objectid = s.primaryloan_fk
ORDER BY 2
The problem I have is with the FIRST_VALUE in question. Sometimes I have two identical completeddates; in that situation, FIRST_VALUE returns pressuretestsystem sorted alphabetically. Say you have two identical completed dates of 12/3/09 and pressuretestsystem values of 'no' and 'yes': what I get is 'no'.
In that kind of situation, I want to look at the create date and return the latest value (which here is 'yes'). Do I need a CASE statement here?
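A second sort key inside the OVER clause is the usual fix for ties like this: order by the coalesced completed/create date first, then by create date, both descending. Here is a minimal sketch using Python's sqlite3, with the column names assumed and Oracle's NVL written as COALESCE:

```python
import sqlite3

# Two rows with the same completeddate but different createdates,
# mirroring the tie described above (names and data are invented).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE history
               (workorderid INTEGER, completeddate TEXT,
                createdate TEXT, p1 TEXT)""")
con.executemany("INSERT INTO history VALUES (?, ?, ?, ?)",
                [(1, "2009-12-03", "2009-12-01", "no"),
                 (1, "2009-12-03", "2009-12-02", "yes")])

# createdate DESC breaks the completeddate tie, so the row with the
# latest createdate ('yes') wins instead of the alphabetically first p1.
rows = con.execute("""
    SELECT first_value(p1) OVER (
               PARTITION BY workorderid
               ORDER BY coalesce(completeddate, createdate) DESC,
                        createdate DESC)
    FROM history
""").fetchall()

print(rows)  # [('yes',), ('yes',)]
```

With only the coalesced date in the ORDER BY, the tie would be broken arbitrarily; the explicit createdate key makes 'yes' win deterministically.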
Thanks for reading this far.
Billu.
-
FIRST_VALUE, LAST_VALUE: invalid identifier
Hello,
I am converting Access scripts to Oracle SQL. There is a FIRST function in the Access script. I converted it to Oracle SQL, but Oracle gives an "invalid identifier" error and I cannot understand the reason.
If you can help solve this problem, I would really appreciate it.
Access query:
select
First([Reporting Comps].COMPCODE) AS Cage,
Last(IIf([Reporting Comps]![COMPCODE] Is Not Null,[Reporting Comps]![compcode],"")) AS MFR,
First(COMPANIES.COMPANY_NAME) AS FirstOfCOMPANY_NAME,
First(COMPANIES.COMPANY_CODE) AS FirstOfCOMPANY_CODE,
Last(IIf([Reporting Comps_1]![COMPCODE] Is Not Null,[Reporting Comps_1]![compcode],"")) AS 2MFR,
First(COMPANIES_2.COMPANY_NAME) AS FirstOfCOMPANY_NAME1,
First([Reporting Comps_2].COMPCODE) AS 3MFR,
IIf([MFR] Is Not Null,[MFR],IIf([3MFR] Is Not Null,[3MFR],IIf([2MFR] Is Not Null,[2MFR],"Others"))) AS M
My script for oracle:
SELECT X,Y,Z,
FIRST_VALUE(BI_Reporting_Comps.COMPCODE) OVER ( ORDER BY IND_AUTO_KEY ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING ) CAGE,
LAST_VALUE(CASE WHEN BI_REPORTING_COMPS.COMPCODE IS NOT NULL THEN BI_REPORTING_COMPS.COMPCODE ELSE NULL END) OVER ( ORDER BY IND_AUTO_KEY ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING ) MFR,
FIRST_VALUE(COMPANIES.COMPANY_NAME) OVER ( ORDER BY IND_AUTO_KEY ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING ) FirstOfCOMPANY_NAME,
FIRST_VALUE(COMPANIES.COMPANY_CODE) OVER ( ORDER BY IND_AUTO_KEY ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING ) FirstOfCOMPANY_CODE,
LAST_VALUE(CASE WHEN BI_REPORTING_COMPS.COMPCODE IS NOT NULL THEN BI_REPORTING_COMPS.COMPCODE ELSE NULL END) OVER ( ORDER BY IND_AUTO_KEY ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING ) MFR2,
FIRST_VALUE(COMPANIES.COMPANY_NAME) OVER ( ORDER BY IND_AUTO_KEY ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING ) FirstOfCOMPANY_NAME1,
CASE WHEN MFR Is Not Null THEN MFR
WHEN MFR2 Is Not Null THEN MFR2
WHEN MF3 Is Not Null THEN MFR3
ELSE 'OTHERS'
END AS MANUFACTURER
FROM .............
GROUP BY
X,
Y,
Z,
CASE WHEN MFR Is Not Null THEN MFR
WHEN MFR2 Is Not Null THEN MFR2
WHEN MF3 Is Not Null THEN MFR3
ELSE 'OTHERS'
END
ORDER BY IND_AUTO_KEY ASC;
Hello
I've reformatted your code to make it more readable. When you post code, please remember to use the {noformat}{noformat} tag before and after to ensure the formatting is preserved.
SELECT BI_INVOICES.REPORTED_IN,
BI_Invoices.IND_AUTO_KEY,
BI_Invoices.Invoice,
BI_Invoices.Customer,
BI_Invoices."P/N",
BI_Invoices.Qty_Ship,
BI_Invoices.Unit_Cost,
BI_Invoices.Unit_Sell,
BI_Invoices.Total_Cost,
BI_Invoices.Total_Sales,
CASE
WHEN BI_INVOICES.END_DEST1 '@'--<MISSING OPERATOR
THEN BI_INVOICES.END_DEST1
WHEN BI_INV_DEST_REF_1.DEST IS NOT NULL THEN BI_INV_DEST_REF_1.DEST
ELSE bi_inv_dest_ref_2.destination
END
AS End_Dest,
BI_Invoices.End_App,
BI_Invoices.Salesperson,
BI_Invoices.SOD_AUTO_KEY,
FIRST_VALUE (
BI_Reporting_Comps.COMPCODE)
OVER (ORDER BY BI_INVOICES.IND_AUTO_KEY
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
Cage,
LAST_VALUE (
CASE
WHEN BI_REPORTING_COMPS.COMPCODE IS NOT NULL
THEN
BI_REPORTING_COMPS.COMPCODE
ELSE ''
END)
OVER (ORDER BY BI_INVOICES.IND_AUTO_KEY
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
MFR1,
FIRST_VALUE (
COMPANIES.COMPANY_NAME)
OVER (ORDER BY BI_INVOICES.IND_AUTO_KEY
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
FirstOfCOMPANY_NAME,
LAST_VALUE (
CASE
WHEN BI_REPORTING_COMPS.COMPCODE IS NOT NULL
THEN
BI_REPORTING_COMPS.COMPCODE
ELSE ''
END)
OVER (ORDER BY BI_INVOICES.IND_AUTO_KEY
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
MFR2,
FIRST_VALUE (
COMPANIES.COMPANY_NAME)
OVER (ORDER BY BI_INVOICES.IND_AUTO_KEY
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
FirstOfCOMPANY_NAME1,
FIRST_VALUE (
BI_Reporting_Comps.COMPCODE)
OVER (ORDER BY BI_INVOICES.IND_AUTO_KEY
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
MFR3
FROM QCTL.STOCK
LEFT JOIN QCTL.PO_DETAIL
ON QCTL.STOCK.POD_AUTO_KEY = QCTL.PO_DETAIL.POD_AUTO_KEY
LEFT JOIN QCTL.PO_HEADER
ON QCTL.PO_DETAIL.POH_AUTO_KEY = QCTL.PO_HEADER.POH_AUTO_KEY
AND QCTL.STOCK.ORIGINAL_PO_NUMBER = QCTL.PO_HEADER.PO_NUMBER
LEFT JOIN QCTL.RO_DETAIL
ON QCTL.STOCK.ROD_AUTO_KEY = QCTL.RO_DETAIL.ROD_AUTO_KEY
LEFT JOIN QCTL.RO_HEADER
ON QCTL.RO_DETAIL.ROH_AUTO_KEY = QCTL.RO_HEADER.ROH_AUTO_KEY
LEFT JOIN QCTL.COMPANIES
ON QCTL.PO_HEADER.CMP_AUTO_KEY = QCTL.COMPANIES.CMP_AUTO_KEY
AND QCTL.RO_HEADER.CMP_AUTO_KEY = QCTL.COMPANIES.CMP_AUTO_KEY
LEFT JOIN BI_REPORTING_COMPS
ON QCTL.COMPANIES.CMP_AUTO_KEY = BI_REPORTING_COMPS.CMP_AUTO_KEY
RIGHT JOIN QCTL.STOCK_RESERVATIONS
ON QCTL.STOCK_RESERVATIONS.STM_AUTO_KEY = QCTL.STOCK.STM_AUTO_KEY
RIGHT JOIN BI_INVOICES
ON BI_INVOICES.SOD_AUTO_KEY = QCTL.STOCK_RESERVATIONS.SOD_AUTO_KEY
LEFT JOIN BI_INV_DEST_REF_2
ON BI_INVOICES.INVOICE = BI_INV_DEST_REF_2.DESTINATION
LEFT JOIN BI_INV_DEST_REF_1
ON BI_INVOICES.INVOICE = BI_INV_DEST_REF_1.INVOICE
GROUP BY BI_INVOICES.REPORTED_IN,
BI_Invoices.IND_AUTO_KEY,
BI_Invoices.Invoice,
BI_Invoices.Customer,
BI_Invoices."P/N",
BI_Invoices.Qty_Ship,
BI_Invoices.Unit_Cost,
BI_Invoices.Unit_Sell,
BI_Invoices.Total_Cost,
BI_Invoices.Total_Sales,
CASE
WHEN BI_INVOICES.END_DEST1 '@' --<MISSING OPERATOR
THEN
BI_INVOICES.END_DEST1
WHEN BI_INV_DEST_REF_1.DEST IS NOT NULL
THEN
BI_INV_DEST_REF_1.DEST
ELSE
bi_inv_dest_ref_2.destination
END,
BI_Invoices.End_App,
BI_Invoices.Salesperson,
BI_Invoices.SOD_AUTO_KEY,
BI_Reporting_Comps.COMPCODE,
COMPANIES.COMPANY_NAME
I've marked two lines with MISSING OPERATOR. If you were intending that to be Not Equal, you need to use != rather than the "<>" operator, which the forum software here swallows as a "feature".
Also, what's this bit meant to do?
LAST_VALUE (
CASE
WHEN BI_REPORTING_COMPS.COMPCODE IS NOT NULL
THEN
BI_REPORTING_COMPS.COMPCODE
ELSE ''
END
You're saying: if BI_REPORTING_COMPS.COMPCODE is not null then use it, otherwise use null. In Oracle, when you assign '' to a string it is actually set to NULL. With that in mind, you could just use LAST_VALUE (BI_REPORTING_COMPS.COMPCODE).
-
First_value in Oracle OLAP based on a non-time dimension
Hi Experts,
I am trying to figure out to do first_value kind of calculation in Oracle OLAP.
Here is the requirement -
Fact table -
cust_id valid_flag balance
1 y 1000
1 y 1500
2 N 0
2 y 2000
2 y 2500
If valid_flag ='N' and balance =0, then set balance =0 for other cells for the customer. This needs to be done for all the dimensions.
Any pointer would be useful.
Regards, Neelesh

If the switch is really based on a dimension attribute (named particular_value), then it should be easy to create a derived measure.
CASE
WHEN particular_dim.particular_value = 'N'
THEN 0
ELSE my_cube.balance
END

But perhaps what you mean is that there is another measure, IS_VALID say, and you need to get the value of IS_VALID for the current cust_id and the member named 'particular_value' of the particular_dim. In this case it would look something like this.
CASE
WHEN my_cube.is_valid[particular_dim = 'particular_value'] = 'N'
THEN 0
ELSE my_cube.balance
END

I expect that neither of the above expressions is right, but it should give you some pointers as to the kinds of tricks you can use. -
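Outside of OLAP, the zeroing rule from the original question can be sketched in plain Python to check the intent (an illustrative sketch only, using the sample fact rows from the post, not Oracle OLAP code): if any row for a customer carries valid_flag = 'N' with balance = 0, every balance for that customer is forced to 0.

```python
# Sketch of the requested rule: if a customer has any row with
# valid_flag = 'N' and balance = 0, zero out all balances for that customer.
rows = [
    {"cust_id": 1, "valid_flag": "y", "balance": 1000},
    {"cust_id": 1, "valid_flag": "y", "balance": 1500},
    {"cust_id": 2, "valid_flag": "N", "balance": 0},
    {"cust_id": 2, "valid_flag": "y", "balance": 2000},
    {"cust_id": 2, "valid_flag": "y", "balance": 2500},
]

# Customers flagged invalid anywhere in the fact table.
invalid = {r["cust_id"] for r in rows
           if r["valid_flag"] == "N" and r["balance"] == 0}

# Force balance to 0 for every row of an invalid customer.
adjusted = [dict(r, balance=0 if r["cust_id"] in invalid else r["balance"])
            for r in rows]

for r in adjusted:
    print(r["cust_id"], r["balance"])
```

The same shape carries over to the derived-measure CASE expression above: the "any row invalid" test is what the `[particular_dim = 'particular_value']` lookup stands in for.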
FIRST_VALUE & LAST_VALUE
Given the following sample data:
AverageRate CurrencyDate
1.0001 2001-09-03
1.0001 2001-09-04
1.0002 2001-09-05
1.0002 2001-09-06
1.0005 2001-09-07
1.0005 2001-09-08
1.0005 2001-09-09
1.0001 2001-09-10
1.0002 2001-09-11
1.0002 2001-09-12
How can I, using FIRST_VALUE and LAST_VALUE, return the desired output:
AverageRate From To
1.0001 2001-09-03 2001-09-04
1.0002 2001-09-05 2001-09-06
1.0005 2001-09-07 2001-09-09
1.0001 2001-09-10 2001-09-10
1.0002 2001-09-11 2001-09-12
I know how to do this using min/max, self joins and row_number() with partition by, but surely with 2012 there is a simpler (and more optimized) solution which only requires one select statement. Are my expectations too much?

First_Value, Lag, Lead, ROWS, and RANGE are great additions (Oracle still has more analytic functions than SQL Server, though), but they won't necessarily magically solve all problems. If your data didn't have the unusual position-related duplicates (where 1.0002 shows up again later in the list, but is a different group to the first time through), your query might have fit nicely into a less complex First_Value scenario.
Here's a way that takes care of it, using the new SQL 2012 ROWS feature together with FIRST_VALUE. The key element is the "ROWS" clause, which lets us limit the window to rows that occurred BEFORE the current row; FIRST_VALUE is then applied in the final select statement as well.
Create_Testdata:
Declare @Tbl table (avgrate decimal(9,4), currdate date)
Insert @Tbl Select 1.0001, '2001-09-03'
Insert @Tbl Select 1.0001, '2001-09-04'
Insert @Tbl Select 1.0002, '2001-09-05'
Insert @Tbl Select 1.0002, '2001-09-06'
Insert @Tbl Select 1.0005, '2001-09-07'
Insert @Tbl Select 1.0005, '2001-09-08'
Insert @Tbl Select 1.0005, '2001-09-09'
Insert @Tbl Select 1.0001, '2001-09-10'
Insert @Tbl Select 1.0002, '2001-09-11'
Insert @Tbl Select 1.0002, '2001-09-12'
Lister:
With L1_RowNums_Lead_Lag as
(
Select *
, row_number() over(order by @@Servername) as RN
, lead(avgrate) over(order by @@servername) as NextRate
, lag(avgrate) over(order by @@servername) as PrevRate
From @Tbl
)
, L2_RowFamily as
(
Select *
, case When (Prevrate <> avgrate and NextRate <> Avgrate) then RN
       when NextRate = AvgRate and IsNull(PrevRate, -1) <> AvgRate then RN
       Else NULL
  End as RowFamily
From L1_RowNums_Lead_Lag
)
, L3_RowFamily_Assign as
(
Select *
, case when RowFamily is Null
       Then Max(RowFamily) over(partition by avgrate order by rn Rows between unbounded preceding and current row)
       Else RowFamily
  End as RowFamily_All
from L2_RowFamily
)
, L4_Final as
(
Select *
, first_Value(Currdate) over(partition by RowFamily_all order by RowFamily_all) as FirstCurrdate
, Last_Value(Currdate) over(partition by RowFamily_all order by RowFamily_all) as LastCurrdate
From L3_RowFamily_Assign
)
Select * from L4_Final /* Where clause has to be in final query */
where RowFamily is not null
order by RN;
Results match your posted desired results. -
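This is the classic "islands" problem, and the logic the CTE chain implements with LAG and ROWS can be sketched procedurally as a cross-check (illustrative Python, using the sample data from the post): start a new group whenever the rate differs from the previous row, then keep the first and last date of each group.

```python
data = [
    (1.0001, "2001-09-03"), (1.0001, "2001-09-04"),
    (1.0002, "2001-09-05"), (1.0002, "2001-09-06"),
    (1.0005, "2001-09-07"), (1.0005, "2001-09-08"),
    (1.0005, "2001-09-09"), (1.0001, "2001-09-10"),
    (1.0002, "2001-09-11"), (1.0002, "2001-09-12"),
]

# Start a new island whenever the rate differs from the previous row;
# otherwise extend the current island's end date.
islands = []
for rate, day in data:
    if not islands or islands[-1][0] != rate:
        islands.append([rate, day, day])   # [rate, from, to]
    else:
        islands[-1][2] = day

for rate, start, end in islands:
    print(rate, start, end)
```

Note that 1.0001 and 1.0002 each appear as two separate islands, which is exactly the duplicate-handling the SQL solution above needed the RowFamily machinery for.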
When to use First instead of First_Value functions
When would you use the First_Values function instead of First? What's the difference between the two?
Here are some examples of both in action. Use the analytic function FIRST_VALUE when you don't want to group your records. Use the aggregate function FIRST when you do.
select
deptno ,
hiredate ,
ename ,
first_value( ename ) over ( partition by deptno order by hiredate ) first_hired
from emp
order by deptno, hiredate ;
DEPTNO HIREDATE ENAME FIRST_HIRE
10 1981-06-09 CLARK CLARK
10 1981-11-17 KING CLARK
10 1982-01-23 MILLER CLARK
20 1980-12-17 SMITH SMITH
20 1981-04-02 JONES SMITH
20 1981-12-03 FORD SMITH
20 1987-04-19 SCOTT SMITH
20 1987-05-23 ADAMS SMITH
30 1981-02-20 ALLEN ALLEN
30 1981-02-22 WARD ALLEN
30 1981-05-01 BLAKE ALLEN
30 1981-09-08 TURNER ALLEN
30 1981-09-28 MARTIN ALLEN
30 1981-12-03 JAMES ALLEN
14 rows selected.
select
deptno,
min(ename) keep ( dense_rank first order by hiredate ) as first_hired
from emp
group by deptno
order by deptno ;
DEPTNO FIRST_HIRE
10 CLARK
20 SMITH
30 ALLEN
3 rows selected.

As for analytic FIRST_VALUE versus the analytic version of FIRST: FIRST_VALUE has the option to IGNORE NULLS, whereas FIRST doesn't.
select
deptno ,
hiredate ,
comm ,
first_value( comm IGNORE NULLS ) over ( partition by deptno order by hiredate desc )
as first_value ,
min(comm) keep ( dense_rank first order by hiredate desc )
over ( partition by deptno )
as first
from emp
order by deptno, hiredate ;
DEPTNO HIREDATE COMM FIRST_VALUE FIRST
10 1981-06-09
10 1981-11-17
10 1982-01-23
20 1980-12-17
20 1981-04-02
20 1981-12-03
20 1987-04-19
20 1987-05-23
30 1981-02-20 300 1400
30 1981-02-22 500 1400
30 1981-05-01 1400
30 1981-09-08 0 1400
30 1981-09-28 1400 1400
30 1981-12-03
14 rows selected.

Other than that, the two seem functionally equivalent. I prefer FIRST_VALUE over FIRST, though, because its syntax is more familiar.
Joe Fuda
SQL Snippets
Message was edited by: SnippetyJoe - added third example -
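The analytic FIRST_VALUE behaviour shown above can be reproduced with any engine that has window functions. Here is a minimal sketch using Python's built-in sqlite3 module (this assumes the bundled SQLite is 3.25 or later, which added window function support) over a cut-down emp table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table emp (deptno int, hiredate text, ename text)")
conn.executemany("insert into emp values (?, ?, ?)", [
    (10, "1981-06-09", "CLARK"), (10, "1981-11-17", "KING"),
    (20, "1980-12-17", "SMITH"), (20, "1981-04-02", "JONES"),
])

# Analytic form: one row per employee, with the first hire of the
# department repeated across the whole partition (no GROUP BY).
rows = conn.execute("""
    select deptno, ename,
           first_value(ename) over (partition by deptno order by hiredate)
    from emp order by deptno, hiredate
""").fetchall()
for r in rows:
    print(r)
```

As in the Oracle example, every row keeps its own ename while the window column carries the department's first hire; the aggregate FIRST/KEEP form would instead collapse each department to one row.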
Can anyone tell me if there is a more efficient way to do the following........
SELECT distinct accprf.account_id
first_value(accprf.value_start)
over (partition by accprf.account_id order by accprf.start_date) ,
first_value(accprf.value_end)
over (partition by accprf.account_id order by accprf.end_date desc)
FROM account_performance accprf
The table account_performance has more than one row per account_id. Each row has a start date and end date, with account performance information for that time interval. The above gets the values from the very first entry and the very last entry per account_id.
I was expecting to find a less complicated way to do the above.
Thanks, Mario.

Well, in that case it doesn't matter!
This is a little demonstration:
SQL> drop table t;
Table dropped.
SQL>
SQL> create table t (c1 number, c2 number, c3 number);
Table created.
SQL>
SQL> insert into t values (1, 1, 10);
1 row created.
SQL> insert into t values (1, 2, 11);
1 row created.
SQL> insert into t values (1, 3, 7);
1 row created.
SQL>
SQL>
SQL> insert into t values (2, 3, 20);
1 row created.
SQL> insert into t values (2, 4, 21);
1 row created.
SQL> insert into t values (2, 5, 15);
1 row created.
SQL>
SQL> SELECT c1,
2 SUM(c3) keep (dense_rank first order by c2) sum,
3 MIN(c3) keep (dense_rank first order by c2) min,
4 MAX(c3) keep (dense_rank first order by c2) max
5 FROM t
6 group by c1;
C1 SUM MIN MAX
1 10 10 10
2 20 20 20

Message was edited by:
Michel
Sorry,
I had made an implicit assumption that c2 was unique; if it is not, then you are right. It must be MIN and not MAX to match the first query.
Thanks for your correction. -
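The original question's intent (first value_start by start_date and last value_end by end_date per account) can also be sketched in a single pass procedurally; this is an illustrative Python version with hypothetical sample values, keeping the column names from the post:

```python
# (account_id, start_date, end_date, value_start, value_end)
rows = [
    ("A", "2020-01", "2020-03", 100, 110),
    ("A", "2020-04", "2020-06", 110, 130),
    ("B", "2020-02", "2020-05", 50, 60),
]

accounts = {}
for acct, start, end, v_start, v_end in rows:
    rec = accounts.setdefault(acct, {"first": (start, v_start),
                                     "last": (end, v_end)})
    if start < rec["first"][0]:
        rec["first"] = (start, v_start)   # earliest start wins
    if end > rec["last"][0]:
        rec["last"] = (end, v_end)        # latest end wins

# Per account: value_start of the first interval, value_end of the last.
result = {a: (r["first"][1], r["last"][1]) for a, r in accounts.items()}
print(result)
```

This mirrors what the two FIRST_VALUE calls (one ordered ascending, one descending) compute, followed by DISTINCT to collapse the duplicated rows.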
Stack implementation performing badly.
I have a lot of data points (x and y co-ordinates) which will be added to a stack and popped off as necessary.
Initially I implemented the stack using an ArrayList; however, when profiling my program I found that the 'new' operation involved in adding an element to the stack was my program's bottleneck. So I implemented my own stack based on two integer arrays.
I thought that this would be much faster as there was no creation of new objects; however, this was not the case. In fact it performed almost ten times worse than before.
Is there something inherently slow/incorrect with my stack implementation?
The code is as follows:
class MyStack {
    int[] x;
    int[] y;
    int index;

    public MyStack() {
        x = new int[1024];
        y = new int[1024];
        index = 0;
    }

    public int size() {
        return index;
    }

    public void add(int x1, int y1) {
        if (index == x.length) {            // full: double both arrays
            int[] temp = new int[2 * x.length];
            System.arraycopy(x, 0, temp, 0, x.length);
            x = temp;
            temp = new int[2 * y.length];
            System.arraycopy(y, 0, temp, 0, y.length);
            y = temp;
        }
        x[index] = x1;
        y[index] = y1;
        index++;
    }

    public int getX() {
        return x[index - 1];
    }

    public int getY() {
        return y[index - 1];
    }

    public void remove() {
        index--;
    }
}

Is this viable? As the stack is not fully populated before pop operations are performed, i.e. push and pop operations can and do happen at any time, the timing of the stack until it reaches "steady-state" will by its nature be much more than the pushes and pops after.

Yes it would, but then you will get a feeling for what takes time in your stack: internal restructuring or the actual stack operation. You can make an artificial test. First make N (quite large) pushes, so the stack grows, and time that; then make the same number of alternating push/pop operations, so the size won't change, and time that. The difference is basically the internal restructuring of the stack.
I'd say this will show that the second phase has become much faster now that you don't have to create two new Integer objects for each push. If you can anticipate the stack size you should make it that big from the start.
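For reference, the same paired-array-with-doubling design can be sketched in Python (a translation of the Java above for illustration, not a performance claim; Python lists already amortize their own growth):

```python
class XYStack:
    """Stack of (x, y) points backed by two flat arrays with doubling growth."""

    def __init__(self, capacity=1024):
        self.x = [0] * capacity
        self.y = [0] * capacity
        self.index = 0

    def push(self, x1, y1):
        if self.index == len(self.x):       # full: double both arrays
            self.x.extend([0] * len(self.x))
            self.y.extend([0] * len(self.y))
        self.x[self.index] = x1
        self.y[self.index] = y1
        self.index += 1

    def pop(self):
        self.index -= 1
        return self.x[self.index], self.y[self.index]

    def __len__(self):
        return self.index


s = XYStack(capacity=2)
for p in [(1, 2), (3, 4), (5, 6)]:   # the third push triggers a doubling
    s.push(*p)
print(len(s), s.pop())
```

The doubling means each element is copied at most a constant number of times on average, which is why the benchmark suggested above should show the steady-state push/pop phase dominating once the stack stops growing.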
Performance of one process is slow (statspack report is attached)
Hi,
My version is 9.2.0.7 (HP-UX Itanium)
We have recently migrated the DB from Windows 2003 to Unix (HP-UX Itanium 11.23).
We have one process which usually took 15 mins before migration; now it is taking 25 mins to complete. I did not change anything at the DB level: same init.ora parameters, and table and index statistics are up to date.
Please guide me on what might be wrong at the instance level. I am skipping the SQL query portion of the statspack report due to security reasons.
This statspack report was taken from before running the process until after its completion.
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
UAT 496948094 UAT 1 9.2.0.7.0 NO dbt
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 2 15-Jul-09 10:59:05 11 2.7
End Snap: 3 15-Jul-09 12:42:18 17 4.4
Elapsed: 103.22 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 400M Std Block Size: 8K
Shared Pool Size: 160M Log Buffer: 512K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 44,830.27 435,162.76
Logical reads: 15,223.37 147,771.73
Block changes: 198.12 1,923.15
Physical reads: 47.02 456.37
Physical writes: 7.05 68.45
User calls: 50.01 485.42
Parses: 25.99 252.26
Hard parses: 0.24 2.38
Sorts: 3.40 33.00
Logons: 0.02 0.16
Executes: 34.64 336.27
Transactions: 0.10
% Blocks changed per Read: 1.30 Recursive Call %: 27.05
Rollback per transaction %: 33.70 Rows per Sort: 1532.57
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.69 In-memory Sort %: 100.00
Library Hit %: 99.38 Soft Parse %: 99.06
Execute to Parse %: 24.98 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 48.39 % Non-Parse CPU: 99.53
Shared Pool Statistics Begin End
Memory Usage %: 94.56 94.19
% SQL with executions>1: 74.01 62.51
% Memory for SQL w/exec>1: 52.89 54.29
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
CPU time 895 48.10
db file sequential read 195,597 443 23.83
log file parallel write 1,706 260 13.97
log buffer space 415 122 6.54
control file parallel write 2,074 66 3.53
Wait Events for DB: UAT Instance: UAT Snaps: 2 -3
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
db file sequential read 195,597 0 443 2 306.6
log file parallel write 1,706 0 260 152 2.7
log buffer space 415 0 122 293 0.7
control file parallel write 2,074 0 66 32 3.3
log file sync 678 4 51 75 1.1
db file scattered read 6,608 0 21 3 10.4
log file switch completion 9 0 2 208 0.0
SQL*Net more data to client 24,072 0 1 0 37.7
log file single write 18 0 0 19 0.0
db file parallel read 9 0 0 13 0.0
control file sequential read 928 0 0 0 1.5
SQL*Net break/reset to clien 292 0 0 0 0.5
latch free 25 2 0 3 0.0
log file sequential read 18 0 0 2 0.0
LGWR wait for redo copy 37 0 0 0 0.1
direct path read 45 0 0 0 0.1
direct path write 45 0 0 0 0.1
SQL*Net message from client 308,861 0 30,960 100 484.1
SQL*Net more data from clien 26,217 0 3 0 41.1
SQL*Net message to client 308,867 0 0 0 484.1
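As a quick sanity check on the wait-event table above, the "Avg wait (ms)" column can be recomputed from the Waits and Time(s) columns; a sketch using three values taken from the report (integer division matches the report's truncated column):

```python
# Recompute Avg wait (ms) = total wait time (s) * 1000 // waits
events = {
    "db file sequential read": (195_597, 443),   # (waits, time_s)
    "log file parallel write": (1_706, 260),
    "log buffer space": (415, 122),
}

avg_ms = {name: secs * 1000 // waits
          for name, (waits, secs) in events.items()}
print(avg_ms)
```

The 152 ms average for "log file parallel write" and 293 ms for "log buffer space" are the striking numbers here: redo writes that slow usually point at the I/O path for the online redo logs rather than the SQL itself.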
Background Wait Events for DB: UAT Instance: UAT Snaps: 2 -3
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
log file parallel write 1,706 0 260 152 2.7
control file parallel write 2,074 0 66 32 3.3
log buffer space 10 0 1 149 0.0
db file scattered read 90 0 1 7 0.1
db file sequential read 104 0 1 5 0.2
log file single write 18 0 0 19 0.0
control file sequential read 876 0 0 0 1.4
log file sequential read 18 0 0 2 0.0
latch free 4 2 0 9 0.0
LGWR wait for redo copy 37 0 0 0 0.1
direct path read 45 0 0 0 0.1
direct path write 45 0 0 0 0.1
rdbms ipc message 7,222 5,888 21,416 2965 11.3
pmon timer 2,079 2,079 6,044 2907 3.3
smon timer 21 21 6,002 ###### 0.0
Instance Activity Stats for DB: UAT Instance: UAT Snaps: 2 -3
Statistic Total per Second per Trans
CPU used by this session 89,478 14.5 140.3
CPU used when call started 89,478 14.5 140.3
CR blocks created 148 0.0 0.2
DBWR buffers scanned 158,122 25.5 247.8
DBWR checkpoint buffers written 11,909 1.9 18.7
DBWR checkpoints 3 0.0 0.0
DBWR free buffers found 136,228 22.0 213.5
DBWR lru scans 53 0.0 0.1
DBWR make free requests 53 0.0 0.1
DBWR summed scan depth 158,122 25.5 247.8
DBWR transaction table writes 43 0.0 0.1
DBWR undo block writes 19,283 3.1 30.2
SQL*Net roundtrips to/from client 308,602 49.8 483.7
active txn count during cleanout 6,812 1.1 10.7
background checkpoints completed 3 0.0 0.0
background checkpoints started 3 0.0 0.0
background timeouts 7,204 1.2 11.3
branch node splits 4 0.0 0.0
buffer is not pinned count 35,587,689 5,746.4 55,780.1
buffer is pinned count 202,539,737 32,704.6 317,460.4
bytes received via SQL*Net from c 106,536,068 17,202.7 166,984.4
bytes sent via SQL*Net to client 98,286,059 15,870.5 154,053.4
calls to get snapshot scn: kcmgss 346,517 56.0 543.1
calls to kcmgas 42,563 6.9 66.7
calls to kcmgcs 7,735 1.3 12.1
change write time 12,666 2.1 19.9
cleanout - number of ktugct calls 9,698 1.6 15.2
cleanouts and rollbacks - consist 0 0.0 0.0
cleanouts only - consistent read 1,161 0.2 1.8
cluster key scan block gets 15,789 2.6 24.8
cluster key scans 6,534 1.1 10.2
commit cleanout failures: block l 199 0.0 0.3
commit cleanout failures: buffer 69 0.0 0.1
commit cleanout failures: callbac 0 0.0 0.0
commit cleanouts 40,688 6.6 63.8
commit cleanouts successfully com 40,420 6.5 63.4
commit txn count during cleanout 4,652 0.8 7.3
consistent changes 150 0.0 0.2
consistent gets 93,071,913 15,028.6 145,880.7
consistent gets - examination 1,487,526 240.2 2,331.6
cursor authentications 322 0.1 0.5
data blocks consistent reads - un 51 0.0 0.1
db block changes 1,226,967 198.1 1,923.2
db block gets 1,206,448 194.8 1,891.0
deferred (CURRENT) block cleanout 13,478 2.2 21.1
dirty buffers inspected 9,876 1.6 15.5
enqueue conversions 41 0.0 0.1
enqueue releases 12,783 2.1 20.0
enqueue requests 12,785 2.1 20.0
enqueue waits 0 0.0 0.0
execute count 214,538 34.6 336.3
free buffer inspected 9,879 1.6 15.5
free buffer requested 349,615 56.5 548.0
hot buffers moved to head of LRU 141,298 22.8 221.5
immediate (CR) block cleanout app 1,161 0.2 1.8
immediate (CURRENT) block cleanou 23,894 3.9 37.5
Instance Activity Stats for DB: UAT Instance: UAT Snaps: 2 -3
Statistic Total per Second per Trans
index fast full scans (full) 19 0.0 0.0
index fetch by key 671,512 108.4 1,052.5
index scans kdiixs1 56,328,309 9,095.5 88,288.9
leaf node 90-10 splits 16 0.0 0.0
leaf node splits 2,187 0.4 3.4
logons cumulative 105 0.0 0.2
messages received 1,653 0.3 2.6
messages sent 1,653 0.3 2.6
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 35,118,594 5,670.7 55,044.8
opened cursors cumulative 4,036 0.7 6.3
parse count (failures) 43 0.0 0.1
parse count (hard) 1,516 0.2 2.4
parse count (total) 160,939 26.0 252.3
parse time cpu 421 0.1 0.7
parse time elapsed 870 0.1 1.4
physical reads 291,165 47.0 456.4
physical reads direct 45 0.0 0.1
physical writes 43,672 7.1 68.5
physical writes direct 45 0.0 0.1
physical writes non checkpoint 41,379 6.7 64.9
pinned buffers inspected 3 0.0 0.0
prefetched blocks 88,896 14.4 139.3
prefetched blocks aged out before 22 0.0 0.0
process last non-idle time 75,777 12.2 118.8
recursive calls 114,829 18.5 180.0
recursive cpu usage 11,704 1.9 18.3
redo blocks written 275,521 44.5 431.9
redo buffer allocation retries 419 0.1 0.7
redo entries 623,735 100.7 977.6
redo log space requests 10 0.0 0.0
redo log space wait time 192 0.0 0.3
redo ordering marks 3 0.0 0.0
redo size 277,633,840 44,830.3 435,162.8
redo synch time 5,185 0.8 8.1
redo synch writes 675 0.1 1.1
redo wastage 818,952 132.2 1,283.6
redo write time 26,562 4.3 41.6
redo writes 1,705 0.3 2.7
rollback changes - undo records a 395 0.1 0.6
rollbacks only - consistent read 49 0.0 0.1
rows fetched via callback 553,910 89.4 868.2
session connect time 74,797 12.1 117.2
session logical reads 94,278,361 15,223.4 147,771.7
session pga memory 2,243,808 362.3 3,516.9
session pga memory max 1,790,880 289.2 2,807.0
session uga memory 2,096,104 338.5 3,285.4
session uga memory max 32,637,856 5,270.1 51,156.5
shared hash latch upgrades - no w 56,430,882 9,112.0 88,449.7
sorts (memory) 21,055 3.4 33.0
sorts (rows) 32,268,330 5,210.5 50,577.3
summed dirty queue length 53,238 8.6 83.5
switch current to new buffer 37,071 6.0 58.1
table fetch by rowid 90,385,043 14,594.7 141,669.4
table fetch continued row 104,336 16.9 163.5
table scan blocks gotten 376,181 60.7 589.6
Instance Activity Stats for DB: UAT Instance: UAT Snaps: 2 -3
Statistic Total per Second per Trans
table scan rows gotten 5,103,693 824.1 7,999.5
table scans (long tables) 97 0.0 0.2
table scans (short tables) 53,485 8.6 83.8
transaction rollbacks 247 0.0 0.4
user calls 309,698 50.0 485.4
user commits 423 0.1 0.7
user rollbacks 215 0.0 0.3
workarea executions - opt 37,753 6.1 59.2
write clones created in foregroun 718 0.1 1.1
Tablespace IO Stats for DB: UAT Instance: UAT Snaps: 2 -3
->ordered by IOs (Reads + Writes) desc
Tablespace
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
USERS
200,144 32 2.3 1.4 22,576 4 0 0.0
UNDOTBS1
38 0 9.5 1.0 19,348 3 0 0.0
SYSTEM
2,016 0 4.7 1.5 505 0 0 0.0
TOOLS
14 0 9.3 1.3 1,237 0 0 0.0
IMAGES
3 0 6.7 1.0 3 0 0 0.0
INDX
3 0 6.7 1.0 3 0 0 0.0
Buffer Pool Statistics for DB: UAT Instance: UAT Snaps: 2 -3
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Write Buffer
Number of Cache Buffer Physical Physical Buffer Complete Busy
P Buffers Hit % Gets Reads Writes Waits Waits Waits
D 49,625 99.7 94,278,286 291,074 43,627 0 0 0
Instance Recovery Stats for DB: UAT Instance: UAT Snaps: 2 -3
-> B: Begin snapshot, E: End snapshot
Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
B 38 9 2311 13283 13021 92160 13021
E 38 7 899 4041 3767 92160 3767
Buffer Pool Advisory for DB: UAT Instance: UAT End Snap: 3
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate (default block size first)
Size for Size Buffers for Est Physical Estimated
P Estimate (M) Factr Estimate Read Factor Physical Reads
D 32 .1 3,970 2.94 2,922,389
D 64 .2 7,940 2.54 2,524,222
D 96 .2 11,910 2.38 2,365,570
D 128 .3 15,880 2.27 2,262,338
D 160 .4 19,850 2.19 2,183,287
D 192 .5 23,820 1.97 1,962,758
D 224 .6 27,790 1.30 1,293,415
D 256 .6 31,760 1.21 1,203,737
D 288 .7 35,730 1.10 1,096,115
D 320 .8 39,700 1.06 1,056,077
D 352 .9 43,670 1.04 1,036,708
D 384 1.0 47,640 1.02 1,012,912
D 400 1.0 49,625 1.00 995,426
D 416 1.0 51,610 0.99 982,641
D 448 1.1 55,580 0.97 966,874
D 480 1.2 59,550 0.89 890,749
D 512 1.3 63,520 0.88 879,062
D 544 1.4 67,490 0.87 864,539
D 576 1.4 71,460 0.80 800,284
D 608 1.5 75,430 0.76 756,222
D 640 1.6 79,400 0.75 749,473
PGA Aggr Target Stats for DB: UAT Instance: UAT Snaps: 2 -3
-> B: Begin snap E: End snap (rows identified with B or E contain data
which is absolute i.e. not diffed over the interval)
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem - percentage of workarea memory under manual control
PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
100.0 851 0
%PGA %Auto %Man
PGA Aggr Auto PGA PGA Mem W/A PGA W/A W/A W/A Global Mem
Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K)
B 320 282 12.6 0.0 .0 .0 .0 16,384
E 320 281 15.3 0.0 .0 .0 .0 16,384
PGA Aggr Target Histogram for DB: UAT Instance: UAT Snaps: 2 -3
-> Opt Executions are purely in-memory operations
Low High
Opt Opt Total Execs Opt Execs 1-Pass Execs M-Pass Execs
8K 16K 37,010 37,010 0 0
16K 32K 70 70 0 0
32K 64K 11 11 0 0
64K 128K 34 34 0 0
128K 256K 9 9 0 0
256K 512K 54 54 0 0
512K 1024K 536 536 0 0
1M 2M 7 7 0 0
2M 4M 24 24 0 0
PGA Memory Advisory for DB: UAT Instance: UAT End Snap: 3
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
where Estd PGA Overalloc Count is 0
Estd Extra Estd PGA Estd PGA
PGA Target Size W/A MB W/A MB Read/ Cache Overalloc
Est (MB) Factr Processed Written to Disk Hit % Count
40 0.1 3,269.7 98.2 97.0 0
80 0.3 3,269.7 9.6 100.0 0
160 0.5 3,269.7 9.6 100.0 0
240 0.8 3,269.7 0.0 100.0 0
320 1.0 3,269.7 0.0 100.0 0
384 1.2 3,269.7 0.0 100.0 0
448 1.4 3,269.7 0.0 100.0 0
512 1.6 3,269.7 0.0 100.0 0
576 1.8 3,269.7 0.0 100.0 0
640 2.0 3,269.7 0.0 100.0 0
960 3.0 3,269.7 0.0 100.0 0
1,280 4.0 3,269.7 0.0 100.0 0
1,920 6.0 3,269.7 0.0 100.0 0
2,560 8.0 3,269.7 0.0 100.0 0
-------------------------------------------------------------

Rollback Segment Stats for DB: UAT Instance: UAT Snaps: 2 -3
->A high value for "Pct Waits" suggests more rollback segments may be required
->RBS stats may not be accurate between begin and end snaps when using Auto Undo
management, as RBS may be dynamically created and dropped as needed
Trans Table Pct Undo Bytes
RBS No Gets Waits Written Wraps Shrinks Extends
0 22.0 0.00 0 0 0 0
1 650.0 0.00 1,868,300 0 0 0
2 1,987.0 0.00 4,613,768 9 0 7
3 6,070.0 0.00 24,237,494 37 0 36
4 223.0 0.00 418,942 3 0 1
5 621.0 0.00 1,749,086 11 0 11
6 8,313.0 0.00 48,389,590 54 0 52
7 7,248.0 0.00 14,477,004 19 0 17
8 1,883.0 0.00 12,332,646 14 0 12
9 2,729.0 0.00 17,820,450 19 0 19
10 1,009.0 0.00 2,857,150 5 0 3
Rollback Segment Storage for DB: UAT Instance: UAT Snaps: 2 -3
->Opt Size should be larger than Avg Active
RBS No Segment Size Avg Active Opt Size Maximum Size
0 450,560 0 450,560
1 8,511,488 6,553 8,511,488
2 8,511,488 4,592,363 18,997,248
3 29,351,936 14,755,792 29,483,008
4 2,220,032 105,188 2,220,032
5 3,137,536 3,416,104 54,648,832
6 55,697,408 21,595,184 55,697,408
7 26,337,280 9,221,107 26,337,280
8 13,754,368 5,142,374 13,754,368
9 22,011,904 10,220,526 22,011,904
10 4,317,184 3,810,892 13,754,368
Undo Segment Summary for DB: UAT Instance: UAT Snaps: 2 -3
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo Undo Num Max Qry Max Tx Snapshot Out of uS/uR/uU/
TS# Blocks Trans Len (s) Concurcy Too Old Space eS/eR/eU
1 19,305 109,683 648 3 0 0 0/0/0/0/0/0
Undo Segment Stats for DB: UAT Instance: UAT Snaps: 2 -3
-> ordered by Time desc
Undo Num Max Qry Max Tx Snap Out of uS/uR/uU/
End Time Blocks Trans Len (s) Concy Too Old Space eS/eR/eU
15-Jul 12:32 10 13,451 3 2 0 0 0/0/0/0/0/0
15-Jul 12:22 87 13,384 6 1 0 0 0/0/0/0/0/0
15-Jul 12:12 3,746 13,229 91 1 0 0 0/0/0/0/0/0
15-Jul 12:02 8,949 13,127 648 1 0 0 0/0/0/0/0/0
15-Jul 11:52 1,496 10,476 24 1 0 0 0/0/0/0/0/0
15-Jul 11:42 3,895 10,441 6 1 0 0 0/0/0/0/0/0
15-Jul 11:32 531 9,155 1 3 0 0 0/0/0/0/0/0
15-Jul 11:22 0 8,837 3 0 0 0 0/0/0/0/0/0
15-Jul 11:12 4 8,817 3 1 0 0 0/0/0/0/0/0
15-Jul 11:02 587 8,766 2 2 0 0 0/0/0/0/0/0
Latch Activity for DB: UAT Instance: UAT Snaps: 2 -3
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
Consistent RBA 1,708 0.0 0 0
FIB s.o chain latch 40 0.0 0 0
FOB s.o list latch 467 0.0 0 0
SQL memory manager latch 1 0.0 0 2,038 0.0
SQL memory manager worka 174,015 0.0 0 0
active checkpoint queue 2,081 0.0 0 0
archive control 1 0.0 0 0
cache buffer handles 162,618 0.0 0 0
cache buffers chains 190,111,507 0.0 0.2 0 426,778 0.0
cache buffers lru chain 425,142 0.0 0.2 0 65,895 0.0
channel handle pool latc 202 0.0 0 0
channel operations paren 4,405 0.0 0 0
checkpoint queue latch 228,932 0.0 0.0 0 41,321 0.0
child cursor hash table 18,320 0.0 0 0
commit callback allocati 4 0.0 0 0
dml lock allocation 2,482 0.0 0 0
dummy allocation 204 0.0 0 0
enqueue hash chains 25,615 0.0 0 0
enqueues 15,416 0.0 0 0
event group latch 104 0.0 0 0
hash table column usage 410 0.0 0 191,319 0.0
internal temp table obje 1,048 0.0 0 0
job_queue_processes para 103 0.0 0 0
ktm global data 21 0.0 0 0
lgwr LWN SCN 3,215 0.0 0.0 0 0
library cache 1,657,451 0.0 0.0 0 1,479 0.1
library cache load lock 1,126 0.0 0 0
library cache pin 1,112,420 0.0 0.0 0 0
library cache pin alloca 670,952 0.0 0.0 0 0
list of block allocation 2,748 0.0 0 0
loader state object free 36 0.0 0 0
longop free list parent 1 0.0 0 1 0.0
messages 19,427 0.0 0 0
mostly latch-free SCN 3,229 0.3 0.0 0 0
multiblock read objects 15,022 0.0 0 0
ncodef allocation latch 99 0.0 0 0
object stats modificatio 28 0.0 0 0
post/wait queue 1,810 0.0 0 1,102 0.0
process allocation 202 0.0 0 104 0.0
process group creation 202 0.0 0 0
redo allocation 629,175 0.0 0.0 0 0
redo copy 0 0 623,865 0.0
redo writing 11,487 0.0 0 0
row cache enqueue latch 197,626 0.0 0 0
row cache objects 201,089 0.0 0 642 0.0
sequence cache 348 0.0 0 0
session allocation 3,634 0.1 0.0 0 0
session idle bit 621,031 0.0 0 0
session switching 99 0.0 0 0
session timer 2,079 0.0 0 0
Latch Activity for DB: UAT Instance: UAT Snaps: 2 -3
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
shared pool 786,331 0.0 0.1 0 0
sim partition latch 0 0 193 0.0
simulator hash latch 5,885,552 0.0 0 0
simulator lru latch 12,981 0.0 0 66,129 0.0
sort extent pool 120 0.0 0 0
transaction allocation 249 0.0 0 0
transaction branch alloc 99 0.0 0 0
undo global data 27,867 0.0 0 0
user lock 396 0.0 0 0
Latch Sleep breakdown for DB: UAT Instance: UAT Snaps: 2 -3
-> ordered by misses desc
Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
cache buffers lru chain 425,142 82 15 67/15/0/0/0
library cache 1,657,451 76 3 73/3/0/0/0
shared pool 786,331 37 2 35/2/0/0/0
redo allocation 629,175 31 1 30/1/0/0/0
cache buffers chains 190,111,507 21 4 19/0/2/0/0
Latch Miss Sources for DB: UAT Instance: UAT Snaps: 2 -3
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
cache buffers chains kcbget: pin buffer 0 2 0
cache buffers chains kcbgtcr: fast path 0 2 0
cache buffers lru chain kcbbiop: lru scan 2 12 0
cache buffers lru chain kcbbwlru 0 2 0
cache buffers lru chain kcbbxsv: move to being wri 0 1 0
library cache kgllkdl: child: cleanup 0 1 0
library cache kglpin: child: heap proces 0 1 0
library cache kglpndl: child: before pro 0 1 0
redo allocation kcrfwi: more space 0 1 0
shared pool kghalo 0 2 0
-------------------------------------------------------------