Get parameters specific only to SQL performance
Oracle Version: 11.2.0.3 / OEL 6.1
For diagnosing a performance issue related to a batch run, our application team wants to know all session/system level parameters that affect SQL performance.
I can't simply give the output of v$parameter.name and v$parameter.value, as there will be lots of parameters not related to SQL performance, like
diagnostic_dest
log_archive_dest_1
deferred_segment_creation
control_files
background_core_dump
I am looking for only those parameters which are relevant to SQL performance, like
optimizer_mode
statistics_level
memory_target
sga_target
Please see the relevant sections in the Performance Tuning Guide:
http://docs.oracle.com/cd/E11882_01/server.112/e16638/build_db.htm#g23964
http://docs.oracle.com/cd/E11882_01/server.112/e16638/optimops.htm#BABDECGJ
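Neither guide gives a single definitive list, but one practical approximation (my suggestion, not from the docs above) is to query the optimizer environment views, which expose only the parameters the optimizer actually uses, instance-wide or per session:

```sql
-- Parameters that influence the optimizer for the whole instance
select name, value, isdefault
from   v$sys_optimizer_env
order  by name;

-- Per-session view; :batch_sid is the SID of the batch session
select name, value
from   v$ses_optimizer_env
where  sid = :batch_sid
order  by name;
```

These views cover optimizer-related settings (optimizer_mode, optimizer_index_cost_adj, hash_area_size, and so on) but not memory parameters such as sga_target or memory_target, so you may still want to add a handful of memory/statistics parameters to the list by name.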
Similar Messages
-
How to run a project which gets parameters from a batch file
Hello all,
I run a program which accepts parameters only in interactive mode, feeding it from a batch file as you can see below:
# myprogram < batchfile.txt
Now, I'm updating its code in the SunStudioExpress IDE and I'd like to run it from the batch file. I have noticed that the project properties window has the option Run -> Arguments; however, this program doesn't accept arguments this way, and changing it would be a hard job.
Does someone know how to run this project so that it gets its parameters from the batch file?
Regards,
Glauber
Ah, it appears that when you run the project, "<" is passed as one of the arguments and is not treated as input redirection.
Sorry, it looks like it is not possible to do the redirection, and it looks like a bug to me. Could you please file it through bugs.sun.com? It shouldn't take long, as the problem is evident now. -
Please go through the important checklist/guidelines below to identify and resolve any performance issue in no time.
Checklist for Quick Performance problem Resolution
· Gather trace, code and other information for the given PE case:
- Latest code from the production env
- Trace (SQL queries, statistics, row source operations with row counts, explain plan, all wait events)
- Program parameters and their frequently used values
- Run frequency of the program
- Existing run-time/response time in production
- Business purpose
· Identify the most time-consuming SQL (taking more than 60% of program time) using trace and code analysis
· Check that all mandatory parameters/bind variables map directly to index columns of large transaction tables, without any functions applied
· Identify the most time-consuming operation(s) using the Row Source Operation section
· Study the program parameter inputs directly mapped into the SQL
· Identify all input bind parameters used by the SQL
· Is the SQL query returning a large number of rows for the given inputs?
· Which large tables, and which of their columns, are mapped to input parameters?
· Which operation scans the highest number of records in the Row Source Operation/Explain Plan?
· Is the Oracle cost-based optimizer using the right driving table for the given SQL?
· Check the time-consuming indexes on large tables and measure index selectivity
· Study the WHERE clause for input parameters mapped to tables and their columns, to find correct/optimal index usage
· Is the correct index being used for all large tables?
· Is there any full table scan on large tables?
· Is there any unwanted table being used in the SQL?
· Evaluate join conditions on large tables and their columns
· Is a full table scan on a large table caused by use of non-indexed columns?
· Is there any implicit or explicit conversion preventing an index from being used?
· Are statistics on all large tables up to date?
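Most of the checklist items above start from a SQL trace. A minimal way to capture one for another session (the SID/serial binds are placeholders you look up in v$session) is DBMS_MONITOR:

```sql
-- Enable extended SQL trace (waits + binds) for the batch session
begin
  dbms_monitor.session_trace_enable(
    session_id => :sid,
    serial_num => :serial_no,
    waits      => true,
    binds      => true);
end;
/

-- ... let the batch run, then switch tracing off again:
begin
  dbms_monitor.session_trace_disable(
    session_id => :sid,
    serial_num => :serial_no);
end;
/
```

The resulting trace file (under the diagnostic trace directory) can then be formatted with tkprof, e.g. `tkprof tracefile.trc out.txt sys=no sort=exeela`, to get the statistics, row source operations and wait events listed above.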
Quick resolution tips
1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML instead of row-by-row processing
2) Use data caching techniques/options to cache static data
3) Use pipelined table functions whenever possible
4) Use global temporary tables and materialized views to process complex records
5) Avoid multiple network trips for every row between two databases over a dblink; use a global temporary table or set operators to reduce network trips
6) Use EXTERNAL tables to build interfaces rather than creating custom tables and programs to load and validate the data
7) Understand Oracle's cost-based optimizer and tune the most expensive SQL queries with the help of explain plans
8) Follow Oracle PL/SQL best practices
9) Review the tables and indexes used in the SQL queries and avoid unnecessary table scanning
10) Avoid costly full table scans on big transaction tables with huge data volumes
11) Use appropriate filter conditions on index columns of seeded Oracle tables directly mapped to program parameters
12) Review the join conditions in the existing query's explain plan
13) Use Oracle hints to guide the cost-based optimizer to choose the best plan for your custom queries
14) Avoid applying SQL functions to index columns
15) Use appropriate hints to guide the Oracle CBO to choose the best plan and reduce response time
Thanks
Praful
I understand you were trying to post something helpful to people, but sorry, this list is appalling.
1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML instead of row-by-row processing
No, use pure SQL.
2) Use data caching techniques/options to cache static data
No, use pure SQL, and the database and operating system will handle caching.
3) Use pipelined table functions whenever possible
No, use pure SQL.
4) Use global temporary tables and materialized views to process complex records
No, use pure SQL.
5) Avoid multiple network trips for every row between two databases over a dblink; use a global temporary table or set operators to reduce network trips
No, use pure SQL.
6) Use EXTERNAL tables to build interfaces rather than creating custom tables and programs to load and validate the data
Makes no sense.
7) Understand Oracle's cost-based optimizer and tune the most expensive SQL queries with the help of explain plans
What about using the execution trace?
8) Follow Oracle PL/SQL best practices
Which are?
9) Review the tables and indexes used in the SQL queries and avoid unnecessary table scanning
You mean design your database and queries properly? And table scanning is not always bad.
10) Avoid costly full table scans on big transaction tables with huge data volumes
It depends on whether that is necessary or not.
11) Use appropriate filter conditions on index columns of seeded Oracle tables directly mapped to program parameters
No; consider that too many indexes can have an impact on overall performance and can prevent the CBO from picking the best plan. There's far more to creating indexes than just picking every column that people are likely to search on; you have to consider the cardinality and selectivity of the data, as well as the volumes of data being searched and the most common search requirements.
12) Review the join conditions in the existing query's explain plan
Well, if you don't have your join conditions right then your query won't work, so that's obvious.
13) Use Oracle hints to guide the cost-based optimizer to choose the best plan for your custom queries
No. Oracle recommends you do not use hints for query optimization (it says so in the documentation). Only certain hints such as APPEND, which are more related to particular operations such as inserting data, are generally acceptable. Oracle recommends you use the query optimization tools to help optimize your queries rather than use hints.
14) Avoid applying SQL functions to index columns
Why? If there's a need for a function-based index, then it should be used.
15) Use appropriate hints to guide the Oracle CBO to choose the best plan and reduce response time
See 13.
In short, there are no silver bullets for dealing with performance. Each situation is different and needs to be evaluated on its own merits. -
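To make the "use pure SQL" point concrete (STAGING and TARGET are invented tables for illustration, not from the thread), compare the three shapes:

```sql
-- Row-by-row (slowest): one PL/SQL-to-SQL context switch per row
begin
  for r in (select id, amount from staging) loop
    insert into target (id, amount) values (r.id, r.amount);
  end loop;
end;
/

-- Bulk binding (better): FORALL batches the context switches
declare
  type t_rows is table of staging%rowtype;
  l_rows t_rows;
begin
  select * bulk collect into l_rows from staging;  -- add LIMIT for huge sets
  forall i in 1 .. l_rows.count
    insert into target values l_rows(i);
end;
/

-- Pure SQL (usually best): no PL/SQL engine involved at all
insert into target (id, amount)
select id, amount from staging;
```

When the logic can be expressed as a single set-based statement, the last form typically wins; BULK COLLECT/FORALL is the fallback when procedural logic per row is genuinely unavoidable.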
[sql performance] inline view, group by, max, join
Hi. everyone.
I have a question with regard to a "group by" inline view, max value, join, and SQL performance.
I will give you simple table definitions in order for you to understand my intention.
Table A (parent)
C1
C2
C3
Table B (child)
C1
C2
C3
C4 number type(sequence number)
1. c1, c2, c3 are the key columns of table A.
2. c1, c2, c3, c4 are the key columns of table B.
3. Table A is the parent table of table B.
4. The c4 column of table B is the serial number
(c4 increases from 1 by 1 for every (c1,c2,c3)).
The following is a simple example of the SQL query.
select .................................
from table_a,
(select c1, c2, c3, max(c4)
from table_b
group by c1, c2, c3) table_c
where table_a.c1 = table_c.c1
and table_a.c2 = table_c.c2
and table_a.c3 = table_c.c3
The real query is not as simple as the above; more tables come
after the FROM clause.
Table A and table B are big tables, which have more than
100,000,000 rows.
The response time of this SQL is very, very slow, as everyone can expect.
Are there any solutions or SQL tips for the slow response time?
I am considering adding a new column to table B in order to identify the row which has the max serial number.
At this point, I am not sure adding a column is a good thing in every respect.
I will be waiting for your advice and every response
will be appreciated even if it is not the solution.
Have a good day.
HO.
Message was edited by:
user507290
For such big sources, check that:
1) you use full scans, hash joins or at least merge joins
2) you scan your source data as little as possible; in the best case each necessary table is scanned only once (for example, avoid an EXISTS clause that effectively scans a whole table via index lookups)
3) how much time you are spending on sorts and hash joins (either from v$session_longops directly or via some tool that visualises this info). If you are using workarea_size_policy = auto, you can probably switch to manual for this particular select and set sort_area_size and hash_area_size big enough to do as few sorts on disk as possible
4) if you have enough free resources, i.e. a big box, you can probably consider using some parallelism
5) if your full scans are taking a long time, check your db_file_multiblock_read_count; increasing it for this select will probably give some gain
6) run a trace and check what you are waiting on
7) most probably your problem is IO bound, so you may be able to do something on the OS side to make IO faster
8) if your query is now optimized as much as you can, the disks are running like mad and you are using all RAM, then that is probably the most you can get out of your box :)
9) if nothing helps, then you can start thinking about precalculation, either using your idea about a derived column or some materialized views
10) I hope you have a test box; do at least point (9) on it first and see whether it helps.
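If columns of table_b other than the keys are also needed, one rewrite (a sketch using the column names from the post) computes the max with an analytic function so table_b is scanned only once instead of being aggregated and re-joined:

```sql
-- rn = 1 marks the row with the highest serial number per (c1,c2,c3)
select a.c1, a.c2, a.c3, b.c4
from   table_a a
       join (select c1, c2, c3, c4,
                    row_number() over (partition by c1, c2, c3
                                       order by c4 desc) rn
             from   table_b) b
         on  b.c1 = a.c1
         and b.c2 = a.c2
         and b.c3 = a.c3
where  b.rn = 1;
```

On 100M+ row tables this still implies full scans and a large hash join, so the resource points above (workareas, parallelism, multiblock reads) apply either way.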
Gints Plivna
http://www.gplivna.eu -
Regarding SQL performance Analyzer
Hi,
I am working on the Oracle 11g new feature Real Application Testing. I want to test the performance of the DB after setting the DB_BLOCK_CHECKSUM and DB_BLOCK_CHECKING parameters to TRUE (currently FALSE) by using SQL Performance Analyzer. I have collected SQL tuning sets from production and imported them into the test server (a replica of PROD), and will test them on the test server only.
Will it be feasible to check this by using SPA? My concern is that in the test environment, concurrent transactions and DML operations will not be running.
Please help me out on this .
Rgds,
At
Hi,
Look at http://download.oracle.com/docs/cd/B28359_01/server.111/e12253/dbr_intro.htm
Regards, -
Ignore - SQL PERFORMANCE ANALYZER of 11g (duplicate question)
I am using 11g on Windows 2000. I want to run SQL Performance Analyzer to see the impact of init.ora parameter changes on some SQLs. Currently, I am using it in a test environment, but eventually I want to apply it to the production environment.
Let us say I want to see the effect of different values for db_file_multiblock_read_count.
When I run this in my database, will the changed values impact only the session where I am running SQL Performance Analyzer, or will they impact any other sessions accessing the same database instance during the analysis period? I think it impacts only the session where SQL Performance Analyzer is being run, but I want to make sure that is the case. I am not making any changes to parameters myself using ALTER statements, but Oracle is changing the parameters behind the scenes as part of its analysis.
Appreciate your feedback.
Message was edited by:
user632098
Analyzer analyzes.
When you change an init parameter, you change the parameter for everybody. -
Problem with SET GET parameters
Hi all,
I am facing a problem using SET and GET parameters.
There is a Z transaction (dialog program) where some fields of the screen have parameter IDs. That transaction is designed to display/change the status of only one inspection lot at a time.
Now I need to call that transaction in a loop using BDC; I mean, I need to update the status of multiple inspection lots (one after the other). Before calling the transaction I am using
SET PARAMETER ID 'QLS' FIELD lv_prueflos.
Unfortunately, the transaction only changes the first inspection lot. When I debugged, I found that the screen field changes in PAI. Even though PBO shows the next value, when it gets to PAI it automatically changes back to the first value (inspection lot).
Example: Inspection Lots : 4100000234
4100000235
4100000236
Now, the first time the call transaction is made, the status of inspection lot 4100000234 is changed. The second time, when inspection lot 4100000235 is passed, I can see it in PBO. But the moment it enters PAI, the screen field changes back to 4100000234.
Could you please help me in solving this issue.
Thanks,
Aravind
Hi,
Regarding your query on "Problem with SET GET parameters": follow the links below.
They will help you.
Re: Problem with Set parameter ID
Re: Problem in Set parameter ID
I hope this helps.
Regards,
Sekhar -
Help needed in SQL performance - Using CASE in SQL statement versus 2 queries
Hi,
I have a requirement to find count from a bunch of tables.
The SQL I have gives the count of all members.
I have created 2 queries to find count of active and inactive members.
The key difference is only the active dates.
Each query takes 20 seconds to execute.
I modified the SQL to use a CASE expression in the SELECT.
So after the data is fetched, the CASE expression evaluates the active date and gives 2 counts (active and inactive).
Is it advisable to use this approach? Will CASE improve SQL performance? I have to justify this.
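For illustration, the single-pass shape looks like this (the member table and date columns are invented names, since the original SQL isn't shown in the post):

```sql
-- One scan of the table produces both counts; each CASE arm yields
-- NULL for non-matching rows, and COUNT ignores NULLs
select count(case when active_date <= sysdate
                   and (end_date is null or end_date > sysdate)
             then 1 end) as active_count,
       count(case when end_date <= sysdate
             then 1 end) as inactive_count
from   members;
```

Because the table is scanned once instead of twice, a query that previously ran as two 20-second passes can often finish in roughly the time of one pass, though the only way to justify it is to test both forms.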
Please let me know your thoughts.
Thanks,
J
Hi,
If it can be done in a single SQL statement, do it in a single SQL statement.
You said:
Will CASE improve SQL performance?
There can be cases proving the performance either better or worse.
In your case, you should test it and tell us how it goes.
Regards,
Bhushan -
SQL Performance issue: Using user defined function with group by
Hi Everyone,
I'm new here and I could really use some help on a weird performance issue. I hope this is the right topic for SQL performance issues.
Well, OK, I created a function for converting a date from timezone GMT to a specified timezone.
CREATE OR REPLACE FUNCTION I3S_REP_1.fnc_user_rep_date_to_local (date_in IN date, tz_name_in IN VARCHAR2) RETURN date
IS
tz_name VARCHAR2(100);
date_out date;
BEGIN
SELECT
to_date(to_char(cast(from_tz(cast( date_in AS TIMESTAMP),'GMT')AT
TIME ZONE (tz_name_in) AS DATE),'dd-mm-yyyy hh24:mi:ss'),'dd-mm-yyyy hh24:mi:ss')
INTO date_out
FROM dual;
RETURN date_out;
END fnc_user_rep_date_to_local;
The following statement is just an example; the real statement is much more complex. So I select some date values from a table and aggregate a little.
select
stp_end_stamp,
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
stp_end_stamp
This statement selects ~70000 rows and needs ~70 ms.
If I use the function, it selects the same number of rows ;-) and takes ~4 sec ...
select
fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin'),
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin')
I understand that the DB has to execute the function for each row.
But if I execute the following statement, it takes only ~90ms ...
select
fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin'),
noi
from (
select
stp_end_stamp,
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
stp_end_stamp
)
The execution plan for all three statements is EXACTLY the same!!!
Usually i would say, that I use the third statement and the world is in order. BUT I'm working on a BI project with a tool called Business Objects and it generates SQL, so my hands are bound and I can't make this tool to generate the SQL as a subselect.
My questions are:
Why is the second statement sooo much slower than the third?
and
How can I force the optimizer to do whatever it is doing to make the third statement so fast?
I would really appreciate some help on this really weird issue.
Thanks in advance,
Andi
Hi,
The execution plan for all three statements is EXACTLY the same!!!
Not exactly. The plans are the same, true, but they use slightly different approaches to call the function. See:
drop table t cascade constraints purge;
create table t as select mod(rownum,10) id, cast('x' as char(500)) pad from dual connect by level <= 10000;
exec dbms_stats.gather_table_stats(user, 't');
create or replace function test_fnc(p_int number) return number is
begin
return trunc(p_int);
end;
explain plan for select id from t group by id;
select * from table(dbms_xplan.display(null,null,'advanced'));
explain plan for select test_fnc(id) from t group by test_fnc(id);
select * from table(dbms_xplan.display(null,null,'advanced'));
explain plan for select test_fnc(id) from (select id from t group by id);
select * from table(dbms_xplan.display(null,null,'advanced'));
Output:
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
2 - SEL$1 / T@SEL$1
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$1" "T"@"SEL$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "ID"[NUMBER,22]
2 - "ID"[NUMBER,22]
34 rows selected.
SQL>
Explained.
SQL>
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
2 - SEL$1 / T@SEL$1
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$1" "T"@"SEL$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "TEST_FNC"("ID")[22]
2 - "ID"[NUMBER,22]
34 rows selected.
SQL>
Explained.
SQL> select * from table(dbms_xplan.display(null,null,'advanced'));
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$F5BB74E1
2 - SEL$F5BB74E1 / T@SEL$2
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$F5BB74E1" "T"@"SEL$2")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$1")
MERGE(@"SEL$2")
OUTLINE_LEAF(@"SEL$F5BB74E1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "ID"[NUMBER,22]
2 - "ID"[NUMBER,22]
37 rows selected. -
I have taken the "Oracle Database 12c: Performance Management and Tuning" training from Oracle University. Now I would like to get certified on the "Oracle Database 11g: Performance Tuning" (1Z0-054) exam. Is it possible?
I essentially endorse, and refer you to, Matthew's and John's posts above.
I would differ slightly with Matthew, because my guess is you would often be able to use like-for-like 12c training for an 11g certification (I believe there are precedents). BEFORE ANYONE ASKS: THE OTHER WAY ROUND DOESN'T HAPPEN.
.... but I totally concur with Matthew that you would be ill advised to proceed on that basis without one of:
- This being advertised as possible on the website: e.g. https://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=654&get_params=p_id:169 ... option 2, show courses.
- Confirmation from Brandye
- Confirmation from an Oracle Certification Support web ticket ( http://education.oracle.com/pls/eval-eddap-dcd/OU_SUPPORT_OCP.home?p_source=OCP )
... The more common (and in my opinion usually better) way would be to get your 11g DBA OCP (or higher) first and then take the 1Z0-054. I am almost certain they will accept your 12c course for the 11g DBA OCP.
If you are choosing the route of not being an 11g (or 12c) DBA OCP first, but are on option 2 and relying on the course for certification, then the issue is more in the balance and you are even more strongly advised to get confirmation before proceeding (remember, if the rules need to be changed for you only, then any profit from the exam is lost).
In general, my understanding is that Oracle would prefer to encourage people to train on the latest version of the product available for training, and will prefer to avoid restrictions which would cause you to train on a lower version. (This is simply my guess at Oracle University policy ... personal opinion only.)
Having said all I have said I'd encourage you to go with the advice of the earlier two posts. -
I want to get only the increment amount and its percentage of increment from HR
Dear Sir,
I want to get only the increment amount and its percentage of increment. Can you please help me how to get it?
Please find the query below.
Regards
select
papf.employee_number as employee_number,
papf.last_name as last_name,
papf.middle_names as middle_name,
papf.first_name as first_name,
papf.known_as as preferred_name,
ppt.user_person_type as employment_type,
ppp.change_date as change_effective_date,
cur.currency_code as currency,
ppp.proposed_salary_n * per_saladmin_utility.get_annualization_factor(paaf.assignment_id , to_date(sysdate)) as current_salary_annual,ppp.proposed_salary_n,
per_saladmin_utility.get_previous_salary(paaf.assignment_id,ppp.PAY_PROPOSAL_ID) PREVIOUS_SAL,
ppp.proposal_reason as sal_proposla_resaon,
ppp.attribute2 as performance,
ppp.attribute3 as gandac,
ppp.attribute4 as NINEBOXPOSITION,
ppp.attribute1 as RPR
from
per_all_people_f papf,
per_all_assignments_f paaf,
per_pay_proposals ppp,
per_pay_bases ppb ,
pay_input_values_f piv,
pay_element_types_f pet,
fnd_currencies_tl cur,
per_person_type_usages_f pptu,
per_person_types ppt
where 1=1
and papf.employee_number in (127)
--and ppp.change_date >= '01-Jan-2005'
and papf.current_employee_flag = 'Y'
and sysdate between papf.effective_start_date and papf.effective_end_date
and papf.person_id = paaf.person_id
and sysdate between paaf.effective_start_date and paaf.effective_end_date
and paaf.assignment_id = ppp.assignment_id
and paaf.pay_basis_id = ppb.pay_basis_id
and ppb.input_value_id = piv.input_value_id
and piv.element_type_id = pet.element_type_id
and pet.input_currency_code = cur.currency_code
and papf.person_id = pptu.person_id
and pptu.person_type_id = ppt.person_type_id
order by papf.employee_number, ppp.change_date desc
Edited by: user10941925 on Jan 20, 2012 2:37 AM
Maybe (taking it literally):
select proposed_salary_n - PREVIOUS_SAL increment,
100 * (proposed_salary_n / PREVIOUS_SAL - 1) increment_percentage
from (select papf.employee_number as employee_number,
papf.last_name as last_name,
papf.middle_names as middle_name,
papf.first_name as first_name,
papf.known_as as preferred_name,
ppt.user_person_type as employment_type,
ppp.change_date as change_effective_date,
cur.currency_code as currency,
ppp.proposed_salary_n * per_saladmin_utility.get_annualization_factor(paaf.assignment_id,to_date(sysdate)) as current_salary_annual,
ppp.proposed_salary_n,
per_saladmin_utility.get_previous_salary(paaf.assignment_id,ppp.PAY_PROPOSAL_ID) PREVIOUS_SAL,
ppp.proposal_reason as sal_proposla_resaon,
ppp.attribute2 as performance,
ppp.attribute3 as gandac,
ppp.attribute4 as NINEBOXPOSITION,
ppp.attribute1 as RPR
from per_all_people_f papf,
per_all_assignments_f paaf,
per_pay_proposals ppp,
per_pay_bases ppb,
pay_input_values_f piv,
pay_element_types_f pet,
fnd_currencies_tl cur,
per_person_type_usages_f pptu,
per_person_types ppt
where 1=1
and papf.employee_number in (127)
-- and ppp.change_date >= '01-Jan-2005'
and papf.current_employee_flag = 'Y'
and sysdate between papf.effective_start_date and papf.effective_end_date
and papf.person_id = paaf.person_id
and sysdate between paaf.effective_start_date and paaf.effective_end_date
and paaf.assignment_id = ppp.assignment_id
and paaf.pay_basis_id = ppb.pay_basis_id
and ppb.input_value_id = piv.input_value_id
and piv.element_type_id = pet.element_type_id
and pet.input_currency_code = cur.currency_code
and papf.person_id = pptu.person_id
and pptu.person_type_id = ppt.person_type_id
order by papf.employee_number, ppp.change_date desc)
order by employee_number, change_effective_date desc
Regards
Etbin -
Hi All,
The Actual query to perform is below.
SELECT name,number from emp WHERE CASE WHEN :1='T' AND term_date IS Not NULL THEN 1 WHEN :1='A' AND term_date IS NULL THEN 1 WHEN :1='ALL' THEN 1 ELSE 1 END = 1;
I have tried it in the DB adapter as below, with #vInputParam as the parameter for :1
SELECT name,number from emp WHERE CASE WHEN #vInputParam='T' AND term_date IS Not NULL THEN 1 WHEN #vInputParam='A' AND term_date IS NULL THEN 1 WHEN #vInputParam='ALL' THEN 1 ELSE 1 END = 1;
I am getting error code 17003, java.sql.SQLException: Invalid column index.
Please suggest how to use the ':' bind character in a DB adapter SELECT query in SOA 11g.
Can someone help me on this please?
Thanks,
Hari
Hi,
Could you please make sure the binding style (Oracle Positional, Oracle Named, etc.) of the seeded VO and the custom VO are the same?
This is the option you get when extending your VO, so make sure both are the same.
You can refer the below link too
VO extension leads to "Invalid column index" exception
Thanks
Bharat -
No of columns in a table and SQL performance
How does table size affect SQL performance?
I am comparing 2 tables with the same number of rows (54 million rows):
table1(columns a,b,c,d,e,f..) has 40 columns
table2 (columns (a,b,c,d)
SQL uses columns a,b.
SQL using table2 runs in 1 sec.
SQL using table1 runs in 30 min.
Can anyone please let me know how the table size and the number of columns in a table affect the performance of SQLs?
Thanks
jeevan.
user600431 wrote:
This is a general question. I just want to compare a table with more columns and a table with fewer columns, with the same number of rows.
I am finding that the table with fewer columns performs better than the table with more columns.
Assuming there are no row chains, will there be any difference in performance based on the number of columns in a table?
Jeevan,
the question is not how many columns your table has, but how large your table segment is. If your query runs a full table scan, it has to read through the whole table segment, so in that case the size of the table matters.
A table having more columns potentially has a larger row size than a table with fewer columns, but this is not a general rule. Think of large columns, e.g. varchar2 columns; think of blank (NULL) columns; and you can easily end up with a table consisting of a single column taking up more space per row than a table with 200 columns consisting only of varchar2(1) columns.
Check the DBA/ALL/USER_SEGMENTS view to determine the size of your two table segments. If you gather statistics on the tables then the dictionary will contain information about the average row size.
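That check can be done with a quick query (segment names below match the example table names in the question):

```sql
-- Compare the physical size of the two table segments
select segment_name,
       round(bytes / 1024 / 1024) as size_mb,
       blocks
from   user_segments
where  segment_name in ('TABLE1', 'TABLE2');

-- Average row length, populated after gathering statistics
select table_name, num_rows, avg_row_len
from   user_tables
where  table_name in ('TABLE1', 'TABLE2');
```

If TABLE1's segment is tens of times larger, a full scan reading every block would readily explain the 1 second versus 30 minutes difference.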
If your query is using indexes then the size of the table won't affect the query performance significantly in many cases.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
SQL PERFORMANCE SQL ANALYZER of 11g
I am using 11g on Windows 2000. I want to run SQL Performance Analyzer to see the impact of parameter changes on some SQLs. Currently, I am using it in a test environment, but eventually I want to apply it to the production environment.
Let us say I want to see the effect of different values of db_file_multiblock_read_count.
When I run this in my database, will the changed values impact only the session where I am running SQL Performance Analyzer, or will they impact any other sessions accessing the same database instance? I think it impacts only the session where SQL Performance Analyzer is being run, but I want to make sure that is the case.
Appreciate your feedback.
I think it impacts only the session where SQL Performance Analyzer is being run, but want to make sure that is the case?
The database instance is part of a larger 'system' which includes a fixed set of physical resources. Your session, and every other session, work within the constraints of those resources. When you change the current SQL statement, you will be moving the balance between those resources.
For example, a disk can only respond to one access request at a time. A memory location can be used for one piece of data at a time. A DB cache buffer can only reflect one block at a time. There are a lot of 'points of serialization'.
Although the major impact should be on the current session, there will be some impact on every other session in the system.
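The scope question can also be made concrete: whether a parameter change is per-session or instance-wide depends on how it is set (the value below is just an example):

```sql
-- Affects only the current session
alter session set db_file_multiblock_read_count = 64;

-- Affects every session in the instance; this parameter is dynamic,
-- and scope=both also persists the change to the spfile
alter system set db_file_multiblock_read_count = 64 scope = both;
```

SQL Performance Analyzer itself replays the SQL tuning set in its own sessions, so testing a session-modifiable parameter this way confines the direct change, while an ALTER SYSTEM change is visible to everybody.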
By the way, there is an 'edit' button available for every post you create. As a courtesy, you could edit the title of the duplicate to let us know it is indeed a duplicate, or you could edit that other thread to ask the other question you were going to ask. -
Help needed here please. I am new to this concept and I am working on a tutorial based on SQL performance and security. I have worked my head around this but now I am stuck.
Here is the questions:
1. Analyse possible performance problems, and suggest solutions for each of the following transactions against the database
a) A manager of a project needs to inspect total planned and actual hours spent on a project broken down by activity.
e.g
Project: xxxxxxxxxxxxxx
Activity Code planned actual (to date)
1 20 25
2 30 30
3 40 24
Total 300 200
Note that actual time spent on an activity must be calculated from the WORK UNIT table.
b)On several lists (e.g. list or combo boxes) in the on-line system it is necessary to identify completed, current, or future projects.
2. Security: Justify and implement solutions at the server that meet the following security requirements
(i)Only members of the Corporate Strategy Department (which is an organisation unit) should be able to enter, update and delete data in the project table. All users should be able to read this information.
(ii)Employees should only be able to read information from the project table (excluding the budget) for projects they are assigned to.
(iii)Only the manager of a project should be able to update (insert, update, delete) any non-key information in the project table relating to that project.
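A sketch for transaction (a) is below. Since the WORK_UNIT DDL is truncated further down, its column names (wu_proj_id, wu_act_id, wu_hours) and a planned-hours column on ACTIVITY are assumptions, not taken from the tutorial:

```sql
-- Planned vs actual (to date) hours per activity for one project;
-- the outer join keeps activities with no work units yet (actual = 0)
select a.act_id,
       a.act_planned_hours         as planned,
       nvl(sum(w.wu_hours), 0)     as actual_to_date
from   activity a
       left join work_unit w
              on  w.wu_proj_id = a.act_proj_id
              and w.wu_act_id  = a.act_id
where  a.act_proj_id = :proj_id
group  by a.act_id, a.act_planned_hours
order  by a.act_id;
```

For the performance discussion the tutorial asks for, the obvious points are an index on the WORK_UNIT join columns to support this aggregation, and possibly precomputing the totals (e.g. a materialized view) if the report is run frequently.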
Here are the project tables:
set echo on
-- Changes
-- 4.10.00
-- * manager of employee on a project included in the employee_on_project table
-- * activity table now has a compound key, based on ID dependence between
--   project and activity

drop table org_unit cascade constraints;
drop table project cascade constraints;
drop table employee cascade constraints;
drop table employee_on_project cascade constraints;
drop table employee_on_activity cascade constraints;
drop table activity cascade constraints;
drop table activity_order cascade constraints;
drop table work_unit cascade constraints;

-- org_unit
-- type - for example in lmu might be FACULTY, or SCHOOL
CREATE TABLE org_unit (
    ou_id            NUMBER(4)    CONSTRAINT ou_pk PRIMARY KEY,
    ou_name          VARCHAR2(40) CONSTRAINT ou_name_uq UNIQUE
                                  CONSTRAINT ou_name_nn NOT NULL,
    ou_type          VARCHAR2(30) CONSTRAINT ou_type_nn NOT NULL,
    ou_parent_org_id NUMBER(4)    CONSTRAINT ou_parent_org_unit_fk
                                  REFERENCES org_unit
);

-- project
CREATE TABLE project (
    proj_id                NUMBER(5)    CONSTRAINT project_pk PRIMARY KEY,
    proj_name              VARCHAR2(40) CONSTRAINT proj_name_uq UNIQUE
                                        CONSTRAINT proj_name_nn NOT NULL,
    proj_budget            NUMBER(8,2)  CONSTRAINT proj_budget_nn NOT NULL,
    proj_ou_id             NUMBER(4)    CONSTRAINT proj_ou_fk REFERENCES org_unit,
    proj_planned_start_dt  DATE,
    proj_planned_finish_dt DATE,
    proj_actual_start_dt   DATE
);

-- employee
CREATE TABLE employee (
    emp_id       NUMBER(6)    CONSTRAINT emp_pk PRIMARY KEY,
    emp_name     VARCHAR2(40) CONSTRAINT emp_name_nn NOT NULL,
    emp_hiredate DATE         CONSTRAINT emp_hiredate_nn NOT NULL,
    ou_id        NUMBER(4)    CONSTRAINT emp_ou_fk REFERENCES org_unit
);

-- activity
-- note each activity is associated with a project
-- act_type is the type of the activity, for example ANALYSIS, DESIGN, BUILD,
-- USER ACCEPTANCE TESTING ...
-- each activity has a people budget, in other words an amount to spend on wages
CREATE TABLE activity (
    act_id               NUMBER(6),
    act_proj_id          NUMBER(5)    CONSTRAINT act_proj_fk REFERENCES project
                                      CONSTRAINT act_proj_id_nn NOT NULL,
    act_name             VARCHAR2(40) CONSTRAINT act_name_nn NOT NULL,
    act_type             VARCHAR2(30) CONSTRAINT act_type_nn NOT NULL,
    act_planned_start_dt DATE,
    act_actual_start_dt  DATE,
    act_planned_end_dt   DATE,
    act_actual_end_dt    DATE,
    act_planned_hours    NUMBER(6)    CONSTRAINT act_planned_hours_nn NOT NULL,
    act_people_budget    NUMBER(8,2)  CONSTRAINT act_people_budget_nn NOT NULL,
    CONSTRAINT act_pk PRIMARY KEY (act_id, act_proj_id)
);

-- employee on project
-- when an employee is assigned to a project, an hourly rate is set
-- remember that the person's manager depends on the project they are on,
-- the implication being that the manager needs to be assigned to the project
-- before the 'managed'
CREATE TABLE employee_on_project (
    ep_emp_id      NUMBER(6)   CONSTRAINT ep_emp_fk REFERENCES employee,
    ep_proj_id     NUMBER(5)   CONSTRAINT ep_proj_fk REFERENCES project,
    ep_hourly_rate NUMBER(5,2) CONSTRAINT ep_hourly_rate_nn NOT NULL,
    ep_mgr_emp_id  NUMBER(6),
    CONSTRAINT ep_pk PRIMARY KEY (ep_emp_id, ep_proj_id),
    CONSTRAINT ep_mgr_fk FOREIGN KEY (ep_mgr_emp_id, ep_proj_id)
        REFERENCES employee_on_project
);

-- employee on activity
CREATE TABLE employee_on_activity (
    ea_emp_id        NUMBER(6),
    ea_proj_id       NUMBER(5),
    ea_act_id        NUMBER(6),
    ea_planned_hours NUMBER(3) CONSTRAINT ea_planned_hours_nn NOT NULL,
    CONSTRAINT ea_pk PRIMARY KEY (ea_emp_id, ea_proj_id, ea_act_id),
    CONSTRAINT ea_act_fk FOREIGN KEY (ea_act_id, ea_proj_id) REFERENCES activity,
    CONSTRAINT ea_ep_fk FOREIGN KEY (ea_emp_id, ea_proj_id)
        REFERENCES employee_on_project
);

-- activity order
-- only need a prior activity. If activity A is followed by activity B then
-- B is the prior activity of A
CREATE TABLE activity_order (
    ao_act_id       NUMBER(6),
    ao_proj_id      NUMBER(5),
    ao_prior_act_id NUMBER(6),
    CONSTRAINT ao_pk PRIMARY KEY (ao_act_id, ao_prior_act_id, ao_proj_id),
    CONSTRAINT ao_act_fk FOREIGN KEY (ao_act_id, ao_proj_id)
        REFERENCES activity (act_id, act_proj_id),
    CONSTRAINT ao_prior_act_fk FOREIGN KEY (ao_prior_act_id, ao_proj_id)
        REFERENCES activity (act_id, act_proj_id)
);

-- work unit
-- remember that DATE includes time
CREATE TABLE work_unit (
    wu_emp_id   NUMBER(5),
    wu_act_id   NUMBER(6),
    wu_proj_id  NUMBER(5),
    wu_start_dt DATE CONSTRAINT wu_start_dt_nn NOT NULL,
    wu_end_dt   DATE CONSTRAINT wu_end_dt_nn NOT NULL,
    CONSTRAINT wu_pk PRIMARY KEY (wu_emp_id, wu_proj_id, wu_act_id, wu_start_dt),
    CONSTRAINT wu_ea_fk FOREIGN KEY (wu_emp_id, wu_proj_id, wu_act_id)
        REFERENCES employee_on_activity (ea_emp_id, ea_proj_id, ea_act_id)
);
/* enter data */
start ouins
start empins
start projins
start actins
start aoins
start epins
start eains
start wuins
start pmselect
I have the tables containing ouins and the rest. Email me at [email protected] if you want to have a look at the tables.

The answer to your 2nd question is easy. Create database roles for the various groups of people who are allowed to access the data or perform various DML actions.
Then assign the various users to these roles. The users will be restricted to what the roles are restricted to.
Look up roles if you are not familiar with them.
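A minimal sketch of the role-based approach described above, applied to requirement (i); the role and user names are illustrative:

```sql
-- Hypothetical role for the Corporate Strategy Department.
CREATE ROLE corp_strategy;
GRANT INSERT, UPDATE, DELETE ON project TO corp_strategy;

-- All users may read the project table.
GRANT SELECT ON project TO PUBLIC;

-- Assign the role to a department member (user name is an assumption).
GRANT corp_strategy TO some_user;
```

Requirements (ii) and (iii) are row-level and column-level restrictions, so roles alone will not satisfy them; the usual Oracle routes are a view over `project` joined to `employee_on_project` (excluding `proj_budget`), or Virtual Private Database policies.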