BW Statistics setting.
Dear All,
I am having some doubts about the BW Statistics settings.
In the selection for BW Statistics settings (AWB -> Tools -> BW Statistics for InfoProvider), all objects, including master data objects, ODS objects, and cubes, are selected.
My doubt is: does BW Statistics work for master data objects and ODS objects,
or does it only work for cubes?
Do we need to deselect all checkboxes in the settings for master data objects and ODS objects, and check them only for cubes?
Thanks all in advance.
Hi,
With the new architecture for BI reporting, collection of statistics for query runtime analysis was enhanced or changed. The parallelization in the data manager area (during data read) that is being used more frequently has led to splitting the previous "OLAP" statistics data into "data manager" data (such as database access times, RFC times) and front-end and OLAP times. The statistics data is collected in separate tables, but it can be combined using the InfoProvider for the technical content.
The information as to whether statistics data is collected for an object no longer depends on the InfoProvider. Instead, it depends on the objects for which the data is collected, that is, on a query, a workbook, or a Web template. The associated settings are maintained in the RSDDSTAT transaction.
Effects on Existing Data
Due to the changes in the OLAP and front-end architecture, the statistics data collected up to now can only partially be compared with the new data.
Since the structure of the new tables differs greatly from that of the table RSDDSTAT, InfoProviders that are based on previous data (table RSDDSTAT) can no longer be supplied with data.
Effects on Customizing
The Collect Statistics setting is obsolete. Instead, you have to determine whether and at which granularity you wish to display the statistics data for the individual objects (query, workbook, Web template). In the RSDDSTAT transaction, you can turn the statistics on and off for all queries of an InfoProvider. The maintenance of the settings can, as before, be reached from the Data Warehousing Workbench using Tools -> BW Statistics.
You can use this BRCONNECT function to update the statistics on the Oracle database for the cost-based optimizer.
By running update statistics regularly, you make sure that the database statistics are up-to-date, so improving database performance. The Oracle cost-based optimizer (CBO) uses the statistics to optimize access paths when retrieving data for queries. If the statistics are out-of-date, the CBO might generate inappropriate access paths (such as using the wrong index), resulting in poor performance.
From Release 4.0, the CBO is a standard part of the SAP System. If statistics are available for a table, the database system uses the cost-based optimizer. Otherwise, it uses the rule-based optimizer.
Partitioned tables, except where they are explicitly excluded by setting the active flag in the DBSTATC table to I. For more information, see SAP
InfoCube tables for the SAP Business Information Warehouse (SAP BW)
You can update statistics using one of the following methods:
DBA Planning Calendar in the Computing Center Management System (CCMS)
For more information, see Update Statistics for the Cost-Based Optimizer in CCMS (Oracle). The DBA Planning Calendar uses the BRCONNECT commands.
We recommend this approach because you can easily schedule update statistics to run automatically at specified intervals (for example, weekly).
To use the CBO, make sure that the parameter OPTIMIZER_MODE in the Oracle initialization profile init.ora is set to CHOOSE.
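Before scheduling statistics runs, it can be worth confirming the optimizer mode from SQL*Plus. A minimal check against the standard V$PARAMETER view (shown as a sketch; it assumes you have the privileges to query the dynamic performance views):

```sql
-- Verify that the cost-based optimizer is enabled
SELECT name, value
  FROM v$parameter
 WHERE name = 'optimizer_mode';
```

If the value is not CHOOSE, adjust init.ora as described above.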
BRCONNECT performs update statistics using a two-phase approach.
1. Checks each table to see if the statistics are out-of-date
2. If required, updates the statistics on the table immediately after the check
For more information about how update statistics works, see Internal Rules for Update Statistics.
You can influence how update statistics works by using the -force options. For more information, see -f stats.
Unless you have special requirements, we recommend performing the standard update statistics, using one of the following tools to schedule it on a regular basis (for example, daily or weekly):
DBA Planning Calendar, as described above in "Integration."
A tool such as cron (UNIX) or at (Windows NT) to execute the following standard call:
brconnect -u / -c -f stats -t all
This is also adequate after an upgrade of the database or SAP System. It runs using the OPS$ user without operator intervention.
Update statistics only for tables and indexes with missing statistics
brconnect -u / -c -f stats -t missing
Check and update statistics for all tables defined in the DBSTATC table
brconnect -u / -c -f stats -t dbstatc_tab
For examples of how you can override the internal rules for update statistics, see -force with Update Statistics.
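As an illustration of the cron-based scheduling mentioned above, a weekly crontab entry on UNIX might look like the following (the day and time are illustrative, and the entry assumes it runs under a user for whom the OPS$ connect shown in the standard call works):

```
# min hour day-of-month month day-of-week  command
0 3 * * 0  brconnect -u / -c -f stats -t all
```

This runs the standard update statistics every Sunday at 03:00; the DBA Planning Calendar remains the recommended way to schedule the same call.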
The InfoCube tables used in SAP Business Information Warehouse (SAP BW) and Advanced Planner and Optimizer (APO) need to be processed in a special way when the statistics are being updated. Usually, statistics for these tables should be created using histograms. Statistics for the InfoCube tables can be updated together with other tables in a run; in this case, the statistics for the InfoCube tables are always created with histograms. You can specify which tables are to be handled as InfoCube tables using the init.sap parameters:
stats_table
stats_exclude
stats_dbms_stats
The info_cubes keyword ensures that only InfoCube tables are processed in accordance with the selected parameter settings.
Statistics are checked only for InfoCube tables and updated, if required:
brconnect -u / -c -f stats -t info_cubes
Statistics are checked for all tables except InfoCube tables and updated, if required:
brconnect -u / -c -f stats -t all -e info_cubes
With the init.sap setting stats_dbms_stats = INFO_CUBES:R:4, the call
brconnect -u / -c -f stats -t all
checks statistics for all tables and updates them, if required. New statistics for InfoCube tables are created with the DBMS_STATS package, using row sampling and an internal parallel degree of 4.
If stats_dbms_stats is not set, which is the default, statistics are checked for all tables and updated, if required. If InfoCube tables are selected following the update check, statistics are generated for them using histograms.
You can update statistics on the Oracle database using the Computing Center Management System (CCMS).
By running update statistics regularly, you make sure that the database statistics are up-to-date, so improving database performance. The Oracle cost-based optimizer (CBO) uses the statistics to optimize access paths when retrieving data for queries. If the statistics are out-of-date, the CBO might generate inappropriate access paths (such as using the wrong index), resulting in poor performance.
The CBO is a standard part of the SAP system. If statistics are available for a table, the database system uses the cost-based optimizer. Otherwise, it uses the rule-based optimizer.
You can also run update statistics for your Oracle database using BRCONNECT; see Update Statistics with BRCONNECT. This is the recommended way to update statistics.
Update statistics after installations and upgrades
You need to update statistics for all tables in the SAP system after an installation or an upgrade. This is described in the relevant installation or upgrade documentation.
1. You use the DBA Planning Calendar in CCMS to schedule regular execution of check statistics and, if necessary, update statistics.
2. If required, you run one-off checks on tables to see if the table statistics are out-of-date, and then run an update statistics for the table if required. This is useful, for example, if the data in a table has changed significantly but the next scheduled run of update statistics is not due for a long time.
You can check, create, update, or delete statistics for:
- Single tables
- Groups of tables
3. If required, you configure update statistics by amending the parameters in the control table DBSTATC. This control table contains a list of the database tables for which the default values for update statistics are not suitable. If you change this table, all runs of update statistics in BRCONNECT, CCMS, or the DBA Planning Calendar are affected. Configuring update statistics makes sense for large tables, for which the default parameters might not be appropriate.
Do not add, delete, or change table entries unless you are aware of the consequences.
Tables from the DBSTATC table with either of the following values:
ACTIVE field U
ACTIVE field R or N and USE field A (relevant for the application monitor)
6. BRCONNECT writes the results of update statistics to the DBSTATTORA table and also, for tables with the DBSTATC history flag or usage type A, to the DBSTATHORA table.
7. For tables with update statistics using methods EI, EX, CI, or CX, BRCONNECT validates the structure of all associated indexes and writes the results to the DBSTATIORA table and also, for tables with the DBSTATC history flag or usage type A, to the DBSTAIHORA table.
8. BRCONNECT immediately deletes the statistics that it created in this procedure for tables with the ACTIVE flag set to N or R in the DBSTATC table.
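For the one-off checks described in step 2, a dictionary query such as the following can show which tables have missing or old statistics before you trigger an update (the schema owner and the 30-day threshold are illustrative values, not fixed recommendations):

```sql
-- Tables with no statistics, or statistics older than 30 days
SELECT table_name, num_rows, last_analyzed
  FROM dba_tables
 WHERE owner = 'SAPR3'
   AND (last_analyzed IS NULL OR last_analyzed < SYSDATE - 30)
 ORDER BY last_analyzed NULLS FIRST;
```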
Similar Messages
-
Max buffer setting in import function?
I was told I could shorten the time it takes to import a large table by increasing the buffer setting. I currently have it set at 5000000. Is there a maximum limit that I have to stay below?
I'm trying to import a 56-field table that contains about 4 million rows. It's taking hours to complete. Here's the import statement I'm using. Any help on how to speed this up would be great! Thanks in advance.
imp login_name tables=hourlies log=C:\hourlies.explog file=C:\hourlies.expdat FROMUSER=login_name ignore=Y commit=y buffer=5000000 ANALYZE=N INDEXES=N
Import cannot use direct path, only conventional (it builds the actual SQL commands and executes them one row at a time). Oracle has added direct path capabilities to Data Pump in 10g.
Since you always import data in conventional mode, the general rules for performance tuning apply. gbrabham already mentioned disabling triggers, indexes, and foreign key constraints. If you do not want to import statistics, set the statistics parameter to NONE.
Create the trace file, check wait events - basically tune it the way you tuned the regular inserts.
Mike -
Understanding/reading set option show_* on
We have a bad query plan on a query, and I can't see why it has been chosen.
It's doing table reformatting, which we can solve with a set store_index off.
The indexes have been reorg-ed and the stats are up-to-date.
I tried set option show_lio_costing on and looked at the output but it was rather large to understand.
( I also tried set option show_lio_costing on on a 2 table join and the output was over 400 lines long)
Is there a guide to understanding the output of these options ?
What's the most important lines I should be looking for ?
A very useful feature for the next version of ASE would be a mechanism to present a simple explanation of the query plan selection.
> 1. Do they have composite indexes?
> If so, how did you update their statistics?
> update index statistics or update statistics?
> For ASE 15.7 you have to use 'update index statistics' command.
I've tried update statistics and update index statistics, but as the search col doesn't have any index on it, I wouldn't expect this to help.
> 2.Below set options will help you to understand it.
> 2.1.check missing statistics
> set option show_missing_stats on
The code is
select * from A, B
where A.col1 > 1234 and A.col2 = "A"
and A.PK = B.PK
there are no indexes on A.col1 and A.col2, so I wouldn't expect stats on these columns, which is shown by
NO STATS on column DB..A.col1
NO STATS on column DB..A.col2
Since almost any column in a table could be involved in a query, would it make sense to create statistics on every column? I can't think this would be realistic to do with the data volumes - but maybe I'm wrong?
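For reference, ASE can create column-level statistics for a column that has no index, so histograms can be added selectively where a query needs them rather than for every column. A sketch using the hypothetical table and column names from the query above:

```sql
-- Create histograms for unindexed search columns (Sybase ASE)
update statistics A (col1)
update statistics A (col2)
```

delete statistics A removes them again if they turn out not to help the optimizer.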
> 2.2.enable below options too
> set statistics io, plancost on
What should I be looking at ?
I'm not expecting ASE to have stats and get this query right in all cases.
I'm trying to understand the output from the set option show* options so at least I can explain why ASE has chosen the plan it has. The output is huge and not easy to read. I'm looking for something that explains the output and which parts are most useful to look at. -
Crystal report/SQL SP/DTSRUN
Post Author: slk35psu
CA Forum: Other
Hi.
[Crystal Reports version 11.5.9.1076]
[SQL Server 2000]
I have a SQL Server stored procedure that runs perfectly when I execute it in the database, but it does not work in Crystal Reports. The stored procedure executes two SQL scripts: drop table indexes and truncate tables. After that, it calls a DTS package to transfer data from an Oracle database to the SQL database. After the DTS package call, it executes two more SQL scripts to put back the table indexes and update table statistics. I can run the stored procedure in the SQL database (on both my computer and on its server). The problem occurs when the stored procedure is called from the Crystal Report. In Crystal Reports, the stored procedure runs the drop-table-indexes and truncate-tables scripts and the DTS package. It seems to me that Crystal thinks the job is done at the completion of the DTS package and reports a success status. It fails to continue to run the table index creation script and the table statistics update script. What causes this?
To summarize, the steps in the SQL database stored procedure are:
1. Truncate tables
2. Drop indexes
3. Run DTS package to transfer data from Oracle DB to SQL DB
4. Create indexes
5. Update statistics
The stored procedure works fine when executed in the SQL database. When executed from the Crystal Report, it stops (and thinks it is done) at the end of the DTS package run (step 3).
I have pasted my stored procedure code below for your
review. Sincerely, Susan Kolonay.
create proc dbo.WH_DATA_LOAD
as
SET NOCOUNT ON
DECLARE @rc int
DECLARE @WH_UNAME varchar(30)
SET @WH_UNAME = 'whu1'
DECLARE @WH_PWD varchar(30)
SET @WH_PWD = 'K*e689f(k'
DECLARE @WH_DB_SERVER_NAME varchar(30)
SET @WH_DB_SERVER_NAME = 'ddasAA.psu.edu'
DECLARE @SCRIPTS_LOC varchar(50)
SET @SCRIPTS_LOC = 'E:\Scripts\cnv\TEST\'
DECLARE @DTS_PKG_NAME varchar(30)
SET @DTS_PKG_NAME = 'whse_build_routine'
DECLARE @TRUNCATE_TBL varchar(200)
DECLARE @DROP_INDEXES varchar(200)
DECLARE @DTS_PKG varchar(200)
DECLARE @CREATE_INDEXES varchar(200)
DECLARE @UPDATE_TBL_STATS varchar(200)
/* CHECKING TABLE */
if not exists (select 1 from sysobjects where type='U' and name='WH_DATA_LOAD_STATS' and uid=1)
BEGIN
    PRINT 'WH_DATA_LOAD_STATS does not exist.'
    RETURN
END
/* TRUNCATE TEMPORARY TABLE */
truncate table WH_DATA_LOAD_STATS
/* TRUNCATE TABLES */
insert into WH_DATA_LOAD_STATS (time_started, process) values (getdate(), 'truncate table')
SET @TRUNCATE_TBL = 'isql -U ' + @WH_UNAME + ' -P ' + @WH_PWD + ' -n -i ' + @SCRIPTS_LOC + 'trunc_whse_tables.sql 2>&1'
EXEC @rc = master.dbo.xp_cmdshell @TRUNCATE_TBL
IF @rc <> 0
BEGIN
    update WH_DATA_LOAD_STATS set time_ended=getdate(), status='failed' where process='truncate table'
END
ELSE
BEGIN
    update WH_DATA_LOAD_STATS set time_ended=getdate(), status='success' where process='truncate table'
END
/* DROP INDEXES */
insert into WH_DATA_LOAD_STATS (time_started, process) values (getdate(), 'drop indexes')
SET @DROP_INDEXES = 'isql -U ' + @WH_UNAME + ' -P ' + @WH_PWD + ' -n -i ' + @SCRIPTS_LOC + 'drop_whse_indexes.sql 2>&1'
EXEC @rc = master.dbo.xp_cmdshell @DROP_INDEXES
IF @rc <> 0
BEGIN
    update WH_DATA_LOAD_STATS set time_ended=getdate(), status='failed' where process='drop indexes'
END
ELSE
BEGIN
    update WH_DATA_LOAD_STATS set time_ended=getdate(), status='success' where process='drop indexes'
END
/* DTS PKG */
insert into WH_DATA_LOAD_STATS (time_started, process) values (getdate(), 'DTS pkg')
SET @DTS_PKG = 'dtsrun /S ' + @WH_DB_SERVER_NAME + ' /U sa /P abcdefg123 /N ' + @DTS_PKG_NAME + ' 2>&1'
EXEC @rc = master.dbo.xp_cmdshell @DTS_PKG
IF @rc <> 0
BEGIN
    update WH_DATA_LOAD_STATS set time_ended=getdate(), status='failed' where process='DTS pkg'
END
ELSE
BEGIN
    update WH_DATA_LOAD_STATS set time_ended=getdate(), status='success' where process='DTS pkg'
END
/* CREATE INDEXES */
insert into WH_DATA_LOAD_STATS (time_started, process) values (getdate(), 'create indexes')
SET @CREATE_INDEXES = 'isql -U ' + @WH_UNAME + ' -P ' + @WH_PWD + ' -n -i ' + @SCRIPTS_LOC + 'create_whse_indexes.sql 2>&1'
EXEC @rc = master.dbo.xp_cmdshell @CREATE_INDEXES
IF @rc <> 0
BEGIN
    update WH_DATA_LOAD_STATS set time_ended=getdate(), status='failed' where process='create indexes'
END
ELSE
BEGIN
    update WH_DATA_LOAD_STATS set time_ended=getdate(), status='success' where process='create indexes'
END
/* UPDATE STATISTICS */
insert into WH_DATA_LOAD_STATS (time_started, process) values (getdate(), 'update statistics')
SET @UPDATE_TBL_STATS = 'isql -U ' + @WH_UNAME + ' -P ' + @WH_PWD + ' -n -i ' + @SCRIPTS_LOC + 'update_whse_stats.sql 2>&1'
EXEC @rc = master.dbo.xp_cmdshell @UPDATE_TBL_STATS
IF @rc <> 0
BEGIN
    update WH_DATA_LOAD_STATS set time_ended=getdate(), status='failed' where process='update statistics'
END
ELSE
BEGIN
    update WH_DATA_LOAD_STATS set time_ended=getdate(), status='success' where process='update statistics'
END
/*select upper(process) + ' (' + status + ')' + char(10)
  + char(9) + 'Started on ' + convert(varchar(8), time_started, 114)
  + ' ended on ' + convert(varchar(8), time_ended, 114)
  + ' DURATION: ' + convert(varchar, datediff(mi, time_started, time_ended)) + ' minutes.'
  AS "REPORT"
from WH_DATA_LOAD_STATS*/
/*select process
  , status
  , convert(varchar(8), time_started, 114) as "START_TIME"
  , convert(varchar(8), time_ended, 114) as "END_TIME"
  , convert(varchar, datediff(mi, time_started, time_ended)) as "DURATION_in_MIN"
from WH_DATA_LOAD_STATS*/
/*select '(' + status + ') ' + upper(process) + ' Started on ' + convert(varchar(8), time_started, 114) + ' ended on ' + convert(varchar(8), time_ended, 114) + ' DURATION: ' + convert(varchar, datediff(mi, time_started, time_ended)) + ' minutes.' AS "REPORT"
from WH_DATA_LOAD_STATS*/
RETURN
go

Post Author: yangster
CA Forum: Crystal Reports
You are overthinking the problem. Simply create a parameter within Crystal for your state and change it to allow multiple values. Then, in the Select Expert, put in state = ?state. If you do Show SQL Query, you will see that the values are pushed down to the SQL level, so there is no performance hit versus putting the parameter directly in the command itself. -
[./solutions/atonx.sql]
REM
REM script ATONX.SQL
REM =====================================
SET AUTOTRACE ON EXPLAIN
[./solutions/saved_settings.sql]
set appinfo OFF
set appinfo "SQL*Plus"
set arraysize 15
set autocommit OFF
set autoprint OFF
set autorecovery OFF
set autotrace OFF
set blockterminator "."
set cmdsep OFF
set colsep " "
set compatibility NATIVE
set concat "."
set copycommit 0
set copytypecheck ON
set define "&"
set describe DEPTH 1 LINENUM OFF INDENT ON
set echo OFF
set editfile "afiedt.buf"
set embedded OFF
set escape OFF
set feedback ON
set flagger OFF
set flush ON
set heading ON
set headsep "|"
set linesize 80
set logsource ""
set long 80
set longchunksize 80
set markup HTML OFF HEAD "<style type='text/css'> body {font:10pt Arial,Helvetica,sans-serif; color:black; background:White;} p {font:10pt Arial,Helvetica,sans-serif; color:black; background:White;} table,tr,td {font:10pt Arial,Helvetica,sans-serif; color:Black; background:#f7f7e7; padding:0px 0px 0px 0px; margin:0px 0px 0px 0px;} th {font:bold 10pt Arial,Helvetica,sans-serif; color:#336699; background:#cccc99; padding:0px 0px 0px 0px;} h1 {font:16pt Arial,Helvetica,Geneva,sans-serif; color:#336699; background-color:White; border-bottom:1px solid #cccc99; margin-top:0pt; margin-bottom:0pt; padding:0px 0px 0px 0px;} h2 {font:bold 10pt Arial,Helvetica,Geneva,sans-serif; color:#336699; background-color:White; margin-top:4pt; margin-bottom:0pt;} a {font:9pt Arial,Helvetica,sans-serif; color:#663300; background:#ffffff; margin-top:0pt; margin-bottom:0pt; vertical-align:top;}</style><title>SQL*Plus Report</title>" BODY "" TABLE "border='1' width='90%' align='center' summary='Script output'" SPOOL OFF ENTMAP ON PRE ON
set newpage 1
set null ""
set numformat ""
set numwidth 10
set pagesize 14
set pause OFF
set recsep WRAP
set recsepchar " "
set serveroutput OFF
set shiftinout invisible
set showmode OFF
set sqlblanklines OFF
set sqlcase MIXED
set sqlcontinue "> "
set sqlnumber ON
set sqlpluscompatibility 8.1.7
set sqlprefix "#"
set sqlprompt "SQL> "
set sqlterminator ";"
set suffix "sql"
set tab ON
set termout OFF
set time OFF
set timing OFF
set trimout ON
set trimspool OFF
set underline "-"
set verify ON
set wrap ON
[./solutions/sol_06_04d.sql]
-- this script requires the sql id from the previous script to be substituted
SELECT PLAN_TABLE_OUTPUT
FROM TABLE (DBMS_XPLAN.DISPLAY_AWR(' your sql id here'));
[./solutions/rpsqlarea.sql]
set feedback off
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('your sql_id here'));
set feedback on
[./solutions/sqlid2.sql]
SELECT SQL_ID, SQL_TEXT FROM V$SQL
WHERE SQL_TEXT LIKE '%REPORT%' ;
[./solutions/schemastats.sql]
SELECT last_analyzed analyzed, sample_size, monitoring,
table_name
FROM user_tables;
[./solutions/allrows.sql]
REM
REM script ALLROWS.SQL
REM =====================================
alter session set optimizer_mode = all_rows;
[./solutions/aton.sql]
REM
REM script ATON.SQL
REM =====================================
SET AUTOTRACE ON
[./solutions/li.sql]
REM script LI.SQL (list indexes)
REM wildcards in table_name allowed,
REM and a '%' is appended by default
REM ======================================
set termout off
store set sqlplus_settings replace
save buffer.sql replace
set verify off autotrace off
set feedback off termout on
break on table_name skip 1 on index_type
col table_name format a25
col index_name format a30
col index_type format a20
accept table_name -
prompt 'List indexes on table : '
SELECT ui.table_name
, decode(ui.index_type
,'NORMAL', ui.uniqueness
,ui.index_type) AS index_type
, ui.index_name
FROM user_indexes ui
WHERE ui.table_name LIKE upper('&table_name.%')
ORDER BY ui.table_name
, ui.uniqueness desc;
get buffer.sql nolist
@sqlplus_settings
set termout on
[./solutions/utlxplp.sql]
Rem
Rem $Header: utlxplp.sql 23-jan-2002.08:55:23 bdagevil Exp $
Rem
Rem utlxplp.sql
Rem
Rem Copyright (c) 1998, 2002, Oracle Corporation. All rights reserved.
Rem
Rem NAME
Rem utlxplp.sql - UTiLity eXPLain Parallel plans
Rem
Rem DESCRIPTION
Rem script utility to display the explain plan of the last explain plan
Rem command. Display also Parallel Query information if the plan happens to
Rem run parallel
Rem
Rem NOTES
Rem Assume that the table PLAN_TABLE has been created. The script
Rem utlxplan.sql should be used to create that table
Rem
Rem With SQL*Plus, it is recommended to set linesize and pagesize before
Rem running this script. For example:
Rem set linesize 130
Rem set pagesize 0
Rem
Rem MODIFIED (MM/DD/YY)
Rem bdagevil 01/23/02 - rewrite with new dbms_xplan package
Rem bdagevil 04/05/01 - include CPU cost
Rem bdagevil 02/27/01 - increase Name column
Rem jihuang 06/14/00 - change order by to order siblings by.
Rem jihuang 05/10/00 - include plan info for recursive SQL in LE row source
Rem bdagevil 01/05/00 - make deterministic with order-by
Rem bdagevil 05/07/98 - Explain plan script for parallel plans
Rem bdagevil 05/07/98 - Created
Rem
set markup html preformat on
Rem
Rem Use the display table function from the dbms_xplan package to display the last
Rem explain plan. Use default mode which will display only relevant information
Rem
select * from table(dbms_xplan.display());
[./solutions/cbinp.sql]
REM Oracle10g SQL Tuning Workshop
REM script CBI.SQL (create bitmap index)
REM prompts for input; index name generated
REM =======================================
accept TABLE_NAME prompt " on which table : "
accept COLUMN_NAME prompt " on which column: "
set termout off
store set saved_settings replace
set heading off feedback off verify off
set autotrace off termout on
column dummy new_value index_name
SELECT 'creating index'
,      SUBSTR( SUBSTR('&table_name',1,4)||'_' ||
               TRANSLATE(REPLACE('&column_name', ' ', ''), ',', '_')
             , 1, 25
             )||'_idx' dummy
FROM dual;
CREATE BITMAP INDEX &index_name ON &TABLE_NAME(&COLUMN_NAME)
NOLOGGING COMPUTE STATISTICS
@saved_settings
set termout on
undef INDEX_NAME
undef TABLE_NAME
undef COLUMN_NAME
[./solutions/dump.sql]
SELECT *
FROM v$parameter
WHERE name LIKE '%dump%';
[./solutions/utlxplan.sql]
rem
rem $Header: utlxplan.sql 29-oct-2001.20:28:58 mzait Exp $ xplainpl.sql
rem
Rem Copyright (c) 1988, 2001, Oracle Corporation. All rights reserved.
Rem NAME
REM UTLXPLAN.SQL
Rem FUNCTION
Rem NOTES
Rem MODIFIED
Rem mzait 10/26/01 - add keys and filter predicates to the plan table
Rem ddas 05/05/00 - increase length of options column
Rem ddas 04/17/00 - add CPU, I/O cost, temp_space columns
Rem mzait 02/19/98 - add distribution method column
Rem ddas 05/17/96 - change search_columns to number
Rem achaudhr 07/23/95 - PTI: Add columns partition_{start, stop, id}
Rem glumpkin 08/25/94 - new optimizer fields
Rem jcohen 11/05/93 - merge changes from branch 1.1.710.1 - 9/24
Rem jcohen 09/24/93 - #163783 add optimizer column
Rem glumpkin 10/25/92 - Renamed from XPLAINPL.SQL
Rem jcohen 05/22/92 - #79645 - set node width to 128 (M_XDBI in gendef)
Rem rlim 04/29/91 - change char to varchar2
Rem Peeler 10/19/88 - Creation
Rem
Rem This is the format for the table that is used by the EXPLAIN PLAN
Rem statement. The explain statement requires the presence of this
Rem table in order to store the descriptions of the row sources.
create table PLAN_TABLE (
statement_id varchar2(30),
timestamp date,
remarks varchar2(80),
operation varchar2(30),
options varchar2(255),
object_node varchar2(128),
object_owner varchar2(30),
object_name varchar2(30),
object_instance numeric,
object_type varchar2(30),
optimizer varchar2(255),
search_columns number,
id numeric,
parent_id numeric,
position numeric,
cost numeric,
cardinality numeric,
bytes numeric,
other_tag varchar2(255),
partition_start varchar2(255),
partition_stop varchar2(255),
partition_id numeric,
other long,
distribution varchar2(30),
cpu_cost numeric,
io_cost numeric,
temp_space numeric,
access_predicates varchar2(4000),
filter_predicates varchar2(4000));
[./solutions/indstats.sql]
accept table_name -
prompt 'on which table : '
SELECT index_name name, num_rows n_r,
last_analyzed l_a, distinct_keys d_k,
leaf_blocks l_b, avg_leaf_blocks_per_key a_l, join_index j_i
FROM user_indexes
WHERE table_name = upper('&table_name');
undef table_name
[./solutions/test.sql]
declare
x number;
begin
for i in 1..10000 loop
select count(*) into x from customers;
end loop;
end;
/
[./solutions/rp.sql]
set feedback off
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
set feedback on
[./solutions/sol_08_02b.sql]
ALTER SESSION SET SQL_TRACE = TRUE;
[./solutions/trace.sql]
ALTER SESSION SET SQL_TRACE = TRUE;
[./solutions/doit.sql]
DROP INDEX SALES_CH_BIX;
DROP INDEX SALES_CUST_BIX;
DROP INDEX SALES_PROD_BIX;
[./solutions/ci.sql]
REM SQL Tuning Workshop
REM script CI.SQL (create index)
REM prompts for input; index name generated
REM =======================================
accept TABLE_NAME prompt " on which table : "
accept COLUMN_NAME prompt " on which column(s): "
set termout off
store set saved_settings replace
set heading off feedback off autotrace off
set verify off termout on
column dummy new_value index_name
SELECT 'creating index'
,      SUBSTR( SUBSTR('&table_name',1,4)||'_' ||
               TRANSLATE(REPLACE('&column_name', ' ', ''), ',', '_')
             , 1, 25
             )||'_idx' dummy
FROM dual;
CREATE INDEX &index_name
ON &table_name(&column_name)
NOLOGGING COMPUTE STATISTICS;
@saved_settings
set termout on
undef INDEX_NAME
undef TABLE_NAME
undef COLUMN_NAME
[./solutions/sol_06_04c.sql]
exec dbms_workload_repository.create_snapshot('ALL');
[./solutions/sol_06_04a.sql]
column sql_text format a25
SELECT SQL_ID, SQL_TEXT FROM V$SQL
WHERE SQL_TEXT LIKE '%REPORT%' ;
[./solutions/login.sql]
REM ======================================
REM COL[UMN] commands
REM ======================================
col dummy new_value index_name
col name format a32
col segment_name format a20
col table_name format a20
col column_name format a20
col index_name format a30
col index_type format a10
col constraint_name format a20
col num_distinct format 999999
col update_comment format a20 word
-- for the SHOW SGA/PARAMETER commands:
col name_col_plus_show_sga format a24
col name_col_plus_show_param format a40 -
heading name
col value_col_plus_show_param format a35 -
heading value
-- for the AUTOTRACE setting:
col id_plus_exp format 90 head i
col parent_id_plus_exp format 90 head p
col plan_plus_exp format a80
col other_plus_exp format a44
col other_tag_plus_exp format a29
col object_node_plus_exp format a8
REM ======================================
REM SET commands
REM ======================================
set describe depth 2
set echo off
set editfile D:\Tmp\buffer.sql
set feedback 40
set linesize 120
set long 999
set numwidth 8
set pagesize 36
set pause "[Enter]..." pause off
set tab off
set trimout on
set trimspool on
set verify off
set wrap on
REM ======================================
REM DEFINE commands
REM ======================================
def 1=employees
def table_name=employees
def column_name=first_name
def buckets=1
def sc=';'
REM ======================================
REM miscellaneous
REM ======================================
[./solutions/sqlid.sql]
SELECT SQL_ID, SQL_TEXT FROM V$SQL
WHERE SQL_TEXT LIKE '%/* my%' ;
[./solutions/hist1.sql]
SELECT * FROM products WHERE prod_status LIKE 'available, on stock';
[./solutions/sol_08_02.sql]
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'User12';
[./solutions/utlxrw.sql]
Rem
Rem $Header: utlxrw.sql 29-apr-2005.08:22:09 mthiyaga Exp $
Rem
Rem utlxrw.sql
Rem
Rem Copyright (c) 2000, 2005, Oracle. All rights reserved.
Rem
Rem NAME
Rem utlxrw.sql - Create the output table for EXPLAIN_REWRITE
Rem
Rem DESCRIPTION
Rem Outputs of the EXPLAIN_REWRITE goes into the table created
Rem by utlxrw.sql (called REWRITE_TABLE). So utlxrw must be
Rem invoked before any EXPLAIN_REWRITE tests.
Rem
Rem NOTES
Rem If user specifies a different name in EXPLAIN_REWRITE, then
Rem it should have been already created before calling EXPLAIN_REWRITE.
Rem
Rem MODIFIED (MM/DD/YY)
Rem mthiyaga 04/29/05 - Remove unncessary comment
Rem mthiyaga 06/08/04 - Add rewritten_txt field
Rem mthiyaga 10/10/02 - Add extra columns
Rem mthiyaga 09/27/00 - Create EXPLAIN_REWRITE output table
Rem mthiyaga 09/27/00 - Created
Rem
Rem
CREATE TABLE REWRITE_TABLE(
statement_id VARCHAR2(30), -- id for the query
mv_owner VARCHAR2(30), -- owner of the MV
mv_name VARCHAR2(30), -- name of the MV
sequence INTEGER, -- sequence no of the error msg
query VARCHAR2(2000),-- user query
query_block_no INTEGER, -- block no of the current subquery
rewritten_txt VARCHAR2(2000),-- rewritten query
message VARCHAR2(512), -- EXPLAIN_REWRITE error msg
pass VARCHAR2(3), -- rewrite pass no
mv_in_msg VARCHAR2(30), -- MV in current message
measure_in_msg VARCHAR2(30), -- Measure in current message
join_back_tbl VARCHAR2(30), -- Join back table in current msg
join_back_col VARCHAR2(30), -- Join back column in current msg
original_cost INTEGER, -- Cost of original query
rewritten_cost INTEGER, -- Cost of rewritten query
flags INTEGER, -- associated flags
reserved1 INTEGER, -- currently not used
reserved2 VARCHAR2(10)); -- currently not used
[./solutions/nm.sql]
ALTER INDEX &indexname NOMONITORING USAGE;
[./solutions/attox.sql]
REM
REM script ATTOX.SQL
REM =====================================
set autotrace traceonly explain
[./solutions/create_tab.sql]
DROP TABLE test_sales;
DROP TABLE test_promotions;
DROP TABLE test_customers;
DROP TABLE test_countries;
CREATE table test_sales as select * from sales;
CREATE TABLE test_promotions AS SELECT * FROM promotions;
CREATE INDEX t_promo_id_idx ON TEST_PROMOTIONS(promo_id);
ALTER TABLE test_promotions MODIFY promo_id PRIMARY KEY USING INDEX t_promo_id_idx;
CREATE TABLE test_customers AS SELECT * FROM customers;
CREATE INDEX t_cust_id_idx ON TEST_CUSTOMERS(cust_id);
ALTER TABLE test_customers ADD PRIMARY KEY (cust_id) USING INDEX t_cust_id_idx;
CREATE TABLE test_countries AS SELECT * FROM countries;
CREATE INDEX t_country_id_idx ON TEST_COUNTRIES(country_id);
ALTER TABLE test_countries ADD PRIMARY KEY (country_id) USING INDEX t_country_id_idx;
UPDATE test_customers SET cust_credit_limit = 1000 WHERE ROWNUM <= 15000;
[./solutions/cui.sql]
REM SQL Tuning Workshop
REM script CUI.SQL (create unique index)
REM prompts for input; index name generated
REM =======================================
accept TABLE_NAME prompt " on which table : "
accept COLUMN_NAME prompt " on which column(s): "
set termout off
store set saved_settings replace
set heading off feedback off verify off
set autotrace off termout on
column dummy new_value INDEX_NAME
SELECT 'creating unique index'
,      SUBSTR('ui_&TABLE_NAME._' ||
       TRANSLATE(REPLACE('&COLUMN_NAME', ' ', ''), ',', '_')
       , 1, 30) dummy
FROM dual;
CREATE UNIQUE INDEX &INDEX_NAME ON &TABLE_NAME(&COLUMN_NAME);
@saved_settings
set termout on
undef INDEX_NAME
undef TABLE_NAME
undef COLUMN_NAME
[./solutions/advisor_cache_setup.sql]
set echo on
alter system flush shared_pool;
grant advisor to sh;
connect sh/sh;
SELECT c.cust_last_name, sum(s.amount_sold) AS dollars,
sum(s.quantity_sold) as quantity
FROM sales s , customers c, products p
WHERE c.cust_id = s.cust_id
AND s.prod_id = p.prod_id
AND c.cust_state_province IN ('Dublin','Galway')
GROUP BY c.cust_last_name;
SELECT c.cust_id, SUM(amount_sold) AS dollar_sales
FROM sales s, customers c WHERE s.cust_id= c.cust_id GROUP BY c.cust_id;
select sum(unit_cost) from costs group by prod_id;
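The flushed pool and the sample queries above set up a workload for the advisor. One way to feed a single statement to the SQL Access Advisor is DBMS_ADVISOR.QUICK_TUNE; this is a sketch, and the task name demo_task is made up:

```sql
-- Sketch: quick-tune one of the workload statements (task name is arbitrary)
BEGIN
  DBMS_ADVISOR.QUICK_TUNE(
    DBMS_ADVISOR.SQLACCESS_ADVISOR,
    'demo_task',
    'SELECT c.cust_id, SUM(amount_sold) AS dollar_sales
     FROM sales s, customers c
     WHERE s.cust_id = c.cust_id
     GROUP BY c.cust_id');
END;
/
```

The recommendations can then be reviewed in the advisor views (for example USER_ADVISOR_ACTIONS) or in Enterprise Manager.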
[./solutions/utlxmv.sql]
Rem
Rem $Header: utlxmv.sql 16-feb-2001.13:03:32 nshodhan Exp $
Rem
Rem utlxmv.sql
Rem
Rem Copyright (c) Oracle Corporation 2000. All Rights Reserved.
Rem
Rem NAME
Rem utlxmv.sql - UTiLity for eXplain MV
Rem
Rem DESCRIPTION
Rem The utility script creates the MV_CAPABILITIES_TABLE that is
Rem used by the DBMS_MVIEW.EXPLAIN_MVIEW() API.
Rem
Rem NOTES
Rem
Rem MODIFIED (MM/DD/YY)
Rem nshodhan 02/16/01 - Bug#1647071: replace mv with mview
Rem raavudai 11/28/00 - Fix comment.
Rem twtong 12/01/00 - fix for sql*plus
Rem twtong 09/13/00 - modify mv_capabilities_table
Rem twtong 08/18/00 - change create table to upper case
Rem jraitto 06/12/00 - add RELATED_NUM and MSGNO columns
Rem jraitto 05/09/00 - Explain_MV table
Rem jraitto 05/09/00 - Created
Rem
CREATE TABLE MV_CAPABILITIES_TABLE
(STATEMENT_ID VARCHAR(30), -- Client-supplied unique statement identifier
MVOWNER VARCHAR(30), -- NULL for SELECT based EXPLAIN_MVIEW
MVNAME VARCHAR(30), -- NULL for SELECT based EXPLAIN_MVIEW
CAPABILITY_NAME VARCHAR(30), -- A descriptive name of the particular
-- capability:
-- REWRITE
-- Can do at least full text match
-- rewrite
-- REWRITE_PARTIAL_TEXT_MATCH
-- Can do at least full and partial
-- text match rewrite
-- REWRITE_GENERAL
-- Can do all forms of rewrite
-- REFRESH
-- Can do at least complete refresh
-- REFRESH_FROM_LOG_AFTER_INSERT
-- Can do fast refresh from an mv log
-- or change capture table at least
-- when update operations are
-- restricted to INSERT
-- REFRESH_FROM_LOG_AFTER_ANY
-- can do fast refresh from an mv log
-- or change capture table after any
-- combination of updates
-- PCT
-- Can do Enhanced Update Tracking on
-- the table named in the RELATED_NAME
-- column. EUT is needed for fast
-- refresh after partitioned
-- maintenance operations on the table
-- named in the RELATED_NAME column
-- and to do non-stale tolerated
-- rewrite when the mv is partially
-- stale with respect to the table
-- named in the RELATED_NAME column.
-- EUT can also sometimes enable fast
-- refresh of updates to the table
-- named in the RELATED_NAME column
-- when fast refresh from an mv log
-- or change capture table is not
-- possible.
POSSIBLE CHARACTER(1), -- T = capability is possible
-- F = capability is not possible
RELATED_TEXT VARCHAR(2000),-- Owner.table.column, alias name, etc.
-- related to this message. The
-- specific meaning of this column
-- depends on the MSGNO column. See
-- the documentation for
-- DBMS_MVIEW.EXPLAIN_MVIEW() for details
RELATED_NUM NUMBER, -- When there is a numeric value
-- associated with a row, it goes here.
-- The specific meaning of this column
-- depends on the MSGNO column. See
-- the documentation for
-- DBMS_MVIEW.EXPLAIN_MVIEW() for details
MSGNO INTEGER, -- When available, QSM message #
-- explaining why not possible or more
-- details when enabled.
MSGTXT VARCHAR(2000),-- Text associated with MSGNO.
SEQ NUMBER); -- Useful in ORDER BY clause when
-- selecting from this table.
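Once MV_CAPABILITIES_TABLE exists, it is populated through DBMS_MVIEW.EXPLAIN_MVIEW. A minimal sketch follows; the materialized view name SALES_MV and the statement id are hypothetical:

```sql
-- Sketch: record the capabilities of an existing MV (name is a placeholder)
BEGIN
  DBMS_MVIEW.EXPLAIN_MVIEW(mv => 'SALES_MV', stmt_id => 'mv1');
END;
/
-- POSSIBLE = 'T'/'F'; MSGTXT explains why a capability is unavailable
SELECT capability_name, possible, msgtxt
FROM   mv_capabilities_table
WHERE  statement_id = 'mv1'
ORDER  BY seq;
```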
[./solutions/di.sql]
DROP INDEX &index_name;
[./solutions/hist2.sql]
SELECT * FROM products WHERE prod_status = 'obsolete';
[./solutions/sol_06_04b.sql]
-- this script requires the sql_id that you got from the previous step
SELECT sql_id, sql_text FROM dba_hist_sqltext WHERE sql_id = 'your sql_id here';
[./solutions/tabstats.sql]
accept table_name -
prompt 'on which table : '
SELECT last_analyzed analyzed, sample_size, monitoring,
table_name
FROM user_tables
WHERE table_name = upper('&table_name');
undef TABLE_NAME
[./solutions/rewrite.sql]
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
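To verify that rewrite actually takes place after enabling it, one option is to explain a candidate query and look for a MAT_VIEW REWRITE ACCESS step in the plan. This is a sketch that assumes some materialized view covers the query:

```sql
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
EXPLAIN PLAN FOR
  SELECT prod_id, SUM(amount_sold) FROM sales GROUP BY prod_id;
-- A rewritten plan contains a MAT_VIEW REWRITE ACCESS operation
SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY);
```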
[./solutions/atto.sql]
REM
REM script ATTO.SQL
REM =====================================
set autotrace traceonly
[./solutions/flush.sql]
--this script flushes the shared pool
alter system flush shared_pool;
[./solutions/atoff.sql]
REM
REM script ATOFF.SQL
REM =====================================
SET AUTOTRACE OFF
[./solutions/cbi.sql]
REM Oracle10g SQL Tuning Workshop
REM script CBI.SQL (create bitmap index)
REM prompts for input; index name generated
REM =======================================
accept TABLE_NAME prompt " on which table : "
accept COLUMN_NAME prompt " on which column: "
set termout off
store set saved_settings replace
set heading off feedback off verify off
set autotrace off termout on
column dummy new_value index_name
SELECT 'creating index'
,      SUBSTR(SUBSTR('&table_name',1,4)||'_' ||
       TRANSLATE(REPLACE('&column_name', ' ', ''), ',', '_')
       , 1, 25)||'_idx' dummy
FROM dual;
CREATE BITMAP INDEX &INDEX_NAME ON &TABLE_NAME(&COLUMN_NAME)
LOCAL NOLOGGING COMPUTE STATISTICS;
@saved_settings
set termout on
undef INDEX_NAME
undef TABLE_NAME
undef COLUMN_NAME
[./solutions/buffer.sql]
SELECT c.cust_last_name, c.cust_year_of_birth
, co.country_name
FROM customers c
JOIN countries co
USING (country_id)
[./solutions/sol_08_04.sql]
ALTER SESSION SET SQL_TRACE = false;
[./solutions/sqlplus_settings.sql]
set appinfo OFF
set appinfo "SQL*Plus"
set arraysize 15
set autocommit OFF
set autoprint OFF
set autorecovery OFF
set autotrace TRACEONLY EXPLAIN STATISTICS
set blockterminator "."
set cmdsep OFF
set colsep " "
set compatibility NATIVE
set concat "."
set copycommit 0
set copytypecheck ON
set define "&"
set describe DEPTH 1 LINENUM OFF INDENT ON
set echo OFF
set editfile "afiedt.buf"
set embedded OFF
set escape OFF
set feedback 6
set flagger OFF
set flush ON
set heading ON
set headsep "|"
set linesize 80
set logsource ""
set long 80
set longchunksize 80
set markup HTML OFF HEAD "<style type='text/css'> body {font:10pt Arial,Helvetica,sans-serif; color:black; background:White;} p {font:10pt Arial,Helvetica,sans-serif; color:black; background:White;} table,tr,td {font:10pt Arial,Helvetica,sans-serif; color:Black; background:#f7f7e7; padding:0px 0px 0px 0px; margin:0px 0px 0px 0px;} th {font:bold 10pt Arial,Helvetica,sans-serif; color:#336699; background:#cccc99; padding:0px 0px 0px 0px;} h1 {font:16pt Arial,Helvetica,Geneva,sans-serif; color:#336699; background-color:White; border-bottom:1px solid #cccc99; margin-top:0pt; margin-bottom:0pt; padding:0px 0px 0px 0px;} h2 {font:bold 10pt Arial,Helvetica,Geneva,sans-serif; color:#336699; background-color:White; margin-top:4pt; margin-bottom:0pt;} a {font:9pt Arial,Helvetica,sans-serif; color:#663300; background:#ffffff; margin-top:0pt; margin-bottom:0pt; vertical-align:top;}</style><title>SQL*Plus Report</title>" BODY "" TABLE "border='1' width='90%' align='center' summary='Script output'" SPOOL OFF ENTMAP ON PRE OFF
set newpage 1
set null ""
set numformat ""
set numwidth 10
set pagesize 14
set pause OFF
set recsep WRAP
set recsepchar " "
set serveroutput OFF
set shiftinout invisible
set showmode OFF
set sqlblanklines OFF
set sqlcase MIXED
set sqlcontinue "> "
set sqlnumber ON
set sqlpluscompatibility 8.1.7
set sqlprefix "#"
set sqlprompt "SQL> "
set sqlterminator ";"
set suffix "sql"
set tab ON
set termout OFF
set time OFF
set timing OFF
set trimout ON
set trimspool OFF
set underline "-"
set verify ON
set wrap ON
[./solutions/sol_07_01.sql]
SELECT owner, job_name,enabled
FROM DBA_SCHEDULER_JOBS
WHERE JOB_NAME = 'GATHER_STATS_JOB';
[./solutions/colhist.sql]
SELECT column_name, num_distinct, num_buckets, histogram
FROM USER_TAB_COL_STATISTICS
WHERE histogram <> 'NONE';
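USER_TAB_COL_STATISTICS only shows histograms that have already been gathered. A hedged example of creating one so the query above returns a row (the table and column names are assumed from the course schema):

```sql
-- Gather a 254-bucket histogram on one column, then re-run colhist.sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'CUSTOMERS',
    method_opt => 'FOR COLUMNS cust_credit_limit SIZE 254');
END;
/
```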
[./solutions/rpawr.sql]
set feedback off
SELECT PLAN_TABLE_OUTPUT
FROM TABLE (DBMS_XPLAN.DISPLAY_AWR('&sqlid'));
set feedback on
[./solutions/im.sql]
ALTER INDEX &indexname MONITORING USAGE;
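After monitoring has been switched on and a representative workload has run, usage can be checked. In this release the information is exposed through V$OBJECT_USAGE for the current schema's monitored indexes:

```sql
-- USED = 'YES' once any execution plan has touched the index
SELECT index_name, table_name, monitoring, used
FROM   v$object_usage
WHERE  index_name = UPPER('&indexname');
```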
[./solutions/utlxpls.sql]
Rem
Rem $Header: utlxpls.sql 26-feb-2002.19:49:37 bdagevil Exp $
Rem
Rem utlxpls.sql
Rem
Rem Copyright (c) 1998, 2002, Oracle Corporation. All rights reserved.
Rem
Rem NAME
Rem utlxpls.sql - UTiLity eXPLain Serial plans
Rem
Rem DESCRIPTION
Rem script utility to display the explain plan of the last explain plan
Rem command. Do not display information related to Parallel Query
Rem
Rem NOTES
Rem Assume that the PLAN_TABLE table has been created. The script
Rem utlxplan.sql should be used to create that table
Rem
Rem With SQL*Plus, it is recommended to set linesize and pagesize before
Rem running this script. For example:
Rem set linesize 100
Rem set pagesize 0
Rem
Rem MODIFIED (MM/DD/YY)
Rem bdagevil 02/26/02 - cast arguments
Rem bdagevil 01/23/02 - rewrite with new dbms_xplan package
Rem bdagevil 04/05/01 - include CPU cost
Rem bdagevil 02/27/01 - increase Name column
Rem jihuang 06/14/00 - change order by to order siblings by.
Rem jihuang 05/10/00 - include plan info for recursive SQL in LE row source
Rem bdagevil 01/05/00 - add order-by to make it deterministic
Rem kquinn 06/28/99 - 901272: Add missing semicolon
Rem bdagevil 05/07/98 - Explain plan script for serial plans
Rem bdagevil 05/07/98 - Created
Rem
set markup html preformat on
Rem
Rem Use the display table function from the dbms_xplan package to display the last
Rem explain plan. Force serial option for backward compatibility
Rem
select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));
[./solutions/dai.sql]
REM script DAI.SQL (drop all indexes)
REM prompts for a table name; % is appended
REM does not touch indexes associated with constraints
REM ==================================================
accept table_name -
prompt 'on which table : '
set termout off
store set sqlplus_settings replace
save buffer.sql replace
set heading off verify off autotrace off feedback off
spool doit.sql
SELECT 'drop index '||i.index_name||';'
FROM user_indexes i
WHERE i.table_name LIKE UPPER('&table_name.%')
AND NOT EXISTS
(SELECT 'x'
FROM user_constraints c
WHERE c.index_name = i.index_name
AND c.table_name = i.table_name
AND c.status = 'ENABLED');
spool off
@doit
get buffer.sql nolist
@sqlplus_settings
set termout on
[./solutions/setupenv.sql]
connect system/oracle
GRANT DBA TO sh;
GRANT CREATE ANY OUTLINE TO sh;
GRANT ADVISOR TO sh;
GRANT CREATE ANY VIEW TO sh;
EXECUTE DBMS_
What an insane topic. Where's your question?
I recommend you start over with a smart question and only the relevant code lines.
Check this link: [How To Ask Questions The Smart Way|http://www.catb.org/~esr/faqs/smart-questions.html]. -
BI Workload in BI 7.0(ST03)
Hi,
Can somebody tell me whether BI Workload (ST03 - Expert Mode) is different in BI 7.0 compared to BW 3.5? I have already enabled the BI statistics setting in RSA1 for the cube and the query; however, I fail to see any statistical data in ST03. When I click on BI Workload it says "The InfoProvider 0TCT_MC01 doesn't exist; reporting analysis not possible". In BW 3.5, with these same steps, I can easily see the runtime statistics in ST03.
Do I have to install the technical content in order to use ST03? Another difference I noticed: the RSDDSTAT table is empty, whereas I can see statistical data in the views RSDDSTAT_OLAP and RSDDSTAT_DM. So clearly there are some marked changes in BI 7.0. If somebody can throw some light on this, it will be immensely appreciated, especially how to use ST03 in BI 7.0, which was pretty convenient in my BW 3.5 experience.
Thanks,
Anurag.
P.S. - I know there are other ways to get runtime statistics, like RSRT and such; however, my front end is not BEx. It is an MDX/ODBO setup, so I can't really use RSRT or the BI admin cockpit.
I think you will have to use ST03N in BI 7, and for this you need to have stats enabled in the BI system...
Do I have to install technical content in order to use ST03? - Yes
Edited by: Arun Varadarajan on Jun 5, 2009 11:57 PM -
Hello,
Some days back, I came across a blog entry in which the author concluded that when a = b and b = c, Oracle does not conclude that a = c. He also provided a test case to prove his point. The URL is [http://sandeepredkar.blogspot.com/2009/09/query-performance-join-conditions.html]
Now, I thought that could not be true, so I executed his test case (on 10.2.0.4) and the outcome indeed proved his point. Initially, I thought it might be due to the absence of a PK-FK relationship, but even after adding the PK-FK relationship there was no change in the outcome. However, when I modified the subquery to use a list of values, both queries performed equally. I tried asking the author on his blog, but it seems he has not yet seen my comment.
I am pasting his test case below. Can somebody please help me understand why the CBO does not/cannot use the optimal plan here?
SQL> create table cu_all (custid number, addr varchar2(200), ph number, cano number, acctype varchar2(10));
Table created.
SQL> create table ca_receipt (custid number, caamt number, cadt date, totbal number);
Table created.
SQL>
SQL> insert into cu_all
2 select lvl,
3 dbms_random.string('A',30),
4 round(dbms_random.value(1,100000)),
5 round(dbms_random.value(1,10000)),
6 dbms_random.string('A',10)
7 from (select level "LVL" from dual connect by level <=200000);
200000 rows created.
SQL> insert into ca_receipt
2 select round(dbms_random.value(1,10000)),
3 round(dbms_random.value(1,100000)),
4 sysdate - round(dbms_random.value(1,100000)),
5 round(dbms_random.value(1,100000))
6 from (select level "LVL" from dual connect by level <=500000);
500000 rows created.
SQL> create unique index pk_cu_all_ind on cu_all(custid);
Index created.
SQL> create index ind2_cu_all on cu_all(CANO);
Index created.
SQL> create index ind_ca_receipt_custid on ca_receipt(custid);
Index created.
SQL> exec dbms_stats.gather_table_stats(user,'CU_ALL', cascade=>true);
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.gather_table_stats(user,'CA_RECEIPT', cascade=>true);
PL/SQL procedure successfully completed.
Now let us execute the query with trace on. This is similar to the query that was provided to me.
SQL> set autot trace
SQL> SELECT ca.*, cu.*
2 FROM ca_receipt CA,
3 cu_all CU
4 WHERE CA.CUSTID = CU.CUSTID
5 AND CA.CUSTID IN (SELECT CUSTID FROM cu_all START WITH custid = 2353
6 CONNECT BY PRIOR CUSTID = CANO)
7 ORDER BY ACCTYPE DESC;
289 rows selected.
Execution Plan
Plan hash value: 3186098611
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1000 | 81000 | 504 (2)| 00:00:07 |
| 1 | SORT ORDER BY | | 1000 | 81000 | 504 (2)| 00:00:07 |
|* 2 | HASH JOIN | | 1000 | 81000 | 503 (2)| 00:00:07 |
| 3 | NESTED LOOPS | | | | | |
| 4 | NESTED LOOPS | | 1000 | 26000 | 112 (1)| 00:00:02 |
| 5 | VIEW | VW_NSO_1 | 20 | 100 | 21 (0)| 00:00:01 |
| 6 | HASH UNIQUE | | 20 | 180 | | |
|* 7 | CONNECT BY WITH FILTERING | | | | | |
| 8 | TABLE ACCESS BY INDEX ROWID | CU_ALL | 1 | 9 | 2 (0)| 00:00:01 |
|* 9 | INDEX UNIQUE SCAN | PK_CU_ALL_IND | 1 | | 1 (0)| 00:00:01 |
| 10 | NESTED LOOPS | | | | | |
| 11 | CONNECT BY PUMP | | | | | |
| 12 | TABLE ACCESS BY INDEX ROWID| CU_ALL | 20 | 180 | 21 (0)| 00:00:01 |
|* 13 | INDEX RANGE SCAN | IND2_CU_ALL | 20 | | 1 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | IND_CA_RECEIPT_CUSTID | 50 | | 2 (0)| 00:00:01 |
| 15 | TABLE ACCESS BY INDEX ROWID | CA_RECEIPT | 50 | 1050 | 52 (0)| 00:00:01 |
| 16 | TABLE ACCESS FULL | CU_ALL | 200K| 10M| 389 (1)| 00:00:05 |
Predicate Information (identified by operation id):
2 - access("CA"."CUSTID"="CU"."CUSTID")
7 - access("CANO"=PRIOR "CUSTID")
9 - access("CUSTID"=2353)
13 - access("CANO"=PRIOR "CUSTID")
14 - access("CA"."CUSTID"="CUSTID")
Statistics
1 recursive calls
0 db block gets
2249 consistent gets
25 physical reads
0 redo size
11748 bytes sent via SQL*Net to client
729 bytes received via SQL*Net from client
21 SQL*Net roundtrips to/from client
7 sorts (memory)
0 sorts (disk)
289 rows processed
If you look at the query, it seems to be a normal one.
But the problem is here:
The query has two tables, CA and CU. The inner subquery on CU fetches records and joins to the CA table, and the CA table then joins to the CU table on the same column.
Because the inner query joins to the CA table, the cardinality estimate of the query changes, so the optimizer opts for a full table scan when joining to the CU table again.
This causes the performance bottleneck. To resolve the issue, I changed the join condition.
If we check now, the following is the proper execution plan. Also, the consistent gets have been reduced to 797, against 2249 in the original query.
SQL> SELECT ca.*, cu.*
2 FROM ca_receipt CA,
3 cu_all CU
4 WHERE CA.CUSTID = CU.CUSTID
5 AND CU.CUSTID IN (SELECT CUSTID FROM cu_all START WITH custid = 2353
6 CONNECT BY PRIOR CUSTID = CANO)
7 ORDER BY ACCTYPE DESC;
289 rows selected.
Execution Plan
Plan hash value: 3713271440
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1000 | 81000 | 133 (2)| 00:00:02 |
| 1 | SORT ORDER BY | | 1000 | 81000 | 133 (2)| 00:00:02 |
| 2 | NESTED LOOPS | | | | | |
| 3 | NESTED LOOPS | | 1000 | 81000 | 132 (1)| 00:00:02 |
| 4 | NESTED LOOPS | | 20 | 1200 | 42 (3)| 00:00:01 |
| 5 | VIEW | VW_NSO_1 | 20 | 100 | 21 (0)| 00:00:01 |
| 6 | HASH UNIQUE | | 20 | 180 | | |
|* 7 | CONNECT BY WITH FILTERING | | | | | |
| 8 | TABLE ACCESS BY INDEX ROWID | CU_ALL | 1 | 9 | 2 (0)| 00:00:01 |
|* 9 | INDEX UNIQUE SCAN | PK_CU_ALL_IND | 1 | | 1 (0)| 00:00:01 |
| 10 | NESTED LOOPS | | | | | |
| 11 | CONNECT BY PUMP | | | | | |
| 12 | TABLE ACCESS BY INDEX ROWID| CU_ALL | 20 | 180 | 21 (0)| 00:00:01 |
|* 13 | INDEX RANGE SCAN | IND2_CU_ALL | 20 | | 1 (0)| 00:00:01 |
| 14 | TABLE ACCESS BY INDEX ROWID | CU_ALL | 1 | 55 | 1 (0)| 00:00:01 |
|* 15 | INDEX UNIQUE SCAN | PK_CU_ALL_IND | 1 | | 0 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | IND_CA_RECEIPT_CUSTID | 50 | | 2 (0)| 00:00:01 |
| 17 | TABLE ACCESS BY INDEX ROWID | CA_RECEIPT | 50 | 1050 | 52 (0)| 00:00:01 |
Predicate Information (identified by operation id):
7 - access("CANO"=PRIOR "CUSTID")
9 - access("CUSTID"=2353)
13 - access("CANO"=PRIOR "CUSTID")
15 - access("CU"."CUSTID"="CUSTID")
16 - access("CA"."CUSTID"="CU"."CUSTID")
Statistics
1 recursive calls
0 db block gets
797 consistent gets
1 physical reads
0 redo size
11748 bytes sent via SQL*Net to client
729 bytes received via SQL*Net from client
21 SQL*Net roundtrips to/from client
7 sorts (memory)
0 sorts (disk)
289 rows processed
user503699 wrote:
Hello,
Some days back, I came across a blog entry in which author concluded that when a = b and b = c, oracle does not conclude as a = c. He also provided a test case to prove his point. The URL is [http://sandeepredkar.blogspot.com/2009/09/query-performance-join-conditions.html]
Now, I thought that that can not be true. So I executed his test case (on 10.2.04) and the outcome indeed proved his point. Initially, I thought it might be due to absense of PK-FK relationship. But even after adding the PK-FK relationship, there was no change in the outcome. Although, when I modified the subquery with list of values, both the queries performed equally. I tried asking the author on his blog but it seems he has not yet seen my comment.
I see that Jonathan provided a helpful reply to you while I was in the process of setting up a test case.
Is it possible that the optimizer is correct? What if... the optimizer transformed the SQL statement? What if... the original SQL statement actually executes faster than the modified SQL statement? What if... the autotrace plans do not match the plans shown on that web page?
The first execution with the original SQL statement:
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_case';
SET AUTOTRACE TRACE
SELECT ca.*, cu.*
FROM ca_receipt CA,
cu_all CU
WHERE CA.CUSTID = CU.CUSTID
AND CA.CUSTID IN (SELECT CUSTID FROM cu_all START WITH custid = 2353
CONNECT BY PRIOR CUSTID = CANO)
ORDER BY ACCTYPE DESC;
Execution Plan
Plan hash value: 2794552689
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1001 | 81081 | 1125 (2)| 00:00:01 |
| 1 | SORT ORDER BY | | 1001 | 81081 | 1125 (2)| 00:00:01 |
| 2 | NESTED LOOPS | | 1001 | 81081 | 1123 (1)| 00:00:01 |
| 3 | NESTED LOOPS | | 1001 | 26026 | 114 (2)| 00:00:01 |
| 4 | VIEW | VW_NSO_1 | 20 | 100 | 22 (0)| 00:00:01 |
| 5 | HASH UNIQUE | | 20 | 180 | | |
|* 6 | CONNECT BY WITH FILTERING | | | | | |
| 7 | TABLE ACCESS BY INDEX ROWID | CU_ALL | 1 | 19 | 2 (0)| 00:00:01 |
|* 8 | INDEX UNIQUE SCAN | PK_CU_ALL_IND | 1 | | 1 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | | | | |
| 10 | CONNECT BY PUMP | | | | | |
| 11 | TABLE ACCESS BY INDEX ROWID| CU_ALL | 20 | 180 | 22 (0)| 00:00:01 |
|* 12 | INDEX RANGE SCAN | IND2_CU_ALL | 20 | | 1 (0)| 00:00:01 |
| 13 | TABLE ACCESS BY INDEX ROWID | CA_RECEIPT | 50 | 1050 | 52 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | IND_CA_RECEIPT_CUSTID | 50 | | 2 (0)| 00:00:01 |
| 15 | TABLE ACCESS BY INDEX ROWID | CU_ALL | 1 | 55 | 1 (0)| 00:00:01 |
|* 16 | INDEX UNIQUE SCAN | PK_CU_ALL_IND | 1 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
6 - access("CANO"=PRIOR "CUSTID")
8 - access("CUSTID"=2353)
12 - access("CANO"=PRIOR "CUSTID")
14 - access("CA"."CUSTID"="$nso_col_1")
16 - access("CA"."CUSTID"="CU"."CUSTID")
Statistics
1 recursive calls
0 db block gets
232 consistent gets
7 physical reads
0 redo size
2302 bytes sent via SQL*Net to client
379 bytes received via SQL*Net from client
5 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
52 rows processed
The second SQL statement, which was modified:
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_case2';
SELECT ca.*, cu.*
FROM ca_receipt CA,
cu_all CU
WHERE CA.CUSTID = CU.CUSTID
AND CU.CUSTID IN (SELECT CUSTID FROM cu_all START WITH custid = 2353
CONNECT BY PRIOR CUSTID = CANO)
ORDER BY ACCTYPE DESC;
Execution Plan
Plan hash value: 497148844
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1001 | 81081 | 136 (3)| 00:00:01 |
| 1 | SORT ORDER BY | | 1001 | 81081 | 136 (3)| 00:00:01 |
| 2 | TABLE ACCESS BY INDEX ROWID | CA_RECEIPT | 50 | 1050 | 52 (0)| 00:00:01 |
| 3 | NESTED LOOPS | | 1001 | 81081 | 134 (2)| 00:00:01 |
| 4 | NESTED LOOPS | | 20 | 1200 | 43 (3)| 00:00:01 |
| 5 | VIEW | VW_NSO_1 | 20 | 100 | 22 (0)| 00:00:01 |
| 6 | HASH UNIQUE | | 20 | 180 | | |
|* 7 | CONNECT BY WITH FILTERING | | | | | |
| 8 | TABLE ACCESS BY INDEX ROWID | CU_ALL | 1 | 19 | 2 (0)| 00:00:01 |
|* 9 | INDEX UNIQUE SCAN | PK_CU_ALL_IND | 1 | | 1 (0)| 00:00:01 |
| 10 | NESTED LOOPS | | | | | |
| 11 | CONNECT BY PUMP | | | | | |
| 12 | TABLE ACCESS BY INDEX ROWID| CU_ALL | 20 | 180 | 22 (0)| 00:00:01 |
|* 13 | INDEX RANGE SCAN | IND2_CU_ALL | 20 | | 1 (0)| 00:00:01 |
| 14 | TABLE ACCESS BY INDEX ROWID | CU_ALL | 1 | 55 | 1 (0)| 00:00:01 |
|* 15 | INDEX UNIQUE SCAN | PK_CU_ALL_IND | 1 | | 0 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | IND_CA_RECEIPT_CUSTID | 50 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
7 - access("CANO"=PRIOR "CUSTID")
9 - access("CUSTID"=2353)
13 - access("CANO"=PRIOR "CUSTID")
15 - access("CU"."CUSTID"="$nso_col_1")
16 - access("CA"."CUSTID"="CU"."CUSTID")
Statistics
1 recursive calls
0 db block gets
162 consistent gets
0 physical reads
0 redo size
2302 bytes sent via SQL*Net to client
379 bytes received via SQL*Net from client
5 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
52 rows processed
ALTER SESSION SET EVENTS '10053 trace name context off';
The question might be asked: does the final SQL statement actually executed look the same as the original? Slightly reformatted:
The first SQL statement:
SELECT
CA.*,
CU.*
FROM
CA_RECEIPT CA,
CU_ALL CU
WHERE
CA.CUSTID = CU.CUSTID
AND CA.CUSTID IN (
SELECT
CUSTID
FROM
CU_ALL
START WITH
CUSTID = 2353
CONNECT BY PRIOR
CUSTID = CANO)
ORDER BY
ACCTYPE DESC;
Final Transformation:
SELECT
"CA"."CUSTID" "CUSTID",
"CA"."CAAMT" "CAAMT",
"CA"."CADT" "CADT",
"CA"."TOTBAL" "TOTBAL",
"CU"."CUSTID" "CUSTID",
"CU"."ADDR" "ADDR",
"CU"."PH" "PH",
"CU"."CANO" "CANO",
"CU"."ACCTYPE" "ACCTYPE"
FROM
(SELECT DISTINCT
"CU_ALL"."CUSTID" "$nso_col_1"
FROM
"TESTUSER"."CU_ALL" "CU_ALL"
WHERE
"CU_ALL"."CANO"=PRIOR "CU_ALL"."CUSTID"
CONNECT BY
"CU_ALL"."CANO"=PRIOR "CU_ALL"."CUSTID") "VW_NSO_1",
"TESTUSER"."CA_RECEIPT" "CA",
"TESTUSER"."CU_ALL" "CU"
WHERE
"CA"."CUSTID"="VW_NSO_1"."$nso_col_1"
AND "CA"."CUSTID"="CU"."CUSTID"
ORDER BY
"CU"."ACCTYPE" DESC;
The second SQL statement:
SELECT
CA.*,
CU.*
FROM
CA_RECEIPT CA,
CU_ALL CU
WHERE
CA.CUSTID = CU.CUSTID
AND CU.CUSTID IN (
SELECT
CUSTID
FROM
CU_ALL
START WITH
CUSTID = 2353
CONNECT BY PRIOR
CUSTID = CANO)
ORDER BY
ACCTYPE DESC;
Final Transformation:
SELECT
"CA"."CUSTID" "CUSTID",
"CA"."CAAMT" "CAAMT",
"CA"."CADT" "CADT",
"CA"."TOTBAL" "TOTBAL",
"CU"."CUSTID" "CUSTID",
"CU"."ADDR" "ADDR",
"CU"."PH" "PH",
"CU"."CANO" "CANO",
"CU"."ACCTYPE" "ACCTYPE"
FROM
(SELECT DISTINCT
"CU_ALL"."CUSTID" "$nso_col_1"
FROM
"TESTUSER"."CU_ALL" "CU_ALL"
WHERE
"CU_ALL"."CANO"=PRIOR "CU_ALL"."CUSTID"
CONNECT BY
"CU_ALL"."CANO"=PRIOR "CU_ALL"."CUSTID") "VW_NSO_1",
"TESTUSER"."CA_RECEIPT" "CA",
"TESTUSER"."CU_ALL" "CU"
WHERE
"CA"."CUSTID"="CU"."CUSTID"
AND "CU"."CUSTID"="VW_NSO_1"."$nso_col_1"
ORDER BY
"CU"."ACCTYPE" DESC;
Now, let's take a look at performance, flushing the buffer cache to force physical reads:
SET AUTOTRACE OFF
SET TIMING ON
SET AUTOTRACE TRACEONLY STATISTICS
SET ARRAYSIZE 100
ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;
SELECT
CA.*,
CU.*
FROM
CA_RECEIPT CA,
CU_ALL CU
WHERE
CA.CUSTID = CU.CUSTID
AND CA.CUSTID IN (
SELECT
CUSTID
FROM
CU_ALL
START WITH
CUSTID = 2353
CONNECT BY PRIOR
CUSTID = CANO)
ORDER BY
ACCTYPE DESC;
ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;
SELECT
CA.*,
CU.*
FROM
CA_RECEIPT CA,
CU_ALL CU
WHERE
CA.CUSTID = CU.CUSTID
AND CU.CUSTID IN (
SELECT
CUSTID
FROM
CU_ALL
START WITH
CUSTID = 2353
CONNECT BY PRIOR
CUSTID = CANO)
ORDER BY
ACCTYPE DESC;
The output:
/* (with AND CA.CUSTID IN...) */
52 rows selected.
Elapsed: 00:00:00.64
Statistics
0 recursive calls
0 db block gets
232 consistent gets
592 physical reads
0 redo size
2044 bytes sent via SQL*Net to client
346 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
52 rows processed
/* (with AND CU.CUSTID IN...) */
52 rows selected.
Elapsed: 00:00:00.70
Statistics
0 recursive calls
0 db block gets
162 consistent gets
712 physical reads
0 redo size
2044 bytes sent via SQL*Net to client
346 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
52 rows processed
The original SQL statement completed in 0.64 seconds, and the second completed in 0.70 seconds.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
NEW FEATURE:AUTOTRACE IN SQL*PLUS 3.3(EXECUTION PLAN)
Product : SQL*Plus
Date written : 2003-10-07
NEW FEATURE: AUTOTRACE IN SQL*PLUS 3.3
======================================
Autotrace is a new feature supported from SQL*Plus 3.3 onward. Previously, you set
SQL_TRACE=TRUE in init.ora and then ran the resulting trace file through the TKPROF
utility to obtain the execution path and various statistics for a SQL statement.
From SQL*Plus 3.3, however, a much simpler method is provided.
1. Run SQL*Plus, connect as the scott user, and create the plan table.
#sqlplus scott/tiger
SQL>@$ORACLE_HOME/rdbms/admin/utlxplan
2. Next, as the sys user, create a role called PLUSTRACE.
SVRMGR>connect internal;
SVRMGR>create role plustrace;
SVRMGR>grant select on v_$sesstat to plustrace;
SVRMGR>grant select on v_$statname to plustrace;
SVRMGR>grant select on v_$session to plustrace;
SVRMGR>grant plustrace to dba with admin option;
SVRMGR>grant plustrace to scott;
Note) If SQL*Plus 3.3 is installed on the client, the grant statements above are
recorded in the script C:\ORAWIN95\PLUS33\PLUSTRCE.SQL.
You can simply run it as follows:
1> connect sys/manager
2> @$ORACLE_HOME/sqlplus/admin/plustrce.sql
3> grant plustrace to scott;
3. Next, connect as the scott user and proceed.
#sqlplus scott/tiger
SQL>set autotrace on
SQL>select * from emp;
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'EMP'
Statistics
389 recursive calls
5 db block gets
53 consistent gets
12 physical reads
0 redo size
1049 bytes sent via SQL*Net to client
239 bytes received via SQL*Net from client
4 SQL*Net round-trips to/from client
0 sorts (memory)
0 sorts (disk)
13 rows processed
4. For reference, set autotrace accepts several options:
e.g. set autotrace on => Explain plan and statistics.
set autotrace on explain => Explain plan only.
set autotrace traceonly => Display only the trace, without the selected
rows.
set autotrace on statistics => SQL statement execution statistics.
5. This is independent of the server version.
Even if the server is version 7.2 or lower, as long as SQL*Plus 3.3 is installed
on the client, you can run SQL*Plus 3.3 on the client, connect to the server, and
work as above without any problem.
Reference Documents
<Note:43214.1>
Hi Roman,
I don't have an Oracle 9.2 database readily available, but it works fine on 10g XE. Please note 3.1 is not certified with 9i:
http://www.oracle.com/technetwork/developer-tools/sql-developer/certification-096745.html
Regards,
Gary
SQL Developer Team -
MapViewer 1.0 rc1 and Oracle Database 10g 10.1.0.2.0 Performace
I have just loaded the MV Demo and was a little disappointed with the performance. Below, I posted a sample of database fetch/rendering times. I was wondering whether this is typical, since this was my first experience with MapViewer. And if not, where can I get some information to improve performance?
Terry
06/09/15 08:01:43 INFO [oracle.sdovis.DBMapMaker] **** time spent on rendering: 16ms
06/09/15 08:01:43 INFO [oracle.sdovis.DBMapMaker] **** time spent on loading features: 84145ms.
06/09/15 08:01:43 INFO [oracle.sdovis.DBMapMaker] **** time spent on rendering: 0ms
06/09/15 08:01:43 INFO [oracle.sdovis.DBMapMaker] **** time spent on loading features: 84300ms.
06/09/15 08:01:43 INFO [oracle.sdovis.DBMapMaker] **** time spent on rendering: 16ms
06/09/15 08:01:44 INFO [oracle.sdovis.DBMapMaker] **** time spent on loading features: 87443ms.
06/09/15 08:01:44 INFO [oracle.sdovis.DBMapMaker] **** time spent on rendering: 0ms
06/09/15 08:01:48 INFO [oracle.sdovis.DBMapMaker] **** time spent on loading features: 92155ms.
06/09/15 08:01:48 INFO [oracle.sdovis.DBMapMaker] **** time spent on rendering: 0ms
06/09/15 08:01:51 INFO [oracle.sdovis.DBMapMaker] **** time spent on loading features: 91921ms.
06/09/15 08:01:51 INFO [oracle.sdovis.DBMapMaker] **** time spent on rendering: 16ms
06/09/15 08:01:51 INFO [oracle.sdovis.DBMapMaker] **** time spent on loading features: 92186ms.
06/09/15 08:01:51 INFO [oracle.sdovis.DBMapMaker] **** time spent on rendering: 16ms
06/09/15 08:01:53 INFO [oracle.sdovis.DBMapMaker] **** time spent on loading features: 97024ms.
06/09/15 08:01:53 INFO [oracle.sdovis.DBMapMaker] **** time spent on loading features: 93618ms.
06/09/15 08:01:53 INFO [oracle.sdovis.DBMapMaker] **** time spent on rendering: 16ms
06/09/15 08:01:53 INFO [oracle.sdovis.DBMapMaker] **** time spent on rendering: 15ms
06/09/15 08:01:59 INFO [oracle.sdovis.DBMapMaker] **** time spent on loading features: 99739ms.
06/09/15 08:01:59 INFO [oracle.sdovis.DBMapMaker] **** time spent on rendering: 15ms
Thanks for the quick reply. I was just doing a basic test to determine whether MapViewer would satisfy our target architecture and rich web client functional requirements. Everything checks out except for my lack of understanding of how to determine the best performance settings. I expect the development environment to be slower than production. Our actual production environment is many load-balanced map and database servers.
Sorry about the lack of details.
Terry
Here are the particulars.
Test Application: http://localhost:8888/mapviewer/faces/fsmc/oraclemaps.jspx
Db server: Win 2003 Enterprise Edition, 2 Xeon 3 GHz processors with 8 GB RAM.
App server: Win XP Pro, 2 Xeon 3 GHz processors with 1 GB RAM.
App server start script:
cd C:\mv_qs\oc4j\j2ee\home
start C:\jdk1.5.0_04\bin\java -server -Xmx384M -jar oc4j.jar
Map cache config: DEMO_MAP MVDEMO DEMO_MAP 10 true 256 256 true
Data Source Config: mvdemo mq thin:@172.19.35.10:1521:MQ 64 100
MapViewer Config:
<?xml version="1.0" ?>
<!-- This is the configuration file for Oracle9iAS MapViewer. -->
<!-- Note: All paths are resolved relative to this directory (where
this config file is located), unless specified as an absolute
path name.
-->
<MapperConfig>
<!-- ****************************************************************** -->
<!-- ************************ Logging Settings ************************ -->
<!-- ****************************************************************** -->
<!-- Uncomment the following to modify logging. Possible values are:
log_level = "fatal"|"error"|"warn"|"info"|"debug"|"finest"
(default: "info");
log_thread_name = "true" | "false" ;
log_time = "true" | "false" ;
one or more log_output elements.
-->
<!--
<logging log_level="info" log_thread_name="false"
log_time="true">
<log_output name="System.err" />
<log_output name="../log/mapviewer.log" />
</logging>
-->
<!-- ****************************************************************** -->
<!-- ********************** Map Image Settings ************************ -->
<!-- ****************************************************************** -->
<!-- Uncomment the following only if you want generated images to
be stored in a different directory, or if you want to customize
the life cycle of generated image files.
By default, all maps are generated under
$ORACLE_HOME/lbs/mapviewer/web/images.
Images location-related attributes:
file_prefix: image file prefix, default value is "omsmap"
url: the URL at which images can be accessed. It must match the 'path'
attribute below. Its default value is "%HOST_URL%/mapviewer/images"
path: the corresponding path in the server where the images are
saved; default value is "%ORACLE_HOME%/lbs/mapviewer/web/images"
Images life cycle-related attributes:
life: the life period of generated images, specified in minutes.
If not specified or if the value is 0, images saved on disk will
never be deleted.
recycle_interval: this attribute specifies how often the recycling
of generated map images will be performed. The unit is minute.
The default interval (when not specified or if the value is 0)
is 8*60, or 8 hours.
-->
<!--
<save_images_at file_prefix="omsmap"
url="http://mypc.mycorp.com:8888/mapviewer/images"
path="../web/images"
/>
-->
<!-- ****************************************************************** -->
<!-- ********************* IP Monitoring Settings ********************* -->
<!-- ****************************************************************** -->
<!-- Uncomment the following to enable IP filtering for administrative
requests.
Note:
- Use <ips> and <ip_range> to specify which IPs (and ranges) are allowed.
Wildcard form such as 20.* is also accepted. Use a comma-delimited
list in <ips>.
- Use <ips_exclude> and <ip_range_exclude> for IPs and IP ranges
prohibited from accessing eLocation.
- If an IP falls into both "allowed" and "prohibited" categories, it is
prohibited.
- If you put "*" in an <ips> element, then all IPs are allowed, except
those specified in <ips_exclude> and <ip_range_exclude>.
On the other hand, if you put "*" in an <ips_exclude> element, no one
will be able to access MapViewer (regardless of whether an IP is in
<ips> or <ip_range>).
- You can have multiple <ips>, <ip_range>, <ips_exclude>, and
<ip_range_exclude> elements under <ip_monitor>.
- If no <ip_monitor> element is present in the XML configuration
file, then no IP filtering will be performed (all allowed).
- The way MapViewer determines if an IP is allowed is:
if(IP filtering is not enabled) then allow;
if(IP is in exclude-list) then not allow;
else if(IP is in allow-list) then allow;
else not allow;
-->
<!--
<ip_monitor>
<ips> 138.1.17.9, 138.1.17.21, 138.3.*, 20.* </ips>
<ip_range> 24.17.1.3 - 24.17.1.20 </ip_range>
<ips_exclude> 138.3.29.* </ips_exclude>
<ip_range_exclude>20.22.34.1 - 20.22.34.255</ip_range_exclude>
</ip_monitor>
-->
<!-- ****************************************************************** -->
<!-- ********************** Web Proxy Setting ************************ -->
<!-- ****************************************************************** -->
<!-- Uncomment and modify the following to specify the Web proxy setting.
This is only needed for passing background image URLs to
MapViewer in map requests or for setting a logo image URL, if
such URLs cannot be accessed without the proxy.
-->
<!--
<web_proxy host="www-proxy.my_corp.com" port="80" />
-->
<!-- ****************************************************************** -->
<!-- *********************** Security Configuration ******************* -->
<!-- ****************************************************************** -->
<!-- Here you can set various security related configurations of MapViewer.
-->
<security_config>
<disable_direct_info_request> false </disable_direct_info_request>
</security_config>
<!-- ****************************************************************** -->
<!-- *********************** Global Map Configuration ***************** -->
<!-- ****************************************************************** -->
<!-- Uncomment and modify the following to specify systemwide parameters
for generated maps. You can specify your copyright note, map title, and
an image to be used as a custom logo shown on maps. The logo image must
be accessible to this MapViewer and in either GIF or JPEG format.
Notes:
- To disable a global note or title, specify an empty string ("") for
the text attribute of <note> and <title> element.
- position specifies a relative position on the map where the
logo, note, or title will be displayed. Possible values are
NORTH, EAST, SOUTH, WEST, NORTH_EAST, SOUTH_EAST,
SOUTH_WEST, NORTH_WEST, and CENTER.
- image_path specifies a file path or a URL (starts with "http://")
for the image.
<rendering> element attributes:
- Local geodetic data adjustment: If allow_local_adjustment="true",
MapViewer automatically performs local data
"flattening" with geodetic data if the data window is less than
3 decimal degrees. Specifically, MapViewer performs a simple
mathematical transformation of the coordinates using a tangential
plane at the current map request center.
If allow_local_adjustment="false" (default), no adjustment is
performed.
- Automatically applies a globular map projection (geodetic data only):
If use_globular_projection="true", MapViewer will
apply a globular projection on the fly to geometries being displayed.
If use_globular_projection="false" (the default), MapViewer does no map
projection to geodetic geometries. This option has no effect on
non-geodetic data.
-->
<!--
<global_map_config>
<note text="Copyright 2004, Oracle Corporation"
font="sans serif"
position="SOUTH_EAST"/>
<title text="MapViewer Demo"
font="Serif"
position="NORTH" />
<logo image_path="C:\\images\\a.gif"
position="SOUTH_WEST" />
<rendering allow_local_adjustment="false"
use_globular_projection="false" />
</global_map_config>
-->
<!-- ****************************************************************** -->
<!-- ****************** Spatial Data Cache Setting ******************* -->
<!-- ****************************************************************** -->
<!-- Uncomment and modify the following to customize the spatial data cache
used by MapViewer. The default is 64 MB for in-memory cache.
To disable the cache, set max_cache_size to 0.
max_cache_size: Maximum size of in-memory spatial cache of MapViewer.
Size must be specified in megabytes (MB).
report_stats: If you would like to see periodic output of cache
statistics, set this attribute to true. The default
is false.
-->
<!--
<spatial_data_cache max_cache_size="64"
report_stats="false"
/>
-->
<!-- ****************************************************************** -->
<!-- ******************** Custom Image Renderers ********************** -->
<!-- ****************************************************************** -->
<!-- Uncomment and add as many custom image renderers as needed here,
each in its own <custom_image_renderer> element. The "image_format"
attribute specifies the format of images that are to be custom
rendered using the class with full name specified in "impl_class".
You are responsible for placing the implementation classes in the
MapViewer's classpath.
-->
<!--
<custom_image_renderer image_format="ECW"
impl_class="com.my_corp.image.ECWRenderer" />
-->
<!-- ****************************************************************** -->
<!-- ****************** Custom WMS Capabilities Info ****************** -->
<!-- ****************************************************************** -->
<!-- Uncomment and modify the following tag if you want MapViewer to
use the following information in its getCapabilities response.
Note: all attributes and elements of <wms_config> are optional.
-->
<!--
<wms_config host="www.my_corp.com" port="80">
<title>
WMS 1.1 interface for Oracle Application Server 10g MapViewer
</title>
<abstract>
This WMS service is provided through Oracle MapViewer.
</abstract>
<keyword_list>
<keyword>bird</keyword>
<keyword>roadrunner</keyword>
<keyword>ambush</keyword>
</keyword_list>
<sdo_epsg_mapfile>
../config/epsg_srids.properties
</sdo_epsg_mapfile>
</wms_config>
-->
<!-- ****************************************************************** -->
<!-- **************** Custom Non-Spatial Data Provider **************** -->
<!-- ****************************************************************** -->
<!-- Uncomment and add as many custom non-spatial data provider as
needed here, each in its own <ns_data_provider> element.
You must provide the id and full class name here. Optionally you
can also specify any number of global parameters, which MapViewer
will pass to the data provider implementation during initialization.
The name and value of each parameter is interpreted only by the
implementation.
-->
<!-- this is the default data provider that comes with MapViewer; please
refer to the MapViewer User's Guide for instructions on how to use it.
-->
<ns_data_provider
id="defaultNSDP"
class="oracle.sdovis.NSDataProviderDefault"
/>
<!-- this is a sample NS data provider with parameters:
<ns_data_provider
id="myProvider1" class="com.mycorp.bi.NSDataProviderImpl" >
<parameters>
<parameter name="myparam1" value="value1" />
<parameter name="p2" value="v2" />
</parameters>
</ns_data_provider>
-->
<!-- ****************************************************************** -->
<!-- ******************* Map Cache Server Setting ******************* -->
<!-- ****************************************************************** -->
<!-- Uncomment and modify the following to customize the map cache server.
<cache_storage> specifies the default root directory under which the
cached tile images are to be stored if the cache instance configuration
does not specify the root directory for the cache instance. If the
default root directory is not set or not valid, the default root
directory will be set to $MAPVIEWER_HOME/web/mapcache
default_root_path: The default root directory under which the cached
tile images are stored.
<logging> specifies the logging options for map cache server.
-->
<!--
<map_cache_server>
<cache_storage default_root_path="/scratch/mapcachetest/"/>
<logging log_level="finest" log_thread_name="false" log_time="true">
<log_output name="System.err"/>
<log_output name="../log/mapcacheserver.log"/>
</logging>
</map_cache_server>
-->
<!-- ****************************************************************** -->
<!-- ******************** Predefined Data Sources ******************** -->
<!-- ****************************************************************** -->
<!-- Uncomment and modify the following to predefine one or more data
sources.
Note: You must precede the jdbc_password value with a '!'
(exclamation point), so that when MapViewer starts the next
time, it will encrypt and replace the clear text password.
-->
<map_data_source name="mvdemo"
jdbc_host="172.19.35.10"
jdbc_sid="MQ"
jdbc_port="1521"
jdbc_user="mq"
jdbc_password="mq001"
jdbc_mode="thin"
number_of_mappers="3"
/>
</MapperConfig> -
Steps in Technical Content (0TCT*)
Hello,
Please let me know if the steps below are correct or if something is missing.
1) Activate the technical content just like business content (e.g. activating cube 0TCT_C03: Data Manager Statistics)
2) Check the extractor in BW, t-code RSA3 (self system) to see number of records
3) Load the data using infopackage and schedule delta
4) Run the standard query on this cube
Just need some clarification on "Activating Data Transfers for BW Statistics". This can be found in the menu bar: Tools --> BI Statistics Settings. I understand that here you can individually activate and deactivate the processes for saving and transferring data. But if we don't do this, can we still see the data in reports? Is this mandatory?
Please advise
Regards
Pank SAP
Hi Pank,
For all the queries/workbooks for which you want to collect statistics and see them in reports, you have to activate or mark them and the relevant InfoProviders in the Data Warehousing Workbench: Tools > BI Statistics Settings.
For the new statistics, you have to determine the objects for which you want to update the statistics and to what level of detail. You call this dialog from the Data Warehousing Workbench. It has changed completely from the previous statistics functionality.
In addition, you can manually assign a priority to the queries, InfoProviders and process chains. This priority is evaluated in the statistics. This can be used to establish a ranking or to exclude objects from the display in a report.
Please search in SAP Help, you will get more stuff.
http://help.sap.com/saphelp_nw70/helpdata/en/43/15c54048035a39e10000000a422035/frameset.htm
http://help.sap.com/saphelp_nw70/helpdata/en/ef/372242c4e05033e10000000a155106/frameset.htm
Thanks
CK -
Re: DB Time , OLAP Time and Front end time?
Hi Gurus,
What are DB time, OLAP time and front-end time? In which scenarios are they generally considered? Please guide me; points will be rewarded.
By
RK
Hi,
Query Runtime Statistics (Previously OLAP Statistics) (Changed)
Technical data:
· Function: changed
· Software component: SAP NetWeaver
· Release: 2004s
· Application component: BW-BEX-OT
· Country setting: valid for all countries
Use
With the new architecture for BI reporting, collection of statistics for query runtime analysis was enhanced or changed. The parallelization in the data manager area (during data read) that is being used more frequently has led to splitting the previous "OLAP" statistics data into "data manager" data (such as database access times, RFC times) and front-end and OLAP times. The statistics data is collected in separate tables, but it can be combined using the InfoProvider for the technical content.
The information as to whether statistic data is collected for an object no longer depends on the InfoProvider. Instead it depends on those objects for which the data is collected, which means on a query, a workbook or a Web template. The associated settings are maintained in the RSDDSTAT transaction.
Effects on Existing Data
Due to the changes in the OLAP and front-end architecture, the statistic data collected up to now can only partially be compared with the new data.
Since the structure of the new tables differs greatly from that of the table RSDDSTAT, InfoProviders that are based on previous data (table RSDDSTAT) can no longer be supplied with data.
Effects on Customizing
The Collect Statistics setting is obsolete. Instead you have to determine whether and at which granularity you wish to display the statistics data for the individual objects (query, workbook, Web template). In the RSDDSTAT transaction, you can turn the statistics on and off for all queries for an InfoProvider. The maintenance of the settings (as before) from the Data Warehousing Workbench can be reached using Tools -> BW Statistics.
DB time: the time spent reading data from the database.
OLAP time: the time spent in the OLAP processor (aggregations and calculations).
Front-end time: the time the query takes to run in the front end.
If the above info is useful, please grant me points
Regards,
Subha -
Hi
I have a query about SQL tuning.
SQL> explain plan set statement_id='TEST' for
2 select * from employee where emp_id=7369;
Explained.
but how do I view this explain plan?
That is a good explanation provided by hoek. In this case the Predicate Information section does provide a clue that the optimizer rewrote the SQL statement that you provided into an equivalent form. That section of the execution plan indicates how the data will be retrieved and compared. The number to the left should be matched with the corresponding Id value in the execution plan.
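To answer the immediate question first: a plan stored with statement_id='TEST' can typically be displayed with DBMS_XPLAN. A minimal sketch, assuming the default PLAN_TABLE and Oracle 10g or later:

```sql
-- Populate PLAN_TABLE with a tagged plan
EXPLAIN PLAN SET STATEMENT_ID = 'TEST' FOR
  SELECT * FROM employee WHERE emp_id = 7369;

-- Display that specific plan from PLAN_TABLE
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'TEST', 'TYPICAL'));
```

On older releases the utlxpls.sql script serves the same purpose.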
I do not have your test table, so I created one in 10.2.0.4 and 11.1.0.7, then built an equivalent query. Creating the test table and gathering statistics:
SET LINESIZE 140
SET TRIMSPOOL ON
SET PAGESIZE 1000
CREATE TABLE T1(
EMPLOYEE_ID VARCHAR2(30),
EMPLOYEE_NAME VARCHAR2(80),
DEPT_ID VARCHAR2(15),
SALARY NUMBER,
PRIMARY KEY(EMPLOYEE_ID));
INSERT INTO
T1
SELECT
CHR(MOD(ROWNUM-1,26)+65)||TO_CHAR(ROWNUM),
RPAD(CHR(MOD(ROWNUM-1,26)+65),40,CHR(MOD(ROWNUM-1,26)+65))||TO_CHAR(ROWNUM),
CHR(MOD(ROWNUM-1,10)+65),
ROUND(ABS(SIN(ROWNUM/180))*100000 + 10000,2)
FROM
DUAL
CONNECT BY
LEVEL<=1000;
COMMIT;
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
Executing the query with a 10053 trace enabled so that we can see how the optimizer transforms the submitted SQL statement:
ALTER SESSION SET TRACEFILE_IDENTIFIER = '10053TraceTransform';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
SELECT /*+ FIND_ME */
DEPT_ID
FROM
T1
WHERE
SALARY > ALL(
SELECT
AVG(SALARY)
FROM
T1
GROUP BY
DEPT_ID);
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));
On Oracle 10.2.0.4 the execution plan looked like this:
SQL_ID 0yyg2tkus5avd, child number 0
SELECT /*+ FIND_ME */ DEPT_ID FROM T1 WHERE SALARY > ALL(
SELECT AVG(SALARY) FROM T1 GROUP BY DEPT_ID)
Plan hash value: 3443957669
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 1561 (100)| |
|* 1 | FILTER | | | | | |
| 2 | TABLE ACCESS FULL | T1 | 1000 | 8000 | 2 (0)| 00:00:01 |
|* 3 | FILTER | | | | | |
| 4 | HASH GROUP BY | | 10 | 80 | 3 (34)| 00:00:01 |
| 5 | TABLE ACCESS FULL| T1 | 1000 | 8000 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter( IS NULL)
3 - filter(LNNVL(AVG("SALARY")<:B1))
On 11.1.0.7 the execution plan looked like this:
SQL_ID 0yyg2tkus5avd, child number 0
SELECT /*+ FIND_ME */ DEPT_ID FROM T1 WHERE SALARY > ALL(
SELECT AVG(SALARY) FROM T1 GROUP BY DEPT_ID)
Plan hash value: 1608688004
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 9 (100)| |
| 1 | MERGE JOIN ANTI NA | | 950 | 19950 | 9 (34)| 00:00:01 |
| 2 | SORT JOIN | | 1000 | 8000 | 4 (25)| 00:00:01 |
| 3 | TABLE ACCESS FULL | T1 | 1000 | 8000 | 3 (0)| 00:00:01 |
|* 4 | SORT UNIQUE | | 10 | 130 | 5 (40)| 00:00:01 |
| 5 | VIEW | VW_NSO_1 | 10 | 130 | 4 (25)| 00:00:01 |
| 6 | HASH GROUP BY | | 10 | 80 | 4 (25)| 00:00:01 |
| 7 | TABLE ACCESS FULL| T1 | 1000 | 8000 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("SALARY"<="AVG(SALARY)")
filter("SALARY"<="AVG(SALARY)")On 11.1.0.7 we see a null aware antijoin. Let's take a peek inside the 10053 trace files to see the transformed SQL statements (slightly reformatted):
10.2.0.4:
******* UNPARSED QUERY IS *******
SELECT /*+ */
"SYS_ALIAS_1"."DEPT_ID" "DEPT_ID"
FROM
"TESTUSER"."T1" "SYS_ALIAS_1"
WHERE
NOT EXISTS (
SELECT /*+ */
0
FROM
"TESTUSER"."T1" "T1"
GROUP BY
"T1"."DEPT_ID"
HAVING
LNNVL(AVG("T1"."SALARY")<"SYS_ALIAS_1"."SALARY"))
11.1.0.7:
Final query after transformations:
******* UNPARSED QUERY IS *******
SELECT
"T1"."DEPT_ID" "DEPT_ID"
FROM
(SELECT
AVG("T1"."SALARY") "AVG(SALARY)"
FROM
"TESTUSER"."T1" "T1"
GROUP BY
"T1"."DEPT_ID") "VW_NSO_1",
"TESTUSER"."T1" "T1"
WHERE
"T1"."SALARY"<="VW_NSO_1"."AVG(SALARY)"Just out of curiosity, what is the purpose of your SQL statement (I can see what it is doing, but why are you doing it)?
Charles Hooper
Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Use of the DBA_TAB_MODIFICATIONS view
I am interested in exploring use of the DBA_TAB_MODIFICATIONS view. Is there a connection between the population of this view's base tables and the monitoring-statistics feature (set/reset using the CREATE TABLE and ALTER TABLE SQL statements)? What do I need to do to be able to use the data provided by this view?
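A minimal sketch of the usual workflow, assuming a table named EMP owned by SCOTT (names are illustrative). Note that on 9i you enable monitoring per table; from 10g onwards monitoring happens automatically when STATISTICS_LEVEL is TYPICAL or ALL:

```sql
-- 9i only: enable DML monitoring for the table explicitly
ALTER TABLE scott.emp MONITORING;

-- Flush the in-memory DML counters to the dictionary so the view is current
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

-- Inspect accumulated inserts/updates/deletes since the last stats gather
SELECT table_owner, table_name, inserts, updates, deletes, timestamp
FROM   dba_tab_modifications
WHERE  table_owner = 'SCOTT'
AND    table_name  = 'EMP';
```

DBMS_STATS.GATHER_*_STATS with the default OPTIONS uses these counters to decide which tables have stale statistics.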
Hi Kedar,
I think I replied to your other post as well. Anyway, to answer your question: yes, these settings will apply to all users. The way it works is that once the user (created in SU01) logs in, these settings are applied as the defaults. Therefore, it should also work for users replicated from other systems using a batch program.
Cheers,
Lashan -
Performance question: TKPROF/Autotrace cost high when using an index, why?
I'm going through a legacy database application to do some application code performance tuning of the PL/SQL.
I am constrained from changing the code logic, so I can only add indexes and hints. As I identify full table scans I am using hints to force the code to use the index, and now the cost is higher than with full table scans, even though the application code runs in batch mode and processes a lot of records.
Can anyone provide any insight into why the cost goes up when using hints in the SQL?
I have additional testing to do, though I just wanted to pose the question.
Firstly, the cost can be less for a full table scan because with a full table scan Oracle can do multiblock I/O. So reading a large amount of data from a table will go much faster if the table is full scanned as opposed to accessed by an index (forcing the query with a hint can actually be extremely detrimental to the query).
Secondly, if you are not allowed to make any code changes, i'd recommend not taking the path you are currently on (adding hints). That may be warranted in a few situations, but likely not many assuming you have accurate statistics set up on your database.
Here's an article that deals with full table scans ....
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9422487749968
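To see the point about multiblock I/O for yourself, you can compare the optimizer's cost estimates for the two access paths side by side. A sketch; the table and index names here are hypothetical:

```sql
-- Full scan: few large multiblock reads over the whole segment
EXPLAIN PLAN FOR
  SELECT /*+ FULL(o) */ * FROM orders o WHERE status = 'OPEN';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Forced index access: one single-block read per rowid; when the
-- predicate matches a large fraction of the table, the estimated
-- cost is usually higher than the full scan
EXPLAIN PLAN FOR
  SELECT /*+ INDEX(o orders_status_ix) */ * FROM orders o WHERE status = 'OPEN';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

If the hinted plan shows a higher cost, that is the optimizer telling you it considered the index path and rejected it for a reason.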
Hope that Helps. -
Insert a Javascript code...
Hi everybody.
I'd like to insert a visit counter in a Captivate project, because I want to know how many people visit my site.
How can I do that?
kind regards.
JM.
I've been using Google Analytics for some time now, and
highly recommend it. When you sign up for the free account, the
Analytics admin site will display some HTML code for you. You'll
copy/paste this code into each HTML page you want statistics for,
in the exact location they specify (in the correct <tag>
area). It's really simple to set up.
This HTML page needs to be on the internet, and not inside a corporate intranet.
You might also check with your web hosting company - nearly
all of them have web logs and statistics set up for every client by
default. You usually log into a web site/control panel - and can
view reports and statistics in there. I have this available from my
web host, but still find the Google Analytics very useful. The
reporting and graphs are very intuitive to use.