RMAN level 1 is taking the same time as level 0 in Oracle 9i
Hi experts,
What is the exact difference between incremental backups (level 1) in Oracle 9i and 10g?
Our prod DB is Oracle 9.2.0.8.
We are planning to implement RMAN on production and have tested it on one of our test DBs,
but the level 1 backup took the same time as a full backup (level 0). A level 1 backs up nothing but changed blocks, so why is it taking the time of a full backup?
What is the main reason behind that? Please help me.
Oracle 10g has the block change tracking feature: a file that tracks all changed DB blocks is continuously maintained. That file enables RMAN (10g) to read only the blocks that have changed since the most recent level 0 backup, which can bring huge performance improvements for incremental backups. The downside is a slight slowdown of DML during normal operation. On 9i there is no such file, so a level 1 backup must still read every block of every datafile to find the changed ones; the resulting backup is much smaller, but the scan takes roughly as long as a level 0.
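On 9i you can confirm that a level 1 scans the whole datafile even though it writes very little, by comparing blocks read with blocks written per backup. A minimal sketch against the standard V$BACKUP_DATAFILE view:
SQL> SELECT file#, incremental_level, datafile_blocks, blocks_read, blocks
  2  FROM v$backup_datafile
  3  ORDER BY completion_time DESC;
For a 9i level 1 you will typically see BLOCKS_READ close to DATAFILE_BLOCKS while BLOCKS (blocks written) is small: the read pass, not the write, dominates the elapsed time.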
Similar Messages
-
Query in TimesTen taking more time than query in Oracle database
Hi,
Can anyone please explain why a query in TimesTen takes more time
than the same query in the Oracle database?
I describe in detail below, step by step, my settings and what I have done.
1. This is the table I created in the Oracle database
(Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
CREATE TABLE student (
  id         NUMBER(9) PRIMARY KEY,
  first_name VARCHAR2(10),
  last_name  VARCHAR2(10)
);
2. THIS IS THE ANONYMOUS BLOCK I USE TO
POPULATE THE STUDENT TABLE (TOTAL 2,599,999 ROWS):
declare
  firstname varchar2(12);
  lastname  varchar2(12);
begin
  for cntr in 1..2599999 loop
    firstname := (cntr + 8) || 'f';
    lastname  := (cntr + 2) || 'l';
    -- progress output every 10,000 rows (relies on implicit number-to-string conversion)
    if cntr like '%9999' then
      dbms_output.put_line(cntr);
    end if;
    insert into student values (cntr, firstname, lastname);
  end loop;
  commit;  -- make the inserted rows permanent
end;
/
3. MY DSN IS SET THE FOLLOWING WAY:
DATA STORE PATH- G:\dipesh3repo\db
LOG DIRECTORY- G:\dipesh3repo\log
PERM DATA SIZE-1000
TEMP DATA SIZE-1000
MY TIMESTEN VERSION-
C:\Documents and Settings\dipesh>ttversion
TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
Instance admin: dipesh
Instance home directory: G:\TimestTen\TT70_32
Daemon home directory: G:\TimestTen\TT70_32\srv\info
THEN I CONNECT TO THE TIMESTEN DATABASE
C:\Documents and Settings\dipesh> ttisql
Command> connect "dsn=dipesh3;oraclepwd=tiger";
4. THEN I START THE AGENT
call ttCacheUidPwdSet('SCOTT','TIGER');
Command> CALL ttCacheStart();
5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
create readonly cache group rc_student autorefresh
interval 5 seconds from student
(id int not null primary key, first_name varchar2(10), last_name varchar2(10));
load cache group rc_student commit every 100 rows;
6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
I SET THE TIMING..
command>TIMING 1;
consider this query now..
Command> select * from student where first_name='2155666f';
< 2155658, 2155666f, 2155660l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
another query-
Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
2206: Table SCOTT.STUDENTS not found
Execution time (SQLPrepare) = 0.074964 seconds.
The command failed.
Command> SELECT * FROM STUDENT where first_name='2093434f';
< 2093426, 2093434f, 2093428l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
Command>
7. NOW I PERFORM SIMILAR QUERIES FROM SQL*PLUS...
SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
ID FIRST_NAME LAST_NAME
1498663 1498671f 1498665l
Elapsed: 00:00:00.15
Can anyone please explain why the query in TimesTen takes more time
than the query in the Oracle database?
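One thing worth checking before comparing the two engines (an assumption on my part, since no secondary index is shown anywhere above): both test queries filter on first_name, which is not indexed, so each SELECT is a full scan of ~2.6 million rows. Assuming TimesTen allows an extra index on the cached table here, a minimal sketch:
Command> CREATE INDEX i_student_fname ON student (first_name);
Command> SELECT * FROM student WHERE first_name = '2155666f';
Rerunning the SELECT with TIMING 1 once the index exists should make the comparison with Oracle more meaningful.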
Message was edited by: Dipesh Majumdar
Message was edited by: user542575
TimesTen:
Hardware: Windows Server 2003 R2 Enterprise x64; 8 x dual-core AMD 8216 2.41 GHz processors; 32 GB RAM
Version: 7.0.4.0.0 64 bit
Schema:
create usermanaged cache group factCache from
MV_US_DATAMART (
ORDER_DATE DATE,
IF_SYSTEM VARCHAR2(32) NOT NULL,
GROUPING_ID TT_BIGINT,
TIME_DIM_ID TT_INTEGER NOT NULL,
BUSINESS_DIM_ID TT_INTEGER NOT NULL,
ACCOUNT_DIM_ID TT_INTEGER NOT NULL,
ORDERTYPE_DIM_ID TT_INTEGER NOT NULL,
INSTR_DIM_ID TT_INTEGER NOT NULL,
EXECUTION_DIM_ID TT_INTEGER NOT NULL,
EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
NO_ORDERS TT_BIGINT,
FILLED_QUANTITY TT_BIGINT,
CNT_FILLED_QUANTITY TT_BIGINT,
QUANTITY TT_BIGINT,
CNT_QUANTITY TT_BIGINT,
COMMISSION BINARY_FLOAT,
CNT_COMMISSION TT_BIGINT,
FILLS_NUMBER TT_BIGINT,
CNT_FILLS_NUMBER TT_BIGINT,
AGGRESSIVE_FILLS TT_BIGINT,
CNT_AGGRESSIVE_FILLS TT_BIGINT,
NOTIONAL BINARY_FLOAT,
CNT_NOTIONAL TT_BIGINT,
TOTAL_PRICE BINARY_FLOAT,
CNT_TOTAL_PRICE TT_BIGINT,
CANCELLED_ORDERS_COUNT TT_BIGINT,
CNT_CANCELLED_ORDERS_COUNT TT_BIGINT,
ROUTED_ORDERS_NO TT_BIGINT,
CNT_ROUTED_ORDERS_NO TT_BIGINT,
ROUTED_LIQUIDITY_QTY TT_BIGINT,
CNT_ROUTED_LIQUIDITY_QTY TT_BIGINT,
REMOVED_LIQUIDITY_QTY TT_BIGINT,
CNT_REMOVED_LIQUIDITY_QTY TT_BIGINT,
ADDED_LIQUIDITY_QTY TT_BIGINT,
CNT_ADDED_LIQUIDITY_QTY TT_BIGINT,
AGENT_CHARGES BINARY_FLOAT,
CNT_AGENT_CHARGES TT_BIGINT,
CLEARING_CHARGES BINARY_FLOAT,
CNT_CLEARING_CHARGES TT_BIGINT,
EXECUTION_CHARGES BINARY_FLOAT,
CNT_EXECUTION_CHARGES TT_BIGINT,
TRANSACTION_CHARGES BINARY_FLOAT,
CNT_TRANSACTION_CHARGES TT_BIGINT,
ORDER_MANAGEMENT BINARY_FLOAT,
CNT_ORDER_MANAGEMENT TT_BIGINT,
SETTLEMENT_CHARGES BINARY_FLOAT,
CNT_SETTLEMENT_CHARGES TT_BIGINT,
RECOVERED_AGENT BINARY_FLOAT,
CNT_RECOVERED_AGENT TT_BIGINT,
RECOVERED_CLEARING BINARY_FLOAT,
CNT_RECOVERED_CLEARING TT_BIGINT,
RECOVERED_EXECUTION BINARY_FLOAT,
CNT_RECOVERED_EXECUTION TT_BIGINT,
RECOVERED_TRANSACTION BINARY_FLOAT,
CNT_RECOVERED_TRANSACTION TT_BIGINT,
RECOVERED_ORD_MGT BINARY_FLOAT,
CNT_RECOVERED_ORD_MGT TT_BIGINT,
RECOVERED_SETTLEMENT BINARY_FLOAT,
CNT_RECOVERED_SETTLEMENT TT_BIGINT,
CLIENT_AGENT BINARY_FLOAT,
CNT_CLIENT_AGENT TT_BIGINT,
CLIENT_ORDER_MGT BINARY_FLOAT,
CNT_CLIENT_ORDER_MGT TT_BIGINT,
CLIENT_EXEC BINARY_FLOAT,
CNT_CLIENT_EXEC TT_BIGINT,
CLIENT_TRANS BINARY_FLOAT,
CNT_CLIENT_TRANS TT_BIGINT,
CLIENT_CLEARING BINARY_FLOAT,
CNT_CLIENT_CLEARING TT_BIGINT,
CLIENT_SETTLE BINARY_FLOAT,
CNT_CLIENT_SETTLE TT_BIGINT,
CHARGEABLE_TAXES BINARY_FLOAT,
CNT_CHARGEABLE_TAXES TT_BIGINT,
VENDOR_CHARGE BINARY_FLOAT,
CNT_VENDOR_CHARGE TT_BIGINT,
ROUTING_CHARGES BINARY_FLOAT,
CNT_ROUTING_CHARGES TT_BIGINT,
RECOVERED_ROUTING BINARY_FLOAT,
CNT_RECOVERED_ROUTING TT_BIGINT,
CLIENT_ROUTING BINARY_FLOAT,
CNT_CLIENT_ROUTING TT_BIGINT,
TICKET_CHARGES BINARY_FLOAT,
CNT_TICKET_CHARGES TT_BIGINT,
RECOVERED_TICKET_CHARGES BINARY_FLOAT,
CNT_RECOVERED_TICKET_CHARGES TT_BIGINT,
PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
READONLY);
No of rows: 2228558
Config:
< CkptFrequency, 600 >
< CkptLogVolume, 0 >
< CkptRate, 0 >
< ConnectionCharacterSet, US7ASCII >
< ConnectionName, tt_us_dma >
< Connections, 64 >
< DataBaseCharacterSet, AL32UTF8 >
< DataStore, e:\andrew\datacache\usDMA >
< DurableCommits, 0 >
< GroupRestrict, <NULL> >
< LockLevel, 0 >
< LockWait, 10 >
< LogBuffSize, 65536 >
< LogDir, e:\andrew\datacache\ >
< LogFileSize, 64 >
< LogFlushMethod, 1 >
< LogPurge, 0 >
< Logging, 1 >
< MemoryLock, 0 >
< NLS_LENGTH_SEMANTICS, BYTE >
< NLS_NCHAR_CONV_EXCP, 0 >
< NLS_SORT, BINARY >
< OracleID, NYCATP1 >
< PassThrough, 0 >
< PermSize, 4000 >
< PermWarnThreshold, 90 >
< PrivateCommands, 0 >
< Preallocate, 0 >
< QueryThreshold, 0 >
< RACCallback, 0 >
< SQLQueryTimeout, 0 >
< TempSize, 514 >
< TempWarnThreshold, 90 >
< Temporary, 1 >
< TransparentLoad, 0 >
< TypeMode, 0 >
< UID, OS_OWNER >
ORACLE:
Hardware: SunOS 5.10; 24 x 1.8 GHz CPUs (unsure of type); 82 GB RAM
Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
Schema:
CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
TABLESPACE TS_OS
PARTITION BY RANGE (ORDER_DATE)
PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
LOGGING
NOCOMPRESS
TABLESPACE TS_OS
NOCACHE
NOCOMPRESS
NOPARALLEL
BUILD DEFERRED
USING INDEX
TABLESPACE TS_OS_INDEX
REFRESH FAST ON DEMAND
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT order_date, if_system,
GROUPING_ID (order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id
) GROUPING_ID,
/* ============ DIMENSIONS ============ */
time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
instr_dim_id, execution_dim_id, exec_exchange_dim_id,
/* ============ MEASURES ============ */
-- o.FX_RATE /* FX_RATE */,
COUNT (*) no_orders,
-- SUM(NO_ORDERS) NO_ORDERS,
-- COUNT(NO_ORDERS) CNT_NO_ORDERS,
SUM (filled_quantity) filled_quantity,
COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
COUNT (quantity) cnt_quantity, SUM (commission) commission,
COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
COUNT (fills_number) cnt_fills_number,
SUM (aggressive_fills) aggressive_fills,
COUNT (aggressive_fills) cnt_aggressive_fills,
SUM (fx_rate * filled_quantity * average_price) notional,
COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
SUM (fx_rate * fills_number * average_price) total_price,
COUNT (fx_rate * fills_number * average_price) cnt_total_price,
SUM (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END) cancelled_orders_count,
COUNT (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END
) cnt_cancelled_orders_count,
-- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
-- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
-- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
SUM (routed_orders_no) routed_orders_no,
COUNT (routed_orders_no) cnt_routed_orders_no,
SUM (routed_liquidity_qty) routed_liquidity_qty,
COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
SUM (removed_liquidity_qty) removed_liquidity_qty,
COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
SUM (added_liquidity_qty) added_liquidity_qty,
COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
SUM (agent_charges) agent_charges,
COUNT (agent_charges) cnt_agent_charges,
SUM (clearing_charges) clearing_charges,
COUNT (clearing_charges) cnt_clearing_charges,
SUM (execution_charges) execution_charges,
COUNT (execution_charges) cnt_execution_charges,
SUM (transaction_charges) transaction_charges,
COUNT (transaction_charges) cnt_transaction_charges,
SUM (order_management) order_management,
COUNT (order_management) cnt_order_management,
SUM (settlement_charges) settlement_charges,
COUNT (settlement_charges) cnt_settlement_charges,
SUM (recovered_agent) recovered_agent,
COUNT (recovered_agent) cnt_recovered_agent,
SUM (recovered_clearing) recovered_clearing,
COUNT (recovered_clearing) cnt_recovered_clearing,
SUM (recovered_execution) recovered_execution,
COUNT (recovered_execution) cnt_recovered_execution,
SUM (recovered_transaction) recovered_transaction,
COUNT (recovered_transaction) cnt_recovered_transaction,
SUM (recovered_ord_mgt) recovered_ord_mgt,
COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
SUM (recovered_settlement) recovered_settlement,
COUNT (recovered_settlement) cnt_recovered_settlement,
SUM (client_agent) client_agent,
COUNT (client_agent) cnt_client_agent,
SUM (client_order_mgt) client_order_mgt,
COUNT (client_order_mgt) cnt_client_order_mgt,
SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
SUM (client_trans) client_trans,
COUNT (client_trans) cnt_client_trans,
SUM (client_clearing) client_clearing,
COUNT (client_clearing) cnt_client_clearing,
SUM (client_settle) client_settle,
COUNT (client_settle) cnt_client_settle,
SUM (chargeable_taxes) chargeable_taxes,
COUNT (chargeable_taxes) cnt_chargeable_taxes,
SUM (vendor_charge) vendor_charge,
COUNT (vendor_charge) cnt_vendor_charge,
SUM (routing_charges) routing_charges,
COUNT (routing_charges) cnt_routing_charges,
SUM (recovered_routing) recovered_routing,
COUNT (recovered_routing) cnt_recovered_routing,
SUM (client_routing) client_routing,
COUNT (client_routing) cnt_client_routing,
SUM (ticket_charges) ticket_charges,
COUNT (ticket_charges) cnt_ticket_charges,
SUM (recovered_ticket_charges) recovered_ticket_charges,
COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
FROM us_datamart_raw
GROUP BY order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id;
-- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
-- by Oracle with the associated materialized view.
CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
NOLOGGING
NOPARALLEL
COMPRESS 7;
No of rows: 2228558
The query (taken from Mondrian) that I run against each of them is:
select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
--, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
--, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
--, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
--, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
--, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
--, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
--, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
--, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
--, sum("MV_US_DATAMART"."COMMISSION") as "m9"
--, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
--, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
--,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
--,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
--, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
--, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
--, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
--, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
--,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
--, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
I uncomment one column at a time and rerun. I have improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results (in seconds) were:
No. of columns   Oracle   TimesTen
1 1.05 0.94
2 1.07 1.47
3 2.04 1.8
4 2.06 2.08
5 2.09 2.4
6 3.01 2.67
7 4.02 3.06
8 4.03 3.37
9 4.04 3.62
10 4.06 4.02
11 4.08 4.31
12 4.09 4.61
13 5.01 4.76
14 5.02 5.06
15 5.04 5.25
16 5.05 5.48
17 5.08 5.84
18 6 6.21
19 6.02 6.34
20 6.04 6.75
-
Running multiple VIs in a top-level app at the same time
Hello, I am wondering how to run multiple VIs under the same top-level application. If I have a top-level application with multiple buttons that call other VIs, is there a way to minimize one of the sub-VIs and open another? I keep having problems, because I think the top-level application needs the first sub-VI to close before you can call another sub-VI. I attach an example I found in the NI examples. Thank you.
Attachments:
Dynamic Load Example.vi 52 KB
Thank you for the response. I tried to use this feature, but I think I am doing it wrong. Do you put the method in the sub-VIs or in the top-level application? Also, what should you wire to the Auto Dispose Ref input of the Run VI method? Thank you.
-
RMAN cold backup taking more time than usual
Hi everybody, please help me resolve the issue below.
I configured RMAN in one of our production databases, with a separate catalog database, six months ago. I have scheduled a weekly cold backup through RMAN on Sunday at 6 pm. It used to take one hour to complete, and the database would go down as soon as the job started.
But since then, the time taken just to initiate the database shutdown has kept increasing every week; when I checked recently, it was taking 1 hour to initiate the shutdown. Once the shutdown initiates, it hardly takes 1 to 3 minutes to complete.
The database is up and running during that one hour. I was under the assumption that RMAN takes some time to execute its internal packages.
Please help
Regards,
Arun Kumar
Hi John and Tychos,
Thank you very much for your valuable inputs.
Yesterday there was a cold backup and I monitored the CPU usage; there was no load on the CPU at that time and usage was 0%.
I tried connecting to RMAN manually and it connects within a second. I also noticed in prstat -a that RMAN connects as soon as the job starts.
So I think the time is being spent deleting obsolete backups.
However, I have observed the following.
Before executing the DELETE OBSOLETE command, as mentioned before:
RMAN> REPORT OBSOLETE RECOVERY WINDOW OF 35 DAYS DEVICE TYPE 'SBT_TAPE';
Report of obsolete backups and copies
Type Key Completion Time Filename/Handle
Backup Set 83409 25-JUL-09
Backup Piece 83587 25-JUL-09 arc_SID_20090725_557_1
Backup Set 83410 25-JUL-09
Backup Piece 83588 25-JUL-09 arc_SID_20090725_558_1
Backup Set 83411 25-JUL-09
Backup Piece 83589 25-JUL-09 arc_SID_20090725_559_1
After executing the DELETE OBSOLETE command:
RMAN> REPORT OBSOLETE RECOVERY WINDOW OF 35 DAYS DEVICE TYPE 'SBT_TAPE';
Report of obsolete backups and copies
Type Key Completion Time Filename/Handle
Backup Set 83409 25-JUL-09
Backup Piece 83587 25-JUL-09 arc_SID_20090725_557_1
Backup Set 83410 25-JUL-09
Backup Piece 83588 25-JUL-09 arc_SID_20090725_558_1
Backup Set 83411 25-JUL-09
Backup Piece 83589 25-JUL-09 arc_SID_20090725_559_1
Please advise me on the following.
1. Why is it not deleting the obsolete backup sets?
2. Is it normal for RMAN to take this long deleting obsolete backup sets? How can I minimize the time taken to delete obsolete files?
Thanks and Regards,
Arun Kumar
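One possibility worth ruling out (an assumption, not confirmed in this thread): when REPORT OBSOLETE keeps listing the same tape pieces after DELETE OBSOLETE, the pieces may no longer be confirmable through the media manager. A crosscheck pass before the delete marks such pieces EXPIRED so they can be removed; a minimal sketch, reusing the 35-day window from the commands above:
RMAN> CROSSCHECK BACKUP DEVICE TYPE 'SBT_TAPE';
RMAN> DELETE NOPROMPT EXPIRED BACKUP DEVICE TYPE 'SBT_TAPE';
RMAN> DELETE NOPROMPT OBSOLETE RECOVERY WINDOW OF 35 DAYS DEVICE TYPE 'SBT_TAPE';
-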
Data loads taking more time to complete
Hi,
we have some sales stats cubes which load on average a million records daily.
We create indexes after the load and then build aggregates.
Each of these processes takes 2-3 hrs; even on weekends, when we load 0 records, it takes the same time to complete.
Can anyone suggest a better way to create indexes and roll up the data?
Thanks
Sivaprasad
Are you compressing the InfoCubes after a specific period of time (e.g. requests 30 or more days old)? Compression will help reduce the number of aggregate and index entries to create, and therefore reduce the runtimes for each.
-
RMAN cumulative and differential level 1 taking too much time
hi,
I am attempting to hot back up my 600 GB database to tape using NMO 5 with EMC NetWorker 7.6.
My NetWorker server is on Windows Server 2003.
My Oracle database is on RHEL 4.5, architecture ia64.
Oracle DB version: 10.2.0.4.0.
Using ASM.
Using EMC storage as database storage.
Using tape backup media type LTO Ultrium 5.
The number of channels used is the same (4) for both level 0 and level 1.
There are 60 datafiles for the database.
I am attempting incremental (hot) backups.
An incremental level 0 takes 90 minutes to complete,
BUT level 1 backups (both differential and cumulative) take almost the same time as the level 0 backup:
almost 80 minutes.
Yet the backup set size for level 0 is almost 500 GB, while no level 1 backup is more than 200 MB.
I am confused as to whether both level 0 and level 1 backups should take the same span of time.
Please help me reduce the time needed to complete the level 1 backups.
thanks in advance
RMAN incremental level 1 and up has to read every block in the datafiles to identify whether any modifications have occurred, so the elapsed time is dominated by that scan; it is mainly the backup size, not the duration, that depends on how much changed. Are you using the latest patches? There are known bugs that can cause RMAN backup and recovery performance problems. Otherwise, check the Oracle documentation to troubleshoot RMAN.
Block change tracking, as already mentioned, was introduced in 10g and can greatly speed up your incremental level 1 and higher backups.
From what I understand:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/mydir/rman_change_track.f';
As soon as block change tracking is enabled, Oracle starts to record every block that is updated. The information is stored in a bitmap inside the BCT file. Every incremental backup causes a bitmap switch in the BCT file.
If a previous bitmap exists beside the current bitmap, then an incremental level 1 backup will back up only the blocks recorded in the current bitmap. Incremental level 1 backups are differential backups by default. If there is no previous bitmap, the RMAN backup will perform a conventional scan of the database as usual.
The bitmap logic also applies to cumulative level 1 incremental backups, which use all the bitmaps recorded since the bitmap switch at the last level 0 incremental backup. Due to the limit of 8 bitmaps, a cumulative incremental level 1 backup will have to fall back to a conventional scan of the database if you make a level 0 database backup followed by 7 differential incremental backups.
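Once tracking is enabled, two standard views can confirm it is actually being used (a quick sketch; both views exist in 10.2):
SQL> SELECT status, filename, bytes FROM v$block_change_tracking;
SQL> SELECT file#, incremental_level, used_change_tracking, blocks_read, datafile_blocks
  2  FROM v$backup_datafile
  3  ORDER BY completion_time DESC;
USED_CHANGE_TRACKING = 'YES' together with BLOCKS_READ far below DATAFILE_BLOCKS shows that the level 1 no longer scans the whole file.
-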
Release strategy - two levels getting released at the same time
I have created a PR release strategy using 4 levels of $$.
The first level (L0) is $0.00 to $1.00.
The second level (L1) goes from $1.01 to $500.00, etc.
Levels 2 and 3 work fine, but the problem I am running into is that when I release level L0, the system automatically releases level L1 along with it.
Has anyone seen this before? What am I doing wrong?
(My original setup had L1, L2 and L3 and worked fine, but I had to add L0 for buyer review, so I created the lower dollar-limit level and am now running into this dual-release problem.)
Need ideas/help, thanks
Raj
Hi,
Please check the settings for the release prerequisites in the PO release strategy for L0 & L1.
Regards,
Manish
-
XML Publisher (XDODTEXE) in EBS taking more time than the same SQL in TOAD
HI
XML Publisher (XDODTEXE) in EBS is taking much more time than the same SQL in TOAD.
The SQL has 5 UNION clauses.
It takes 20-30 minutes in TOAD, compared to around 4-5 hours when run through the concurrent program in XML Publisher in EBS.
The Scalable flag at report level is turned on, with the JVM options set to -Xmx1024m -Xmx1024m in the concurrent program definition.
Other Data Template configurations such as XSLT, Scalable and Optimization are turned on, though I didn't bounce the OPP server for these to take effect, as I am not sure whether that is needed.
Thanks in advance for your help.
But the question is: how come it works in TOAD and takes only 15-20 minutes?
With initialization of the session?
What about SQL*Plus?
Do I have to set up the temp directory for the XML Publisher report to make it faster?
Look at:
R12: Troubleshooting Known XML Publisher and E-Business Suite (EBS) Integration Issues (Doc ID 1410160.1)
BI Publisher - Troubleshooting Oracle Business Intelligence (XML) Publisher For The Oracle E-Business Suite (Doc ID 364547.1)
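If the notes above don't resolve it, one way to see where the 4-5 hours go is to trace the database session of the running request and compare the trace with the TOAD run. A minimal sketch using standard EBS and V$ views (the bind values are placeholders):
SQL> SELECT s.sid, s.serial#
  2  FROM v$session s, v$process p, fnd_concurrent_requests r
  3  WHERE r.request_id = :req_id
  4  AND p.spid = r.oracle_process_id
  5  AND s.paddr = p.addr;
SQL> EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(:sid, :serial, waits => TRUE, binds => TRUE);
-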
Will deleting a column at logical schema delete the same at physical level by DDL Sync?
Hi David,
First of all, thanks for your quick response and for logging the enhancement request.
I am testing your suggestion, more or less, but I am not sure I understood exactly what you meant.
1) I imported from the data dictionary into a new model, and in the options menu on the schema select screen I unchecked partitions and triggers.
I assumed the import would then not fetch the partition information from the data dictionary, but the result is that the partitioned tables (by list, in this case) show up partitioned by range, without fields, in the physical model in SDDM.
2) I select one of the tables, set the NO partitioning option, and propagate the option to the rest of the tables.
3) I imported again from the data dictionary, but this time I included the partitions in the options menu on the select schema screen.
In the tabular view on the compare models screen I can select all the tables with a different partitioning option; I can also change to "list partitions" and select only the partitions that I want to import.
So I have a solution for my problem; thanks a lot for your suggestion.
I'm not sure the second step is needed, or maybe I can avoid it with some configuration setting in one of the preferences screens.
If not, I think the option to exclude partitions on the select schema screen is not so clear, at least to me.
Please, could you confirm whether there is a way to avoid the second step, or whether I misunderstood this option?
thanks in advance -
Skype using high levels of RAM over time
So recently I've noticed that Skype has been using increasing amounts of RAM over time. Before, I think, 7.1, it sat at a steady 170k or lower, but since the recent update it scales all the way up to using nearly 80% of my RAM after being on for an hour. Is there a reason this happens, or is it a memory leak issue?
Hi,
I'm having the same issue as you.
In my situation, on Windows 7 Ultimate 64-bit with 8 GB of RAM available, Skype randomly uses from 400 MB to 1.5 GB of RAM, slowing down the whole PC.
I have to kill the Skype process to get it working well again.
My Skype version is 7.3.0.101.
I need to resolve this problem (and, from my point of view, uninstalling the newest version and installing the older one is not a solution).
Thanks in advance
Andrea -
Using different levels of the same dimension
If I have 2 fact tables and a conforming time dimension, can I make a join so that one table ignores different years completely? (One table has only 1 year of data, so it should display the same values for different years as the other table; yet I still want to be able to drill down from year to detail.) I am somewhat successful at seeing it OK at the annual level (when setting the desired metric to the Grand Total level), but aggregate navigation doesn't work correctly; I'm locked at the annual level. Thanks
Hi mma (and anyone else who was following).
Here's an update:
a) The AGO function doesn't support anything non-integer. It's official, and the product enhancement request has been filed. Go figure: they cancel the time-series wizard, but at the same time AGO isn't fully scalable. Since we're developing a brand-new RPD and weren't using any YAGO, MAGO, etc. tables, this has bitten us later rather than sooner.
b) Using a CASE statement is no panacea (it has to change for each year).
c) Your method (time.key = time.key (of 2005) + MOD(time.key, 365)) would probably work in my situation (I tried it, and it wasn't that difficult to implement), except that I get the following RPD integrity error: "Error: Using a complex join in table that sources time dimension" (I tried aliases as well).
d) Right now, I'm just doing that metric at the annual level: creating a view in Oracle that only contains the needed data (for 2006), using it as a physical table in the physical layer, and connecting it with all foreign keys BUT time.
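For reference, option (d) amounts to something like the following on the database side (the table and column names here are hypothetical; only the shape matters):
SQL> CREATE OR REPLACE VIEW fact_2006_v AS
  2  SELECT *
  3  FROM single_year_fact
  4  WHERE order_date >= DATE '2006-01-01'
  5  AND order_date < DATE '2007-01-01';
The view is then imported as a physical table and joined on every dimension key except time.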
I still think there must be a better way without complex ETL and without creating an additional column. LOJ, ROJ and FOJ didn't work.
Thanks for looking at this again. -
I have a request at report level, but the same is missing in the InfoCube
Dear Experts,
I have a request at the report level, but the same request is missing at the compressed InfoCube level. What could be the cause? Does compressing the InfoCube delete the request? If so, why am I still able to view other requests at the InfoCube manage level?
Kindly provide more information.
Thanks.
Hi
Compressing InfoCubes
Use
When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query.
Using compressing, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0).
This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct.
Features
You can choose request IDs and release them to be compressed. You can schedule the function immediately or in the background, and can schedule it with a process chain.
Compressing one request takes approx. 2.5 ms per data record.
With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced. See also Modeling of Non-Cumulatives with Non-Cumulative Key Figures.
If you run the compression for a non-cumulative InfoCube, the summarization time (including the time to update the markers) will be about 5 ms per data record.
If you are using an Oracle database as your BW database, you can also run a report using the relevant InfoCube while the compression is running. With other manufacturers' databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running; in this case you can execute the query once the compression has finished.
If you want to avoid the InfoCube containing entries whose key figures are zero values (in reverse posting for example) you can run a zero-elimination at the same time as the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table.
Zero-elimination is permitted only for InfoCubes, where key figures with the aggregation behavior SUM appear exclusively. In particular, you are not permitted to run zero-elimination with non-cumulative values.
For non-cumulative InfoCubes, you can ensure that the non-cumulative marker is not updated by setting the indicator No Marker Updating. You have to use this option if you are loading historic non-cumulative value changes into an InfoCube after an initialization has already taken place with the current non-cumulative. Otherwise the results produced in the query will not be correct. For performance reasons, you should compress subsequent delta requests.
Edited by: Allu on Dec 20, 2007 3:26 PM -
With RMAN, is it possible to clone into two separate databases at the same time?
Backing up PROD.
Need to clone into TEST1 and TEST2.
Has anyone ever done this?
We've tried to run two at the same time but we get an error
(I'll have to dig the error up).
We were not using separate data sets yet.
Hi,
run the two DUPLICATE commands:
CONNECT TARGET sys/***@prod CATALOG rman/***@catdb AUXILIARY SYS/***@TEST1;
CONNECT TARGET sys/***@prod CATALOG rman/***@catdb AUXILIARY SYS/***@TEST2;
I run it from two test servers. Both connect to prod and to the same catalog db and there is no problem.
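For completeness, a minimal sketch of the duplicate each session would run (assuming each auxiliary instance is already started NOMOUNT with its own parameter file; the channel name is illustrative):
RMAN> RUN {
  ALLOCATE AUXILIARY CHANNEL aux1 DEVICE TYPE DISK;
  DUPLICATE TARGET DATABASE TO TEST1;
}
The second session does the same with TEST2. Running them from two different servers avoids any clash over auxiliary instance names or restore paths.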
Regards,
Tom
Edited by: Soli on 24.7.2009 9:39 -
Hi
I use backing tracks for some songs with a live band and we are having some issues with levels; some are higher than others, etc. I use an iPad to run the tracks. I am looking for an app where I can control the levels better: ideally, one where I can set the level of each track to a desired level and leave it at that level for good. Mix16 Pro seems to do that, but I wonder whether it saves the settings, as I do not want to set the levels every time I use it.
Thanks
Dear S&OP community,
I am getting the following error while creating a planning area in a newly installed sandbox: "Enter values for planning horizon From and planning horizon To for the storage time profile level".
This is what I did:
1) Created new attributes and master data objects and activated them successfully.
2) Created and activated the time profile successfully.
3) Tried to create the planning area by assigning the time profile from step 2 and the master data from step 1. I am unable to save the data, and the system returns this error: "Enter values for planning horizon From and planning horizon To for the storage time profile level".
My understanding is that the time profile needs to be active but doesn't have to have values...
Any help is appreciated.
Thanks,
Krishna
YS,
Here are my time profile settings:
Level  Name       Display Horizon - Past  Display Horizon - Future
1      Monthly    -6                      11
2      Quarterly  -2                      3
3      Yearly     -1                      2
The time profile is active, but the time profile data is not loaded.
Thanks,
Krishna
Maybe you are looking for
-
MotoX OTA update fixes, when's it coming to VERIZON???
Camera – Improved Photo Quality Improved capture of natural light (auto-white balance) and color accuracy for more precise exposure in outdoor and backlit scenes Camera – Improved Focus Faster touch to focus time and reduced unnecessary refocusing
-
Update available for Lion only app - how to get rid of it?
Hi, The App Store indicates that I have an app (linesmART) which is available for update, but when clicking the update button, a dialog box shows up saying it can't be updated because it requires OS X 10.7. I can imagine this will happen for other ap
-
Unable to connect to iTunes/Apple Phone, iPad and Mac STATUS_CODE_ERROR
Unable to connect to iTunes/Apple Phone, iPad and Mac Apple Id Password STATUS_CODE_ERROR
-
I have to reset the output volume in System Preferences whenever I restart my computer, because it reverts to some default setting on every restart. How do I change this setting?
-
Hello gurus, I want to test a delta DataSource in SAP BW. How can I create delta data for that testing? Many thanks.