RMAN backup taking more time than usual suddenly
Hi All,
We are using an 11.1.0.7 database. We regularly take a full level 0 incremental backup, which generally takes about 4.5 hours to complete, but for the last 2-3 days it has been taking 6 hours or more. We have not made any parameter or script changes in the database.
Below are the RMAN configuration details:
RMAN> show all;
RMAN configuration parameters for database with db_unique_name OLAP are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'f:\backup
CONFIGURE DEVICE TYPE DISK PARALLELISM 6 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE CHANNEL DEVICE TYPE DISK MAXOPENFILES 2;
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT 'e:\backup\OLAP\OLAP_full_%u
CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT 'f:\backup\OLAP\OLAP_full_%u
CONFIGURE CHANNEL 3 DEVICE TYPE DISK FORMAT 'e:\backup\OLAP\OLAP_full_%u
CONFIGURE CHANNEL 4 DEVICE TYPE DISK FORMAT 'f:\backup\OLAP\OLAP_full_%u
CONFIGURE CHANNEL 5 DEVICE TYPE DISK FORMAT 'e:\backup\OLAP\OLAP_full_%u
CONFIGURE CHANNEL 6 DEVICE TYPE DISK FORMAT 'f:\backup\OLAP\OLAP_full_%u
CONFIGURE CHANNEL 7 DEVICE TYPE DISK FORMAT 'e:\backup\OLAP\OLAP_full_%u
CONFIGURE CHANNEL 8 DEVICE TYPE DISK FORMAT 'f:\backup\OLAP\OLAP_full_%u
CONFIGURE MAXSETSIZE TO UNLIMITED;
CONFIGURE ENCRYPTION FOR DATABASE OFF;
CONFIGURE ENCRYPTION ALGORITHM 'AES128';
CONFIGURE COMPRESSION ALGORITHM 'BZIP2';
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'f:\backup\OLAP\SNCFOLAP.ORA';
=====================================================
Please help me; the extra time is pushing out my scheduled tasks.
Thanks
Sam
sam wrote:
Hi All,
We are using an 11.1.0.7 database. We regularly take a full level 0 incremental backup, which generally takes about 4.5 hours to complete, but for the last 2-3 days it has been taking 6 hours or more. We have not made any parameter or script changes in the database.
This could be due to a change in server load; please compare the server load (CPU/memory) across those two periods.
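To see whether the slowdown comes from more input, a slower read rate, or a slower destination, it can help to compare recent runs from the catalog views. A minimal sketch, assuming SYSDBA access on the target (the 14-day window is arbitrary):

```sql
-- Compare recent backup jobs: duration, volume, and throughput.
-- V$RMAN_BACKUP_JOB_DETAILS is available in this release (11.1).
SELECT start_time,
       end_time,
       ROUND(input_bytes  / 1024 / 1024 / 1024, 1) AS input_gb,
       ROUND(output_bytes / 1024 / 1024 / 1024, 1) AS output_gb,
       ROUND(elapsed_seconds / 3600, 2)            AS hours,
       ROUND(input_bytes_per_sec / 1024 / 1024, 1) AS read_mb_per_sec
FROM   v$rman_backup_job_details
WHERE  start_time > SYSDATE - 14
ORDER  BY start_time;
```

If input_gb grew, the database simply has more to read; if read_mb_per_sec dropped while input stayed flat, look at the disks or at competing load on the server.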
Similar Messages
-
RMAN cold backup taking more time than usual
Hi everybody, please help me resolve the issue below.
I configured RMAN for one of the production databases, with a separate catalog database, six months ago. I scheduled a weekly cold backup through RMAN on Sunday at 6 pm. It used to take one hour to complete the cold backup, and the database would go down as soon as the job started.
But since then, the time taken just to initiate the database shutdown has been continuously increasing; when I checked recently, it was taking 1 hour to initiate the shutdown. Once the shutdown starts, it hardly takes 1 to 3 minutes to complete.
The database is up and running during that one hour. I assumed that RMAN takes some time to execute its internal packages.
Please help
Regards,
Arun Kumar
Hi John and Tychos,
Thank you very much for your valuable inputs.
Yesterday there was a cold backup and I monitored the CPU usage. There was no load on the CPU at that time; usage was 0%.
I tried connecting to RMAN manually and it connects within a second. I also noticed in prstat -a that RMAN connects as soon as the job starts.
So I think the time is being spent deleting obsolete backups.
But I have observed the following.
Before executing the delete obsolete command, as mentioned before:
RMAN> REPORT OBSOLETE RECOVERY WINDOW OF 35 DAYS DEVICE TYPE 'SBT_TAPE';
Report of obsolete backups and copies
Type Key Completion Time Filename/Handle
Backup Set 83409 25-JUL-09
Backup Piece 83587 25-JUL-09 arc_SID_20090725_557_1
Backup Set 83410 25-JUL-09
Backup Piece 83588 25-JUL-09 arc_SID_20090725_558_1
Backup Set 83411 25-JUL-09
Backup Piece 83589 25-JUL-09 arc_SID_20090725_559_1
After executing the delete obsolete command
RMAN> REPORT OBSOLETE RECOVERY WINDOW OF 35 DAYS DEVICE TYPE 'SBT_TAPE';
Report of obsolete backups and copies
Type Key Completion Time Filename/Handle
Backup Set 83409 25-JUL-09
Backup Piece 83587 25-JUL-09 arc_SID_20090725_557_1
Backup Set 83410 25-JUL-09
Backup Piece 83588 25-JUL-09 arc_SID_20090725_558_1
Backup Set 83411 25-JUL-09
Backup Piece 83589 25-JUL-09 arc_SID_20090725_559_1
Please advise me on the following.
1. Why is it not deleting the obsolete backup sets?
2. Is it normal for RMAN to take this long deleting obsolete backup sets? How can I minimize the time taken to delete obsolete files?
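One thing worth trying, sketched below and not confirmed against this environment: crosscheck against the media manager first, so RMAN can mark pieces the tape layer no longer has, then delete expired and obsolete pieces. Note also that a plain DELETE OBSOLETE applies the configured retention policy, not the 35-day window given to REPORT OBSOLETE, which could explain why the sets listed above survive the delete.

```sql
-- Sketch only: reconcile the catalog with the tape layer, then delete.
CROSSCHECK BACKUP DEVICE TYPE 'SBT_TAPE';
DELETE NOPROMPT EXPIRED BACKUP;
DELETE NOPROMPT OBSOLETE RECOVERY WINDOW OF 35 DAYS DEVICE TYPE 'SBT_TAPE';
```

A crosscheck against a large tape catalog can itself be slow, which may also account for the long "delete obsolete" phase.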
Thanks and Regards,
Arun Kumar -
Suddenly ODI scheduled executions taking more time than usual.
Hi,
I have set ODI packages scheduled for execution.
For the past few days they have been taking more time to execute.
They used to take approximately 1 hr 30 mins.
Now they are taking approximately 3 hrs to 3 hrs 15 mins.
There has been no major change in the volume of data.
My ODI version is
Standalone Edition Version 11.1.1
Build ODI_11.1.1.3.0_GENERIC_100623.1635
ODI packages are mainly using Oracle as SOURCE and TARGET DB.
What should I check to find the reason for this sudden increase in execution time?
Any pointers regarding this would be appreciated.
Thanks,
Mahesh
Mahesh,
Use some repository queries to retrieve the session task timings and compare your slow execution to a previous acceptable execution, then look for the biggest changes; this will highlight where you are slowing down. Then it is a matter of tuning that item accordingly.
See here for some example reports; you might need to tweak them for your current repository version, but I don't think the table structures have changed that much:
http://rnm1978.wordpress.com/2010/11/03/analysing-odi-batch-performance/ -
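For illustration, such a repository query might look like the sketch below. The table and column names (SNP_SESSION, SNP_SESS_TASK_LOG) are from an 11g work repository and the session name is hypothetical, so verify both against your repository version before relying on it:

```sql
-- Per-task durations for an ODI session, slowest tasks first.
SELECT s.sess_name,
       s.sess_no,
       t.scen_task_no,
       t.task_dur AS duration_seconds,
       t.nb_row   AS rows_processed
FROM   snp_session s
       JOIN snp_sess_task_log t ON t.sess_no = s.sess_no
WHERE  s.sess_name = 'MY_PACKAGE'   -- hypothetical session name
ORDER  BY s.sess_no, t.task_dur DESC;
```

Running this for a slow session and for a previously acceptable one, then diffing the durations, shows which task regressed.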
Cube content deletion is taking more time than usual.
Hi Experts,
We have a process chain which ideally should run every two hours. This chain has a delete-data-cube-content step before new data is loaded into the cube. One instance of this chain runs fine while another takes much longer, so the problem is intermittent.
In the process chain we are also deleting contents from the dimension tables (in the delete content step). We need your inputs to improve the performance of this step.
Thanks & Regards
Mayank Tyagi
Hi Mayank,
You can delete the indexes of the cube before deleting its contents. The concept is the same as in data loading: loads run faster when indexes are deleted first.
If you have aggregates on this cube, they will also be adjusted.
Kind Regards,
Ashutosh Singh -
Imac taking more time than usual to boot and Bluetooth functionality is not available
My iMac (21", OS X 10.9.1, 2.5 GHz Intel Core i5) is taking much longer than usual to boot. I have been seeing this for the past week. I am also facing a Bluetooth issue: my Bluetooth keyboard and mouse are not working, so I had to connect a regular USB keyboard and mouse, and after logging in I can see that Bluetooth functionality is disabled. It says Bluetooth is not available.
I tried Repair Disk Permissions (in Disk Utility) and also a PRAM reset. Neither worked.
Has anyone faced the same issue? My questions are:
1) Can anyone help me resolve this issue?
2) What caused this issue all of a sudden? I don't remember installing any software recently.
Thanks in advance.
Hi,
Below are the details I have taken from EtreCheck. Can you please have a look?
Hardware Information:
iMac (21.5-inch, Mid 2011)
iMac - model: iMac12,1
1 2.5 GHz Intel Core i5 CPU: 4 cores
4 GB RAM
Video Information:
AMD Radeon HD 6750M - VRAM: 512 MB
System Software:
OS X 10.9.1 (13B42) - Uptime: 0 days 0:6:52
Disk Information:
ST3500418AS disk0 : (500.11 GB)
EFI (disk0s1) <not mounted>: 209.7 MB
Mcintosh (disk0s2) /: 499.25 GB (61.48 GB free)
Recovery HD (disk0s3) <not mounted>: 650 MB
OPTIARC DVD RW AD-5690H
USB Information:
Apple Inc. FaceTime HD Camera (Built-in)
USB USB Keyboard
DragonRise Inc. Generic USB Joystick
Logitech USB Optical Mouse
Apple Computer, Inc. IR Receiver
Apple Internal Memory Card Reader
FireWire Information:
Thunderbolt Information:
Apple Inc. thunderbolt_bus
Kernel Extensions:
com.orderedbytes.driver.CMUSBDevices (4.6.0 - SDK 10.6)
com.orderedbytes.driver.ControllerMateFamily (4.6.0 - SDK 10.6)
Startup Items:
HWNetMgr: Path: /Library/StartupItems/HWNetMgr
HWPortDetect: Path: /Library/StartupItems/HWPortDetect
Problem System Launch Daemons:
Problem System Launch Agents:
Launch Daemons:
[System] com.adobe.fpsaud.plist 3rd-Party support link
[System] com.google.keystone.daemon.plist 3rd-Party support link
[System] com.microsoft.office.licensing.helper.plist 3rd-Party support link
[System] com.oracle.java.Helper-Tool.plist 3rd-Party support link
[System] com.oracle.java.JavaUpdateHelper.plist 3rd-Party support link
Launch Agents:
[System] com.google.keystone.agent.plist 3rd-Party support link
[System] com.oracle.java.Java-Updater.plist 3rd-Party support link
[System] com.orderedbytes.ControllerMateHelper.plist 3rd-Party support link
User Launch Agents:
[not loaded] com.facebook.videochat.[redacted].plist 3rd-Party support link
User Login Items:
iTunesHelper
USBOverdriveHelper
VMware Fusion Start Menu
Internet Plug-ins:
FlashPlayer-10.6: Version: 11.9.900.117 - SDK 10.6 3rd-Party support link
Default Browser: Version: 537 - SDK 10.9
Flash Player: Version: 11.9.900.117 - SDK 10.6 Outdated! Update
QuickTime Plugin: Version: 7.7.3
o1dbrowserplugin: Version: 5.1.4.17398 3rd-Party support link
SharePointBrowserPlugin: Version: 14.0.0 3rd-Party support link
npgtpo3dautoplugin: Version: 0.1.44.29 - SDK 10.5 3rd-Party support link
googletalkbrowserplugin: Version: 5.1.4.17398 3rd-Party support link
JavaAppletPlugin: Version: Java 7 Update 45 Outdated! Update
Audio Plug-ins:
BluetoothAudioPlugIn: Version: 1.0 - SDK 10.9
AirPlay: Version: 1.9 - SDK 10.9
AppleAVBAudio: Version: 2.0.0 - SDK 10.9
iSightAudio: Version: 7.7.3 - SDK 10.9
User Internet Plug-ins:
Unity Web Player: Version: UnityPlayer version 2.6.1f3 3rd-Party support link
RealPlayer Plugin: Version: Unknown
Google Earth Web Plug-in: Version: 7.1 3rd-Party support link
3rd Party Preference Panes:
Flash Player 3rd-Party support link
Java 3rd-Party support link
Bad Fonts:
None
Old Applications:
Wondershare Helper Compact: Version: 2.2.6.0 - SDK 10.5 3rd-Party support link
/Applications/Wondershare Helper compact/Wondershare Helper Compact.app
PwnageTool: Version: 5.1.1 - SDK 10.4 3rd-Party support link
/Users/Shared/Sowndar/iPhone/PwnageTool.app
VLC: Version: 2.0.1 - SDK 10.5 3rd-Party support link
Time Machine:
Time Machine not configured!
Top Processes by CPU:
6% mds
3% mds_stores
0% EtreCheck
0% WindowServer
0% SystemUIServer
Top Processes by Memory:
143 MB com.apple.IconServicesAgent
102 MB mds_stores
90 MB softwareupdated
70 MB sandboxd
61 MB com.apple.WebKit.WebContent
Virtual Memory Information:
548 MB Free RAM
1.53 GB Active RAM
1.12 GB Inactive RAM
826 MB Wired RAM
1.53 GB Page-ins
0 B Page-outs -
Concurrent request taking more time than usual
HI,
I am running a concurrent request : Payables transfer to general Ledger
which usually completed within 5 minutes, but now it is taking hours to complete. The log file is not showing any errors as such.
I increased the standard manager's processes and bounced the concurrent managers and the database as well. I also ran cmclean.sql, but no luck.
Can anybody suggest me the approach for this problem...
thanks n regards,
Sajad
Check the Enable Trace option in the concurrent program definition.
After that, run the concurrent request, get the trace file from the udump directory, run it through tkprof, and analyze which query is causing the issue.
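A minimal sketch of that workflow (the trace file name below is hypothetical):

```sql
-- Find where the server writes trace files, then tkprof the one
-- belonging to the concurrent request at the OS prompt.
SELECT value FROM v$parameter WHERE name = 'user_dump_dest';
```

Then, from the shell, something like `tkprof ERPDB_ora_12345.trc req.out sort=exeela,fchela` and look at the statements at the top of the sorted output.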
Level1 backup is taking more time than Level0
The level 1 backup is taking more time than level 0, and I am really frustrated about how that could happen. I have a 6.5 GB database. Level 0 took 8 hours, but level 1 is taking more than 8 hours. Please help me in this regard.
Ogan Ozdogan wrote:
Charles,
Enabling block change tracking will indeed make it faster than what he has now, but I think this does not address the OP's question, unless you are saying that an incremental backup without block change tracking is slower than a level 0 (full) backup?
Thank you in anticipation.
Ogan
Ogan,
I can't explain why a 6.5GB level 0 RMAN backup would require 8 hours to complete (maybe a very slow destination device connected by 10Mb/s Ethernet) - I would expect that it should complete in a couple of minutes.
An incremental level 1 backup without a block change tracking file can take longer than a level 0 backup. I once encountered a well-written description of why that happens, but I can't seem to locate the source at the moment. The longer run time might be related to the additional code paths required to constantly compare the SCN of each block, and to the variable write rate, which may affect some devices, such as a tape drive.
A paraphrase from the book "Oracle Database 10g RMAN Backup & Recovery"
"Incremental backups must check the header of each block to discover if it has changed since the last incremental backup - that means an incremental backup may not complete much faster than a full backup."
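For completeness, enabling block change tracking is a one-statement change; a sketch follows (the file path is hypothetical):

```sql
-- Maintain a change-tracking file so level 1 backups read only the
-- changed blocks instead of scanning every datafile block header.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct/change_tracking.f';

-- Confirm it is active:
SELECT status, filename FROM v$block_change_tracking;
```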
Charles Hooper
Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Query in timesten taking more time than query in oracle database
Hi,
Can anyone please explain why a query in TimesTen takes more time than the same query in the Oracle database?
Here is what I set up and what I did, step by step:
1.This is the table I created in Oracle datababase
(Oracle Database 10g Enterprise Edition Release 10.2.0.1.0)...
CREATE TABLE student (
id NUMBER(9) PRIMARY KEY,
first_name VARCHAR2(10),
last_name VARCHAR2(10)
);
2.THIS IS THE ANONYMOUS BLOCK I USE TO
POPULATE THE STUDENT TABLE(TOTAL 2599999 ROWS)...
declare
firstname varchar2(12);
lastname varchar2(12);
catt number(9);
begin
for cntr in 1..2599999 loop
firstname:=(cntr+8)||'f';
lastname:=(cntr+2)||'l';
if cntr like '%9999' then
dbms_output.put_line(cntr);
end if;
insert into student values(cntr,firstname, lastname);
end loop;
end;
3. MY DSN IS SET THE FOLLWING WAY..
DATA STORE PATH- G:\dipesh3repo\db
LOG DIRECTORY- G:\dipesh3repo\log
PERM DATA SIZE-1000
TEMP DATA SIZE-1000
MY TIMESTEN VERSION-
C:\Documents and Settings\dipesh>ttversion
TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
Instance admin: dipesh
Instance home directory: G:\TimestTen\TT70_32
Daemon home directory: G:\TimestTen\TT70_32\srv\info
THEN I CONNECT TO THE TIMESTEN DATABASE
C:\Documents and Settings\dipesh> ttisql
command>connect "dsn=dipesh3;oraclepwd=tiger";
4. THEN I START THE AGENT
call ttCacheUidPwdSet('SCOTT','TIGER');
Command> CALL ttCacheStart();
5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
create readonly cache group rc_student autorefresh
interval 5 seconds from student
(id int not null primary key, first_name varchar2(10), last_name varchar2(10));
load cache group rc_student commit every 100 rows;
6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
I SET THE TIMING..
command>TIMING 1;
consider this query now..
Command> select * from student where first_name='2155666f';
< 2155658, 2155666f, 2155660l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
another query-
Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
2206: Table SCOTT.STUDENTS not found
Execution time (SQLPrepare) = 0.074964 seconds.
The command failed.
Command> SELECT * FROM STUDENT where first_name='2093434f';
< 2093426, 2093434f, 2093428l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
Command>
7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
ID FIRST_NAME LAST_NAME
1498663 1498671f 1498665l
Elapsed: 00:00:00.15
Can anyone please explain why the query in TimesTen takes more time than the query in the Oracle database?
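One possibility worth checking, not confirmed in the thread: the TimesTen cache table may have no index on FIRST_NAME, so each equality lookup scans all ~2.6 million rows, while Oracle may be using a different access path or benefiting from caching. A sketch in ttIsql:

```sql
-- Add an index on the lookup column so equality predicates on
-- first_name avoid a full table scan.
CREATE INDEX student_fn_ix ON student (first_name);
```

In ttIsql, turning on `showplan 1` before re-running the SELECT shows whether the new index is actually used.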
TimesTen
Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
Version: 7.0.4.0.0 64 bit
Schema:
create usermanaged cache group factCache from
MV_US_DATAMART
ORDER_DATE DATE,
IF_SYSTEM VARCHAR2(32) NOT NULL,
GROUPING_ID TT_BIGINT,
TIME_DIM_ID TT_INTEGER NOT NULL,
BUSINESS_DIM_ID TT_INTEGER NOT NULL,
ACCOUNT_DIM_ID TT_INTEGER NOT NULL,
ORDERTYPE_DIM_ID TT_INTEGER NOT NULL,
INSTR_DIM_ID TT_INTEGER NOT NULL,
EXECUTION_DIM_ID TT_INTEGER NOT NULL,
EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
NO_ORDERS TT_BIGINT,
FILLED_QUANTITY TT_BIGINT,
CNT_FILLED_QUANTITY TT_BIGINT,
QUANTITY TT_BIGINT,
CNT_QUANTITY TT_BIGINT,
COMMISSION BINARY_FLOAT,
CNT_COMMISSION TT_BIGINT,
FILLS_NUMBER TT_BIGINT,
CNT_FILLS_NUMBER TT_BIGINT,
AGGRESSIVE_FILLS TT_BIGINT,
CNT_AGGRESSIVE_FILLS TT_BIGINT,
NOTIONAL BINARY_FLOAT,
CNT_NOTIONAL TT_BIGINT,
TOTAL_PRICE BINARY_FLOAT,
CNT_TOTAL_PRICE TT_BIGINT,
CANCELLED_ORDERS_COUNT TT_BIGINT,
CNT_CANCELLED_ORDERS_COUNT TT_BIGINT,
ROUTED_ORDERS_NO TT_BIGINT,
CNT_ROUTED_ORDERS_NO TT_BIGINT,
ROUTED_LIQUIDITY_QTY TT_BIGINT,
CNT_ROUTED_LIQUIDITY_QTY TT_BIGINT,
REMOVED_LIQUIDITY_QTY TT_BIGINT,
CNT_REMOVED_LIQUIDITY_QTY TT_BIGINT,
ADDED_LIQUIDITY_QTY TT_BIGINT,
CNT_ADDED_LIQUIDITY_QTY TT_BIGINT,
AGENT_CHARGES BINARY_FLOAT,
CNT_AGENT_CHARGES TT_BIGINT,
CLEARING_CHARGES BINARY_FLOAT,
CNT_CLEARING_CHARGES TT_BIGINT,
EXECUTION_CHARGES BINARY_FLOAT,
CNT_EXECUTION_CHARGES TT_BIGINT,
TRANSACTION_CHARGES BINARY_FLOAT,
CNT_TRANSACTION_CHARGES TT_BIGINT,
ORDER_MANAGEMENT BINARY_FLOAT,
CNT_ORDER_MANAGEMENT TT_BIGINT,
SETTLEMENT_CHARGES BINARY_FLOAT,
CNT_SETTLEMENT_CHARGES TT_BIGINT,
RECOVERED_AGENT BINARY_FLOAT,
CNT_RECOVERED_AGENT TT_BIGINT,
RECOVERED_CLEARING BINARY_FLOAT,
CNT_RECOVERED_CLEARING TT_BIGINT,
RECOVERED_EXECUTION BINARY_FLOAT,
CNT_RECOVERED_EXECUTION TT_BIGINT,
RECOVERED_TRANSACTION BINARY_FLOAT,
CNT_RECOVERED_TRANSACTION TT_BIGINT,
RECOVERED_ORD_MGT BINARY_FLOAT,
CNT_RECOVERED_ORD_MGT TT_BIGINT,
RECOVERED_SETTLEMENT BINARY_FLOAT,
CNT_RECOVERED_SETTLEMENT TT_BIGINT,
CLIENT_AGENT BINARY_FLOAT,
CNT_CLIENT_AGENT TT_BIGINT,
CLIENT_ORDER_MGT BINARY_FLOAT,
CNT_CLIENT_ORDER_MGT TT_BIGINT,
CLIENT_EXEC BINARY_FLOAT,
CNT_CLIENT_EXEC TT_BIGINT,
CLIENT_TRANS BINARY_FLOAT,
CNT_CLIENT_TRANS TT_BIGINT,
CLIENT_CLEARING BINARY_FLOAT,
CNT_CLIENT_CLEARING TT_BIGINT,
CLIENT_SETTLE BINARY_FLOAT,
CNT_CLIENT_SETTLE TT_BIGINT,
CHARGEABLE_TAXES BINARY_FLOAT,
CNT_CHARGEABLE_TAXES TT_BIGINT,
VENDOR_CHARGE BINARY_FLOAT,
CNT_VENDOR_CHARGE TT_BIGINT,
ROUTING_CHARGES BINARY_FLOAT,
CNT_ROUTING_CHARGES TT_BIGINT,
RECOVERED_ROUTING BINARY_FLOAT,
CNT_RECOVERED_ROUTING TT_BIGINT,
CLIENT_ROUTING BINARY_FLOAT,
CNT_CLIENT_ROUTING TT_BIGINT,
TICKET_CHARGES BINARY_FLOAT,
CNT_TICKET_CHARGES TT_BIGINT,
RECOVERED_TICKET_CHARGES BINARY_FLOAT,
CNT_RECOVERED_TICKET_CHARGES TT_BIGINT,
PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
READONLY);
No of rows: 2228558
Config:
< CkptFrequency, 600 >
< CkptLogVolume, 0 >
< CkptRate, 0 >
< ConnectionCharacterSet, US7ASCII >
< ConnectionName, tt_us_dma >
< Connections, 64 >
< DataBaseCharacterSet, AL32UTF8 >
< DataStore, e:\andrew\datacache\usDMA >
< DurableCommits, 0 >
< GroupRestrict, <NULL> >
< LockLevel, 0 >
< LockWait, 10 >
< LogBuffSize, 65536 >
< LogDir, e:\andrew\datacache\ >
< LogFileSize, 64 >
< LogFlushMethod, 1 >
< LogPurge, 0 >
< Logging, 1 >
< MemoryLock, 0 >
< NLS_LENGTH_SEMANTICS, BYTE >
< NLS_NCHAR_CONV_EXCP, 0 >
< NLS_SORT, BINARY >
< OracleID, NYCATP1 >
< PassThrough, 0 >
< PermSize, 4000 >
< PermWarnThreshold, 90 >
< PrivateCommands, 0 >
< Preallocate, 0 >
< QueryThreshold, 0 >
< RACCallback, 0 >
< SQLQueryTimeout, 0 >
< TempSize, 514 >
< TempWarnThreshold, 90 >
< Temporary, 1 >
< TransparentLoad, 0 >
< TypeMode, 0 >
< UID, OS_OWNER >
ORACLE:
Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
Schema:
CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
TABLESPACE TS_OS
PARTITION BY RANGE (ORDER_DATE)
PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
LOGGING
NOCOMPRESS
TABLESPACE TS_OS
NOCACHE
NOCOMPRESS
NOPARALLEL
BUILD DEFERRED
USING INDEX
TABLESPACE TS_OS_INDEX
REFRESH FAST ON DEMAND
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT order_date, if_system,
GROUPING_ID (order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id
) GROUPING_ID,
/* ============ DIMENSIONS ============ */
time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
instr_dim_id, execution_dim_id, exec_exchange_dim_id,
/* ============ MEASURES ============ */
-- o.FX_RATE /* FX_RATE */,
COUNT (*) no_orders,
-- SUM(NO_ORDERS) NO_ORDERS,
-- COUNT(NO_ORDERS) CNT_NO_ORDERS,
SUM (filled_quantity) filled_quantity,
COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
COUNT (quantity) cnt_quantity, SUM (commission) commission,
COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
COUNT (fills_number) cnt_fills_number,
SUM (aggressive_fills) aggressive_fills,
COUNT (aggressive_fills) cnt_aggressive_fills,
SUM (fx_rate * filled_quantity * average_price) notional,
COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
SUM (fx_rate * fills_number * average_price) total_price,
COUNT (fx_rate * fills_number * average_price) cnt_total_price,
SUM (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END) cancelled_orders_count,
COUNT (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END
) cnt_cancelled_orders_count,
-- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
-- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
-- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
SUM (routed_orders_no) routed_orders_no,
COUNT (routed_orders_no) cnt_routed_orders_no,
SUM (routed_liquidity_qty) routed_liquidity_qty,
COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
SUM (removed_liquidity_qty) removed_liquidity_qty,
COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
SUM (added_liquidity_qty) added_liquidity_qty,
COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
SUM (agent_charges) agent_charges,
COUNT (agent_charges) cnt_agent_charges,
SUM (clearing_charges) clearing_charges,
COUNT (clearing_charges) cnt_clearing_charges,
SUM (execution_charges) execution_charges,
COUNT (execution_charges) cnt_execution_charges,
SUM (transaction_charges) transaction_charges,
COUNT (transaction_charges) cnt_transaction_charges,
SUM (order_management) order_management,
COUNT (order_management) cnt_order_management,
SUM (settlement_charges) settlement_charges,
COUNT (settlement_charges) cnt_settlement_charges,
SUM (recovered_agent) recovered_agent,
COUNT (recovered_agent) cnt_recovered_agent,
SUM (recovered_clearing) recovered_clearing,
COUNT (recovered_clearing) cnt_recovered_clearing,
SUM (recovered_execution) recovered_execution,
COUNT (recovered_execution) cnt_recovered_execution,
SUM (recovered_transaction) recovered_transaction,
COUNT (recovered_transaction) cnt_recovered_transaction,
SUM (recovered_ord_mgt) recovered_ord_mgt,
COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
SUM (recovered_settlement) recovered_settlement,
COUNT (recovered_settlement) cnt_recovered_settlement,
SUM (client_agent) client_agent,
COUNT (client_agent) cnt_client_agent,
SUM (client_order_mgt) client_order_mgt,
COUNT (client_order_mgt) cnt_client_order_mgt,
SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
SUM (client_trans) client_trans,
COUNT (client_trans) cnt_client_trans,
SUM (client_clearing) client_clearing,
COUNT (client_clearing) cnt_client_clearing,
SUM (client_settle) client_settle,
COUNT (client_settle) cnt_client_settle,
SUM (chargeable_taxes) chargeable_taxes,
COUNT (chargeable_taxes) cnt_chargeable_taxes,
SUM (vendor_charge) vendor_charge,
COUNT (vendor_charge) cnt_vendor_charge,
SUM (routing_charges) routing_charges,
COUNT (routing_charges) cnt_routing_charges,
SUM (recovered_routing) recovered_routing,
COUNT (recovered_routing) cnt_recovered_routing,
SUM (client_routing) client_routing,
COUNT (client_routing) cnt_client_routing,
SUM (ticket_charges) ticket_charges,
COUNT (ticket_charges) cnt_ticket_charges,
SUM (recovered_ticket_charges) recovered_ticket_charges,
COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
FROM us_datamart_raw
GROUP BY order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id;
-- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
-- by Oracle with the associated materialized view.
CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
NOLOGGING
NOPARALLEL
COMPRESS 7;
No of rows: 2228558
The query (taken Mondrian) I run against each of them is:
select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
--, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
--, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
--, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
--, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
--, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
--, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
--, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
--, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
--, sum("MV_US_DATAMART"."COMMISSION") as "m9"
--, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
--, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
--,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
--,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
--, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
--, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
--, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
--, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
--,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
--, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
where I uncomment a column at a time and rerun. I improved the TimesTen results since my first post, by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
No Columns ORACLE TimesTen
1 1.05 0.94
2 1.07 1.47
3 2.04 1.8
4 2.06 2.08
5 2.09 2.4
6 3.01 2.67
7 4.02 3.06
8 4.03 3.37
9 4.04 3.62
10 4.06 4.02
11 4.08 4.31
12 4.09 4.61
13 5.01 4.76
14 5.02 5.06
15 5.04 5.25
16 5.05 5.48
17 5.08 5.84
18 6 6.21
19 6.02 6.34
20 6.04 6.75 -
CatSearch taking more time than full table scan
Hi
I have a table which has close to 140 million records. I have been exploring Oracle Text for search, so I created a CTXCAT index on the NAME column with the following:
begin
ctx_ddl.create_preference('FT_WL', 'BASIC_WORDLIST');
ctx_ddl.set_attribute ('FT_WL', 'prefix-index','TRUE');
end;
create index history_namex on history(name) indextype is ctxsys.ctxcat parameters ('WORDLIST FT_WL');
But when I executed the following queries, I found that catsearch is taking more time. The queries and their statistics are:
1. select * from history where catsearch(name, 'Jake%', null) > 0 and rownum < 200;
Elapsed : 00 : 00 : 00.13
Statistics :
112 recursive calls
0 db block gets
413 consistent gets
28 physical reads
0 redo size
33168 bytes sent via SQL*Net to client
663 bytes received via SQL*Net from client
15 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
199 rows processed
2. select * from history where name like 'Jake%' and rownum < 200;
Elapsed : 00 : 00 : 00.05
Statistics :
1 recursive calls
0 db block gets
220 consistent gets
383 physical reads
0 redo size
26148 bytes sent via SQL*Net to client
663 bytes received via SQL*Net from client
15 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
199 rows processed
Can anyone explain why this is happening?
PS : there is no conventional index on the name column.
Edited by: 976934 on Dec 14, 2012 3:32 AM
The asterisk (*) is simply the correct syntax for a wildcard in catsearch. If you use % instead, you will not get the same results. Please see the section of the online documentation below, which shows that the asterisk is the wildcard for catsearch.
http://docs.oracle.com/cd/E11882_01/text.112/e24436/csql.htm#CHDJBGHE
Additionally, if you want to limit the rows, then you need to get the matching results in an inner sub-query, then use the rownum in an outer query. The way that you were doing it, it first limits the rows to the first 200, then checks which of those meet the criteria, instead of the other way around. So, the correct syntax should be the following, which should also be the most efficient.
select * from
(select * from history
where catsearch (name, 'Jake*', null) > 0)
where rownum < 200; -
Zfs destroy command takes more time than usual
Hi,
When I run the destroy command it takes more time than usual.
I exported a LUN from this ZFS volume earlier.
Later I removed the LUN view and deleted the LUN. After that, when I run the command below, it takes more time (more than 5 mins and still running):
# zfs destroy storage/lu
Is there a way to quickly destroy the filesystem?
It looks like it is removing the allocated blocks.
capacity operations bandwidth
pool alloc free read write read write
storage0 107G 116T 3.32K 2.52K 3.48M 37.7M
storage0 107G 116T 840 551 1.80M 6.01M
storage0 106G 116T 273 0 586K 0
storage0 106G 116T 1.19K 0 2.61M 0
storage0 106G 116T 1.47K 0 3.20M -
Wait Step is taking more time than specified
Hi All,
I have included a wait step of 1 minute in BPM, but it is taking more than 30 minutes instead of 1 minute.
Could anyone help me with this?
Help will be rewarded.
Thanks & Regards,
Jyothi
Hi,
Check whether all the jobs are scheduled correctly in SWF_XI_CUSTOMIZING; if you find any jobs in error status, just perform an automatic BPM customizing.
Refer to my wiki: [BPM Trouble Shooting Deadline Step|https://wiki.sdn.sap.com/wiki/display/XI/BPMTroubleShootingDeadlineStep]
~SaNv... -
RMAN backup uses more space than actual DB size
Hi All,
Yesterday, we configured an RMAN backup for a RAC db on HP-UX. The total DB size is 82GB,
but RMAN took around 250GB of space for the backup, and the backup failed. The Oracle db version is 10.2.0.1.
$ ls -artl
total 520626852
drwxr-xr-x 23 root root 8192 Feb 10 15:49 ..
drwxr-xr-x 2 root root 96 Feb 10 16:09 lost+found
-rw-rw-rw- 1 oracle dba 290 Feb 10 20:35 backup.log
-rw-rw---- 1 oracle dba 22768640000 Feb 10 21:30 bk_4_1_678488912
-rw-rw---- 1 oracle dba 37469036544 Feb 10 21:40 bk_3_1_678488912
-rw-rw---- 1 oracle dba 28807331840 Feb 10 21:52 bk_5_1_678490229
-rw-rw---- 1 oracle dba 22659448832 Feb 10 22:01 bk_6_1_678490845
-rw-rw---- 1 oracle dba 22775595008 Feb 10 22:14 bk_7_1_678491581
-rw-rw---- 1 oracle dba 19507593216 Feb 10 22:15 bk_8_1_678492076
-rw-rw---- 1 oracle dba 18644811776 Feb 10 22:29 bk_9_1_678492883
-rw-rw---- 1 oracle dba 19040927744 Feb 10 22:31 bk_10_1_678492958
-rw-rw---- 1 oracle dba 18791776256 Feb 10 22:44 bk_11_1_678493794
-rw-rw---- 1 oracle dba 15491096576 Feb 10 22:45 bk_12_1_678493889
-rw-rw---- 1 oracle dba 19860652032 Feb 10 23:00 bk_13_1_678494656
-rw-rw---- 1 oracle dba 20742381568 Feb 10 23:01 bk_14_1_678494701
drwxrwxrwx 3 oracle dba 1024 Feb 10 23:01 .
Why is this happening? Is it a bug, or am I missing something? Please help.
Regards,
Anand.
We are using the below script:
rman target sys/system nocatalog msglog /backup/backup.log
RUN {
ALLOCATE CHANNEL ch00 TYPE DISK;
ALLOCATE CHANNEL ch01 TYPE DISK;
BACKUP
INCREMENTAL LEVEL=0
SKIP INACCESSIBLE
TAG hot_db_bk_level0
FILESPERSET 5
FORMAT '/backup/bk_%s_%p_%t'
DATABASE;
sql 'alter system archive log current';
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
ALLOCATE CHANNEL ch00 TYPE DISK;
ALLOCATE CHANNEL ch01 TYPE DISK;
BACKUP
filesperset 20
FORMAT '/backup/al_%s_%p_%t'
ARCHIVELOG from time 'sysdate - 1';
#For backing up the archive of the second server.
ALLOCATE CHANNEL ch02 TYPE DISK connect 'sys/system@qicdbr2';
BACKUP
filesperset 20
FORMAT 'al2_%s_%p_%t'
ARCHIVELOG from time 'sysdate - 1';
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
ALLOCATE CHANNEL ch00 TYPE DISK;
BACKUP
FORMAT '/backup/cntrl_%s_%p_%t'
CURRENT CONTROLFILE;
RELEASE CHANNEL ch00;
}
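The level 0 backup in the script above writes plain (uncompressed) backup sets, so the pieces can collectively approach the size of the allocated datafiles. Since this is a 10g database, binary compression is available; the block below is a hedged sketch of the same level 0 backup using AS COMPRESSED BACKUPSET (the channel names, tag, and format string simply mirror the original script and are illustrative, not a tested recommendation for this environment):

```
RUN {
  ALLOCATE CHANNEL ch00 TYPE DISK;
  ALLOCATE CHANNEL ch01 TYPE DISK;
  BACKUP AS COMPRESSED BACKUPSET
    INCREMENTAL LEVEL 0
    SKIP INACCESSIBLE
    TAG hot_db_bk_level0
    FILESPERSET 5
    FORMAT '/backup/bk_%s_%p_%t'
    DATABASE;
  RELEASE CHANNEL ch00;
  RELEASE CHANNEL ch01;
}
```

Compression trades CPU time for disk space, so expect the backup to take longer while producing substantially smaller pieces.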
RMAN> LIST BACKUPSET SUMMARY;
List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
1 B 0 A DISK 10-FEB-09 1 1 NO HOT_DB_BK_LEVEL0
2 B 0 A DISK 10-FEB-09 1 1 NO HOT_DB_BK_LEVEL0
3 B 0 A DISK 10-FEB-09 1 1 NO HOT_DB_BK_LEVEL0
4 B 0 A DISK 10-FEB-09 1 1 NO HOT_DB_BK_LEVEL0
5 B 0 A DISK 10-FEB-09 1 1 NO HOT_DB_BK_LEVEL0
6 B 0 A DISK 10-FEB-09 1 1 NO HOT_DB_BK_LEVEL0
7 B 0 A DISK 10-FEB-09 1 1 NO HOT_DB_BK_LEVEL0
8 B 0 A DISK 10-FEB-09 1 1 NO HOT_DB_BK_LEVEL0
9 B 0 A DISK 10-FEB-09 1 1 NO HOT_DB_BK_LEVEL0
10 B 0 A DISK 10-FEB-09 1 1 NO HOT_DB_BK_LEVEL0
11 B 0 A DISK 10-FEB-09 1 1 NO HOT_DB_BK_LEVEL0
12 B 0 A DISK 10-FEB-09 1 1 NO HOT_DB_BK_LEVEL0
RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/orahome/OraRAC/dbs/snapcf_QICDB1.f'; # default -
My MacBook Pro is taking a little more time than usual to load up pages. What can I do?
Thanks
This helped to speed up my copy of Pages.
Open Disk Utility, click Verify Disk Permissions, and then Repair Disk Permissions.
Tyler -
Connection through jdbc thin client taking more time than from sqlplus!!!
Hello All
Machines A and B
The application on A connects to B (9.2.0.6), the db server.
The schema is quite small, with a few tables and fewer than 500 rows in each table.
We are in the process of migrating the application schema from B to C (9.2.0.8).
But the response time is higher when the application fetches from C,
even when selecting sysdate from dual.
The application is using the jdbc thin client for fetching the data.
When the same SQL is executed (from A to C) with
sqlplus -s user/pass @execute.sql, it gets done in a fraction of a second.
But when the same is done through the application, which uses the JDBC thin client, it takes a few seconds
to complete.
When tried with a small Java program that uses classes12.jar (from A to C):
long start = System.currentTimeMillis();
Connection conn = DriverManager.getConnection(URL, UID, PASS);
long stop = System.currentTimeMillis();
System.out.println("Connection time in milli sec: " + (stop - start));
...it was found that creating the connection was taking the time.
But the same does not happen when tried through sqlplus.
Could someone throw some light into this?
What could be the reason for jdbc to get slower while establishing connections?
TIA,
JJ
Are you using the latest drivers? http://www.oracle.com/technology/tech/java/sqlj_jdbc/index.html
You may want to check some options for reducing JDBC connection cost in the OTN samples: http://www.oracle.com/technology/sample_code/tech/java/sqlj_jdbc/index.html -
Hi
Oracle 10.1.0.2.0
OS IBM/AIX RISC/6000
I have a db of around 150GB. Normally I take a cold backup (using OS commands), which takes around 4 hours. Today I wanted to use RMAN to take an online backup to disk, and it took around 8 hours to complete. I used the following:
run {
allocate channel d1 type disk;
allocate channel d2 type disk;
backup database plus archivelog;
}
Do I need to configure more channels? What else can I do to decrease the time taken by the backup?
Thanks
Hi,
To increase the backup speed in RMAN:
** Allocate multiple channels and assign files to specific channels.
** Tune disk buffers and channel limits, e.g. FILESPERSET = number, MAXOPENFILES = number.
** Set the backup duration.
Eg: BACKUP DURATION 5:00 DATABASE;
BACKUP DURATION 1:00 PARTIAL TABLESPACE <tablespace_name>;
BACKUP DURATION 2:00 PARTIAL MINIMIZE TIME DATABASE;
BACKUP DURATION 2:00 PARTIAL MINIMIZE LOAD DATABASE;
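The channel and buffer tips above can be combined into a single RUN block; the sketch below is illustrative only (channel names, channel count, and the FILESPERSET/MAXOPENFILES values are assumptions to tune against your own I/O layout, not tested recommendations):

```
RUN {
  ALLOCATE CHANNEL d1 TYPE DISK MAXOPENFILES 4;
  ALLOCATE CHANNEL d2 TYPE DISK MAXOPENFILES 4;
  ALLOCATE CHANNEL d3 TYPE DISK MAXOPENFILES 4;
  BACKUP
    FILESPERSET 8
    DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL d1;
  RELEASE CHANNEL d2;
  RELEASE CHANNEL d3;
}
```

More channels only help while the disks can keep up; past that point, extra channels just contend for the same spindles.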
Good luck..
Edited by: sshafiulla on Jan 24, 2010 11:23 AM