OR is taking much more time than UNION

hi gems..
I have written a query using the UNION clause and it took 12 seconds to return its result.
Then I wrote the same query using the OR operator, and it took 78 seconds to return the result set.
The tables referenced by this query have no indexes.
The cost shown in the plan for the OR query is also much lower than for the UNION query.
Please suggest why the OR version is taking more time.
Thanks in advance.

Here's a ridiculously simple example (these tables don't even have any rows in them).
If you had separate indexes on col1 and col2, the optimizer might use the indexes in the UNION but not in the OR statement.
Which is faster will depend on the usual list of things.
Of course, the UNION also requires a sort operation:
SQL> create table table1
  2  (col1 number, col2 number, col3 number, col4 number);
Table created.
SQL> create index t1_idx1 on table1(col1);
Index created.
SQL> create index t1_idx2 on table1(col2);
Index created.
SQL> explain plan for
  2  select col1, col2, col3, col4
  3  from table1
  4  where col1 >= 123
  5  or col2 <= 456;
Explained.
SQL> @xp
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |        |     1 |    52 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TABLE1 |     1 |    52 |     2   (0)| 00:00:01 |
Predicate Information (identified by operation id):
   1 - filter("COL1">=123 OR "COL2"<=456)
SQL> explain plan for
  2  select col1, col2, col3, col4
  3  from table1
  4  where col1 >= 123
  5  union
  6  select col1, col2, col3, col4
  7  from table1
  8  where col2 <= 456;
Explained.
SQL> @xp
| Id  | Operation                     | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT              |         |     2 |   104 |     4  (75)| 00:00:01 |
|   1 |  SORT UNIQUE                  |         |     2 |   104 |     4  (75)| 00:00:01 |
|   2 |   UNION-ALL                   |         |       |       |            |          |
|   3 |    TABLE ACCESS BY INDEX ROWID| TABLE1  |     1 |    52 |     1   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | T1_IDX1 |     1 |       |     1   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID| TABLE1  |     1 |    52 |     1   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN          | T1_IDX2 |     1 |       |     1   (0)| 00:00:01 |
Predicate Information (identified by operation id):
   4 - access("COL1">=123)
   6 - access("COL2"<=456)

Similar Messages

  • Test program taking much more time on high-end server T5440 than on low-end server T5220

    Hi all,
    I have written the following program and run it on both a T5440 [1.4 GHz, 95 GB RAM, 32 core(s), 256 logical (virtual) processors] and a T5220 [UltraSPARC-T2 (chipid 0, clock 1165 MHz), 8GB RAM, 1 core, 8 virtual processors] on the same OS version. I found that the T5440 server takes more time than the T5220. Please find the details below.
    test1.cpp
    #include <iostream>
    #include <pthread.h>
    using namespace std;
    #define NUM_OF_THREADS 20
    struct ABCDEF {
        char A[1024];
        char B[1024];
    };
    void *start_func(void *)
    {
        long long i = 6000;
        while (i--) {
            ABCDEF *sdf = new ABCDEF;
            delete sdf;
            sdf = NULL;
        }
        return NULL;
    }
    int main(int argc, char *argv[])
    {
        pthread_t tid[50];
        for (int i = 0; i < NUM_OF_THREADS; i++) {
            pthread_create(&tid[i], NULL, start_func, NULL);
            cout << "Creating thread " << i << endl;
        }
        for (int i = 0; i < NUM_OF_THREADS; i++) {
            pthread_join(tid[i], NULL);
            cout << "Waiting for thread " << i << endl;
        }
        return 0;
    }
    Executing the above program on the T5440 takes:
    real 0.78
    user 3.94s
    sys 0.05
    Executing the above program on the T5220 takes:
    real 0.23
    user 1.43s
    sys 0.03
    It seems that the T5440, the high-end server, takes almost 3 times as long as the T5220, the low-end server.
    However, I have one more observation. I tried the following program:
    test2.cpp
    #include <iostream>
    using namespace std;
    struct ABCDEF {
        char A[1024];
        char B[1024];
    };
    int main(int argc, char *argv[])
    {
        long long i = 6000000;
        while (i--) {
            ABCDEF *sdf = new ABCDEF;
            delete sdf;
            sdf = NULL;
        }
        return 0;
    }
    It seems that the T5440 server is fast in this case compared to the T5220 server.
    Could anyone please help me find the exact reason for this behaviour, as my application is also slow on this T5440 server? I have posted about this issue earlier as well.
    Thanks in advance !!!
    regards,
    Sanjay

    You already asked this question...
    48 hours earlier, and in the same Solaris forum space
    Repeating the post isn't going to get you a response any faster, and may actually make people NOT respond because you are not showing any patience.
    These are end-user community forums, not a place to expect Oracle Technical Support.   There is no obligation that there be a response.
    If you have a business-critical issue and hope to get accurate and timely response, then use your service contract credentials to open a Support request.
    This new redundant post is locked.
    Edit:
    It appears that at the same time the O.P. posted this redundant thread, they also posted the same question to at least one other forum web site:
    http://www.unix.com/solaris/229269-test-program-running-taking-much-more-time-high-end-server-t5440-than-low-end-server-t5220.html

  • Why is this query taking much longer than expected?

    Hi,
    I need expert support on the below-mentioned issue:
    Why is this query taking much longer than expected? Sometimes I am getting a connection timeout error. Is there a better way to achieve the result in the shortest time? Below, please find the DDL & DML:
    DDL
    BHDCollections
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[BHDCollections](
     [BHDCollectionid] [bigint] IDENTITY(1,1) NOT NULL,
     [GroupMemberid] [int] NOT NULL,
     [BHDDate] [datetime] NOT NULL,
     [BHDShift] [varchar](10) NULL,
     [SlipValue] [decimal](18, 3) NOT NULL,
     [ProcessedValue] [decimal](18, 3) NOT NULL,
     [BHDRemarks] [varchar](500) NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_BHDCollections] PRIMARY KEY CLUSTERED
     (
      [BHDCollectionid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    BHDCollectionsDet
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[BHDCollectionsDet](
     [CollectionDetailid] [bigint] IDENTITY(1,1) NOT NULL,
     [BHDCollectionid] [bigint] NOT NULL,
     [Currencyid] [int] NOT NULL,
     [Denomination] [decimal](18, 3) NOT NULL,
     [Quantity] [int] NOT NULL,
     CONSTRAINT [PK_BHDCollectionsDet] PRIMARY KEY CLUSTERED
     (
      [CollectionDetailid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    Banks
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[Banks](
     [Bankid] [int] IDENTITY(1,1) NOT NULL,
     [Bankname] [varchar](50) NOT NULL,
     [Bankabbr] [varchar](50) NULL,
     [BankContact] [varchar](50) NULL,
     [BankTel] [varchar](25) NULL,
     [BankFax] [varchar](25) NULL,
     [BankEmail] [varchar](50) NULL,
     [BankActive] [bit] NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_Banks] PRIMARY KEY CLUSTERED
     (
      [Bankid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    Groupmembers
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[GroupMembers](
     [GroupMemberid] [int] IDENTITY(1,1) NOT NULL,
     [Groupid] [int] NOT NULL,
     [BAID] [int] NOT NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_GroupMembers] PRIMARY KEY CLUSTERED
     (
      [GroupMemberid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[GroupMembers]  WITH CHECK ADD  CONSTRAINT [FK_GroupMembers_BankAccounts] FOREIGN KEY([BAID])
    REFERENCES [dbo].[BankAccounts] ([BAID])
    GO
    ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_BankAccounts]
    GO
    ALTER TABLE [dbo].[GroupMembers]  WITH CHECK ADD  CONSTRAINT [FK_GroupMembers_Groups] FOREIGN KEY([Groupid])
    REFERENCES [dbo].[Groups] ([Groupid])
    GO
    ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_Groups]
    BankAccounts
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[BankAccounts](
     [BAID] [int] IDENTITY(1,1) NOT NULL,
     [CustomerID] [int] NOT NULL,
     [Locationid] [varchar](25) NOT NULL,
     [Bankid] [int] NOT NULL,
     [BankAccountNo] [varchar](50) NOT NULL,
     CONSTRAINT [PK_BankAccounts] PRIMARY KEY CLUSTERED
     (
      [BAID] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[BankAccounts]  WITH CHECK ADD  CONSTRAINT [FK_BankAccounts_Banks] FOREIGN KEY([Bankid])
    REFERENCES [dbo].[Banks] ([Bankid])
    GO
    ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Banks]
    GO
    ALTER TABLE [dbo].[BankAccounts]  WITH CHECK ADD  CONSTRAINT [FK_BankAccounts_Locations1] FOREIGN KEY([Locationid])
    REFERENCES [dbo].[Locations] ([Locationid])
    GO
    ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Locations1]
    Currency
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[Currency](
     [Currencyid] [int] IDENTITY(1,1) NOT NULL,
     [CurrencyISOCode] [varchar](20) NOT NULL,
     [CurrencyCountry] [varchar](50) NULL,
     [Currency] [varchar](50) NULL,
     CONSTRAINT [PK_Currency] PRIMARY KEY CLUSTERED
     (
      [Currencyid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    CurrencyDetails
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[CurrencyDetails](
     [CurDenid] [int] IDENTITY(1,1) NOT NULL,
     [Currencyid] [int] NOT NULL,
     [Denomination] [decimal](15, 3) NOT NULL,
     [DenominationType] [varchar](25) NOT NULL,
     CONSTRAINT [PK_CurrencyDetails] PRIMARY KEY CLUSTERED
     (
      [CurDenid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    QUERY
    WITH TEMP_TABLE AS (
    SELECT     0 AS COINS, BHDCollectionsDet.Quantity AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
                          (BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
    FROM         BHDCollections INNER JOIN
                          BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
                          GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
                          BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
                          Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
                          CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
                          Banks ON BankAccounts.Bankid = Banks.Bankid
    GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
                          CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
    HAVING      (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'Currency') AND
                          (CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)
    UNION ALL
    SELECT     BHDCollectionsDet.Quantity AS COINS, 0 AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
                          (BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
    FROM         BHDCollections INNER JOIN
                          BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
                          GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
                          BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
                          Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
                          CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
                          Banks ON BankAccounts.Bankid = Banks.Bankid
    GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
                          CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
    HAVING      (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'COIN') AND
                          (CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)),
    TEMP_TABLE2 AS (
    SELECT CollectionDate, Bankname, DSLIPS AS DSLIPS, SUM(BN) AS BN, SUM(COINS) AS COINS FROM TEMP_TABLE GROUP BY CollectionDate, DSLIPS, Bankname)
    SELECT CollectionDate,Bankname,count(DSLIPS) AS DSLIPS,sum(BN) AS BN,sum(COINS) AS coins FROM TEMP_TABLE2 Group By CollectionDate,Bankname
    HAVING COUNT(DSLIPS)<>0;

    Without seeing an execution plan of the query it is hard to suggest something useful. Try inserting the result of the UNION ALL into a temporary table and then performing the aggregation on that table, not on a CTE.
    Just:
    SELECT CollectionDate, Bankname, DSLIPS AS DSLIPS, SUM(BN) AS BN, SUM(COINS) AS COINS
    FROM #tmp
    GROUP BY CollectionDate, DSLIPS, Bankname
    HAVING COUNT(DSLIPS) <> 0;
    Best Regards,Uri Dimant SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/

  • T-code SUIM taking much more time to generate output for profile changes

    Hi All
    We want to extract a report of profile additions and deletions for users in ECC 6. While executing the t-code SUIM, it is taking much more time (more than 20 hrs). This problem started after a patch application.
    Please suggest a solution to minimize the report generation time.
    Thanks-
    Guru Prasad Dwivedi

    Hello Prasad,
    The reason for the performance trouble is a new feature for user change documents. Since notes 874850 and 1015043 you get a more complete overview of the changes to a user.
    The disadvantage of that new feature is that in some customer usage scenarios the performance is very poor. That's the case if the central change documents are also used intensively by other applications and the tables CDPOS, CDHDR, ... contain a very large number of rows. Unfortunately, the user change documents cannot be searched by the key columns of the central change documents, which explains the bad response time.
    What now ... ?
    There are some workarounds to get the change documents faster.
    1st - You can get the former report output and performance if you
           use the report RSUSR100 instead of the new RSUSR100N in
           separate mode.
    2nd - If you want to use the new report RSUSR100N directly and only
           want the information about the traditional topics (content of
           the USH* tables), mark only those search areas on the
           'user attributes' tab strip to get better performance.
         - Furthermore, limit the date range if possible.
    3rd - You should regularly (monthly) archive the user-relevant
           documents for PFCG and IDENTITY from the central change
           documents. As per our note 1079207, chapter 3, you can reload
           those archives into more selective tables. The selection of
           change documents will be considerably faster over the
           reloaded archived documents than over the central change
           document tables.
    Best Regards,
    Guilherme de Oliveira.

  • After upgrading to Lion, starting up is taking much more time. Is there any solution?

    After upgrading to Lion, starting up is taking much more time. Is there any solution to this?
    I have already tried the usual things, such as repairing disk permissions.

    Have you checked what is being started at the same time?
    Lion will start more applications than were being started in the past.
    Allan

  • Why did SQL2 take much more time than SQL1?

    I ran these 2 SQLs sequentially.
    --- SQL1: It took 245 seconds.
    create table PORTAL_DAYLOG_100118_bak
    as
    select * from PORTAL_DAYLOG_100118;
    --- SQL2: It took 3105 seconds.
    create table PORTAL_DAYLOG_100121_bak
    as
    select * from PORTAL_DAYLOG_100121;
    It is really strange that SQL2 took almost 13 times as long as SQL1, with nearly the same data volume and the same data structure in the same tablespace.
    Could anyone tell me the reason, or how I could find out why? (A trace sketch follows at the end of this post.)
    Here is more detail info. for my case,
    --- Server:
    [@wapbi.no.sohu.com ~]$ uname -a
    Linux test 2.6.18-128.el5 #1 SMP Wed Dec 17 11:41:38 EST 2008 x86_64 x86_64 x86_64 GNU/Linux
    --- DB
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    --- Tablespace:
    CREATE TABLESPACE PORTAL DATAFILE
      '/data/oradata/wapbi/portal01.dbf' SIZE 19456M AUTOEXTEND ON NEXT 1024M MAXSIZE UNLIMITED,
      '/data/oradata/wapbi/portal02.dbf' SIZE 17408M AUTOEXTEND ON NEXT 1024M MAXSIZE UNLIMITED
    LOGGING
    ONLINE
    PERMANENT
    EXTENT MANAGEMENT LOCAL AUTOALLOCATE
    BLOCKSIZE 8K
    SEGMENT SPACE MANAGEMENT AUTO
    FLASHBACK ON;
    --- Tables:
    SQL> select table_name,num_rows,blocks,avg_row_len from dba_tables
      2  where table_name in ('PORTAL_DAYLOG_100118','PORTAL_DAYLOG_100121');
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN
    PORTAL_DAYLOG_100118             20808536     269760          85
    PORTAL_DAYLOG_100121             33747911     440512          86
    CREATE TABLE PORTAL_DAYLOG_100118 (
      IP           VARCHAR2(20 BYTE),
      NODEPATH     VARCHAR2(50 BYTE),
      PG           VARCHAR2(20 BYTE),
      PAGETYPE     INTEGER,
      CLK          VARCHAR2(20 BYTE),
      FR           VARCHAR2(20 BYTE),
      PHID         INTEGER,
      ANONYMOUSID  VARCHAR2(50 BYTE),
      USID         VARCHAR2(50 BYTE),
      PASSPORT     VARCHAR2(200 BYTE),
      M_TIME       CHAR(4 BYTE)                     NOT NULL,
      M_DATE       CHAR(6 BYTE)                     NOT NULL,
      LOGDATE      DATE
    )
    LOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;
    CREATE TABLE PORTAL_DAYLOG_100121 (
      IP           VARCHAR2(20 BYTE),
      NODEPATH     VARCHAR2(50 BYTE),
      PG           VARCHAR2(20 BYTE),
      PAGETYPE     INTEGER,
      CLK          VARCHAR2(20 BYTE),
      FR           VARCHAR2(20 BYTE),
      PHID         INTEGER,
      ANONYMOUSID  VARCHAR2(50 BYTE),
      USID         VARCHAR2(50 BYTE),
      PASSPORT     VARCHAR2(200 BYTE),
      M_TIME       CHAR(4 BYTE)                     NOT NULL,
      M_DATE       CHAR(6 BYTE)                     NOT NULL,
      LOGDATE      DATE
    )
    LOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;
    Any comment will be really appreciated!
    Satine
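    (A hedged aside, not from the original post: one standard way to answer "how could I find out why" is to trace both statements and format the trace files with tkprof, which is evidently what produced the output in the reply below. A minimal sketch:)
    -- Trace the CTAS, then format the resulting trace file with tkprof.
    alter session set events '10046 trace name context forever, level 8';
    create table PORTAL_DAYLOG_100118_bak as select * from PORTAL_DAYLOG_100118;
    alter session set events '10046 trace name context off';
    -- then, from the OS shell: tkprof <tracefile> out.txt sys=no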

    Hey Anurag,
    Thank you for your help!
    Here it is.
    SQL1:
    create table portal.PORTAL_DAYLOG_100118_TEST
    as
    select * from portal.PORTAL_DAYLOG_100118
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1    374.69     519.05     264982     265815     274858    20808536
    Fetch        0      0.00       0.00          0          0          0           0
    total        2    374.69     519.05     264982     265815     274858    20808536
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS
    Rows     Row Source Operation
          0  LOAD AS SELECT  (cr=268138 pr=264982 pw=264413 time=0 us)
    20808536   TABLE ACCESS FULL PORTAL_DAYLOG_100118 (cr=265175 pr=264981 pw=0 time=45792172 us cost=73478 size=1768725560 card=20808536)
    SQL2:
    create table portal.PORTAL_DAYLOG_100121_TEST
    as
    select * from portal.PORTAL_DAYLOG_100121
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1   1465.72    1753.35     290959     291904     300738    22753695
    Fetch        0      0.00       0.00          0          0          0           0
    total        2   1465.72    1753.35     290959     291904     300738    22753695
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS
    Rows     Row Source Operation
          0  LOAD AS SELECT  (cr=295377 pr=290960 pw=289966 time=0 us)
    22753695   TABLE ACCESS FULL PORTAL_DAYLOG_100121 (cr=291255 pr=290958 pw=0 time=56167952 us cost=80752 size=1956817770 card=22753695)
    Best wishes,
    Satine

  • HT201274 My iPhone 4 is taking more than 5 hours to erase all the data and is still in process. How much more time do I have to wait for my mobile to turn on?

    My iPhone 4 is taking more than 5 hours to erase all the data and it is still in process. How much more time do I have to wait for my mobile to turn on?

    I'm having this EXACT same problem with my iPhone 4, and I have the same computer stats (I have a Samsung Series 7)

  • Level1 backup is taking more time than Level0

    The level 1 backup is taking more time than level 0; I am really frustrated about how this could happen. I have a 6.5GB database. Level 0 took 8 hrs, but level 1 is taking more than 8 hrs. Please help me in this regard.

    Ogan Ozdogan wrote:
    Charles,
    Enabling the block change tracking will indeed be faster than what he has now. I think this does not address the question of the OP, unless you are saying the incremental backup without block change tracking is slower than a level 0 (full) backup?
    Thank you in anticipation.
    Ogan
    Ogan,
    I can't explain why a 6.5GB level 0 RMAN backup would require 8 hours to complete (maybe a very slow destination device connected by 10Mb/s Ethernet) - I would expect that it should complete in a couple of minutes.
    An incremental level 1 backup without a block change tracking file could take longer than a level 0 backup. I once came across a well-written description of why that can happen, but I can't locate the source at the moment. The longer run time might be related to the additional code paths required to constantly compare the SCN of each block, and to the variable write rate, which may affect some devices, such as tape drives.
    A paraphrase from the book "Oracle Database 10g RMAN Backup & Recovery"
    "Incremental backups must check the header of each block to discover if it has changed since the last incremental backup - that means an incremental backup may not complete much faster than a full backup."
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Query in TimesTen taking more time than query in Oracle database

    Hi,
    Can anyone please explain why a query in TimesTen is taking more time than the same query in the Oracle database?
    Below I describe in detail my settings and what I have done, step by step.
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0)...
    CREATE TABLE student (
    id NUMBER(9) PRIMARY KEY,
    first_name VARCHAR2(10),
    last_name VARCHAR2(10)
    );
    2.THIS IS THE ANONYMOUS BLOCK I USE TO
    POPULATE THE STUDENT TABLE(TOTAL 2599999 ROWS)...
    declare
    firstname varchar2(12);
    lastname varchar2(12);
    catt number(9);
    begin
    for cntr in 1..2599999 loop
    firstname:=(cntr+8)||'f';
    lastname:=(cntr+2)||'l';
    if cntr like '%9999' then
    dbms_output.put_line(cntr);
    end if;
    insert into student values(cntr,firstname, lastname);
    end loop;
    end;
    3. MY DSN IS SET THE FOLLWING WAY..
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
    I SET THE TIMING..
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain why the query in TimesTen takes more time than the query in the Oracle database?
    Message was edited by: Dipesh Majumdar
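    (A hedged aside, not from the original thread: both ttIsql queries above filter on FIRST_NAME, which has no index in the cached STUDENT table, so TimesTen scans all ~2.6 million rows. If your TimesTen version and cache group type permit extra indexes on the cache table, this is the obvious first experiment:)
    -- Sketch: index the lookup column so the SELECTs above become index
    -- probes instead of full table scans.
    CREATE INDEX student_fn ON student(first_name);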

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    (
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    )
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken Mondrian) I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    I uncomment one column at a time and rerun. I improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results I got were (times in seconds):
    No. of columns     ORACLE     TimesTen
    1     1.05     0.94     
    2     1.07     1.47     
    3     2.04     1.8     
    4     2.06     2.08     
    5     2.09     2.4     
    6     3.01     2.67     
    7     4.02     3.06     
    8     4.03     3.37     
    9     4.04     3.62     
    10     4.06     4.02     
    11     4.08     4.31     
    12     4.09     4.61     
    13     5.01     4.76     
    14     5.02     5.06     
    15     5.04     5.25     
    16     5.05     5.48     
    17     5.08     5.84     
    18     6     6.21     
    19     6.02     6.34     
    20     6.04     6.75

  • SSRS report is consuming much more time to fetch data from DB even though a direct run of the SP takes less than a second

    Hi,
    we are using SQL SERVER 2008R2 X64 RTM version. 
    One of the SSRS reports designed by a developer is consuming much more time (5 to 6 minutes) to fetch data from the DB, even though a direct run of the stored procedure (called in the report) takes less than a second to display the result set.
    Please help.
    Regards Naveen MSSQL DBA

    Hi Naveen,
    Based on my understanding, retrieving the data with the stored procedure in the dataset designer takes little time, but it takes a long time for the report to display the data, right?
    In Reporting Services, the total time to generate a report includes TimeDataRetrieval, TimeProcessing and TimeRendering. In your scenario, since you mentioned retrieving data takes little time, you should check the ExecutionLog3 view in the ReportServer database to find which section costs the most time, TimeProcessing or TimeRendering. Then you can refer to this article to optimize your report:
    Troubleshooting Reports: Report Performance.
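    A hedged sketch of that check (ExecutionLog3 is a view in the ReportServer database; the TOP and ordering are just for convenience):
    -- See where each recent execution spends its time.
    SELECT TOP (20)
           ItemPath, TimeStart,
           TimeDataRetrieval, TimeProcessing, TimeRendering
    FROM   ReportServer.dbo.ExecutionLog3
    ORDER BY TimeStart DESC;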
    Besides, if parameters exist in the report, you should declare variables inside the stored procedure and assign the incoming parameters to those variables. For more information, please refer to the similar thread:
    Fast query runs slow in SSRS.
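    A hedged illustration of that local-variable pattern (the procedure, table, and column names are made up, not from the report in question):
    -- Assigning incoming parameters to local variables sidesteps parameter
    -- sniffing: the plan is costed for an average value rather than the
    -- first value the procedure happened to be compiled with.
    CREATE PROCEDURE dbo.usp_ReportData
        @FromDate datetime,
        @ToDate   datetime
    AS
    BEGIN
        DECLARE @From datetime = @FromDate,
                @To   datetime = @ToDate;
        SELECT OrderDate, Amount
        FROM   dbo.Orders
        WHERE  OrderDate BETWEEN @From AND @To;
    END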
    If you have any question, please feel free to ask.
    Best regards,
    Qiuyun Yu
    TechNet Community Support

  • Why are my apps taking up more space than what the App Store says they will? Like we're talking a full GB more than what it should.

    Why are my apps taking up more space than what the App Store says they will? We're talking half a GB more than it should be.
    For example, the app Modern Combat 4 says it should only be 1.58 GB on the App Store, but when I go into Settings > General > Usage it says it takes up 2.0 GB.
    I've tried deleting the apps and then reinstalling them, but they still take up much more space.
    This happens for multiple apps of mine, some of which I have never even opened.
    I have an iPod touch (4th gen, 32GB).
    I link to a Mac computer.
    I have iOS version 6.1.3.
    I noticed this a long time ago but didn't need the space. Now I need the space!
    Please help!!

    Because you are seeing the file size of the compressed file. The file gets expanded when installed, and that takes up more space. Also, data is kept in the app.

  • Clear data taking double the usual time...

    Hello,
    We are on 11.1.1.3 and force-archive the database to an ABC.arc file. All of a sudden we noticed the ABC.arc file doubled in size (from 3GB to 6GB), and the calc script to clear data is taking double the previous time (earlier it was 30 mins and now it is 1 hour).
    But the daily data files are the same size as before, about 90MB, and the other 2 metadata files are 80MB & 60KB.
    There was also no change in the time to archive the database and to calculate the aggregation of the whole database, etc.
    Only the clear-data script itself is taking double the time. From the EAS logs we noticed that the number of blocks to be cleared is the same (3000) on all days, and the number of fixed account members is also the same on all days.
    There was no suspicious log entry while the clear-data script was running; it is the same as before.
    So, does anyone have an idea what might be wrong, and what artifacts in the .arc file might cause its size to almost double?
    Any help would be appreciated.
    Thanks,
    Edited by: user11150227 on Dec 24, 2012 7:24 AM

    If it was taking 30 mins to clear data before, something else was wrong to begin with. Also, FIXing on dense members with calc commands (CLEARDATA, COPY DATA) degrades performance severely.
    For what it's worth, you will probably get more replies with a more professional post, where 'are' is not abbreviated 'r' and 'would' is not abbreviated 'wud'.
    -Matt

  • Why does Safari take much longer than other browsers to open a page?

    I have used Windows since the start of my computing life... Now I have replaced my Windows laptop with a MacBook Pro. Everything is amazing, except one thing: Safari. It takes too long to open a web page, even a simple one, and takes much more time to load Flash content. I don't know why Safari takes so much longer compared to other web browsers.

    I found the same problem after moving to the Mac earlier this week.
    I guess here's the solution: https://discussions.apple.com/thread/5227588
    Try uninstalling/disabling the third-party extensions.

  • My iPhone is currently using battery life much more quickly than in the past, and it also has an icon that is staying on the screen, to the left of the battery charge percentage, that looks like a lock with a circle around it

    I have an iPhone that is using battery life much more quickly than in the past; a full charge is going to zero in half a day or so, without much use.
    An icon is also appearing next to the charge percentage in the top right, and the icon is a circle around a lock.
    Can anyone tell me what the problem is, and what the icon means?

    The icon means your display orientation is locked to vertical, so it won't change when you turn the phone sideways.
    The first thing you should check regarding battery life is multitasking, which also happens to be where the orientation lock can be turned off. Check out http://support.apple.com/kb/HT4211 and go from there.

  • Since loading Lion, I've experienced much more instability than Snow Leopard. In particular, Mail crashes with regularity, full-screen apps seem to run slower and show the beach ball more often for longer, etc.  I'm disappointed with the performance. Any

    Since loading Lion, I've experienced much more instability than Snow Leopard. In particular, Mail crashes with regularity, full-screen apps seem to run slower and show the beach ball more often for longer, etc. I love the features, but I'm disappointed with the performance. Any help coming from Apple?  I've been sending them so many reports after crashes, that their file must be full!

    Summoning max. courage, I did what you advised. Here is the result. What does this tell you? My Lion 7.2 (mid 2011 iMac) has several annoying glitches (which I have so far tolerated through gritted teeth) but none that have actually stopped me working.
    BTW, I see several items involving CleanMyMac which I did not know I had. It is generally vilified as a trouble-maker. Spotlight can't find an app or a utility of that name. How can I get rid of what's there, please? Just delete?
    Last login: Thu Nov  3 20:55:11 on console
    Steve-Kirkbys-iMac:~ stevekirkby$ kextstat -kl | awk ' !/apple/ { print $6 $7 } '
    com.AmbrosiaSW.AudioSupport(4.0)
    Steve-Kirkbys-iMac:~ stevekirkby$ sudo launchctl list | sed 1d | awk ' !/0x|apple|com\.vix|edu\.|org\./ { print $3 } '
    Password:
    com.openssh.sshd
    com.stclairsoft.DefaultFolderXAgent
    com.microsoft.office.licensing.helper
    com.bombich.ccc.scheduledtask.067493DB-2728-4DF3-87D8-092EF69086E8
    com.bombich.ccc
    com.adobe.SwitchBoard
    Steve-Kirkbys-iMac:~ stevekirkby$ launchctl list | sed 1d | awk ' !/0x|apple|edu\.|org\./ { print $3 } '
    com.sony.PMBPortable.AutoRun
    uk.co.markallan.clamxav.freshclam
    com.veoh.webplayer.startup
    com.macpaw.CleanMyMac.volumeWatcher
    com.macpaw.CleanMyMac.trashSizeWatcher
    com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae
    com.adobe.AAM.Scheduler-1.0
    Steve-Kirkbys-iMac:~ stevekirkby$ ls -1A {,/}Library/{Ad,Compon,Ex,Fram,In,La,Mail/Bu,P*P,Priv,Qu,Scripti,Sta}* 2> /dev/null
    /Library/Components:
    /Library/Extensions:
    /Library/Frameworks:
    AEProfiling.framework
    AERegistration.framework
    ApplicationEnhancer.framework
    AudioMixEngine.framework
    FxPlug.framework
    NyxAudioAnalysis.framework
    PluginManager.framework
    ProFX.framework
    ProMetadataSupport.framework
    TSLicense.framework
    iLifeFaceRecognition.framework
    iLifeKit.framework
    iLifePageLayout.framework
    iLifeSQLAccess.framework
    iLifeSlideshow.framework
    /Library/Input Methods:
    /Library/Internet Plug-Ins:
    AdobePDFViewer.plugin
    EPPEX Plugin.plugin
    Flash Player.plugin
    Flip4Mac WMV Plugin.plugin
    JavaAppletPlugin.plugin
    Quartz Composer.webplugin
    QuickTime Plugin.plugin
    SharePointBrowserPlugin.plugin
    SharePointWebKitPlugin.webplugin
    Silverlight.plugin
    flashplayer.xpt
    iPhotoPhotocast.plugin
    nsIQTScriptablePlugin.xpt
    /Library/LaunchAgents:
    com.adobe.AAM.Updater-1.0.plist
    com.sony.PMBPortable.AutoRun.plist
    /Library/LaunchDaemons:
    com.adobe.SwitchBoard.plist
    com.apple.remotepairtool.plist
    com.bombich.ccc.plist
    com.bombich.ccc.scheduledtask.067493DB-2728-4DF3-87D8-092EF69086E8.plist
    com.microsoft.office.licensing.helper.plist
    com.stclairsoft.DefaultFolderXAgent.plist
    /Library/PreferencePanes:
    .DS_Store
    Application Enhancer.prefPane
    Default Folder X.prefPane
    DejaVu.prefPane
    Flash Player.prefPane
    Flip4Mac WMV.prefPane
    /Library/PrivilegedHelperTools:
    com.bombich.ccc
    com.microsoft.office.licensing.helper
    com.stclairsoft.DefaultFolderXAgent
    /Library/QuickLook:
    iWork.qlgenerator
    /Library/QuickTime:
    AppleIntermediateCodec.component
    AppleMPEG2Codec.component
    DesktopVideoOut.component
    DivX 6 Decoder.component
    FCP Uncompressed 422.component
    Flip4Mac WMV Advanced.component
    Flip4Mac WMV Export.component
    Flip4Mac WMV Import.component
    LiveType.component
    /Library/ScriptingAdditions:
    .DS_Store
    Adobe Unit Types.osax
    Default Folder X Addition.osax
    /Library/StartupItems:
    Library/Address Book Plug-Ins:
    Library/Frameworks:
    EWSMac.framework
    Library/Input Methods:
    .localized
    Library/Internet Plug-Ins:
    Library/LaunchAgents:
    com.adobe.AAM.Updater-1.0.plist
    com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae.plist
    com.macpaw.CleanMyMac.trashSizeWatcher.plist
    com.macpaw.CleanMyMac.volumeWatcher.plist
    com.veoh.webplayer.startup.plist
    uk.co.markallan.clamxav.freshclam.plist
    Library/PreferencePanes:
    .DS_Store
    Perian.prefPane
    WindowShade X.prefPane
    Library/QuickTime:
    AC3MovieImport.component
    Perian.component
    Library/ScriptingAdditions:
    Steve-Kirkbys-iMac:~ stevekirkby$
