Query taking more than 1/2 hour for 80 million rows in fact table

Hi All,
I am stuck on this query: it is taking more than 35 minutes to execute for 80 million rows, and my SLA is under 30 minutes for 160 million rows, i.e. double the volume.
Below are the query and the execution plan.
SELECT txn_id AS txn_id,
       acntng_entry_src AS txn_src,
       f.hrarchy_dmn_id AS hrarchy_dmn_id,
       f.prduct_dmn_id AS prduct_dmn_id,
       f.pstng_crncy_id AS pstng_crncy_id,
       f.acntng_entry_typ AS acntng_entry_typ,
       MIN (d.date_value) AS min_val_dt,
       GREATEST (MAX (d.date_value),
                 LEAST ('07-Feb-2009', d.fin_year_end_dt)) AS max_val_dt
FROM   Position_Fact f, Date_Dimension d
WHERE  f.val_dt_dmn_id = d.date_dmn_id
GROUP BY txn_id,
         acntng_entry_src,
         f.hrarchy_dmn_id,
         f.prduct_dmn_id,
         f.pstng_crncy_id,
         f.acntng_entry_typ,
         d.fin_year_end_dt
The execution plan is:
11 HASH JOIN Cost: 914,089 Bytes: 3,698,035,872 Cardinality: 77,042,414
   9 TABLE ACCESS FULL TABLE Date_Dimension Cost: 29 Bytes: 94,960 Cardinality: 4,748
  10 TABLE ACCESS FULL TABLE Position_Fact Cost: 913,693 Bytes: 2,157,187,592 Cardinality: 77,042,414
Kindly suggest how to make it faster.
Regards,
Sid

The above is just the part of the query that takes the most time.
Kindly find the entire query and plan below:
WITH MIN_MX_DT
AS
( SELECT
TXN_ID AS TXN_ID,
ACNTNG_ENTRY_SRC AS TXN_SRC,
F.HRARCHY_DMN_ID AS HRARCHY_DMN_ID,
F.PRDUCT_DMN_ID AS PRDUCT_DMN_ID,
F.PSTNG_CRNCY_ID AS PSTNG_CRNCY_ID,
F.ACNTNG_ENTRY_TYP AS ACNTNG_ENTRY_TYP,
MIN (D.DATE_VALUE) AS MIN_VAL_DT,
GREATEST (MAX (D.DATE_VALUE), LEAST (:B1, D.FIN_YEAR_END_DT))
AS MAX_VAL_DT
FROM
proj_PSTNG_FCT F, proj_DATE_DMN D
WHERE
F.VAL_DT_DMN_ID = D.DATE_DMN_ID
GROUP BY
TXN_ID,
ACNTNG_ENTRY_SRC,
F.HRARCHY_DMN_ID,
F.PRDUCT_DMN_ID,
F.PSTNG_CRNCY_ID,
F.ACNTNG_ENTRY_TYP,
D.FIN_YEAR_END_DT),
SLCT_RCRDS
AS (
SELECT
M.TXN_ID,
M.TXN_SRC,
M.HRARCHY_DMN_ID,
M.PRDUCT_DMN_ID,
M.PSTNG_CRNCY_ID,
M.ACNTNG_ENTRY_TYP,
D.DATE_VALUE AS VAL_DT,
D.DATE_DMN_ID,
D.FIN_WEEK_NUM AS FIN_WEEK_NUM,
D.FIN_YEAR_STRT AS FIN_YEAR_STRT,
D.FIN_YEAR_END AS FIN_YEAR_END
FROM
MIN_MX_DT M, proj_DATE_DMN D
WHERE
D.HOLIDAY_IND = 0
AND D.DATE_VALUE >= MIN_VAL_DT
AND D.DATE_VALUE <= MAX_VAL_DT),
DLY_HDRS
AS (
SELECT
S.TXN_ID AS TXN_ID,
S.TXN_SRC AS TXN_SRC,
S.DATE_DMN_ID AS VAL_DT_DMN_ID,
S.HRARCHY_DMN_ID AS HRARCHY_DMN_ID,
S.PRDUCT_DMN_ID AS PRDUCT_DMN_ID,
S.PSTNG_CRNCY_ID AS PSTNG_CRNCY_ID,
SUM (
DECODE (
PNL_TYP_NM,
:B5, DECODE (NVL (F.PSTNG_TYP, :B2),
:B2, NVL (F.PSTNG_AMNT, 0) * (-1),
NVL (F.PSTNG_AMNT, 0)),
0))
AS MTM_AMT,
NVL (
LAG (
SUM (
DECODE (
PNL_TYP_NM,
:B5, DECODE (NVL (F.PSTNG_TYP, :B2),
:B2, NVL (F.PSTNG_AMNT, 0) * (-1),
NVL (F.PSTNG_AMNT, 0)),
0)))
OVER (
PARTITION BY S.TXN_ID,
S.TXN_SRC,
S.HRARCHY_DMN_ID,
S.PRDUCT_DMN_ID,
S.PSTNG_CRNCY_ID
ORDER BY S.VAL_DT),
0)
AS YSTDY_MTM,
SUM (
DECODE (
PNL_TYP_NM,
:B4, DECODE (NVL (F.PSTNG_TYP, :B2),
:B2, NVL (F.PSTNG_AMNT, 0) * (-1),
NVL (F.PSTNG_AMNT, 0)),
0))
AS CASH_AMT,
SUM (
DECODE (
PNL_TYP_NM,
:B3, DECODE (NVL (F.PSTNG_TYP, :B2),
:B2, NVL (F.PSTNG_AMNT, 0) * (-1),
NVL (F.PSTNG_AMNT, 0)),
0))
AS PAY_REC_AMT,
S.VAL_DT,
S.FIN_WEEK_NUM,
S.FIN_YEAR_STRT,
S.FIN_YEAR_END,
NVL (TRUNC (F.REVSN_DT), S.VAL_DT) AS REVSN_DT,
S.ACNTNG_ENTRY_TYP AS ACNTNG_ENTRY_TYP
FROM
SLCT_RCRDS S,
proj_PSTNG_FCT F,
proj_ACNT_DMN AD,
proj_PNL_TYP_DMN PTD
WHERE
S.TXN_ID = F.TXN_ID(+)
AND S.TXN_SRC = F.ACNTNG_ENTRY_SRC(+)
AND S.HRARCHY_DMN_ID = F.HRARCHY_DMN_ID(+)
AND S.PRDUCT_DMN_ID = F.PRDUCT_DMN_ID(+)
AND S.PSTNG_CRNCY_ID = F.PSTNG_CRNCY_ID(+)
AND S.DATE_DMN_ID = F.VAL_DT_DMN_ID(+)
AND S.ACNTNG_ENTRY_TYP = F.ACNTNG_ENTRY_TYP(+)
AND SUBSTR (AD.ACNT_NUM, 0, 1) IN (1, 2, 3)
AND NVL (F.ACNT_DMN_ID, 1) = AD.ACNT_DMN_ID
AND NVL (F.PNL_TYP_DMN_ID, 1) = PTD.PNL_TYP_DMN_ID
GROUP BY
S.TXN_ID,
S.TXN_SRC,
S.DATE_DMN_ID,
S.HRARCHY_DMN_ID,
S.PRDUCT_DMN_ID,
S.PSTNG_CRNCY_ID,
S.VAL_DT,
S.FIN_WEEK_NUM,
S.FIN_YEAR_STRT,
S.FIN_YEAR_END,
TRUNC (F.REVSN_DT),
S.ACNTNG_ENTRY_TYP,
F.TXN_ID)
SELECT
D.TXN_ID,
D.VAL_DT_DMN_ID,
D.REVSN_DT,
D.TXN_SRC,
D.HRARCHY_DMN_ID,
D.PRDUCT_DMN_ID,
D.PSTNG_CRNCY_ID,
D.YSTDY_MTM,
D.MTM_AMT,
D.CASH_AMT,
D.PAY_REC_AMT,
MTM_AMT + CASH_AMT + PAY_REC_AMT AS DLY_PNL,
SUM (
MTM_AMT + CASH_AMT + PAY_REC_AMT)
OVER (
PARTITION BY D.TXN_ID,
D.TXN_SRC,
D.HRARCHY_DMN_ID,
D.PRDUCT_DMN_ID,
D.PSTNG_CRNCY_ID,
D.FIN_WEEK_NUM || D.FIN_YEAR_STRT || D.FIN_YEAR_END
ORDER BY D.VAL_DT)
AS WTD_PNL,
SUM (
MTM_AMT + CASH_AMT + PAY_REC_AMT)
OVER (
PARTITION BY D.TXN_ID,
D.TXN_SRC,
D.HRARCHY_DMN_ID,
D.PRDUCT_DMN_ID,
D.PSTNG_CRNCY_ID,
D.FIN_YEAR_STRT || D.FIN_YEAR_END
ORDER BY D.VAL_DT)
AS YTD_PNL,
D.ACNTNG_ENTRY_TYP AS ACNTNG_PSTNG_TYP,
'EOD ETL' AS CRTD_BY,
SYSTIMESTAMP AS CRTN_DT,
NULL AS MDFD_BY,
NULL AS MDFCTN_DT
FROM
DLY_HDRS D
Plan
SELECT STATEMENT ALL_ROWSCost: 11,950,256 Bytes: 3,369,680,886 Cardinality: 7,854,734
25 WINDOW SORT Cost: 11,950,256 Bytes: 3,369,680,886 Cardinality: 7,854,734
24 WINDOW SORT Cost: 11,950,256 Bytes: 3,369,680,886 Cardinality: 7,854,734
23 VIEW Cost: 10,519,225 Bytes: 3,369,680,886 Cardinality: 7,854,734
22 WINDOW BUFFER Cost: 10,519,225 Bytes: 997,551,218 Cardinality: 7,854,734
21 SORT GROUP BY Cost: 10,519,225 Bytes: 997,551,218 Cardinality: 7,854,734
20 HASH JOIN Cost: 10,296,285 Bytes: 997,551,218 Cardinality: 7,854,734
1 TABLE ACCESS FULL TABLE proj_PNL_TYP_DMN Cost: 3 Bytes: 45 Cardinality: 5
19 HASH JOIN Cost: 10,296,173 Bytes: 2,695,349,628 Cardinality: 22,841,946
5 VIEW VIEW index$_join$_007 Cost: 3 Bytes: 84 Cardinality: 7
4 HASH JOIN
2 INDEX FAST FULL SCAN INDEX (UNIQUE) proj_ACNT_DMN_PK Cost: 1 Bytes: 84 Cardinality: 7
3 INDEX FAST FULL SCAN INDEX (UNIQUE) proj_ACNT_DMN_UNQ Cost: 1 Bytes: 84 Cardinality: 7
18 HASH JOIN RIGHT OUTER Cost: 10,293,077 Bytes: 68,925,225,244 Cardinality: 650,237,974
6 TABLE ACCESS FULL TABLE proj_PSTNG_FCT Cost: 913,986 Bytes: 4,545,502,426 Cardinality: 77,042,414
17 VIEW Cost: 7,300,017 Bytes: 30,561,184,778 Cardinality: 650,237,974
16 MERGE JOIN Cost: 7,300,017 Bytes: 230,184,242,796 Cardinality: 650,237,974
8 SORT JOIN Cost: 30 Bytes: 87,776 Cardinality: 3,376
7 TABLE ACCESS FULL TABLE proj_DATE_DMN Cost: 29 Bytes: 87,776 Cardinality: 3,376
15 FILTER
14 SORT JOIN Cost: 7,238,488 Bytes: 25,269,911,792 Cardinality: 77,042,414
13 VIEW Cost: 1,835,219 Bytes: 25,269,911,792 Cardinality: 77,042,414
12 SORT GROUP BY Cost: 1,835,219 Bytes: 3,698,035,872 Cardinality: 77,042,414
11 HASH JOIN Cost: 914,089 Bytes: 3,698,035,872 Cardinality: 77,042,414
9 TABLE ACCESS FULL TABLE proj_DATE_DMN Cost: 29 Bytes: 94,960 Cardinality: 4,748
10 TABLE ACCESS FULL TABLE proj_PSTNG_FCT Cost: 913,693 Bytes: 2,157,187,592 Cardinality: 77,042,414
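Not part of the original post: since both plans show full scans of proj_PSTNG_FCT feeding a hash join and a SORT GROUP BY, one generic experiment is parallel execution. A sketch only, on the first (partial) query; the degree of 8 is an assumption and parallel query requires Enterprise Edition and spare CPU:

```sql
-- Hedged sketch: parallel full scans plus a hash join for the partial query.
-- Degree 8 is a placeholder; size it to the host's CPU count.
SELECT /*+ PARALLEL(f 8) PARALLEL(d 8) USE_HASH(d) */
       txn_id,
       acntng_entry_src AS txn_src,
       f.hrarchy_dmn_id,
       MIN (d.date_value) AS min_val_dt
FROM   proj_PSTNG_FCT f, proj_DATE_DMN d
WHERE  f.val_dt_dmn_id = d.date_dmn_id
GROUP  BY txn_id, acntng_entry_src, f.hrarchy_dmn_id;
```

Whether this helps depends on I/O bandwidth; the GROUP BY over 77M rows will still spill to temp if the PGA is small.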

Similar Messages

  • Cancelling disc burn taking more than half-an-hour

    I had just burned a CD before trying to burn another one, but the second one wouldn't finish. The first took less than two minutes while the next was taking more than ten. I decided to cancel it, but it is still stuck in the cancelling sequence and I cannot get the CD out. Any advice?

    Given that it's so new, and what you've already tried, it sounds like an issue that requires service.
    You can arrange online service here.
    Service request.

  • Delete query taking more than 2 hours

    Hi
    I have table1 with more than 20 million rows. I am trying to delete some rows based on conditions, and have written a query to do it, but it is taking almost 2 hours. Is there any other way to achieve this?
    DEL FROM TABLE1
    WHERE ID = 100 AND SRCE_NM = 'CHECK'
    AND TRAN_EFF_DT BETWEEN '2012-01-01' AND '2012-12-31'
    AND (ACCT_ID, TRAN_ID, DW_EFF_DT) NOT IN
    (SEL ACCT_ID, TRAN_ID, DW_EFF_DT
     FROM TABLE1 T
     INNER JOIN TABLE2 T2
     ON T.ACCT_ID = T2.ACCT_ID
     AND T.TRAN_EFF_DT BETWEEN T2.DW_EFF_DT AND T2.DW_EXPR_DT
     AND COALESCE(BRANCH_CD, 'XX') NOT IN
     (SELECT BRANCH_CD
      FROM TABLE3
      WHERE TBL_NM = 'ACCT'
      AND ENR_NM = 'TOA')
     WHERE ID = 100 AND SRCE_NM = 'CHECK'
     AND TRAN_EFF_DT BETWEEN '2012-01-01' AND '2012-12-31')
    I have indexes on the columns used in the WHERE clause and range partitioning on the date columns used with the BETWEEN operator. Is there any alternative way to write the query?
    Regards
    KVB

    TRAN_EFF_DT BETWEEN '2012-01-01' AND '2012-12-31'
    If you're comparing a DATE (at least, I suspect/hope TRAN_EFF_DT is of DATE datatype) to STRINGS, you're in for trouble. Use TO_DATE and a proper format mask.
    COALESCE(BRANCH_CD ,'XX')
    If BRANCH_CD is indexed, the optimizer will not use the index since you're applying a function to the column. A function-based index might help. ORACLE-BASE - Oracle Function Based Indexes
    What is the exact database version? (the result of: select * from v$version;)
    Are the table statistics up to date?
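    Not from the thread, but a sketch of the two suggestions above (assumes a DATE-typed TRAN_EFF_DT; the index name is hypothetical):

    ```sql
    -- Compare DATEs to DATEs via TO_DATE with an explicit format mask:
    ... AND TRAN_EFF_DT BETWEEN TO_DATE('2012-01-01', 'YYYY-MM-DD')
                            AND TO_DATE('2012-12-31', 'YYYY-MM-DD') ...

    -- A function-based index matching the COALESCE predicate exactly:
    CREATE INDEX table1_branch_cd_fbi ON table1 (COALESCE(branch_cd, 'XX'));
    ```

    The expression in the index must match the predicate's expression for the optimizer to consider it.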

  • Fetch only more than or equal to 10 million rows tables

    Hi all,
    How can I fetch the tables that have more than 10 million rows using PL/SQL? I got this from some other site I can't remember.
    Can somebody help me with this please? Your help is greatly appreciated.
    declare
      counter number;
    begin
      for x in (select segment_name, owner
                from dba_segments
                where segment_type = 'TABLE'
                and owner = 'KOMAKO') loop
        execute immediate 'select count(*) from ' || x.owner || '.' || x.segment_name into counter;
        dbms_output.put_line(rpad(x.owner, 30, ' ') || '.' || rpad(x.segment_name, 30, ' ') || ' : ' || counter || ' row(s)');
      end loop;
    end;
    Thank you,
    gg

    1) This code appears to work. Of course, there seems to be no need to select from DBA_SEGMENTS when DBA_TABLES would be more straightforward. And, of course, you'd have to do something when the count exceeded 10 million.
    2) If you are using the cost-based optimizer (CBO) and your statistics are reasonably accurate and you can tolerate a degree of staleness/ approximation in the row counts, you could just select the NUM_ROWS column from DBA_TABLES.
    Justin
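    A sketch of the second suggestion above (statistics-based counts; assumes reasonably fresh statistics, since NUM_ROWS reflects the last gather, not the live count):

    ```sql
    -- Approximate row counts from optimizer statistics, no full scans needed.
    SELECT owner, table_name, num_rows
    FROM   dba_tables
    WHERE  owner = 'KOMAKO'
    AND    num_rows >= 10000000
    ORDER  BY num_rows DESC;
    ```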

  • Query taking more time

    I have nearly 20 million (2 crore) records at present in my table.
    I want to get the average of price from my table.
    I put the query like:
    select avg(sum(price)) from table group by product_id
    The query takes more than 5 minutes to execute.
    Is there any other way I can simplify my query?

    Warren:
    Your first query gives:
    SQL> SELECT AVG(SUM(price)) sum_price
      2  FROM t;
    SELECT AVG(SUM(price)) sum_price
    ERROR at line 1:
    ORA-00978: nested group function without GROUP BY
    and your second gives:
    SQL> SELECT product_id, AVG(SUM(price))
      2  FROM t
      3  GROUP BY product_id;
    SELECT product_id, AVG(SUM(price))
    ERROR at line 1:
    ORA-00937: not a single-group group function
    Symon:
    What exactly are you trying to accomplish? Your query as posted will calculate the average of the sums of the prices for all product_id values. That is, it is equivalent to:
    SELECT AVG(sum_price)
    FROM (SELECT SUM(price) sum_price
          FROM t
          GROUP BY product_id)
    So given:
    SQL> SELECT * FROM t;
    PRODUCT_ID      PRICE
    PROD1               5
    PROD1               7
    PROD1              10
    PROD2               3
    PROD2               4
    PROD2               5
    The sum of the prices per product_id is:
    SQL> SELECT SUM(price) sum_price
      2  FROM t
      3  GROUP BY product_id;
    SUM_PRICE
            22
            12
    and the average of that is (22 + 12) / 2 = 17. Is that what you are looking for? If so, then the equivalent query I posted above is at least clearer, but may not be any faster. If this is not what you are looking for, then some sample data and expected results may help. That said, it appears you need to full scan the table in either case, so this may be as good as it gets.
    John

  • Taking more than 2 min T-SQL

    Hi Friends,
    SELECT
    DATEPART(YEAR, SaleDate) AS [PrevYear],
    DATENAME(MONTH, SaleDate) AS [PrevMonth],
    SaleDate as SaleDate,
    Sum(Amount) as PrevAmount
    FROM TableA A
    WHERE SaleDate >= DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) - 1, 0)
    AND SaleDate <= DATEADD(dd, -1, DATEADD(yy, DATEDIFF(yy, 0, GETDATE()), 0))
    -----'2013-12-31 00:00:00.000'
    GROUP BY
    SaleDate
    This query takes more than 2 minutes to pull the results. Basically I am passing last year's first and last dates (derived from GETDATE()).
    If I pass static values like WHERE SaleDate >= '2013-01-01 00:00:00.000'
    AND SaleDate <= '2013-12-31 00:00:00.000'
    then it pulls results in a fraction of a second.
    Note: I keep this code in a view and have to use only a view (I know we could write a stored procedure for this, but I don't want an SP, I need only a view).
    Any idea how to improve my view's performance?
    Thanks,
    RK

    Do you have an index on SaleDate column ? If so , is this NCI or CI? How much data does it return? Can you show an execution plan of the query?
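    Not from the thread: with this pattern the usual suspect is a poor cardinality estimate for the DATEADD(GETDATE()) boundaries rather than a non-SARGable predicate. Since OPTION (RECOMPILE) cannot go inside a view, one hedged workaround is to apply it where the view is queried (the view name below is hypothetical):

    ```sql
    -- T-SQL sketch: recompile at the call site so the optimizer sees the
    -- actual boundary values computed from GETDATE().
    SELECT PrevYear, PrevMonth, SaleDate, PrevAmount
    FROM   dbo.SalesPrevYearView   -- hypothetical name for the poster's view
    OPTION (RECOMPILE);
    ```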
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • HT201274 My iPhone 4 is taking more than 5 hours to erase all the data and is still in process; how much longer do I have to wait for my phone to turn on?

    My iPhone 4 is taking more than 5 hours to erase all the data and it is still in process. How much longer do I have to wait for my phone to turn on?

    I'm having this EXACT same problem with my iPhone 4, and I have the same computer stats (I have a Samsung Series 7)

  • XMLSerialize taking more than 10 hours to execute in Oracle

    Hi All,
    In my current project I convert an Oracle query result into XML format first and then use XMLSerialize for printing, but execution takes more than 15 hours.
    The basic Oracle query takes hardly 10 seconds to execute, and converting the query result to XML takes around 1 minute, but when I use XMLSerialize with an ORDER BY clause it does not finish.
    Can someone help fix this performance issue caused by XMLSerialize?
    Thanks in advance.
    after adding the below clause performance issue started
    select XMLSerialize(CONTENT rec_str as CLOB) as test_XML, 100 + rnum as ORDER_CLAUSE from xxtemp
    Edited by: redrose1405 on May 1, 2012 12:45 AM

    How much free space do you have on your boot drive?
    OT

  • Sync and Create project operation from DTR is taking more than one hour

    Hi All.
    Recently the Basis team implemented the track for the ESS/MSS application, so when we import the track into NWDS it shows 500 DCs.
    I successfully did the sync and create-project operations from DTR for 150 DCs, at about 5 minutes per DC.
    However, when I now try to sync a DC or create a project DC from DTR, the operation takes more than 3 hours per DC, which should not be the case because the first 150 DCs took hardly 5 minutes each. As the operation was taking so long, I finally closed NWDS to stop it.
    I am using NWDS 2.0.15, EP 7.0 portal SP15, and NWDI 7.0.
    Can anybody tell me how to solve this issue so that I can sync and create a project from DTR for a DC within 5 minutes?
    Thanks
    Susmita

    Hi Susmita,
    If the DCs build fine in CBS, then I feel there is no need to test all of them locally in NWDS.
    You can verify certain applications in these DCs: sync & create projects for those DCs and test-run the applications.
    As I understand it, you only need to check (no changes will be made), so you can verify them in small groups (say 20-25 DCs per group) in different workspaces so that no workspace is overloaded.
    But why keep a local copy when you are not making any changes? You can unsync & remove these projects once verified & use the same workspace to work on the next set of DCs.
    Hope this clarifies your concerns.
    Kind Regards,
    Nitin
    Edited by: Nitin Jain on Apr 23, 2009 1:55 PM

  • Huge volume of records are routing to the remote user other than his position and organization records. Synchronization and DB initialization taking more time around 36 hours.

    A huge volume of records is routing to the remote user beyond his position and organization records, and synchronization and DB initialization are taking around 36 hours.
    Only about 2,000 accounts and 3,000 contacts should route, but we have observed hundreds of thousands (lakhs) of records routing into the local DB.
    We have verified all the assignment rules and views.
    We ran the docking object visibility rules and observed that some other accounts are routing because the Organization rule passes (these records are not supposed to route).
    Version Siebel 7.7.2.12,
    OS Solaris.

    let me know what would be the reason that the 1st million takes only 15 minutes and the time goes on increasing gradually with the increase of data
    Yes, that's a little strange. I can only guess:
    1. You are in archivelog mode and the archiver is not able to archive the redo logs fast enough.
    2. You don't use direct load and DBWR is not able to write the dirty blocks to disk fast enough. You could create more DBWR processes in that case.
    3. Take a snapshot of v$system_event:
    create table begin as select * from v$system_event;
    After the import run:
    create table end as select * from v$system_event;
    Now compare the values:
    select * from begin order by TIME_WAITED_MICRO desc;
    with the values given to you by:
    select * from end order by TIME_WAITED_MICRO desc;
    That way you can see where your DB spent so much time waiting for something.
    Alternatively, you could start a 10046 trace on the loading session and use tkprof.
    Dim
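    A hedged sketch of the 10046 trace mentioned above (standard Oracle event syntax; level 8 includes wait events):

    ```sql
    ALTER SESSION SET tracefile_identifier = 'import_load';  -- optional tag for the trace file name
    ALTER SESSION SET events '10046 trace name context forever, level 8';
    -- ... run the load in this session ...
    ALTER SESSION SET events '10046 trace name context off';
    -- then process the trace file, e.g.: tkprof <tracefile> out.txt sort=exeela
    ```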

  • While using Status for Object button it is taking more than 15 mins to open

    Hi Gurus,
    We are trying to attach documents to ZBOS & OR type sales documents. Opening the Status for Object button of the sales order takes more than 15 minutes; once it is open it works as normal.
    Can you please let us know whether it is standard system behavior that makes it take so long to open, or whether the problem is something else?
    Please also let us know whether this process impacts the system.
    We are using 4.6C.
    Thank You,
    Boyeni.

    Hi Syed,
    Greetings!
    Thank you very much for your swift response!
    Could you kindly let me know the program that needs to be refreshed?
    Thank you once again for your assistance.
    Boyeni.

  • Syncing my iPad is taking more than 36 hours... how do I speed this up so I can update my iOS?

    syncing my iPad is taking more than 36 hours... how do I speed this up so I can update my iOS?

    Try restarting the iPad. Disconnect the iPad first.
    Press and hold the On/Off Sleep/Wake button until the red slider appears. Slide your finger across the slider to turn off iPod touch. To turn iPod touch back on, press and hold the On/Off Sleep/Wake button until the Apple logo appears.
    Restart your PC.
    Try again.

  • After installing the 8.1 update the Internet Sharing won't allow any devices to stay connected for more than a few hours.

    Internet sharing is turned "on" and appears to be working, but really isn't. Locking the screen, restarting, or a hard reboot helps, but it is extremely aggravating not to be able to maintain a connection for more than a few hours at a time!

        Thank you so much for the info! I apologize for having asked for your device model as that is the forum you posted to. Also, I know that we've set the expectation that the device will remain connected as long as a demand for signal flows through it, but that may not be entirely the case. After a period of constant use, any device (including the Lumia 822) may require or force itself to stop.
    You've mentioned that the device will freeze after a few hours usage. Is the device also connected to the charger while being used as a hotspot? If so, do you notice the device getting warmer than it normally would be?
    DionM_VZW

  • I'm trying to restore and back up my iPhone 4 it's taking more than 45 hours. What should I do?

    I have been trying to restore and back up my iPhone, which I have just updated, and I wanted my apps, contacts, messages, etc. all back on my iPhone, but it's taking more than 45 hours and it's only 16 GB. I have really important contacts that I need on my phone, and it would take weeks to get them back one by one without using the backup.

    As has been said, you may have a corrupt backup on your computer. However, if it is a Windows computer you may also have a corrupt socket layer in Windows. So first try this (after canceling the backup):
    Open a command window with Administrator privileges and type:
       netsh  winsock  reset
    then reboot your computer and try again to do a backup (right click on the phone's name in iTunes and select Backup from the floating menu).
    If this doesn't work you will have to delete the existing backup. Go to iTunes preferences, Devices page.
    Note: If you have a Mac first try restarting it, launching iTunes, and back up as described above.

  • Oracle query taking more time in SSRS

    Hi All
    We have a report which connects to an Oracle DB. The query for the dataset runs in 9 seconds in PL/SQL Developer, but the same query takes more than 150 seconds in SSRS. We are using the OracleClient provider type.
    Surendra Thota

    Hi Surendra,
    Based on the current description, I understand that you may be using the Oracle Provider for OLE DB to connect to the Oracle database. If so, I suggest using the Microsoft OLE DB Provider for Oracle to compare the query time between SQL*Plus and SSRS.
    Here is a relative thread (SSRS Report with Oracle database performance problems) for your reference.
    Hope this helps.
    Regards,
    Heidi Duan
    TechNet Community Support
