Cursor-based update taking too long

Is there any way to optimize this? It is taking more than 2 hours to update the table.
Approximate row counts for the tables involved:
table_1 = 0.3 million
table_2 = 89k
table_3 = 2.5 million
table_4 = 6k
=========================================
DECLARE
  CURSOR c1 IS
    SELECT SUM(A.w_amount) AS total_comp, D.emp_no
    FROM table_1 A,
         table_2 B,
         table_3 C,
         table_4 D
    WHERE A.emp_no = B.emp_no
      AND A.check_id = B.check_id
      AND A.emp_no = C.emp_no
      AND D.emp_no = C.emp_no
      AND A.earn_code IN ('REM')
      AND B.check_date >= C.elig_dt
      AND ee.time_s_key = user2.time_pkg.fx_time_key()
    GROUP BY D.emp_no;
BEGIN
  FOR cur_row IN c1
  LOOP
    UPDATE table_4
    SET gross_comp = cur_row.total_comp
    WHERE table_4.emp_no = cur_row.emp_no;
  END LOOP;
END;
======================================

One alternative is to replace the row-by-row loop with a single correlated UPDATE, hoisting the fx_time_key() call into a variable so it is evaluated only once:

time_key_var := user2.time_pkg.fx_time_key();

UPDATE table_4 D
SET gross_comp = (
    SELECT SUM(A.w_amount)
    FROM table_1 A,
         table_2 B,
         table_3 C
    WHERE A.emp_no = B.emp_no
      AND A.check_id = B.check_id
      AND A.emp_no = C.emp_no
      AND D.emp_no = C.emp_no
      AND A.earn_code IN ('REM')
      AND B.check_date >= C.elig_dt
      AND ee.time_s_key = time_key_var
)
WHERE EXISTS (
    SELECT NULL
    FROM table_1 A,
         table_2 B,
         table_3 C
    WHERE A.emp_no = B.emp_no
      AND A.check_id = B.check_id
      AND A.emp_no = C.emp_no
      AND D.emp_no = C.emp_no
      AND A.earn_code IN ('REM')
      AND B.check_date >= C.elig_dt
      AND ee.time_s_key = time_key_var
);
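
If the correlated UPDATE is still slow, a MERGE is a common alternative: it aggregates the three joined tables once and joins the result back to table_4, instead of re-running the subquery for every target row. Below is a minimal sketch using the same generic names; note that the ee.time_s_key predicate is omitted because its alias cannot be resolved from the posted query, so it would have to be added back against the correct table:

MERGE INTO table_4 D
USING (
        -- aggregate once for all employees
        SELECT C.emp_no, SUM(A.w_amount) AS total_comp
        FROM table_1 A,
             table_2 B,
             table_3 C
        WHERE A.emp_no = B.emp_no
          AND A.check_id = B.check_id
          AND A.emp_no = C.emp_no
          AND A.earn_code IN ('REM')
          AND B.check_date >= C.elig_dt
        GROUP BY C.emp_no
      ) S
ON (D.emp_no = S.emp_no)
WHEN MATCHED THEN
  UPDATE SET D.gross_comp = S.total_comp;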

Similar Messages

  • Why is this query taking much longer than expected?

    Hi,
    I need the experts' support on the issue mentioned below:
    Why is this query taking much longer than expected? Sometimes I am getting a connection timeout error. Is there a better way to achieve the result in the shortest time? Below, please find the DDL & DML:
    DDL
    BHDCollections
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[BHDCollections](
     [BHDCollectionid] [bigint] IDENTITY(1,1) NOT NULL,
     [GroupMemberid] [int] NOT NULL,
     [BHDDate] [datetime] NOT NULL,
     [BHDShift] [varchar](10) NULL,
     [SlipValue] [decimal](18, 3) NOT NULL,
     [ProcessedValue] [decimal](18, 3) NOT NULL,
     [BHDRemarks] [varchar](500) NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_BHDCollections] PRIMARY KEY CLUSTERED
     (
      [BHDCollectionid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    BHDCollectionsDet
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[BHDCollectionsDet](
     [CollectionDetailid] [bigint] IDENTITY(1,1) NOT NULL,
     [BHDCollectionid] [bigint] NOT NULL,
     [Currencyid] [int] NOT NULL,
     [Denomination] [decimal](18, 3) NOT NULL,
     [Quantity] [int] NOT NULL,
     CONSTRAINT [PK_BHDCollectionsDet] PRIMARY KEY CLUSTERED
     (
      [CollectionDetailid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    Banks
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[Banks](
     [Bankid] [int] IDENTITY(1,1) NOT NULL,
     [Bankname] [varchar](50) NOT NULL,
     [Bankabbr] [varchar](50) NULL,
     [BankContact] [varchar](50) NULL,
     [BankTel] [varchar](25) NULL,
     [BankFax] [varchar](25) NULL,
     [BankEmail] [varchar](50) NULL,
     [BankActive] [bit] NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_Banks] PRIMARY KEY CLUSTERED
     (
      [Bankid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    Groupmembers
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[GroupMembers](
     [GroupMemberid] [int] IDENTITY(1,1) NOT NULL,
     [Groupid] [int] NOT NULL,
     [BAID] [int] NOT NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_GroupMembers] PRIMARY KEY CLUSTERED
     (
      [GroupMemberid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[GroupMembers]  WITH CHECK ADD  CONSTRAINT [FK_GroupMembers_BankAccounts] FOREIGN KEY([BAID])
    REFERENCES [dbo].[BankAccounts] ([BAID])
    GO
    ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_BankAccounts]
    GO
    ALTER TABLE [dbo].[GroupMembers]  WITH CHECK ADD  CONSTRAINT [FK_GroupMembers_Groups] FOREIGN KEY([Groupid])
    REFERENCES [dbo].[Groups] ([Groupid])
    GO
    ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_Groups]
    BankAccounts
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[BankAccounts](
     [BAID] [int] IDENTITY(1,1) NOT NULL,
     [CustomerID] [int] NOT NULL,
     [Locationid] [varchar](25) NOT NULL,
     [Bankid] [int] NOT NULL,
     [BankAccountNo] [varchar](50) NOT NULL,
     CONSTRAINT [PK_BankAccounts] PRIMARY KEY CLUSTERED
     (
      [BAID] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[BankAccounts]  WITH CHECK ADD  CONSTRAINT [FK_BankAccounts_Banks] FOREIGN KEY([Bankid])
    REFERENCES [dbo].[Banks] ([Bankid])
    GO
    ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Banks]
    GO
    ALTER TABLE [dbo].[BankAccounts]  WITH CHECK ADD  CONSTRAINT [FK_BankAccounts_Locations1] FOREIGN KEY([Locationid])
    REFERENCES [dbo].[Locations] ([Locationid])
    GO
    ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Locations1]
    Currency
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[Currency](
     [Currencyid] [int] IDENTITY(1,1) NOT NULL,
     [CurrencyISOCode] [varchar](20) NOT NULL,
     [CurrencyCountry] [varchar](50) NULL,
     [Currency] [varchar](50) NULL,
     CONSTRAINT [PK_Currency] PRIMARY KEY CLUSTERED
     (
      [Currencyid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    CurrencyDetails
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[CurrencyDetails](
     [CurDenid] [int] IDENTITY(1,1) NOT NULL,
     [Currencyid] [int] NOT NULL,
     [Denomination] [decimal](15, 3) NOT NULL,
     [DenominationType] [varchar](25) NOT NULL,
     CONSTRAINT [PK_CurrencyDetails] PRIMARY KEY CLUSTERED
     (
      [CurDenid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    QUERY
    WITH TEMP_TABLE AS (
    SELECT     0 AS COINS, BHDCollectionsDet.Quantity AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
                          (BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
    FROM         BHDCollections INNER JOIN
                          BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
                          GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
                          BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
                          Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
                          CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
                          Banks ON BankAccounts.Bankid = Banks.Bankid
    GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
                          CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
    HAVING      (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'Currency') AND
                          (CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)
    UNION ALL
    SELECT     BHDCollectionsDet.Quantity AS COINS, 0 AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
                          (BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
    FROM         BHDCollections INNER JOIN
                          BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
                          GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
                          BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
                          Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
                          CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
                          Banks ON BankAccounts.Bankid = Banks.Bankid
    GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
                          CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
    HAVING      (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'COIN') AND
                          (CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)),
    TEMP_TABLE2 AS (
    SELECT CollectionDate, Bankname, DSLIPS AS DSLIPS, SUM(BN) AS BN, SUM(COINS) AS COINS FROM TEMP_TABLE GROUP BY CollectionDate, DSLIPS, Bankname
    )
    SELECT CollectionDate,Bankname,count(DSLIPS) AS DSLIPS,sum(BN) AS BN,sum(COINS) AS coins FROM TEMP_TABLE2 Group By CollectionDate,Bankname
    HAVING COUNT(DSLIPS)<>0;

    Without seeing an execution plan of the query it is hard to suggest something useful. Try inserting the result of the UNION ALL into a temporary table and then performing the aggregation on that table, not on a CTE.
    Just
    SELECT CollectionDate, Bankname, DSLIPS AS DSLIPS, SUM(BN) AS BN, SUM(COINS) AS COINS
    FROM #tmp
    GROUP BY CollectionDate, DSLIPS, Bankname
    HAVING COUNT(DSLIPS) <> 0;
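
    A minimal sketch of that approach, keeping the existing UNION ALL as the CTE body (abbreviated to a comment here; it stays exactly as in the original query) and moving only the aggregation onto a temp table:

    -- Step 1: materialize the first-level aggregation once.
    WITH TEMP_TABLE AS (
        /* the two SELECT ... UNION ALL ... branches from the original query, unchanged */
    )
    SELECT CollectionDate, Bankname, DSLIPS, SUM(BN) AS BN, SUM(COINS) AS COINS
    INTO #tmp
    FROM TEMP_TABLE
    GROUP BY CollectionDate, DSLIPS, Bankname;

    -- Step 2: aggregate the (much smaller) temp table.
    SELECT CollectionDate, Bankname, COUNT(DSLIPS) AS DSLIPS, SUM(BN) AS BN, SUM(COINS) AS coins
    FROM #tmp
    GROUP BY CollectionDate, Bankname
    HAVING COUNT(DSLIPS) <> 0;

    DROP TABLE #tmp;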
    Best Regards, Uri Dimant SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • I moved my music from the C drive to the D drive. All of my music is in iTunes, but my iPod won't sync with iTunes. The syncing process is taking much longer than usual too. I left my iPod overnight to sync and it didn't finish. It fails to sync every time.

    I moved my music from the C drive to the D drive. All of my music is in iTunes, but my iPod won't sync with iTunes. The syncing process is taking much longer than usual too; I left my iPod overnight to sync and it didn't finish. It fails to sync every time. I tried to restore my iPod and it didn't help.

    Ignore.  I figured it out:)

  • How to tune the query... the "sum" operation is taking too long...

    How can I reduce the execution time of the query below? The "sum" operation is taking too long.
    SELECT
    B.DP_AVPS_STATUS,
    SUM(DP_AD_CURR_BAL*D.ISIN_CLOSE_PRICE) Qty,
    B.DP_NET_WORTH,
    B.CMN_DP_FLAG,
    B.RESTRAIN_TYPE ,
    B.LETT_GENR_DATE,
    E.MAX_NET_WORTH,
    E.MIN_NET_WORTH
    FROM DPMADV0 A, PABRNCHDTLP0 B, PABANKCCYP0 D,PADPNETWORTHDTLP0 E,
    (SELECT CID_NUMB FROM CFCUSTMASTD0
    WHERE SUB_TYPE NOT IN (1,2,5,6,7,28,29,30,31,40,41,48,49,50,57,83)) C
    WHERE A.DP_AD_BR_NBR = B.BRNCH_NUMB
    AND A.DP_AD_CCY_CDE = D.CCY_ALPHA_CODE
    AND A.DP_AD_ACCT_NBR = C.CID_NUMB
    AND E.DP_ACCOUNT_TYPE = B.DP_ACCOUNT_TYPE
    AND SUBSTR(B.BRNCH_NUMB,1,3) like SUBSTR(:hvBrnchNumb,1,3)
    AND ((B.DP_TYPE IN (2) AND B.DP_ACCOUNT_TYPE IN (10, 11)) or
    (B.DP_TYPE IN (3) and B.DP_ACCOUNT_TYPE=11 ) )
    AND B.DEL_FLAG = 'N'
    AND B.DP_STATUS = 'A'
    AND D.ISIN_STATUS = 'A'
    GROUP BY B.DP_NET_WORTH,B.RESTRAIN_TYPE,B.DP_AVPS_STATUS,E.MAX_NET_WORTH,
    E.MIN_NET_WORTH,B.CMN_DP_FLAG,B.LETT_GENR_DATE;

    Hi,
    please produce a plan with rowsource statistics (if you are not sure how, follow the instructions in http://savvinov.com/2012/09/24/a-sqlplus-script-for-diagnosing-poor-sql-plans/) and post it using code tags to preserve formatting.
    Best regards,
      Nikolay
    P.S. I also suggest that you work on your open threads:
    Handle: 946903
    Status Level: Newbie
    Registered: Jul 16, 2012
    Total Posts: 11
    Total Questions: 5 (5 unresolved)
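
    Returning to the plan request: a minimal sketch of one common way to capture a plan with rowsource statistics in SQL*Plus (a generic recipe, not necessarily identical to the linked script):

    -- Keep DBMS_OUTPUT quiet so DISPLAY_CURSOR sees the slow query
    -- as the last statement executed in this session.
    SET SERVEROUTPUT OFF

    -- Re-run the slow statement with rowsource statistics enabled,
    -- i.e. add the hint directly after SELECT:
    --   SELECT /*+ gather_plan_statistics */ B.DP_AVPS_STATUS, ...

    -- Then display the plan with actual vs. estimated row counts:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));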

  • My email all of sudden is taking a long time to open

    Since last week my msn.com email has been taking a long time to open. I haven't changed a thing, and I have had this email since 1998.

    How large is your hard drive and how much hard drive space do you have left?
    Which Macbook Pro do you have?
    Which os & version are you using?

  • My mac pro is taking a long time to open files

    When I try to open applications like Finder, it takes a long time to open or it hangs. How can I fix this?
    I have OS X 10.7.2 and 300 GB left out of 500 GB.
    MacBook Pro 17"

    How large is your hard drive and how much hard drive space do you have left?
    Which Macbook Pro do you have?
    Which os & version are you using?

  • Issue updating a large number of rows, which is taking a long time

    Hi all,
    Am new to the Oracle forums. First I will explain my problem below:
    1) I have a table of 350 columns for which I have two indexes. One is on the primary key's id, and the other is a composite index (a combination of two functional ids).
    2) Through my application, the user can calculate some functional conditions, and the result is updated in the same table.
    3) The table consists of all input, intermediate and output columns.
    4) All calculation is done through update statements. The problem is that, for one complete process, the total number of update statements hitting the db is around 1000.
    5) Of the two indexes, one indexed column is mandatory in every update's where clause; the other is optional.
    6) Updating the table takes a long time if the row count exceeds 1 lakh (100,000).
    7) I will now explain the scenario:
    a. Say there are 500,100 records in the table, of which mandatory indexed column id 1 has 100 records and id 2 has 5 lakh (500,000) records.
    b. If I process id 1, it is very fast and executes within 10 seconds. But if I process id 2, it takes more than 4 minutes to update.
    Is there any way to increase the speed of the update statements? I am using Oracle 10g.
    Please help me with this, since I am a developer and don't have much knowledge of Oracle.
    Thanks in advance.
    Regards,
    Sethu

    Refer to the link:
    http://hoopercharles.wordpress.com/2010/03/09/vsession_longops-wheres-my-sql-statement/
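
    That post centers on v$session_longops. As a minimal illustration (assuming SELECT access to the view), a query like the following, run from another session, shows the progress of long-running operations such as the full scans behind a big update:

    SELECT sid, serial#, opname, target, sofar, totalwork,
           ROUND(sofar / totalwork * 100, 1) AS pct_done,
           time_remaining
    FROM v$session_longops
    WHERE totalwork > 0
      AND sofar <> totalwork;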

  • Test program running taking much more time on high end server T5440 than low end server  T5220

    Hi all,
    I have written the following program and run it on both a T5440 [1.4 GHz, 95 GB RAM, 32 core(s), 256 logical (virtual) processors] and a T5220 [UltraSPARC-T2 (chipid 0, clock 1165 MHz), 8 GB RAM, 1 core, 8 virtual processors] on the same OS version. I found that the T5440 server takes more time than the T5220. Please find the details below.
    test1.cpp
    #include <iostream>
    #include <pthread.h>
    using namespace std;

    #define NUM_OF_THREADS 20

    struct ABCDEF {
        char A[1024];
        char B[1024];
    };

    void *start_func(void *)
    {
        long long i = 6000;
        while (i--) {
            ABCDEF *sdf = new ABCDEF;
            delete sdf;
            sdf = NULL;
        }
        return NULL;
    }

    int main(int argc, char *argv[])
    {
        pthread_t tid[50];
        for (int i = 0; i < NUM_OF_THREADS; i++) {
            pthread_create(&tid[i], NULL, start_func, NULL);
            cout << "Creating thread " << i << endl;
        }
        for (int i = 0; i < NUM_OF_THREADS; i++) {
            pthread_join(tid[i], NULL);
            cout << "Waiting for thread " << i << endl;
        }
        return 0;
    }
    After executing the above program on T5440 takes :
    real 0.78
    user 3.94s
    sys 0.05
    After executing the above program on T5220 takes :
    real 0.23
    user 1.43s
    sys 0.03
    It seems that the T5440, which is the high-end server, takes almost 3 times more time than the T5220, which is the low-end server.
    However, I have one more observation. I tried the following program:
    test2.cpp
    #include <iostream>
    #include <pthread.h>
    using namespace std;

    #define NUM_OF_THREADS 20

    struct ABCDEF {
        char A[1024];
        char B[1024];
    };

    int main(int argc, char *argv[])
    {
        long long i = 6000000;
        while (i--) {
            ABCDEF *sdf = new ABCDEF;
            delete sdf;
            sdf = NULL;
        }
        return 0;
    }
    It seems that the T5440 server is fast in this case as compared to the T5220 server.
    Could anyone please help me find the exact reason for this behaviour, as my application is also slow on this T5440 server? I have posted earlier as well about the same issue.
    Thanks in advance!!!
    regards,
    Sanjay

    You already asked this question...
    48 hours earlier, and in the same Solaris forum space.
    Repeating the post isn't going to get you a response any faster, and may actually make people NOT respond because you are not showing any patience.
    These are end-user community forums, not a place to expect Oracle Technical Support. There is no obligation that there be a response.
    If you have a business-critical issue and hope to get an accurate and timely response, then use your service contract credentials to open a Support request.
    This new redundant post is locked.
    Edit:
    It appears that at the same time the O.P. posted this redundant thread, they also posted the same question to at least one other forum web site:
    http://www.unix.com/solaris/229269-test-program-running-taking-much-more-time-high-end-server-t5440-than-low-end-server-t5220.html

  • MBP Taking a long time to boot up

    Recently, my MBP started taking a long time to boot up, starting at the Mac OS X boot screen, and since it is set to auto-login, this continues through the whole login process (bringing up the Dock and Finder windows). I've tried safe booting, but it didn't make much of a difference. What should I do?

    Try creating a new user account and see if the boot/login process takes the same time with the new account.

  • HT5521 I have a Lightning to USB cable which is 2 m in length. Are there any known issues with trying to recharge an iPad 4 with this longer length? It seems to me it is taking much longer than a 1 m cord to recharge.

    I have a Lightning to USB cable which is 2 m in length. Are there any known issues with trying to recharge an iPad 4 with this longer length? It seems to me it is taking much longer than a 1 m cord to recharge.

    Axel,
    I'm afraid a new SSD won't be different from your bad USB stick. I had similar issues over USB with both a (no-brand) stick and TWO 64 GB Kingston SSDNows (running Kubuntu 12.04 with kernel 3.11, ia_64): it all runs exceptionally well (want to know how it feels to boot in 5 seconds?) for a few days, then suddenly you find yourself facing that dreaded (initramfs) prompt. You ask yourself: why? Did I upgrade GRUB lately? Did I upgrade the kernel? I don't recall so. OK, let's fix this... insert favorite live CD, boot, fsck... what? THOUSANDS of errors? Hundreds of files and directories corrupted, and the system is unusable. Reinstall everything from scratch onto another drive.
    Rinse and repeat: did this 3 times. Then I found this analysis:
    http://lkcl.net/reports/ssd_analysis.html
    I also suspect USB power interrupts more abruptly than SATA power, at shutdown - basically aggravating any power interruption damages. So now I'm going to:
    - buy an Intel S3500!
    - add commit=1 to my mount options in /etc/fstab
    - edit shutdown procedure to add a 5-10 sec pause after unmounting drives.
    Just my two cents.
    Andrea

  • FCPX .0.7 taking much longer to export projects than .06

    Hi
    Like some others, I'm finding that exporting projects, both old and new, takes much longer in .07 than in .06. The workaround seems to be using Compressor, which I don't have. I'm also not getting larger files on export, just longer export times.
    Any ideas on why this is happening and what can be done to fix it?

    1) Which codec are you exporting to?
    When exporting, FCPX is "rendering" and thus accessing the CPU core(s). FCPX 10.0.7 had something fundamental changed in it... certain operations really sped up while others slowed down.
    Perhaps this is what you are experiencing.
    When you say slower... do you mean... half as fast?

  • DIAdem taking a long time to load 700MB TDMS file to data portal

    I'm trying to load a 700 MB TDMS file into DIAdem: 4 channels, 10 Hz sampling rate, over 4 days. When I drag the file into the Data Portal, DIAdem freezes up. It's on a 2 GB machine, the file is located on the desktop, and no other programs are running; it went 50 minutes without being able to load before I made it quit. It seems to work fine on smaller files, though. I was wondering if this sounds normal and if anyone knows a way around it.
    yin

    Thanks for all your input.
    YongqinYe and Brad: I've tried defragmenting, but this also seems to be taking a long time (in excess of ) on 2 different machines. The index file is 300 MB, so about 75% of the *.tdms.
    Otmar: I also tried registry loading the data, but this still took a long time on this particular file, although it worked great on smaller files (5 MB).
    Brad: the file does not open in the LabVIEW TDMS viewer, although I'm able to view it in chunks of 65000 using the TDMS importer for Excel.
    Stefan: the file does have an explicit timestamp channel.
    Any extra help would be appreciated, but the way I'm thinking of doing it now is to use a much longer buffer size (from only 6 to a few hundred) when collecting the data, to decrease the level of fragmentation.

  • Discoverer report taking too long time to open.

    HI,
    Discoverer reports are taking too long to open. Please help to resolve this.
    Regards,
    Bhatia

    What are the Discoverer and Application releases?
    Please refer to the following links (for both Discoverer 4i and 10g). Please note that some Discoverer 4i notes also apply to Discoverer 10g.
    Note: 362851.1 - Guidelines to setup the JVM in Apps Ebusiness Suite 11i and R12
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=362851.1
    Note: 68100.1 - Discoverer Performance When Running On Oracle Applications
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=68100.1
    Note: 465234.1 - Recommended Client Java Plug-in (JVM/JRE) For Discoverer Plus 10g (10.1.2)
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=465234.1
    Note: 329674.1 - Slow Performance When Opening Plus Workbooks from Oracle 11.5.10 Applications Home Page
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=329674.1
    Note: 190326.1 - Ideas for Improving Discoverer 4i Performance in an Applications 11i Environment
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=190326.1
    Note: 331435.1 - Slow Perfomance Using Disco 4.1 Admin/Desktop in Oracle Applications Mode EUL
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=331435.1
    Note: 217669.1 - Refreshing Folders and opening workbooks is slow in Apps 11i environment
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=217669.1

  • Taking too long time to get LOV

    HI,
    I have created a custom folder in which the query returns 0.5 million records.
    I have created an item class on the airline_name column, which is used in the worksheet as a parameter.
    The problem is that it is taking too long, about 2 minutes, to get the LOV when the user wants to search for the exact name.
    Thanks,
    Himanshu Tiwari

    Hi,
    Usually, you should not use the folder that the report is based on to define the LOV. You should use a separate folder to define the LOV, one that is optimised to return the contents of the LOV.
    Rod West

  • My MacBook Pro (purchased in 2011) is taking 10 minutes to boot-up.  It's also taking much longer to open files.  I have 650 GB out of 700 GB left on my machine so it's not that I've overloaded the memory.  Is anyone else having this problem?

    My MacBook Pro (purchased in 2011) is taking 10 minutes to boot-up.  It's also taking much longer to open files.  I have 650 GB out of 700 GB left on my machine so it's not that I've overloaded the memory.  Is anyone else having this problem?

    Look at this comprehensive troubleshooting document:
    https://discussions.apple.com/docs/DOC-3353
    I suggest that you start with SMC and PRAM resets.
    Then  a Safe Boot.
    Ciao.
