Datapump: expdp taking considerably longer than exp

Does anyone know why Data Pump export takes considerably longer than Oracle's deprecated exp utility?
With a few schemas, Data Pump is rather fast. The more schemas you have, the longer a Data Pump export takes. The deprecated exp utility, however, remains fast.
Some statistics I've gathered:
12 schemas: export takes ~30 seconds
128 schemas: export takes ~60 seconds
152 schemas: export takes ~80 seconds
328 schemas: export takes ~5 minutes (!)
After I dropped schemas and reduced their number to 68, expdp became fast again: ~60 seconds.
Does anyone have a hint about where I can make performance modifications? Oracle's documentation states that Data Pump is faster than the deprecated export and that the deprecated exp utility shouldn't be used anymore. Currently the opposite is true for us, and we are at a big disadvantage since we are using Data Pump.
The expdp job "hangs" for a very long time after the following logging output:
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Our Oracle version: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
Edited by: user8995776 on Dec 18, 2010 8:29 AM

I want to add: I checked the trace files, and it seems I'm stuck with the same problem as this poster:
http://www.freelists.org/post/oracle-l/Any-valid-security-concerns-using-Data-Pump-over-conventional-expimp,7
This query takes very long; it's exactly the same as his:
select /*+rule*/
  sys_xmlgen(value(ku$), xmlformat.createformat2('TABLE_DATA_T', '7')),
  0, ku$.base_obj.name, ku$.base_obj.owner_name, 'TABLE',
  to_char(ku$.bytes_alloc), to_char(ku$.et_parallel), ku$.fgac,
  ku$.nonscoped_ref, ku$.xmlschemacols, ku$.name, ku$.name, 'TABLE_DATA',
  ku$.part_name, ku$.parttype, ku$.property, ku$.refpar_level,
  ku$.schema_obj.owner_name, ku$.ts_name, ku$.schema_obj.name,
  ku$.trigflag,
  decode(ku$.schema_obj.type_num,
         2, decode(bitand(ku$.property, 8224),
                   8224, 'NESTED PARTITION',
                   8192, 'NESTED TABLE',
                   'TABLE'),
         19, decode(bitand(ku$.property, 8224),
                    8224, 'NESTED PARTITION',
                    'PARTITION'),
         20, 'PARTITION',
         'SUBPARTITION'),
  to_char(ku$.unload_method), ku$.xmltype_fmts
from sys.ku$_10_2_table_data_view ku$
where not (bitand(ku$.base_obj.flags, 128) != 0) and
      not (bitand(ku$.base_obj.flags, 16) = 16) and
      ku$.base_obj.obj_num in (
        select *
        from table(DBMS_METADATA.fetch_objnums(100001)))
And it's the same for me: the more objects, the longer the expdp process takes. The operation does finish eventually.
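For what it's worth, a workaround commonly suggested for slow Data Pump metadata queries on this release is to refresh the dictionary and fixed-object statistics before the export. This is an assumption on my part, not something confirmed in this thread:

```sql
-- Hedged sketch: routine maintenance often recommended before large Data
-- Pump jobs on 11.1. Whether it helps this particular RULE-hinted metadata
-- query is not confirmed in this thread.
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
```

Both procedures are standard DBMS_STATS calls and only need to be rerun occasionally, e.g. after creating or dropping many schemas.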

Similar Messages

  • I moved my music from the c drive to the d drive. All of my music is in itunes but my ipod won't sync with itunes. The syncing process is taking much longer than usual too. I left my ipod over night to sync and it didnt finish. Fails to sync every time.

I moved my music from the C drive to the D drive. All of my music is in iTunes, but my iPod won't sync with iTunes. The syncing process is taking much longer than usual too. I left my iPod overnight to sync and it didn't finish. It fails to sync every time. I tried to restore my iPod and it didn't help.

    Ignore.  I figured it out:)

  • I am syncing my iPhone now, and step 6 of 6 is taking significantly longer than it usually does. Is there a way to tell how long it will take, or complete the process?

    I am syncing my iPhone now, and step 6 of 6 is taking significantly longer than it usually does. Is there a way to tell how long it will take, or complete the process?

Warp Stabilizing is a two-part process:
Analysis and Stabilizing.
Both have a banner across the Program Monitor during the process (unless you have turned that off).
Analysis starts automatically on first application... but you can go in there and start adjusting the parameters.
You may need to hit the "Analyze" button again.
You should see the number of frames and the estimated duration during this process.

  • HT5521 I have a lightning to usb cable which is 2m in length.  Is there any known issues with trying to recharge an iPad 4 with this longer length.  It seems to me it is taking much longer than a 1m cord to recharge.

    I have a lightning to usb cable which is 2m in length.  Is there any known issues with trying to recharge an iPad 4 with this longer length?  It seems to me it is taking much longer than a 1m cord to recharge.

    Axel,
    I'm afraid a new SSD won't be different from your bad USB stick. I had similar issues over USB with both a (no-brand) stick and TWO 64gb Kingston SSDNow's (running Kubuntu 12.04 with Kernel 3.11, ia_64): it all runs exceptionally well (wanna know how it feels like booting in 5 sec?) for a few days - then suddenly you find yourself facing that dreaded (initramfs) prompt. You ask yourself: why? Did I upgrade grub lately? Did I upgrade the Kernel? I don't recall so. Ok, let's fix this... insert favorite live cd, boot, fsck...what???? THOUSANDS of errors??? Hundreds of files and directories corrupted, and the system is unusable - Reinstall everything from scratch onto another drive.
    Rinse and repeat: did this 3 times. Then I found this analysis:
    http://lkcl.net/reports/ssd_analysis.html
    I also suspect USB power interrupts more abruptly than SATA power, at shutdown - basically aggravating any power interruption damages. So now I'm going to:
    - buy an Intel S3500!
    - add commit=1 to my mount options in /etc/fstab
    - edit shutdown procedure to add a 5-10 sec pause after unmounting drives.
    Just my two cents.
    Andrea
    Last edited by andreius (2013-12-29 16:51:04)

  • FCP Export Taking WAY LONGER Than Usual.

    Hi there, I've been using Final Cut Pro to edit video from a 1080p Canon Camcorder for the past 10 months, and since last week, whenever I'm done with the editing process and I decide to export the Video in 720p HD with H264 with AAC Audio, the exports take A LOT LONGER than they took before.
It can take up to 2 hours to export a short 3-minute video,
whereas it USED TO take only about 30 minutes for the same footage before.
    Is there anything I'm forgetting, any setting I've accidentally changed or something?
    Please help me out,
    Thank you.
    (I'm using FINAL CUT PRO 7)

I still can't figure it out:
a 3-minute video that was about 300 MB when exported with QuickTime Conversion from FCP7 is now about 1.25 GB. I don't remember having changed any settings; I've even tried trashing preferences, but still no luck.
*An old 3-minute video, shot with the same HD camcorder, exported at 257.9 MB with a total bitrate of 11,715 (EXPORTED FILE)*
*But the file I'm having size issues with right now is also 3 minutes long (shot with the same camcorder), and the exported file is 1.25 GB with a bitrate of 58,693*
These are the QUICKTIME CONVERSION settings I've always used:
    VIDEO:
    Compression: H.264
    Quality: Best
    Key frame rate: 24
    Frame reordering: yes
    Encoding mode: multi-pass
    Dimensions: 1280x720
    SOUND:
    Format: AAC
Sample Rate: 44.100 kHz
    Channels: Stereo (L R)
    Bit Rate: 320 kbps
    I've been using these same settings before and I haven't changed anything, but the file size has tripled on exports.
    *Sequence Settings:*
    QUICKTIME VIDEO SETTINGS:
    APPLE PRORES 422
    Quality 100%
And btw, I always DO A FULL "RENDER ALL" + "RENDER MIXDOWN" before exporting.
Are there any settings I should make sure haven't changed?
What is my problem?

  • Insert + Update code in 11g taking WAY longer than 9 ?

    Hi guys
    Hoping to get a few suggestions as to what might be causing a problem I'm having.
    I've got a routine in a form that was previously running against a version 9 database, and is now running against an 11g database.
    The routine has a series of cursor loops which results in a number of inserts and updates on the database.
    I have in this routine a call to a procedure which inserts a record into a logging table, with the time it is called. With this I can gauge how long the routine is taking.
    On the version 9 database, the log of inserts is as follows :
    After x rows, time taken (mins, secs)
    1000 rows - 1.04
    4000 rows - 4.38
    9000 rows - 9.43
    13000 rows - 14.01
    18000 rows - 15.58
So, as a very approximate value, 1000 rows a minute consistently through to the end at 18000.
The version 11 database is an import of the version 9 database, so it has all the same constraints, indexes, triggers etc.
The code disables foreign key constraints before executing; it doesn't drop the indexes.
    On the version 11 database, the log of inserts is as follows :
    1000 rows - 0.30
    4000 rows - 3.58
    9000 rows - 19.58
    13000 rows - 41.46
    18000 rows - 80.45
So rather than staying at roughly 1000 rows per minute, the time to complete goes up and up as more rows are inserted.
This would point at indexing, maybe? But since there is only one index on the tables being inserted into, which is the same on the 9 and 11 databases, this ought not to be the case.
Can anyone suggest why the version 11 database is taking so much longer, and why the time to process is increasing rather than staying constant throughout?
I'm at a loss as to what to look for now.
    Thanks a lot
    Scott

Scott Hillier wrote:
> The code has to gather data from several tables, perform some fairly complex logic on it, building up record structures before inserting into a number of tables and updating several others. None of which can be done with a simple merge statement.
> It has to be done using cursors, both implicit and explicit, to retrieve the data first.
"Has to" is a bit presumptive. SQL has got very powerful constructs and features that can (pretty much) do anything you require. It may be complex SQL, but it will process all the rows of data in one go rather than doing continuous context switching.
    Here's a basic example of the time difference between running stuff as a single SQL and context switching...
    [code]
    SQL> ed
    Wrote file afiedt.buf
      1  declare
      2    v_sysdate DATE;
      3  begin
      4    v_sysdate := SYSDATE;
      5    INSERT INTO mytable SELECT rownum FROM DUAL CONNECT BY ROWNUM <= 1000000;
      6    DBMS_OUTPUT.PUT_LINE('Single Transaction: Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
      7    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
      8    v_sysdate := SYSDATE;
      9    FOR i IN 1..1000000
    10    LOOP
    11      INSERT INTO mytable (x) VALUES (i);
    12    END LOOP;
    13    DBMS_OUTPUT.PUT_LINE('Multi Transaction: Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    14    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
    15* end;
    SQL> /
    Single Transaction: Time Taken: 1
    Multi Transaction: Time Taken: 37
    PL/SQL procedure successfully completed.
    SQL>
[/code]
One hell of a difference, and it can get worse depending on how many switches are done in a loop.
> There's no commit until the end.
That's good.
> So, feel free to suggest how I go about gathering data to build up the various records without using cursors.
Without knowing your structures, data, requirements etc., how can we tell, apart from saying that maximising SQL and minimising PL/SQL will improve performance?
> Even if you do - it still wouldn't explain why tuned code which runs in 18 minutes on one box is running at 90 minutes on another.
It could be a configuration factor of the database, but equally it could be a number of other factors.
Unless you provide comparative explain plans for the queries, how can we tell?

  • KeyNote '08 Taking Much Longer than KeyNote '06

    Hello All,
Up to a month ago, we were running iWork '06 on a 2.16 GHz (white plastic, serial # W87...) iMac. We just upgraded to iWork '08 on a 2.4 GHz (metal, serial # W88...) iMac.
We use large (5 MB) animated GIFs in our Keynote presentations. The initial Keynote setup would process an animated GIF onto the slide (as indicated by the presence of the spinning "color wheel") within a few seconds. This newer setup, however, takes 2-3 minutes.
    This is a big step backward for us. Any suggestions as to how we can speed up this processing?
    Thanks,
    Shawn Rampy
    Message was edited by: wxman123

1) Which codec are you exporting to?
When exporting, FCPX is "rendering" and thus accessing the CPU core(s). FCPX 10.0.7 had something fundamental changed in it... certain operations really sped up while others slowed down.
Perhaps this is what you are experiencing.
When you say slower... do you mean half as fast?

  • Importing clips longer than 4min - crashes computer - genral error34 in FCP

Importing clips longer than 4 min crashes the computer - general error 34 in FCP
Posted: Dec 21, 2008 10:48 AM
FCP HD 4.5 is not taking clips longer than 4 min, and in FCP I'm getting (general error 34) in my audio/video settings. The computer crashes and the external hard drives go offline.
In the past, FCP Rescue fixed it.

You need to check that your versions of the OS, FCP and QuickTime are compatible. Also, is your scratch disk formatted Mac OS Extended? If not, it should be.

  • After formatting and creating two partitions on my external HD, why is Time Machine taking 25x longer to perform a backup?

    Hi.
I bought a 13" Retina MacBook Pro, new, two weeks ago, and a G-Tech 1TB external drive from the Apple Store. It's my first Mac.
I performed a backup using Time Machine, which copied 18 GB onto the 1TB external HD in around 20 minutes.
    After a little more reading, I realised I should use Disk utility to partition the 1TB external G-tech drive into a 250gb partition, and a 750gb partition.
    [My MBP has a 250gb flash HD. I'll use the other 750gb for storage of my other files, like photos, music etc.]
So, using Disk Utility I formatted the 1TB external drive, then created two partitions.
However, I've started backing up my data using Time Machine again, onto the 250 GB partition on the external drive, and it's now taking 5 hours to copy the 19.81 GB.
So why is it taking 25x longer than the first time I did it? (The first time it took 20 minutes.)
    Thanks

Since performing the first backup a couple of days ago, which took twenty minutes, and the second backup, which is taking 5 hours, I installed MS Office for Mac 2011 and Silverlight.
I don't know if that's responsible?

  • IOS 8.0.2 download that is taking too long?

I have an iPhone 4s and downloaded iOS 8.0.2 onto my iTunes and onto my phone within 2 hours. My sister has an iPhone 5, and iOS 8.0.2 has already been downloaded onto her iTunes. The only process left is to have the download completed on her phone, but that is taking much longer than the actual iOS 8 download to iTunes. The download bar on the phone has almost reached the end, but has been stuck at the same place for the last hour. (This did not happen with my phone.)
I have unplugged the phone from the computer, but the download bar is still visible at the same point, which has not moved a bit. How can this be fixed? How can I increase the download speed? Or do I have to patiently wait? Please help!

    If you unplugged the phone from the computer, you have already interrupted the download and install process. An update via iTunes is not a Wifi update, so once it's unplugged, it is not going to start up again.
    Reset the device: Hold down the Home and Power buttons at the same time and continue to hold them down until the Apple appears (up to 30 seconds). Once the reset is complete, the "Slide to Unlock" screen will display. Go to Settings>General>About, and see what version number is showing. If it is 8.0.2, then you should be good to go. If it is not, hook the device back up to iTunes and start the update again.
    Cheers,
    GB

  • Expdp is faster than exp

Hi Experts,
How is expdp faster than exp? What did Oracle change internally to speed up Data Pump?
How is impdp faster than imp?
Thanks,

exp/imp are clients. Communication between the database instance and the exp utility that writes the export dump file (or from the imp utility to the database instance) goes through SQL*Net.
expdp/impdp are server processes. These processes attach to the SGA.
(For those who know: we used to do impst and expst a long time ago. Even in 10g it is still possible to create the expst/impst binaries. The "st" stands for SingleTask. Client-Server is not SingleTask.)
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
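To illustrate the client/server difference described above (the connect string, schema, directory object and file names below are placeholders, not taken from this thread):

```shell
# Data Pump runs inside the database server: it writes through a DIRECTORY
# object on the server's file system and can unload in parallel.
expdp system/password@orcl SCHEMAS=hr DIRECTORY=dpump_dir \
      DUMPFILE=hr_%U.dmp PARALLEL=4 LOGFILE=hr_exp.log

# The legacy exp utility is a client: every row travels through SQL*Net
# to the client process, which writes the dump file locally.
exp system/password@orcl OWNER=hr FILE=hr.dmp LOG=hr_exp.log
```

Note that with expdp the dump files land on the server, not on the machine running the command.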

  • Using expdp to do a schema export is taking extremely long time

I am using expdp in schema mode to export a 300-gigabyte database.
The job status reported 99% complete after about 2 hours.
But now the job has been running for 30 hours and has not finished.
I can see that it is exporting the domain indexes and has been exporting
the last index for the last 5 hours. Something is not working, because I
looked at the table the index is using and it has no data. So why is it taking
so long to export an index that has no data?
Can someone tell me if there is a way to bypass exporting indexes, and an easy way
to recreate the indexes if you do?
I am using Oracle 11g and the expdp utility.

    I checked the log file and there are no errors in the file.
    There are no ORA- xxxx error messages.
    The last line in the log file is as follows:
    "Processing object type schema_export/table/index/domain_index/index "
I just checked the export job this morning and it is still on the same
index object, "A685_IX1". This is a spatial index. It has been sitting at
this same object, according to the job status, for at least 24 hours.
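One way to sketch the bypass the poster asks about: skip indexes in the data export, and pull the index DDL out of a separate metadata-only dump. The directory, dump file and schema names here are hypothetical; EXCLUDE, CONTENT, SQLFILE and INCLUDE are standard Data Pump parameters:

```shell
# 1. Export the data without index metadata:
expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=scott_data.dmp \
      SCHEMAS=scott EXCLUDE=INDEX

# 2. Take a separate, fast metadata-only export that still contains
#    the index definitions:
expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=scott_meta.dmp \
      SCHEMAS=scott CONTENT=METADATA_ONLY

# 3. Generate a script of the index DDL from the metadata dump without
#    importing anything (SQLFILE writes DDL only, it loads no data):
impdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=scott_meta.dmp \
      SQLFILE=create_indexes.sql INCLUDE=INDEX
```

After importing the data dump, the generated create_indexes.sql can be run to rebuild the indexes.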

  • MOSS 2007 Search - Crawling is taking longer than usual time since last month for same content sources

    Hi all,
Of late we have discovered that content crawling is taking longer than expected and is also overlapping into the next scheduled crawl. Literally no crawl logs are seen for hours; there are no entries in the crawl logs. Is anyone out here having a similar issue? Please share a solution
if any is found.
My farm is implemented with MOSS 2007 SP2, version 12.0.0.6554.
There is no packet drop between the index server, app server and SQL Server/cluster.
    Thank you in advance,
    Reach Ram
    Ramakrishna Pulipati SharePoint Consultant, Bangalore, INDIA

    I believe this is ready for submission for the Time Machine forum.
    As noted, it does not cover diagnosis and correction of specific problems or errors, as that would seem to be better handled separately.
    It also doesn't cover anything about Time Capsule, for the very good reason that I'm not familiar with them. If someone wants to draft a separate post for it, or items to add/update here, that would be great!
    Thanks very much.

  • Satellite L735 taking longer than usual to start-up

My Toshiba Satellite L735 was taking longer than usual to start up (more than a minute; it usually takes 40 secs). I gave it to the service centre; they said they updated the BIOS, optimised processor calculation, and formatted it.
It was fine again, but the problem was back in less than a month. I took it in again, and the person at the counter just removed the unnecessary programs from the boot list. I came back home to notice it was taking even longer to start up (more than 2 mins).
When I switch it on, it's all fine; I get the Windows logo in no time, but after that a black screen appears and stays for about a minute and a half, then the home screen appears. The Windows startup sound that's supposed to play as soon as the home screen appears plays about 40 secs after the home screen has appeared.
Please help.
Thank you.

Hi
The notebook seems to load Windows slowly because too many processes are loaded at startup.
First of all, I recommend downloading and installing CCleaner. This freeware tool helps you clean the registry and remove garbage from the system.
Furthermore, you should check which processes are loaded at startup.
Start msconfig and go to the Startup tab. Here you can disable unnecessary and unimportant processes.
I don't know how much computer experience you have, so be careful with that.
If you don't know exactly what a single process does, Google for more details.
Removing the mark next to a process disables it; you can enable it again later too.
Furthermore, I would recommend checking your antivirus software. I switched from Norton Antivirus to Avira AntiVir because it is much faster and does not need a lot of hardware resources; Avira AntiVir is freeware, so you can install and use it for free.
Last but not least, defragment the HDD to ensure faster data access.
Good luck

  • Why this Query is taking much longer time than expected?

Hi,
I need the experts' support on the issue below:
Why is this query taking much longer than expected? Sometimes I am getting a connection timeout error. Is there any better way to achieve the result in the shortest time? Below, please find the DDL & DML:
    DDL
    BHDCollections
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[BHDCollections](
     [BHDCollectionid] [bigint] IDENTITY(1,1) NOT NULL,
     [GroupMemberid] [int] NOT NULL,
     [BHDDate] [datetime] NOT NULL,
     [BHDShift] [varchar](10) NULL,
     [SlipValue] [decimal](18, 3) NOT NULL,
     [ProcessedValue] [decimal](18, 3) NOT NULL,
     [BHDRemarks] [varchar](500) NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_BHDCollections] PRIMARY KEY CLUSTERED
 ( [BHDCollectionid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    BHDCollectionsDet
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[BHDCollectionsDet](
     [CollectionDetailid] [bigint] IDENTITY(1,1) NOT NULL,
     [BHDCollectionid] [bigint] NOT NULL,
     [Currencyid] [int] NOT NULL,
     [Denomination] [decimal](18, 3) NOT NULL,
     [Quantity] [int] NOT NULL,
     CONSTRAINT [PK_BHDCollectionsDet] PRIMARY KEY CLUSTERED
 ( [CollectionDetailid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    Banks
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[Banks](
     [Bankid] [int] IDENTITY(1,1) NOT NULL,
     [Bankname] [varchar](50) NOT NULL,
     [Bankabbr] [varchar](50) NULL,
     [BankContact] [varchar](50) NULL,
     [BankTel] [varchar](25) NULL,
     [BankFax] [varchar](25) NULL,
     [BankEmail] [varchar](50) NULL,
     [BankActive] [bit] NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_Banks] PRIMARY KEY CLUSTERED
 ( [Bankid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    Groupmembers
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[GroupMembers](
     [GroupMemberid] [int] IDENTITY(1,1) NOT NULL,
     [Groupid] [int] NOT NULL,
     [BAID] [int] NOT NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_GroupMembers] PRIMARY KEY CLUSTERED
 ( [GroupMemberid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[GroupMembers]  WITH CHECK ADD  CONSTRAINT [FK_GroupMembers_BankAccounts] FOREIGN KEY([BAID])
    REFERENCES [dbo].[BankAccounts] ([BAID])
    GO
    ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_BankAccounts]
    GO
    ALTER TABLE [dbo].[GroupMembers]  WITH CHECK ADD  CONSTRAINT [FK_GroupMembers_Groups] FOREIGN KEY([Groupid])
    REFERENCES [dbo].[Groups] ([Groupid])
    GO
    ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_Groups]
    BankAccounts
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[BankAccounts](
     [BAID] [int] IDENTITY(1,1) NOT NULL,
     [CustomerID] [int] NOT NULL,
     [Locationid] [varchar](25) NOT NULL,
     [Bankid] [int] NOT NULL,
     [BankAccountNo] [varchar](50) NOT NULL,
     CONSTRAINT [PK_BankAccounts] PRIMARY KEY CLUSTERED
 ( [BAID] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[BankAccounts]  WITH CHECK ADD  CONSTRAINT [FK_BankAccounts_Banks] FOREIGN KEY([Bankid])
    REFERENCES [dbo].[Banks] ([Bankid])
    GO
    ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Banks]
    GO
    ALTER TABLE [dbo].[BankAccounts]  WITH CHECK ADD  CONSTRAINT [FK_BankAccounts_Locations1] FOREIGN KEY([Locationid])
    REFERENCES [dbo].[Locations] ([Locationid])
    GO
    ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Locations1]
    Currency
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[Currency](
     [Currencyid] [int] IDENTITY(1,1) NOT NULL,
     [CurrencyISOCode] [varchar](20) NOT NULL,
     [CurrencyCountry] [varchar](50) NULL,
     [Currency] [varchar](50) NULL,
     CONSTRAINT [PK_Currency] PRIMARY KEY CLUSTERED
 ( [Currencyid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    CurrencyDetails
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[CurrencyDetails](
     [CurDenid] [int] IDENTITY(1,1) NOT NULL,
     [Currencyid] [int] NOT NULL,
     [Denomination] [decimal](15, 3) NOT NULL,
     [DenominationType] [varchar](25) NOT NULL,
     CONSTRAINT [PK_CurrencyDetails] PRIMARY KEY CLUSTERED
 ( [CurDenid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    QUERY
WITH TEMP_TABLE AS (
    SELECT     0 AS COINS, BHDCollectionsDet.Quantity AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
                          (BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
    FROM         BHDCollections INNER JOIN
                          BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
                          GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
                          BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
                          Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
                          CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
                          Banks ON BankAccounts.Bankid = Banks.Bankid
    GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
                          CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
    HAVING      (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'Currency') AND
                          (CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)
    UNION ALL
    SELECT     BHDCollectionsDet.Quantity AS COINS, 0 AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
                          (BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
    FROM         BHDCollections INNER JOIN
                          BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
                          GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
                          BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
                          Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
                          CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
                          Banks ON BankAccounts.Bankid = Banks.Bankid
    GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
                          CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
    HAVING      (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'COIN') AND
                          (CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)),
TEMP_TABLE2 AS (
SELECT CollectionDate,Bankname,DSLIPS AS DSLIPS,SUM(BN) AS BN,SUM(COINS)AS COINS  FROM TEMP_TABLE Group By CollectionDate,DSLIPS,Bankname)
    SELECT CollectionDate,Bankname,count(DSLIPS) AS DSLIPS,sum(BN) AS BN,sum(COINS) AS coins FROM TEMP_TABLE2 Group By CollectionDate,Bankname
    HAVING COUNT(DSLIPS)<>0;

Without seeing an execution plan of the query it is hard to suggest something useful. Try inserting the result of the UNION ALL into a temporary table and then performing the aggregation on that table, not on a CTE.
    Just
    SELECT CollectionDate,Bankname,DSLIPS AS DSLIPS,SUM(BN) AS BN,SUM(COINS)AS COINS  FROM
    #tmp Group By CollectionDate,DSLIPS,Bankname
    HAVING COUNT(DSLIPS)<>0;
    Best Regards,Uri Dimant SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/
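The temporary-table approach the respondent suggests could be sketched like this (#tmp and #tmp2 are hypothetical local temp tables; the column names come from the original query):

```sql
-- Materialise the UNION ALL once instead of re-deriving it through a CTE.
SELECT COINS, BN, CollectionDate, Currencyid, DSLIPS, Bankname
INTO #tmp
FROM ( /* the original UNION ALL of the two SELECTs goes here */ ) AS u;

-- First aggregation (the original TEMP_TABLE2 step):
SELECT CollectionDate, Bankname, DSLIPS,
       SUM(BN) AS BN, SUM(COINS) AS COINS
INTO #tmp2
FROM #tmp
GROUP BY CollectionDate, DSLIPS, Bankname;

-- Final result, as in the original outer SELECT:
SELECT CollectionDate, Bankname, COUNT(DSLIPS) AS DSLIPS,
       SUM(BN) AS BN, SUM(COINS) AS coins
FROM #tmp2
GROUP BY CollectionDate, Bankname
HAVING COUNT(DSLIPS) <> 0;
```

Materialising once means the expensive joins run a single time, and the optimizer can use statistics on the temp table for the aggregation steps.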
