Report download is slow for large data sets

APEX 4.1
I have a classic report which retrieves around 100,000+ (1 lakh) rows.
Downloading the report as Excel takes 5-10 minutes.
Is there any solution for this?
Can we download in sets, e.g. the first 1,000 records and then the next 1,000, and so on?

What I understood is that we can export the CSV in the background using Kubicek's export_to_excel package.
1. We can provide a button to execute the procedure. - jfosteroracle says the custom CSV download was slow.
2. Use a job to generate the Excel file in the background. - need to check with the client whether they wish to go ahead with this.
Correct. You need to use the custom package and a button on the page to submit the request, so the report is generated in the back-end.
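For example, a page process behind that button could submit a one-off DBMS_SCHEDULER job that runs the export in the background. A minimal sketch only; the procedure my_export_pkg.export_report_to_csv and its parameter are placeholders for whatever export routine is actually used (e.g. one built on Kubicek's package):
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name   => 'EXPORT_REPORT_CSV',
    job_type   => 'PLSQL_BLOCK',
    -- placeholder call: swap in the real export procedure and its arguments
    job_action => 'BEGIN my_export_pkg.export_report_to_csv(p_report_id => 101); END;',
    start_date => SYSTIMESTAMP,
    enabled    => TRUE,
    auto_drop  => TRUE,
    comments   => 'One-off background CSV export for the classic report');
END;
/
The job runs asynchronously, so the page returns immediately and the user can be notified (or pick the file up from a table or directory) once the export finishes.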
Is it possible to zip a file first and then download it?
No. As far as I know, it's not possible to zip the file first and then download it.
Thanks
Lakshmi

Similar Messages

  • XML Solutions for Large Data Sets

    Hi,
    I'm working with a large data set (9 million records comprising 36 gigabytes) and am exploring the use of XML with it.
    I've experimented with a JDBC app (taken straight from Steve Muench's excellent Oracle_XML_Applications) for writing to CLOBs, but achieve throughputs of much less than 40k/s (the minimum speed required to process the data in < 10 days).
    What kind of throughputs are possible loading XML records from CLOBs into multiple tables (using server-side Java apps)?
    Could anyone comment whether XML is a feasible possibility for this size data set?
    Regards,
    Mike

    Just would like to identify myself (I'm the submitter):
    Michael Driscoll <[email protected]>.

  • Best Version of SQL Server to Run on Windows 7 Professional for Large Data Sets

    My company will soon be upgrading my work PC from XP to Windows 7 Professional. I am currently running SQL Server 2000 on my PC and use it to load and analyze large volumes of data. I often need to work with 3GB to 5GB of data, and have had databases reach 15GB in size. What would be the best version of SQL Server to install on my PC after the upgrade? SQL Server Express just won't cut it. I need more than 2GB of data and the current version of DTS functionality to load and transform data.
    Thanks.
    Thanks.

    Hi,
    It's difficult to say what would be best for you. You can install SQL Server 2012 Standard Edition, because that is supported on Windows 7 SP1; Enterprise Edition is not supported. SQL Server 2012 Express now has a database size limit of 10 GB, which does not include FILESTREAM size or log file size, but Express does not provide SSIS features.
    Have a look at the features supported by the various editions of SQL Server; it will help you decide.
    My inclination is towards SQL Server 2012 because it is now on SP2 and is more stable than 2014 (my personal opinion).
    Please mark this reply as answer if it solved your issue, or vote as helpful if it helped, so that other forum members can benefit from it.

  • Looping is very slow on large data set

    Hi All,
    I need suggestions on optimizing the looping scenario below.
    Sample data is shown in the table below; for easy understanding I kept only 4 columns, while the actual source has 56 columns.
    Input :
    #Final
    RwNum  JobSource       RuleName    PackageType
    -----  --------------  ----------  -----------
    1      Presubmission   GameRating  Xap
    2      PostSubmission  GameRating  Xap
    3      Presubmission   GameRating  NULL
    4      Presubmission   TCRRule     Xap
    5      PostSubmission  NULL        Xap
    6      Submission      NULL        Xap
    I need to iterate row by row in the above table and compare each row with the rest of the table, i.e. first get the data for RwNum = 1 and compare it with all the other rows (RwNum 2, 3, 4, 5, 6) to merge the data into the output format below, then repeat the same process for all the remaining rows.
    Expected  Output :
    #Final
    RwNum  JobSource       RuleName    PackageType
    -----  --------------  ----------  -----------
    1      Presubmission   GameRating  Xap
    2      PostSubmission  GameRating  Xap
    4      Presubmission   TCRRule     Xap
    6      Submission      NULL        Xap
    In the final output, RwNum 1 is the merged result of RwNum 1 & 3. Similarly, RwNum 2 in the output is the merged result of 2 & 5, and the other records remain as is.
    So the query I wrote is below:
    WHILE (@TopRwNum <= 6)
    BEGIN
        WHILE (@InnerRwNum <= 6)
        BEGIN
            SELECT
                ---- Columns list
            FROM #final Fr
            JOIN #final Ne
              ON Fr.RwNum = @TopRwNum
             AND Ne.RwNum = @InnerRwNum
             AND @TopRwNum <> @InnerRwNum
            WHERE
                --- Conditional logic to compare and merge the data
            SET @InnerRwNum = @InnerRwNum + 1   -- advance inner loop counter
        END
        SET @InnerRwNum = 1                     -- reset inner loop counter
        SET @TopRwNum = @TopRwNum + 1           -- advance outer loop counter
    END
    The above query executes in about 10 seconds.
    The query works when the row count is small, but if #final has ~1000 rows then the code has to iterate 1000 * 1000 = 1,000,000 times (the outer and inner loops each run 1000 times), and the WHILE loop logic then takes around 20 minutes.
    Can we optimize the above code to make the iteration faster? Any ideas would be greatly appreciated.

    Hi All,
    Thanks for your replies. Sorry if I haven't followed the forum rules; please forgive me. I kept only images earlier as they help to understand the scenario better. Here I am posting the DDL scripts and also the expected output, with some more columns (I have used table variables here).
    DECLARE @Tab TABLE (
        RwNum INT,
        JobParentID UNIQUEIDENTIFIER,
        JobSource NVARCHAR(MAX),
        PackageType NVARCHAR(MAX),
        UpdateType NVARCHAR(MAX),
        IsAutoPassed NVARCHAR(256),
        IsCanceled NVARCHAR(256),
        IsSkipped NVARCHAR(256),
        Result NVARCHAR(256),
        Fired NVARCHAR(256),
        RuleName NVARCHAR(256)
    )
    INSERT INTO @Tab
    SELECT 1,'7ca42851-c3d2-42da-b5b9-d40392ae24fb','PreSubmissionSource','Xap',         'FullUpdate','','','FALSE','Pass','FALSE','RuleValidationResult^XboxLive'
    UNION
    SELECT 2,'7ca42851-c3d2-42da-b5b9-d40392ae24fb','PostSubmission','Xap',         'PartialUpdate','FALSE','','TRUE','Failed','FALSE','RuleValidationResult^GameCategoryNameChange'
    UNION
    SELECT 3,'7ca42851-c3d2-42da-b5b9-d40392ae24fb','PreSubmissionSource','Xap',         '','TRUE','TRUE','','Pass','FALSE','RuleValidationResult^XboxLive'
    UNION
    SELECT 4,'7ca42851-c3d2-42da-b5b9-d40392ae24fb','PreSubmissionSource','Xap',         '','TRUE','TRUE','','Pass','FALSE','RuleValidationResult^CumulativeDownload'
    UNION
    SELECT 5,'7ca42851-c3d2-42da-b5b9-d40392ae24fb','PostSubmission','Xap',         'PartialUpdate','FALSE','','TRUE','Failed','FALSE','PreinstalledPackage'
    UNION
    SELECT 6,'7ca42851-c3d2-42da-b5b9-d40392ae24fb','PreSubmissionSource','Xap',         'PartialUpdate','TRUE','TRUE','','Pass','FALSE','RuleValidationResult^CumulativeDownload'
    UNION
    SELECT 7,'7ca42851-c3d2-42da-b5b9-d40392ae24fb','PreSubmissionSource','Xap',         'FullUpdate','','','','Pass','','RuleValidationResult^XboxLive'
    UNION
    SELECT 1,'2004235d-af05-4e29-ab8d-50b80a088dd4','PreSubmissionSource','Xap',         'FullUpdate','TRUE','TRUE','','Pass','FALSE','RuleValidationResult^CumulativeDownload'
    UNION
    SELECT 2,'2004235d-af05-4e29-ab8d-50b80a088dd4','PreSubmissionSource','Xap',         'FullUpdate','TRUE','','FALSE','','FALSE','RuleValidationResult^CumulativeDownload'
    SELECT * FROM @Tab  ORDER BY 2 DESC ,1  ASC
    The original source table doesn't have a RwNum column. In order to iterate row by row on the above table I have created the "RwNum" column. This column has a unique ID for every row of a JobParentID. The RwNum column is generated as below:
    ROW_NUMBER() OVER (ORDER BY JobParentID) AS RwNum,
    The output should be as below:
    DECLARE @Output TABLE (
        RwNum INT,
        JobParentID UNIQUEIDENTIFIER,
        JobSource NVARCHAR(MAX),
        PackageType NVARCHAR(MAX),
        UpdateType NVARCHAR(MAX),
        IsAutoPassed NVARCHAR(256),
        IsCanceled NVARCHAR(256),
        IsSkipped NVARCHAR(256),
        Result NVARCHAR(256),
        Fired NVARCHAR(256),
        RuleName NVARCHAR(256)
    )
    INSERT INTO @Output
    SELECT 1,'7ca42851-c3d2-42da-b5b9-d40392ae24fb',    'PreSubmissionSource',    'Xap',         'FullUpdate','TRUE','TRUE','FALSE','Pass','FALSE','RuleValidationResult^XboxLive'
    UNION
    SELECT 2,'7ca42851-c3d2-42da-b5b9-d40392ae24fb',    'PostSubmission', 'Xap',         'PartialUpdate','FALSE','','TRUE','Failed','FALSE','RuleValidationResult^GameCategoryNameChange'
    UNION
    SELECT 5,'7ca42851-c3d2-42da-b5b9-d40392ae24fb',    'PostSubmission', 'Xap',         'PartialUpdate','FALSE','','TRUE','Failed','FALSE','PreinstalledPackage'
    UNION
    SELECT 4,'7ca42851-c3d2-42da-b5b9-d40392ae24fb',    'PreSubmissionSource',    'Xap',         'PartialUpdate','TRUE','TRUE','','Pass','FALSE','RuleValidationResult^CumulativeDownload'
    UNION
    SELECT 1,'2004235d-af05-4e29-ab8d-50b80a088dd4',    'PreSubmissionSource',    'Xap',         'FullUpdate','TRUE','TRUE','FALSE','Pass','FALSE','RuleValidationResult^CumulativeDownload'
    SELECT * FROM @Output  ORDER BY 2 DESC ,1  ASC
    Merge rules to generate the above Output:
    All the below rules should be satisfied to merge two rows.
    1) We only need to merge data that is related to the same JobParentID.
    2) Data in any column of the two merging rows should not conflict, i.e. the value in a column of one merging row must not differ from the value in the same column of the other merging row (when both rows have a non-NULL value in that column).
    3) We can merge two rows only if the data in every column satisfies one of the below cases:
    case i) The value in the column of one row is NULL/empty and the value in the same column of the other row is a valid value, i.e. non-empty and not NULL.
    case ii) The value in a column of one row is equal to the value in the same column of the other row.
    Output analysis by applying the above rules:
    In the @Output analysis for JobParentID '7CA42851-C3D2-42DA-B5B9-D40392AE24FB':
    i) RwNum = 1 is formed as a result of merging RwNum 3 & RwNum 7 using the above rules. First RwNum 1 and 3 are merged, and the output of those two rows is then merged with RwNum 7.
    ii) RwNum = 2 is not merged with any other row, as no row in the table satisfies all the merge rules with RwNum = 2.
    iii) RwNum = 4 is formed as a result of merging RwNum 6 using the above rules.
    iv) RwNum = 5 is not merged with any other row, as no row in the table satisfies all the merge rules with RwNum = 5.
    In the @Output analysis for JobParentID '2004235D-AF05-4E29-AB8D-50B80A088DD4':
    i) RwNum = 1 is formed as a result of merging RwNum 2 using the above rules.
    In this way we want the rows to be merged in the table.
    Sorry if my earlier post was not understandable.
    Thanks in advance.
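    For what it's worth, the pair-finding part of this does not need nested WHILE loops at all: a single set-based self-join over @Tab can list every pair of rows that is eligible for merging. This is a rough sketch only, with the per-column checks abbreviated to three columns (extend the same pattern to the remaining columns):
    -- List candidate merge pairs: same JobParentID, and for each compared column
    -- the values either match or one side is blank (rules 1-3, abbreviated).
    SELECT a.JobParentID,
           a.RwNum AS BaseRow,
           b.RwNum AS CandidateRow
    FROM   @Tab AS a
    JOIN   @Tab AS b
      ON   b.JobParentID = a.JobParentID      -- rule 1: same parent job only
     AND   b.RwNum       > a.RwNum            -- consider each pair once
    WHERE (a.UpdateType   = b.UpdateType   OR ISNULL(a.UpdateType,   '') = '' OR ISNULL(b.UpdateType,   '') = '')
      AND (a.IsAutoPassed = b.IsAutoPassed OR ISNULL(a.IsAutoPassed, '') = '' OR ISNULL(b.IsAutoPassed, '') = '')
      AND (a.RuleName     = b.RuleName     OR ISNULL(a.RuleName,     '') = '' OR ISNULL(b.RuleName,     '') = '')
    Collapsing the surviving pairs (including chains such as 1 -> 3 -> 7) still needs a follow-up step, but the expensive 1000 x 1000 row-by-row comparison becomes a single join that the optimizer can handle.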

  • Working with Large data sets Waveforms

    When collecting data at a high rate (30 kHz) and for a long period (120 seconds), I'm unable to rearrange the data due to memory errors. Is there a more efficient method?
    Attachments:
    Convert2Dto1D.vi (36 KB)

    Some suggestions:
    Preallocate your final data before you start your calculations.  The build array you have in your loop will tend to fragment memory, giving you issues.
    Use the In Place Element to get data to/from your waveforms.  You can use it to get single waveforms from your 2D array and Y data from a waveform.
    Do not use the Transpose and autoindex.  It is adding a copy of data.
    Use the Array palette functions (e.g. Reshape Array) to change sizes of current data in place (if possible).
    You may want to read Managing Large Data Sets in LabVIEW.
    Your initial post is missing some information.  How many channels are you acquiring and what is the bit depth of each channel?  30kHz is a relatively slow acquisition rate for a single channel (NI sells instruments which acquire at 2GHz).  120s of data from said single channel is modestly large, but not huge.  If you have 100 channels, things change.  If you are acquiring them at 32-bit resolution, things change (although not as much).  Please post these parameters and we can help more.

  • Need to load large data set from Oracle table onto desktop using ODBC

    I don't have TOAD nor any other tool for querying the database.  I'm wondering how I can load a large data set from an Oracle table onto my desktop using Excel or Access or some other tool using ODBC or not using ODBC if that's possible.  I need results to be in a .csv file or something similar. Speed is what is important here.  I'm looking to load more than 1 million but less than 10 million records at once.   Thanks.

    hillelhalevi wrote:
    I don't have TOAD nor any other tool for querying the database.  I'm wondering how I can load a large data set from an Oracle table onto my desktop using Excel or Access or some other tool using ODBC or not using ODBC if that's possible.  I need results to be in a .csv file or something similar. Speed is what is important here.  I'm looking to load more than 1 million but less than 10 million records at once.   Thanks.
    Use Oracle's free Sql Developer
    http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
    You can just issue a query like this
    SELECT /*csv*/ * FROM SCOTT.EMP
    Then just save the results to a file
    See this article by Jeff Smith for other options
    http://www.thatjeffsmith.com/archive/2012/05/formatting-query-results-to-csv-in-oracle-sql-developer/
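    If you run that from SQL Developer's script runner (F5) or from SQLcl, you can also spool the /*csv*/ output straight to a file, roughly like this (the spool path is just an example):
    SET FEEDBACK OFF
    SPOOL /tmp/emp.csv
    SELECT /*csv*/ * FROM SCOTT.EMP;
    SPOOL OFF
    For millions of rows this avoids holding the whole result set in a grid and writes the CSV directly to disk.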

  • 64-bit LabVIEW - still major problems with large data sets

    Hi Folks -
    I have LabVIEW 2009 64-bit version running on a Win7 64-bit OS with Intel Xeon dual quad core processor, 16 gbyte RAM.  With the release of this 64-bit version of LabVIEW, I expected to easily be able to handle x-ray computed tomography data sets in the 2 and 3-gbyte range in RAM since we now have access to all of the available RAM.  But I am having major problems - sluggish (and stoppage) operation of the program, inability to perform certain operations, etc.
    Here is how I store the 3-D data that consists of a series of images. I store each of my 2d images in a cluster, and then have the entire image series as an array of these clusters.  I then store this entire array of clusters in a queue which I regularly access using 'Preview Queue' and then operate on the image set, subsets of the images, or single images.
    Then enqueue:
    I remember talking to LabVIEW R&D years ago that this was a good way to do things because it allowed non-contiguous access to memory (versus contigous access that would be required if I stored my image series as 3-D array without the clusters) (R&D - this is what I remember, please correct if wrong).
    Because I am experiencing tremendous slowness in the program after these large data sets are loaded, and I think disk access as well to obtain memory beyond 16 gbytes, I am wondering if I need to use a different storage strategy that will allow seamless program operation while still using RAM storage (do not want to have to recall images from disk).
    I have other CT imaging programs that are running very well with these large data sets.
    This is a critical issue for me as I move forward with LabVIEW in this application.   I would like to work with LabVIEW R&D to solve this issue.  I am wondering if I should be thinking about establishing say, 10 queues, instead of 1, to address this.  It would mean a major program rewrite.
    Sincerely,
    Don

    First, I want to add that this strategy works reasonably well for data sets in the 600 - 700 mbyte range with the 64-bit LabVIEW. 
    With LabVIEW 32-bit, 100 - 200 mbyte sets were about the limit before I experienced problems.
    So I definitely noticed an improvement.
    I use the queuing strategy to move this large amount of data in RAM.   We could have used other means such a LV2 globals.  But the idea of clustering the 2-d array (image) and then having a series of those clustered arrays in an array (to see the final structure I showed in my diagram) versus using a 3-D array I believe even allowed me to get this far using RAM instead of recalling the images from disk.
    I am sure data copies are being made - yes, the memory is ballooning to 15 gbyte.  I probably need to have someone examine this code while I am explaining things to them live.  This is a very large application, and a significant amount of time would be required to simplify it, and that might not allow us to duplicate the problem.  In some of my applications, I use the in-place structure for indexing data out of arrays to minimize data copies.  I expect I might have to consider this strategy here as well.  Just a thought.
    What I can do is send someone (in the US) via large file transfer a 1.3 - 2.7 gbyte set of image data, and see how they would best advise on storing and extracting the images using RAM, how best to optimize the RAM usage, and not make data copies.  The operations that I apply to the images are irrelevant.  It is the storage, movement, and extraction that are causing the problems.  I can also show a screen shot(s) of how I extract the images (but I have major problems even before I get to that point).
    Can someone else comment on how data value references may help here, or how they have helped in one of their applications?  Would the use of this eliminate copies?   I currently have to wait for 64-bit version of the Advanced Signal Processing Toolkit for LabVIEW 2010 before I can move to LabVIEW 2010.
    Don

  • Large data sets and key terms

    Hello, I'm looking for some guidance on how BI can help me. I am a business analyst in a health solutions firm, but not proficient in SQL. However, I have to work with large data sets that just exceed the capabilities of Excel.
    Basically, I'm having to use Excel to manually search for key terms and apply values to those results. For instance, I have a medical claims file with Provider Names, Tax ID, Charges, etc. It's 300,000 records long and 15-25 columns wide. I need to search for key terms in the provider name like Ambulance, Fire Dept, Rescue, EMT, EMS, etc. - anything that resembles an ambulance service. I also need to include abbreviations such as AMB, FD, or variations like EMT, E M T, EMS, E M S, etc. Each time I do a search, I have to filter and apply an "N/A" flag.
    That's just one key term. I also have things like Dentists or DDS, Vision, Optometry and a dozen other Provider Types that need to be flagged as "N/A".
    Is this something that can be handled using BI? I have access to a BI group, but I need to understand more about the capabilities of what can be done. As an analyst, I'm having to deal with poor data integrity, so just cleaning up the file can be extremely taxing and cumbersome.
    Some insight would be very helpful. Thanks.
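    For comparison, once the claims file is loaded into a database table, the keyword flagging described above is a single pass with CASE and LIKE. A rough sketch with hypothetical table and column names (extend the LIKE list with the other terms and provider types):
    SELECT ProviderName,
           TaxID,
           Charges,
           CASE
             WHEN UPPER(ProviderName) LIKE '%AMBULANCE%'
               OR UPPER(ProviderName) LIKE '%AMB%'
               OR UPPER(ProviderName) LIKE '%RESCUE%'
               OR UPPER(ProviderName) LIKE '%EMT%'
               OR UPPER(ProviderName) LIKE '%EMS%'
               OR UPPER(ProviderName) LIKE '%FIRE DEPT%'
             THEN 'N/A'
           END AS AmbulanceFlag
    FROM   MedicalClaims;  -- hypothetical table name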

    I am not sure if you are looking for an explanation of different BI products; if so, this forum may not be the place to get a straight answer.
    But the Information Discovery product suite might be useful in your case. Regarding the "large data set" you mentioned, searching and analyzing 300,000 records may not be considered a large data set, at least by Endeca standards :).
    All your other requests could also be very easily implemented using Endeca's product suite. Please reach out to Oracle's Endeca product team and they can guide you on how this product suite would help you.

  • How to handle large data sets?

    Hello All,
    I am working on an editable form document. It uses a flowing subform with a table. The table may contain up to 50k rows, and the generated PDF may take up to 2-4 GB of memory; in some cases Adobe Reader fails and "gives up" opening these large data sets.
    Any suggestions? 

    On 25.04.2012 01:10, Alan McMorran wrote:
    > How large are you talking about? I've found QVTo scales pretty well as
    > the dataset size increases but we're using at most maybe 3-4 million
    > objects as the input and maybe 1-2 million on the output. They can be
    > pretty complex models though so we're seeing 8GB heap spaces in some
    > cases to accomodate the full transformation process.
    Ok, that is good to know. We will be working in roughly the same order
    of magnitude. The final application will run on a well equipped server,
    unfortunately my development machine is not as powerful so I can't
    really test that.
    > The big challenges we've had to overcome is that our model is
    > essentially flat with no containment in it so there are parts of the
    We have a very hierarchical model. I still wonder to what extent EMF and
    QVTo at least try to let go of objects which are not needed anymore and
    allow them to be garbage collected?
    > Is the GC overhead limit not tied to the heap space limits of the JVM?
    Apparently not, quoting
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html:
    "The concurrent collector will throw an OutOfMemoryError if too much
    time is being spent in garbage collection: if more than 98% of the total
    time is spent in garbage collection and less than 2% of the heap is
    recovered, an OutOfMemoryError will be thrown. This feature is designed
    to prevent applications from running for an extended period of time
    while making little or no progress because the heap is too small. If
    necessary, this feature can be disabled by adding the option
    -XX:-UseGCOverheadLimit to the command line."
    I will experiment a little bit with different GC's, namely the parallel GC.
    Regards
    Marius

  • User login report in Active Directory for specific date and time

    I want to get a user login report in Active Directory for a specific date and time, e.g. a user logged in on 15-01-2015 from 8:00am to 4:00pm.
    Is there any query, script, or tool available for this?
    Waiting for a reply, please.

    You can identify the last logon date and time using my script here: https://gallery.technet.microsoft.com/scriptcenter/Get-Active-Directory-User-bbcdd771
    If you would like to go back in time and see when the user logged on / off, then you need to have auditing enabled. Once done, you can get the records from the Security log in the event viewer: https://social.technet.microsoft.com/Forums/windowsserver/en-US/98cbecb0-d23d-479d-aa65-07e3e214e2c7/manage-active-directory-users-logon-logoff-events
    I have started a Wiki about how to track logon / logoff and it can help too: http://social.technet.microsoft.com/wiki/contents/articles/20422.record-logon-logoff-activities-on-domain-servers-and-workstations-using-group-policy.aspx
    This posting is provided AS IS with no warranties or guarantees , and confers no rights.
    Ahmed MALEK

  • Expense reports are not processing for future date?

    hi folks,
    In our company, expense reports are not processing for a future date; even though the end date is one month from now, the expense reports are not getting paid.
    Can anybody assist me in resolving this issue?
    thanks in advance,
    regards
    bhanu

    Hi,
    Could you please share how to get into debug mode in a dynamic action program? I have a scenario in SAP HR: when an employee is hired, during the hiring action Infotype 32 is updated based on the following conditions
    ********INSERT IT0032 FOR CANDIDATE ID *************
    P0001-PERSG<>'5' - True
    P0000-MASSN='01'/X
    P0000-MASSN='H1'/X
    P0000-MASSN<>'84'/X
    P0000-MASSN<>'86'/X
    P0000-MASSN<>'99'/X
    GET_PNALT(ZHR_CREATEP0032)
    RP50D-FIELDS <> ''
    INS,0032,,,(P0000-BEGDA),(P0000-ENDDA)/D
    P0032-PNALT=RP50D-FIELD1
    P0032-ZZ_COMPID='05' _ True
    The infotype record is being created, but there is no data in RP50D-FIELD1, so I tried to debug the subroutine GET_PNALT(ZHR_CREATEP0032) as mentioned in the dynamic action; it does not go into debugger mode.
    Do you have any idea how to debug the program used in the dynamic action?
    Regards,
    Madhu

  • Not all my titles are showing for each data set

    Hello,
    I have made a column graph for 4 different data sets, and I need the titles to show for each data set along the X axis. I highlight the titles in my data when making the graph; however, only two of the four are showing. Is there any way I can get all 4 to show?
    (The titles after Baseline and after Numb Hand are the ones missing.)
    Thank you

    Try clicking the chart then the blue 'Edit Data References' rectangle, and then in the lower left of the window switch from 'Plot Columns as Series' to 'Plot Rows as Series'.  Then switch back if needed. All four should then show up.
    SG

  • Report Running Very Slow for a range even though the data set for that range is small

    Hello Experts,
    I have a report which runs on a date selection.
    When I run the report for, say, 01.01.2000 - 31.12.2010, which contains 95% of the data, the report output comes within a minute.
    But when I run the report for 01.01.2011 - 31.12.2011, or say 01.01.2011 for a single day, where the data set is hardly 3k records, the report runs for 15-20 minutes and does not show output.
    But when I remove this variable and run the report, it again comes within a minute.
    One weird thing is that when I run the report for 01.01.2000 - 31.12.2011 it also comes within a minute, but when I run it for the single day 01.01.2011 it does not.
    Can anyone share some inputs on this one?
    Thanks
    Vamsi

  • DRILL DOWN REPORT FOR DIFFERENT DATA SETS in 11G

    Hello.
    My requirement is to do a drill down report with 2 data sets.
    My first data set has all the chart of accounts; my second data set has all the transactions.
    In my report I have 2 data tables: one with the accounts, and when I click on an account I want to get all the transactions for the selected account in the second data table.
    Does anyone know how I can do this?
    Any help would be appreciated.
    Paulo

    You can use the drill down strategy, well described by my friend Kan:
    http://bipconsulting.blogspot.com/2010/02/drill-down-to-detail-or-another-report.html
    regards
    Jorge
    p.s If this answers your question then please grant the points and close the thread

  • Drill Through report not working for large data

    Hi,
    In SSRS 2008 R2, I have a main report and a drill through report from one of the main report's columns. The drill through report mostly works, except when the data is too large, i.e. more than 1 million records. How can I fix this problem?
    Thanks,
    Jkrishna

    Nope. What I meant was not to show the entire data in the child report (i.e. your 1 million records). Instead, add an extra parameter PageNumber and set its default value to 1 when you navigate to the child report.
    Then, in the query behind it, use ROW_NUMBER-based logic like below:
    SELECT *
    FROM
    (
        SELECT ROW_NUMBER() OVER (ORDER BY <combination of unique valued column(s)>) AS Rn,
               ... -- your existing query's other columns
    ) t
    WHERE Rn BETWEEN ((@PageNo - 1) * 1000) + 1 AND @PageNo * 1000
    This assumes you want 1000 records per page.
    When it renders it will show the first 1000 records. Add a NextPage icon to the report footer and, when it is clicked, add jump-to-report functionality pointing to the same report but with the PageNumber parameter value set to
    =Parameters!PageNumber.Value + 1
    and it will then give you the second page of data, etc.
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
