Parameters based on large datasets

I'm creating a report with a cascading parameter where the dataset behind the parameter has more than 23k records.
The first level of the parameter gets the distinct values of the field GROUP_OWNER, and the second level lists the GROUP_NAMEs for that group_owner. The requirement is to select multiple group_names.
The first problem, that the parameter uses no more than the first 1000 records, can be overcome with a registry update (see http://scn.sap.com/thread/977521), BUT this doesn't work when the report is then used in the Crystal Reports viewer. Is there a registry setting for the viewer?
OR is there a way to have a parameter pass a value to another query, for use in another parameter?
Cheers
John

Hi John
There may be a setting, but you will have to define the "viewer". There are a number of viewers, so which one are you using?
- Ludek
Senior Support Engineer AGS Product Support, Global Support Center Canada
Follow me on Twitter

Similar Messages

  • To update large dataset in columnar database (Sybase IQ)

    Hi,
    I want to update a column with random values in Sybase IQ. The number of rows is very large (approx. 2 crore, i.e. 20 million).
    I have created a procedure using a cursor.
    It works fine on a small dataset but has performance issues on a large dataset.
    Is there a workaround for this issue?
    regards,
    Neha Khetan
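    One common workaround worth sketching here: batching the per-row updates cuts client/server round trips, which is usually where a cursor loop spends its time on large tables. Below is a minimal JDBC sketch of the idea; the table, column, key and jConnect URL are hypothetical, and a single set-based UPDATE, where expressible, would be faster still.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.Random;

    public class BatchRandomUpdate {
        public static void main(String[] args) throws Exception {
            // Hypothetical jConnect URL and credentials.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sybase:Tds:dbhost:2638", "user", "secret")) {
                con.setAutoCommit(false);
                // Hypothetical table, column and key names.
                try (PreparedStatement ps = con.prepareStatement(
                        "UPDATE my_table SET rnd_col = ? WHERE pk = ?")) {
                    Random rnd = new Random();
                    for (long pk = 1; pk <= 20_000_000L; pk++) {
                        ps.setDouble(1, rnd.nextDouble());
                        ps.setLong(2, pk);
                        ps.addBatch();
                        if (pk % 10_000 == 0) {   // flush every 10k rows
                            ps.executeBatch();
                            con.commit();
                        }
                    }
                    ps.executeBatch();            // flush the remainder
                    con.commit();
                }
            }
        }
    }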

    Hi Eugene,
    Is it possible to implement this in BDB JE somehow?
    Yes, you can create a new separate database for storing the sets of integers. Each record in this database would be one partition (e.g., 1001-2000) for one record in the "main" database.
    The key to this database would be a two part key:
    - the key to the "main" database, followed by
    - the beginning partition value (e.g., 1001)
    For example:
    Main Database:
      Key     Data
       X      string/integer parameters for X
       Y      string/integer parameters for Y
    Integer Partition Database:
      Key     Data
      X,1     Set of integers in range 1-1000 for X
      X,1001  Set of integers in range 1001-2000 for X
      Y,1     Set of integers in range 1-1000 for Y
      Y,1001  Set of integers in range 1001-2000 for Y
       ...
    Two-part keys are easy to implement with a tuple binding. You simply read/write the two fields for the record key, one after another, in the same way that you read/write multiple fields in the record data.
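    For illustration, a minimal Java sketch of such a binding; it assumes the com.sleepycat.bind.tuple classes from BDB JE, and the PartitionKey type is hypothetical:

    import com.sleepycat.bind.tuple.TupleBinding;
    import com.sleepycat.bind.tuple.TupleInput;
    import com.sleepycat.bind.tuple.TupleOutput;

    // Hypothetical two-part key: the main-database key plus the partition start.
    class PartitionKey {
        final String mainKey;   // e.g. "X"
        final long rangeStart;  // e.g. 1001
        PartitionKey(String mainKey, long rangeStart) {
            this.mainKey = mainKey;
            this.rangeStart = rangeStart;
        }
    }

    // Tuple binding that reads/writes the two key fields one after another.
    class PartitionKeyBinding extends TupleBinding<PartitionKey> {
        public PartitionKey entryToObject(TupleInput in) {
            return new PartitionKey(in.readString(), in.readLong());
        }
        public void objectToEntry(PartitionKey key, TupleOutput out) {
            out.writeString(key.mainKey);
            out.writeLong(key.rangeStart);
        }
    }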
    Mark

  • Charts with large datasets?

    I'm writing an application that requires graphing of multi-series sql statements over moderately large datasets. Think 45-90k data points or so.
    I've noticed that with datasets larger than 5k or so, the flash charts eat a lot of cpu time when rendering and I haven't gotten flash charts to display 15k data points appropriately. I get a message about flash having a long running script.
    Does anybody have suggestions for how to display 50,000+ data point charts in apex? Is there a recommended tool to integrate with apex that would create the chart server-side as a graphic and then push the graphic to the client? Also, if possible I would like to call this tool directly from DB jobs to push graphs out to people via email on a recurring basis.
    Any suggestions would be very much appreciated.
    Thanks,
    Matt

    Thanks Mike.
    I originally worked exclusively in Mac. It was the only game in town at one time. I have been working on higher end Windows-based workstations for the past 10 years or so (I also do some video production). Apple products are 'kool,' just not cost-effective. I am currently running Win7 on a dual-quad core with 8GB Ram and 3+TB high-speed storage.
    I used DeltaGraph for several years but their PostScript was version 1.0. I had a lot of problems with the files, such as not ungrouping grouped objects, font problems and difficulties applying newer effects -- even after re-saving to the current version of AI. At version 5, I queried Red Rock regarding upgrading their PS support but they said it was not in their plans. I also found that setting up some plots was terrifically complicated in DG. It was quicker to set up simple geometry in layered plots in Illustrator. I have not looked at DG 6 but will check on their PS status.
    I have not looked at importing Excel via PDF. I often do test plots from the source worksheets for reference in Excel but have never considered the results to be workable or usable in a published format. I will take another look at Excel per your suggestion.
    It sure would be great if AI charting were a bit more robust and reliable.

  • How to create a report based on a DataSet programmatically

    I'm working on a CR 2008 Add-in.
    Usage of this add-in is: let the user choose from a list of predefined datasets, and create a totally empty report with this dataset attached to it, so the user can create a report based on this dataset.
    I have a dataset in memory, and want to create a new report in cr2008.
    The new report is a blank report (with no connection information).
    If I call ReportDocument.SetDataSource(DataSet dataSet), I get the error:
    The report has no tables.
    So I must programmatically define the table definition in my blank report.
    I found the following article: https://boc.sdn.sap.com/node/869, and came up with something like this:
    internal class NewReportWorker : Worker
    {
        public NewReportWorker(string reportFileName)
            : base(reportFileName)
        {
        }

        public override void Process()
        {
            DatabaseController databaseController = ClientDoc.DatabaseController;

            Table table = new Table();
            string tableName = "Table140";
            table.Name = tableName;
            table.Alias = tableName;
            table.QualifiedName = tableName;
            table.Description = tableName;

            var fields = new Fields();
            var dbField = new DBField();
            var fieldName = "ID";
            dbField.Description = fieldName;
            dbField.HeadingText = fieldName;
            dbField.Name = fieldName;
            dbField.Type = CrFieldValueTypeEnum.crFieldValueTypeInt64sField;
            fields.Add(dbField);

            dbField = new DBField();
            fieldName = "IDLEGITIMATIEBEWIJS";
            dbField.Description = fieldName;
            dbField.HeadingText = fieldName;
            dbField.Name = fieldName;
            dbField.Type = CrFieldValueTypeEnum.crFieldValueTypeInt64sField;
            fields.Add(dbField);

            // More code for more tables to add.
            table.DataFields = fields;

            //CrystalDecisions.ReportAppServer.DataDefModel.ConnectionInfo info =
            //   new CrystalDecisions.ReportAppServer.DataDefModel.ConnectionInfo();
            //info.Attributes.Add("Database DLL", "xxx.dll");
            //table.ConnectionInfo = info;

            // Here an error occurs.
            databaseController.AddTable(table, null);
            ReportDoc.SetDataSource( [MyFilledDataSet] );

            //object path = @"d:\logfiles\";
            //ClientDoc.SaveAs("test.rpt", ref path, 0);
        }
    }
    The object ClientDoc refers to an ISCDReportClientDocument in a base class:
    internal abstract class Worker
    {
        private ReportDocument _ReportDoc;
        private ISCDReportClientDocument _ClientDoc;
        private string _ReportFileName;

        public Worker(string reportFileName)
        {
            _ReportFileName = reportFileName;
            _ReportDoc = new ReportDocument();
            // Load the report from the file path passed by the designer.
            _ReportDoc.Load(reportFileName);
            // Create a RAS document through In-Proc RAS via the ReportDocument.
            _ClientDoc = _ReportDoc.ReportClientDocument;
        }

        public string ReportFileName
        {
            get { return _ReportFileName; }
        }

        public ReportDocument ReportDoc
        {
            get { return _ReportDoc; }
        }

        public ISCDReportClientDocument ClientDoc
        {
            get { return _ClientDoc; }
        }
    }
    But I get an "Unspecified error" on the line databaseController.AddTable(table, null);
    What am I doing wrong? Or is there another way to create a new report based on a DataSet in C# code?

    Hi,
    Have a look at the snippet below, written for version 9, which you can adapt to CR 2008; it demonstrates how to create a report based on a DataSet programmatically.
    //=========================================================================
    // The following two string values can be modified to reflect your system.
    string mdb_path = "C:\\program files\\crystal decisions\\crystal reports 9\\samples\\en\\databases\\xtreme.mdb"; // path to xtreme.mdb file
    string xsd_path = "C:\\Crystal\\rasnet\\ras9_csharp_win_datasetreport\\customer.xsd"; // path to customer schema file

    // Dataset
    OleDbConnection m_connection;            // ado.net connection
    OleDbDataAdapter m_adapter;              // ado.net adapter
    System.Data.DataSet m_dataset;           // ado.net dataset

    // CR variables
    ReportClientDocument m_crReportDocument; // report client document
    Field m_crFieldCustomer;
    Field m_crFieldCountry;

    void CreateData()
    {
        // Create OLEDB connection
        m_connection = new OleDbConnection();
        m_connection.ConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + mdb_path;
        // Create Data Adapter
        m_adapter = new OleDbDataAdapter("select * from Customer where Country='Canada'", m_connection);
        // Create dataset and fill
        m_dataset = new System.Data.DataSet();
        m_adapter.Fill(m_dataset, "Customer");
        // Create a schema file
        m_dataset.WriteXmlSchema(xsd_path);
    }

    // Adds a DataSource using a dataset. Since this does not require an intermediate
    // schema file, this method will work in a distributed environment where you have
    // the IIS box on server A and the RAS Server on server B.
    void AddDataSourceUsingDataSet(
        ReportClientDocument rcDoc,    // report client document
        System.Data.DataSet data)      // dataset
    {
        // Add a datasource
        DataSetConverter.AddDataSource(rcDoc, data);
    }

    // Adds a DataSource using a physical schema file. This method requires you to have
    // the schema file on the RAS Server box (NOT ON THE SDK BOX). In a distributed
    // environment where you have IIS on server A and RAS on server B, and you execute
    // CreateData above, the schema file is created on the IIS box and this method will
    // fail, because the RAS server cannot see that schema file on its local machine.
    // In such an environment, you must use the method above.
    void AddDataSourceUsingSchemaFile(
        ReportClientDocument rcDoc,    // report client document
        string schema_file_name,       // xml schema file location
        string table_name,             // table to be added
        System.Data.DataSet data)      // dataset
    {
        PropertyBag crLogonInfo;         // logon info
        PropertyBag crAttributes;        // logon attributes
        ConnectionInfo crConnectionInfo; // connection info
        CrystalDecisions.ReportAppServer.DataDefModel.Table crTable; // database table

        // Create logon property
        crLogonInfo = new PropertyBag();
        crLogonInfo["XML File Path"] = schema_file_name;
        // Create logon attributes
        crAttributes = new PropertyBag();
        crAttributes["Database DLL"] = "crdb_adoplus.dll";
        crAttributes["QE_DatabaseType"] = "ADO.NET (XML)";
        crAttributes["QE_ServerDescription"] = "NewDataSet";
        crAttributes["QE_SQLDB"] = true;
        crAttributes["QE_LogonProperties"] = crLogonInfo;
        // Create connection info
        crConnectionInfo = new ConnectionInfo();
        crConnectionInfo.Kind = CrConnectionInfoKindEnum.crConnectionInfoKindCRQE;
        crConnectionInfo.Attributes = crAttributes;
        // Create a table
        crTable = new CrystalDecisions.ReportAppServer.DataDefModel.Table();
        crTable.ConnectionInfo = crConnectionInfo;
        crTable.Name = table_name;
        crTable.Alias = table_name;
        // Add the table
        rcDoc.DatabaseController.AddTable(crTable, null);
        // Pass the dataset
        rcDoc.DatabaseController.SetDataSource(DataSetConverter.Convert(data), table_name, table_name);
    }

    void CreateReport()
    {
        int iField;
        // Create ado.net dataset
        CreateData();
        // Create report client document
        m_crReportDocument = new ReportClientDocument();
        m_crReportDocument.ReportAppServer = "127.0.0.1";
        // New report document
        m_crReportDocument.New();
        // Add a datasource using a schema file.
        // Note that in a distributed environment you should use the
        // AddDataSourceUsingDataSet method instead; for more information,
        // refer to the comments on these methods.
        AddDataSourceUsingSchemaFile(m_crReportDocument, xsd_path, "Customer", m_dataset);

        // Get the Customer Name and Country fields
        iField = m_crReportDocument.Database.Tables[0].DataFields.Find("Customer Name", CrFieldDisplayNameTypeEnum.crFieldDisplayNameName, CeLocale.ceLocaleUserDefault);
        m_crFieldCustomer = (Field)m_crReportDocument.Database.Tables[0].DataFields[iField];
        iField = m_crReportDocument.Database.Tables[0].DataFields.Find("Country", CrFieldDisplayNameTypeEnum.crFieldDisplayNameName, CeLocale.ceLocaleUserDefault);
        m_crFieldCountry = (Field)m_crReportDocument.Database.Tables[0].DataFields[iField];
        // Add the Customer Name and Country fields to the result fields
        m_crReportDocument.DataDefController.ResultFieldController.Add(-1, m_crFieldCustomer);
        m_crReportDocument.DataDefController.ResultFieldController.Add(-1, m_crFieldCountry);
        // View the report
        crystalReportViewer1.ReportSource = m_crReportDocument;
    }

    public Form1()
    {
        //
        // Required for Windows Form Designer support
        //
        InitializeComponent();
        // Create Report
        CreateReport();
        //
        // TODO: Add any constructor code after InitializeComponent call
        //
    }
    //=========================================================================

  • On what parameters can it be decided whether an aggregate is good or bad

    Hi,
    Based on what parameters can it be decided whether an aggregate is a good or a bad aggregate?
    Regards,
    Siva Thottemopudi

    When I execute the report, it gets terminated after 300 seconds due to session timeout.
    My report is based on currency type 10, fiscal year 007.2010 and company code.
    I have created an aggregate based on the above parameters, and when I check in RSRT the report is hitting that aggregate.
    But in the aggregate maintenance of the cube, in the valuation tab, I can see negative signs (does that indicate a bad aggregate?).
    For a small company code the report executes in a short time, but not for a big company code, even though suitable aggregates are available.
    What might be the reason?

  • Is anyone working with large datasets (>200M) in LabVIEW?

    I am working with external bioinformatics databases and find the datasets to be quite large (2 files easily come out at 50M or more). Is anyone working with large datasets like these? What is your experience with performance?

    Colby, it all depends on how much memory you have in your system. You could be okay doing all that with 1GB of memory, but you still have to take care to not make copies of your data in your program. That said, I would not be surprised if your code could be written so that it would work on a machine with much less ram by using efficient algorithms. I am not a statistician, but I know that the averages & standard deviations can be calculated using a few bytes (even on arbitrary length data sets). Can't the ANOVA be performed using the standard deviations and means (and other information like the degrees of freedom, etc.)? Potentially, you could calculate all the various bits that are necessary and do the F-test with that information, and not need to ever have the entire data set in memory at one time. The tricky part for your application may be getting the desired data at the necessary times from all those different sources. I am usually working with files on disk where I grab x samples at a time, perform the statistics, dump the samples and get the next set, repeat as necessary. I can calculate the average of an arbitrary length data set easily by only loading one sample at a time from disk (it's still more efficient to work in small batches because the disk I/O overhead builds up).
    Let me use the calculation of the mean as an example (hopefully the notation makes sense): see the jpg. What this means in plain English is that the mean can be calculated solely as a function of the current data point, the previous mean, and the sample number. For instance, given the data set [1 2 3 4 5], sum it, and divide by 5, you get 3. Or take it a point at a time: the average of [1]=1, [2+1*1]/2=1.5, [3+1.5*2]/3=2, [4+2*3]/4=2.5, [5+2.5*4]/5=3. This second method required far more multiplications and divisions, but it only ever required remembering the previous mean and the sample number, in addition to the new data point. Using this technique, I can find the average of gigs of data without ever needing more than three doubles and an int32 in memory. A similar derivation can be done for the variance, but it's easier to look it up (I can provide it if you have trouble finding it). Also, I think this functionality is built into the LabVIEW point-by-point statistics functions.
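    To make the recurrence concrete, here is a minimal sketch in plain Java rather than LabVIEW; the formula itself, mean_n = (x_n + mean_{n-1} * (n-1)) / n, is language-agnostic, and only the previous mean and the sample count ever live in memory.

    public class RunningMean {
        private double mean = 0.0;
        private long n = 0;

        // Fold one new sample into the running mean.
        public void add(double x) {
            n++;
            mean = (x + mean * (n - 1)) / n;
        }

        public double mean() { return mean; }

        public static void main(String[] args) {
            RunningMean rm = new RunningMean();
            for (double x : new double[] {1, 2, 3, 4, 5}) rm.add(x);
            System.out.println(rm.mean()); // 3.0, matching the example above
        }
    }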
    I think you can probably get the data you need from those db's through some carefully crafted queries, but it's hard to say more without knowing a lot more about your application.
    Hope this helps!
    Chris
    Attachments:
    Mean Derivation.JPG ‏20 KB

  • Issues when Downloading Large Datasets to Excel and CSV

    Hi,
    Hoping someone could lend a hand on the issues described below.
    I have a prompted dashboard that, depending on the prompts selected, can return detail datasets. The intent of this dashboard is to AVOID giving end users Answers access, while still providing the ability to pull large amounts of detail data in an ad-hoc fashion. When large datasets are returned, end users will download the data to their local machines and use Excel for further analysis. I have tried two options:
    1) Download to CSV
    2) Download data to Excel
    For my test, I am using the dashboard prompts to return one year's (2009) worth of order data for North America, down to the day level of granularity. Yes, a lot of detail data...but this is what many "dataheads" at my organization are requesting...(despite best efforts to evangelize the power of OBIEE to do the aggregation for them...). I expect this report to return somewhere around 200k rows...
    Here are the results:
    1) Download to CSV
    Filesize: 78MB
    Opening the downloaded file is fairly quick...
    126k rows are present in the CSV file...but the dataset abruptly ends in Q3 (August) 2009. The following error appears at the end of the incomplete dataset:
    Odbc driver returned an error (SQLFetchScroll).
    Error Codes: OPR4ONWY:U9IM8TAC
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred.
    [nQSError: 46073] Operation 'stat()' on file '/opt/apps/oracle/obiee/OracleBIData/tmp/nQS_31951_2986_15442940.TMP' failed with error: (75)
    2) Download to Excel
    Filesize: 46MB
    Opening the Excel file is extremely painful...over 20 minutes to open the file...making Excel unusable during the opening process...definitely not acceptable for end users.
    When opened, the file contains only 65k rows...when there should be over 200k...
    Can you please help me understand the limitations of detail data output (downloading) from OBIEE...or provide workarounds for the circumstances above?
    Thanks so much in advance.
    Adam

    @chandrasekhar:
    Thanks for your response. I'll try the export button, but I'd also like to know how to create a button on the toolbar so that clicking it shows a popup with two radio buttons asking whether to download the report in .xls or .csv format. I am looking for the subroutines for that.
    Thanks.

  • Working with and manipulating large datasets

    What experience has everyone had working with and manipulating large datasets in ColdFusion? I imagine this is more an SQL Server question in my specific case, but I would be very interested if someone could enlighten me on any potential pitfalls and best practices I should look out for when working with ColdFusion, and on what I should learn. Thanks in advance for any pointers.

    I guess it depends on the purpose of the application in question. We have been running a scientific data mining application in ColdFusion for the last 5-6 years without any major glitches that would force us to rethink ColdFusion. The datasets in this application are quite large, with 30-40 different tables, and some of the tables have more than 100 million records.
    However, our queries are usually highly optimized, with intelligent use of indexes and hints (we use Oracle).

  • Is it possible to provide a BEx Web Query with parameters based on an iView?

    Hi SAP Portal experts,
    My knowledge of SAP Portal is limited and I ran into the following problem:
    I have about 60 BEx Web Queries which should be put into some sort of navigation. Furthermore, I would like to introduce one start page where the user can preselect common BEx query variables (e.g. company code) by clicking on a map.
    My approach was to put all query links into a BEx Web Application Template. The template includes a JavaScript storing the user-defined values in a cookie. As soon as the user clicks a link, the values are read from that cookie and a query parameter string is set up like "&BI_COMMAND_1-BI_COMMAND_TYPE=SET_VARIABLES_STATE&BI_COMMAND_1-VARIABLE_VALUES-VARIABLE_VALUE_1-VARIABLE_TYPE=VARIABLE_INPUT_STRING&BI_COMMAND_1-VARIABLE_VALUES-VARIABLE_VALUE_1-VARIABLE_TYPE-VARIABLE_INPUT_STRING=2100&BI_COMMAND_1-VARIABLE_VALUES-VARIABLE_VALUE_1-VARIABLE=0P_COCD" and concatenated with the URL for the BEx Query.
    My colleague wants me to put all this into SAP Portal and to create an iView for each BEx Query. How can I achieve that? I have already had a look at the iView property "parameters passed on to BEx Web Application". If I put my parameter string there, it works fine. However, I would like to replace VARIABLE_INPUT_STRING=2100 and VARIABLE=0P_COCD with the values selected by the user by clicking on the map. So how can I set up a parameter or variable in one iView and read it in another iView with a BEx Web Query?
    Thank you very much in advance
    Martin

    I have thought about this. There are some problems here...
    I cannot use the same proxy to invoke the Java callout and then, based on the code or handler, disable it, since:
    1) I would have no way to enable the proxy again.
    2) There is also some amount of message loss.
    So I will have to use another proxy to do the same, but in this case:
    1) What would be the trigger for this proxy?
    2) And how often do I invoke the Java callout to see if the URI is up or not? (Wouldn't that affect performance?)
    I am just wondering why they give an offline URI option in the business service but no similar option in the proxy service. Any idea?
    Thanks

  • ADO Object was Open error aka DB_E_OBJECTOPEN on Large Dataset in MS SQL 2012

    Hi, I am running a query with a big result set in my Delphi application using TADOQuery.
    When I request 12 months of data I get the error "Object was Open".
    But if I run the same query on 1 month of data it works fine.
    The mechanism I use to open all queries in my application is the same and has always worked for many years. This is the first time I have hit this error. My application has run thousands of different queries across many customer sites without ever hitting this error.
    In my research I found that this seems to correspond to an ADO Error DB_E_OBJECTOPEN.
    The 12 month query runs perfectly OK in SQL Mgmt Studio and takes about 1.5 minutes to start showing results.  But it is 4 minutes before it works out there are 3,810,979 rows.
    I am using a client cursor
    myADOQuery.CursorLocation:=clUseClient;
    And I am setting a large CommandTimeout = 600;
    So why does the query fail in the ADO client (but it works in SQL Mgmt Studio)?
    And what does this "Object was Open" error mean?

    Hi Justin,
    The picture below (an attached diagram, not reproduced here) showed the relationships between the Data Access components: ADO is based on OLE DB, and the underlying provider is an OLE DB provider or ODBC driver.
    DB_E_OBJECTOPEN is the error you are experiencing. It is returned by the OLE DB provider (be it SQLOLEDB, SQLNCLI or any 3rd-party provider) when a call is made to ICommand::Execute without closing the result of the previous execution (see http://msdn.microsoft.com/en-us/library/ms718095(VS.85).aspx).
    Did anything change recently in your environment; was a new version of the client application deployed, or a client driver upgraded?
    Usually this error occurs because of a problem in the client application, not the client provider. The application should do one of the following:
    - Fully consume the result of the previous command execution prior to issuing a new command. If it doesn't, it might be leaking an object.
    - If the previous result must stay pending, set the DBPROP_MULTIPLECONNECTIONS property to VARIANT_TRUE, which allows the underlying OLE DB provider to establish additional connections under the hood. Do this only if you are absolutely sure the application needs the previous result.
    - Enable Multiple Active Result Sets on the connection, which allows running multiple queries on the same connection. This too must be done consciously, as it may mask an object leak.
    Besides, maybe you already know that the CommandTimeout is 30s by default. The reason the query succeeds when you use SSMS (SQL Server Management Studio) is that when SSMS creates a new connection, the command timeout is 0 by default, meaning there is no time limit on the query. But when your ADO client creates a new connection to run the query, the time the query takes may exceed the CommandTimeout setting, so it fails to return the results in your ADO client.
    From a support perspective, analyzing this issue further is really beyond what we can do here in the forums. If you cannot determine your answer here or on your own, consider opening a support case with us. Visit this link to see the various support options that are available to better meet your needs:
    http://support.microsoft.com/default.aspx?id=fh;en-us;offerprophone
    Keep us posted.

  • "Page cannot be displayed" error when attempting to download large dataset.

    In the following code, can anybody please tell me at what point the data actually starts to get transferred and the "download file" popup appears? I'm thinking that the dialog box should appear on the first outputStream.write and start transferring data. However, I think it's not transferring the data until the while loop has finished! That's bad because this is a really long process and I eventually get a "Page cannot be displayed" error. On a smaller dataset (shorter while loop) everything seems to work correctly. Is there some sort of web server configuration that tells it to write out the data ONLY when the output stream closes? I really need it to write out the data as it's coming in. Code is below; thanks in advance.
    response.setHeader("Expires", "0");
    response.setHeader("Content-disposition","inline;filename=Download.csv");
    response.setContentType("application/x-msdownload");
    outputStream = response.getOutputStream();
    bufferInBytes = this.getData();
    while (bufferInBytes != null)
    outputStream.write(bufferInBytes, 0, bufferInBytes.length);
    outputStream.flush();
    bufferInBytes = this.getData();
    outputStream.close();
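    A hedged note on the buffering question: the servlet container buffers the response, and flushing the response buffer (not just the stream) is what commits the response and pushes bytes to the client early. A minimal sketch, assuming the standard javax.servlet API; getData() stands in for the poster's own method:

    protected void doGet(javax.servlet.http.HttpServletRequest request,
                         javax.servlet.http.HttpServletResponse response)
            throws java.io.IOException {
        response.setBufferSize(8 * 1024);   // ask for a small buffer; the container may round it up
        response.setContentType("application/x-msdownload");
        response.setHeader("Content-disposition", "inline;filename=Download.csv");
        java.io.OutputStream out = response.getOutputStream();
        byte[] chunk = getData();
        while (chunk != null) {
            out.write(chunk, 0, chunk.length);
            response.flushBuffer();         // commits the response and sends buffered bytes now
            chunk = getData();
        }
        out.close();
    }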

    Hi All,
    Thanks for all your help.
    Seems I have already found the issue.
    Since we are using "Personal Home Page" for the system profile option "Self Service Personal Home Page Mode", I checked the Oracle note and found out that 11i does not support this mode. Maybe that is why we are getting the "page cannot be displayed" error when only 1 responsibility is active.
    The way to fix this is to use the mode 'Framework Only' instead of 'Personal Home Page', as Oracle is phasing out the mod_plsql-based UI and it will not be present in future releases. No maintenance (i.e. bug fixes) is being performed on the mod_plsql-based UI technology.
    Please check this note for a clearer explanation:
    Is The 'Personal Home Page' Mode Supported In Oracle 11i Applications? [ID 368628.1]
    Again, thank you for your help!

  • Using DataSet with large datasets

    I have a product, like a shirt, that comes in 800 colors.
    I've created an xml file with all the color id's, names and RGB
    codes (5 attributes in all) and this xml file is 5,603 lines long.
    It takes a noticeably long time to load. I'm using the auto-suggest
    widget to then show subsets of this list based on ID or color name.
    Is there an example of a way to connect to a php-driven
    datasource, so I can query a database and return the matches to the
    auto-suggest widget?
    Thanks, Scott

    In my Googling I came across this Cold Fusion example:
    http://www.brucephillips.name/blog/index.cfm/2007/3/31/Use-Sprys-New-Auto-Suggest-Widget-To-Handle-Large-Numbers-of-Suggestions

  • Large datasets...paste a formula into entire column

    I have a large dataset...>40,000 rows.
    How do I paste a formula into an entire column?
    e.g. score a value using "if/then" logic over the entire dataset
    thanks
    Ryan

    You said "entire column" but maybe you meant "entire column except for header and/or footer rows"?
    If you have header cells to avoid, do what Wayne said, but after selecting the entire column, Command-click on the header cell(s) to deselect them. If you have footer rows, scroll to the bottom and deselect them too.

  • Handling large datasets

    Hi gang,
    I have a query which returns a very large result set. My goal is to populate a scrollable JTable with this result set, but it is so large that memory cannot hold it. So I am looking for options for saving the results of this query.
    I am thinking of writing the results of the query to a CSV file and then reading chunks of the CSV file into a Vector which is then used to populate the JTable (paging the JTable).
    Does any of you have experience working with large files, and in particular with the performance of reading CSV files in chunks? Do you think there is a bottleneck I am ignoring?
    I'll appreciate any suggestions.
    Thanks
    Connie

    I understand you know how to handle the paging with scrollable JTables. I furthermore assume you know that a JTable is backed by a TableModel which contains the data you want to display.
    You state that the result set is likely to exceed the memory of the client computer. A question may be allowed: is it reasonable to display the ENTIRE result set in a single table then? Assuming that each row occupies one kB of RAM, 64,000 rows would consume 64 MB of RAM, which modern computers CAN handle. But do you really want to ask users to visually handle 64,000 table rows?
    However, the New I/O introduced with JDK 1.4 might help. Write the entire result set into a file (CSV or binary octet stream), and map portions of it into memory using FileChannel.map(mode, position, size) with varying position and size parameters depending on the portion to be displayed.
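    A minimal sketch of that FileChannel.map idea, assuming fixed-length binary rows; the file name and record size are hypothetical. Only the mapped window is paged into memory by the OS, so the TableModel can fetch just the rows currently visible.

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class PagedResultFile {
        static final int ROW_BYTES = 1024; // assumed fixed row size on disk

        // Map only the slice of the file backing rows [firstRow, firstRow + rowCount).
        static MappedByteBuffer mapRows(String file, long firstRow, int rowCount)
                throws Exception {
            try (RandomAccessFile raf = new RandomAccessFile(file, "r");
                 FileChannel ch = raf.getChannel()) {
                return ch.map(FileChannel.MapMode.READ_ONLY,
                              firstRow * ROW_BYTES, (long) rowCount * ROW_BYTES);
            }
        }
    }

    (The mapping remains valid even after the channel is closed.)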

  • How to dynamically generate parameters based on dropdown?

    Hello,
    newbie ABAPer question..
    I was wondering if it's possible to dynamically create a UI based on the values in a drop list/drop-down box?
    For example, the screen would initialize to
    dropdown list filled with 1 - 50
    field1 field2
    user selects 10 from the drop down list, then the screen would look like this:
    dropdown list filled with 1 - 50
    field1 field2
    field3 field4
    field5 field6
    field7 field8
    field9 field10
    field11 field12
    field13 field14
    field15 field16
    field17 field18
    field19 field20
    thanks for looking!

    Then you need to use the function module
    FREE_SELECTIONS_DIALOG
    to dynamically generate the selection parameters screen.
    Please search this forum for sample code showing how to use it.