Best Approach to store 1.2 billion records

Hello All,
In our OLTP database there will be a few tables with 1.2 billion records. The row length is approximately 50 bytes.
Could you please let me know the best options for storing this historical data?
Would it be advisable to put all of these records in one partitioned table, or to create a separate schema in the same database for this storage?
Would it impact the performance of the database?
thanks

Hi,
1.2 billion rows at roughly 50 bytes each comes to around 56 GB - a huge segment. Since you say it is an OLTP database, check with the developers what percentage (or which set) of queries hits this data and which columns the application mostly filters on.
Based on that input we can work out how best to partition and on which columns. You also mention this is historical data, so verify what percentage of users actually go back to the historical information; from that we can estimate how many partitions to create on the segment. Looking forward, you should also estimate the growth rate of new/incoming data per day and per month, so that you can plan what to do with the old historical data.
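For illustration only - a minimal sketch of a monthly interval (range) partitioned table with a local index. The table, column, index and date names below are made up; the real partition key and interval should come out of the analysis above.
CREATE TABLE txn_history (
  txn_id     NUMBER(12)   NOT NULL,
  created_dt DATE         NOT NULL,
  payload    VARCHAR2(40)
)
PARTITION BY RANGE (created_dt)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION p_initial VALUES LESS THAN (TO_DATE('2010-01-01', 'YYYY-MM-DD')) );

-- local index, so each partition carries its own small index segment
CREATE INDEX txn_history_dt_ix ON txn_history (created_dt) LOCAL;

-- old months can later be aged out cheaply, e.g.
-- ALTER TABLE txn_history DROP PARTITION FOR (TO_DATE('2012-01-15', 'YYYY-MM-DD'));
Interval partitioning (11g and later) creates each new monthly partition automatically as data arrives, so future partitions do not have to be pre-created.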
HTH
Note: a large part of performance comes down to writing efficient, well-tuned queries.
- Pavan Kumar N

Similar Messages

  • Best approach to store collection for html:options

    I am using struts to develop my web application. I am trying to figure out the best way to store a Collection for html:options inside a html:select element.
    My JSP page has a html:select statement which lists all the groups in a system. I have a member variable (exposed via getGroupNames()) in my ActionForm class. When the JSP page is first requested, I populate the groupNames attribute of the ActionForm class with a list of names. The html:options tag reads the groupNames attribute and lists all the group names.
    The problem: once the user selects a group name and then submits, if there are any validation problems in other fields on the JSP, I build an ActionErrors object and forward the request back to the same JSP. However, now I get an error saying that groupNames is NULL.
    Option 1: one way to avoid this error is to populate the groupNames list AGAIN if there is a validation problem.
    Option 2: another way is to store the list of group names in a separate bean (and NOT as an attribute of the ActionForm) and put it in REQUEST scope. This way I think I don't have to reload the groupNames list in case of a validation failure.
    Could someone please help me decide which would be the right approach?
    Thanks


  • What is the best approach to store "dynamic" user accessibility ?

    Hi all,
    We are implementing security in our ADF BC + Faces application. There is always a requirement to hide or disable functionality that a user is not allowed / authorized to access.
    Usually we do this at development time, based on what role the user is in. With this approach there is no way to change that, or to give access to a new role, at runtime (after deployment). This is what I call "static accessibility".
    In our apps we need to give / revoke access to some functionality at runtime. This is what I call "dynamic accessibility".
    One approach that comes to my mind is :
    We define the accessibility of each function that we want to protect (hide/unhide) in database tables. Then, every time a user enters a page, we read these tables through JDBC calls and store the data in a managed bean.
    Has anybody here implemented this "dynamic accessibility"?
    Is there a better approach ?
    Thank you very much,
    xtanto

    Saeed,
    SRDemo uses a managed bean that checks is user in role when called and returns true or false. Another approach - more elegant - is the use of a security property resolver as available
    http://jsf-security.sourceforge.net
    Regarding dynamic permissions, the use of JAAS seems to be a good solution. ADF Security uses JAAS permissions to assign component access to users.
    E.g. if the "manager" user role has access to edit the salary column, then the security constraint added to the update button could be
    #{!bindings.<attribute binding>.updateable}
    Note that ADF Security sets the updateable flag on an attribute.
    Or you use
    #{bindings.<iterator binding>.permissionInfo.create}
    #{bindings.<attribute binding>.permissionInfo.update}
    #{bindings.permissionInfo['pageDefName'].view}
    etc. to determine what a user can or cannot do.
    Note that I haven't tested whether the permissions are cached for a specific application or whether they are checked each time again. If they are checked each time, this would be a performance penalty, but it would allow you to dynamically set permissions for user groups, as your applications apparently need.
    No, we don't have a tutorial for this. But an Oracle By Example for an end-to-end security implementation is on my collateral plan for JDeveloper 11 (just need to get hold of a doc writer ;-) )
    Frank

  • What is the best way to store and search 2D data

    Hi,
    There is a set of data (~10k records) in two dimensions,
    like this :
    Col 1, Col 2, Col3....
    What is the best way to store and search those records ?
    Thanks in advance
    Wilson

    Hi,
    Either use Object[][] if you know how much data you have and the data size is fixed, or use a list of lists, e.g. a Vector of Vectors (some will probably say that you should use an ArrayList instead, and that could be the case, but it sounds like you will want to display the data later on, and a DefaultTableModel (for JTables) uses a Vector as its data holder).
    Kaj

  • Best approach for performing DMLs using stored procedures

    Hi,
    I have a really general question and would like to hear what you have to say about this.
    I want my application to manipulate or read data using stored procedures (or packages, for that matter) and not by running queries directly against the DB.
    Let's say I have a table with many columns:
    create table test (pkid number(10), col1 varchar2(30), col2 number(10), col3 date, ...);
    For such a DML procedure, is it best to do something like
    procedure do_update(i_pkid IN number,i_col1 IN varchar2, i_col2 IN number, i_col3 IN date,...) as
    begin
       update test
       set col1=i_col1,
            col2=i_col2,
            col3=i_col3...
       where pkid=i_pkid;
       commit;
    end;
    Or do a selective update, meaning update only a certain column each time, given that only one column actually changes? (And how would I do that - separate procedures for each column? [Columns can be null.])
    Also, is it better to work with test.col1%type instead of specifying the data type in the procedure?
    And one last question - If I have a table with 100 columns and I don't want to create a procedure with 100 parameters - the best approach would be to use a record?
    I just need to be set on the way I start implementing things in order to do it well from the start.
    Many thanks.
    Edited by: Pyrocks on Nov 17, 2010 1:58 PM

    Pyrocks wrote:
    One last clarification (although it may be more related to c/c++ developers - maybe one of you will know):
    We are working with C++ and VB against a SQLServer and my part is to translate all the existing procedures to Oracle in order to migrate the application to work with Oracle DB.
    The existing procedures use an IN parameter for each column in the table and I would really like to use rowtype like you mentioned.
    Since I'm not a c/c++/vb developer - is there an easy way of working with such types, or even User-Defined Types, from c/c++/vb (as in passing a rowtype record from c++ to a SP ?)
    I'm not looking for the actual way - just want to know how hard it would be and how much code needs to be changed in order to be able to convince the developers that this is the RIGHT way to work.
    PS. none of our developers have experience with Oracle, so they wouldn't know the answer...
    This is not really an Oracle server-side (SQL or PL/SQL language) question, but a client one. And it has been a long time since I wrote a fat client using C/C++ or Delphi.
    The OCI (Oracle Call Interface) supports advanced (user-defined) SQL data types, and has since Oracle 8i. So in that respect, yes, the client can support custom SQL data types.
    How well it does depends entirely on that client language's features wrt OCI integration. For example, Delphi 4 was released around Oracle 8i and supported custom SQL types. I would expect that most languages today (like Java and C#) provide support for it.
    As for using %ROWTYPE - this is a PL/SQL clause as far as I know, and I am unsure whether it is supported by the OCI. What could support it is a pre-compiler like Pro*C. These enable you to mix pseudo SQL source code with client language source code. The pre-compilation step then replaces the pseudo SQL code with native client language calls to the OCI, and the code is then compiled by that client language's compiler. Pre-compilers can pull all kinds of interesting "tricks" with their pseudo SQL code support.
    The best would be to consult the applicable client language's manuals that describe the interface it supports (via OCI) to Oracle.
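    To make the record idea concrete on the PL/SQL side (separate from the client-binding question above), here is a rough sketch of an update procedure that takes the whole row as a single %ROWTYPE parameter, using the test table from the original post; treat it as an illustration rather than the final API:
    CREATE OR REPLACE PACKAGE test_api AS
      PROCEDURE do_update (p_row IN test%ROWTYPE);
    END test_api;
    /
    CREATE OR REPLACE PACKAGE BODY test_api AS
      PROCEDURE do_update (p_row IN test%ROWTYPE) AS
      BEGIN
        UPDATE test
           SET col1 = p_row.col1,
               col2 = p_row.col2,
               col3 = p_row.col3
         WHERE pkid = p_row.pkid;
        -- no COMMIT here: leaving transaction control to the caller lets the
        -- procedure take part in larger transactions
      END do_update;
    END test_api;
    /
    From a client that cannot bind a record type directly, a thin wrapper that accepts scalar IN parameters, fills the record, and calls do_update is a common compromise.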

  • What would be best approach to migrate millions of records from on premise SQL server to Azure SQL DB?

    Team,
    In our project we have a data migration requirement. We have the following scenario, and I would really appreciate any suggestions from you all on the implementation part of it.
    Scenario:
    We have millions of records to be migrated to the destination SQL database after some transformation.
    The source SQL Server is on-premises in the partner's domain and the destination server is in Azure.
    Can you please suggest the best approach to do so?
    thanks,
    Bishnu
    Bishnupriya Pradhan

    You can use SSIS itself for this.
    Have batch logic that identifies data batches within the source, and then include data flow tasks to transfer the data to Azure. The batch size should be chosen based on available buffer memory, the number of parallel tasks executing, etc.
    You can use an ODBC or ADO.NET connection to connect to Azure.
    http://visakhm.blogspot.in/2013/09/connecting-to-azure-instance-using-ssis.html
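    As a rough illustration of the batch logic only (SQL Server dialect; the table and column names below are invented, and in SSIS the same idea is usually expressed as a parameterised source query inside a loop container):
    DECLARE @BatchSize INT    = 50000;
    DECLARE @LastId    BIGINT = 0;
    DECLARE @Rows      INT    = 1;

    WHILE @Rows > 0
    BEGIN
        -- copy the next key range; in SSIS this SELECT would feed the data flow source
        INSERT INTO dbo.Orders_Staging (OrderId, CustomerId, Amount, CreatedDate)
        SELECT TOP (@BatchSize) OrderId, CustomerId, Amount, CreatedDate
        FROM   dbo.Orders_Source
        WHERE  OrderId > @LastId
        ORDER  BY OrderId;

        SET @Rows = @@ROWCOUNT;

        -- advance the keyset "cursor" to the highest id copied so far
        SELECT @LastId = ISNULL(MAX(OrderId), @LastId) FROM dbo.Orders_Staging;
    END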
    Visakh

  • Best Approach to Compare Current Data to New Data in a Record.

    Which is the best approach for comparing current data to new data for every single record before making updates?
    What I have is a table with 5,000 records that needs to be updated every week with data from a data feed, but I don't want to update every single record week after week. I want to update only the records whose data has changed.
    I can think of these options:
    1) two cursors (one for current data and one for new)
    2) one cursor for new data and one varray for current data
    3) one cursor for new data and a select into for current data.
    Thanks for your help.

    I don't recommend it over MERGE, but in theory you could use a checksum (OWA_OPT_LOCK.checksum()) on the rows and, if they differ, copy the new row into the destination table. Or you might be able to use rowscn to see if things have changed.
    Like I said, I don't know that I'd take either approach over MERGE, but they are options.
    Gaff
    Edited by: Gaff on Feb 11, 2009 3:25 PM
    Actually, comparing rowscn between two tables may not be an option. I know you can turn a rowscn into a time value, so if the rowscn is derived from anything other than the column values of the row, that would foil the approach.
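    For reference, a minimal sketch of the MERGE mentioned above, with made-up table and column names; the WHERE clause on the matched branch restricts the update to rows whose values actually differ (DECODE is used because it treats two NULLs as equal):
    MERGE INTO weekly_data t
    USING weekly_feed  s
    ON (t.id = s.id)
    WHEN MATCHED THEN
      UPDATE SET t.col1 = s.col1,
                 t.col2 = s.col2
      WHERE  DECODE(t.col1, s.col1, 0, 1) = 1   -- col1 changed
          OR DECODE(t.col2, s.col2, 0, 1) = 1   -- col2 changed
    WHEN NOT MATCHED THEN
      INSERT (id, col1, col2)
      VALUES (s.id, s.col1, s.col2);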

  • What is the best approach to insert millions of records?

    Hi,
    What is the best approach to insert millions of records into a table?
    If an error occurs while inserting, how can I know which record failed?
    Thanks & Regards,
    Sunita

    Hello 942793
    There isn't a best approach if you do not give us the requirements and the environment...
    It depends on what "best" means for you.
    Questions:
    1.) Can you disable the constraints / unique indexes on the table?
    2.) Is there a possibility to run parallel queries?
    3.) Do you need to know the rows which cannot be inserted while the constraints are enabled, or is that not necessary?
    4.) Do you need it fast, or do you have time to do it?
    What does "best approach" mean for you?
    Regards,
    David
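    On question 3 specifically - assuming Oracle 10gR2 or later and hypothetical table names - DML error logging is one standard way to keep a bulk insert running while still recording exactly which rows failed:
    -- one-time setup: create the error table for the target (default name ERR$_TARGET_TBL)
    BEGIN
      DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'TARGET_TBL');
    END;
    /

    -- bad rows are diverted to the error table instead of aborting the whole statement
    INSERT INTO target_tbl
    SELECT * FROM source_stage
    LOG ERRORS INTO err$_target_tbl ('bulk load 1') REJECT LIMIT UNLIMITED;

    -- afterwards, see which rows failed and why
    SELECT ora_err_number$, ora_err_mesg$, ora_err_tag$
    FROM   err$_target_tbl;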

  • Best approach to creating layouts and design pages for windows store 8.1 applications?

    Hi Everybody,
    Actually I want to design pages for Windows Store 8.1 applications that are compatible with all Windows Phone devices at different resolutions. I want the best approach for designing pages; currently I am laying out the pages using *-sized grid divisions. Is there a better approach than that?

    Here are some more helpful docs:
    Laying out an app page
    Laying out your UI (HTML)
    Laying out your UI (XAML)
    Navigation design patterns
    Choosing a layout (HTML)
    Command design patterns
    Guidelines for advertising
    Guidelines for multiple windows
    Guidelines for projection manager
    Guidelines for resizing windows to tall and narrow layouts
    Guidelines for scaling to pixel density
    Guidelines for window sizes and scaling to screens
    Hope this helps.
    --James

  • What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for the various types of users. (We can store only the relevant data in a particular schema, so the number of records per table can be reduced by replicating tables, but we then have to maintain all the data in another schema as well. That means we need to update two schemas in a given session - a separate schema for the user plus one schema holding all the data - which may cause update problems.)
    OR
    2. Using a single schema for all users.
    Note: all users may access the same tables, and there may be many more records than in the previous case.
    Which is the better option?
    Please give your valuable ideas.

    That is true, but it reads like: "I want a solution from you all. I want you to tell me how to fix my friend's car."

  • Best approach to return Large data ( 4K) from Stored Proc

    We have a stored proc (Oracle 8i) that:
    1) receives some parameters.
    2) performs computations which create a large block of data
    3) returns this data to the caller.
    It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) into a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested"
    I suspect this is taking too much time; any tips about the best approach here? is there a resource with REAL examples on returning large data?
    If I switch to CLOB, will it speed the process, be compatible with callers, etc ? all references to CLOB I saw use trivial examples.
    Thanks for any help,
    Yoram Ayalon
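    Not an authoritative answer, but a minimal sketch of the CLOB variant asked about above, with hypothetical names: the result is built in a temporary CLOB and returned through an OUT parameter, which avoids the temp-table-plus-REF-CURSOR detour. Whether your MSDAORA and ODBC driver versions handle CLOB OUT parameters well is something to verify.
    CREATE OR REPLACE PROCEDURE get_big_report (
      p_param1 IN  NUMBER,
      p_result OUT CLOB
    ) AS
      l_line VARCHAR2(4000);
    BEGIN
      DBMS_LOB.CREATETEMPORARY(p_result, TRUE, DBMS_LOB.CALL);
      -- the loop below stands in for the real computation
      FOR i IN 1 .. 1000 LOOP
        l_line := 'line ' || i || ' for parameter ' || p_param1 || CHR(10);
        DBMS_LOB.WRITEAPPEND(p_result, LENGTH(l_line), l_line);
      END LOOP;
    END get_big_report;
    /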


  • Best approach to do Range partitioning on Huge tables.

    Hi All,
    I am working on an Oracle 11gR2 three-node RAC database; below are the DB details.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    in my environment we have 10 big transaction tables (10 billion rows) and they keep growing. Management is now planning to do a range partition based on the created_dt partition key column.
    We tested this partitioning strategy with a few million records in another environment, using the steps below.
    1. CREATE TABLE TRANSACTION_N
    PARTITION BY RANGE ("CREATED_DT")
    ( PARTITION DATA1 VALUES LESS THAN (TO_DATE(' 2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART1,
    PARTITION DATA2 VALUES LESS THAN (TO_DATE(' 2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART2,
    PARTITION DATA3 VALUES LESS THAN (TO_DATE(' 2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART3
    as (select * from TRANSACTION where 1=2);
    2. Exchange partition to move the data from the old table into the new partitioned table.
    ALTER TABLE TRANSACTION_N
    EXCHANGE PARTITION DATA1
    WITH TABLE TRANSACTION
    WITHOUT VALIDATION;
    3. create required indexes (took almost 3.5 hrs with parallel 16).
    4. Rename the table names and drop the old tables.
    This took around 8 hrs for one table with 70 million records, so for billions of records it will take much longer. But the problem is that we get only 2 to 3 hrs of downtime in production to implement this change for all the tables.
    Can you please suggest the best approach to copy that much data from the existing table to the newly created partitioned table and to create the required indexes?
    Thanks,
    Hari

    >
    in my environment we have 10 big transaction tables (10 billion rows) and they keep growing. Management is now planning to do a range partition based on the created_dt partition key column.
    We tested this partitioning strategy with a few million records in another environment, using the steps below.
    1. CREATE TABLE TRANSACTION_N
    PARTITION BY RANGE ("CREATED_DT")
    ( PARTITION DATA1 VALUES LESS THAN (TO_DATE(' 2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART1,
    PARTITION DATA2 VALUES LESS THAN (TO_DATE(' 2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART2,
    PARTITION DATA3 VALUES LESS THAN (TO_DATE(' 2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART3
    as (select * from TRANSACTION where 1=2);
    2. Exchange partition to move the data from the old table into the new partitioned table.
    ALTER TABLE TRANSACTION_N
    EXCHANGE PARTITION DATA1
    WITH TABLE TRANSACTION
    WITHOUT VALIDATION;
    3. create required indexes (took almost 3.5 hrs with parallel 16).
    4. Rename the table names and drop the old tables.
    This took around 8 hrs for one table with 70 million records, so for billions of records it will take much longer. But the problem is that we get only 2 to 3 hrs of downtime in production to implement this change for all the tables.
    Can you please suggest the best approach to copy that much data from the existing table to the newly created partitioned table and to create the required indexes?
    >
    Sorry to tell you, but that test and partitioning strategy is essentially useless and won't work for your entire table anyway. One reason is that if you use the WITHOUT VALIDATION clause you must ensure that the data being exchanged actually belongs to the partition you are putting it in. If it doesn't, you won't be able to re-enable or rebuild any primary key or unique constraints that exist on the table.
    See Exchanging Partitions in the VLDB and Partitioning doc
    http://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin002.htm#i1107555
    >
    When you specify WITHOUT VALIDATION for the exchange partition operation, this is normally a fast operation because it involves only data dictionary updates. However, if the table or partitioned table involved in the exchange operation has a primary key or unique constraint enabled, then the exchange operation is performed as if WITH VALIDATION were specified to maintain the integrity of the constraints.
    If you specify WITHOUT VALIDATION, then you must ensure that the data to be exchanged belongs in the partition you exchange.
    >
    Comments below are limited to working with ONE table only.
    ISSUE #1 - ALL data will have to be moved regardless of the approach used. This should be obvious since your current data is all in one segment but each partition of a partitioned table requires its own segment. So the nut of partitioning is splitting the existing data into multiple segments almost as if you were splitting it up and inserting it into multiple tables, one table for each partition.
    ISSUE#2 - You likely cannot move that much data in the 2 to 3 hours window that you have available for down time even if all you had to do was copy the existing datafiles.
    ISSUE#3 - Even if you can avoid issue #2 you likely cannot rebuild ALL of the required indexes in whatever remains of the outage windows after moving the data itself.
    ISSUE#4 - Unless you have conducted full volume performance testing in another environment prior to doing this in production you are taking on a tremendous amount of risk.
    ISSUE#5 - Unless you have fully documented the current, actual execution plans for your most critical queries in your existing system you will have great difficulty overcoming issue #4 since you won't have the requisite plan baseline to know if the new partitioning and indexing strategies are giving you the equivalent, or better, performance.
    ISSUE#6 - Things can, and will, go wrong and cause delays no matter which approach you take.
    So assuming you plan to take care of issues #4 and #5 you will probably have three viable alternatives:
    1. use DBMS_REDEFINITION to do the partitioning online (a rough sketch is at the end of this reply). See the Oracle docs and this example from oracle-base for more info.
    Redefining Tables Online - http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables007.htm
    Partitioning an Existing Table using DBMS_REDEFINITION
    http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
    2. do the partitioning offline and hope that you don't exceed your outage window. Recover by continuing to use the existing table.
    3. do the partitioning offline but remove the oldest data to minimize the amount of data that has to be worked with.
    You should review all of the tables to see if you can remove older data from the current system. If you can you could use online redefinition that ignores older data. Then afterwards you can extract this old data from the old table for archiving.
    If the amount of old data is substantial you can extract the new data to a new partitioned table in parallel and not deal with the old data at all.
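    To make alternative 1 a little more concrete, here is a heavily simplified DBMS_REDEFINITION sketch. The schema name is invented, TRANSACTION_N is assumed to be the already-created partitioned interim table, and the linked documentation should be followed for constraint handling, statistics and error recovery:
    DECLARE
      l_errors PLS_INTEGER;
    BEGIN
      -- verify the table can be redefined using its primary key
      DBMS_REDEFINITION.CAN_REDEF_TABLE('MYSCHEMA', 'TRANSACTION',
                                        DBMS_REDEFINITION.CONS_USE_PK);

      -- start copying data into the partitioned interim table; the original stays online
      DBMS_REDEFINITION.START_REDEF_TABLE('MYSCHEMA', 'TRANSACTION', 'TRANSACTION_N',
                                          options_flag => DBMS_REDEFINITION.CONS_USE_PK);

      -- clone indexes, constraints, triggers and grants onto the interim table
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('MYSCHEMA', 'TRANSACTION', 'TRANSACTION_N',
                                              copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
                                              num_errors   => l_errors);

      -- brief lock only at the very end, to swap the two definitions
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('MYSCHEMA', 'TRANSACTION', 'TRANSACTION_N');
    END;
    /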

  • Best approach for handling a Repair business

    Hi All,
    A company provides repair services for various branded Products. The manufacturers send their Products for repair along with replacement Parts. The repaired Products are despatched back to the original Manufacturer with details of the repair (nature of repair, Parts consumed, etc.). The Products are serialized, and hence repair details have to be provided for each Unit of the Product.
    After repair, Labour and / or Material Charges (billing) are claimed with the Manufacturer (the company may also use some of their own Parts for repair). Actual cost incurred for Repair is also to be determined.
    Following metrics are to be taken care of:
    Technician's performance in terms of number of units repaired per day; % yield per technician (e.g. out of 100 units, recovery is 80); cost of repair (actual cost incurred for repair / number of units repaired).
    What would be the best approach to handle the above scenario in SAP R/3 v 4.7 ?
    Request feedback.
    Thanks.
    Raj

    Hi Raj,
    The steps for the repairs process are as follows:
    1. Repair order processing
    2. Technical check and repairs processing
    3. Outbound delivery and billing
    Repair order processing
    1. Create a sales order of type RAS (repair/service), include the service product and the serviceable material in it, and save the order.
    2. Open the sales order in change mode, go to menu Sales Document --> Deliver, and change the delivery type to LR, i.e. returns delivery.
    3. Assign serial numbers to the serviceable materials: the path is Extras --> Serial Numbers; assign them and save.
    4. Go to VL02N and post the goods receipt; that means we are taking the goods into our stores. Then check IQ03, through which we can display the serial number master record and verify it.
    Technical check and repairs processing
    1. Open the sales order in change mode; here we can see how the returns were posted. Go to the item view --> Repair tab; in the Received field you can see the received items.
    2. Below that is the business decision: start repair.
    You will get the repair procedure; these things are set up in customizing. The path is Sales and Distribution --> Sales --> Sales Documents --> Customer Service --> Returns and Repairs Procedure --> Define Repair Procedure.
    We have to assign the repair procedure to the item category, so that whenever the material comes in through that item category we get our own repair procedure.
    Then, in the business decision, we can select how many units we want to repair, how many we want to send outbound, and how many we want to scrap (for scrapped items you have to issue a credit memo). After this, save the sales order.
    Open the sales order in change mode, go to Edit --> Display Criteria --> All Items, and double-click the item so that you can see the service order (which gets generated automatically and which we assign in the requirement class).
    Then go to the service order, confirm the operations, and technically complete (TECO) the order.
    The last step is outbound delivery and billing.
    We have to send the repaired device back:
    Go to VA02 and you will see that, as soon as we TECO the order, an outbound delivery is created in the Repair tab. Perform the outbound delivery, i.e. Sales Document --> Deliver (you need to remove the default delivery type), and save.
    Create the transfer order; the goods issue is posted.
    Then go to DP90 (billing request, process individually), give your sales document number, and press Save Billing Request.
    Then go to Sales Document --> Billing; the invoice will be generated - save it.
    Settle the service order to the sales order through KO88, and settle the repair order through VA88.
    If you still have any doubts, mail me, and reward me with points if this is useful to you.
    Regards
    Satish

  • What is best approach to report building in my case?

    Hi all,
    I'm just getting started with Crystal Reports for our Swing-based desktop application.  We need the ability to generate PDF and XLS reports, perhaps later adding web-based dashboarding and interactive reports.  I'm trying to determine the best approach to take with Crystal Reports to fit our application's data.
    Our app stores results in a separate database (either Oracle, SQLServer, or Apache Derby).  The result records contain lots of ID lookups to tables in another database.  This makes using straight SQL for reporting difficult as I would like to avoid cross-database queries.  So I'm thinking of using the POJO reporting approach where our app gathers the results, generates POJOs, and then passes them to the report.
    My concern with this POJO approach is that it seems to require loading all results into memory and generating the report in one big step. I've read other posts referring to heap issues. Is there a way to avoid this - some way to page through the report data?
    I've also read that Crystal Reports can work with any data provider that implements ResultSet. Is this true? If so, could I create my own custom ResultSet implementation that would let me page through my results without loading everything into memory at once? If possible, please point me to the documentation for this approach; I haven't been able to find any examples.
    If there is a better approach that I haven't mentioned, please let me know. 
    Thanks in advance,
    Guy

    The first option is the best one for performance. The only time you should use result sets is when you need to do runtime manipulation of the data through your application and it is not achievable in a stored procedure.

  • What is the best approach to return Large data from Stored Procedure ?

    no answers to my original post, maybe better luck this time, thanks!
    We have a stored proc (Oracle 8i) that:
    1) receives some parameters.
    2) performs computations which create a large block of data
    3) returns this data to the caller.
    It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) into a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested"
    I suspect this is taking too much time; any tips about the best approach here? is there a resource with REAL examples on returning large data?
    If I switch to CLOB, will it speed the process, be compatible with callers, etc ? all references to CLOB I saw use trivial examples.
    Thanks for any help,
    Yoram Ayalon

    Create a new farm in the secondary Data Center at the same patch level with the desired configuration. Replicate the databases using the method of choice (Mirroring, AlwaysOn, etc.). Create a downtime window where you can then attach the databases to the
    new farm's Web Application(s)/Service Application(s).
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
