Best approach to return large data (> 4K) from a stored proc

We have a stored proc (Oracle 8i) that:
1) receives some parameters,
2) performs computations that create a large block of data,
3) returns this data to the caller.
It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver), and it is critical in terms of performance.
I have written this procedure with an OUT parameter that is a REF CURSOR over a record containing a LONG. To make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) in a temp table and then open the cursor as a SELECT from that temp table.
I have tried opening the cursor as a SELECT of the working buffer (from dual), but I get the error "ORA-01460: unimplemented or unreasonable conversion requested".
I suspect the temp-table step is taking too much time; any tips on the best approach here? Is there a resource with real examples of returning large data?
If I switch to a CLOB, will it speed up the process and remain compatible with the callers? All the CLOB references I have seen use trivial examples.
Thanks for any help,
Yoram Ayalon
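For what it's worth, the usual argument for CLOB over LONG is that a CLOB is read through its locator in chunks rather than copied wholesale into one buffer. A minimal, language-agnostic sketch of that chunked-read pattern (Python, with an in-memory stream as a hypothetical stand-in for the CLOB; none of this comes from the original post):

```python
import io

def read_in_chunks(stream, chunk_size=32 * 1024):
    """Consume a large character stream piecewise, the way a CLOB locator is
    typically read (DBMS_LOB.READ server-side, or a character stream on the
    client) instead of materialising one huge LONG value in memory."""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            return
        yield chunk

# Stand-in for a large CLOB OUT parameter
big = io.StringIO("x" * 100_000)
total = sum(len(c) for c in read_in_chunks(big))
```

The point of the sketch is only the shape: the caller's peak memory is one chunk, not the whole value.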

Similar Messages

  • What is the best approach to return Large data from Stored Procedure ?

    no answers to my original post, maybe better luck this time, thanks!

Create a new farm in the secondary Data Center at the same patch level with the desired configuration. Replicate the databases using the method of choice (Mirroring, AlwaysOn, etc.). Create a downtime window where you can then attach the databases to the new farm's Web Application(s)/Service Application(s).
    Trevor Seward

  • Reading Large data files from client machine ( urgent)

    Hi,
I want to know the best way to read a large file (about 75 MB) on the client machine and insert its contents into the database.
Can anybody provide sample code for it?
Loading the file should be done on the client machine; inserting into the database should be done on the server side.
How should I load the file?
How should I transfer this file or data to the server?
How should I insert it into the database?
    Thanks in advance.
    regards
    Kalyan

Like I said before, you should be using your application server to serve files from the server's filesystem. The database should not store files this big; it should instead just hold a reference to the file.
I think you have not understood the problem correctly.
    I will make it clear.
The requirement is as follows.
This is a J2EE-based application.
The application server is Oracle Application Server.
The database is Oracle 9i.
It is a thick client (a Swing-based application).
The user enters a data source like c:\turkey.data.
This turkey.data file contains data such as:
    1@1@20050131@1@4306286000113@D00@32000002005511069941@@P@10@0@1@0@0@0@DK@70059420@4330654016574@1@51881100@51881100@@99@D@40235@0@0@1@430441800000000@@11@D@42389@20050201@28483@15@@@[email protected]@@20050208@20050307@0@@@@@@@@@0@@0@0@0@430443400000800@0@0@@0@@@29@0@@@EUR
Likewise, we may have more than 3 lakh (300,000) rows in it.
We need to read this file and transfer it to the application server, which hosts the EJBs.
There we read the file; each row in the file becomes one row in a database table.
So we need to insert 3 lakh records into the database.
We can use JDBC to insert the data, which is not a problem.
The only problem is how to transfer this data to the server.
I can do it one way (this is only an example):
I can read all the data into a StringBuffer and pass it to the server.
There I again get the data from the StringBuffer and insert it into the database using JDBC.
If you do it this way, it is a performance problem and takes a long time to insert into the database. It may even throw an OutOfMemoryError.
I am just looking for a better way of doing this that gives good performance.
Hope you have understood the problem.
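The StringBuffer-all-at-once approach fails precisely because everything sits in memory at once. A hedged sketch of the alternative, streaming the file and inserting in fixed-size batches (Python, with sqlite3 standing in for the Oracle/JDBC stack; the table and field names are invented):

```python
import sqlite3
from itertools import islice

BATCH_SIZE = 1000  # tune against available memory and round-trip cost

def iter_rows(lines, sep="@"):
    """Split each line of the feed file into a tuple of fields."""
    for line in lines:
        yield tuple(line.rstrip("\n").split(sep))

def load_in_batches(conn, lines):
    """Insert rows batch by batch instead of building one giant buffer."""
    rows = iter_rows(lines)
    total = 0
    while True:
        batch = list(islice(rows, BATCH_SIZE))
        if not batch:
            break
        conn.executemany("INSERT INTO feed (f1, f2, f3) VALUES (?, ?, ?)", batch)
        conn.commit()  # commit per batch keeps client memory bounded
        total += len(batch)
    return total

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feed (f1 TEXT, f2 TEXT, f3 TEXT)")
sample = ["1@20050131@D00", "2@20050201@D01"]
loaded = load_in_batches(conn, sample)
```

In the J2EE setup described above, the equivalent would be streaming the file to the server (e.g. over a servlet upload) and using JDBC's addBatch/executeBatch per chunk, so no single buffer ever holds all 300,000 rows.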

  • Best Approach to Compare Current Data to New Data in a Record.

Which is the best approach for comparing current data to new data for every single record before making updates?
What I have is a table with 5,000 records that needs to be updated every week with data from a data feed, but I don't want to update every single record week after week. I want to update only the records whose data has changed.
    I can think of these options:
    1) two cursors (one for current data and one for new)
    2) one cursor for new data and one varray for current data
    3) one cursor for new data and a select into for current data.
    Thanks for your help.
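Whichever of the three cursor arrangements is chosen, the core of it is a per-key comparison so unchanged rows are skipped. A toy sketch of that idea (Python, with made-up data; inside the database a single MERGE usually does this better, as the reply below notes):

```python
import hashlib

def row_hash(row):
    """Hash all comparable columns; '|' is assumed not to occur in the data."""
    return hashlib.md5("|".join(map(str, row)).encode()).hexdigest()

# Keyed by primary key: current table contents vs. this week's feed
current  = {1: ("Alice", "NY"), 2: ("Bob", "LA")}
incoming = {1: ("Alice", "NY"), 2: ("Bob", "SF"), 3: ("Carol", "TX")}

# Update/insert only keys that are new or whose row contents changed
to_upsert = [k for k in incoming
             if k not in current or row_hash(current[k]) != row_hash(incoming[k])]
```

Here key 1 is untouched, key 2 is updated (city changed), and key 3 is inserted.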

I don't recommend it over MERGE, but in theory you could use a checksum (OWA_OPT_LOCK.checksum()) on the rows and, if they differ, copy the new row into the destination table. Or you might be able to use ORA_ROWSCN to see if things have changed.
Like I said, I don't know that I'd take either approach over MERGE, but they are options.
Gaff
Edited by: Gaff on Feb 11, 2009 3:25 PM
Actually, comparing ROWSCN between 2 tables may not be an option. I know you can turn a ROWSCN into a time value, so if that ROWSCN is derived from anything but the column values for the row, that would foil the approach.

  • Best approach to replicate the data.

    Hello Every one,
I want to know about the best approach to replicate the data.
First let me clarify the scenario: I have one Oracle 10g Enterprise Edition database and 20 Oracle 10g Standard Edition databases. The Enterprise Edition runs at the center and the other 20 Standard Edition databases run at the sites.
The data moves from the center to the sites and vice versa, not between sites. There is only one schema to replicate, with more than 500 tables.
What would be best for replication (updatable MVs, Oracle Streams, or anything else)? It's urgent, please.
Thanks in advance.
    Edited by: user560669 on Dec 13, 2009 11:01 AM

    Hello,
Yes, MVs and Oracle Streams are the common ways to replicate data between databases.
I think that in your case (you have to replicate a whole schema) Oracle Streams is interesting (it's not so easy to maintain 500 MVs).
But you must be careful about the edition type.
I'm not sure that Standard Edition allows the advanced replication features. It seems to me (but I may be wrong) that updatable MVs are an EE feature.
Streams, on the other hand, seems to be available even in SE.
    Please, find enclosed some links about it:
    [http://www.oracle.com/database/product_editions.html]
    [http://www.oracle.com/technology/products/dataint/index.html]
    Hope it can help,
    Best regards,
    Jean-Valentin

  • Need to load large data set from Oracle table onto desktop using ODBC

    I don't have TOAD nor any other tool for querying the database.  I'm wondering how I can load a large data set from an Oracle table onto my desktop using Excel or Access or some other tool using ODBC or not using ODBC if that's possible.  I need results to be in a .csv file or something similar. Speed is what is important here.  I'm looking to load more than 1 million but less than 10 million records at once.   Thanks.

    hillelhalevi wrote:
    I don't have TOAD nor any other tool for querying the database.  I'm wondering how I can load a large data set from an Oracle table onto my desktop using Excel or Access or some other tool using ODBC or not using ODBC if that's possible.  I need results to be in a .csv file or something similar. Speed is what is important here.  I'm looking to load more than 1 million but less than 10 million records at once.   Thanks.
    Use Oracle's free Sql Developer
    http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
    You can just issue a query like this
    SELECT /*csv*/ * FROM SCOTT.EMP
    Then just save the results to a file
    See this article by Jeff Smith for other options
    http://www.thatjeffsmith.com/archive/2012/05/formatting-query-results-to-csv-in-oracle-sql-developer/
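If no GUI tool is available at all, the same export is a few lines over any scripting language's database API. A sketch of the pattern (Python, with sqlite3 as a hypothetical stand-in for an Oracle connection; the table is invented):

```python
import csv
import io
import sqlite3

# sqlite3 is a stand-in here; the same pattern works over any DB-API driver.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(7369, "SMITH"), (7499, "ALLEN")])

buf = io.StringIO()  # in a real export this would be open("emp.csv", "w")
cur = conn.execute("SELECT empno, ename FROM emp ORDER BY empno")
writer = csv.writer(buf)
writer.writerow([d[0] for d in cur.description])  # header from cursor metadata
writer.writerows(cur)  # rows are streamed as fetched, never held in full
csv_text = buf.getvalue()
```

Because `writerows` consumes the cursor lazily, this scales to the millions-of-rows range the question asks about without loading the result set into memory.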

  • Best method to transfer large strings (XML data) to/from stored procedure

    Hi!
I'm trying to call a PL/SQL procedure from Java. The procedure takes a string (XML) as input, parses it, and returns a result string (also XML).
The typical size of the string is 5 KB to 1 MB.
I can see two possible solutions:
1) String / LONG
2) CLOB (using DBMS_LOB.createTemporary to get a CLOB locator and passing the locator to the stored procedure)
Does anyone have other suggestions?
What is the fastest method for transferring XML structures to and from stored procedures?
    Anders

    Anders,
I would say it depends on your requirements. Both methods have advantages and disadvantages.
Using a CLOB means that you have to use vendor-specific libraries, but it is more extensible and fast too.
Using String/LONG will be more portable in the long run, but you lose on speed/performance.
Just a thought: if I understand correctly, you are transforming one XML document into another based on some conditions. Why don't you use XSL and the XSLT stylesheet processor packaged with the XDK for this? I think this would be the fastest way.
    Hope this helps.
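To make the XSL suggestion concrete, here is the general shape of an XML-in/XML-out transform (a Python sketch using xml.etree as a stand-in for an XSLT sheet run by the XDK; the document structure is invented for illustration):

```python
import xml.etree.ElementTree as ET

src = "<orders><order id='1' total='10'/><order id='2' total='25'/></orders>"

def transform(xml_text):
    """Toy transform standing in for an XSLT sheet: keep orders with total > 15."""
    root = ET.fromstring(xml_text)
    out = ET.Element("bigOrders")
    for order in root.findall("order"):
        if int(order.get("total")) > 15:
            ET.SubElement(out, "order", id=order.get("id"))
    return ET.tostring(out, encoding="unicode")

result = transform(src)
```

Whether the transform runs in the database (XDK/XSLT) or in the Java tier, keeping it declarative like this avoids hand-written string parsing of megabyte-sized XML.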

  • What's the best approach to resetting Calendar data on Server?

    I have a database format error in a calendar that I only noticed after the migration to Server on Yosemite.  I'll paste a snippet from the Error Log in at the bottom that shows the error - I've highlighted the description of the problem in red.
I found a pretty cool writeup from Linc in a different thread, but it's aimed at fixing a similar problem for a local user on their own machine rather than an iCal server like the one we're running. Here's the link to that thread: Re: Calendar crashes on open. For example, does something like Calendar Cleaner work on our server database as well?
    In my case I think I'd basically like to gracefully remove all the Calendar databases from Server and start fresh (all the users' calendars are backed up on their local machines, so they can just import them into fresh/empty calendars once I've cleaned out the old stuff).  Any thoughts on "best approach" would be much appreciated.
    Here's the error log...
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/defer.py", line 1099, in _inlineCallbacks
    2015-01-31 07:14:41-0600 [-] [caldav-0]         result = g.send(result)
2015-01-31 07:14:41-0600 [-] [caldav-0]       File "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/txdav/caldav/datastore/sql.py", line 3635, in component
    2015-01-31 07:14:41-0600 [-] [caldav-0]         e, self._resourceID
    2015-01-31 07:14:41-0600 [-] [caldav-0]     txdav.common.icommondatastore.InternalDataStoreError: Data corruption detected (Invalid property: GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     VERSION:2.0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CALSCALE:GREGORIAN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     PRODID:-//Apple Inc.//Mac OS X 10.8.2//EN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTART:20121114T215900Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTEND:20121114T232700Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CLASS:PUBLIC
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CREATED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DESCRIPTION:Flight leg 2 of 2 for trip from MSP to LAX\\nhttp://www.google.
    2015-01-31 07:14:41-0600 [-] [caldav-0]      com/search?q=US+29+flight+status\\nBooked on November 8\\, 2012\\n
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTAMP:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LAST-MODIFIED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LOCATION:Sky Harbor International Airport\\, Phoenix\\, AZ
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SEQUENCE:0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     STATUS:CONFIRMED
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SUMMARY:US 29 from PHX to LAX
    2015-01-31 07:14:41-0600 [-] [caldav-0]     URL:http://www.hipmunk.com/flights/MSP-to-LAX#!dates=Nov14,Nov17&group=1&s
    2015-01-31 07:14:41-0600 [-] [caldav-0]      elected_flights=96f6fbfd91,be8b5c748d;kind=flight&locations=MSP,LAX&dates=
    2015-01-31 07:14:41-0600 [-] [caldav-0]      Nov14,Nov16&group=1&selected_flights=96f6fbfd91,
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-CALENDARSERVER-PERUSER-UID:D0737009-CBEE-4251-A288-E6FCE5E00752
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRANSP:OPAQUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACKNOWLEDGED:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACTION:AUDIO
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ATTACH:Basso
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRIGGER:-PT2H
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-APPLE-DEFAULT-ALARM:TRUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-WR-ALARMUID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ) in id: 3405
    2015-01-31 07:14:41-0600 [-] [caldav-0]    
2015-01-31 07:16:39-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None
    2015-01-31 08:08:40-0600 [-] [caldav-1]  [AMP,client] [calendarserver.tools.purge#warn] Cleaning up future events for principal A95C9DB2-9757-46B2-ADF6-4DECE2728820 since they are no longer in directory
    2015-01-31 08:09:10-0600 [-] [caldav-1]  [-] [twext.enterprise.jobqueue#error] JobItem: 39, WorkItem: 762001 failed: ERROR:  canceling statement due to statement timeout
    2015-01-31 08:09:10-0600 [-] [caldav-1]    
2015-01-31 08:13:40-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None

    <facepalm>  Well, there you go.  It turns out I was over-thinking this.  The Calendar app on a Mac can manage this database just fine.  Sorry about that.  There may be an easier way to do this, but here's how I did it.
    Use the Calendar.app on a local computer to:
    - Export the corrupted calendar to an ICS file on the local computer (Calendar -> File -> Export -> Export)
    - Create a new local calendar (Calendar -> File -> New Calendar -> On My Mac)
    - Import the corrupted calendar into the new/empty local calendar (Calendar -> File -> Import...)
    - Delete years and years of old events, including the one that was triggering that error message
    - Export the (now much smaller) local calendar to another ICS file on my computer (Calendar -> File -> Export -> Export)
    - Create a new calendar on the server (Calendar -> File -> New Calendar -> in the offending server-based iCal account)
    - Import the edited/fixed/smaller/no-longer-corrupted calendar into the new/empty server calendar (Calendar -> File -> Import...)
    - Make the newly-created iCal calendar the primary calendar (drag it to the top of the list of calendars on the server)
    - Delete the old/corrupted calendar (right-clicking on the bad calendar in the calendar list - you can only delete it once it's NOT the primary calendar any more)

  • Best way to copy large VM files from one drive to another?

Hi all, I am wondering what the best and fastest way is to copy a large VM file from one drive to another (external drive to internal drive). Should the .pvm folder (in my case, with Parallels) be zipped first? The files I need to copy range from 50 to 70 GB. Thanks for any insight! P.S. Even Bombich admits it's difficult for CCC to do... not clear why, but it does slow a full clone down to a crawl. I also have ChronoSync at my disposal, but suspect the same is true there as well. Cheers!
    coocoo

I don't think it matters much which way you do the job. I use Parallels to compress my drives on a regular basis. If you want to copy a dynamic VM, which can grow to just about any size as required, to another drive, you will spend about the same amount of time zipping the VM, transferring it, and unzipping it as you would just moving it.
I've moved my VM files to a second partition that does not get cloned to the bootable backup I create with SuperDuper. I've excluded my entire VM folder from Time Machine and just make zipped backups for each system after any large Service Pack updates from Microsoft or any intricate program installations. These get copied manually to a second partition on my Time Machine drive.
My VMs' sizes are kept under control by using common Documents/Movies/Music folders for the Mac and the VMs. My VMs do not hold the data files the programs use; those are on my Mac and get backed up every hour with Time Machine.
Reinstalling a VM just means deleting the broken copy, unzipping my latest backup, and putting it into the folder Parallels knows to use. The data files are always accessible, even if they were created after my latest VM backup.

  • What would be best approach to migrate millions of records from on premise SQL server to Azure SQL DB?

    Team,
    In our project, we have a requirement of data migration. We have following scenario and I really appreciate any suggestion from you all on implementation part of it.
    Scenario:
We have millions of records to be migrated to the destination SQL database after some transformation.
The source SQL Server is on-premises, in a partner's domain, and the destination server is in Azure.
Can you please suggest what would be the best approach to do so?
    thanks,
    Bishnu
    Bishnupriya Pradhan

You can use SSIS itself for this.
Have batch logic that identifies data batches within the source, and then include data flow tasks to do the data transfer to Azure. The batch size should be chosen according to available buffer memory, the number of parallel tasks executing, etc.
You can use an ODBC or ADO.NET connection to connect to Azure.
    http://visakhm.blogspot.in/2013/09/connecting-to-azure-instance-using-ssis.html
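The batch logic described here can be as simple as carving the source key space into ranges, one per data flow slice. A hedged sketch of that partitioning step (Python; all numbers are illustrative, not from the thread):

```python
def key_ranges(min_id, max_id, batch_size):
    """Yield half-open [low, high) id ranges; each range becomes one
    data-flow slice, so no single transfer buffers the whole table."""
    low = min_id
    while low < max_id:
        high = min(low + batch_size, max_id)
        yield (low, high)
        low = high

# e.g. ids 0..1,000,000 copied in 4 slices of 250k each
ranges = list(key_ranges(0, 1_000_000, 250_000))
```

Each (low, high) pair would parameterize one SSIS data flow's source query (WHERE id >= low AND id < high), letting slices run in parallel and restart independently on failure.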
    Visakh

Best approach to change historical data???

    Hi experts,
As a requirement of our project, R/3 will periodically change a material (key) value on an existing product, that is, update the product code with a new one that replaces the old one.
Ex: Material A (code M10013) becomes Material B (code M10014).
We want to realign all historical data stored in BW from the old material code (M10013) to the new one (M10014).
The question is whether there is a standard way to do this within BW.
If not, what would be the best approach to deal with this?
    Thanks in advance for your help,
    Best regards,
    Enric
    Note: I have faced this same requirement in APO, and there's a Realignment functionality to convert CVCs, but I'm not sure if we have a similar tool in BW.

    Hi,
Use the data mart concept.
Load all the historical data of the old material, with the new material code, into a new ODS,
then pull the data from the new ODS back into the existing one.
Hope this helps,
regards
Suresh

Best way to return structured data? procedure or function?

    Hi everyone.
I can't decide on the best way to return some data from a Java stored procedure.
For example, which is the better way to write this method:
public STRUCT function(input parameters) { STRUCT result; ... return result; }
or
public void procedure(input parameters, STRUCT[] struct) { STRUCT result; ... struct[0] = result; }
In the latter case, the wrapper will obviously be: CREATE PROCEDURE name(... IN datatype, data OUT STRUCT)
Which way is more efficient?
    thank you ;)

    da`,
By "efficiency", I assume you mean best performance / least time.
In my experience, the only way to find out is to compare the two, since many factors affect the situation.
    Good Luck,
    Avi.
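The "compare the two" advice is easy to act on with a small harness. A minimal Python sketch of the two calling styles and how one might time them - the shapes of the approaches, not Oracle's actual STRUCT API:

```python
import timeit

def as_return_value(n):
    """Function style: build the structure and return it."""
    return [i * i for i in range(n)]

def as_out_param(n, out):
    """Procedure style: the caller supplies a container that gets filled."""
    out.extend(i * i for i in range(n))

# Measure both; only a benchmark on your own workload settles the question.
t_func = timeit.timeit(lambda: as_return_value(10_000), number=50)

def run_proc():
    out = []
    as_out_param(10_000, out)
    return out

t_proc = timeit.timeit(run_proc, number=50)
```

The same harness idea applies on the JDBC side: wrap each variant in a loop over realistic inputs and compare wall-clock times, since driver and marshalling costs dominate any stylistic difference.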

  • Large data moving from one schema to another schema and tablespace

    Dear DBA's
    Oracle 11.2.0.1
    OS : Solaris 10.
We have 1 TB of data in schema A, and I want to move it to schema B for other testing purposes. Which method is good for taking an export/import of this large a data set? Kindly advise.
    Thanks and Regards
    SG

    Hi
You can use expdp/impdp or Transportable Tablespaces. Please check the note below:
Using Transportable Tablespaces for EBS Release 12 Using Database 11gR2 [ID 1311487.1]
Regards,
Helios

  • Returning rows from stored proc

    Hello, this is the sample from the SQL Developer help to create proc:
CREATE OR REPLACE
PROCEDURE list_a_rating(in_rating IN NUMBER) AS
  matching_title VARCHAR2(50);
  TYPE my_cursor IS REF CURSOR;
  the_cursor my_cursor;
BEGIN
  OPEN the_cursor
    FOR 'SELECT title
           FROM books
          WHERE rating = :in_rating'
    USING in_rating;
  DBMS_OUTPUT.PUT_LINE('All books with a rating of ' || in_rating || ':');
  LOOP
    FETCH the_cursor INTO matching_title;
    EXIT WHEN the_cursor%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(matching_title);
  END LOOP;
  CLOSE the_cursor;
END list_a_rating;
You have to write a cursor just to display results from a query. To me this is whacked, but I'll go with it. So what if you wanted to return that result set or REF CURSOR so it can be consumed by a calling application or procedure? I tried something like
    CREATE OR REPLACE PROCEDURE list_a_rating(in_rating IN NUMBER, MatchingTitles OUT Ref Cursor)
    but that's not compiling. How do you return data from a proc or function?
    Thanks,
    Ken

ktrock wrote:
Hello, this is the sample from the SQL Developer help to create proc:
A very poor example...
ktrock wrote:
You have to write a cursor just to display results from a query. To me this is whacked but I'll go with it.
All SQL you send to Oracle for parsing and execution winds up as SQL cursors in the server's SQL shared pool. You, the client, then get a handle as a reference to the cursor that was created for your SQL statement. Using this handle you can now fetch the output of the cursor and consume it on the client side.
So there is no such thing as not using a cursor - you always use cursors. Many times this will be implicit, as the client hides the cursor workings from you.
As for DBMS_OUTPUT and PL/SQL... PL/SQL cannot "write output". And time and again (stupid?) people will use DBMS_OUTPUT thinking it is "writing to the client's display device".
It is not. It is writing data into a PL/SQL variable on the server - a PL/SQL variable that consumes very expensive dedicated process memory on the server. The client (SQL Developer, TOAD, SQL*Plus, etc.) can read the contents of this variable and render it on the client's display device.
So how on earth does it make sense to take SQL data on the server (which is structured and can vary significantly in volume) and write it as plain text into a PL/SQL variable?
It does not make sense. At all. Bluntly put - it is just plain bloody stupid.
    So how do you return data using PL/SQL? The same way as you return data using SQL.
    PL/SQL allows the complexity of the SQL language, database model and so on to be moved from the client application into PL/SQL itself. The client thus no longer need to know table names, what to join where and when, how to write effective SQL and so on. It calls a PL/SQL procedure with optional parameters. This procedure has the logic to create the required SQL (based on the parameters too if applicable) and return the cursor handle for that SQL.
    The client thus calls PL/SQL, PL/SQL serves as the SQL abstraction layer and does the complex SQL bit, and PL/SQL then returns the SQL cursor handle to the client.
    In the PL/SQL language, there are different data types for dealing with the same SQL cursor handle - depending on what you want to do with that SQL cursor.
    If you want to pass that SQL cursor handle to a client, then you need to use the sys_refcursor data type in PL/SQL. Remember that SQL cursor does not know or care what data type the client (such as PL/SQL) uses to reference it. All SQL cursors in the server's shared pool are the same - how you treat them from a client side (as an explicit cursor, implicit cursor, ref cursor, etc) is a client program decision.
    Also keep in mind that a SQL cursor is NOT a result set. It is not a data set that is instantiated or created on the server that contains the data (or references to the data) for your SQL.
    If this was the case - just how many such cursor result sets could the server create before running out of memory?
A cursor is an executable program. The source SQL code is compiled into a program called a cursor. This cursor contains the execution steps the server needs to follow to find the data, and it outputs rows as they are found. The execution plan of a cursor shows what this program looks like.
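The "return the handle and let the client fetch" pattern described here looks much the same in any client language. A Python/sqlite3 sketch standing in for PL/SQL's sys_refcursor (table contents invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, rating INTEGER)")
conn.executemany("INSERT INTO books VALUES (?, ?)",
                 [("Dune", 5), ("Emma", 4), ("Ubik", 5)])

def list_a_rating(conn, in_rating):
    """Return the open cursor itself - a handle the caller fetches from -
    rather than printed text or a pre-built result set."""
    return conn.execute(
        "SELECT title FROM books WHERE rating = ? ORDER BY title", (in_rating,))

cur = list_a_rating(conn, 5)      # nothing has been fetched yet
titles = [row[0] for row in cur]  # rows are produced only as we fetch
```

Note that the procedure never loops or prints: it hands back the cursor, and the consumer decides how and when to fetch - exactly the division of labor the answer argues for.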

  • Returning rowcount and resultset from stored procedure

    Hello,
In SQL Server you can return multiple result sets from a stored procedure with simple SELECT statements. But in Oracle, you have to use SYS_REFCURSOR.
Sample code:
CREATE OR REPLACE PROCEDURE sp_resultset_test (
  t_recordset OUT SYS_REFCURSOR
) IS
BEGIN
  OPEN t_recordset FOR
    SELECT * FROM EMPLOYEE;
END sp_resultset_test;
Is there any other way in Oracle 10 or 11?
    Thank You.

What is the requirement? Oracle is far more flexible than SQL Server, with numerous features that do not exist in SQL Server. (Fact)
One of the biggest mistakes you can make is to treat Oracle like SQL Server - trying to match feature A1 in SQL Server to feature B3 in Oracle.
It does not work that way. If you want SQL Server-specific behaviour, rather stick with SQL Server, as it does SQL Server-specific features far better than Oracle does.
So instead of trying to map what a T-SQL stored proc does onto an Oracle ref cursor (even to the extent of using that very silly sp_ prefix to name the proc), tell us the problem and the requirements. That way we can tell you which Oracle features and options are best used to solve that problem, instead of competing in some unknown feature-comparison event with SQL Server.
