Dynamic SQL in 3 tier client server

Hi
We are supporting a 3-tier application which uses static SQL in the mid-tier to talk to an Oracle database. It uses Twister from the now-defunct Brokat company. As the user base continues to increase we are looking to move to dynamic SQL, but are unsure of the syntax. Our current code looks something like
StringBuffer sb = new StringBuffer();
sb.append("SELECT * FROM MESSAGE WHERE ACCOUNT_NUMBER ='");
sb.append(getAccount());
sb.append("'");
inPool.set("statement",sb.toString());
myOracle.process("execute", inPool, outPool);
where the last 2 statements are Twister specific. Twister must be treated as something of a black box.
Has anyone got any ideas on what we would need to change?

You want to do something like this:
// SQL statement for the prepared statement; ? is a bind variable
String sql = "SELECT * FROM MESSAGE WHERE ACCOUNT_NUMBER = ?";
// Get a connection (we use a pool)
Connection conn = DBUtil.getConnection();
// Prepare the SQL (prepareStatement for a plain query; prepareCall is for stored procedures)
PreparedStatement acctPS = conn.prepareStatement(sql);
// Bind the value
acctPS.setString(1, getAccount());
ResultSet rset = acctPS.executeQuery();
while (rset.next()) {
    // Process the result set
}
rset.close();
acctPS.close();
// Remember to close or return the connection when finished
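
For reference, here is a more complete, self-contained sketch of the same bind-variable approach using try-with-resources, so the connection, statement and result set are closed even if an exception is thrown. DBUtil and Twister belong to the poster's environment, so this sketch assumes a plain DriverManager connection; the JDBC URL, user, password and the column read in the loop are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class AccountQuery {
    // Placeholder connection details -- substitute your own.
    private static final String URL = "jdbc:oracle:thin:@dbhost:1521:ORCL";

    public static void main(String[] args) throws Exception {
        String account = args[0];
        String sql = "SELECT * FROM MESSAGE WHERE ACCOUNT_NUMBER = ?";
        try (Connection conn = DriverManager.getConnection(URL, "user", "password");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, account);               // bind the value; no quoting needed
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // process each row, e.g. read a column by name
                    System.out.println(rs.getString("ACCOUNT_NUMBER"));
                }
            }
        }
    }
}

Whether Twister's inPool/outPool interface can carry a parameterized statement together with its bind values is a Twister-specific question; the code above only shows the JDBC pattern the mid-tier would need if it can issue the SQL itself.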

Similar Messages

  • 2-tier Client/server Architecture(Urgent!!!!)

    Hi, can anyone help me to do a client/server architecture? The server should be able to track and store the client's name, IC number and his machine's IP. The server should also be able to broadcast a question stored in the database and get the answer to the question, whereby the answer is also stored in the database. The server should be able to broadcast the question to multiple servers. Thanks!

    you mean able to broadcast to multiple clients, right?
    read this webpage:
    http://java.sun.com/docs/books/tutorial/networking/sockets/clientServer.html
    steal some of the code... maybe from the knock knock server client code and then modify it to meet your needs.
    Don't worry about storing the information in a database until after you have everything else working. The database stuff will require knowledge of JDBC.
    Good luck,
    Tim
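
    To make the tutorial pointer a little more concrete, here is a minimal sketch (not the tutorial's actual Knock Knock code) of a server that sends a question to each connecting client and reads back the answer. The class name, port number and question text are arbitrary, and the database side (storing the name, IC number, IP and answers via JDBC) is left out, as Tim suggests doing that last.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class SimpleQuestionServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(4444)) {
                // Handles one client at a time; to "broadcast" to many clients
                // at once you would hand each accepted socket to its own thread.
                while (true) {
                    try (Socket client = server.accept();
                         PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(client.getInputStream()))) {
                        out.println("Question: What is the capital of France?"); // send the question
                        String answer = in.readLine();                            // read the client's answer
                        System.out.println(client.getInetAddress() + " answered: " + answer);
                    }
                }
            }
        }
    }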

  • 2-tier form/server implementation

    Hi all,
    Where can I find information to implement 2-tier (client/server) with Developer 6i?
    Is it possible to do it without installing form/report server?
    Thanks in advance!

    Dear
    All you have to do is install the Forms and Reports Runtime on the client machine. Make a connect string referencing the server. You can use either 'SQL Easy Configuration' or manually edit the 'tnsnames.ora' file found in
    %ORACLE%\net80\admin\
    Regards.

  • WARNING - Oracle intends to desupport Forms client server mode

    When Forms 6i becomes desupported you will have to move to Forms 9i. The Forms 9i runtime can only be run in web mode from a web browser. You will not be able to run client-server forms in native operating systems such as Windows or Unix. The forms will essentially run on the application server, with Java applets being sent to your web browser. Links to third party products will no longer work with host commands executing on your local machine. You will have to include Java code to make local commands work.
    The transition from Forms 6i client-server to Forms 9i web mode will be one of the most painful upgrades Oracle has ever inflicted on its customers.
    Essentially, Oracle is trying to palm us off with running forms from web browsers whether we or our customers like it or not. Java seems to be creeping in. I suspect Developer will eventually turn into JDeveloper, as they will not want to support two products. Easy one line statements and built-ins will be replaced by hundreds of lines of Java nonsense. Developer will move from a 4GL RAD environment to a cumbersome bloated 3GL Java environment.
    You can stop this happening, if you want to keep your customers happy, by:
    1. Sending Oracle enhancement requests to allow Forms 9i to run in native client-server mode.
    2. Complaining to your Oracle sales contact.
    3. Asking difficult questions at Oracle user groups.

    Duncan et al.,
    I've been wondering why exactly a Forms 9i app needs to run in a web page. Why could the applet not be deployed in a more "standalone" fashion, i.e., an independent application window? This would at least offer the appearance of a native application, complete with the new Java look-and-feel.
    If Oracle really wanted to make their customers happy, they would then take the next step and come up with a way to embed OC4J into a client-side deployment executable, which would then effectively allow for a 2-tier client-server architecture.
    It seems to me that 2-tier/3-tier each have their place in the world, depending on the situation. The extreme "2-tier" example would be an application that is to be deployed on a single client workstation. It would be hard to argue that a separate application server ought to be used. At the other extreme, anybody who has tried to manage the deployment--and upgrade--of a large number of Forms clients is very attracted to the prospect of only having to maintain and upgrade a few application servers.
    I agree with the direction of the product as far as replacing Toolkit2 and the native runtime with the JRE. The advantage of on-demand updating of application code is compelling. The capability of moving application logic to the middle tier is extremely useful. Platform independence is now done using the "universal" JRE instead of TK2.
    If the product could maintain the client-side processing capability--without resorting to Javabeans--it would be just that much stronger. As an application architect, I want to be able to design the application to allocate the work where it makes the most sense, either on the client, the application server, or the database server.
    How hard would it be to put this client-side processing capability back into the product?
    Regards,
    Bruce MacDonald

  • What is minimum client server-install?

    Hi there,
    I was just wondering, since I could not find it in any documentation.
    What should I pick in a custom installation if I just want to run Forms and Reports in a 2-tier client/server environment?
    Many thanks,

    Runtime for Forms and Reports, and SQL*Net.

  • Alternative to native, dynamic sql to return a ref cursor to a client

    I'm on Oracle 8.0.4, and would like to pass a string of values like '1,2,7,100,104' that are the primary key for a table. Then use something like:
    procedure foo( MyCur OUT RefCurType, vKey varchar2) is
    begin
      open MyCur for
        'select names from SomeTable' ||
        ' where ID in (' || vKey || ')';
    end;
    This would return a recordset to (in this case) a Crystal Reports report.
    However, native dynamic SQL ain't available until 8.1.0. So can anyone think of a clever way to accomplish this, with a way to return a cursor? I can't figure out how to do this with DBMS_SQL, because open_cursor is just returning a handle, not a reference to a cursor that can be passed to a remote client.
    Thanks in advance.


  • Access-SQL Server (Client Server Configuration) Best Way To Refresh SQL Server Records ?

    We are using Access 2013 as the front end and SQL Server 2014 as the back end to a client server configuration.
    Access controls are bound to the SQL fields with the same names. When using Access to create a new record in a Form, the data are not transferred to SQL if the form is exited to display a different Form or Access is closed. If the right or left arrow navigation buttons at the bottom of the form are first used to display either the previous or next record, then the data in the new record are correctly transferred to SQL.
    What is the best way to refresh the new SQL record prior to the closing of the new record in the bound Access form? We have tried Requery of the entire form and of all the individual controls, without success. We are looking for a method of refreshing SQL that functions in a manner similar to what happens with the navigation buttons.
    Thank you very much for your assistance.
    Robert Robinson
    RERThird

    Hi Stefan,
    I had added the code to set Me.Dirty = False in response to the On Dirty event and didn't realize that it was working properly. I had tried several other approaches and must have become confused somewhere along the line.
    I retested the program. On Dirty is working and the problem is solved.
    Thank you very much for your assistance.
    Robert Robinson
    RERThird

  • Client/Server -- n-Tier

    Dear All,
    It may seem nonsense (especially to those with 1000+ posts) but many applications still use the Client/Server methodology, and our team is one of them.
    My question is, what are the steps involved in transferring a set of forms (6/6i) that are deployed through Client/Server to an n-tier methodology using a 9i Database or 10g?
    Thanks

    The same FMX file can actually work client/server and on the web, although I would suggest regenerating the FMX files, and there are some hand-coding things you need to do, but the upgrade to the web is not difficult.
    More details on otn.oracle.com/formsupgrade, and you can read testimonials from customers as well as technical papers.
    Regards
    Grant Ronald
    Forms Product Management

  • CLOBs and Dynamic SQL method 4

    We have just added CLOBs ( select_dp->T == 112 ) to our applications. We have an application-tier server that is a Pro*C program that reads and interprets some "extended" .sql files. This server uses dynamic SQL Method 4, with host arrays, etc. The docs say you cannot do a FETCH FOR <value> with CLOBs, but you can ALLOCATE and FREE the clobLocator with a FOR clause. What good is the ALLOCATE and FREE "FOR" if you can't FETCH with a FOR? Can you really select CLOBs along with typical VARCHARs?

    Can you tell me exactly where you are going to use Dynamic SQL Method 4, so that I can help you in using the method?
    Thiagu.

  • BCP and dynamic SQL

    Hello All,
    Been looking into this for a couple of days, and I keep hitting brick walls, so I'm hoping someone can offer me a bit of inspiration. What I'm trying to do is write a stored procedure that lets the user specify a list of tables, and an output directory, and the SP creates a series of BCP statements that export these tables to comma delimited files.
    This wouldn't be too hard, but I need to output the field headings in the first row of the table (and use quotes as text qualifiers). I'm doing this by looping round sys.columns, pulling out all the fieldnames, creating two select statements, and UNION ALL-ing them together. e.g.......
    select 'FIELD1','FIELD2','FIELD3','FIELD4'
    union all
    select field1,field2,field3,field4 from tablename
    It all works fine until you try it on a table with a lot of columns. Although you can build a big SQL statement in an NVARCHAR(MAX), BCP only appears to read the first 4000 characters of it, so it fails.
    To get round this, I've moved all of the code that builds the big SQL statement to its own stored procedure (i.e. you pass the tablename, and it returns the table with the field names in the first row). Then, I can just call this new SP in my BCP statement, with a couple of parameters. 
    The problem I'm getting is BCP is complaining, saying '[Microsoft][SQL Native Client]BCP host-files must contain at least one column'. I'm setting NOCOUNT on, and there are no print statements, so I'm assuming this is because the data is getting returned via an exec sp_executesql (although this is a guess). I can't think of a way round this though, as the SQL needs to be dynamic.
    alter PROCEDURE [dbo].[sp_QBMultiFileExportGetData]
    @tablename varchar(100),
    @dbname varchar(100)
    AS
    BEGIN
    declare @Execstring as nvarchar(MAX)
    declare @currentfieldname as varchar(100)
    declare @selectlist as varchar(8000)
    declare @fieldnamelist as varchar(8000)
    declare @colnames table
    (
    columnname varchar(100)
    )
    begin
    set nocount on
    set @execstring='select COLUMN_NAME '+
    'from ' + @dbname + '.INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = ''' + @tablename + ''''
    insert into @colnames(columnname)
    exec sp_executesql @execstring
    set @selectlist=''
    set @fieldnamelist=''
    --Loop through fieldnames, and build two strings
    --One for outputting fieldnames and one for selecting the actual data
    while exists(select * from @colnames)
    begin
    select top 1 @currentfieldname=columnname from @colnames
    set @selectlist=@selectlist + 'quotename(['+ @currentfieldname + '],char(34)),'
    set @fieldnamelist=@fieldnamelist + '''' + @currentfieldname + ''' [' +@currentfieldname + '],'
    delete from @colnames where columnname=@currentfieldname
    end
    --remove last quote
    set @selectlist=substring(@selectlist,1,len(@selectlist)-1)
    set @fieldnamelist=substring(@fieldnamelist,1,len(@fieldnamelist)-1)
    --Built string to execute, with fieldnames, and select fields
    set @execstring='select ' + @fieldnamelist  + ' union all select ' + @selectlist + ' from ' + @dbname + '..'  + @tablename
    exec sp_executesql @execstring  --returns the result set to the caller
    end
    END
    this returns exactly what I want, but when I try to use it in a BCP statement, I get the error....
    i.e.
    EXEC master..xp_cmdshell 'bcp "exec QCDev.dbo.sp_QBMultiFileExportGetData ''tablename'',''dbname''" queryout C:\\outputfile.txt -T -t","'
    Error = [Microsoft][SQL Native Client]BCP host-files must contain at least one column
    Anyone ever tried this before?

    Hi Guys,
    Thanks for the suggestions. I had been trying to avoid temp tables (don't really like them), but I think eventually they were the only way to go. Unfortunately, this opened a whole can of scoping worms, and after a couple of hours it's all given me a right headache. However, the good news is I've finally got it working as I wanted.
    I was finding I was having issues using temp tables, as the tables being used were dynamic, so I would have to create them in a dynamic SQL string, and they weren't propagating upwards from child to parent. I seemed to be getting the same problem using global temporary tables too, although I'm not sure why, as they should have worked. They seemed to be out of scope by the time the SP that was calling my sp_QBMultiFileExportGetData tried to output the data. This might possibly have been because BCP wasn't seeing the same scope, but I've not tested it fully (and it's very possible I was making a mistake).
    The solution was to abandon sp_QBMultiFileExportGetData and merge the code back into the calling script. Rather than trying to pass an enormous SQL string to bcp, I now run it separately with sp_executesql and dump the results into a global temp table, and then let bcp just call a 'select * from temptable', to avoid the select statement getting too long. It's not the most elegant solution, but it seems to work fine.
    ALTER PROCEDURE [dbo].[sp_QBMultiFileExport]
    -- Add the parameters for the stored procedure here
    @tablenames varchar(1000), --list of tables to be exported
    @outputpath varchar(1000), --output path ***AS SEEN BY THE SERVER, NOT THE CLIENT***
    @servername varchar(100), --Server where data resides
    @dbname varchar(100), --database name
    @delimiter varchar(1) --output delimiter
    AS
    BEGIN
    declare @Execstring as nvarchar(max)
    declare @currenttable as varchar(100)
    declare @colnames table
    (
    columnname varchar(100)
    )
    declare @currentfieldname as varchar(100)
    declare @selectlist as varchar(max)
    declare @fieldnamelist as varchar(max)
    --Get rid of CRLFs in the tablenames parameter
    set @tablenames=replace(@tablenames,char(10),'')
    set @tablenames=replace(@tablenames,char(13),'')
    --add extra comma to the end of the list (needed later for consistency)
    set @tablenames=@tablenames+','
    --Get first table in the list
    set @currenttable=substring(@tablenames,1,charindex(',',@tablenames)-1)
    while @tablenames<>''
    begin
    --Get a list of fieldnames from syscols
    insert into @colnames(columnname)
    select COLUMN_NAME
    from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = @currenttable
    set @selectlist=''
    set @fieldnamelist=''
    while exists(select * from @colnames)
    begin
    --get first column name
    select top 1 @currentfieldname=columnname from @colnames
    --add to select statement lists
    set @selectlist=@selectlist + 'quotename(['+ @currentfieldname + '],char(34)),'
    set @fieldnamelist=@fieldnamelist + '''' + @currentfieldname + ''' [' +@currentfieldname + '],'
    --remove column from temptable
    delete from @colnames where columnname=@currentfieldname
    end
    --remove last quote from field lists
    set @selectlist=substring(@selectlist,1,len(@selectlist)-1)
    set @fieldnamelist=substring(@fieldnamelist,1,len(@fieldnamelist)-1)
    --check for temp table, and drop if necessary
    IF object_id('tempdb..##MultiFileExportTempTable') IS NOT NULL
    BEGIN
    DROP TABLE ##MultiFileExportTempTable
    END
    --Build list of fieldnames, and select list, unioned together
    --and put the results in temptable
    set @execstring='select ' + @fieldnamelist  + ' into ##MultiFileExportTempTable union all select ' + @selectlist + ' from ' + @dbname + '..'  + @currenttable
    exec sp_executesql @execstring
    --get BCP to pull data back from ##temptable, and dump in file
    set @execstring='EXEC master..xp_cmdshell ''bcp "select * from ##MultiFileExportTempTable" queryout ' + @outputpath + '\' + @currenttable + '.txt' + ' -c -T -t"' + @delimiter + '"'''
    exec sp_executesql @execstring
    --drop tablename from list
    set @tablenames=replace(@tablenames,@currenttable + ',','')
    --if tablenames list is not empty, get the next one
    if @tablenames<>''
    set @currenttable=substring(@tablenames,1,charindex(',',@tablenames)-1)
    else
    set @currenttable=''
    end
    IF object_id('tempdb..##MultiFileExportTempTable') IS NOT NULL
    BEGIN
    DROP TABLE ##MultiFileExportTempTable
    END
    END
    So, you call this with...
    exec dbo.[sp_QBMultiFileExport] 'table1,table2,table3',filepath,servername,dbname,delimiter
    ...and it creates delimited files called table1.txt, table2.txt and table3.txt in the specified folder, with field headings and text qualifiers.
    Many thanks for all your suggestions

  • Getting error while using DYNAMIC SQL

    Hi Team,
    I am an Oracle DBA. I have limited knowledge of PL/SQL. I used the PL/SQL code below to drop 50 partitions from one of the tables.
    I used dynamic SQL (EXECUTE IMMEDIATE) to drop the partitions, but an error occurred. If I commented out the EXECUTE IMMEDIATE, the procedure executed successfully.
    Please tell me where I made the mistake. Also, please suggest better code than mine. Please find the code and error details below.
    SQL> ed
    Wrote file afiedt.buf
    1 DECLARE
    2 CURSOR DROP_PARTITON IS select partition_name from user_tab_subpartitions where PARTITION_NAME<='ABCD_2011_04';
    3 BEGIN
    4 for curr IN DROP_PARTITON LOOP
    5 DBMS_output.put_line(curr.partition_name);
    6 execute immediate(Alter table Table_Name drop partition curr.partition_name);
    7 end loop;
    8* END;
    SQL> /
    execute immediate(Alter table BILLCHRG drop partition curr.partition_name);
    ERROR at line 6:
    ORA-06550: line 6, column 19:
    PLS-00103: Encountered the symbol "ALTER" when expecting one of the following:
    ( - + case mod new not null others <an identifier>
    <a double-quoted delimited-identifier> <a bind variable> avg
    count current exists max min prior sql stddev sum variance
    execute forall merge time timestamp interval date
    <a string literal with character set specification>
    <a number> <a single-quoted SQL string> pipe
    <an alternatively-quoted string literal with character set specification>
    <an alternative
    SQL> ed
    Wrote file afiedt.buf
    1 DECLARE
    2 CURSOR DROP_PARTITON IS select partition_name from user_tab_subpartitions where PARTITION_NAME<='ABCD_2011_04';
    3 BEGIN
    4 for curr IN DROP_PARTITON LOOP
    5 DBMS_output.put_line(curr.partition_name);
    6 --execute immediate(Alter table TABLE_NAME drop partition curr.partition_name);
    7 end loop;
    8* END;
    SQL> /
    ABCD_2009_06
    ABCD_2009_06
    ABCD_2009_06
    BILLCHRG_2011_04
    PL/SQL procedure successfully completed.

    PL/SQL code runs on the server, inside an Oracle process - thus PL/SQL code cannot dynamically write and display messages to the client. That server process is not connected to any keyboard, mouse or display.
    DBMS_OUTPUT can be used. This is a PL/SQL buffer area in that server process that code can write lines of text to. When the server process informs the client that it has completed, the client can request the contents of the DBMS_OUTPUT buffer and display it on the client device.
    This is what set serveroutput on in SQL*Plus does - tell the sqlplus client to request the DBMS_OUTPUT buffer after each Oracle server call made and to display the contents locally.
    So displaying the SQL command can be done using DBMS_OUTPUT. E.g.
    declare
      dropPart varchar2(32767);
    begin
      for c in (select...) loop
        dropPart := 'alter table my_tab drop partition '||c.partition_name;
        --// write the SQL command to DBMS_OUTPUT
        DBMS_OUTPUT.put_line( dropPart );
        --// execute the SQL using a begin..end block in order to catch error
        begin
          execute immediate dropPart;
          DBMS_OUTPUT.put_line( 'command completed successfully' );
        exception when OTHERS then
          DBMS_OUTPUT.put_line( 'command failed with: '||SQLERRM(SQLCODE) );
        end;
      end loop;
    end;
    So after this code block has been executed and partitions dropped, sqlplus will display the DBMS_OUTPUT generated by this code block.

  • Developer 6.0 - Client/server mode

    Assuming I have an application server between the front and back ends, what needs to be running in the middle tier, if I am running strictly client/server ?

    It's, first, a problem of client power. On 3-tier, the client needs to do nothing except run the web browser; the PL/SQL code in Forms/Reports runs on the application server. So a 3-tier client might even be an old 486, while a pure client-server client would have to be a Pentium, preferably with 64 MB RAM, have enough disk space for the application and have the Forms & Reports runtime installed.
    Then, there is the matter of network communication: an application server could accommodate more clients than classic client-server.

  • Dynamic SQL and PL/SQL Gateway

    This question is kind of out of curiosity...I had created a procedure that used some dynamic sql (execute immediate), and was trying to use it on pl/sql gateway. I kept getting page not found errors until I removed the execute immediate statement, and reverted to using static sql statements.
    I am just curious, is dynamic sql not supported at all with pl/sql gateway?
    Thanks
    Kevin

    > Relax damorgan, no need to be condescending. Of course I read the docs ..
    Well, you're one of the few that actually read the docs... and one of many who failed to state any real technical details for forum members to understand the actual problem, the actual error, and the environment this is happening in.
    Remember that you came to this forum for forum members to help you. In order for us to do that, you need to help us understand
    - your problem
    - your environment
    - what you have tried
    What PL/SQL Gateway do you refer to? This is an old term for an old product - today in Oracle there are two "gateways" into the PL/SQL engine via HTTP: via Apache/mod_plsql and via the internal Java servlet web engine called EPG inside Oracle.
    As for the "Gateway" access to the PL/SQL engine via HTTP: whether it supports EXECUTE IMMEDIATE or not is like asking if a car "supports" soft drinks or not (just because a human that may consume soft drinks acts as the driver of the car). Not sensible or relevant at all.
    mod_plsql creates an Oracle session to the database instance, and executes a PL/SQL procedure in the database. This is no different from any other client connection to Oracle. Oracle has no clue that the client is mod_plsql and not TOAD or Java or VB or PHP or Perl or whatever else.
    So how can this support or not support the EXECUTE IMMEDIATE command? Does PL/SQL support EXECUTE IMMEDIATE? Well duh...
    Why do you get a generic 404? Because the PL/SQL call made by mod_plsql failed with an unhandled exception. mod_plsql gets that exception and now what? Was a valid HTP buffer created for it to stream to the web browser? Was the buffer perhaps partially completed? All that mod_plsql knows is that it asked for an HTP buffer via that PL/SQL call and got an exception in return.
    A 404 HTTP error is the only reasonable and logical response for it to pass to the web browser in this case.
    PS. To see why mod_plsql fails, refer to the access_log and error_log of that Apache httpd server.

  • ODBC, bulk inserts and dynamic SQL

    I am writing an application, running on Windows NT 4 and using the Oracle ODBC driver (8.01.05.00), that inserts many rows at a time (10000+) into an Oracle 8i database.
    At present, I am using a stored procedure to insert each row into the database. The stored procedure uses dynamic SQL because I can only determine the table and field names at run time.
    Due to the large number of records, it tends to take a while to perform all the inserts. I have tried a number of solutions such as using batches of SQL statements (e.g. "INSERT...;INSERT...;INSERT..."), but the oracle ODBC driver only seems act on the first statement in the batch.
    I have also considered using the FORALL statement and the SQL*Loader utility.
    My problem with FORALL is that I'm not sure it works on dynamic SQL statements, and even if it did, how do I pass an array of statements to the stored procedure?
    I ruled out SQL*Loader because I could not find a way to invoke it from an ODBC statement. Secondly, it requires the spawning of a new process.
    What I am really after is something similar to the SQL Server (forgive me!) BULK INSERT statement, where you can simply create an input file with all the records you want to insert, and pass it along in an ODBC statement such as "BULK INSERT <filename>".
    Any ideas??

    Hi,
    I faced this same situation years ago (Oracle 7.2!) and had the following alternatives.
    1) Use a 3rd party tool such as Sagent or CA Info pump (very pricey $$$)
    2) Use VisualC++ and OCI to hook into the array insert routines (there are examples of these in the Oracle Home).
    3) Use SQL*Loader (the best performance, but no real control of what's happening).
    I ended up using (2) and used the Rogue Wave dbtools.h++ library to speed up the development.
    These days, I would also suggest you take a look at Perl on NT (www.activestate.com) and the DBlib modules at www.perl.org. I believe they will also do bulk loading.
    Your problem is that your program is using Oracle ODBC, when you should be using Oracle OCI for best performance.
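
    This thread is about ODBC and OCI from a C/C++ client, but for comparison: if the inserts were driven from a Java mid-tier instead (as in the main thread above), the usual bulk technique is JDBC statement batching, which sends many bound rows per round trip. A minimal sketch, assuming a hypothetical STAGING table with two columns and placeholder connection details:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.List;

    public class BulkInsert {
        // Hypothetical target table and columns -- adjust to the real schema.
        private static final String SQL = "INSERT INTO STAGING (ID, NAME) VALUES (?, ?)";

        public static void insertAll(List<String[]> rows) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "password");
                 PreparedStatement ps = conn.prepareStatement(SQL)) {
                conn.setAutoCommit(false);      // commit once at the end
                int count = 0;
                for (String[] row : rows) {
                    ps.setString(1, row[0]);
                    ps.setString(2, row[1]);
                    ps.addBatch();              // queue the bound row
                    if (++count % 1000 == 0) {
                        ps.executeBatch();      // send a batch of 1000 rows
                    }
                }
                ps.executeBatch();              // flush the remainder
                conn.commit();
            }
        }
    }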

  • Ref cursors and dynamic sql..

    I want to be able to use a function that will dynamically create a SQL statement, then open a cursor based on that SQL statement and return a ref to that cursor. To achieve that, I am trying to build the SQL statement in a varchar2 variable and use that variable to open the ref cursor, as in
    open l_stmt for refcurType;
    where refcurType is a strong ref cursor. I am unable to do so because I get an error indicating that I can not use a strong ref cursor type. But if I can not use a strong ref cursor, I will not be able to use it to build the report based on the ref cursor, because Reports 9i requires strong ref cursors to be used. Does that mean I can not use dynamic SQL with Reports 9i ref cursors? Else, how can I do that? Any documentation available?

    Philipp,
    Thank you for your reply. My requirement is that sometimes I need to construct a whole query based on some input, and sometimes not. But the output record set would be the same and the layout would be more or less the same. I thought a ref cursor would be ideal. Of course, I could do this without dynamic SQL by writing the SQL multiple times if needed. But I think dynamic SQL is a proper candidate for this case. Your suggestion to use a lexical variable is indeed a good alternative. In effect, if needed, I could generate an entire SQL statement, place it in a placeholder (like &stmt) and use it as a static SQL query in my data model. In that case, why would one ever need a ref cursor in Reports? Is one more efficient than the other? My guess is that in the lexical variable case, part of the processing (like parsing) is done on the app server, while with a function-based ref cursor the entire process takes place in the DB server and there is probably a better chance of re-use(?)
    Thanks,
    Murali.
