CURSOR method
Hi,
I'm using the cursor method in my SQL query. I open a cursor on each of two tables and populate the data from both into a third internal table.
The data from that internal table is then inserted into a Z-table.
During this whole process, the first cursor stays open while the second cursor is opened and closed repeatedly until all records matching the first cursor have been fetched; the data is then inserted into the Z-table, and finally the first cursor is closed.
open cursor dbcur1
  open cursor dbcur2
  close cursor dbcur2
  insert into ztable
close cursor dbcur1
My problem is that a huge amount of time is consumed while inserting the rows into the database table. I am trying to do batch processing here, so for each batch my Z-table gets inserted with new records.
So, what is the best way to reduce the processing time for this activity?
thanks
rohith
Hi,
The best method I can think of is to update the data outside the SQL loop. Collect the data into an internal table; once all the data is collected, close all the cursors, then INSERT the data into the Z-table in a single shot.
The logic behind this is that it reduces the number of database hits. As we know, database operations work on queuing principles, so instead of waiting in the queue for each individual INSERT statement, you insert all the data at once and COMMIT.
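The effect is easy to demonstrate outside ABAP. A minimal Python/sqlite3 sketch (the table and column names are invented for illustration) contrasting row-by-row inserts with a single batched insert and one final COMMIT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ztable (id INTEGER, val TEXT)")

rows = [(i, f"rec{i}") for i in range(1000)]

# Row-by-row: one database call per record (analogous to INSERT inside the cursor loop).
for r in rows[:500]:
    conn.execute("INSERT INTO ztable VALUES (?, ?)", r)

# Batched: collect everything first, then insert in a single shot and COMMIT once.
conn.executemany("INSERT INTO ztable VALUES (?, ?)", rows[500:])
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM ztable").fetchone()[0]
print(count)  # 1000
```

With a client/server database the batched form saves one network round trip per row, which is where most of the insert time usually goes.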
Hope you got the logic.
Thanks,
Vinod.
Similar Messages
-
Need help getting DataProvider cursor methods to work in JSC
Hi, I'm a newbie to Creator as well as JSF. I'm using JSC Update 1. I've
worked through a couple of the beginning tutorials including "Performing
Inserts, Updates, and Deletes" without a problem. I'm now trying to
craft a simple jsf database form application on my own using the sample
travel database and I'm stuck!
I put text fields on the form corresponding to each of the 4 fields in
the PERSON table and bound them to the fields in the table which, on
examination, resulted in a CachedRowSetDataProvider (personDataProvider)
in Page1 and a CachedRowSetXImpl (personRowSet) in SessionBean1. I then put 4 buttons on the form (First, Previous, Next, Last) for record
navigation. Here is the code I put in the first two actions (the others are
the same except for the cursor methods):
public String btn_firstrec_action() {
    try {
        personDataProvider.cursorFirst();
    } catch (Exception e) {
        error("cursorFirst failed: " + e.getMessage());
        log("cursorFirst failed", e);
    }
    return null;
}
public String btn_next_action() {
    try {
        personDataProvider.cursorNext();
    } catch (Exception e) {
        error("cursorNext failed: " + e.getMessage());
        log("cursorNext failed", e);
    }
    return null;
}
etc.
When I run the application using the bundled app server I get strange
behavior when I click the navigation buttons. There are 6 records in the
table. The application opens up displaying the data for the first
record. Clicking "Next" takes me to record 2--so far so good. However,
with repeated clicks on "Next", "Previous", "First" the data displayed in
the form remains the same. If I click "Last" (personDataProvider.cursorLast(); ), the data from record 6 is rendered OK!
I worked a little in the debugger. I added a cursorChanging method and
put a break point in there. Then I watched the index attributes for rk1
an rk2 as I continued the execution. I had to hit Alt+F5 (Continue)
2 to 4 times on each button click depending on the action--4X
with "Next" and "Previous". For each button click the index values would
change at first to what logically seemed the correct values but
then snap back to 0 or 1 as I kept continuing depending on the sequence--
weird to me.
I also tried configuring all the form fields in a virtual form with
"Participate" on for the text fields and "Submit" on for the buttons
with the same result (I was really "shooting in the dark" here!).
I'm obviously missing something here--this shouldn't be this difficult!
I have scanned through a lot of the excellent tutorials, articles and forum posts as well as some of the sample apps but haven't as yet found a similar example of using the DataProvider cursor methods which seems pretty basic to me but, I could have easily missed seeing what I needed.
Any help would be greatly appreciated--I'm really counting on this tool
to get an overdue project going so I'm somewhat desperate.
Thanks, Len Sisson
This happened to me as well the first time I used JSC (and I was a newbie to web development). I believe it is because every time you hit the button, the page lifecycle runs again: init(), button_action(), prerender(), destroy(). So try to remember the last cursor position in a session variable, and then put code in init() to move back to that last position.
Check the sample program provided with JSC (SinglePageCrudForm). This sample demonstrates how to put code in init() and destroy() in order to 'remember' the last position.
hope that helps
Message was edited by:
discusfish -
Dear Experts,
I am using the parallel cursor method for a nested loop, and with this method the report got very fast.
But the data from the loop where I used the parallel cursor method stops coming after 7,000 records.
When I run the report from 1st Jan to 30th Jan, the total is 48,000 records, but data from the parallel cursor method's loop only comes through 7th Jan (7,000 records); after that all values come out zero.
When I run it from 7th Jan to 30th Jan, data from that loop comes through 15th Jan (7,000 records); after that the values come out zero.
Below is the code I used for the parallel cursor method loop:
* parallel cursor method
DATA: v_index TYPE sy-tabix.

READ TABLE i_konv INTO wa_konv WITH KEY knumv = wa_vbrk-knumv
                                        kposn = wa_vbrp-posnr BINARY SEARCH.
IF sy-subrc = 0.
  v_index = sy-tabix.
  LOOP AT i_konv INTO wa_konv FROM v_index.
    IF wa_konv-knumv = wa_vbrk-knumv.
      IF wa_konv-kposn <> wa_vbrp-posnr.
        EXIT.
      ENDIF.
    ELSE.
      EXIT.
    ENDIF.
*   ... process the matching wa_konv line here ...
  ENDLOOP.
ENDIF.
Thanks and Regards,
Vikas Patel
Hi Vikas,
First check that the records are completely available in your internal table.
And here is a very simple example of a parallel cursor:
REPORT zparallel_cursor.

TABLES:
  likp,
  lips.

DATA:
  t_likp TYPE TABLE OF likp,
  t_lips TYPE TABLE OF lips.

DATA:
  w_runtime1 TYPE i,
  w_runtime2 TYPE i,
  w_index    LIKE sy-index VALUE 1.

START-OF-SELECTION.
  SELECT * FROM likp INTO TABLE t_likp.
  SELECT * FROM lips INTO TABLE t_lips.

  GET RUN TIME FIELD w_runtime1.

  SORT t_likp BY vbeln.
  SORT t_lips BY vbeln.

  LOOP AT t_likp INTO likp.
    LOOP AT t_lips INTO lips FROM w_index.
      IF likp-vbeln NE lips-vbeln.
        w_index = sy-tabix.
        EXIT.
      ENDIF.
*     ... process the matching likp/lips pair here ...
    ENDLOOP.
  ENDLOOP.

  GET RUN TIME FIELD w_runtime2.
Compare the difference between w_runtime1 and w_runtime2.
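The same parallel-cursor idea can be sketched in Python with hypothetical header/item lists (the field names vbeln/posnr mirror the ABAP example; the dict layout is invented): both lists are sorted by the join key, and the inner scan resumes where it last stopped instead of restarting, so the item table is traversed only once.

```python
# Header and item rows, both sorted by the join key (like t_likp / t_lips by vbeln).
headers = [{"vbeln": k} for k in ("A", "B", "C")]
items = [{"vbeln": "A", "posnr": 1}, {"vbeln": "A", "posnr": 2},
         {"vbeln": "B", "posnr": 1}, {"vbeln": "C", "posnr": 1}]

pairs = []
idx = 0  # plays the role of w_index: the inner scan never rewinds
for h in headers:
    while idx < len(items) and items[idx]["vbeln"] == h["vbeln"]:
        pairs.append((h["vbeln"], items[idx]["posnr"]))
        idx += 1

print(len(pairs))  # 4
```

A naive nested loop is O(n*m); this merge-style pass is O(n + m) once both sides are sorted, which is where the runtime difference comes from.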
Thanks & regards,
Dileep .C -
Listed below is my code. I get the following errors when trying to execute
Error(35,29): PLS-00103: Encountered the symbol "@" when expecting one of the following: mod <an identifier> <a double-quoted delimited-identifier> <a bind variable> current sql execute forall merge pipe
on this block [BEGIN
OPEN source_table;
FETCH source_table INTO @FIN;]
Please any ideas will be helpful.
create or replace
PROCEDURE CHP_ENCOUNTER_LOAD
AS
** Cursor method to cycle through the DWFILENCOUNTER table and get DWFILENCOUNTER Info for each iRowId.
** Revision History:
** Date Name Description Project
** 07/20/10
BEGIN
DECLARE CURSOR source_table IS
SELECT DISTINCT
SI.FIN
FROM
CHP_SURG_INFECTION SI;
-- Declare all Variables ***********
--DECLARE
dblPatient_ID int;
dblOrganization int;
-- Get rows from source table (CHP_SURG_INFECTION) into cursor
--SET NOCOUNT ON
BEGIN
OPEN source_table;
FETCH source_table INTO @FIN
--...bunch more fields here...
--This is where you perform your detailed row-by-row processing.
WHILE @@Fetch_Status = 0
****** Avoid Duplicates Section
-- Select to see if Encounter already exists (FIN exists on the DWFILENCOUNTER table)
SELECT E.FIN
FROM DWFILENCOUNTER E
WHERE E.FIN = @FIN
-- Loop onto next row if already found
IF SQL%NOTFOUND THEN
goto end_loop;
END IF;
******* Lookup Section ************
-- PATIENT - Select Patient from Patient Table (MRN exists on the DWFILPATIENT Table)
SELECT P.Patient_ID into dblPatient_ID
FROM DWFILPATIENT P
WHERE P.MRN = @MRN;
-- Log out Error if Patient Not found and Loop to next Record (don't insert)
IF SQL%NOTFOUND THEN
goto end_loop;
END IF;
-- ORGANIZATION - Select Organization from the DWOrganization table based on name
SELECT O.Organization_Id into dblOrganization_ID
FROM DWDIMORGANIZATION O
WHERE O.ORG_NAME = @FACILITY;
-- Log out Error if Organization Not found and Loop to next Record (don't insert)
IF SQL%NOTFOUND THEN
goto end_loop;
END IF;
**** Insert Rows *********************************
-- Inserting columns in the DWFILENCOUNTER table
INSERT INTO DWFILENCOUNTER (ENCOUNTER_TYPE, FIN, PATIENT_ID, ORGANIZATION_ID )
VALUES (@ENCOUNTER_TYPE, @FIN, dblPatient_ID, dblOrganization_ID)
-- Commit
-- Get the next row.
FETCH
<<end_loop>>
END
LOOP
CLOSE source_table
DEALLOCATE source_table
RETURN
END CHP_ENCOUNTER_LOAD;
Sorry for not posting the right code. Here is the code with all variables included.
create or replace
PROCEDURE CHP_ENCOUNTER_LOAD
AS
** Cursor method to cycle through the DWFILENCOUNTER table and get DWFILENCOUNTER Info for each iRowId.
** Revision History:
** Date Name Description Project
** 07/20/10
--SET NOCOUNT ON
-- Declare all Variables ***********
DECLARE
@dblPatient_ID int;
@dblOrganization int;
@FIN VARCHAR2(50 BYTE);
@MRN VARCHAR2(25 BYTE);
@FACILITY VARCHAR2(50 BYTE);
@ENCOUNTER_TYPE VARCHAR2(50 BYTE);
CURSOR source_table FOR
BEGIN
-- Get rows from source table (CHP_SURG_INFECTION) into cursor
SELECT DISTINCT
SI.FIN
SI.ENCNTR_TYPE
FROM
CHP_SURG_INFECTION SI;
OPEN source_table
FETCH source_table INTO
@FIN,
@MRN,
@FACILITY,
@ENCOUNTER_TYPE
--...bunch more fields here...
--This is where you perform your detailed row-by-row processing.
WHILE @@Fetch_Status = 0
BEGIN
--****** Avoid Duplicates Section
-- Select to see if Encounter already exists (FIN exists on the DWFILENCOUNTER table)
SELECT E.FIN
FROM DWFILENCOUNTER E
WHERE E.FIN = @FIN
-- Loop onto next row if already found
IF SQL%NOTFOUND THEN
goto end_loop;
END IF;
--******* Lookup Section ************
-- PATIENT - Select Patient from Patient Table (MRN exists on the DWFILPATIENT Table)
SELECT P.Patient_ID into dblPatient_ID
FROM DWFILPATIENT P
WHERE P.MRN = @MRN;
-- Log out Error if Patient Not found and Loop to next Record (don't insert)
IF SQL%NOTFOUND THEN
goto end_loop;
END IF;
-- ORGANIZATION - Select Organization from the DWOrganization table based on name
SELECT O.Organization_Id into dblOrganization_ID
FROM DWDIMORGANIZATION O
WHERE O.Name = @FACILITY;
-- Log out Error if Organization Not found and Loop to next Record (don't insert)
IF SQL%NOTFOUND THEN
goto end_loop;
END IF;
--**** Insert Rows *********************************
-- Inserting columns in the DWFILENCOUNTER table
INSERT INTO DWFILENCOUNTER (ENCOUNTER_TYPE, FIN, PATIENT_ID, ORGANIZATION_ID )
VALUES (@ENCOUNTER_TYPE, @FIN, @dblPatient_ID, @dblOrganization_ID)
-- Commit
-- Get the next row.
FETCH
<<end_loop>>
END
LOOP
CLOSE source_table
DEALLOCATE source_table
RETURN
END CHP_ENCOUNTER_LOAD; -
Cursor Fetch into Record vs Record Components
Anyone,
What are the pros and cons of the cursor method that fetches into a single record variable versus fetching into individual record components?
I am just not seeing why you might use one over the other, so I was curious whether there is something I am missing with regard to performance, temp space required, or anything like that.
Thanks for any help,
Miller
You should use record components when possible. It is easier to code and read.
Cursor loops are great:
for c_rec in (select a, b, c from t1, t2 where t1.a = t2.a) loop
dbms_output.put_line(c_rec.b || c_rec.a);
end loop;
In the above case you don't even have to declare c_rec!
Tom Best -
Hello,
Can some one give me a description in detail as to what exactly Parallel cursor method is and how it works.
Also, what are the Performance Tuning Techniques that we can follow during ABAP Development?
Thanks and Regards,
Venkat
Actually, I would not recommend the parallel cursor technique! First, the name is incorrect: internal tables have no cursors, only indexes, so only "parallel index" would make sense.
Performance improvement:
If the cost of the sort is taken into account, then the parallel index is not faster than a loop with a nested loop on a sorted table.
or, with a standard table:
loop
  read ... binary search
  index = sy-tabix
  loop ... from index
    if ( condition is not fulfilled )
      exit
    endif.
  endloop
endloop
The full parallel index technique should find all deviations between two tables: additional lines in tab1, additional lines in tab2, and changed lines.
Feel free to write a complete solution and we can compare results; it is not worth the effort!
Siegfried -
Hi there.
I am new to BDB and am using version 4.8.26.
I am experiencing an issue with the Db::cursor() method. It blows up with a DbException whose 'what' informs me: "Error: Unable to create variable object".
Can someone please review the code below and tell me what I am doing wrong?
Thanks a lot in advance!!
CTOR
BDBManager(const std::string& env, u_int32_t pagesize, u_int32_t lg_buf_size)
    : m_dbEnv(0) {
    u_int32_t envflags = DB_CREATE
                       | DB_RECOVER
                       | DB_INIT_TXN
                       | DB_INIT_MPOOL
                       | DB_INIT_LOG
                       | DB_INIT_LOCK;
    u_int32_t dbflags = DB_CREATE;
    /* because we are using transactions we need to make sure the
     * log file can hold at least a couple of */
    m_dbEnv.set_lg_bsize(lg_buf_size);
    m_dbEnv.open(env.c_str(), envflags, 0);
    // don't perform I/O to write to the logging subsystem
    m_dbEnv.set_flags(DB_TXN_NOSYNC, 1);
    m_rawMsgs = new Db(&m_dbEnv, 0);
    m_rawMsgs->set_bt_compare(&sortkey);
    m_rawMsgs->open(NULL, "raw.db", NULL, DB_BTREE, dbflags, 0);
    m_state = new Db(&m_dbEnv, 0);
    m_state->set_bt_compare(&sortkey);
    m_state->set_pagesize(pagesize);
    m_state->set_flags(DB_DUPSORT);
    m_state->open(NULL, "state.db", NULL, DB_BTREE, dbflags, 0);
}
My Dodgy Method
void openTxn() {
    m_currentTxn = NULL;
    m_putCurs = NULL;
    m_dbEnv.txn_begin(NULL, &m_currentTxn, 0);
    if (m_currentTxn) {
        try {
            m_state->cursor(m_currentTxn, &m_putCurs, 0); // cursor only needed for state
        } catch (DbException& dbE) {
            m_state->err(dbE.get_errno(), dbE.what());
        } catch (std::exception& e) {
            m_state->errx(e.what());
        }
    }
}
private:
static int sortkey(Db* db, const Dbt* a, const Dbt* z) {
    u_int64_t aa, zz;
    memcpy(&aa, a->get_data(), sizeof(u_int64_t));
    memcpy(&zz, z->get_data(), sizeof(u_int64_t));
    return (aa - zz);
}
DbEnv m_dbEnv;
Db* m_rawMsgs;
Db* m_state;
DbTxn* m_currentTxn;
Dbc* m_putCurs;
Hello,
"Error:Unable to create variable object" is not a Berkeley DB error.
Is there any issue with space where the program is being run? If you are compiling with optimization, you could try compiling without it.
Are any other errors raised? You can also set verbose messaging
which could provide more information on what the problem is.
You can set verbose error messages with either:
http://download.oracle.com/docs/cd/E17076_02/html/api_reference/CXX/envset_errfile.html
http://download.oracle.com/docs/cd/E17076_02/html/api_reference/CXX/dbset_errfile.html
Thanks,
Sandra -
What is parallel cursor technique.
what is parallel cursor technique. Please give an example
Thanks in advance.
Suppose you have data in two internal tables, itab1 and itab2, from a header table and an item table, and you have to combine the values into one table.
Normally what we do is write a loop over the item table (itab2) inside another loop over the header table (itab1), but that hurts performance.
So go for the parallel cursor method.
Nikhil -
OPEN CURSOR approach- Performance
Hi All,
We have a requirement wherein we are using OPEN CURSOR and FETCH CURSOR in a ABAP Program (Function Module).
Any sample code related to CURSORs would be helpful.
Is the OPEN CURSOR method better in terms of performance compared to simple SELECT statements?
Any help regarding this will be highly appreciated.
Thanks in advance.
Hi Shilpa,
Yes, from a performance perspective, if you are reading a large chunk of data then OPEN CURSOR gives you better performance. You may keep your mouse cursor on the word CURSOR and press F1 to read the help.
Sample Below:
DATA: BEGIN OF count_line,
carrid TYPE spfli-carrid,
count TYPE i,
END OF count_line,
spfli_tab TYPE TABLE OF spfli.
DATA: dbcur1 TYPE cursor,
dbcur2 TYPE cursor.
OPEN CURSOR dbcur1 FOR
SELECT carrid count(*) AS count
FROM spfli
GROUP BY carrid
ORDER BY carrid.
OPEN CURSOR dbcur2 FOR
SELECT *
FROM spfli
ORDER BY carrid.
DO.
FETCH NEXT CURSOR dbcur1 INTO count_line.
IF sy-subrc <> 0.
EXIT.
ENDIF.
FETCH NEXT CURSOR dbcur2
INTO TABLE spfli_tab PACKAGE SIZE count_line-count.
ENDDO.
CLOSE CURSOR: dbcur1,
dbcur2.
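A rough analogue of the PACKAGE SIZE fetch in Python with sqlite3 (the demo table name is invented): the cursor stays open across fetchmany calls, so each batch continues where the previous one stopped, the way FETCH NEXT CURSOR ... PACKAGE SIZE does in the ABAP sample above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spfli_demo (carrid TEXT)")
conn.executemany("INSERT INTO spfli_demo VALUES (?)",
                 [("AA",)] * 3 + [("LH",)] * 2)

cur = conn.execute("SELECT carrid FROM spfli_demo ORDER BY carrid")
batches = []
while True:
    chunk = cur.fetchmany(2)  # like FETCH ... PACKAGE SIZE 2
    if not chunk:
        break  # cursor exhausted, like sy-subrc <> 0 after FETCH
    batches.append([row[0] for row in chunk])

print(batches)  # [['AA', 'AA'], ['AA', 'LH'], ['LH']]
```

This is why the cursor approach suits very large result sets: the program holds only one package in memory at a time instead of the whole table.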
best regards,
Kazmi -
New major release, JE 5.0, is available
Hi all,
We're pleased to announce a new major release, JE 5.0.34. The release contains a significant number of new features and performance enhancements. Complete information can be found in the JE documentation, specifically in the change log. The release is available via the download site or Maven.
Below is a copy of the major enhancements listed in the release notes. See the change log for a complete list.
<li> The new DiskOrderedCursor class can be used to iterate over all records in a database, for increased performance when transactional guarantees are not required. [#15260]
<li> A new Environment.preload method can be used to preload multiple databases at a time, for increased performance compared to preloading each database individually. [#18153]
<li> The JE environment can now be spread across multiple subdirectories to take advantage of multiple disks or file systems. [#19125]
<li> The new AppStateMonitor class lets the HA application add more application specific information to the notion of node state in a replication group. [#18046]
<li> New options have been added for changing the host of a JE replication node, and for moving a JE replication group. See the Utilities section of the change log.
<li> Replicated nodes can now be opened in UNKNOWN state, to support read only operations in a replicated system when a master is not available. [#19338]
<li> New Cursor methods were added to allow quickly skipping over a specified number of key/value pairs. [#19165]
<li> A per-Environment ClassLoader may now be configured and will be used by JE for loading all user-supplied classes. [#18368]
<li> The java.io.Closeable interface is now implemented by all JE classes and interfaces with a public void close() method. This allows using these objects with the Java 1.7 try-with-resources statement. [#20559]
<li> The Environment.flushLog method has been added. It can be used to make durable, by writing to the log, all preceding non-transactional write operations, without performing a checkpoint. [#19111]
<li> Performance of record update and deletion operations has been significantly improved when the record is not in the JE cache and the application does not need to read the record prior to performing the update or deletion. [#18633]
<li> An internal format change was made for databases with duplicate keys that improves operation performance, reduces memory and disk overhead, and increases concurrency. [#19165]
<li> An improvement has been made that requires significantly less writing per checkpoint, less writing during eviction, and less metadata overhead in the JE on-disk log files. [#19671]
<li> Improved DbCacheSize utility to take into account memory management enhancements and improve accuracy. Added support for key prefixing, databases configured for sorted duplicates, and replicated environments. [#20145]
<li> EnvironmentConfig.TREE_COMPACT_MAX_KEY_LENGTH was added for user configuration of the in-memory compaction of keys in the Btree. [#20120]
-- The JE Team
The format change is for BDB Java Edition 5.0 only, and does not apply to other BDB products. It's forward compatible, which means that a BDB JE application using JE 5.0 can use environments written with older versions of JE. However, it's not backward compatible; once your environment has been opened by JE 5.0, it's converted to the new format and can no longer be read by older versions of JE.
Just for the benefit of others who may not have read the changelog, here's the full text:
JE 5.0.34 has moved to on-disk file format 8.
The change is forward compatible in that JE files created with release 4.1 and earlier can be read when opened with JE 5.0.34. The change is not backward compatible in that files created with JE 5.0 cannot be read by earlier releases. Note that if an existing environment is opened read/write, a new log file is written by JE 5.0 and the environment can no longer be read by earlier releases.
There are two important notes about the file format change.
The file format change enabled significant improvements in the operation performance, memory and disk footprint, and concurrency of databases with duplicate keys. Environments which contain databases with duplicate keys must run an upgrade utility before opening an environment with this release. See the Performance section for more information.
An application which uses JE replication may not upgrade directly from JE 4.0 to JE 5.0. Instead, the upgrade must be done from JE 4.0 to JE 4.1 and then to JE 5.0. Applications already at JE 4.1 are not affected. Upgrade guidance can be found in the new chapter, "Upgrading a JE Replication Group", in the "Getting Started with BDB JE High Availability" guide.
I may not have fully understood your question, so let us know if that doesn't answer it.
Edited by: Linda Lee on Dec 1, 2011 10:41 AM -
How to use variable table names in a SELECT statement
Dear all,
I have three tables: gp1, gp2, gp3. I want to use a variable table name in my SQL query.
For example, in Oracle Forms I have a list item showing the table names gp1, gp2, gp3.
On the form I want to use this query:
select gpno from :table where gpno = 120;
How can I specify the table name dynamically in the SELECT query?
Thanks
Forms_DDL is a one-way street: you can only pass DDL commands TO the database; you cannot get data back using Forms_DDL.
Exec_SQL is the Forms package that enables dynamic SQL within a form. But to retrieve data, you have to make an Exec_SQL call for every column in every row, so it is not a good thing to use either.
The ref cursor method should work. You could also retrieve the data into a record group using populate_group_with_query -- it also enables dynamic data retrieval.
But if you already know you have three distinct tables and you know their names, I would keep it simple and just write three sql select statements. -
In my original email I should have made it clear that an indexed column was required; that led to some confusion, apologies.
Under Oracle 7, even if the column is indexed, the query engine still does a full scan of the index to find the maximum or minimum value. As strange as this seems, you can verify it using the Oracle trace functions such as tkprof. This method is quicker than not having an index, but the cursor method is far more efficient.
When using a cursor based approach Oracle will go straight to the first
record of the index (depending on MAX or MIN) and retrieve the data. By
exiting at that point the function has been performed and the I/O operations
are extremely low compared to a full index scan.
Of course there is a trade off depending on the amount of rows but for large
indexed tables the cursor approach will be far faster than the normal
functions. I'm not sure how other RDBMS's handle MAX/MIN but this has been
my experience with Oracle. This process may be faster still by using PL/SQL
but then you are incorporating specific database languages which is
obviously a problem if you port to a different RDBMS. Here is some code you
can try for Oracle PL/SQL functions:
declare
cursor myCur1 is
select number_field
from number_table
order by number_field desc;
begin
open myCur1;
fetch myCur1 into :max_val;
close myCur1;
end;
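The same idea in a quick Python/sqlite3 sketch (the table and column names follow the PL/SQL example above; sqlite's planner differs from Oracle 7's, so this only illustrates that the two forms return the same answer, not the I/O difference):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE number_table (number_field INTEGER)")
conn.execute("CREATE INDEX idx_nf ON number_table (number_field)")
conn.executemany("INSERT INTO number_table VALUES (?)", [(v,) for v in (5, 42, 17)])

# Aggregate form: MAX() over the column.
max_agg = conn.execute("SELECT MAX(number_field) FROM number_table").fetchone()[0]

# Cursor form: open ordered descending, fetch only the first row, then stop,
# like the PL/SQL cursor above that fetches once and closes.
max_cur = conn.execute(
    "SELECT number_field FROM number_table ORDER BY number_field DESC"
).fetchone()[0]

print(max_agg, max_cur)  # 42 42
```

On a database that can walk the index in reverse order, the cursor form touches one index entry; whether the aggregate form does the same depends entirely on the optimizer, which is the whole point of this thread.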
I hope this clarifies things a bit more. If in doubt of the execution plan
of a performance critical query use the database trace functions as they
show up all sorts of surprises. MAX and MIN are easy to understand when
viewing code but perform poorly under Oracle, whether v8 behaves differently
I have yet to discover.
Cheers,
Dylan.
-----Original Message-----
From: [email protected] [mailto:[email protected]]
Sent: Thursday, 7 January 1999 3:37
To: [email protected]
Subject: RE: SQL functions in TOOL code
I guess my point is that MAX can always be implemented more
efficiently than the SORT/ORDER-BY approach (but may not be the
case, depending on the RDBMS). If an ORDER-BY
can use an index (which means that the indexing mechanism involves
a sorted collection rather than an unordered hashtable) so can
MAX - in which case finding a MAX value could be implemented
in either O(1) or O(logn) time, depending on the implementation.
The last sentence being the major point of this whole discussion,
which is that your mileage may vary depending on the RDBMS - so
try using both approaches if performance is a problem.
In terms of maintenance, MAX is the much more intuitive approach
(In My Opinion, of course), since a programmer can tell right away
what the code is attempting to do.
Chad Stansbury
BORN Information Services, Inc.
-----Original Message-----
From: [email protected]
To: [email protected]; [email protected]; [email protected]
Sent: 1/6/99 10:45 AM
Subject: RE: SQL functions in TOOL code
Well, yes, but in that specific case (looking for a max() value), wouldn't it be true that, if you have an index (and only then) on that specific column, some databases (like Oracle) will be smart enough to use the index and find the max value without a full table scan and without using an ORDER BY clause?
Dariusz Rakowicz
Consultant
BORN Information Services (http://www.born.com)
8101 E. Prentice Ave, Suite 310
Englewood, CO 80111
303-846-8273
[email protected]
-----Original Message-----
From: Sycamore [SMTP:[email protected]]
Sent: Wednesday, January 06, 1999 10:29 AM
To: [email protected]; [email protected]
Subject: Re: SQL functions in TOOL code
If (and only if) an index exists on the exact columns in the ORDER BY clause, some databases are smart enough to traverse the index (in forward or reverse order) instead of doing a table scan followed by a sort.
If there is no appropriate index, you always end up with some kind of sort step.
Of course this is all highly schema- and database-dependent, so you must weigh those factors when deciding to exploit this behavior.
Kevin Klein
Sycamore Group, LLC
Milwaukee
-----Original Message-----
From: [email protected] <[email protected]>
To: [email protected] <[email protected]>
Date: Wednesday, January 06, 1999 9:40 AM
Subject: RE: SQL functions in TOOL code
This seems a bit counter-intuitive to me... primarily due to
the fact that both MAX and ORDER-BY functionality would require
a full table scan on the given column... no? However, I would
think that a MAX can be implemented more efficiently since it
just requires the max value in a given set (which can be performed
in O(n) time on an unordered set) versus an ORDER-BY (sort)
performance on an unordered set of at best O(nlogn) time.
Am I missing something? Please set me straight on this 'un.
Chad Stansbury
BORN Information Services, Inc.
-----Original Message-----
From: Jones, Dylan
To: 'Vuong, Van'
Cc: [email protected]
Sent: 1/5/99 4:42 PM
Subject: RE: SQL functions in TOOL code
Hi Van,
Using a function such as MAX or MIN is possible as given in your example, but it is worth pointing out the performance overhead of such a method.
When you use MAX, Oracle will do a full table scan of the column, so if you have a great many rows it is very inefficient.
In this case, use a cursor-based approach and, depending on your requirement (MAX/MIN), use a descending or ascending ORDER BY clause.
eg.
begin transaction
for ( aDate : SomeDateDomain ) in
sql select DATE_FIELD
from DATE_TABLE
order by
DATE_FIELD DESC
on session MySessionSO
do
found = TRUE;
aLatestDate.SetValue(aDate);
// Only bother about the first record
exit;
end for;
end transaction;
On very large tables the performance increase with the above method will be considerable, so it is worth considering which method to use when sizing your database and writing your code.
Cheers,
Dylan.
-----Original Message-----
From: Vuong, Van [mailto:[email protected]]
Sent: Tuesday, 5 January 1999 6:50
To: [email protected]
Subject: SQL functions in TOOL code
Is it possible to execute a SQL function from TOOL code?
For example:
SQL SELECT Max(Version) INTO :MyVersion
FROM Template_Design
WHERE Template_Name = :TemplateName
ON SESSION MySession;
The function in this example is MAX().
I am connected to an Oracle database.
Thanks,
Van Vuong
To unsubscribe, email '[email protected]' with
'unsubscribe forte-users' as the body of the message.
Searchable thread archive<URL:http://pinehurst.sageit.com/listarchive/>
-
How can we improve this coding part?
Hi there,
I came across some code to improve. Looking at the program, it is unusual; some say it is correct from the SAP point of view, but some don't.
Please verify whether this is the correct way of coding.
IF NOT skont IS INITIAL.
IF NOT aksaldo IS INITIAL.
IF NOT summen IS INITIAL.
LOOP AT organ.
CLEAR: f_bwkey, f_bklas, f_bwtty, f_bwtar, sum.
SELECT bwkey bklas bwtty bwtar SUM( salk3 ) FROM mbew
INTO (f_bwkey, f_bklas, f_bwtty, f_bwtar, sum)
WHERE bwkey EQ organ-bwkey
AND matnr IN matnr
AND bklas IN ibklas
AND bwtar IN bwtar
GROUP BY bwkey bklas bwtty bwtar.
CHECK NOT sum IS INITIAL.
MOVE f_bwkey TO xmbew-bwkey.
MOVE f_bklas TO xmbew-bklas.
MOVE f_bwtty TO xmbew-bwtty.
MOVE f_bwtar TO xmbew-bwtar.
MOVE sum TO xmbew-salk3.
COLLECT xmbew.
ENDSELECT.
CLEAR: f_bwkey, f_bklas, f_bwtty, f_bwtar, sum.
SELECT bwkey bklas bwtty bwtar SUM( salk3 ) FROM ebew
INTO (f_bwkey, f_bklas, f_bwtty, f_bwtar, sum)
WHERE bwkey EQ organ-bwkey
AND matnr IN matnr
AND bklas IN ibklas
AND bwtar IN bwtar
GROUP BY bwkey bklas bwtty bwtar.
CHECK NOT sum IS INITIAL.
MOVE f_bwkey TO xmbew-bwkey.
MOVE f_bklas TO xmbew-bklas.
MOVE f_bwtty TO xmbew-bwtty.
MOVE f_bwtar TO xmbew-bwtar.
MOVE sum TO xmbew-salk3.
COLLECT xmbew.
ENDSELECT.
CLEAR: f_bwkey, f_bklas, f_bwtty, f_bwtar, sum.
SELECT bwkey bklas bwtty bwtar SUM( salk3 ) FROM qbew
INTO (f_bwkey, f_bklas, f_bwtty, f_bwtar, sum)
WHERE bwkey EQ organ-bwkey
AND matnr IN matnr
AND bklas IN ibklas
AND bwtar IN bwtar
GROUP BY bwkey bklas bwtty bwtar.
CHECK NOT sum IS INITIAL.
MOVE f_bwkey TO xmbew-bwkey.
MOVE f_bklas TO xmbew-bklas.
MOVE f_bwtty TO xmbew-bwtty.
MOVE f_bwtar TO xmbew-bwtar.
MOVE sum TO xmbew-salk3.
COLLECT xmbew.
ENDSELECT.
* consider valuated subcontractor stocks from OBEW "n497391
CLEAR: f_bwkey, f_bklas, f_bwtty, f_bwtar, sum. "n497391
SELECT bwkey bklas bwtty bwtar SUM( salk3 ) "n497391
FROM obew "n497391
INTO (f_bwkey, f_bklas, f_bwtty, f_bwtar, sum) "n497391
WHERE bwkey EQ organ-bwkey "n497391
AND matnr IN matnr "n497391
AND bklas IN ibklas "n497391
AND bwtar IN bwtar "n497391
GROUP BY bwkey bklas bwtty bwtar. "n497391
CHECK NOT sum IS INITIAL. "n497391
MOVE f_bwkey TO xmbew-bwkey. "n497391
MOVE f_bklas TO xmbew-bklas. "n497391
MOVE f_bwtty TO xmbew-bwtty. "n497391
MOVE f_bwtar TO xmbew-bwtar. "n497391
MOVE sum TO xmbew-salk3. "n497391
COLLECT xmbew. "n497391
ENDSELECT. "n497391
ENDLOOP.
ELSEIF summen IS INITIAL.
CLEAR xmbew. "388498
SELECT mandt matnr bwkey bwtar lvorm lbkum salk3
vprsv verpr stprs peinh bklas salkv lfgja lfmon
bwtty pstat vksal eklas qklas
FROM mbew INTO CORRESPONDING FIELDS OF xmbew
FOR ALL ENTRIES IN organ WHERE bwkey EQ organ-bwkey
AND matnr IN matnr
AND bklas IN ibklas
AND bwtar IN bwtar.
APPEND xmbew.
ENDSELECT.
* Begin of Optima APP 037
IF NOT xmbew IS INITIAL.
* Start of Insert E_FIR.018 PRADHSA1
SELECT matnr werks prctr
FROM marc
INTO TABLE i_marc
FOR ALL ENTRIES IN xmbew
WHERE matnr = xmbew-matnr
AND werks = xmbew-bwkey.
* End of Insert E_FIR.018 PRADHSA1
ENDIF.
* End of Optima APP 037
CLEAR xmbew. "388498
SELECT mandt matnr bwkey bwtar lbkum salk3
vprsv verpr stprs peinh bklas salkv lfgja lfmon
bwtty vksal sobkz vbeln posnr
FROM ebew INTO CORRESPONDING FIELDS OF xmbew
FOR ALL ENTRIES IN organ WHERE bwkey EQ organ-bwkey
AND matnr IN matnr
AND bklas IN ibklas
AND bwtar IN bwtar.
xmbew-no_sum = 'X'.
APPEND xmbew.
ENDSELECT.
CLEAR xmbew. "388498
SELECT mandt matnr bwkey bwtar lbkum salk3
vprsv verpr stprs peinh bklas salkv lfgja lfmon
bwtty vksal sobkz pspnr
FROM qbew INTO CORRESPONDING FIELDS OF xmbew
FOR ALL ENTRIES IN organ WHERE bwkey EQ organ-bwkey
AND matnr IN matnr
AND bklas IN ibklas
AND bwtar IN bwtar.
xmbew-no_sum = 'X'.
APPEND xmbew.
ENDSELECT.
* consider valuated subcontractor stocks from OBEW "n497391
CLEAR xmbew. "n497391
SELECT mandt matnr bwkey bwtar lbkum salk3 "n497391
vprsv verpr stprs peinh bklas salkv "n497391
lfgja lfmon bwtty vksal sobkz lifnr "n497391
FROM obew INTO CORRESPONDING FIELDS OF xmbew "n497391
FOR ALL ENTRIES IN organ "n497391
WHERE bwkey EQ organ-bwkey "n497391
AND matnr IN matnr "n497391
AND bklas IN ibklas "n497391
AND bwtar IN bwtar. "n497391
xmbew-no_sum = 'X'. "n497391
APPEND xmbew. "n497391
ENDSELECT. "n497391
ENDIF.
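As an aside, the COLLECT statements in the code above accumulate salk3 for rows whose key fields (bwkey, bklas, bwtty, bwtar) match. The same aggregation idea, sketched here in Python purely to make the COLLECT semantics concrete (the field names come from the ABAP above; the sample data is invented):

```python
from collections import defaultdict

def collect(rows):
    """Mimic ABAP COLLECT: sum the numeric salk3 field for rows
    whose key fields (bwkey, bklas, bwtty, bwtar) are equal."""
    totals = defaultdict(float)
    for row in rows:
        key = (row["bwkey"], row["bklas"], row["bwtty"], row["bwtar"])
        totals[key] += row["salk3"]
    # Rebuild the "internal table" from the aggregated totals.
    return [
        {"bwkey": k[0], "bklas": k[1], "bwtty": k[2], "bwtar": k[3], "salk3": v}
        for k, v in sorted(totals.items())
    ]

rows = [
    {"bwkey": "1000", "bklas": "3000", "bwtty": "", "bwtar": "", "salk3": 100.0},
    {"bwkey": "1000", "bklas": "3000", "bwtty": "", "bwtar": "", "salk3": 50.0},
    {"bwkey": "2000", "bklas": "3100", "bwtty": "", "bwtar": "", "salk3": 25.0},
]
print(collect(rows))  # two result rows; the first has salk3 == 150.0
```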
Thanks in advance
Raja

Hi Raj,
1) Avoid SELECT statements inside a loop; they will affect the performance of your program.
2) First get all the required data from tables mbew, ebew, qbew, and obew into separate internal tables, using FOR ALL ENTRIES on internal table organ instead of SELECT...ENDSELECT in a loop.
3) Use nested loops instead of SELECT...ENDSELECT, but apply the parallel cursor method in the nested loop to improve performance.
The example below shows how to improve the performance of a nested loop using the parallel cursor method.
Nested Loop using Parallel Cursor:
REPORT zparallel_cursor2.

TABLES:
  likp,
  lips.

DATA:
  t_likp TYPE TABLE OF likp,
  t_lips TYPE TABLE OF lips.

DATA:
  w_runtime1 TYPE i,
  w_runtime2 TYPE i,
  w_index    LIKE sy-index.

START-OF-SELECTION.
  SELECT *
    FROM likp
    INTO TABLE t_likp.

  SELECT *
    FROM lips
    INTO TABLE t_lips.

  GET RUN TIME FIELD w_runtime1.

  SORT t_likp BY vbeln.
  SORT t_lips BY vbeln.

  LOOP AT t_likp INTO likp.
*   Resume the inner loop where it stopped for the previous header row
    LOOP AT t_lips INTO lips FROM w_index.
      IF likp-vbeln NE lips-vbeln.
        w_index = sy-tabix.
        EXIT.
      ENDIF.
    ENDLOOP.
  ENDLOOP.

  GET RUN TIME FIELD w_runtime2.
  w_runtime2 = w_runtime2 - w_runtime1.
  WRITE w_runtime2.
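The parallel-cursor trick above is essentially a sorted merge: both tables are sorted on the join key, and the inner loop resumes where it last stopped instead of rescanning from the top. A minimal sketch of the same two-pointer idea in Python (the list names and data are illustrative, not from the report):

```python
def parallel_cursor(headers, items):
    """Join a sorted list of header keys with a sorted list of
    (key, value) items in O(n + m), like the ABAP
    LOOP AT ... FROM w_index pattern, instead of O(n * m)."""
    result = []
    idx = 0  # plays the role of w_index: where the inner loop resumes
    for hkey in headers:
        while idx < len(items) and items[idx][0] < hkey:
            idx += 1  # skip items below the current header key
        j = idx
        while j < len(items) and items[j][0] == hkey:
            result.append((hkey, items[j][1]))
            j += 1
        idx = j  # the next header resumes here; no rescan from the start
    return result

headers = [1, 2, 3]
items = [(1, "a"), (1, "b"), (3, "c")]
print(parallel_cursor(headers, items))  # [(1, 'a'), (1, 'b'), (3, 'c')]
```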
Thanks,
Naveen Kumar.
Scrolling, Counting (the whole shebang?)
Bear with me here, this one's causing me headaches and sleepless nights (and I'm not exaggerating - while this issue sits in Bugzilla, the boss cannot be silenced).
<cliche>The story so far...</cliche>
An implementation of a scrolling result set I built recently for one of the queries in our system was done like this (The database system in question is Postgresql, btw):
(1) Count the rows using a SELECT count(*) ...
(2) Get a cursor with DECLARE CURSOR FOR SELECT ...
(3) Use FETCH to grab rows, MOVE to move the cursor, etc.
(4) Finish up at some point in the future with a CLOSE.
Now...
The entire reason we went through this horror was because ResultSet in Postgres isn't implemented using native Postgres cursors. This made the development team uneasy (we thought the performance wouldn't be up to scratch) so we eventually decided we would go for a somewhat more brute-force way of using Postgres' built-in commands. The result, a very fast scrolling result set, which works fine, for the one or two pages in our system which use it.
But (there is ALWAYS a but, right?)...
Since the DECLARE CURSOR, FETCH, MOVE, and CLOSE statements are not in the SQL standard, the query doesn't work under, say, DB2. Likelihood is it wouldn't work under any other database at all. And for portability's sake, we're considering this to be a bad thing.
In addition to this, the need to get the count separately from the query itself means that in every place where we do a SELECT, we have to have an equivalent SELECT to do the count (which in some cases can bear little or no resemblance to the one used to do the query.)
So...
Question 1: Is doing an 'ordinary' SELECT query, holding the ResultSet in memory and simply reading off row by row only parts of it considered 'safe', where 'safe' is defined by not causing OutOfMemoryError for queries in the order of 10,000-100,000 results?
Now, we require a count of the number of the results. In our existing system this is the slowest part of the process because the SELECT count(*) takes orders of magnitude more time to execute than the DECLARE CURSOR. That is, the DECLARE CURSOR takes about 10ms, and the SELECT count(*) can take a minute, depending on the size of the result set, presumably because the DECLARE CURSOR doesn't actually do the query, and that the query executes as you fetch more results from it, or in the background. (Beats me, anything involving databases is magic as far as I'm concerned.)
Since we 'require' the count though (for the user interface only, despite the fact that it cripples the query speed somewhat), we might be able to get away with doing it this way, so...
Question 2: Would doing the following set of commands likely take the same amount of time to run (or less, or more), compared to doing the SELECT count(*)?
PreparedStatement ps = /* insert code here ;-) */
ResultSet rs = ps.executeQuery();
rs.afterLast();
int numRows = rs.getRow();
rs.beforeFirst();
I guess another part of Question 2 is: is that sequence of commands considered an 'acceptable' way of counting the number of rows returned? It seems to me they would take a long time to run, but *sob* I don't know.
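To make Question 2 concrete: the two strategies being compared are (a) a separate SELECT count(*), where the database counts and transfers nothing, and (b) materializing the whole result set and asking for its size, which is what afterLast()/getRow() amounts to once the driver has cached the rows. A small sketch of the trade-off using Python's built-in sqlite3 (table and data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO person (name) VALUES (?)",
                 [("Alice",), ("Bob",), ("Carol",)])

# Strategy (a): ask the database to count -- a second query, but no
# result rows are transferred to the client.
(count_a,) = conn.execute("SELECT count(*) FROM person").fetchone()

# Strategy (b): fetch everything and count client-side -- one query,
# but every row crosses the wire and sits in memory (the
# OutOfMemoryError risk the question worries about).
rows = conn.execute("SELECT id, name FROM person").fetchall()
count_b = len(rows)

print(count_a, count_b)  # both are 3
```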
The main thing is, we have an extraordinarily large number of different queries in this system, and all of them pretty much use joins of some sort, and all of them are incredibly complex. The system as a whole is quite monolithic, which is why I'm trying to make this enhancement as simple as possible. The count-and-then-cursor method would not only be incompatible with DB2, Oracle and whatever, but would probably take an order of magnitude more time to roll into the system than something which simply caches the result sets.
Question 3: Does anyone know of an independent group, which might have implemented cursor-based ResultSet scrolling into Postgres' JDBC driver? This would be a big time saver, as the lack of this in their driver makes the database nearly useless for any sizable system, and 'unfortunately' our company has a heavily Open-Source philosophy (believe me, I love OS too, but in this case it's crippling us.)
Question 4: Does anyone know of a company in the Sydney region who is looking to recruit a guy who knows a hell of a lot about Jabber and various IM protocols, but not so much about databases? [Disclaimer: if you are my current employer, this is a JOKE.]
Since this is such a weight on my shoulders at the moment, I'll put up a fairly sizable number of dukes. Let's hope we attract the hot-shots. These problems must have been done 1000 times before, but I've never seen a tutorial on them, and not even the 5kg JDBC book we have here helps at all.

To have a scrollable ResultSet you can use CachedRowSet (you can use it with any DBMS):
http://developer.java.sun.com/developer/earlyAccess/crs/
To prevent OutOfMemoryError you can:
PreparedStatement ps = /* insert code here ;-) */
ps.setMaxRows(maxRows);
ResultSet rs = ps.executeQuery();
CachedRowSet crs = new CachedRowSet();
crs.populate(rs);
crs.afterLast();
int numRows = crs.getRow();
crs.beforeFirst();
If numRows equals maxRows, prompt the users that there are more records and let them refine the query.
Or, if you don't have to display all rows (who needs 100,000 rows at once anyway? ;)
retrieve and cache just the keys and display the result page by page (there are some other patterns for doing paging too).
100,000 records is a lot, but it depends on the number of users and RAM - it can still be done.
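The "cache just the keys and page through" pattern mentioned above can be sketched with Python's built-in sqlite3 (the schema, data, and page size are invented for illustration): fetch only the primary keys once, which also yields the total count for free, then load one page of full rows at a time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO item (id, payload) VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(1, 101)])

# Step 1: cache only the keys of the full result set (cheap to hold,
# even for large results).
keys = [k for (k,) in conn.execute("SELECT id FROM item ORDER BY id")]

PAGE_SIZE = 10

def fetch_page(page_no):
    """Load the full rows for one page, using the cached keys."""
    page_keys = keys[page_no * PAGE_SIZE:(page_no + 1) * PAGE_SIZE]
    placeholders = ",".join("?" * len(page_keys))
    sql = f"SELECT id, payload FROM item WHERE id IN ({placeholders}) ORDER BY id"
    return conn.execute(sql, page_keys).fetchall()

print(len(keys))         # the total row count comes free with the keys
print(fetch_page(0)[0])  # (1, 'row-1')
```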
There is another open-source JDBC driver for PostgreSQL, jxDBCon: http://sourceforge.net/projects/jxdbcon/
You can compare it with the standard PostgreSQL driver.
Hi,
I need to add a button on the PO screen. When I click this button, it should call a Z transaction.
I am trying to find a screen exit/menu exit to achieve this, with no result.
Can someone help me find a solution?
Thanks,
Vijay

Hi...
You can use field-symbols while looping, but the better solution is to use the 'PARALLEL CURSOR' method.
Please refer to the following code:
* int_ekko is the 1st table
* int_ekpo is the 2nd table
LOOP AT int_ekko INTO wa_ekko.
  PERFORM do_something.
  READ TABLE int_ekpo INTO wa_ekpo WITH KEY ebeln = wa_ekko-ebeln.
  IF sy-subrc = 0.
    CLEAR: loc_tabix,
           wa_ekpo.
    loc_tabix = sy-tabix.
    LOOP AT int_ekpo INTO wa_ekpo FROM loc_tabix.
      IF wa_ekko-ebeln EQ wa_ekpo-ebeln.
*       the code to be executed for the second loop..
      ELSE.
        CLEAR loc_tabix.
        EXIT.
      ENDIF.
    ENDLOOP.
  ENDIF.
  CLEAR: wa_ekko.
ENDLOOP.
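An alternative to the parallel cursor, when the item table fits comfortably in memory, is to index it once by the join key and then do constant-time lookups per header. The dictionary below plays the role that the sorted table plus READ TABLE ... WITH KEY plays in the ABAP; names and data are illustrative only:

```python
from collections import defaultdict

def index_then_join(headers, items):
    """Group items by key once, then do O(1) lookups per header --
    an alternative to keeping two sorted cursors in step."""
    by_key = defaultdict(list)
    for key, value in items:
        by_key[key].append(value)
    return [(h, v) for h in headers for v in by_key.get(h, [])]

headers = ["PO-1", "PO-2"]
items = [("PO-1", 10), ("PO-1", 20), ("PO-2", 30)]
print(index_then_join(headers, items))  # [('PO-1', 10), ('PO-1', 20), ('PO-2', 30)]
```

The trade-off versus the parallel cursor is extra memory for the index against not needing either table sorted.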
Hope this helps...