Urgent! Slow Result Set -- temp table slowing me??
-- Running BC4J/JHeadstart/UIX
Description:
I have a UIX page that calls a servlet and passes a TABLE_NAME. The servlet gets the TABLE_NAME and calls a class that extends oracle.jheadstart.persistence.bc4j.handler.DataSourceHandlerImpl to create a ViewObject and fetch the data we need. Once the ViewObject is passed back to the servlet, the servlet loops through the ViewObject and builds a report. See the problem below and the code at the bottom.
Problem:
I am running a query that returns approximately 5000 records to my ViewObject. I then loop through the rows and construct a report. The view object returns the first 1085 records quickly, but the following 4000 come back very slowly. I read online that BC4J creates temp tables to store large result sets and then streams the data to the user as needed. Is this our potential bottleneck?
Questions:
Is there a way to have it return all the rows? What can I do to speed this up?
Code:
--- Begin Servlet Snippet ---
private ByteArrayOutputStream createReport(HttpServletRequest request) {
    try {
        // PARM_REPORT = table name
        String reportName = request.getParameter(PARM_REPORT);
        System.out.println(">> calling getReport for " + reportName);
        RdmUserHandlerImpl handler = new RdmUserHandlerImpl();
        ViewObject vo = handler.getReportView2(reportName, request.getSession().getId());
        System.out.println(">> back from getReport");
        // loop through the report and print the row count every 100 rows
        int curRow = 0;
        while (vo.hasNext()) {
            vo.next();
            curRow++;
            if (curRow % 100 == 0) {
                System.out.println(curRow + "");
            }
        }
--- End Servlet Snippet ---
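As a first diagnostic, it may help to time the loop itself so the point where fetching slows down (around row 1085 here) shows up in the log. This is only a sketch; the `vo` variable and loop mirror the servlet snippet above, and the helper name is mine:

```java
public class RowTimerSketch {
    // Sketch only: decide when to emit a progress line.
    static boolean shouldLog(int row, int every) {
        return row > 0 && row % every == 0;
    }

    // Intended usage inside the servlet loop above (vo is the ViewObject):
    //   long t0 = System.currentTimeMillis();
    //   int curRow = 0;
    //   while (vo.hasNext()) {
    //       vo.next();
    //       curRow++;
    //       if (shouldLog(curRow, 100)) {
    //           System.out.println(curRow + " rows in "
    //               + (System.currentTimeMillis() - t0) + " ms");
    //       }
    //   }
}
```

If the per-100-rows time jumps sharply after ~1085 rows, that points at the row-spilling behavior discussed in the replies below rather than at the query itself.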
--- Begin RdmUserHandlerImpl Snippet ---
public ViewObject getReportView2(String tableName, Object sessionId) throws Exception {
    System.out.println("IN GET REPORT VIEW");
    ApplicationModule appMod = (ApplicationModule) getConnection("classpath...resource.MyUser", sessionId);
    // First see if we already created the view definition
    ViewObject vo = appMod.findViewObject(tableName);
    // If it was already created then refresh it, else try to create it
    if (vo != null) {
        System.out.println("found existing view");
        vo.reset();
    } else {
        System.out.println("view not found, making new view");
        String query = "SELECT * FROM " + tableName;
        System.out.println("QUERY = " + query);
        vo = appMod.createViewObjectFromQueryStmt(tableName, query);
    }
    // max fetch size returns -1 (unlimited)
    System.out.println("MAX Fetch Size = " + vo.getMaxFetchSize());
    return vo;
}
--- End RdmUserHandlerImpl Snippet ---
Please reply asap! Deadline is coming fast!
-Matt
Matt,
I think you are right: the temporary tables created by BC4J are the reason it slows down after a certain number of records. One of Steve Muench's articles includes the text:
One of the most frequent performance-related questions we get on the Oracle Technet discussion forum is a question like, "After I query about a 1000 rows in a view object, my application gets very, very slow. What's happening?"
It explains how you can turn off this feature, see the full article at http://www.oracle.com/technology/products/jdev/tips/muench/voperftips/index.html.
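For what it's worth, here is a minimal sketch of what the override might look like on your side. The property name jbo.use.pers.coll comes from the BC4J temporary-table documentation; how you hand the environment table to your getConnection helper is an assumption here, so please check the article for the exact hook:

```java
import java.util.Hashtable;

public class Bc4jEnvSketch {
    // Sketch: BC4J spills fetched rows to temporary tables ("persistent
    // collections") past a row threshold. Setting jbo.use.pers.coll to
    // "false" in the environment used when the ApplicationModule is
    // created keeps all rows in memory instead (watch your heap!).
    public static Hashtable<String, String> buildEnv() {
        Hashtable<String, String> env = new Hashtable<>();
        env.put("jbo.use.pers.coll", "false"); // disable spill-to-disk
        return env;
    }
}
```

Passing this environment when acquiring the ApplicationModule (rather than per ViewObject) is the usual place for such settings, but verify against the article before relying on it.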
The following article gives a lot of helpful information about the temporary tables:
http://www.oracle.com/technology/products/jdev/htdocs/bc4j/bc4j_temp_tables.html
The next article gives general tips for performance tuning of BC4J:
http://www.oracle.com/technology/products/jdev/howtos/10g/adfbc_perf_and_tuning.html
Hope this helps,
Sandra Muller
JHeadstart Team
Similar Messages
-
Result Set fetch agonisingly slow
I am having sporadic trouble retrieving data from a ResultSet. I do not know how to tell if it is an Oracle problem or a JDBC problem.
In a while( results.next() ) loop, for some result sets it pauses for several seconds after every 10 iterations. This usually causes a webserver time-out and the results never get to the browser. It is NOT volume related, as some LARGER result sets from almost identical queries (i.e. with just one value in the where clause changed) run fine. We are using Oracle 8i, and the "problem" query always runs fine in SQLPlus (i.e. less than ten seconds for the execution and display of ~700 rows).
some relevant evidence:
a) Usually the PreparedStatement.execute() itself is pretty quick - just a few seconds at worst
b) All result sets from this query pause every 10 iterations, but most pause for just a fraction of a second
c) With a certain value in the where clause, the pauses are 4-30 seconds, which, even when only ~700 rows are returned, results in a response time of several minutes.
d) The pauses are in the results.next() statement itself (I have output timestamps at the very beginning and the very end of the loop to show this).
e) the query is a join of six tables
f) the part of the where clause that changes is: AND FULFILLER.NAME IN (...) , where the IN clause can contain single or multiple names (but I am using a single value in the "problem" query)
g) The FULFILLER.NAME column IS an indexed field, and that index IS being used (according to "EXPLAIN PLAN") in both the fast queries and the slow queries.
What confuses me (amongst several things) is this: I would have thought that the values in the where clause would only affect the creation of the ResultSet, and not the reading of that result set. Am I wrong? Any ideas anyone?
Thanks,
Martin Reynolds (renozu)
This honestly doesn't HAVE to be the case, depending on the cursor.
I think that with a forward-only cursor the database can figure out the rows as it scans along the table. It should be faster, in fact, because that way you only do one table scan and not two. This theory seems to fall apart when you say it is using the index, BUT if I were writing a database and you had a forward-only cursor AND the distribution of keys in the index indicated that many rows would match your query, I might ignore the index and do it as described.
So call me crazy, but here is my suggestion:
If the cursor you are using is a forward-only cursor, then try a scrollable cursor.
If it is already a scrollable cursor, or changing it didn't help, then try this:
rs.last();
rs.beforeFirst();
// now process normally using next()
I would think that this would force the database to find all the rows, so it should help.
Also, the server timeout issue will still need to be addressed if the query takes a long time to run.
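Putting the suggestion together as a sketch (it assumes you already have a java.sql.Connection named conn, and that your driver actually supports scroll-insensitive result sets, which not all do):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class ScrollableFetchSketch {
    // Sketch: open a scrollable cursor, jump to the last row so the
    // driver has to materialize everything, then rewind and iterate.
    static int countAndProcess(Connection conn, String sql) throws Exception {
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_SCROLL_INSENSITIVE,  // not forward-only
                ResultSet.CONCUR_READ_ONLY);
        ResultSet rs = stmt.executeQuery(sql);
        rs.last();              // force the driver to fetch all rows
        int total = rs.getRow();
        rs.beforeFirst();       // rewind
        while (rs.next()) {
            // process the row normally here
        }
        rs.close();
        stmt.close();
        return total;
    }
}
```

Note the trade-off: a scroll-insensitive cursor typically buffers the whole result set client-side, so this trades the per-batch pauses for one up-front fetch.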
Thanks,
Martin Reynolds (renozu) -
Displaying large result sets in Table View – request for patterns
When providing a table of results from a large data set from SAP, care needs to be taken in order to not tax the R/3 database or the R/3 and WAS application servers. Additionally, in terms of performance, results need to be displayed quickly in order to provide sub-second response times to users.
This post is my thoughts on how to do this based on my findings that the Table UI element cannot send an event to retrieve more data when paging down through data in the table (hopefully a future feature of the Table UI Element).
Approach:
For data retrieval, we need to have an RFC with search parameters that retrieves a maximum number of records (say 200) and a flag whether 200 results were returned.
In terms of display, we use a table UI Element, and bind the result set to the table.
For sorting, when they sort by a column, if we have less than the maximum search results, we sort the result set we already have (no need to go to SAP), but otherwise the RFC also needs to have sort information as parameters so that sorting can take place during the database retrieval. We sort it during the SQL select so that we stop as soon as we hit 200 records.
For filtering, again, if less than 200 results, we just filter the results internally, otherwise, we need to go to SAP, and the RFC needs to have this parameterized also.
If the requirement is that the user must look at more than 200 results, we need to have a button on the screen to fetch the next 200 results. This implies that the RFC will also need to have a start point to return results from. Similarly, a previous 200 results button would need to be enabled once they move beyond the initial result set.
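The start-point bookkeeping for those next/previous buttons can be sketched as plain arithmetic (the names are mine, not SAP's; the RFC itself is assumed to accept a start row and a page size):

```java
public class PageStartSketch {
    // Sketch: compute the start row to pass to the RFC for the next or
    // previous block of results; pageSize would be 200 here.
    static int nextStart(int currentStart, int pageSize) {
        return currentStart + pageSize;
    }

    static int prevStart(int currentStart, int pageSize) {
        // never page back past the first record
        return Math.max(0, currentStart - pageSize);
    }
}
```

The "200 results returned" flag from the RFC then tells you whether to enable the next-page button at all.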
Limitations of this are:
1. We need to use custom RFC functions, as BAPIs don't generally provide this type of sorting and limiting of data.
2. Functions need to directly access tables in order to do sorting at the database level (to reduce memory consumption).
3. It's not a great interface to add buttons to get the next/previous set of 200.
4. Obviously, based on where you are getting the data from, it may be better to load the data completely into an internal table in SAP, and do sorting and filtering on this, rather than use the database to do it.
Does anyone have a proven pattern for doing this, or any improvements to the above design? I'm sure SAP-CRM must have to do this, or did they just go with a BSP view when searching for customers?
Note: I noticed there is a pattern for search results in some documentation, but it does not exist in the sneak preview edition of Developer Studio. Has anyone had any exposure to this?
Update - I'm currently investigating whether we can create a new value node and use a supply function to fill the data. It may be that when we bind this to the table UI element, it will call this incrementally as it requires more data and hence could be a better solution.
Hi Matt,
I'm afraid the supplyFunction will not help you out of this, because it is only called if the node is invalid or gets invalidated again. The number of elements a node contains defines the number of elements the table uses to determine the overall number of table rows. Something quite similar to what you want already exists in the WD runtime for internal use. As you've surely noticed, only "visibleRowCount" elements are initially transferred to the client. If you scroll down one or more lines, the following rows are transferred internally on demand. But this doesn't really help you, since:
1. You don't get this event at all and
2. Even if you did get the event: since the number of node elements determines the table's overall row count, the event would never request elements with an index greater than the number of node elements - 1.
You can mimic the desired behaviour by hiding the table footer and creating your own buttons for pagination and scrolling.
Assume you have 10 displayed rows and 200 overall rows. What you need to implement the desired behaviour is:
1. A context attribute "maxNumberOfExpectedRows" type int, which you would set to 200.
2. A context attribute "visibleRowCount" type int, which you would set to 10 and bind to table's visibleRowCount property.
3. A context attribute "firstVisibleRow" type int, which you would set to 0 and bind to table's firstVisibleRow property.
4. The actions PageUp, PageDown, RowUp, RowDown, FirstRow and LastRow, which are used for scrolling and the corresponding buttons.
The action handlers do the following:
PageUp: firstVisibleRow -= visibleRowCount (must be >=0 of course)
PageDown: firstVisibleRow += visibleRowCount (first + visible must be < maxNumberOfExpectedRows)
RowDown/Up: firstVisibleRow++/-- with the same restrictions as in page "mode"
FirstRow/LastRow is easy, isn't it?
Since you know which sections of elements have already been "loaded" into the dataSource node, you can fill the necessary sections on demand when the corresponding action is triggered.
For example, if you initially display elements 0..9 and go to the last row, you load from maxNumberOfExpectedRows (200) - visibleRowCount (10), so you would request entries 190 to 199 from the backend.
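The clamping rules above can be sketched as plain arithmetic (a sketch in Java rather than actual Web Dynpro action handlers; the names follow the context attributes listed above):

```java
public class TableScrollSketch {
    // Sketch of the action handlers: compute the new firstVisibleRow,
    // clamped to [0, maxRows - visible]. maxRows plays the role of the
    // "maxNumberOfExpectedRows" attribute, visible of "visibleRowCount".
    static int pageUp(int first, int visible) {
        return Math.max(0, first - visible);
    }

    static int pageDown(int first, int visible, int maxRows) {
        int next = first + visible;
        // only advance if a full page still fits; otherwise snap to last page
        return (next + visible <= maxRows) ? next : Math.max(0, maxRows - visible);
    }

    static int lastRow(int visible, int maxRows) {
        return Math.max(0, maxRows - visible);
    }
}
```

With 10 visible rows and 200 overall, lastRow gives 190, matching the "request entries 190 to 199" example above.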
A drawback is that the BAPIs/RFCs still have to be capable of processing such "section selecting".
Best regards,
Stefan
PS: And this is meant as a workaround and does not really replace your pattern request. -
ABAP routine - looping a result set from table.
I am trying to access a table and pass the resulting rows to an internal table.
However, before passing, I need to do some data manipulation on one of the columns of the table and then pass it.
If I didn't have to do this, I could have just said
INTO CORRESPONDING FIELDS OF TABLE
But since I have do data manipulation, I cannot do this. This is what I am doing now.
SELECT * FROM /BI0/QEMPLOYEE WHERE JOB = '1234' AND DATETO = '99991231'.
MOVE /BI0/QEMPLOYEE-EMPLOYEE TO INT_EMP-EMPN.
MOVE /BI0/QEMPLOYEE-JOB TO STR1.
STR2 = STR1+0(4).
SEARCH STR2 FOR ' ' STARTING AT 1 ENDING AT 1.
IF SY-SUBRC EQ 0.
REPLACE ' ' WITH '0' INTO STR2.
ENDIF.
MOVE STR2 TO INT_EMP-FAC.
But this will move only one row. How can I move all the rows? Is there a loop that I can use? I can see a loop for internal tables, but I need to loop over the result set and then send it. I posted this question twice but haven't gotten the right answer.
Thanks.
Hi,
Are you trying to write the code in an Update Routine or a Start Routine?
If you want to update all the records, you can use the Start Routine.
There you need to use a loop to process all the records.
If you are using an Update Routine then you don't have to use the loop.
But here I recommend the Start Routine as the best choice.
Here is a document which can be helpful for you:
http://bluestonestep.com/component/option,com_docman/task,doc_download/gid,13/
Happy Tony -
Unbearably slow result sets. Please help.
I'm trying to draw data out of a read-only Omnis database and migrate it into a MySQL database, and the process is really, really slow. The biggest transfer involves moving a paltry 3117 rows of data. This alone takes some 10 minutes O_o. Something is amiss, and I've traced the bottleneck to the resultset recursion. I get a funky resultset out of Omnis when using the JDBC-ODBC bridge [I don't even have access to column headers, and therefore can't index them via getXXX(String s)]. I weeded through these forums, and someone having similarly slow (though not to this degree) performance moving things out of MySQL was advised to limit the resultset fetch size via a call to
stat = con.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY, java.sql.ResultSet.CONCUR_READ_ONLY);
stat.setFetchSize(1);
I've tried this, but it keeps on throwing an exception.
SQLException: java.sql.SQLException: Invalid Fetch Size
Any thoughts would be greatly appreciated.
To clarify (I probably used the wrong term), I meant iterating down a resultset via calls to resultset.next().
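Since the JDBC-ODBC bridge here exposes no column labels, one workaround sketch is to read every value by 1-based position with getObject(int), which also avoids the per-call name lookup of getXXX(String). The helper names are mine, and this assumes you know the column count (e.g. from ResultSetMetaData):

```java
import java.sql.ResultSet;
import java.sql.SQLException;

public class PositionalReadSketch {
    // Pure helper: join one row's values with tabs for display.
    static String joinRow(Object[] vals) {
        StringBuilder sb = new StringBuilder();
        for (Object v : vals) sb.append(v).append('\t');
        return sb.toString().trim();
    }

    // Sketch of the loop: JDBC columns are 1-based.
    static void dump(ResultSet rs, int nCols) throws SQLException {
        while (rs.next()) {
            Object[] vals = new Object[nCols];
            for (int i = 1; i <= nCols; i++) vals[i - 1] = rs.getObject(i);
            System.out.println(joinRow(vals));
        }
    }
}
```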
I did a couple more tests, however, and found that it is not the calls to next() that are the problem, but the getXXX()'s. I made a little program that accessed the Omnis database, ran a select * on a table of 90 rows, displayed the first five entries in each row, and counted each row as it was output to the screen. My findings were that in 10 seconds the program had moved through a paltry 47 rows of the Omnis database. As a control, I ran the exact same program against the MySQL database and got all 3117 rows of output in around 4 seconds. I also ran the same program against Omnis without the outputs, and it ran at an appreciable speed. I am therefore forced to conclude that the problem lies in the way it handles gets. Is there anything I can do about this? -
I have a ResultSet with roughly 7000 rows and 28 columns, the last column of which is an oracle.sql.ARRAY with at most 1800 rows itself. I get the ResultSet from a callable statement with the OUT parameter of type oracle.jdbc.driver.OracleTypes.CURSOR. I'm having trouble parsing through the set. As I'm parsing (resultSet.next()), after every tenth record it stalls for 5 sec; I assume it's because WebLogic is fetching 10 more records. I tried changing the fetch size from 10 to 1000 but got an out-of-memory error (I have plenty of memory allocated, 500 MB). I also changed the prefetch feature and set the chunk size to 6000 on the console. I really need this to be faster, any ideas? Thank you for your time.
Hi Michael,
Generally it's better to process large amounts of data at the DB server. Have you considered moving that processing into a stored procedure?
Regards,
Slava Imeshev
-
Cannot access columns in a result set using table alias in Oracle database
I have a query which joins a few tables. There are a few columns in various tables with identical names. In the query, I assigned table aliases for each table, thinking that would be the way to access a specific column of a specific table. However, when trying to retrieve the column, I get an exception stating "Invalid column name". I had no problem doing this in my last project when coding against a MySQL database, so this is likely a driver implementation issue. My current workaround is to assign a column alias, though I find this annoying and it makes the query very verbose.
My question is whether this option is perhaps a configuration issue, a bug, or something that I'm missing. Also, I would like to know if anybody has an elegant workaround without accessing columns using their numeric index.
I'm querying an Oracle 10g database in a managed environment (the database connection is obtained from a WebLogic data source).
Sample query:
select
a.address1,
d.address1
from
account a
inner join
department d on a.department_id = d.department_id
where
a.account_id = 1000;
When trying to access a ResultSet instance in the following manner, I will get an exception:
rs.getString("d.address1");
Retrieving "address1" will return the first column in the select clause.
jonathan3 wrote:
My question is whether this option is perhaps a configuration issue, a bug, or something that I'm missing. Since you already figured out that you can use an alias, one can suppose it is the last, except that you are missing that you already have a solution.
You can try extracting the meta data to see if it has a name without the alias. Probably not.
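As a sketch of the alias workaround against the sample query above (the alias names are mine; the point is to read the value back with the label, not the table-qualified name):

```java
public class AliasQuerySketch {
    // Sketch: label each ambiguous column uniquely in the SELECT list,
    // then read it with rs.getString("dept_address") instead of the
    // failing rs.getString("d.address1").
    static String aliasedQuery() {
        return "select a.address1 as account_address, "
             + "d.address1 as dept_address "
             + "from account a "
             + "inner join department d on a.department_id = d.department_id "
             + "where a.account_id = 1000";
    }
}
```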
Also, I would like to know if anybody has an elegant workaround without accessing columns using their numeric index.
One can only suppose that you consider using names "elegant". -
Is it possible to filter the nested table result set of table column
Hi
Create or replace type address_record
as object
( address_no number,
address_name varchar2,
address_startDate date,
address_expiryDate date
);
create or replace type address_rec_tab
as table of address_record;
Create table employee
(emp_no number,
emp_name varchar2,
adresses address_rec_tab
);
1st approach
==========
<pre>
select
emp.emp_no,
emp.emp_name,
emp.addresses
from employee emp,
table(emp.addresses) add
where add.address_expiryDate >=sysdate
</pre>
In the above example, the address collection object returned by my SQL query is not the filtered (current) address list.
I suppose this is due to the fact that my where clause is not applied to the nested table.
Through my reading I gather that I can only use the following query to filter the address collection.
2nd approach
==========
<pre>
select
emp.emp_no,
emp.emp_name
cursor(select address_no,
address_name,
address_startDate,
address_expiryDate
from employee emp,
table (emp.addresses) add
where add.address_expiry_date >=sysdate)
from employee emp,
table (emp.addresses) add
where add.address_expiry_date >= sysdate) -- probably this is redundant
</pre>
But this approach forces me to rebuild the addresses collection object.
I was wondering if anybody can suggest a way to make the 1st approach work, so that I do not have to rebuild the collection object.
Thanks for your help in advance
Regards
Charan
Create statements have been slightly modified;
Create or replace type address_record as object
( address_no number,
address_name varchar2(20),
address_startDate date,
address_expiryDate date
);
create or replace type address_rec_tab as table of address_record;
Create table employee
(emp_no number,
emp_name varchar2(20),
add_list address_rec_tab
)
nested table add_list store as a_list;
insert into employee values (1, 'KMCHARAN', address_rec_tab ( address_record(1, 'NORTH POLE', trunc(sysdate-1), trunc(sysdate+10) ) ,
address_record(1, 'SOUTH_POLE', trunc(sysdate-1), trunc(sysdate+10) ) ) );
insert into employee values (2, 'ME', address_rec_tab ( address_record(2, 'EAST', trunc(sysdate-2), trunc(sysdate+12) ) ,
address_record(2, 'WEST', trunc(sysdate-2), trunc(sysdate+12) ) ) );
SQL> l
1 select *
2 from employee
3 ,table(add_list) a
4* where a.Address_StartDate = trunc(sysdate-1)
SQL> /
EMP_NO EMP_NAME
ADD_LIST(ADDRESS_NO, ADDRESS_NAME, ADDRESS_STARTDATE, ADDRESS_EXPIRYDATE)
ADDRESS_NO ADDRESS_NAME ADDRESS_S ADDRESS_E
1 KMCHARAN
ADDRESS_REC_TAB(ADDRESS_RECORD(1, 'NORTH POLE', '08-APR-10', '19-APR-10'), ADDRESS_RECORD(1, 'SOUTH_
1 NORTH POLE 08-APR-10 19-APR-10
1 KMCHARAN
ADDRESS_REC_TAB(ADDRESS_RECORD(1, 'NORTH POLE', '08-APR-10', '19-APR-10'), ADDRESS_RECORD(1, 'SOUTH_
1 SOUTH_POLE 08-APR-10 19-APR-10 -
How can I use ONE Text search iView to event/affect mutliple Result Sets?
hello everyone,
i have a special situation in which i have 6 flat tables in my repository which all have a common field called Location ID (which is a lookup flat to the Locations table).
I am trying to build a page with a free-form text search iView on Table #1 (search field = Location ID). When I execute the search, the result set for Table #1 is properly updated, but how do I also get the Result Set iViews for Tables #2-6 to react to the event from the Text Search for Table #1 so that they are updated too?
I don't want to have to build 6 different text search iViews (one for each table). I just want to use ONE text search iView for all the different result set tables. But in the documentation and iView properties, the text search iView doesn't have any eventing.
if you have any suggestions, please help.
many thanks in advance,
mm
hello Donna,
that should not be a problem: since you are dealing with result sets and detail iViews, custom eventing can be defined for those iViews.
Yes, it says "no records found" because an active search and record selection haven't been performed for it (only for your main table).
So, yes, define a custom event, and pass the appropriate parameters and you should be fine.
Creating a custom event between a Result Set iView and an Item Details iView is easy and works. I have done it.
See page 35 of the Portal Content Development Guide for a step-by-step example, which is what I used.
For my particular situation, the problem I'm having is that I want the Search Text iView's event (i.e., when the Submit button is pressed) to be published to multiple iViews, all with different tables. Those tables all share some common fields, which is what the Search iView has, so I'd like to pass the search critera to all of the iViews.
-mm -
Insert into a temp table is too slow
I'd like to know why, if I create a temp table outside of my procedure, the insert into it gets slower than if I create that temp table inside my procedure. Here is an example:
create table #Test (col1 varchar(max))
go
create proc dbo.test
as
begin
truncate table #Test
insert into #Test
select 'teste'
FROM sys.tables
cross join sys.columns
end
go
exec dbo.test
go
create table #Test2 (col1 varchar(max))
go
truncate table #Test2
insert into #Test2
select 'teste'
FROM sys.tables
cross join sys.columns
At test (temp table created outside the procedure, insert inside): duration 71700, reads 45220, CPU 26052.
At test2 (create and insert both outside): duration 49636, reads 45166, CPU 24960.
best regards
There should be no difference.
You would have to repeat the test you designed a few times to take readings and then reverse the order.
BOL: "
Benefits of Using Stored Procedures
The following list describes some benefits of using procedures.
Reduced server/client network traffic
The commands in a procedure are executed as a single batch of code. This can significantly reduce network traffic between the server and client because only the call to execute the procedure is sent across the network. Without the code encapsulation provided
by a procedure, every individual line of code would have to cross the network.
Stronger security
Multiple users and client programs can perform operations on underlying database objects through a procedure, even if the users and programs do not have direct permissions on those underlying objects. The procedure controls what processes and activities
are performed and protects the underlying database objects. This eliminates the requirement to grant permissions at the individual object level and simplifies the security layers.
The EXECUTE AS
clause can be specified in the CREATE PROCEDURE statement to enable impersonating another user, or enable users or applications to perform certain database activities without needing direct permissions on the underlying objects and commands. For example,
some actions such as TRUNCATE TABLE, do not have grantable permissions. To execute TRUNCATE TABLE, the user must have ALTER permissions on the specified table. Granting a user ALTER permissions on a table may not be ideal because the user will effectively
have permissions well beyond the ability to truncate a table. By incorporating the TRUNCATE TABLE statement in a module and specifying that module execute as a user who has permissions to modify the table, you can extend the permissions to truncate the table
to the user that you grant EXECUTE permissions on the module.
When calling a procedure over the network, only the call to execute the procedure is visible. Therefore, malicious users cannot see table and database object names, embed Transact-SQL statements of their own, or search for critical data.
Using procedure parameters helps guard against SQL injection attacks. Since parameter input is treated as a literal value and not as executable code, it is more difficult for an attacker to insert a command into the Transact-SQL statement(s) inside
the procedure and compromise security.
Procedures can be encrypted, helping to obfuscate the source code. For more information, see SQL Server Encryption.
Reuse of code
The code for any repetitious database operation is the perfect candidate for encapsulation in procedures. This eliminates needless rewrites of the same code, decreases code inconsistency, and allows the code to be accessed and executed by any user or application
possessing the necessary permissions.
Easier maintenance
When client applications call procedures and keep database operations in the data tier, only the procedures must be updated for any changes in the underlying database. The application tier remains separate and does not have to know about any changes
to database layouts, relationships, or processes.
Improved performance
By default, a procedure compiles the first time it is executed and creates an execution plan that is reused for subsequent executions. Since the query processor does not have to create a new plan, it typically takes less time to process the procedure.
If there has been significant change to the tables or data referenced by the procedure, the precompiled plan may actually cause the procedure to perform slower. In this case, recompiling the procedure and forcing a new execution plan can improve performance.
LINK: http://technet.microsoft.com/en-us/library/ms190782.aspx
Kalman Toth Database & OLAP Architect
Free T-SQL Scripts
New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012 -
How to get the table name of a field in a result set
hi!
i have a simple sql query as
select tbl_customerRegistration.*, tbl_customerAddress.address from tbl_customerRegistration, tbl_customerAddress where tbl_customerAddress.customer_id = tbl_customerRegistration.customer_ID
This query executes well and gets data from the database. When I get the ResultSetMetaData from my result set (holding the result of the above query), I am able to get the field name as
ResultSetMetaData rsmd = rs.getMetaData();//rs is result set
String columnName = rsmd.getColumnName(1);
here i get columnName = "Customer_id"
but when I try to get the table name from the metadata as
String tableName = rsmd.getTableName(1); I get an empty string in the table name.
I want to get the table name of the respective field here, as it is very important to my logic.
How can I do that?
Please help me in this regard, as it is very urgent.
thanks in advance
sajjad ahmed paracha
you may also see the discussion at the following link:
http://forum.java.sun.com/thread.jspa?threadID=610200&tstart=0
So far as I'm aware, you can't get metadata information about the underlying tables in a query from Oracle and/or the Oracle drivers. I suspect, in fact, that the driver would have to have its own SQL parser to get this sort of information.
I'm curious though: how do you have application logic that depends on the name of the source table, but not know in the application what table is involved? Could you do something "cheesy" like
SELECT 'tbl_customerRegistration' AS tbl1_name,
tbl_customerRegistration.*
...
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
Saving SQL result set in new table
Is it possible to save a SQL result set in a new table (easily)? What I want to do is duplicate, or back up, a table.
Create table temp as (select a,b,c from your_table);
This statement will create a table with your result set. This will work in Oracle; I am not sure about others.
-
Access result set in user-defined type of table
Here is the situation: I have a stored procedure that dequeues messages off an AQ and passes them as an OUT parameter in a collection of a user-defined type, the same type used to define the queues. The Java code executes properly, but it seems we don't/can't access the result set. We don't receive any errors but don't know how to access the results. I've included the relevant parts of the problem.
I know this should be doable, but... can someone please tell us what we are doing wrong? Thanks in advance.
-----create object type
create type evt_ot as object(
table_name varchar(40),
table_data varchar(4000));
---create table of object types.
create type msg_evt_table is table of evt_ot;
----create queue table with object type
begin
DBMS_AQADM.CREATE_QUEUE_TABLE (
Queue_table => 'etlload.aq_qtt_text',
Queue_payload_type => 'etlload.evt_ot');
end;
---create queues.
begin
DBMS_AQADM.CREATE_QUEUE (
Queue_name => 'etlload.aq_text_que',
Queue_table => 'etlload.aq_qtt_text');
end;
Rem
Rem Starting the queues and enable both enqueue and dequeue
Rem
EXECUTE DBMS_AQADM.START_QUEUE (Queue_name => 'etlload.aq_text_que');
----create procedure to dequeue an array and pass it OUT using a msg_evt_table-type collection.
create or replace procedure test_aq_q (
i_array_size in number ,
o_array_size out number ,
text1 out msg_evt_table) is
begin
DECLARE
message_properties_array dbms_aq.message_properties_array_t :=
dbms_aq.message_properties_array_t();
msgid_array dbms_aq.msgid_array_t;
dequeue_options dbms_aq.dequeue_options_t;
message etlload.msg_evt_table;
id pls_integer := 0;
retval pls_integer := 0;
total_retval pls_integer := 0;
ctr number :=0;
havedata boolean :=true;
java_exp exception;
no_messages exception;
pragma EXCEPTION_INIT (java_exp, -24197);
pragma exception_init (no_messages, -25228);
BEGIN
DBMS_OUTPUT.ENABLE (20000);
dequeue_options.wait :=0;
dequeue_options.correlation := 'event' ;
id := i_array_size;
-- Dequeue this message from AQ queue using DBMS_AQ package
begin
retval := dbms_aq.dequeue_array(
queue_name => 'etlload.aq_text_que',
dequeue_options => dequeue_options,
array_size => id,
message_properties_array => message_properties_array,
payload_array => message,
msgid_array => msgid_array);
text1 := message;
o_array_size := retval;
EXCEPTION
WHEN java_exp THEN
dbms_output.put_line('exception information:');
WHEN no_messages THEN
havedata := false;
o_array_size := 0;
end;
end;
END;
----below is the java code....
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Struct;
import oracle.jdbc.driver.OracleCallableStatement;
import oracle.jdbc.driver.OracleTypes;
public class TestOracleArray {
private final String SQL = "{call etlload.test_aq_q(?,?,?)}";//array size, var name for return value, MessageEventTable
private final String driverClass = "oracle.jdbc.driver.OracleDriver";
private final String serverName = "OurServerName";
private final String port = "1500";
private final String sid = "OurSid";
private final String userId = "OurUser";
private final String pwd = "OurPwd";
Connection conn = null;
public static void main(String[] args){
TestOracleArray toa = new TestOracleArray();
try {
toa.go();
} catch (InstantiationException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IllegalAccessException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (ClassNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (SQLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
private void go() throws InstantiationException, IllegalAccessException, ClassNotFoundException, SQLException{
Class.forName(driverClass).newInstance();
String url = "jdbc:oracle:thin:@"+serverName+":"+port+":"+sid;
conn = DriverManager.getConnection(url,userId,pwd);
OracleCallableStatement stmt = (OracleCallableStatement)conn.prepareCall(SQL);
//set 1 input
stmt.setInt(1, 50);
//register out 1
stmt.registerOutParameter(2, OracleTypes.NUMERIC);
//register out 2
stmt.registerOutParameter(3, OracleTypes.ARRAY, "MSG_EVT_TABLE");
/*
* This code returns a non-null ResultSet, but there is no data in the ResultSet:
* ResultSet rs = stmt.executeQuery();
* rs.close();
* We tried all sorts of combinations of getXXXX(1);
* all return the same error message: Invalid column index.
* So it appears that the execute statement returns no data.
*/
stmt.execute();
Struct myObject = (Struct)stmt.getObject(1);
stmt.close();
conn.close();
}
}
Hi,
Sorry, but I'd refer you to the following sections (and code samples/snippets) in my book:
Mapping User-Defined Object Types (ADT) to oracle.sql.STRUCT, in section 3.3, shows how to pass user-defined types as IN, OUT, and IN/OUT parameters.
JMS over Streams/AQ in the Database, in section 4.2.4, shows how to consume an AQ
message payload.
CorporateOnline, in section 17.2, shows how to exchange user-defined type objects between AQ and JMS.
All of these will hopefully help you achieve what you are trying to do.
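In the meantime, a minimal sketch of retrieving the OUT collection with standard JDBC (no Oracle-specific imports). This is an assumption-laden sketch, not the book's code: the type name passed to registerOutParameter must match the data dictionary and may need to be schema-qualified (e.g. "ETLLOAD.MSG_EVT_TABLE"). The key point is that the collection is parameter 3 of the call; the original code read parameter 1, which is an IN parameter:

```java
import java.sql.Array;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Struct;
import java.sql.Types;

public class DequeueClient {

    // Parameter 3 of test_aq_q is the OUT collection (msg_evt_table).
    static final int OUT_ARRAY_PARAM = 3;

    public static String callSql() {
        return "{call etlload.test_aq_q(?,?,?)}";
    }

    // Read the dequeued messages from the OUT collection. Requires a live
    // connection; shown for structure only.
    public static void readMessages(Connection conn) throws Exception {
        try (CallableStatement cs = conn.prepareCall(callSql())) {
            cs.setInt(1, 50);                          // requested array size (IN)
            cs.registerOutParameter(2, Types.NUMERIC); // rows actually dequeued
            cs.registerOutParameter(OUT_ARRAY_PARAM, Types.ARRAY, "MSG_EVT_TABLE");
            cs.execute();                              // OUT params: no ResultSet

            Array arr = cs.getArray(OUT_ARRAY_PARAM);
            if (arr == null) return;                   // nothing dequeued
            for (Object row : (Object[]) arr.getArray()) {
                Struct evt = (Struct) row;             // one evt_ot instance
                Object[] attrs = evt.getAttributes();  // [table_name, table_data]
                System.out.println(attrs[0] + " -> " + attrs[1]);
            }
        }
    }
}
```

Each element of the returned Array maps to one evt_ot object, and getAttributes() yields its fields in declaration order.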
Kuassi -
More time in extracting result set (performance) VERY URGENT
Hi all,
This program is taking far too much time extracting the result set. How can I improve the performance of this program?
How can I decrease its execution time?
***INCLUDE Z00_BCI010 .
TABLES: z00_bc_cpt_sess, " Storage of the counters associated
                         " with the processing programs.
z00_bc_erreur, " Error table for the custom
               " programs.
*start of addition FAE 30463
z00_bc_err_log. "Error storage and retention
                "table
*end of addition FAE 30463
t100. " Messages.
* Internal data declarations *
* Internal error table.
DATA: BEGIN OF itb_erreur OCCURS 0.
INCLUDE STRUCTURE z00_bc_erreur.
DATA: END OF itb_erreur.
* Number of days before data is deleted from the error table.
DATA: i_nb_jour(3) TYPE n.
*start of addition FAE 30463
DATA: w_dl_delai LIKE z00_bc_err_log-z_delai,
w_in_stockage LIKE z00_bc_err_log-z_stockage VALUE 'X'.
*end of addition FAE 30463
* Execution date and time.
DATA: i_dt_date_execution LIKE sy-datum,
i_hr_heure_execution LIKE sy-uzeit.
* Deletion date.
DATA: z_date LIKE sy-datum.
* Session counter.
DATA: o_ct_session LIKE z00_bc_cpt_sess-z_ct_session.
* Counter for the line number of the error table.
DATA: l_ct_num_ligne LIKE z00_bc_erreur-z_no_num_ligne VALUE '00'.
* Data used to fill the internal error table.
* ABAP program name.
DATA: i_repid LIKE z00_bc_erreur-z_repid.
* Processing label.
DATA: w_lb_traitement LIKE itb_erreur-z_lb_lib_trait.
* Key identifying the processed object.
DATA: i_ds_clef_objet LIKE z00_bc_erreur-z_ds_clef_objet.
* Error code.
DATA: i_cd_message LIKE sy-msgno.
* Error message type.
DATA: i_ty_message LIKE sy-msgty.
* Message class.
DATA: i_classe_message LIKE sy-msgid.
* Message variables.
DATA: i_msgv1 LIKE sy-msgv1,
i_msgv2 LIKE sy-msgv2,
i_msgv3 LIKE sy-msgv3,
i_msgv4 LIKE sy-msgv4.
* Program execution phase.
DATA: i_in_phase_exec LIKE z00_bc_erreur-z_in_phase_exec.
* Key label.
DATA: i_clef_objet(30).
* Constants.
CONSTANTS: k_heure(8) VALUE 'Heure', "#EC NOTEXT
k_code(4) VALUE 'Code', "#EC NOTEXT
k_lb_message(80) VALUE 'Désignation', "#EC NOTEXT
k_ligne LIKE sy-linsz VALUE '127',
k_societe LIKE sy-title VALUE 'SCHNEIDER ELECTRIC INDUSTRIES S.A.S.',
k_projet LIKE sy-title VALUE 'LOGOS'.
* PROCESSING *
* Clear the internal table and the data.
FREE itb_erreur.
CLEAR: i_msgv1,
i_msgv2,
i_msgv3,
i_msgv4.
* Form F930_INIT *
* Function: *
* - Fills the transparent error table. *
* Global data: *
* - ITB_ERREUR Internal error table. *
* Inputs: *
* - I_REPID Name of the program in error. *
* - I_NB_JOUR Number of days before the records of the *
* table Z00_BC_ERREUR are deleted. *
* Output: *
* - O_CT_SESSION Session counter. *
FORM f930_init USING i_repid
i_nb_jour.
* Call the routine that deletes old records.
PERFORM f911_suppression_anomalie USING i_repid
i_nb_jour.
* Update the session counters.
PERFORM f912_maj_z00_bc_cpt_sess USING i_repid
CHANGING o_ct_session.
ENDFORM.
* Form F930_INIT_BLOCAGE *
* Function: *
* - Deletes old records *
* - Updates the session table *
* Global data: *
* - ITB_ERREUR Internal error table. *
* Inputs: *
* - I_REPID Name of the program in error. *
* - I_NB_JOUR Number of days before the records of the *
* table Z00_BC_ERREUR are deleted. *
* Output: *
* - O_CT_SESSION Session counter. *
FORM f930_init_blocage USING i_repid
i_nb_jour.
* Call the routine that deletes old records,
* with a check on the lock entry.
PERFORM f911_suppression_anomalie_bloc USING i_repid
i_nb_jour.
* Update the session counters.
PERFORM f912_maj_z00_bc_cpt_sess USING i_repid
CHANGING o_ct_session.
ENDFORM.
* Form F900_ERREUR *
* Function: *
* - Fills the transparent error table. *
* Global data: *
* - ITB_ERREUR Internal error table. *
* Local data: *
* - L_CT_NUM_LIGNE Line counter *
* - O_CT_SESSION Session counter number *
* Inputs: *
* - I_REPID Name of the program in error. *
* - I_IN_PHASE_EXEC Program execution phase *
* - I_DS_CLEF_OBJET Key identifying the processed object. *
* - I_DT_DATE_EXECUTION Execution date. *
* - I_HR_HEURE_EXECUTION Execution time. *
* - I_TY_MESSAGE Message type. *
* - I_CD_MESSAGE Error code. *
* - I_CLASSE_MESSAGE Message class. *
* - I_MSGV1 Message variable. *
* - I_MSGV2 Message variable. *
* - I_MSGV3 Message variable. *
* - I_MSGV4 Message variable. *
FORM f900_erreur USING i_repid
i_in_phase_exec
i_ds_clef_objet
i_dt_date_execution
i_hr_heure_execution
i_ty_message
i_cd_message
i_classe_message
value(i_msgv1)
value(i_msgv2)
value(i_msgv3)
value(i_msgv4). "#EC CALLED
* DE3K913901 start of addition
* Retrieve the increment that will be included in the session number:
IF o_ct_session IS INITIAL
AND i_repid = 'Z06_MMR001'.
PERFORM f912_maj_z00_bc_cpt_sess USING 'Z06_MMR001'
CHANGING o_ct_session.
ENDIF.
* DE3K913901 end of addition
* Clear the header area of the internal table.
CLEAR itb_erreur.
* Increment the line number counter of the error table.
l_ct_num_ligne = l_ct_num_ligne + 1.
* Fill the internal table.
MOVE: i_repid TO itb_erreur-z_repid,
l_ct_num_ligne TO itb_erreur-z_no_num_ligne,
i_dt_date_execution TO itb_erreur-z_dt_date_exec,
i_hr_heure_execution TO itb_erreur-z_hr_heure_exec,
w_lb_traitement TO itb_erreur-z_lb_lib_trait,
i_in_phase_exec TO itb_erreur-z_in_phase_exec,
i_ds_clef_objet TO itb_erreur-z_ds_clef_objet.
CONCATENATE i_ty_message
i_cd_message
INTO itb_erreur-z_cd_message.
* Retrieve the message text.
CALL FUNCTION 'MESSAGE_TEXT_BUILD'
EXPORTING
msgid = i_classe_message
msgnr = i_cd_message
msgv1 = i_msgv1
msgv2 = i_msgv2
msgv3 = i_msgv3
msgv4 = i_msgv4
IMPORTING
message_text_output = itb_erreur-z_lb_message.
*start of modification FAE 30463
*If the flag is not checked, update the error table
*Z00_BC_ERREUR.
*If there is no entry in the table, also update
*Z00_BC_ERREUR.
IF NOT w_in_stockage IS INITIAL.
* Update the table.
PERFORM f910_mise_a_jour.
ENDIF.
* Update the internal table.
APPEND itb_erreur.
* Clear the variables.
CLEAR: i_msgv1,
i_msgv2,
i_msgv3,
i_msgv4.
*end of modification FAE 30463
ENDFORM.
* Form F910_MISE_A_JOUR *
* Function: *
* - Updates the tables Z00_BC_CPT_SESS and Z00_BC_ERREUR. *
* Global data: *
* - ITB_ERREUR Internal error table. *
FORM f910_mise_a_jour.
* Update the errors.
MOVE-CORRESPONDING itb_erreur TO z00_bc_erreur.
CONCATENATE itb_erreur-z_dt_date_exec
itb_erreur-z_hr_heure_exec
o_ct_session
INTO z00_bc_erreur-z_no_num_session.
MODIFY z00_bc_erreur.
ENDFORM.
* Form F911_SUPPRESSION_VIEILLE_ANOMALIE *
* Function: *
* - Deletes old errors. *
* Global data: *
* - Z00_BC_ERREUR Error table for the custom programs. *
* Input: *
* - I_REPID Name of the program in error. *
* - I_NB_JOUR Number of days before the records of the *
* table Z00_BC_ERREUR are deleted. *
FORM f911_suppression_anomalie USING i_repid
i_nb_jour.
* start of addition FAE 30463
SELECT SINGLE z_stockage z_delai
INTO (w_in_stockage, w_dl_delai)
FROM z00_bc_err_log
WHERE z_repid = i_repid.
*If the program is in table z00_bc_err_log, retrieve the field
*Z_DELAI (error retention delay);
*otherwise the delay is the one passed as a parameter to this routine.
IF sy-subrc = 0.
z_date = sy-datum - w_dl_delai.
ELSE.
z_date = sy-datum - i_nb_jour.
* No record in the parameter table, so store in Z00_BC_ERREUR.
w_in_stockage = 'X'.
ENDIF.
* end of addition FAE 30463
* Delete records that are too old.
DELETE FROM z00_bc_erreur WHERE z_repid EQ i_repid
AND z_dt_date_exec LE z_date.
ENDFORM.
* Form F911_SUPPRESSION_ANOMALIE_BLOC *
* Function: *
* - Deletes old errors, taking the lock objects into account *
* Global data: *
* - Z00_BC_ERREUR Error table for the custom programs. *
* Input: *
* - I_REPID Name of the program in error. *
* - I_NB_JOUR Number of days before the records of the *
* table Z00_BC_ERREUR are deleted. *
FORM f911_suppression_anomalie_bloc USING i_repid
i_nb_jour.
* Lock the table
CALL FUNCTION 'ENQUEUE_EZ00_BC_ERREUR'
EXPORTING
mode_z00_bc_erreur = 'E'
z_mandt = sy-mandt
z_repid = i_repid
X_Z_REPID = ' '
_SCOPE = '2'
_WAIT = ' '
_COLLECT = ' '
EXCEPTIONS
foreign_lock = 1
system_failure = 2
OTHERS = 3.
* Delete the records only if the table is not locked for this program.
* If it is locked => do nothing, as the deletion has already taken place.
IF sy-subrc EQ 0.
* start of addition FAE 30463
SELECT SINGLE z_stockage z_delai
INTO (w_in_stockage, w_dl_delai)
FROM z00_bc_err_log
WHERE z_repid = i_repid.
*If the program is in table z00_bc_err_log, retrieve the field
*Z_DELAI (error retention delay);
*otherwise the delay is the one passed as a parameter to this routine.
IF sy-subrc = 0.
z_date = sy-datum - w_dl_delai.
ELSE.
z_date = sy-datum - i_nb_jour.
* No record in the parameter table, so store in Z00_BC_ERREUR.
w_in_stockage = 'X'.
ENDIF.
* end of addition FAE 30463
* Delete records that are too old.
DELETE FROM z00_bc_erreur WHERE z_repid EQ i_repid
AND z_dt_date_exec LE z_date.
* Unlock the table.
CALL FUNCTION 'DEQUEUE_EZ00_BC_ERREUR'
EXPORTING
MODE_Z00_BC_ERREUR = 'E'
z_mandt = sy-mandt
z_repid = i_repid.
ENDIF.
ENDFORM.
* Form F912_MAJ_Z00_BC_CPT_SESS *
* Function: *
* - Fills the transparent session table. *
* Global data: *
* - Z00_BC_CPT_SESS Storage of the counters associated with the *
* processing programs. *
* - Z00_BC_ERREUR Error table for the custom programs *
* Input: *
* - I_REPID Name of the program in error. *
* Output: *
* - O_CT_SESSION Session number. *
FORM f912_maj_z00_bc_cpt_sess USING i_repid
CHANGING o_ct_session.
* Lock the table
CALL FUNCTION 'ENQUEUE_EZ00_BC_CPT_SESS'
EXPORTING
mode_z00_bc_cpt_sess = 'E'
z_mandt = sy-mandt
z_repid = i_repid
X_Z_REPID = ' '
_SCOPE = '2'
_WAIT = ' '
_COLLECT = ' '
EXCEPTIONS
foreign_lock = 1
system_failure = 2
OTHERS = 3.
* If the table is already locked.
IF sy-subrc NE 0.
DO.
* If this is the 99th loop iteration, exit the program.
IF sy-index EQ 99.
STOP.
ENDIF.
* Otherwise wait 1 second.
WAIT UP TO 1 SECONDS.
* Lock the table
CALL FUNCTION 'ENQUEUE_EZ00_BC_CPT_SESS'
EXPORTING
mode_z00_bc_cpt_sess = 'E'
z_mandt = sy-mandt
z_repid = i_repid
X_Z_REPID = ' '
_SCOPE = '2'
_WAIT = ' '
_COLLECT = ' '
EXCEPTIONS
foreign_lock = 1
system_failure = 2
OTHERS = 3.
* If the lock was obtained.
IF sy-subrc EQ 0.
EXIT.
ENDIF.
ENDDO.
ENDIF.
* Read the session table.
SELECT SINGLE * FROM z00_bc_cpt_sess WHERE z_repid EQ i_repid.
* Check whether a record with the same program name exists and
* whether the session counter differs from '99'.
IF sy-subrc EQ 0 AND z00_bc_cpt_sess-z_ct_session NE 99.
z00_bc_cpt_sess-z_ct_session = z00_bc_cpt_sess-z_ct_session + 1.
o_ct_session = z00_bc_cpt_sess-z_ct_session.
MODIFY z00_bc_cpt_sess.
* If a record with the same program name exists and
* the session counter equals '99'.
ELSEIF sy-subrc EQ 0 AND z00_bc_cpt_sess-z_ct_session EQ 99.
o_ct_session = z00_bc_cpt_sess-z_ct_session.
MODIFY z00_bc_cpt_sess.
* Otherwise.
ELSEIF sy-subrc NE 0.
z00_bc_cpt_sess-z_ct_session = '00'.
z00_bc_cpt_sess-z_repid = i_repid.
o_ct_session = z00_bc_cpt_sess-z_ct_session.
MODIFY z00_bc_cpt_sess.
ENDIF.
COMMIT WORK.
* Unlock the table.
CALL FUNCTION 'DEQUEUE_EZ00_BC_CPT_SESS'
EXPORTING
mode_z00_bc_cpt_sess = 'E'
z_mandt = sy-mandt
z_repid = i_repid
x_z_repid = ' '
_scope = '3'
_synchron = ' '
_collect = ' '.
ENDFORM.
* Form F920_TOP_OF_PAGE *
* Function: *
* - Schneider page header *
* Input: *
* - I_REPID Name of the program in error. *
FORM f920_top_of_page USING i_repid. "#EC CALLED
* Page header.
CALL FUNCTION 'Z_00_BC_TOP_OF_PAGE'
EXPORTING
p_linsz = k_ligne
p_pagno = sy-pagno
p_prog = i_repid
p_projet = k_projet
p_societe = k_societe
p_sujet = sy-title.
IF sy-subrc = 0.
ENDIF.
ENDFORM.
* Form F920_EDITION *
* Function: *
* - Prints the errors. *
* Global data: *
* - ITB_ERREUR Internal error table. *
* Inputs: *
* - I_REPID Name of the program in error. *
* - I_CLEF_OBJET Label of the object key. *
FORM f920_edition USING i_repid
i_clef_objet. "#EC CALLED
* Table header.
WRITE AT (sy-linsz) sy-uline.
WRITE: sy-vline,
k_heure(8),
sy-vline,
k_code(4),
sy-vline,
k_lb_message(80),
sy-vline,
i_clef_objet.
WRITE AT sy-linsz sy-vline.
LOOP AT itb_erreur.
* Print the internal error table.
* Color control.
IF itb_erreur-z_cd_message(1) NE 'S'.
IF itb_erreur-z_cd_message(1) EQ 'W'.
FORMAT COLOR = 7 INTENSIFIED OFF.
ELSEIF itb_erreur-z_cd_message(1) EQ 'I'.
FORMAT COLOR = 3 INTENSIFIED OFF.
ELSE.
FORMAT COLOR = 6 INTENSIFIED OFF.
ENDIF.
ELSE.
FORMAT COLOR = 5 INTENSIFIED ON.
ENDIF.
WRITE AT (sy-linsz) sy-uline.
WRITE: sy-vline,
itb_erreur-z_hr_heure_exec,
sy-vline,
itb_erreur-z_cd_message(4),
sy-vline,
itb_erreur-z_lb_message(80),
sy-vline,
*Begin change PIT DE3K936510
* itb_erreur-z_ds_clef_objet(27).
itb_erreur-z_ds_clef_objet(59).
*End change PIT DE3K936510
WRITE AT sy-linsz sy-vline.
ENDLOOP.
WRITE AT (sy-linsz) sy-uline.
ENDFORM.
INCLUDE: z00_bci010. " Error handling.
* Data declarations *
* Database tables *
TABLES:
ekpo, " Purchasing document item.
lfa1, " Vendor master (general data).
marc, " Plant data for the material.
z03_bw_cmp1,
eord, "Purchasing source list
eina, "Purchasing info record - general data
t024, "Purchasing groups
tvarv. "FAE17345+
* Internal data declarations *
* Declaration of the internal table used to retrieve the months.
DATA: BEGIN OF itb_months OCCURS 12.
INCLUDE STRUCTURE t247.
DATA: END OF itb_months.
* Table for retrieving information about the plants
DATA: BEGIN OF itb_t001w OCCURS 0,
werks LIKE t001w-werks,
fabkl LIKE t001w-fabkl," Factory calendar key
END OF itb_t001w.
* Declaration of the internal table containing the vendor
* address information.
DATA: BEGIN OF itb_adresse OCCURS 0,
lifnr LIKE lfa1-lifnr, " Vendor account number.
name1 LIKE lfa1-name1, " Name 1.
name2 LIKE lfa1-name2, " Name 2.
name3 LIKE lfa1-name3, " Name 3.
name4 LIKE lfa1-name4, " Name 4.
stras LIKE lfa1-stras, " Street number and name.
pstlz LIKE lfa1-pstlz, " Postal code.
ort01 LIKE lfa1-ort01, " City.
pfach LIKE lfa1-pfach, " PO box.
pstl2 LIKE lfa1-pstl2, " PO box postal code.
land1 LIKE lfa1-land1, " Country key.
landx LIKE t005t-landx, " Country.
spras LIKE lfa1-spras, " Language key
END OF itb_adresse.
* Declaration of an internal table for the order
* forecast information.
DATA: BEGIN OF itb_prev_cde OCCURS 0,
werks LIKE marc-werks, " Plant
idnlf LIKE eina-idnlf, " ADDsde vendor material number
lifnr LIKE eord-lifnr, " Vendor number.
ekgrp LIKE marc-ekgrp, " Purchasing group
dispo LIKE marc-dispo, " MRP controller code
matnr LIKE eord-matnr, " Material.
maktx LIKE makt-maktx, " Material description.
bstmi LIKE marc-bstmi, " Order quantity.
men00 LIKE plaf-gsmng, " Quantity for the current month M.
men01 LIKE plaf-gsmng, " Quantity for month M+1.
men02 LIKE plaf-gsmng, " Quantity for month M+2.
men03 LIKE plaf-gsmng, " Quantity for month M+3.
men04 LIKE plaf-gsmng, " Quantity for month M+4.
men05 LIKE plaf-gsmng, " Quantity for month M+5.
men06 LIKE plaf-gsmng. " Quantity for month M+6.
DATA: END OF itb_prev_cde.
* Work structure for the orders concerning the PFC
DATA str_pca_pfc LIKE itb_prev_cde.
*add sde
DATA str_eord_pfc LIKE itb_prev_cde.
* Work structure for the PFC forecasts
DATA str_prev_pfc LIKE itb_prev_cde.
* Declaration of an internal table for the information about the
* purchase order backlog.
DATA: BEGIN OF itb_pca OCCURS 0,
werks LIKE ekpo-werks, " Plant
idnlf LIKE eina-idnlf, " ADDsde vendor material number
lifnr LIKE eord-lifnr, " Vendor number.
ekgrp LIKE marc-ekgrp, " Purchasing group
dispo LIKE marc-dispo, " MRP controller code
matnr LIKE eord-matnr, " Material.
maktx LIKE makt-maktx, " Material description.
ebeln LIKE ekes-ebeln, " Purchasing document number.
ebelp LIKE ekes-ebelp, " Purchasing document item number.
slfdt LIKE eket-slfdt, " Statistical delivery date
eindt LIKE ekes-eindt, " Delivery date given in the
                       " order confirmation.
menge LIKE ekes-menge, " Quantity given in the order
                       " confirmation.
attdu LIKE eket-wemng, " Vendor backlog.
netpr LIKE ekpo-brtwr, " Net price of the purchasing document
                       " in the document currency.
rtard TYPE i, " Delay in working days.
wemng LIKE eket-wemng, " Goods receipt quantity.
bldat LIKE mkpf-bldat, " Date on the document.
qtran LIKE ekes-menge, " Quantity in transit.
dtran LIKE ekes-eindt. " Date of the last transit notice.
DATA: END OF itb_pca.
DATA: w_i TYPE i, "Counter
w_i_char(1) TYPE c, "Text used to retrieve the counter
w_nm_zone(20) TYPE c, "Field name to assign to the field symbol
w_nb_j TYPE i, "Number of working days until month end
w_nb_j_tot TYPE i. "Number of working days in the month
FIELD-SYMBOLS: TYPE ANY.
* Declaration of a table containing the purchasing info records.
DATA: BEGIN OF itb_eina OCCURS 0,
matnr LIKE eina-matnr,
lifnr LIKE eina-lifnr,
idfnl LIKE eina-idnlf.
DATA: END OF itb_eina.
* Declaration of a table containing the purchasing group information.
*DATA: BEGIN OF itb_t024 OCCURS 0,
* ekgrp LIKE t024-ekgrp,
* eknam LIKE t024-eknam,
* ektel LIKE t024-ektel,
* telfx LIKE t024-telfx.
*DATA: END OF itb_t024.
* Declaration of a table for the file transfer.
DATA: BEGIN OF itb_transfert OCCURS 0,
col00(8), "Plant
col00bis(20), "Material reference
col01(18), "Material number
col02(45), "Material description
col021(17), "Purchasing group
col022(13), "MRP controller
col03(17), "Ordered quantity or order number
col04(13), "Month 1 or order item number
col05(20), "Month 2 or initial lead time
col06(33), "Month 3 or negotiated lead time
col07(13), "Month 4 or ordered quantity
col08(13), "Month 5 or expected quantity
col09(13), "Month 6 or expected amount
col10(13), "Month 7 or delay
col11(13), "Partial quantity delivered
col12(13), "Partial delivery date
col13(13), "Quantity in transit
col14(13). "Date
DATA: END OF itb_transfert.
* Declaration of a table containing the list of vendors.
DATA: BEGIN OF itb_lifnr OCCURS 0,
werks LIKE marc-werks,
idfnl LIKE eina-idnlf,
lifnr LIKE eord-lifnr,
ekgrp LIKE marc-ekgrp,
spras LIKE lfa1-spras,
eknam LIKE t024-eknam,
ektel LIKE t024-ektel,
telfx LIKE t024-telfx.
DATA: END OF itb_lifnr.
* Declaration of a structure for the selection from table MSEG.
DATA : BEGIN OF itb_mseg OCCURS 0,
mblnr LIKE mseg-mblnr,
mjahr LIKE mseg-mjahr,
ebeln LIKE mseg-ebeln,
ebelp LIKE mseg-ebelp,
END OF itb_mseg.
* Declaration of a structure for the selection from EKPO.
DATA : BEGIN OF itb_ekpo OCCURS 0,
ebeln LIKE ekpo-ebeln,
lifnr LIKE ekko-lifnr,
ekgrp LIKE ekko-ekgrp,
ebelp LIKE ekpo-ebelp,
matnr LIKE ekpo-matnr,
werks LIKE ekpo-werks,
menge LIKE ekpo-menge,
bpumz LIKE ekpo-bpumz,
netpr LIKE ekpo-netpr,
peinh LIKE ekpo-peinh, "Price unit FAE17345+
"AFT++
bpumn LIKE ekpo-bpumn,
dispo LIKE marc-dispo, "AFT++
END OF itb_ekpo.
* Declaration of a structure for the selection from EKKO.
DATA : BEGIN OF itb_ekko OCCURS 0,
ebeln LIKE ekko-ebeln,
lifnr LIKE ekko-lifnr,
spras LIKE ekko-spras,
END OF itb_ekko.
* Declaration of a structure for the selection from EKES.
DATA : BEGIN OF itb_ekes OCCURS 0,
ebeln LIKE eket-ebeln,
ebelp LIKE eket-ebelp,
etens LIKE ekes-etens,
ebtyp LIKE ekes-ebtyp,
eindt LIKE ekes-eindt,
menge LIKE ekes-menge,
dabmg LIKE ekes-dabmg,
END OF itb_ekes.
Hi,
Use the Code Inspector to find the performance issues in the source code; it also gives some tips to tune the performance.
Open the program in display or edit mode. In the menu bar, choose the Program menu, then go to Check; from the list that appears (which includes the Code Inspector), run it and tune accordingly.
Please reward if useful.
Regards,
Jai.m -
JOIN ON 2 different sets of tables depending on the result of the first set
I have a query that returns results. I want to join this query to
2 different sets of tables, depending on whether the first set has a result or not:
if the first set didn't have any results or records, then check the second set.
SELECT
peo.email_address,
r.segment1 requistion_num,
to_char(l.line_num) line_num,
v.vendor_name supplier,
p.CONCATENATED_SEGMENTS category,
to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
TO_CHAR(l.need_by_date,'MM/DD/YYYY') need_by_date,
pe.full_name requestor,
l.item_description,
pr.segment1 project_num,
t.task_number,
c.segment1,
c.segment2
FROM po_requisition_headers_all r,
po_requisition_lines_all l,
(SELECT project_id,task_id,code_combination_id, distribution_id,requisition_line_id,creation_date FROM
(SELECT project_id,task_id,code_combination_id,distribution_id,creation_date,requisition_line_id,ROW_NUMBER ()
OVER (PARTITION BY requisition_line_id ORDER BY requisition_line_id,distribution_id ) rn
FROM po_req_distributions_all pod) WHERE rn = 1) d,
gl_code_combinations c,
POR_CATEGORY_LOV_V p,
per_people_v7 pe,
PA_PROJECTS_ALL pr,
PA_TASKS_ALL_V t,
ap_vendors_v v
WHERE d.creation_date >= nvl(to_date(:DATE_LAST_CHECKED,
'DD-MON-YYYY HH24:MI:SS'),SYSDATE-1)
AND
l.requisition_header_id = r.requisition_header_id
AND l.requisition_line_id = d.requisition_line_id
AND d.code_combination_id = c.code_combination_id
AND r.APPS_SOURCE_CODE = 'POR'
AND l.category_id = p.category_id
AND r.authorization_status IN ('IN PROCESS','PRE-APPROVED','APPROVED')
AND l.to_person_id = pe.person_id
AND pr.project_id(+) = d.project_id
AND t.project_id(+) = d.project_id
AND t.task_id(+) = d.task_id
AND v.vendor_id(+) = l.vendor_id
and r.requisition_header_id in(
SELECT requisition_header_id FROM po_requisition_lines_all pl
GROUP BY requisition_header_id HAVING SUM(nvl(pl.quantity,0) * nvl(pl.unit_price, 0)) >=100000)
group by
peo.email_address,
r.REQUISITION_HEADER_ID,
r.segment1 ,
to_char(l.line_num) ,
v.vendor_name,
p.CONCATENATED_SEGMENTS ,
to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
TO_CHAR(l.need_by_date,'MM/DD/YYYY') ,
pe.full_name ,
l.item_description,
c.segment1,
c.segment2,
pr.segment1 ,
t.task_number
<b>I want to join this query with this first set </b>
SELECT b.NAME, c.segment1 CO, c.segment2 CC,
a.org_information2 Commodity_mgr,
b.organization_id, p.email_address
FROM hr_organization_information a, hr_all_organization_units b, pay_cost_allocation_keyflex c, per_people_v7 p
WHERE a.org_information_context = 'Financial Approver Information'
AND a.organization_id = b.organization_id
AND b.COST_ALLOCATION_KEYFLEX_ID = c.COST_ALLOCATION_KEYFLEX_ID
and a.ORG_INFORMATION2 = p.person_id
AND NVL (b.date_to, SYSDATE + 1) >= SYSDATE
AND b.date_from <= SYSDATE;
<b>If this doesn't return any result, then I need to join the query with the 2nd set</b>
select lookup_code, meaning, v.attribute1 company, v.attribute2 cc,
decode(v.attribute3,null,null,p1.employee_number || '-' || p1.full_name) sbu_controller,
decode(v.attribute4,null,null,p2.employee_number || '-' || p2.full_name) commodity_mgr
from fnd_lookup_values_vl v,
per_people_v7 p1, per_people_v7 p2
where lookup_type = 'BIO_FIN_APPROVER_INFO'
and v.attribute3 = p1.person_id(+)
and v.attribute4 = p2.person_id(+)
order by lookup_code
How do I do it?
I have hard-coded the 2 join sets into one using UNION ALL, but if one record exists in both sets, how would I differentiate between the 2 sets?
COUNT(*) will only give the total records.
Suppose there are 14 in total:
the first set gives 12 records,
the second set gives 4 records.
But I want only 14 records, which could be 12 from set 1 and 2 from set 2, since set 1 and set 2 can have common records.
SELECT
peo.email_address,
r.segment1 requistion_num,
to_char(l.line_num) line_num,
v.vendor_name supplier,
p.CONCATENATED_SEGMENTS category,
to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
TO_CHAR(l.need_by_date,'MM/DD/YYYY') need_by_date,
pe.full_name requestor,
l.item_description,
pr.segment1 project_num,
t.task_number,
c.segment1,
c.segment2
FROM po_requisition_headers_all r,
po_requisition_lines_all l,
(SELECT project_id,task_id,code_combination_id, distribution_id,requisition_line_id,creation_date FROM
(SELECT project_id,task_id,code_combination_id,distribution_id,creation_date,requisition_line_id,ROW_NUMBER ()
OVER (PARTITION BY requisition_line_id ORDER BY requisition_line_id,distribution_id ) rn
FROM po_req_distributions_all pod) WHERE rn = 1) d,
gl_code_combinations c,
POR_CATEGORY_LOV_V p,
per_people_v7 pe,
PA_PROJECTS_ALL pr,
PA_TASKS_ALL_V t,
ap_vendors_v v
WHERE d.creation_date >= nvl(to_date(:DATE_LAST_CHECKED,
'DD-MON-YYYY HH24:MI:SS'),SYSDATE-1)
AND
l.requisition_header_id = r.requisition_header_id
AND l.requisition_line_id = d.requisition_line_id
AND d.code_combination_id = c.code_combination_id
AND r.APPS_SOURCE_CODE = 'POR'
AND l.category_id = p.category_id
AND r.authorization_status IN ('IN PROCESS','PRE-APPROVED','APPROVED')
AND l.to_person_id = pe.person_id
AND pr.project_id(+) = d.project_id
AND t.project_id(+) = d.project_id
AND t.task_id(+) = d.task_id
AND v.vendor_id(+) = l.vendor_id
and r.requisition_header_id in(
SELECT requisition_header_id FROM po_requisition_lines_all pl
GROUP BY requisition_header_id HAVING SUM(nvl(pl.quantity,0) * nvl(pl.unit_price, 0)) >=100000)
group by
peo.email_address,
r.REQUISITION_HEADER_ID,
r.segment1 ,
to_char(l.line_num) ,
v.vendor_name,
p.CONCATENATED_SEGMENTS ,
to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
TO_CHAR(l.need_by_date,'MM/DD/YYYY') ,
pe.full_name ,
l.item_description,
c.segment1,
c.segment2,
pr.segment1 ,
t.task_number
UNION ALL
SELECT
r.segment1 requistion_num,
to_char(l.line_num) line_num,
v.vendor_name supplier,
p.CONCATENATED_SEGMENTS category,
to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
TO_CHAR(l.need_by_date,'MM/DD/YYYY') need_by_date,
pe.full_name requestor,
l.item_description,
pr.segment1 project_num,
t.task_number,
c.segment1,
c.segment2
FROM po_requisition_headers_all r,
po_requisition_lines_all l,
(SELECT project_id,task_id,code_combination_id, distribution_id,requisition_line_id,creation_date FROM
(SELECT project_id,task_id,code_combination_id,distribution_id,creation_date,requisition_line_id,ROW_NUMBER ()
OVER (PARTITION BY requisition_line_id ORDER BY requisition_line_id,distribution_id ) rn
FROM po_req_distributions_all pod) WHERE rn = 1) d,
gl_code_combinations c,
POR_CATEGORY_LOV_V p,
per_people_v7 pe,
PA_PROJECTS_ALL pr,
PA_TASKS_ALL_V t,
ap_vendors_v v,
fnd_lookup_values_vl flv,
per_people_v7 p1,
per_people_v7 p2
WHERE d.creation_date >= nvl(to_date('11-APR-2008',
'DD-MON-YYYY HH24:MI:SS'),SYSDATE-1)
AND
l.requisition_header_id = r.requisition_header_id
AND l.requisition_line_id = d.requisition_line_id
AND d.code_combination_id = c.code_combination_id
AND r.APPS_SOURCE_CODE = 'POR'
AND l.org_id = 141
AND l.category_id = p.category_id
AND r.authorization_status IN ('IN PROCESS','PRE-APPROVED','APPROVED')
AND l.to_person_id = pe.person_id
AND pr.project_id(+) = d.project_id
AND t.project_id(+) = d.project_id
AND t.task_id(+) = d.task_id
AND v.vendor_id(+) = l.vendor_id
AND flv.attribute1=c.segment1
AND flv.attribute2=c.segment2
AND flv.lookup_type = 'BIO_FIN_APPROVER_INFO'
and flv.attribute3 = p1.person_id(+)
and flv.attribute4 = p2.person_id(+)
and r.requisition_header_id in(
SELECT requisition_header_id FROM po_requisition_lines_all pl
GROUP BY requisition_header_id HAVING SUM(nvl(pl.quantity,0) * nvl(pl.unit_price, 0)) >=100000)
group by
r.REQUISITION_HEADER_ID,
r.segment1 ,
to_char(l.line_num) ,
v.vendor_name,
p.CONCATENATED_SEGMENTS ,
to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
TO_CHAR(l.need_by_date,'MM/DD/YYYY') ,
pe.full_name ,
l.item_description,
c.segment1,
c.segment2,
pr.segment1 ,
t.task_number
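One common way to tell the two branches apart is to add a literal source-marker column to each branch of the UNION ALL. A generic sketch, with the branch queries as placeholders for the full statements above:

```java
// UnionTag.java -- wraps each UNION ALL branch with a marker column.
public class UnionTag {

    // Wrap each branch of a UNION ALL with a literal marker column so rows
    // can be traced back to their source set. The branch SQL passed in is
    // simplified/hypothetical; the same pattern applies to the full queries.
    public static String tagged(String branch1, String branch2) {
        return "SELECT 'SET1' AS src, q1.* FROM (" + branch1 + ") q1"
             + " UNION ALL "
             + "SELECT 'SET2' AS src, q2.* FROM (" + branch2 + ") q2";
    }

    public static void main(String[] args) {
        System.out.println(tagged("SELECT 1 AS n FROM dual",
                                  "SELECT 2 AS n FROM dual"));
    }
}
```

To keep only one copy when the sets overlap (the 14-of-16 case), the tagged union can be wrapped in an outer query that ranks rows per business key, e.g. ROW_NUMBER() OVER (PARTITION BY key_columns ORDER BY src) and keeps only rank 1, so SET1 wins ties.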