Improving result set performance
I am getting really poor performance returning a result set from an Oracle database with JDBC. We've tested the queries on the database side and they are lightning fast. I've tried messing with the fetch size but nothing I do improves performance. The process is fairly simple. I am calling a remote procedure:
CallableStatement callStmt = conn.prepareCall(query);
callStmt.registerOutParameter(1, OracleTypes.CURSOR);
callStmt.setString (2, id);
callStmt.execute();
// return the result set
ResultSet rset = (ResultSet)callStmt.getObject(1);
// loop through rows and read in data points
while (rset.next()) {
DataPoint data = new DataPoint();
data.setPoint1(rset.getTimestamp(2));
data.setPoint2(rset.getLong(3));
array.add(data);
}
rset.close();
callStmt.close();
It's taking 10-15 seconds to read in 1,000 records. It needs to read in many more rows than that, but I had to pare down the query just to get it to a reasonable amount of time for debugging. Thanks for any help you can offer.
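For what it's worth, the reported numbers are themselves a clue: 10-15 seconds for 1,000 rows is roughly 10-15 ms per row, which is the classic signature of one network round trip per fetched row. With Oracle JDBC, the ResultSet obtained from a REF CURSOR does not necessarily pick up the fetch size set on the callable statement, so a common fix is to set it on the result set itself (for example `rset.setFetchSize(100)` right after `getObject`). A small sketch of the arithmetic (the 12 ms latency figure is an assumption for illustration):

```java
// Back-of-envelope model: how many network round trips does a given fetch size cost?
class FetchMath {
    // Round trips needed to pull `rows` rows at a given fetch size (ceiling division).
    static long roundTrips(long rows, long fetchSize) {
        return (rows + fetchSize - 1) / fetchSize;
    }

    // Estimated wall time if each round trip costs `latencyMs` milliseconds.
    static long estimatedMs(long rows, long fetchSize, long latencyMs) {
        return roundTrips(rows, fetchSize) * latencyMs;
    }

    public static void main(String[] args) {
        long rows = 1000, latencyMs = 12; // ~12 ms/row matches the reported 10-15 s symptom
        for (long f : new long[] {1, 10, 100, 500}) {
            System.out.println("fetchSize=" + f
                + "  roundTrips=" + roundTrips(rows, f)
                + "  est=" + estimatedMs(rows, f, latencyMs) + " ms");
        }
    }
}
```

At a fetch size of 1 the estimate is 12,000 ms, squarely in the reported range; at 100 it drops to 120 ms. That is why the first thing to verify is that the fetch size actually took effect on the cursor's ResultSet, not just on the statement.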
Similar Messages
-
Performance issue - Result set
Hi,
I am facing performance problem with query execution. The code almost looks like this
Statement stmt = con.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(0);
result = stmt.executeQuery("SELECT name, phoneNumber from tbluser inner join tblUserDetails ON tbluser.userid = tblUserDetails.userid WHERE ([phoneNumber] like '%123456789%') ORDER BY [name], [userID]");
This query is taking too much time from the code. The same query takes only a few seconds to execute from the query analyzer.
When I changed the result set type to TYPE_SCROLL_SENSITIVE, its performance improved significantly.
Please help me find out why a scroll-sensitive result set performs better than a forward-only result set.
Can anybody show me how to use client-side/server-side cursors?
I have created indexes on the tables. Is there any other way to improve the performance? Thanks in advance. Any help will be greatly appreciated.
Edited by: Aryan.s on Sep 16, 2008 7:49 PM
OK, if you expect many rows from the select, change
stmt.setFetchSize(0);
to
stmt.setFetchSize(50);
Also, with "[phoneNumber] like '%123456789%'" the database can never use the index: a LIKE with a leading wildcard defeats index access, as do "in", "exists", and functions applied to the column ("function(column) = variable"). If you want the database to use the index, stick to "=", ">", "<", etc. My suggestion is:
result = stmt.executeQuery(
"SELECT name, phoneNumber " +
"from tbluser inner join tblUserDetails ON tbluser.userid =tblUserDetails.userid " +
"WHERE ([phoneNumber] = '123456789') ORDER BY [name], [userID]");
And use an input mask for that field in your application (a JFormattedTextField, for example). -
Performance to fetch result set from stored procedure.
I read some related threads, but couldn't find any good suggestions about the performance of fetching a result set from a stored procedure.
Here is my case:
I have a stored procedure which returns 2,030,000 rows. When I run the select part alone in DBArtisan, it takes about 3 minutes, so I know it's not a query problem. But when I call the stored procedure in DBArtisan in the following way:
declare cr SYS_REFCURSOR;
firstname char(20);
lastname char(20);
street char(40);
city char(20);
STATE varchar2(2);
begin
DISPLAY_ADDRESS(cr);
DBMS_OUTPUT.ENABLE(null);
LOOP
FETCH cr INTO firstname,lastname,street, city, state;
EXIT WHEN cr%NOTFOUND;
DBMS_OUTPUT.PUT_LINE( firstname||','|| lastname||','|| street||',' ||city||',' ||STATE);
END LOOP;
CLOSE cr;
end;
It takes about 100 minutes. When I used DBI's fetchrow_array in Perl code, it took about the same amount of time. However, the same stored procedure in Sybase without a cursor, with the same Perl code, takes only 12 minutes to display all results. We assumed Oracle would have better performance, so what could be the problem here?
The perl code:
my $dbh = DBI->connect($databaseserver, $dbuser, $dbpassword,
{ 'AutoCommit' => 0,'RaiseError' => 1, 'PrintError' => 0 })
or die "couldn't connect to database: " . DBI->errstr;
open OUTPUTFILE, ">$temp_output_path";
my $rc;
my $sql="BEGIN DISPLAY_ADDRESS(:rc); END;";
my $sth = $dbh->prepare($sql) or die "Couldn't prepare statement: " . $dbh->errstr;
$sth->bind_param_inout(':rc', \$rc, 0, { ora_type=> ORA_RSET });
$sth->execute() or die "Couldn't execute statement: " . $sth->errstr;
while($address_info=$rc->fetchrow_arrayref()){
my ($firstname, $lastname, $street, $city, $STATE) = @$address_info;
print OUTPUTFILE $firstname."|".$lastname."|".$street."|".$city."|".$STATE."\n";
}
$dbh->commit();
$dbh->disconnect();
close OUTPUTFILE;
Thanks!
Rulin
Thanks for your reply!
1) The stored procedure has head
CREATE OR REPLACE PROCEDURE X_OWNER.DISPLAY_ADDRESS
(
cv_1 IN OUT SYS_REFCURSOR
)
AS
err_msg VARCHAR2(100);
BEGIN
--Adaptive Server has expanded all '*' elements in the following statement
OPEN cv_1 FOR
Select ...
commit;
EXCEPTION
WHEN OTHERS THEN
err_msg := SQLERRM;
dbms_output.put_line (err_msg);
ROLLBACK;
END;
If I only run the select in DBArtisan, it displays all 2,030,000 rows in 3:44 minutes.
2) But when call stored procedure, it will take 80-100 minutes .
3) The stored procedure was translated from Sybase using migration tools. It's very simple; in Sybase it is just:
CREATE PROCEDURE X_OWNER.DISPLAY_ADDRESS
AS
BEGIN
select ..
The select part is exact same.
4) The perl code is almost exact same, except the query sql:
Sybase version: my $sql = "exec DISPLAY_ADDRESS";
and there is no need to bind the cursor parameter.
This is a batch job; we create a file with all the information and FTP it to clients every night.
Thanks!
Rulin -
Will the Result Cache improve database performance in 11g? What is the max size of the Result Cache?
Thanks for convincing me I really need a new laptop...
SQL> select /*+ result_cache */ count(*) from emp,emp,emp,emp,emp,emp,emp;
COUNT(*)
105413504
Elapsed: 00:00:52.94
SQL> select /*+ result_cache */ count(*) from emp,emp,emp,emp,emp,emp,emp;
COUNT(*)
105413504
Elapsed: 00:00:00.01
SQL> select /*+ result_cache */ count(*) from emp,emp,emp,emp,emp,emp,emp;
COUNT(*)
105413504
Elapsed: 00:00:00.01
SQL> select count(*) from emp,emp,emp,emp,emp,emp,emp;
COUNT(*)
105413504
Elapsed: 00:00:45.08
SQL> select count(*) from emp,emp,emp,emp,emp,emp,emp;
COUNT(*)
105413504
Elapsed: 00:01:16.29
SQL> select count(*) from emp,emp,emp,emp,emp,emp,emp;
COUNT(*)
105413504
Elapsed: 00:01:18.63
Message was edited by: Hoek
The result cache seems to have lagged a bit for the first hintless query. -
Query Performance and Result Set Query/Sub-Query
Hi,
I have an InfoSet with as many as <b>15 joins</b> across different ODS and master data. <b>The ODS objects are quite huge (20 million and 160 million records).</b> Even a few days' worth of data is taking a lot of time, and I need at least 3 months of data in a reasonable amount of time.
Could anyone please tell me whether <b>Sub-Query or Result Set Query have to be written against the same InfoProvider</b> (Cube, Infoset, etc)...or they can be used in queries
on different infoprovider...
Please suggest...Thanks in Advance.
Regards
Anil
Hi Bhanu,
I needed some help defining the Precalculated Value Set, as I wasn't successful.
Please suggest answers if possible for the questions below
1) Can I use filter criteria to restrict the value set for the characteristic when I define a query on an ODS? When I tried this it gave me errors as below:
"System error in program CL_RSR_REQUEST and form EXECUTE_VTABLE:COB_PRO_GET_ALWAYS.... Error when filling value set DELIVERY..
Job cancelled after system exception ERROR_MESSAGE"
2) Can I create a Value Set predefined on an InfoSet? I was not successful, as InfoSets have names such as "InfosetName_F000232" and I cannot find the characteristic in the Parameter tab for the Value Set in the Reporting Agent.
3) How does the precalculated Value Set variable help if I am not using any filtering criteria and it is storing the list of all values for the variable from the ODS, which consists of 20 million records?
Thanks for your help.
Kind Regards
Anil -
RDBMS Event Generator Issue - JDBC Result Set Already Closed
All -
I am having a problem with an RDBMS event generator that has been exposed by our Load Testing. It seems that after the database is under load I get the following exception trace:
<Aug 7, 2007 4:33:06 PM EDT> <Info> <EJB> <BEA-010213> <Message-Driven EJB: PollerMDB_SessionRqt_1186515408009's transaction was rolledback. The transact ion details are: Xid=BEA1-7F8C65474500D80A5B94(218826722),Status=Rolled back. [Reason=weblogic.transaction.internal.AppSetRollbackOnlyException],numRepli esOwedMe=0,numRepliesOwedOthers=0,seconds since begin=0,seconds left=60,XAServerResourceInfo[JMS_Affinity_cgJMSStore_auto_1]=(ServerResourceInfo[JMS_Affi nity_cgJMSStore_auto_1]=(state=rolledback,assigned=wli_int_1),xar=JMS_Affinity_cgJMSStore_auto_1,re-Registered = false),XAServerResourceInfo[ACS.Telcordi a.XA.Pool]=(ServerResourceInfo[ACS.Telcordia.XA.Pool]=(state=rolledback,assigned=wli_int_1),xar=ACS.Telcordia.XA.Pool,re-Registered = false),XAServerReso urceInfo[JMS_Affinity_cgJMSStore_auto_2]=(ServerResourceInfo[JMS_Affinity_cgJMSStore_auto_2]=(state=rolledback,assigned=wli_int_2),xar=null,re-Registered = false),SCInfo[wli_int_domain+wli_int_2]=(state=rolledback),SCInfo[wli_int_domain+wli_int_1]=(state=rolledback),properties=({START_AND_END_THREAD_EQUAL =false}),local properties=({weblogic.jdbc.jta.ACS.Telcordia.XA.Pool=weblogic.jdbc.wrapper.TxInfo@d0b2687}),OwnerTransactionManager=ServerTM[ServerCoordin atorDescriptor=(CoordinatorURL=wli_int_1+128.241.233.85:8101+wli_int_domain+t3+, XAResources={weblogic.jdbc.wrapper.JTSXAResourceImpl, Affinity_cgPool, J MS_Affinity_cgJMSStore_auto_1, ACSDispatcherCP_XA, ACS.Dispatcher.RDBMS.Pool, ACS.Telcordia.XA.Pool},NonXAResources={})],CoordinatorURL=wli_int_1+128.241 .233.85:8101+wli_int_domain+t3+).>
<Aug 7, 2007 4:33:06 PM EDT> <Warning> <EJB> <BEA-010065> <MessageDrivenBean threw an Exception in onMessage(). The exception was:
javax.ejb.EJBException: Error occurred while processing message received by this MDB. This MDB instance will be discarded after cleanup; nested exceptio n is: java.lang.Exception: Error occurred while preparing messages for Publication or while Publishing messages.
javax.ejb.EJBException: Error occurred while processing message received by this MDB. This MDB instance will be discarded after cleanup; nested exception is: java.lang.Exception: Error occurred while preparing messages for Publication or while Publishing messages
java.lang.Exception: Error occurred while preparing messages for Publication or while Publishing messages
at com.bea.wli.mbconnector.rdbms.intrusive.RDBMSIntrusiveQryMDB.fetchUsingResultSet(RDBMSIntrusiveQryMDB.java:561)
at com.bea.wli.mbconnector.rdbms.intrusive.RDBMSIntrusiveQryMDB.onMessage(RDBMSIntrusiveQryMDB.java:310)
at weblogic.ejb20.internal.MDListener.execute(MDListener.java:400)
at weblogic.ejb20.internal.MDListener.transactionalOnMessage(MDListener.java:333)
at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:298)
at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2698)
at weblogic.jms.client.JMSSession.execute(JMSSession.java:2523)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:224)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:183)
Caused by: java.sql.SQLException: Result set already closed
at weblogic.jdbc.wrapper.ResultSet.checkResultSet(ResultSet.java:105)
at weblogic.jdbc.wrapper.ResultSet.preInvocationHandler(ResultSet.java:67)
at weblogic.jdbc.wrapper.ResultSet_oracle_jdbc_driver_OracleResultSetImpl.next()Z(Unknown Source)
at com.bea.wli.mbconnector.rdbms.intrusive.RDBMSIntrusiveQryMDB.handleResultSet(RDBMSIntrusiveQryMDB.java:611)
at com.bea.wli.mbconnector.rdbms.intrusive.RDBMSIntrusiveQryMDB.fetchUsingResultSet(RDBMSIntrusiveQryMDB.java:514)
... 8 more
javax.ejb.EJBException: Error occurred while processing message received by this MDB. This MDB instance will be discarded after cleanup; nested exception is: java.lang.Exception: Error occurred while preparing messages for Publication or while Publishing messages
at com.bea.wli.mbconnector.rdbms.intrusive.RDBMSIntrusiveQryMDB.onMessage(RDBMSIntrusiveQryMDB.java:346)
at weblogic.ejb20.internal.MDListener.execute(MDListener.java:400)
at weblogic.ejb20.internal.MDListener.transactionalOnMessage(MDListener.java:333)
at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:298)
at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2698)
at weblogic.jms.client.JMSSession.execute(JMSSession.java:2523)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:224)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:183)
>
I have tried several things and had my team do research but we have not been able to find an answer to the problem.
If anyone can offer any insight as to why we might be getting this error, it would be greatly appreciated. Thanks!
I also have the same error during load testing, mainly this error:
"Unexpected exception while enlisting XAConnection java.sql.SQLException"
I tried rerunning after increasing the connection pool sizes and the transaction timeout, but no luck; a marginal improvement in performance, though.
I also tried changing the default tracking level to none, but no luck.
I am on 8.1 SP5, how about you?
Do share if you are able to bypass these errors.
cheers -
How Can we improve the report performance..?
Hi experts,
I am learning Business Objects XIR2. Please let me know how we can improve report performance, and please give the answer in a detailed way.
First find out why your report is performing slowly. Then fix it.
That sounds silly, but there's really no single-path process for improving report performance. You might find issues with the report. With the network. With the universe. With the database. With the database design. With the query definition. With report variables. With the ETL. Once you figure out where the problem is, then you start fixing it. Fixing one problem may very well reveal another. I spent two years working on a project where we touched every single aspect of reporting (from data collection through ETL and all the way to report delivery) at some point or another.
I feel like your question is a bit broad (meaning too generic) to address as you have phrased it. Even some of the suggestions already given...
Array fetch size - this determines the number of rows fetched at a single pass. You really don't need to modify this unless your network is giving issues. I have seen folks suggest setting this to one (which results in a lot of network requests) or 500 (which results in fewer requests but they're much MUCH larger). Does either improve performance? They might, or they might make it worse. Without understanding how your network traffic is managed it's hard to say.
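To put numbers on that trade-off: for N rows, a fetch size of f costs about ceil(N/f) round trips, each carrying roughly f times the bytes per row, so tiny fetch sizes are latency-bound and huge ones are memory- and bandwidth-bound. A quick sketch (the row count and row size are made-up illustration values):

```java
// Requests vs. payload-per-request for different array fetch sizes.
class FetchTradeoff {
    // Number of server requests needed to pull `rows` rows.
    static long requests(long rows, long fetchSize) {
        return (rows + fetchSize - 1) / fetchSize; // ceiling division
    }

    // Approximate bytes carried by each request.
    static long payloadPerRequestBytes(long fetchSize, long rowBytes) {
        return fetchSize * rowBytes;
    }

    public static void main(String[] args) {
        long rows = 100000, rowBytes = 200; // illustrative values only
        for (long f : new long[] {1, 50, 500, 5000}) {
            System.out.println("fetchSize=" + f
                + "  requests=" + requests(rows, f)
                + "  payload/request=" + payloadPerRequestBytes(f, rowBytes) + " B");
        }
    }
}
```

Neither extreme is automatically better; where the sweet spot sits depends on the network, which is exactly the reply's point.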
Shortcut joins? Sure, they can help, as long as they are appropriate. [Many times they are not.|http://www.dagira.com/2010/05/27/everything-about-shortcut-joins/]
And I could go on and on. The bottom line is that performance tuning doesn't typically fall into a "cookie cutter" approach. It would be better to have a specific question. -
The operation is not allowed for result set type FORWARD_ONLY
Hi,
I am trying to use a scroll-sensitive result set in the following manner -
Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
ResultSet rs = stmt.executeQuery("some sql here");
rs.last();
but the rs.last() throws this exception-
com.sap.dbtech.jdbc.exceptions.SQLExceptionSapDB: The operation is not allowed for result set type FORWARD_ONLY.
at com.sap.dbtech.jdbc.ResultSetSapDB.assertNotForwardOnly(ResultSetSapDB.java:2725)
at com.sap.dbtech.jdbc.ResultSetSapDB.last(ResultSetSapDB.java:557)
Database version: Kernel 7.5.0 Build 019-121-082-363
Driver Version: MaxDB JDBC Driver, MySQL MaxDB, 7.6.0 Build 030-000-005-567
This used to work with 7.5 version driver. Why this is failing with 7.6 driver, any clues?
Thanks,
Vinod
Hi Vinod,
due to a performance improvement when using forward-only cursors, we changed the default result set type from TYPE_SCROLL_SENSITIVE to TYPE_FORWARD_ONLY starting with JDBC driver version 7.6. So I guess the exception comes from a statement where you didn't set the result set type while creating it. Please check that all of the statements you want to be scrollable set the correct result set type.
Regards,
Marco -
Displaying large result sets in Table View – request for patterns
When providing a table of results from a large data set from SAP, care needs to be taken in order to not tax the R/3 database or the R/3 and WAS application servers. Additionally, in terms of performance, results need to be displayed quickly in order to provide sub-second response times to users.
This post is my thoughts on how to do this based on my findings that the Table UI element cannot send an event to retrieve more data when paging down through data in the table (hopefully a future feature of the Table UI Element).
Approach:
For data retrieval, we need to have an RFC with search parameters that retrieves a maximum number of records (say 200) and a flag whether 200 results were returned.
In terms of display, we use a table UI Element, and bind the result set to the table.
For sorting, when they sort by a column, if we have less than the maximum search results, we sort the result set we already have (no need to go to SAP), but otherwise the RFC also needs to have sort information as parameters so that sorting can take place during the database retrieval. We sort it during the SQL select so that we stop as soon as we hit 200 records.
For filtering, again, if less than 200 results, we just filter the results internally, otherwise, we need to go to SAP, and the RFC needs to have this parameterized also.
If the requirement is that the user must look at more than 200 results, we need to have a button on the screen to fetch the next 200 results. This implies that the RFC will also need to have a start point to return results from. Similarly, a previous 200 results button would need to be enabled once they move beyond the initial result set.
Limitations of this are:
1. We need to use custom RFC functions, as BAPIs don't generally provide this type of sorting and limiting of data.
2. Functions need to directly access tables in order to do sorting at the database level (to reduce memory consumption).
3. It's not a great interface to add buttons for getting the next/previous set of 200.
4. Obviously, based on where you are getting the data from, it may be better to load the data completely into an internal table in SAP, and do sorting and filtering on this, rather than use the database to do it.
Does anyone have a proven pattern for doing this, or any improvements to the above design? I'm sure SAP CRM must have to do this, or did they just go with a BSP view when searching for customers?
Note: I noticed there is a pattern for search results in some documentation, but it does not exist in the sneak preview edition of Developer Studio. Has anyone had any exposure to this?
Update - I'm currently investigating whether we can create a new value node and use a supply function to fill the data. It may be that when we bind this to the table UI element, it will call the function incrementally as it requires more data, and hence this could be a better solution.
Hi Matt,
I'm afraid the supply function will not help you out of this, because it is only called if the node is invalid or gets invalidated again. The number of elements a node contains defines the number of elements the table uses to determine the overall number of table rows. Something quite similar to what you want does already exist in the WD runtime for internal usage. As you've surely noticed, only "visibleRowCount" elements are initially transferred to the client. If you scroll down one or multiple lines, the following rows are internally transferred on demand. But this doesn't really help you, since:
1. You don't get this event at all and
2. Even if you would get the event, since the number of node elements determines the table's overall rows number, the event would never request to load elements with an index greater than number of node elements - 1.
You can mimic the desired behaviour by hiding the table footer and creating your own buttons for pagination and scrolling.
Assume you have 10 displayed rows and 200 overall rows. What you need to be able to implement the desired behaviour is:
1. A context attribute "maxNumberOfExpectedRows" type int, which you would set to 200.
2. A context attribute "visibleRowCount" type int, which you would set to 10 and bind to table's visibleRowCount property.
3. A context attribute "firstVisibleRow" type int, which you would set to 0 and bind to table's firstVisibleRow property.
4. The actions PageUp, PageDown, RowUp, RowDown, FirstRow and LastRow, which are used for scrolling and the corresponding buttons.
The action handlers do the following:
PageUp: firstVisibleRow -= visibleRowCount (must be >=0 of course)
PageDown: firstVisibleRow += visibleRowCount (first + visible must be < maxNumberOfExpectedRows)
RowDown/Up: firstVisibleRow++/-- with the same restrictions as in page "mode"
FirstRow/LastRow is easy, isn't it?
Since you know, which sections of elements has already been "loaded" into the dataSource-node, you can fill the necessary sections on demand, when the corresponding action is triggered.
For example, if you initially display elements 0..9 and go to the last row, you load from maxNumberOfExpectedRows (200) - visibleRowCount (10), so you would request entries 190 to 199 from the backend.
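The clamping rules above reduce to a few lines of index arithmetic. Here is a minimal sketch (the class itself is hypothetical, but the field names mirror the proposed context attributes):

```java
// Scrolling state machine for the hand-rolled pagination described above.
class Pager {
    int firstVisibleRow = 0;
    final int visibleRowCount;
    final int maxNumberOfExpectedRows;

    Pager(int visibleRowCount, int maxNumberOfExpectedRows) {
        this.visibleRowCount = visibleRowCount;
        this.maxNumberOfExpectedRows = maxNumberOfExpectedRows;
    }

    // firstVisibleRow + visibleRowCount must stay <= maxNumberOfExpectedRows
    void pageDown() {
        firstVisibleRow = Math.min(firstVisibleRow + visibleRowCount,
                                   maxNumberOfExpectedRows - visibleRowCount);
    }

    // firstVisibleRow must stay >= 0
    void pageUp() {
        firstVisibleRow = Math.max(firstVisibleRow - visibleRowCount, 0);
    }

    void firstRow() { firstVisibleRow = 0; }
    void lastRow()  { firstVisibleRow = maxNumberOfExpectedRows - visibleRowCount; }

    public static void main(String[] args) {
        Pager p = new Pager(10, 200);
        p.lastRow();
        System.out.println("last page starts at row " + p.firstVisibleRow); // 190
        p.pageUp();
        System.out.println("after PageUp: " + p.firstVisibleRow);           // 180
    }
}
```

RowUp/RowDown would be the same clamping with a step of 1 instead of visibleRowCount.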
A drawback is that the BAPIs/RFCs still have to be capable of processing such "section selecting".
Best regards,
Stefan
PS: And this is meant as a workaround and does not really replace your pattern request. -
SSIS and Update-Insert(Upsert) based on a SQL Server Result Set
So on a weekly basis, I have to sweep a subset of our Member data, and my query is built to produce this result set. I then have to take that result set and compare it to a data table which contains all that same information that I have provided to a 3rd-party vendor. So I need to take into account the following scenarios...
Obviously, if the Member is new, then I have to Insert its row to what we'll call table 3rdPartyMember and create a row out on a 3rdPartyMemberAuditTrail Table
Individually, I also have to manage whether any of the following data fields have changed: Name, Birth Date, Addressing Information, Member Plan, Member Termination. By "individually" I mean that if the Last Name has changed, then update its row in 3rdPartyMember and insert a row to 3rdPartyMemberAuditTrail indicating "Last Name Change"; similarly, if Address Line 1 has changed, "Address Line 1 Change".
So I guess my question is this. Should this be done in one whole Stored Procedure invoked by my SSIS Package or should it be done in pieces?
Create the Temporary Table to house our weekly Member subset extract and result set
Determine if the row exists on 3rdPartyMember Table and if it does not, Insert it
If the row exists, determine if there is a First Name Change...Last Name Change...BirthDate change...Address Line 1 change...etc... If there is a change, Update 3rdPartyMember Table and also Insert a row to 3rdPartyMemberAuditTrail indicating the change
Based on all this, I'm just not sure of the right way to approach it: should we create one big stored procedure to handle everything, or handle each data point in my SSIS package as an upsert based on its lookup? I am new to the SSIS world and don't really know what the generally accepted practice is for an SSIS package with multiple data-point update steps versus doing it all in one big stored procedure.
Also....if anyone know of some good YouTubes or web sites that would instruct me as to how to go about doing this, I'd GREATLY appreciate it.
Thanks for your review and am hopeful for a reply.
In my opinion, using SSIS is quite a pain for upsert functionality, especially with your requirement of maintaining an audit trail. With Google you can probably find 20 different variants, all of which are quite complex. I would go for the procedure version just to keep my sanity. It is so easy to test and verify.
I have been using an excellent upsert add-on from Pragmatic Works' Task Factory, but that would not help with the audit trail, though...
http://pragmaticworks.com/Products/Features?Feature=UpsertDestination(BatchUpdateOrInsert)
I recently discovered the CHECKSUM function in Transact-SQL; it might simplify your code and certainly improve performance if you have lots of rows.
Here is an example:
create table member
(id int primary key,
name char(10) not null,
address char(20) not null,
checks int not null);
create table member3p
(id int primary key,
name char(10) not null,
address char(20) not null,
checks int not null);
create index xmember3p on member3p(id,checks);
--create member3p_audit (...);
insert into member values(1,'tom','new york',0);
insert into member values(2,'mary','dallas',0);
insert into member values(3,'marvin','durham',0);
update member
set checks = checksum(name,address);
insert into member3p values(1,'tom','new york',0);
insert into member3p values(2,'mary','chicago',0);
update member3p
set checks = checksum(name,address);
insert into member3p
select *
from member m
where not exists
( select *
from member3p m3p
where m3p.id = m.id );
-- insert into audit...
select m.*,
case
when m.name <> m3p1.name then 'x'
else ' '
end as name_change,
case
when m.address <> m3p1.address then 'x'
else ' '
end as adr_change
from member m,
member3p m3p1
where m.id = m3p1.id and
exists
( select *
from member3p m3p2
where m3p2.id = m.id and
m3p2.checks <> m.checks );
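As an aside, the same hash-compare idea ports straight into application code: keep a hash of the compared columns and only examine rows whose hashes differ. A rough Java analogue (the Member layout is invented for illustration, and Objects.hash stands in for T-SQL's CHECKSUM):

```java
import java.util.Objects;

// Change detection via a stored hash of the compared columns,
// mirroring the "checks" column in the SQL example above.
class ChangeDetect {
    static final class Member {
        final int id;
        final String name;
        final String address;
        Member(int id, String name, String address) {
            this.id = id; this.name = name; this.address = address;
        }
        int checks() { return Objects.hash(name, address); } // "checks" column analogue
    }

    // True when the vendor copy looks stale (cheap hash comparison first).
    static boolean changed(Member current, Member stored) {
        return current.checks() != stored.checks();
    }

    public static void main(String[] args) {
        Member source = new Member(2, "mary", "dallas");
        Member stored = new Member(2, "mary", "chicago");
        System.out.println("row changed: " + changed(source, stored));
    }
}
```

One caveat carries over from CHECKSUM itself: equal hashes do not strictly prove equal rows (collisions are possible), so for an audit trail the hash is best used as a cheap pre-filter before a full column-by-column compare.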
-- loop and update member3p and member_audit
-- It's OK to update columns that were not changed
update member3p set name = @new_name, address = @new_address; -
I'm pretty sure this is basic stuff, and I should probably know this already, but here goes:
when you execute a query through EntityManager, are results eagerly fetched, or are they being loaded as the result set is iterated through?
I was wondering if when I iterate through only the first 5 results, is the whole result set still being populated?
I've been learning this stuff for a while but, to make matters worse, I was required to learn some LINQ in the meantime, so similar concepts got mixed up in my mind... thus this question.
Thanks for the time!
It would depend on the implementation of the JDBC driver what it does, but generally you're going to get eager fetching of at least the first-level records.
Linked records (non-primitive fields fetched from other tables) may be eagerly or lazily fetched depending on the configuration set; by default, however, those are eagerly fetched (and that's what JPA means when it talks about eager fetching).
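Conceptually, a lazily fetched field is just a value that materializes on first access. JPA providers implement this with generated proxies, but the core mechanic can be pictured as a memoizing holder (illustrative sketch only, not a JPA API):

```java
import java.util.function.Supplier;

// Deferred, memoized loading: the mechanism behind "lazy fetching".
class LazyRef<T> {
    private Supplier<T> loader; // non-null until the first access
    private T value;

    LazyRef(Supplier<T> loader) { this.loader = loader; }

    T get() {
        if (loader != null) {   // load exactly once, on first access
            value = loader.get();
            loader = null;
        }
        return value;
    }

    public static void main(String[] args) {
        int[] loads = {0};      // counts simulated database hits
        LazyRef<String> orders = new LazyRef<>(() -> { loads[0]++; return "child rows"; });
        System.out.println("loads before access: " + loads[0]); // 0, nothing fetched yet
        orders.get();
        orders.get();
        System.out.println("loads after two accesses: " + loads[0]); // 1, fetched once
    }
}
```

If the child rows are never accessed, the load never happens, which is exactly the saving lazy fetching buys you; the cost is an extra round trip later if they are accessed after all.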
If your dependency tree is deep and the child records are needed relatively rarely (for example you're not going to display them initially after a fetch, but only when explicitly drilling down), lazy fetching of those can improve performance. If they are required to be fetched as the norm, lazy fetching there can cost performance, so it's a design decision that requires some thought. -
Hello,
I am trying to get the client-side result set cache working, but I have no luck :-(
I'm using Oracle Enterprise Edition 11.2.0.1.0 and client driver 11.2.0.2.0.
Executing the query select /*+ result_cache */ * from p_item via SQL*Plus or Toad generates a nice execution plan with a RESULT CACHE node, and v$result_cache_objects contains some rows.
With that I've checked that the server-side cache works; now I want to cache on the client side.
My simple Java application looks like:
public class ResultCacheTest {
private static final String ID = UUID.randomUUID().toString();
private static final String JDBC_URL = "jdbc:oracle:oci:@server:1521:ORCL";
private static final String USER = "user";
private static final String PASSWORD = "password";
public static void main(String[] args) throws SQLException {
OracleDataSource ds = new OracleDataSource();
ds.setImplicitCachingEnabled(true);
ds.setURL( JDBC_URL );
ds.setUser( USER );
ds.setPassword( PASSWORD );
String sql = "select /*+ result_cache */ /* " + ID + " */ * from p_item d " +
"where d.i_size = :1";
for( int i=0; i<100; i++ ) {
OracleConnection connection = (OracleConnection) ds.getConnection();
connection.setImplicitCachingEnabled(true);
connection.setStatementCacheSize(10);
OraclePreparedStatement stmt = (OraclePreparedStatement) connection.prepareStatement( sql );
stmt.setLong( 1, 176 );
ResultSet rs = stmt.executeQuery();
int count = 0;
for(; rs.next(); count++ );
rs.close();
stmt.close();
System.out.println( "Execution: " + getExecutions(connection) + " Fetched: " + count );
connection.close();
}
}
private static int getExecutions( Connection connection ) throws SQLException {
String sql = "select executions from v$sqlarea where sql_text like ?";
PreparedStatement stmt = connection.prepareStatement(sql);
stmt.setString(1, "%" + ID + "%" );
ResultSet rs = stmt.executeQuery();
if( rs.next() == false )
return 0;
int result = rs.getInt(1);
if( rs.next() )
throw new IllegalArgumentException("not unique");
rs.close();
stmt.close();
return result;
}
}
100 times the same query is executed, and the statement execution count is incremented every time. I expected just 1 statement execution (client-database round trip) and 99 hits in the client result set cache. The view CLIENT_RESULT_CACHE_STATS$ is empty :-(
I'm using the Oracle documentation at http://download.oracle.com/docs/cd/E14072_01/java.112/e10589/instclnt.htm#BABEDHFF and I don't know why it doesn't work :-(
I'm thankful for every tip,
André Kullmann
I wanted to post a follow-up to (hopefully) clear up a point of potential confusion. That is, with the OCI Client Result Cache, the results are indeed cached on the client in memory managed by OCI.
As I mentioned in my previous reply, I am not a JDBC (or Java) expert so there is likely a great deal of improvement that can be made to my little test program. However, it is not intended to be exemplary, didactic code - rather, it's hopefully just enough to illustrate that the caching happens on the client (when things are configured correctly, etc).
My environment for this exercise is Windows 7 64-bit, Java SE 1.6.0_27 32-bit, Oracle Instant Client 11.2.0.2 32-bit, and Oracle Database 11.2.0.2 64-bit.
Apologies if this is a messy post, but I wanted to make it as close to copy/paste/verify as possible.
Here's the test code I used:
import java.sql.ResultSet;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import oracle.jdbc.pool.OracleDataSource;
import oracle.jdbc.OracleConnection;
class OCIResultCache
{
public static void main(String args []) throws SQLException
{
OracleDataSource ods = null;
OracleConnection conn = null;
PreparedStatement stmt = null;
ResultSet rset = null;
String sql1 = "select /*+ no_result_cache */ first_name, last_name " +
"from hr.employees";
String sql2 = "select /*+ result_cache */ first_name, last_name " +
"from hr.employees";
int fetchSize = 128;
long start, end;
try
{
ods = new OracleDataSource();
ods.setURL("jdbc:oracle:oci:@liverpool:1521:V112");
ods.setUser("orademo");
ods.setPassword("orademo");
conn = (OracleConnection) ods.getConnection();
conn.setImplicitCachingEnabled(true);
conn.setStatementCacheSize(20);
stmt = conn.prepareStatement(sql1);
stmt.setFetchSize(fetchSize);
start = System.currentTimeMillis();
for (int i=0; i < 10000; i++)
{
rset = stmt.executeQuery();
while (rset.next())
;
if (rset != null) rset.close();
}
end = System.currentTimeMillis();
if (stmt != null) stmt.close();
System.out.println();
System.out.println("Execution time [sql1] = " + (end-start) + " ms.");
stmt = conn.prepareStatement(sql2);
stmt.setFetchSize(fetchSize);
start = System.currentTimeMillis();
for (int i=0; i < 10000; i++)
{
rset = stmt.executeQuery();
while (rset.next())
;
if (rset != null) rset.close();
}
end = System.currentTimeMillis();
if (stmt != null) stmt.close();
System.out.println();
System.out.println("Execution time [sql2] = " + (end-start) + " ms.");
System.out.println();
System.out.print("Enter to continue...");
System.console().readLine();
}
finally
{
if (rset != null) rset.close();
if (stmt != null) stmt.close();
if (conn != null) conn.close();
}
}
}
In order to show that the results are cached on the client and thus server round trips are avoided, I generated a 10046 level 12 trace from the database for this session. This was done using the following database logon trigger:
create or replace trigger logon_trigger
after logon on database
begin
if (user = 'ORADEMO') then
execute immediate
'alter session set events ''10046 trace name context forever, level 12''';
end if;
end;
/
With that in place I then did some environmental setup and executed the test:
C:\Projects\Test\Java\OCIResultCache>set ORACLE_HOME=C:\Oracle\instantclient_11_2
C:\Projects\Test\Java\OCIResultCache>set CLASSPATH=.;%ORACLE_HOME%\ojdbc6.jar
C:\Projects\Test\Java\OCIResultCache>set PATH=%ORACLE_HOME%\;%PATH%
C:\Projects\Test\Java\OCIResultCache>java OCIResultCache
Execution time [sql1] = 1654 ms.
Execution time [sql2] = 686 ms.
Enter to continue...
This is all on my laptop, so results are not stellar in terms of performance; however, you can see that the portion of the test that uses the OCI client result cache executed in approximately half the time of the non-cached portion.
But, the more compelling data is in the resulting trace file which I ran through the tkprof utility to make it nicely formatted and summarized:
SQL ID: cqx6mdvs7mqud Plan Hash: 2228653197
select /*+ no_result_cache */ first_name, last_name
from
hr.employees
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 10000 0.10 0.10 0 0 0 0
Fetch 10001 0.49 0.54 0 10001 0 1070000
total 20002 0.60 0.65 0 10001 0 1070000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 94
Number of plan statistics captured: 1
Rows (1st) Rows (avg) Rows (max) Row Source Operation
107 107 107 INDEX FULL SCAN EMP_NAME_IX (cr=2 pr=0 pw=0 time=21 us cost=1 size=1605 card=107)(object id 75241)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 10001 0.00 0.00
SQL*Net message from client 10001 0.00 1.10
SQL ID: frzmxy93n71ss Plan Hash: 2228653197
select /*+ result_cache */ first_name, last_name
from
hr.employees
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 11 22 0
Fetch 2 0.00 0.00 0 0 0 107
total 4 0.00 0.01 0 11 22 107
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 94
Number of plan statistics captured: 1
Rows (1st) Rows (avg) Rows (max) Row Source Operation
107 107 107 RESULT CACHE 0rdkpjr5p74cf0n0cs95ntguh7 (cr=0 pr=0 pw=0 time=12 us)
0 0 0 INDEX FULL SCAN EMP_NAME_IX (cr=0 pr=0 pw=0 time=0 us cost=1 size=1605 card=107)(object id 75241)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
log file sync 1 0.00 0.00
SQL*Net message from client 2 1.13 1.13
The key differences here are the execute, fetch, and SQL*Net message values. Using the client-side cache, the values drop dramatically because the results come from client memory rather than round trips to the server.
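The round-trip arithmetic behind those trace numbers can be sketched with a couple of lines. This is illustrative arithmetic only, not an Oracle API: each execution costs one round trip for the execute plus one per fetch batch, and drivers typically piggyback the first fetch on the execute, which is likely why the trace shows roughly 10001 SQL*Net waits rather than 20000.

```java
// Rough round-trip estimate for a query executed repeatedly.
// rowsPerExec rows arrive in ceil(rowsPerExec / fetchSize) fetch calls,
// plus one round trip for the execute itself.
public class RoundTrips {
    static long perExecution(long rowsPerExec, int fetchSize) {
        long fetches = (rowsPerExec + fetchSize - 1) / fetchSize; // ceiling division
        return 1 + fetches; // execute + fetch calls
    }

    public static void main(String[] args) {
        // 107 employee rows per execution, fetch size 128, 10000 executions
        long uncached = 10_000 * perExecution(107, 128); // every execution hits the server
        long cached = perExecution(107, 128);            // first execution only; rest served locally
        System.out.println("uncached ~ " + uncached + " round trips"); // 20000
        System.out.println("cached   ~ " + cached + " round trips");   // 2
    }
}
```

With the client result cache, the per-execution cost collapses to zero server calls after the first execution, which is exactly what the second tkprof section (1 execute, 2 fetches) shows.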
Of course, corrections, clarifications, etc. welcome and so on...
Regards,
Mark -
How can I use ONE Text search iView to event/affect multiple Result Sets?
hello everyone,
i have a special situation in which i have 6 flat tables in my repository which all have a common field called Location ID (which is a lookup flat to the Locations table).
i am trying to build a page with a free-form text search iView on Table #1 (search field = Location ID). when I execute the search, the result set for Table #1 is properly updated, but how do I also get Result Set iViews for Tables #2-6 to also react to the event from Text Search for Table #1 so that they are updated?
i don't want to have to build 6 different text search iViews (one for each table). i just want to use ONE text search iView for all the different result set tables. but, in the documentation and iView properties, the text search iView doesn't have any eventing.
if you have any suggestions, please help.
many thanks in advance,
mm
hello Donna,
that should not be a problem, since you are dealing with result set and item details iViews, and custom eventing can be defined for those iViews.
Yes, it says "no records found" because an active search and record selection haven't been performed for it (only your main table has).
So, yes, define a custom event, and pass the appropriate parameters and you should be fine.
Creating a custom event between a Result Set iView and an Item Details iView is easy and works. I have done it.
See page 35 of the Portal Content Development Guide for a step-by-step example, which is what I used.
For my particular situation, the problem I'm having is that I want the Search Text iView's event (i.e., when the Submit button is pressed) to be published to multiple iViews, all with different tables. Those tables all share some common fields, which is what the Search iView has, so I'd like to pass the search criteria to all of the iViews.
-mm -
To improve the system performance of the code
Please help me improve the performance of the following program. It's very urgent.
report zsdr0125
no standard page heading
* LINE-SIZE 170 " SIR 061880
line-size 210 " SIR 061880
line-count 58
message-id zz.
** Report header ******************************************************
** Report name: Activity Costing Report
** Report id: RO-01148
** Designed/Coded. Tori Chandler. Reporting Team.
** Date: March 01, 2000.
** Original SIR: 016113
** Application Area: V - Sales & Distribution (OTC)
** Correction/Transport: D10K951579
** Description: It is normal business practice for logistics
** operations to charge business units for the activity
** incurred on their behalf. This is consistent with
** activity based costing principles between BU and
** shared resources. The activities involved are picking
** storage, shipping and receiving. The purpose of this
** report is to provide data for the first 3.
** QRB2 - 03/13/2000 - Tracy, Antoine, Christian, Tori
** History:
* 06/14/2001 SIR 032383 CTS Antoine Dailly
* A plant (WERKS) can have several Distrib points (VSTEL)
* SIGN = 032383
* Modification History:
* Date Modified by SIR CTS Description
*11/14/2001 J.CAMPION 034606 D10K979189 Logistics Activity report
* Logistics Activity reports
* We added fields ship to customer and country and we also make another
* total
* QRB2 Tracy L. Obrien
* Modification History:
* Date Modified by SIR CTS Description
*11/14/2001 J.CAMPION 37838 D10K982890 Select only
* material type HALB
*06/12/2002 T OBrien 38784 D10K988181
* Allow option to get material weight from Master data or
* from the delivery.
* Modified by SIR CTS Date
* Jim Harwood 42730 D10K993119 10 Oct 2002
* Description: Code amended to default the Goods Issue Date range to
* the previous month. Also amended to print out the Select Options.
* Modified by SIR CTS Date
* Jim Harwood 44381 D10K994598 18 Nov 2002
* Description: Correct date range processing. APPEND statement added
* so that record is added to internal table S_WADAT. Also S_VKORG
* removed as it was NO DISPLAY and nothing was assigned to it. Its use
* in an SQL call may be causing the wrong optimization.
* Modified by SIR CTS Date
* Tori Chandler 45567 D10K995875 03 Jan 2003
* Description: Correct reporting of weights for non-pickable items
* when the Material Master Data radiobutton is selected. Also found
* from SIR 37838, that material type is hardcoded on the LIPS. I
* created a new select option and the person needing the report
* can control if they want only HALB or all line items. Also,
* because of this the delivery weight is obtained from the header,
* changing to accumulate from LIPS to match which lines are selected
* QRB2: 1/15/2003: Eileen, Jerome and Tori
*{ INSERT D11K901833 1
* Modified by Sir CTS Date
* Sue Kastan 48712/054042 D11K901833 28 Aug, 2003
* Fix overcounting of records from LIPS
*} INSERT
* Modified by SIR CTS Date *
* Vijay Andea 061880 D11K918628 04/20/2006 *
* D11K946194 *
* Description: Enhance ZSDR0125 Activity Cost Driver Report to allow *
* Analysis by Product Groupings. *
*} INSERT
* Modified by SIR CTS Date *
* Prakash Arunachalam 091510 D11K950288 09/26/2006 *
* Description: Correct Activity Cost report - material weight *
* calculation *
* Modified by SIR CTS Date *
* Murali Krishna 301978 D50K903293 01/20/2008
* Description: Improve the system performance of this report and
* clean-up of code into various form routine
* Table declaration.
tables: likp, " SD Document: Delivery Header Data
lips, " SD document: Delivery: Item data
vepo, " SD Document: Shipping Unit Item (Content)
vekp, " SD Document: Shipping Unit Header
knvv, " Customer Master Sales Data
kna1, " General Data in Customer Master SIR 34606
marm, " Units of Measure
mara, " Material Master: General Data SIR 38784
t001l, " Stge locs
t001k, " Valuation area
tvswz, " Shipping Points per Plant
t134, " Material types
z0234. " Alternative Unit of Measure
*** Selection screen.
selection-screen begin of block b1 with frame title text-001.
parameters: p_werks like t001l-werks obligatory memory id wrk.
select-options: s_lgort for t001l-lgort,
* S_VKORG FOR TVKO-VKORG NO-DISPLAY ," SIR 34606, 42730
* S_WADAT FOR LIKP-WADAT_IST OBLIGATORY NO-EXTENSION,
s_wadat for likp-wadat_ist no-extension, " SIR 42730
s_mtart for t134-mtart, " SIR 45567
s_lfart for likp-lfart no-display no intervals.
selection-screen skip 2.
selection-screen comment 1(21) text-002. " SIR 38784
parameters: p_delwt radiobutton group grp1, " SIR 38784
p_mstwt radiobutton group grp1. " SIR 38784
*--------------------------------------------------*Start of SIR 061880
selection-screen skip 2.
selection-screen comment 1(21) text-006.
parameters: p_voldl radiobutton group 2, " Volume from Delivery
p_volmd radiobutton group 2. " Volume from Master Data
*-----------------------------------------------------End of SIR 061880
selection-screen end of block b1.
*--------------------------------------------------*Start of SIR 061880
selection-screen begin of block b2 with frame title text-007.
select-options: s_cbuun for knvv-kvgr1 no intervals, " Customer BU
s_mbuun for mara-prdha+1(2) no intervals, " Material BU
s_lobus for mara-prdha+3(3) no intervals, " LOB
s_pac1 for mara-prdha+6(3) no intervals. " PAC1
selection-screen end of block b2.
*----------------------------------------------------*End of SIR 061880
*---Type Declaration for Internal Tables------------------------------*
types: begin of t_likp,
vbeln like likp-vbeln, " delivery
vstel like likp-vstel, " shipping point
lfart like likp-lfart, " delivery type
vkorg like likp-vkorg, " Sales organization
kunag like likp-kunag, " sold-to party
kunnr like likp-kunnr, " ship to party SIR 34606
btgew like likp-btgew, " Delivery weight
gewei like likp-gewei, " Unit of weight
anzpk like likp-anzpk, " Number of Packages SIR 61880
volum like likp-volum, " Delivery Volume SIR 61880
voleh like likp-voleh, " Volume Unit SIR 61880
vtwiv like likp-vtwiv, " Distribution channel
spaiv like likp-spaiv, " Division
wadat_ist like likp-wadat_ist, " actual goods issue date
del_flg(1) type c, "(+) SIR 301978
end of t_likp.
types: begin of t_lips,
vbeln like lips-vbeln, " delivery
posnr like lips-posnr, " delivery line
matnr like lips-matnr, " material
lgort like lips-lgort, " storage location
prodh like lips-prodh, " product hierarchy
meins like lips-meins, " base UoM
brgew like lips-brgew, " Material weight
gewei like lips-gewei, " Unit of weight
volum like lips-volum, " Material Volume SIR 61880
voleh like lips-voleh, " Volume Unit SIR 61880
lgmng like lips-lgmng, " actual delivery quantity
komkz like lips-komkz, " Indicator for picking control
mtart like lips-mtart, " Material type " SIR 37838
del_flg(1) type c, "(+) SIR 301978
end of t_lips.
types: begin of t_vepo,
venum like vepo-venum, " shipping unit number
vbeln like vepo-vbeln, " delivery
end of t_vepo.
types: begin of t_vekp,
venum like vekp-venum, " shipping unit number
brgew like vekp-brgew, " actual weight
gewei_max like vekp-gewei_max, " unit of weight
vpobjkey like vekp-vpobjkey, " key for assigned object
end of t_vekp.
types: begin of t_knvv,
kunnr like knvv-kunnr, " customer number
ktgrd like knvv-ktgrd, " acct assign group
kvgr1 like knvv-kvgr1, " customer group 1
end of t_knvv.
types: begin of t_kna1, " SIR 34606
kunnr like kna1-kunnr, " customer number " SIR 34606
land1 like kna1-land1, " country " SIR 34606
end of t_kna1. " SIR 34606
types: begin of t_marm,
matnr like marm-matnr, " material
meinh like marm-meinh, " Alt unit of measure 032383
umrez like marm-umrez, " numerator
umren like marm-umren, " denominator
end of t_marm.
types: begin of t_mara, " SIR 38784
matnr like mara-matnr, " material " SIR 38784
prdha like mara-prdha, " Product Hierarchy " SIR 61880
brgew like lips-brgew, " gross weight " SIR 38784
gewei like mara-gewei, " Unit of weight " SIR 38784
volum like mara-volum, " Volume " SIR 61880
voleh like mara-voleh, " Volume Unit " SIR 61880
del_flg(1) type c, "(+) SIR 301978
end of t_mara. " SIR 38784
types: begin of t_tvswz,
vstel like tvswz-vstel, " shipping point
end of t_tvswz.
types: begin of t_z0234, "032383
vstel like z0234-vstel, " shipping point 032383
zpaluom like z0234-zpaluom," pallet unit of measure 032383
zcsuom like z0234-zcsuom," Case unit of measure 032383
end of t_z0234. "032383
types: begin of t_output_dt,
wadat_ist like likp-wadat_ist, " Goods issue date
ktgrd like knvv-ktgrd," acct assign group
bu like knvv-kvgr1," business unit
kunnr like kna1-kunnr," ship to location SIR 34606
land1 like kna1-land1," ship to location SIR 34606
d_btgew like likp-btgew," delivery weight
m_brgew like lips-brgew," material weight
a_brgew like vekp-brgew," actual weight of ship unit
num_del type i, " counter of deliveries
num_pallets type i, " number of pallets
num_cases type i, " number of cases
num_loose type i, " loose quantity
num_delln type i, " counter of delivery lines
* packages like likp-anzpk," Number of Packages " SIR 61880
packages(3) type p, " Number of Packages " SIR 61880
volume like lips-volum," Volume " SIR 61880
lobus(3) type c, " Line of Business " SIR 61880
pac1(3) type c, " PAC1 " SIR 61880
end of t_output_dt.
types: begin of t_output_ag,
ktgrd like knvv-ktgrd," acct assign group
bu like knvv-kvgr1," business unit
land1 like kna1-land1," country SIR 34606
d_btgew like likp-btgew," delivery weight
m_brgew like lips-brgew," material weight
a_brgew like vekp-brgew," actual weight of ship unit
num_del type i, " counter of deliveries
num_pallets type i, " number of pallets
num_cases type i, " number of cases
num_loose type i, " loose quantity
num_delln type i, " counter of delivery lines
* packages like likp-anzpk," Number of Packages " SIR 61880
packages(3) type p, " Number of Packages " SIR 61880
volume like lips-volum," Volume " SIR 61880
lobus(3) type c, " Line of Business " SIR 61880
pac1(3) type c, " PAC1 " SIR 61880
end of t_output_ag.
types: begin of t_output_gs, " SIR 34606
ktgrd like knvv-ktgrd," acct assign group " SIR 34606
bu like knvv-kvgr1," business unit " SIR 34606
d_btgew like likp-btgew," delivery weight " SIR 34606
m_brgew like lips-brgew," material weight " SIR 34606
a_brgew like vekp-brgew," actual weight " SIR 34606
num_del type i, " counter of deliv " SIR 34606
num_pallets type i, " number of pallets " SIR 34606
num_cases type i, " number of cases " SIR 34606
num_loose type i, " loose quantity " SIR 34606
num_delln type i, " counter of deliv " SIR 34606
* packages like likp-anzpk, " Number of Package " SIR 61880
packages(3) type p, " Number of Packages" SIR 61880
volume like lips-volum, " Volume " SIR 61880
lobus(3) type c, " Line of Business " SIR 61880
pac1(3) type c, " PAC1 " SIR 61880
end of t_output_gs. " SIR 34606
*-------------------------------------------------* Begin of SIR 061880
* Material Type
types: begin of t_mtart,
mtart like t134-mtart, " Material Type
end of t_mtart.
* Customer Business Unit.
types: begin of t_kvgr1,
kvgr1 like knvv-kvgr1, " Customer Group 1
end of t_kvgr1.
* Storage Location.
types: begin of t_lgort,
lgort like t001l-lgort, " Storage Location
end of t_lgort.
* Begin of SIR 301978
* Header: Material Document
types: begin of t_mkpf,
vgart type mkpf-vgart,
xblnr type likp-vbeln,
end of t_mkpf.
* End of SIR 301978
*---------------------------------------------------* End of SIR 061880
*---Internal Tables---------------------------------------------------*
data: i_likp type t_likp occurs 0 with header line,
i_temp_likp type t_likp occurs 0 with header line,
v_likp type t_likp,
i_lips type t_lips occurs 0 with header line,
i_temp_lips type t_lips occurs 0 with header line,
v_lips type t_lips,
i_vepo type t_vepo occurs 0,
* V_VEPO TYPE T_VEPO,
i_vekp type t_vekp occurs 0,
v_vekp type t_vekp,
i_knvv type t_knvv occurs 0 with header line,
v_knvv type t_knvv,
i_kna1 type t_kna1 occurs 0, " SIR 34606
v_kna1 type t_kna1, " SIR 34606
i_z0234 type t_z0234 occurs 0, " 032383
v_z0234 type t_z0234, " 032383
i_z0234_uom type t_z0234 occurs 0, " 032383
v_z0234_uom type t_z0234, " 032383
i_marm type t_marm occurs 0 with header line," SIR 61880
i_marm_pallet type t_marm occurs 0 with header line,
v_marm_pallet type t_marm,
i_marm_case type t_marm occurs 0 with header line,
v_marm_case type t_marm,
*-------------------------------------------------* Begin of SIR 061880
* I_MARA TYPE T_MARA OCCURS 0, " SIR 38784
i_mara1 type t_mara occurs 0 with header line,
i_mtart type t_mtart occurs 0 with header line,
i_kvgr1 type t_kvgr1 occurs 0 with header line,
i_lgort type t_lgort occurs 0 with header line,
v_kvgr1 type t_kvgr1,
*---------------------------------------------------* End of SIR 061880
v_mara type t_mara, " SIR 38784
i_tvswz type t_tvswz occurs 0,
v_tvswz type t_tvswz, "(+) SIR 301978
i_output_dt type t_output_dt occurs 0,
v_output_dt type t_output_dt,
i_output_ag type t_output_ag occurs 0,
v_output_ag type t_output_ag,
i_output_gs type t_output_gs occurs 0, " SIR 34606
v_output_gs type t_output_gs, " SIR 34606
i_mkpf type table of t_mkpf with header line."SIR 301978
*** Data Declarations *
data: v_page(3) type c, " Page Counter
v_comp like t001k-bukrs, " zbsn0001 company code
v_title(24) type c, " zbsn0001 report title
v_rpttyp type c, " report type
v_ok type c, " control While... endwhile
v_diff_date type p, " days between selection
v_werks like t001l-werks, " plant
* v_z0234_zpaluom like z0234-zpaluom, " Pallet Unit of Measure
* v_z0234_zcsuom like z0234-zcsuom, " Case Unit of Measure
v_palwto type p decimals 6, " SIR 091510
* "like likp-btgew, " Weight after conversion
v_vekp_tabix like sy-tabix, " index on read
v_vekp_sum_brgew like vekp-brgew, " actual weight
v_pallet_qty like lips-lgmng, " calculated pallet qty
v_pallet_integer type i, " true pallet qty
v_case_qty like lips-lgmng, " calculated case qty
v_case_integer type i, " true case qty
v_qty_not_pallets like lips-lgmng, " calculated
v_num_pallets like lips-lgmng, " calculated nbr of pallets
v_num_pallets_int type i, " true nbr of pallets
v_num_cases like lips-lgmng, " calculated nbr of cases
v_num_cases_int type i, " true nbr of cases
v_total_case_qty like lips-lgmng, " case quantity
v_loose_qty type i, " calculated loose quantity
*-------------------------------------------------* Begin of SIR 061880
v_volume like lips-volum, " Volume After Conversion
v_cbuun like knvv-kvgr1, " Customer BU
v_mbuun like knvv-kvgr1, " Material BU
v_lobus(3) type c, " Line of Business
v_pac1(3) type c, " PAC1
v_flag type c. " Flag Indicator for No Data
*---------------------------------------------------* End of SIR 061880
* Begin of SIR 301978
* Ranges
ranges : r_vstel for tvswz-vstel.
* End of SIR 301978
* Constants
data: c_uom(3) type c value 'KG'. " Kilogram Unit of Meas.
data : c_vom like mara-voleh value 'M3'. " Cubic meter. " SIR 061880
constants : c_wl(2) type c value 'WL'. "(+) SIR 301978
* Initialization.
initialization.
s_lfart-sign = 'I'.
s_lfart-option = 'EQ'.
s_lfart-low = 'LF '.
append s_lfart.
s_lfart-low = 'NL '.
append s_lfart.
s_lfart-low = 'NLCC'.
append s_lfart.
s_lfart-low = 'ZLFI'.
append s_lfart.
* AT SELECTION-SCREEN.
at selection-screen.
* SIR 42730 - If no Goods Issue Date has been specified in the
* Select Options set the range to the previous month.
if s_wadat-low is initial.
s_wadat-sign = 'I'.
s_wadat-option = 'BT'.
s_wadat-high = sy-datum. " Today's date
s_wadat-high+6(2) = '01'. " First of this month
s_wadat-high = s_wadat-high - 1. " End of last month
s_wadat-low = s_wadat-high. " End of last month
s_wadat-low+6(2) = '01'. " First of last month
append s_wadat.
endif. " SIR 42730 IF S_WADAT-LOW IS INITIAL.
clear v_werks. "(+) SIR 301978
* Validate Plant/Storage Location from selection screen
select werks up to 1 rows into v_werks from t001l
where werks = p_werks and
lgort in s_lgort.
endselect.
if sy-subrc ne 0.
message e045 with text-e01.
endif.
* Validate Storage Location
if not s_lgort[] is initial.
select lgort from t001l into table i_lgort where lgort in s_lgort.
if sy-subrc ne 0.
message e045 with text-e09.
endif.
endif.
* Validate date range. do not allow more that 31 days
if not s_wadat-high is initial.
v_diff_date = s_wadat-high - s_wadat-low.
if v_diff_date >= '31'.
message e045 with text-e02.
endif.
endif.
*-------------------------------------------------* Begin of SIR 061880
* Validation for Material Type in Selection Screen
if not s_mtart[] is initial.
select mtart from t134 into table i_mtart where mtart in s_mtart.
if sy-subrc ne 0.
message e045 with text-e07.
endif.
endif.
at selection-screen on block b2.
* Validation for Material Business Unit and Customer Business Unit.
if s_cbuun-low is not initial and s_mbuun-low is not initial.
message e045 with text-e05.
endif.
* Validation for Possible combinations of Material BU, Line of Business
* and PAC1
if ( s_mbuun-low is not initial and s_lobus-low is not initial
and s_pac1-low is not initial )
or ( s_mbuun-low is not initial and s_lobus-low is not initial )
or ( s_lobus-low is not initial and s_pac1-low is not initial )
or ( s_mbuun-low is not initial and s_pac1-low is not initial ).
message e045 with text-e06.
endif.
* Validation for Customer Business Unit.
if not s_cbuun[] is initial.
select kvgr1 from tvv1 into table i_kvgr1 where kvgr1 in s_cbuun.
if sy-subrc ne 0.
message e045 with text-e08.
endif.
* free: i_kvgr1.
endif.
*---------------------------------------------------- End of SIR 061880
* TOP-OF-PAGE
* Top of the page routine to print the headers and columns
top-of-page.
format color col_heading on.
if v_rpttyp = 'D'.
v_title = text-h01.
else.
v_title = text-h02.
endif.
perform zbsn0001_standard_header " Standard Report Heading Form
using v_comp v_title 'U' v_page.
skip.
if v_rpttyp = 'D'.
perform write_dtlvl_headings.
else.
perform write_aglvl_headings.
endif.
* FORM WRITE_DTLVL_HEADINGS *
* for date detail level, print the column headers *
form write_dtlvl_headings.
write: /001 text-h03, " Acct
038 text-h04, " ------ P I C K I N G ------
105 text-h05, " ----- S H I P P I N G -----
/001 text-h06, " Group
007 text-h07, " Date
017 text-h08, " BU
021 text-h17, " ship to party 34606
030 text-h18, " country
039 text-h09, " Pallets
057 text-h10, " Cases
075 text-h11, " Loose
093 text-h12, " Lines
105 text-h13, " Material Kg
124 text-h16, " Delivery Kg
143 text-h14, " Actual Kg
161 text-h15, " Deliveries
*----------------------------------------------------Start of SIR 61880
173 text-h20, " Packages
189 text-h21, " Volume
201 text-h22, " LOB
207 text-h23. " PAC1
*-----------------------------------------------------*End of SIR 61880
skip.
endform. " end of write_dtlvl_headings.
* FORM WRITE_AGLVL_HEADINGS *
* for account group detail level, print the column headers *
form write_aglvl_headings.
write: /001 text-h03, " Acct
038 text-h04, " ------ P I C K I N G ------
105 text-h05, " ----- S H I P P I N G -----
/001 text-h06, " Group
017 text-h08, " BU
030 text-h18, " country SIR 34606
039 text-h09, " Pallets
057 text-h10, " Cases
075 text-h11, " Loose
093 text-h12, " Lines
105 text-h13, " Material Kg
124 text-h16, " Delivery Kg
143 text-h14, " Actual Kg
161 text-h15, " Deliveries
*---------------------------------------------------*Start of SIR 61880
173 text-h20, " Packages
189 text-h21, " Volume
201 text-h22, " LOB
207 text-h23. " PAC1
*-----------------------------------------------------*End of SIR 61880
skip.
endform. " end of write_aglvl_headings.
*--------------------------- SUBROUTINES -----------------------------*
include zbsn0001. " Include to print all Standard Report Titles
include zsdn0004. " Print the Select Options
*** MAIN SELECTION. *
start-of-selection.
refresh: i_likp, i_temp_likp, i_lips, i_temp_lips, i_vepo,
i_vekp, i_knvv, i_marm_pallet, i_marm_case, i_tvswz,
i_output_dt, i_output_ag, i_mara1.
clear: v_likp, v_lips, v_vekp, v_knvv, v_mara,
v_marm_pallet, v_marm_case, v_output_dt, v_output_ag.
v_rpttyp = 'D'.
* SIR 42730 - Echo the Select Options to the output listing and print
* the Goods Issue Date range being used.
write: /,/. " SIR 43730 - Skip a couple of lines to centre it.
perform zsdn0004_print_select_options using sy-cprog ' '.
if s_wadat-high is initial.
write: /,/, text-003, 34 s_wadat-low.
else.
write: /,/, text-003, 34 s_wadat-low, 44 text-004, 47 s_wadat-high.
endif.
new-page.
* Main Processing. *
perform get_data.
if not i_lips[] is initial.
perform create_output.
endif.
* END-OF-SELECTION
end-of-selection.
*-------------------------------------------------* Start of SIR 061880
* PERFORM WRITE_REPORT.
if not i_output_dt[] is initial and not i_output_ag[] is initial and
not i_output_gs[] is initial.
perform write_report.
else.
skip 2.
write:/ text-e03.
skip 2.
uline.
clear: v_flag.
endif.
*---------------------------------------------------* End of SIR 061880
* FORM GET_DATA *
* build all of the internal tables needed for the report *
* selects on t001k, z0234, likp, lips, vepo, vekp, knvv & marm, mara *
form get_data.
perform get_data_t001k. " SIR 301978
perform get_data_tvswz. " SIR 301978
if sy-subrc = 0. " Build shipping point range
perform get_data_Z0234. " SIR 301978
if sy-subrc ne 0.
v_flag = 'X'.
* stop.
else.
sort i_z0234 by vstel. " 032383
endif.
* Begin of SIR 301978
* The data retrieval from LIKP has been modified for performance
* reasons
perform get_data_mkpf. " SIR 301978
if sy-subrc eq 0.
* Deleting data other than Goods issued for delivery
delete i_mkpf where vgart ne c_wl.
sort i_mkpf by xblnr.
endif.
* Deleting the data from the internal table i_likp by comparing
* shipping point
r_vstel-sign = 'I'.
r_vstel-option = 'EQ'.
clear : v_tvswz.
loop at i_tvswz into v_tvswz.
r_vstel-low = v_tvswz-vstel.
append r_vstel.
endloop.
if not i_mkpf[] is initial.
perform get_data_likp. " SIR 301978
endif.
if sy-subrc = 0.
delete i_likp where not lfart in s_lfart.
* End of SIR 301978
* Get data for the delivery lines
if not i_likp[] is initial.
perform get_data_lips. " SIR 301978
* Begin of SIR 301978
if i_lips[] is initial.
v_flag = 'X'.
message i089 with text-i02.
leave list-processing.
endif. " Return code for LIPS select
* End of SIR 301978
endif.
*-------------------------------------------------* Begin of SIR 061880
* Get Data From MARA (Material Master) to Read Material Weight and
* Material Volume from Master Data.
if not i_lips[] is initial.
* Begin of SIR 301978
* Delete the duplicate material from delivery item table
i_temp_lips[] = i_lips[].
sort i_temp_lips by matnr.
delete adjacent duplicates from i_temp_lips comparing matnr.
perform get_data_mara. " SIR 301978
if sy-subrc = 0.
sort i_mara1 by matnr.
clear i_temp_lips.
refresh i_temp_lips.
* End of SIR 301978
else.
v_flag = 'X'.
message i089 with text-i01.
leave list-processing.
endif.
endif.
* Filter I_LIPS and I_MARA1 When Either material BU or Line of
* Business (LOB) or PAC1 are not left Blank
perform filter_data_for_prdha.
* Get data for the sold-to customer
perform i_knvv_fill_data.
* Filter I_LIKP & I_LIPS & I_MARA for Cust-BU, When Cust-Bu is not
* left Blank.
perform filter_likp_lips_mara_custbu.
*---------------------------------------------------- End of SIR 061880
*-------------------------------------------------* Begin of SIR 034606
* Get data for the ship to party
i_temp_likp[] = i_likp[].
sort i_temp_likp by kunnr.
delete adjacent duplicates from i_temp_likp comparing kunnr.
perform get_data_kna1. " SIR 301978
if sy-subrc = 0.
sort i_kna1 by kunnr.
endif.
*---------------------------------------------------* End of SIR 034606
else.
v_flag = 'X'.
message i089 with text-i04.
leave list-processing.
endif. " return code to LIKP select
* endif. " SIR 061880
free: i_temp_likp.
endif. " return code for TVSWZ select
* Process table LIPS
if not i_lips[] is initial.
i_temp_lips[] = i_lips[].
sort i_temp_lips by vbeln.
delete adjacent duplicates from i_temp_lips comparing vbeln.
* Get actual weight
perform get_data_vepo. " SIR 301978
if sy-subrc = 0.
sort i_vepo by venum.
delete adjacent duplicates from i_vepo comparing venum.
perform get_data_vekp. " SIR 301978
sort i_vekp by vpobjkey.
endif.
free: i_vepo, i_temp_lips.
i_temp_lips[] = i_lips[].
sort i_temp_lips by matnr.
delete adjacent duplicates from i_temp_lips comparing matnr.
* Get Units of Measure for Material
perform read_data_from_marm. " SIR 061880
* Begin of SIR 032383
i_z0234_uom = i_z0234.
sort i_z0234_uom by zpaluom.
delete adjacent duplicates from i_z0234_uom comparing zpaluom.
* End of SIR 032383
clear i_marm_pallet. " SIR 061880
refresh i_marm_pallet. " SIR 061880
loop at i_z0234_uom into v_z0234_uom. "LOOP Z0234 032383
*-------------------------------------------------* Begin of SIR 061880
* Get Alternative Unit of Measure for Pallets
perform i_marm_pallet_fill.
endloop. "ENDLOOP Z0234 032383
** get alternative UoM for pallets
* SELECT MATNR " material
* MEINH " Alt unit of measure " 032383
* UMREZ " numerator
* UMREN " denominator
** into table i_marm_pallet " 032383
* APPENDING TABLE I_MARM_PALLET "032383
* FROM MARM
* FOR ALL ENTRIES IN I_TEMP_LIPS
* WHERE MATNR = I_TEMP_LIPS-MATNR AND
** meinh = v_z0234_zpaluom. " 032383
* MEINH = V_Z0234_UOM-ZPALUOM. " 032383
*---------------------------------------------------* End of SIR 061880
sort i_marm_pallet by matnr meinh.
free i_z0234_uom. "032383
i_z0234_uom = i_z0234. "032383
sort i_z0234_uom by zcsuom. "032383
delete adjacent duplicates from i_z0234_uom comparing zcsuom."32383
clear i_marm_case. " SIR 061880
refresh i_marm_case. " SIR 061880
loop at i_z0234_uom into v_z0234_uom. "LOOP Z0234 032383
*-------------------------------------------------* Begin of SIR 061880
* Get Alternative Unit of Measure for Cases
perform i_marm_case_fill.
endloop. "ENDLOOP Z0234 032383
*---------------------------------------------------* End of SIR 061880
sort i_marm_case by matnr meinh.
else.
v_flag = 'X'.
message i089 with text-i02.
* leave list-processing.
endif. " table LIPS is empty
endform. " get_data
* FORM CREATE_OUTPUT *
* process internal table LIPS, for each delivery/delivery lines create*
* an output record and collect into i_output internal table. Fields *
* used for header are goods issue date, acct assign, BU for customer, *
* total weight, total weight for shipping unit and count of *
* deliveries, the remaining fields will have a value of zero for the *
* collect. For each delivery line, fields used are goods issue date, *
* acct assign, BU for material, number of pallets, number of cases, *
* loose quantity and number of delivery lines. The remaining header *
* fields will be zero for the collect. *
form create_output.
*-------------------------------------------------* Begin of SIR 061880
sort i_mara1 by matnr.
sort i_lips by vbeln posnr matnr.
sort i_likp by vbeln.
sort i_z0234 by vstel.
sort i_knvv by kunnr.
sort i_kna1 by kunnr.
sort i_vekp by vpobjkey.
sort i_marm_pallet by matnr meinh.
sort i_marm_case by matnr meinh.
* LOOP AT I_LIPS INTO V_LIPS.
loop at i_lips.
clear v_lips.
v_lips = i_lips.
*---------------------------------------------------* End of SIR 061880
at new vbeln.
perform collect_header_data.
endat.
if not v_lips-komkz is initial. " SIR 45567
perform collect_item_data.
endif. " SIR 45567
endloop.
endform. " create_output
* FORM COLLECT_HEADER_DATA *
* Fields used for header are goods issue date, acct assign, *
* BU for customer, total weight, total weight for shipping unit *
* and count of deliveries, the remaining fields will have a value of *
* zero for the collect. *
form collect_header_data.
clear: v_likp, v_knvv, v_vekp, v_vekp_sum_brgew.
v_ok = 'Y'.
read table i_likp into v_likp with key vbeln = v_lips-vbeln
binary search.
if sy-subrc = 0. " SIR 061880
clear v_z0234. "(+) SIR 301978
read table i_z0234 into v_z0234 "32383
with key vstel = v_likp-vstel "32383
binary search. "32383
read table i_knvv into v_knvv with key kunnr = v_likp-kunag
binary search.
if sy-subrc ne 0.
select single ktgrd kvgr1 into (v_knvv-ktgrd, v_knvv-kvgr1)
from knvv where kunnr = v_likp-kunag and
vkorg = v_likp-vkorg and
vtweg = '01' and " intercompany values
spart = '01'. " intercompany values
endif.
clear v_kna1. " SIR 34606
read table i_kna1 into v_kna1 " SIR 34606
with key kunnr = v_likp-kunnr " SIR 34606
binary search. " SIR 34606
read table i_vekp into v_vekp
with key vpobjkey(10) = v_likp-vbeln
binary search.
if sy-subrc = 0.
v_vekp_tabix = sy-tabix.
while v_ok = 'Y'.
if v_vekp-gewei_max ne c_uom.
perform z_unit_conversion
using v_vekp-brgew v_vekp-gewei_max c_uom v_palwto.
v_vekp_sum_brgew = v_vekp_sum_brgew + v_palwto.
else.
v_vekp_sum_brgew = v_vekp_sum_brgew + v_vekp-brgew.
endif.
v_vekp_tabix = v_vekp_tabix + 1.
read table i_vekp into v_vekp
index v_vekp_tabix.
if sy-subrc = 0.
if v_vekp-vpobjkey(10) ne v_likp-vbeln.
v_ok = 'N'.
endif.
else.
v_ok = 'N'.
endif.
endwhile.
endif.
endif. " SIR 061880
* populate output tables
clear: v_output_dt, v_output_ag,v_output_gs. " SIR 34606
v_output_dt-wadat_ist = v_likp-wadat_ist.
v_output_dt-ktgrd = v_knvv-ktgrd.
v_output_ag-ktgrd = v_knvv-ktgrd.
v_output_gs-ktgrd = v_knvv-ktgrd. " SIR 34606
*-------------------------------------------------* Begin of SIR 061880
* V_OUTPUT_DT-BU = V_KNVV-KVGR1.
* V_OUTPUT_AG-BU = V_KNVV-KVGR1.
* V_OUTPUT_GS-BU = V_KNVV-KVGR1. " SIR 34606
* Populate Business Unit, Line of Business and PAC1 Values in Header Data
perform get_busunit_lobus_pac1_data1.
*---------------------------------------------------* End of SIR 061880
v_output_dt-kunnr = v_kna1-kunnr. " SIR 34606
* V_OUTPUT_AG-KUNNR = V_KNA1-KUNNR. " SIR 34606
v_output_dt-land1 = v_kna1-land1. " SIR 34606
v_output_ag-land1 = v_kna1-land1. " SIR 34606
v_output_dt-a_brgew = v_vekp_sum_brgew.
v_output_ag-a_brgew = v_vekp_sum_brgew.
v_output_gs-a_brgew = v_vekp_sum_brgew. " SIR 34606
if s_mbuun[] is initial and " SIR 61880
s_lobus[] is initial and " SIR 61880
s_pac1[] is initial. " SIR 61880
v_output_dt-num_del = 1.
v_output_ag-num_del = 1.
v_output_gs-num_del = 1. " SIR 34606
*-------------------------------------------------* Begin of SIR 061880
* Number of Packages
v_output_dt-packages = v_likp-anzpk.
v_output_ag-packages = v_likp-anzpk.
v_output_gs-packages = v_likp-anzpk.
else.
clear: v_output_dt-num_del,
v_output_ag-num_del,
v_output_gs-num_del,
v_output_dt-packages,
v_output_ag-packages,
v_output_gs-packages.
endif.
*---------------------------------------------------* End of SIR 061880
collect v_output_dt into i_output_dt.
collect v_output_ag into i_output_ag.
collect v_output_gs into i_output_gs. " SIR 34606
endform. " collect_header_data
* FORM COLLECT_ITEM_DATA *
* For each delivery line, fields used are goods issue date, *
* acct assign, BU for material, number of pallets, number of cases, *
* loose quantity and number of delivery lines. The remaining header *
* fields will be zero for the collect. *
form collect_item_data.
clear: v_pallet_qty, v_pallet_integer,
v_case_qty, v_case_integer,
v_num_pallets, v_num_pallets_int,
v_num_cases, v_num_cases_int,
v_qty_not_pallets, v_total_case_qty, v_loose_qty.
read table i_marm_pallet into v_marm_pallet
with key matnr = v_lips-matnr
meinh = v_z0234-zpaluom binary search."32382
if sy-subrc = 0.
v_pallet_qty = v_marm_pallet-umrez / v_marm_pallet-umren.
* round down partial pallets
v_pallet_integer = v_pallet_qty - '.499'.
endif.
read table i_marm_case into v_marm_case
with key matnr = v_lips-matnr
meinh = v_z0234-zcsuom binary search."32382
if sy-subrc = 0.
v_case_qty = v_marm_case-umrez / v_marm_case-umren.
* round down partial cases
v_case_integer = v_case_qty - '.499'.
endif.
if v_pallet_integer > 0.
v_num_pallets = v_lips-lgmng / v_pallet_integer.
v_num_pallets_int = v_num_pallets - '.499'.
endif.
v_qty_not_pallets = v_lips-lgmng -
( v_num_pallets_int * v_pallet_integer ).
if v_case_integer > 0.
v_num_cases = v_qty_not_pallets / v_case_integer.
v_num_cases_int = v_num_cases - '.499'.
endif.
v_total_case_qty = v_num_cases_int * v_case_integer.
if v_qty_not_pallets = v_total_case_qty.
v_loose_qty = 0.
else.
v_loose_qty = 1.
endif.
* populate output tables
clear: v_output_dt, v_output_ag, v_output_gs. " SIR 34606
v_output_dt-wadat_ist = v_likp-wadat_ist.
v_output_dt-ktgrd = v_knvv-ktgrd.
v_output_ag-ktgrd = v_knvv-ktgrd.
v_output_gs-ktgrd = v_knvv-ktgrd. " SIR 34606
v_output_dt-kunnr = v_kna1-kunnr. " SIR 34606
* V_OUTPUT_AG-KUNNR = V_KNA1-KUNNR. " SIR 34606
v_output_dt-land1 = v_kna1-land1. " SIR 34606
v_output_ag-land1 = v_kna1-land1. " SIR 34606
*-------------------------------------------------* Begin of SIR 061880
* V_OUTPUT_DT-BU = V_LIPS-PRODH+1(2).
* V_OUTPUT_AG-BU = V_LIPS-PRODH+1(2).
* V_OUTPUT_GS-BU = V_LIPS-PRODH+1(2). " SIR 34606
* Populate Business Unit,Line of Business and PAC1 Values for Item Data.
perform get_busunit_lobus_pac1_data2.
if v_output_dt-bu is initial.
clear v_output_dt.
exit.
endif.
*---------------------------------------------------* End of SIR 061880
* get delivery weight from delivery lines instead of header " SIR 45567
if v_lips-gewei = c_uom.
v_output_dt-d_btgew = v_lips-brgew.
v_output_ag-d_btgew = v_lips-brgew.
v_output_gs-d_btgew = v_lips-brgew. " SIR 34606
else.
perform z_unit_conversion
using v_lips-brgew v_lips-gewei c_uom v_palwto.
v_output_dt-d_btgew = v_palwto.
v_output_ag-d_btgew = v_palwto.
v_output_gs-d_btgew = v_palwto. " SIR 34606
endif.
* Get material weight from delivery (LIPS)(IF P_DELWT = 'X')" SIR 38784
if p_delwt = 'X'. " SIR 38784
if v_lips-gewei = c_uom.
v_output_dt-m_brgew = v_lips-brgew.
v_output_ag-m_brgew = v_lips-brgew.
v_output_gs-m_brgew = v_lips-brgew. " SIR 34606
else.
perform z_unit_conversion
using v_lips-brgew v_lips-gewei c_uom v_palwto.
v_output_dt-m_brgew = v_palwto.
v_output_ag-m_brgew = v_palwto.
v_output_gs-m_brgew = v_palwto. " SIR 34606
endif.
else. " SIR 38784
* Get material weight from Master data (MARA) " SIR 38784
read table i_mara1 into v_mara " SIR 38784
with key matnr = v_lips-matnr " SIR 38784
binary search. " SIR 38784
if v_mara-gewei = c_uom. " SIR 38784
v_output_dt-m_brgew = v_mara-brgew * v_lips-lgmng. " SIR 38784
v_output_ag-m_brgew = v_mara-brgew * v_lips-lgmng. " SIR 38784
v_output_gs-m_brgew = v_mara-brgew * v_lips-lgmng. " SIR 38784
else. " SIR 38784
perform z_unit_conversion " SIR 38784
using v_mara-brgew v_mara-gewei c_uom v_palwto. " SIR 38784
v_output_dt-m_brgew = v_palwto * v_lips-lgmng. " SIR 38784
v_output_ag-m_brgew = v_palwto * v_lips-lgmng. " SIR 38784
v_output_gs-m_brgew = v_palwto * v_lips-lgmng. " SIR 38784
endif. " SIR 38784
endif. " SIR 38784
*-------------------------------------------------* Begin of SIR 061880
* To Get the Volume Data.
perform get_volume_data.
*---------------------------------------------------* End of SIR 061880
v_output_dt-num_pallets = v_num_pallets_int.
v_output_ag-num_pallets = v_num_pallets_int.
v_output_gs-num_pallets = v_num_pallets_int. " SIR 34606
v_output_dt-num_cases = v_num_cases_int.
v_output_ag-num_cases = v_num_cases_int.
v_output_gs-num_cases = v_num_cases_int. " SIR 34606
v_output_dt-num_loose = v_loose_qty.
v_output_ag-num_loose = v_loose_qty.
v_output_gs-num_loose = v_loose_qty. " SIR 34606
*{ INSERT D11K901833 1
* Sir 054042/48712 insert code to put in 0 instead of 1 for collect
* value for delivery lines
if v_lips-lgmng = 0.
v_output_dt-num_delln = 0.
v_output_ag-num_delln = 0.
v_output_gs-num_delln = 0.
else.
* end of insert for 054042/48712
*} INSERT
v_output_dt-num_delln = 1.
v_output_ag-num_delln = 1.
v_output_gs-num_delln = 1.

Since this is your first post, let me clue you in. This forum is meant for asking short, specific questions. For example, if you had asked how to improve one particular SELECT statement, I'm sure many people would have helped you.
But you cannot seriously expect someone on this forum to spend his or her time reviewing such a long program. That is what people normally get paid for, you know.
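For what it's worth, the pallet/case/loose split in COLLECT_ITEM_DATA above reduces to integer division with round-down: whole pallets first, then whole cases from the remainder, and a loose flag for anything left over. The ABAP rounds down by subtracting '.499' before an integer assignment; floor division is the direct equivalent. A minimal sketch (function and parameter names are illustrative, not from the program):

```python
def split_delivery_qty(total_qty, units_per_pallet, units_per_case):
    """Split a delivery quantity into whole pallets, whole cases,
    and a loose-quantity flag, mirroring the round-down logic above."""
    # Whole pallets (guard against a missing pallet unit of measure)
    pallets = total_qty // units_per_pallet if units_per_pallet > 0 else 0
    remainder = total_qty - pallets * units_per_pallet
    # Whole cases out of what did not fill a pallet
    cases = remainder // units_per_case if units_per_case > 0 else 0
    # Loose flag: 1 when something is left after pallets and cases
    loose = 0 if remainder - cases * units_per_case == 0 else 1
    return pallets, cases, loose
```

For example, `split_delivery_qty(1250, 480, 24)` yields 2 pallets, 12 cases, and a loose flag of 1 (1250 = 2×480 + 12×24 + 2 loose units).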
WAD : Result set is too large; data retrieval restricted by configuration
Hi All,
When trying to execute the web template with fewer restrictions, we are getting the error below:
Result set is too large; data retrieval restricted by configuration
Result set too large (758992 cells); data retrieval restricted by configuration (maximum = 500000 cells)
But when we increase the number of restrictions, it produces output. For example, if we give fiscal period, company code and Brand, we get output. But if we give fiscal period alone, it throws the above error.
Note : We are in SP18.
Do we need to change some setting in the configuration? If yes, where do we need to change it, or what else do we need to do to remove this error?
Regards
Karthik

Hi Karthik,
the standard setting for web templates is to display a maximum of 50,000 cells. The less you restrict your query, the more data will be displayed in the report. If you want to display more than 50,000 cells, the template will not execute correctly.
In general it is advisable to restrict the query as much as possible; the more data you display, the worse your performance will be. If you have to display more data and you execute the query from Query Designer, or if you use the standard template, you can set the maximum number of cells individually. This is described over [here|Re: Bex Web 7.0 cells overflow].
However, I do not know whether (and how) you can change the default maximum number of cells for your template. This should be possible somehow, I think; if you find a solution for this, please let us know.
Brgds,
Marcel
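The guard behind this error is simply the size of the result grid (rows × columns, including header and key cells) checked against the configured maximum; the message above reports 758,992 cells against a limit of 500,000. A minimal sketch of that check (the limit value is taken from the error text; the function name is illustrative):

```python
MAX_CELLS = 500_000  # configured safety limit quoted in the error message

def exceeds_cell_limit(num_rows, num_cols, max_cells=MAX_CELLS):
    """Return True when the result grid would break the configured cell limit."""
    return num_rows * num_cols > max_cells
```

This is why adding restrictions (company code, Brand) makes the template run: each filter shrinks the row count until rows × columns drops under the limit.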