Session pooling and statement handles
Hi there,
I have a large multi-threaded application (perhaps >100 threads). Each thread continuously processes high volumes of events, each involving some data manipulation and some database operations (drawn from a fixed set of possible operations).
I am using session pooling, but what I want to know is: should I
(a) prepare my fixed set of statement handles up front at program start-up, when I'm creating the session pool, and then reuse those statement handles in each event-processing thread (also, is this thread safe? Even if it is, all threads would be contending on the same statement handles);
or
(b) prepare the statement handle for each event, which presumably will exploit the statement cache on the session pool. This would also mean statement handles are never shared between threads, removing any thread-contention issues.
I think (b) is the option for me, but does anyone have any thoughts?
With (a), one would think it's OK, but I would hate to find out by accident that it's not thread safe.
In any case, with (b) the cost of allocating private statement handles in each thread seems very low. The memory required for a statement handle plus its bind and define handles could well be below 8 KB per statement. With, say, 5 statements × 100 threads, you're only looking at around 4 MB overall.
Finally, you might want to confirm that the session pool's statement cache is actually working by checking the 'executions' and 'parse_calls' values in V$SQL for your statements.
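For what it's worth, option (b)'s "private handles per thread" idea can be sketched in Python (purely illustrative; `PreparedStatement` and `get_statement` are hypothetical stand-ins for OCI handle allocation and `OCIStmtPrepare2`, not real OCI calls):

```python
import threading

# Hypothetical stand-in for an OCI statement handle prepared on a session.
class PreparedStatement:
    def __init__(self, sql):
        self.sql = sql
        self.owner = threading.get_ident()  # thread that prepared this handle

    def execute(self, *binds):
        # With per-thread handles, the executing thread always owns the handle,
        # so there is no cross-thread contention on it.
        assert self.owner == threading.get_ident()
        return (self.sql, binds)

# Each thread keeps its own private statement handles; a session-pool
# statement cache is what makes the repeated "prepare" cheap in practice.
_local = threading.local()

def get_statement(sql):
    cache = getattr(_local, "stmts", None)
    if cache is None:
        cache = _local.stmts = {}
    if sql not in cache:
        cache[sql] = PreparedStatement(sql)  # "prepare" once per thread
    return cache[sql]
```

With, say, 5 distinct statements and 100 threads, each thread ends up with its own 5 handles and never touches another thread's, which is exactly why (b) sidesteps the thread-safety question in (a).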
Similar Messages
-
Foreign JMS QCF and Weblogic Sever Session Pool
Hi!
We have a WebLogic 6.1 SP2 installation.
We are trying to use a JMS server session pool and connection consumer configuration
with an MQSeries QCF registered in WebLogic JNDI via a startup class.
On server startup, WebLogic throws a ClassCastException for the QueueConnection.
It seems WebLogic expects its own QueueConnection implementation.
Any suggestions or alternative ways of doing this?
Thanks
Jay Parikh
Hi Jay,
Server session pools do not support foreign providers.
The preferred way to integrate foreign providers is via MDBs or
via the Messaging Bridge (bridge available in SP3). For a
comprehensive write-up on integrating foreign providers with
WL see the whitepaper "Using Foreign JMS Providers with
WebLogic Server" on dev2dev.bea.com.
Tom
-
Connection Pool and Database Sessions
Hi,
Is there any way to use the connection pool or DataSource while connecting to the database? If I am using a stateless session bean and a data access layer that just creates a database session to write the persistent TopLink objects, how can I make use of the application server's connection pool?
Thanks,
Vinod

Hi Vinod,
Yes, TopLink allows you to use the app server's connection pooling and transaction services. Chapter 2 of the Oracle9iAS TopLink Foundation Library Guide provides details as do the TopLink examples. The easiest way to set this up is by using the sessions.xml file. The sample XML below is from the file <toplink903>\examples\ias\examples\ejb\sessionbean\sessions.xml. Here we are adding the datasource defined in OC4J and specifying that we are using the OC4J transaction controller also.
<login>
<user-name>sa</user-name>
<password></password>
<datasource>java:comp/env/jdbc/ejbJTSDataSource</datasource>
<uses-external-transaction-controller>true</uses-external-transaction-controller>
<uses-external-connection-pool>true</uses-external-connection-pool>
</login>
<external-transaction-controller-class>oracle.toplink.jts.oracle9i.Oracle9iJTSExternalTransactionController</external-transaction-controller-class>
When using this approach you need to change your TopLink code slightly in the EJB methods:
a. Acquire the ACTIVE unit of work from the server
session (again, see the EmployeeSessionEJB code
example) with something like:
UnitOfWork uow = clientSession.getActiveUnitOfWork();
b. Calls to uow.commit() can be omitted or commented out
because the EJB will handle this. Note that, of course,
the methods you create in the EJB that use this
approach must have TX Required (the default).
Hope this helps.
Pete -
Is there a difference between the statement cache and the statement handle?
Hello!
The OCI statement cache is *session*-wide. When I have a SQL statement that was used before, I can take advantage of this feature.
But what is the difference between this feature and a statement handle for a certain SQL statement that I store and reuse a second time?
My stored statement handle is already prepared and the placeholders are bound. The second time, I only have to copy new values into the bind memory, do an execute, and that's all.
Thank you in advance
Wolfgang

The underlying optimization is the same. When you re-execute a statement, you are reusing the metadata already available in the statement handle and the cursor already open on the server. If you know exactly the set of statements that you are going to execute repeatedly, you can maintain the cache on your own. (Yes, you save on doing the bind/define calls multiple times.)
The OCI statement cache makes this transparent, and the application does not need to keep references/indexes to the relevant statements. Also, once a cache size is set, the least recently used statements are evicted when the cache is full and needs to accommodate more.
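The least-recently-used behaviour described here can be sketched in a few lines of Python (a toy model of the eviction policy only; the real OCI cache lives inside the session and holds open server-side cursors):

```python
from collections import OrderedDict

class StatementCache:
    """Toy model of an LRU statement cache with a fixed size."""
    def __init__(self, max_size):
        self.max_size = max_size
        self._stmts = OrderedDict()  # sql text -> "prepared statement"

    def get(self, sql):
        if sql in self._stmts:
            # Cache hit: mark as most recently used and reuse the handle.
            self._stmts.move_to_end(sql)
            return self._stmts[sql]
        # Cache miss: "prepare" a new statement.
        stmt = ("prepared", sql)
        self._stmts[sql] = stmt
        if len(self._stmts) > self.max_size:
            # Evict the least recently used statement to make room.
            self._stmts.popitem(last=False)
        return stmt
```

The application only ever calls `get(sql)`; whether that results in a fresh prepare or a reuse is invisible to the caller, which is the transparency the OCI cache provides.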
To optimize the binds/defines on statements from the statement cache, you can use this feature:
http://www.filibeto.org/sun/lib/nonsun/oracle/11.2.0.1.0/E11882_01/appdev.112/e10646/oci09adv.htm#sthref1358
-
Pooling Prepared Statement and Resultset
Is there any mechanism in WebLogic by which we can pool prepared statements and result sets?

Yes, you can cache prepared statements. The connection pool will automatically cache prepared statements (10 prepared statements was the limit in previous releases, so you may want to write a startup class to load the important prepared statements). When you reuse a prepared statement, the connection pool will pick it up from its prepared statement cache. In 6.1 you can configure the number of prepared statements that you want to cache.
But you cannot have multiple result sets open for the same statement object.
You can use cached rows, though.
Please search the bea newsgroups for more info. There are a bunch of posts
that will be helpful to you.
http://search.bea.com/weblogic/gonews/
sree
-
Listener EA2: database connection pool and connection revalidation
Hi all,
As one can expect from an early adopter release there could be some bugs, but I can't find any references in the forum to my situation:
* My 11g XE database and listener are starting as windows services when server boots operating system (Windows Server 2003 R2).
* I configured my web server (unsupported Jetty 9.0.0.M1) to start as windows service when operating system starts.
* Apex Listener 2.0.0.268.17.05 configured to connect with XE using JDBC thin driver with default settings (initial pool size 3, max statements 10, min connections 1, max connections 10, inactivity timeout 1800, abandoned connection timeout 900)
* Because the web server starts a bit faster than the Oracle database, when APEX Listener connects the first time it gets "ORA-12528, TNS:listener: all appropriate instances are blocking new connections" (it could be that the database is still starting but has already registered its service with the listener)
* From the listener.log file I can see that all further connections made from APEX Listener succeed
* When I try to open any APEX page with a browser, I get a 404 error and APEX Listener logs an error (*the time is 2 days after system startup*):
2012-11-30 3:56:02 PM oracle.dbtools.common.config.db.DatabaseConfig badConfiguration
SEVERE: The pool named: apex is not correctly configured, error: Listener refused the connection with the following error:
ORA-12528, TNS:listener: all appropriate instances are blocking new connections
ConnectionPoolException [error=BAD_CONFIGURATION]
at oracle.dbtools.common.jdbc.ConnectionPoolException.badConfiguration(ConnectionPoolException.java:62)
at oracle.dbtools.common.config.db.DatabaseConfig.badConfiguration(DatabaseConfig.java:146)
at oracle.dbtools.common.config.db.DatabaseConfig.createPool(DatabaseConfig.java:168)
at oracle.dbtools.common.config.db.DatabaseConfig.getConnection(DatabaseConfig.java:68)
at oracle.dbtools.common.jdbc.ora.OraPrincipal.connection(OraPrincipal.java:25)
at oracle.dbtools.apex.ModApexContext.getConnection(ModApexContext.java:320)
at oracle.dbtools.apex.Procedure.getProcedure(Procedure.java:166)
at oracle.dbtools.apex.OWA.validateProcedure(OWA.java:384)
at oracle.dbtools.apex.security.Security.isValidRequest(Security.java:171)
at oracle.dbtools.apex.ModApex.validateRequest(ModApex.java:233)
at oracle.dbtools.apex.ModApex.doGet(ModApex.java:79)
at oracle.dbtools.apex.ModApex.service(ModApex.java:263)
at oracle.dbtools.rt.web.HttpEndpointBase.modApex(HttpEndpointBase.java:288)
at oracle.dbtools.rt.web.HttpEndpointBase.service(HttpEndpointBase.java:127)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:665)
<... Jetty web server stack ...>
at java.lang.Thread.run(Unknown Source)
2012-11-30 3:56:02 PM oracle.dbtools.rt.web.HttpEndpointBase modApex
* Oracle listener log for same time (no errors here):
30-NOV-2012 15:56:01 * (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=SYSTEM))(SERVICE_NAME=xe)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=SYSTEM))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1078)) * establish * xe * 0
30-NOV-2012 15:56:01 * (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=SYSTEM))(SERVICE_NAME=xe)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=SYSTEM))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1079)) * establish * xe * 0
30-NOV-2012 15:56:01 * (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=SYSTEM))(SERVICE_NAME=xe)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=SYSTEM))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1080)) * establish * xe * 0
30-NOV-2012 15:56:01 * (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=SYSTEM))(SERVICE_NAME=xe)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=SYSTEM))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1081)) * establish * xe * 0
* For some reason APEX Listener keeps the first connection's status and won't try to establish a new connection if the first attempt finished with ORA-12528
* The same scenario occurs when the Oracle database/listener is not available at web server start: even after the database and listener come up and APEX Listener successfully establishes further connections, all APEX pages still get a 404 error
* If I restart web server windows service (while oracle db and listener still running) browser opens apex pages without errors and apex listener does not log any errors
* I know that I can avoid this error by delaying the start of the web server Windows service, but it would be nice to have a production release 2.x without such bugs
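The behaviour being asked for here, retrying the initial connection instead of latching the BAD_CONFIGURATION state, could be sketched like this (pure illustration; `create_pool_with_retry` and `connect` are hypothetical names, not the APEX Listener's actual internals):

```python
import time

def create_pool_with_retry(connect, attempts=5, delay=1.0, backoff=2.0):
    """Retry pool creation instead of permanently marking the pool bad
    after one ORA-12528 while the instance is still starting up."""
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except ConnectionError as e:   # e.g. ORA-12528 from the listener
            last_error = e
            time.sleep(delay)
            delay *= backoff           # exponential backoff between tries
    raise last_error
```

With something like this, a transient startup-ordering failure (web server up before the database finishes opening) would heal itself instead of requiring a restart of the web server service.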
-
Difference between Session method and call transaction
Hi,
please give me the differences between the session method and call transaction, briefly explaining synchronous, asynchronous, process, and update.
Thanks in advance

Hi Vijay Kumar,
CLASSICAL BATCH INPUT (Session Method)
CALL TRANSACTION
BATCH INPUT METHOD:
This method is also called as CLASSICAL METHOD.
Features:
Asynchronous processing.
Synchronous Processing in database update.
Transfer data for more than one transaction.
Batch input processing log will be generated.
During processing, no transaction is started until the previous transaction has been written to the database.
CALL TRANSACTION METHOD :
This is another method to transfer data from the legacy system.
Features:
Synchronous processing. The system performs a database commit immediately before and after the CALL TRANSACTION USING statement.
Updating the database can be either synchronous or asynchronous. The program specifies the update type.
Transfer data for a single transaction.
Transfers data for a sequence of dialog screens.
No batch input processing log is generated.
http://www.sap-img.com/abap/difference-between-batch-input-and-call-transaction-in-bdc.htm
Of the two methods, call transaction is better than the session method because data transfer is faster with it.
Differences between call transaction and session:
Session Method:
1) Data is not updated in the database table until the session is processed.
2) No sy-subrc is returned.
3) An error log is created for error records.
4) Updating is always synchronous.
Call Transaction Method:
1) Immediate update of the database table.
2) sy-subrc is returned.
3) Errors need to be handled explicitly.
4) Updating can be synchronous as well as asynchronous.
Additional points:
1) You can suppress the screens by using mode 'N'.
2) You can't handle multiple transactions in call transaction.
3) You can handle multiple transactions in a session using the BDC_INSERT function module.
4) In SM35 you can see the error records.
Which is best?
That depends on your requirement; both have their advantages, and you can choose either one according to the situation.
Differences between batch input and call transaction in BDC:
Session method:
1) Asynchronous processing.
2) Can transfer large amounts of data.
3) Processing is slower.
4) An error log is created.
5) Data is not updated until the session is processed.
Call transaction:
1) Synchronous processing.
2) Can transfer small amounts of data.
3) Processing is faster.
4) Errors need to be handled explicitly.
5) Data is updated automatically.
For the session method, these are the function modules to be used:
BDC_OPEN_GROUP
BDC_INSERT
BDC_CLOSE_GROUP
For call transaction, this is the syntax:
CALL TRANSACTION TCODE USING BDCDATA
MODE A or E or N
UPDATE A or S
MESSAGES INTO MESSTAB.
Take a scenario where we need to post documents in FB01 and the input file has, say, 2000 records (2000 documents, not line items in FB01, but 2000 records).
In the BDC call transaction method:
We call the transaction FB01 2000 times (once for each record posted), and if the processing fails at record 3, the error can be captured and processing continues with record 4.
Eg: Loop at itab.
call transaction FB01
capture errors
endloop.
In the session method:
We do not explicitly call the transaction 2000 times; instead, all the records are appended into a session and this session is stored. The processing of the session is done whenever the user wants it to be done. Hence the errors cannot be captured in the program itself.
Check these link:
http://www.sap-img.com/abap/difference-between-batch-input-and-call-transaction-in-bdc.htm
http://www.sap-img.com/abap/question-about-bdc-program.htm
http://www.itcserver.com/blog/2006/06/30/batch-input-vs-call-transaction/
http://www.planetsap.com/bdc_main_page.htm
The batch input session method is asynchronous, as others have said here. But its advantage is that you have all the error messages and the data for each transaction held persistently. You don't have to code anything for processing them or writing the logs.
At the same time, the same feature can be a disadvantage if you need to react to an error, or if there are too many errors to correct manually in a session. Since the sessions are created in the program and executed separately, you lose the traceability of such transactions.
With call transaction, what was a disadvantage above becomes an advantage. Call transaction immediately gives you messages back and you can react to them in your program. The disadvantage is that, if you have several hundred transactions to run, running them from within the program can be a resource-crunching affair. It will hamper system performance, and you cannot really distribute the load. Of course, there are mechanisms with which you can overcome this, but you will have to code for them. Also, storing the messages and the errored transaction data will have to be handled by you in the program, whereas with a batch input session, your program's job is just to create the session; after that, everything is the standard SAP system's responsibility.
Ideally, you should do a call transaction if resources are not a problem and, if it fails, put the errored transaction into a session.
You can also decide based on the data volume that your BDC is processing: if the data volume is high, go for a session; otherwise call transaction will do. Call transaction updates are instantaneous, whereas a session needs to be processed explicitly after creation.
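The two control flows described above can be sketched as a toy Python model (purely illustrative; `call_transaction` and the session functions are plain stand-ins for SAP's CALL TRANSACTION statement and the BDC_OPEN_GROUP / BDC_INSERT / BDC_CLOSE_GROUP function modules):

```python
# Toy model of the two BDC strategies: immediate "call transaction" with
# per-record error handling vs. collecting records into a session that is
# processed later (e.g. via SM35).

def call_transaction(record):
    # Stand-in for CALL TRANSACTION 'FB01' ...; returns 0 on success,
    # non-zero on failure, in the style of sy-subrc.
    return 0 if record.get("amount", 0) > 0 else 4

def process_with_call_transaction(records):
    errors = []
    for rec in records:
        rc = call_transaction(rec)  # synchronous: rc is available at once
        if rc != 0:
            errors.append(rec)      # react to the error immediately
    return errors

def process_with_session(records):
    session = []                    # BDC_OPEN_GROUP equivalent
    for rec in records:
        session.append(rec)         # BDC_INSERT: no database update yet
    return session                  # BDC_CLOSE_GROUP: processed later in SM35
```

The trade-off in the text falls out directly: the call-transaction loop gets a return code per record and can react in-program, while the session variant merely collects everything and defers both the update and the error handling.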
Session Method
1) The session method supports both small and large amounts of data.
2) Data processing is asynchronous and the database update is synchronous.
3) It can process multiple applications while performing validations.
4) In the session method, data is updated in the database only after the session is processed.
5) The system provides a default log file for handling error records.
6) It supports both foreground and background processing.
In BDC we use these function modules:
bdc_open_group " for creating a session
bdc_insert " adding a transaction and the bdcdata table for updating the database
bdc_close_group " for closing a session
Call Transaction
1) Call transaction is exclusively for small amounts of data.
2) It supports only one application while performing validations.
3) There is no default log file; we can explicitly provide logic for creating a log file for handling error records, using the structure BDCMSGCOLL.
4) It doesn't support background processing.
5) Data processing is synchronous and the database update is synchronous by default, though this method also supports updating in asynchronous mode.
syntax:
Call transaction <transaction-name> using BDCDATA
mode <A/N/E>
update <L/A/S>
messages into BDCMSGCOLL.
BDC:
Batch Data Communication (BDC) is the process of transferring data from one SAP System to another SAP system or from a non-SAP system to SAP System.
Features :
BDC is an automatic procedure.
This method is used to transfer large amount of data that is available in electronic medium.
BDC can be used primarily when installing the SAP system and when transferring data from a legacy system (external system).
BDC uses normal transaction codes to transfer data.
Types of BDC :
CLASSICAL BATCH INPUT (Session Method)
CALL TRANSACTION
For BDC:
http://myweb.dal.ca/hchinni/sap/bdc_home.htm
https://www.sdn.sap.com/irj/sdn/wiki?path=/display/home/bdc&
http://www.sap-img.com/abap/learning-bdc-programming.htm
http://www.sapdevelopment.co.uk/bdc/bdchome.htm
http://www.sap-img.com/abap/difference-between-batch-input-and-call-transaction-in-bdc.htm
http://help.sap.com/saphelp_47x200/helpdata/en/69/c250684ba111d189750000e8322d00/frameset.htm
http://www.sapbrain.com/TUTORIALS/TECHNICAL/BDC_tutorial.html
Call transaction or session method?
Check the following links:
http://www.sap-img.com/bdc.htm
See the example code below. It calls three function modules: BDC_OPEN_GROUP, BDC_INSERT and BDC_CLOSE_GROUP.
Once you execute the program, it creates a session that you can process in transaction SM35.
*----------------------------------------------------------------------*
* Report     : ZMPPC011
* Type       : Data upload
* Author     : Chetan Shah
* Date       : 05/05/2005
* Transport  : DV3K919557
* Transaction: ??
* Description: This ABAP/4 program creates new Production Versions
*              (C223). It accepts tab-delimited spreadsheet input and
*              creates BDC sessions.
* Modification Log
* Date       Programmer   Request #    Description
* 06/10/2005 Chetan Shah  DV3K919557   Initial coding
*----------------------------------------------------------------------*
report zmppc011 no standard page heading line-size 120 line-count 55
message-id zz.
* pool of form routines
include zmppn001.
* Define BDC Table Structure
data: begin of itab_bdc_tab occurs 0.
include structure bdcdata.
data: end of itab_bdc_tab.
* Input record layout of legacy file
data: begin of itab_xcel occurs 0,
matnr(18) type c,
werks(4) type c,
alnag(2) type c,
verid(4) type c,
text1(40) type c,
bstmi like mkal-bstmi,
bstma like mkal-bstma,
adatu(10) type c,
bdatu(10) type c,
stlal(2) type c,
stlan(1) type c,
serkz(1) type c,
mdv01(8) type c,
elpro(4) type c,
alort(4) type c,
end of itab_xcel.
data: begin of lt_pp04_cache occurs 0,
matnr like itab_xcel-matnr,
werks like itab_xcel-werks,
alnag like itab_xcel-alnag,
plnnr like mapl-plnnr,
arbpl like crhd-arbpl,
ktext like crtx-ktext,
end of lt_pp04_cache.
data: v_ssnnr(4) type n,
v_lines_in_xcel like sy-tabix,
v_ssnname like apqi-groupid,
v_trans_in_ssn type i,
wa_xcel LIKE itab_xcel,
l_tabix like sy-tabix,
v_matnr like rc27m-matnr,
v_plnnr like mapl-plnnr,
v_plnal like mapl-plnal,
v_tcode like sy-tcode value 'C223',
v_plnty like plas-plnty value 'R',
v_objty like crhd-objty value 'A',
v_plpo_steus like plpo-steus value 'PP04',
v_verwe like crhd-verwe value '0007'.
* Parameters
selection-screen: skip 3.
selection-screen: begin of block 1 with frame.
parameters: p_name like rlgrap-filename
default 'C:\My Documents\InputFile.txt'
obligatory,
* bdc session name prefix
p_bdcpfx(6) default 'ZPVCRT'
obligatory,
* number of transactions per BDC session
p_trnssn type i
default 2000 obligatory,
* retain the BDC session after successful execution
p_keep like apqi-qerase
default 'X',
* user who will be executing the BDC session
p_uname like apqi-userid
default sy-uname
obligatory.
selection-screen: end of block 1.
* possible entry list (F4 dropdown) for input file name
at selection-screen on value-request for p_name.
*-SELECT FILE FROM USER'S LOCAL PC
call function 'WS_FILENAME_GET'
exporting
*   DEF_FILENAME = ' '
def_path = 'C:\Temp\'
mask = ',*.*,*.*'
mode = 'O'
title = 'Select File '(007)
importing
filename = p_name
*   RC =
exceptions
inv_winsys = 1
no_batch = 2
selection_cancel = 3
selection_error = 4
others = 5.
if sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
endif.
* begin the show
start-of-selection.
* read data from input file
perform transfer_xcel_to_itab.
loop at itab_xcel.
* hang on to xcel line num
l_tabix = sy-tabix.
* each line in the xcel file marks beginning of a new prod. version defn
* if num-of-trans-in-session = 0, create new BDC session
if v_trans_in_ssn is initial.
perform bdc_session_open.
endif.
* begin new bdc script for rtg create trans
* fill in bdc-data for prod. version maintenance screens
perform bdc_build_script.
* insert the bdc script as a BDC transaction
perform bdc_submit_transaction.
* keep track of how many BDC transactions were inserted in the
* BDC session
add 1 to v_trans_in_ssn.
* if the user-specified num of trans in BDC session is reached OR
* if end of input file is reached, close the BDC session
if v_trans_in_ssn = p_trnssn or
l_tabix = v_lines_in_xcel.
perform bdc_session_close.
clear v_trans_in_ssn.
endif.
endloop.
top-of-page.
call function 'Z_HEADER'.
*  EXPORTING
*    FLEX_TEXT1 =
*    FLEX_TEXT2 =
*    FLEX_TEXT3 =
* FORM TRANSFER_XCEL_TO_ITAB *
* Transfer Xcel Spreadsheet to SAP Internal Table *
form transfer_xcel_to_itab.
* Read the tab-delimited file into itab
call function 'WS_UPLOAD'
exporting
filename = p_name
filetype = 'DAT'
*  IMPORTING
*    filelength = flength
tables
data_tab = itab_xcel
exceptions
conversion_error = 1
file_open_error = 2
file_read_error = 3
invalid_table_width = 4
invalid_type = 5
no_batch = 6
unknown_error = 7
others = 8.
if sy-subrc = 0.
* sort the data
sort itab_xcel by matnr werks.
clear v_lines_in_xcel.
* if no data in the file - error out
describe table itab_xcel lines v_lines_in_xcel.
if v_lines_in_xcel is initial.
write: / 'No data in input file'.
stop.
endif.
else.
* if file upload failed - error out
write: / 'Error reading input file'.
stop.
endif.
endform.
* FORM BDC_SESSION_OPEN *
* Open BDC Session *
form bdc_session_open.
* create bdc session name = prefix-from-selectn-screen + nnnn
add 1 to v_ssnnr.
concatenate p_bdcpfx v_ssnnr into v_ssnname.
* open new bdc session
call function 'BDC_OPEN_GROUP'
exporting
client = sy-mandt
group = v_ssnname
keep = p_keep
user = p_uname
exceptions
client_invalid = 1
destination_invalid = 2
group_invalid = 3
group_is_locked = 4
holddate_invalid = 5
internal_error = 6
queue_error = 7
running = 8
system_lock_error = 9
user_invalid = 10
others = 11.
endform.
* FORM BDC_BUILD_SCRIPT *
* Build BDC *
form bdc_build_script.
data: l_arbpl like crhd-arbpl,
l_text1 like mkal-text1,
l_mdv01 like mkal-mdv01,
l_mapl like mapl.
* clear bdc-data itab - begin of new bdc transaction
clear itab_bdc_tab.
refresh itab_bdc_tab.
* read material cross reference tables to determine sap part#
clear : v_matnr, v_plnnr, v_plnal.
perform read_matnr_cross_ref using itab_xcel-matnr
itab_xcel-werks
changing v_matnr.
* determine the version description to use
if itab_xcel-text1 is initial.
l_text1 = itab_xcel-verid.
else.
l_text1 = itab_xcel-text1.
endif.
* determine the routing group# and group ctr# to use
perform read_routing.
* determine the production line to use
if itab_xcel-mdv01 is initial.
* if not provided in the file, then:
* prod line = work ctr on the last PP04 op of the rtg determined above
perform read_wc_on_last_pp04 using v_plnnr v_plnal
changing l_mdv01.
* NOTE: when executing the above form/routine, if v_plnnr is initial
*       or v_plnal is initial, THEN l_mdv01 will automatically be
*       returned blank (ie initial)
else.
l_mdv01 = itab_xcel-mdv01.
endif.
* build bdc script
perform bdc_build_script_record
* fill in initial screen
using: 'X' 'SAPLCMFV' '1000',
' ' 'BDC_OKCODE' '=ENTE',
' ' 'MKAL-WERKS' itab_xcel-werks,
' ' 'MKAL-MATNR' v_matnr,
' ' 'MKAL_ADMIN-DISPO' space,
' ' 'MKAL-PLNNR' space,
' ' 'MKAL_ADMIN-STTAG' space,
' ' 'MKAL-PLNNG' space,
' ' 'MKAL-MDV01' space,
' ' 'MKAL-PLNNM' space,
* click create button on initial screen and go to detail screen
'X' 'SAPLCMFV' '1000',
' ' 'BDC_OKCODE' '=CREA',
* fill in the detail screen and go back to initial screen
'X' 'SAPLCMFV' '2000',
' ' 'BDC_OKCODE' '=CLOS',
' ' 'MKAL_EXPAND-MATNR' v_matnr,
' ' 'MKAL_EXPAND-VERID' itab_xcel-verid,
' ' 'MKAL_EXPAND-TEXT1' l_text1,
' ' 'MKAL_EXPAND-BSTMI' itab_xcel-bstmi,
' ' 'MKAL_EXPAND-BSTMA' itab_xcel-bstma,
' ' 'MKAL_EXPAND-ADATU' itab_xcel-adatu,
' ' 'MKAL_EXPAND-BDATU' itab_xcel-bdatu,
' ' 'MKAL_EXPAND-PLTYG' v_plnty,
' ' 'MKAL_EXPAND-PLNNG' v_plnnr,
' ' 'MKAL_EXPAND-ALNAG' v_plnal,
' ' 'MKAL_EXPAND-STLAL' itab_xcel-stlal,
' ' 'MKAL_EXPAND-STLAN' itab_xcel-stlan,
' ' 'MKAL_EXPAND-SERKZ' itab_xcel-serkz,
' ' 'MKAL_EXPAND-MDV01' l_mdv01,
' ' 'MKAL_EXPAND-ELPRO' itab_xcel-elpro,
' ' 'MKAL_EXPAND-ALORT' itab_xcel-alort,
* save the production version from initial screen
'X' 'SAPLCMFV' '1000',
' ' 'BDC_OKCODE' '=SAVE'.
endform.
* FORM BDC_SUBMIT_TRANSACTION *
* Submit BDC Transaction *
form bdc_submit_transaction.
* Load the BDC script as a transaction in the BDC session
call function 'BDC_INSERT'
exporting
tcode = v_tcode
tables
dynprotab = itab_bdc_tab
exceptions
internal_error = 01
not_open = 02
queue_error = 03
tcode_invalid = 04.
endform.
* FORM BDC_BUILD_SCRIPT_RECORD *
form bdc_build_script_record using dynbegin name value.
clear itab_bdc_tab.
if dynbegin = 'X'.
move: name to itab_bdc_tab-program,
value to itab_bdc_tab-dynpro,
'X' to itab_bdc_tab-dynbegin.
else.
move: name to itab_bdc_tab-fnam,
value to itab_bdc_tab-fval.
shift itab_bdc_tab-fval left deleting leading space.
endif.
append itab_bdc_tab.
endform.
* FORM BDC_SESSION_CLOSE *
* Close BDC Session *
form bdc_session_close.
* close the session
call function 'BDC_CLOSE_GROUP'
exceptions
not_open = 1
queue_error = 2
others = 3.
skip 2.
if sy-subrc ne 0.
write: / 'Error Closing BDC Session ' , 'RETURN CODE: ', sy-subrc.
else.
write : / 'Session created:', v_ssnname,
50 '# of transactions:', v_trans_in_ssn.
endif.
endform.
*& Form read_routing_cache
*FORM read_routing_cache USING pi_matnr
*                             pi_werks
*                             pi_alnag
*                             pi_verid
*                             pi_mdv01.
DATA: BEGIN OF lt_plpo OCCURS 0,
vornr LIKE plpo-vornr,
objty LIKE crhd-objty,
objid LIKE crhd-objid,
arbpl LIKE crhd-arbpl,
END OF lt_plpo,
l_mapl_plnnr LIKE mapl-plnnr.
* determine the routing group#
CLEAR lt_pp04_cache.
* chk if it's in the cache first; if not, get it from the MAPL table
* and put it in the cache
READ TABLE lt_pp04_cache WITH KEY matnr = pi_matnr
werks = pi_werks
alnag = pi_alnag.
IF sy-subrc = 0.
* do nothing - lt_pp04_cache header line has the rtg#
ELSE.
* get the routing group # from MAPL
SELECT plnnr INTO l_mapl_plnnr
FROM mapl UP TO 1 ROWS
WHERE matnr = pi_matnr AND
werks = pi_werks AND
plnty = 'R' AND
plnal = pi_alnag AND
loekz = space.
ENDSELECT.
* put it in the cache internal table
IF NOT l_mapl_plnnr IS INITIAL.
lt_pp04_cache-matnr = pi_matnr.
lt_pp04_cache-werks = pi_werks.
lt_pp04_cache-alnag = pi_alnag.
lt_pp04_cache-plnnr = l_mapl_plnnr.
APPEND lt_pp04_cache.
ENDIF.
ENDIF.
* if the rtg# was determined AND
* -- the work center was not determined yet AND
* -- the work center was really needed for this line in the input file
* then
* -- read the work center from the last PP04 operation on the routing
* -- update the cache accordingly
IF NOT lt_pp04_cache-plnnr IS INITIAL AND
lt_pp04_cache-arbpl IS INITIAL AND
( pi_verid IS INITIAL OR
pi_mdv01 IS INITIAL ).
*   read the last PP04 operation
CLEAR lt_plpo.
REFRESH lt_plpo.
SELECT vornr e~objty e~objid e~arbpl
INTO CORRESPONDING FIELDS OF TABLE lt_plpo
FROM plas AS b
INNER JOIN plpo AS c
ON b~plnty = c~plnty AND
b~plnnr = c~plnnr AND
b~zaehl = c~zaehl
INNER JOIN crhd AS e
ON c~arbid = e~objid
WHERE b~plnty = v_plnty AND
b~plnnr = lt_pp04_cache-plnnr AND
b~plnal = lt_pp04_cache-alnag AND
c~loekz = space AND
c~steus = v_plpo_steus AND
e~objty = v_objty AND
e~werks = lt_pp04_cache-werks AND
e~verwe = v_verwe.
SORT lt_plpo BY vornr DESCENDING.
READ TABLE lt_plpo INDEX 1.
IF NOT lt_plpo-arbpl IS INITIAL.
lt_pp04_cache-arbpl = lt_plpo-arbpl.
*     read the work center description
SELECT SINGLE ktext INTO lt_pp04_cache-ktext
FROM crtx WHERE objty = lt_plpo-objty AND
objid = lt_plpo-objid AND
spras = sy-langu.
*     the following read will get the index of the correct record to be
*     updated in the cache
READ TABLE lt_pp04_cache
WITH KEY matnr = pi_matnr
werks = pi_werks
alnag = pi_alnag.
MODIFY lt_pp04_cache
INDEX sy-tabix
TRANSPORTING arbpl ktext.
ENDIF.
ENDIF.
*ENDFORM. " read_last_pp04_operation_cache
*& Form read_routing
form read_routing.
data: begin of lt_mapl occurs 0,
plnnr like mapl-plnnr,
plnal like mapl-plnal,
end of lt_mapl,
l_arbpl like crhd-arbpl.
* get all the rtg# and grp ctr# from MAPL
select plnnr plnal
into corresponding fields of table lt_mapl
from mapl
where matnr = v_matnr and
werks = itab_xcel-werks and
plnty = v_plnty and "Rate Routing
loekz = space. "with del flag = OFF
sort lt_mapl by plnal.
if not itab_xcel-verid is initial.
* if the verid = 0001 then use the 1st good rtg-grp# and grp-ctr#
if itab_xcel-verid = '0001'.
read table lt_mapl index 1.
v_plnnr = lt_mapl-plnnr.
v_plnal = lt_mapl-plnal.
else.
*   if the verid <> 0001 then use the rtg-grp# and grp-ctr# of the routing
*   whose work center on the last PP04 operation matches the given verid
loop at lt_mapl.
clear l_arbpl.
*     get the work center from the last PP04 operation
perform read_wc_on_last_pp04 using lt_mapl-plnnr
lt_mapl-plnal
changing l_arbpl.
if itab_xcel-verid = l_arbpl.
v_plnnr = lt_mapl-plnnr.
v_plnal = lt_mapl-plnal.
exit.
endif.
endloop.
endif.
else.
*   do nothing
endif.
* For version IDs other than '0000' or 'ZWIP':
if itab_xcel-verid NE '0000' and
itab_xcel-verid NE 'ZWIP'.
* if the routing group# or group counter was not determined, make the
* valid-to date 99/99/9999 so that the BDC, on execution, errors out.
if v_plnnr is initial or
v_plnal is initial.
itab_xcel-bdatu = '99/99/9999'.
endif.
endif.
endform.                    " read_routing
*& Form read_wc_on_last_pp04
form read_wc_on_last_pp04 using pi_plnnr
pi_plnal
changing pe_arbpl.
data: begin of lt_plpo occurs 0,
vornr like plpo-vornr,
objty like crhd-objty,
objid like crhd-objid,
arbpl like crhd-arbpl,
end of lt_plpo.
* get all the PP04 operations for the given rtg# & grp-ctr#
select vornr e~objty e~objid e~arbpl
into corresponding fields of table lt_plpo
from plas as b
inner join plpo as c
on b~plnty = c~plnty and
b~plnnr = c~plnnr and
b~zaehl = c~zaehl
inner join crhd as e
on c~arbid = e~objid
where b~plnty = v_plnty and "Rate Routing
b~plnnr = pi_plnnr and
b~plnal = pi_plnal and
c~loekz = space and "Oper Del Flag = OFF
c~steus = v_plpo_steus and "PP04
e~objty = v_objty. "WC Obj Type = 'A'
* read the last operation
sort lt_plpo by vornr descending.
read table lt_plpo index 1.
pe_arbpl = lt_plpo-arbpl.
endform. " read_wc_on_last_pp04
Go to LSMW -> select the Direct Input method in the 1st step. These are the standard programs for data transfer.
Otherwise go to SPRO -> SAP Reference IMG; under this you'll find the standard data transfer programs, module-wise.
Reward points,
Shakir -
Connection pooling and auditing on an oracle database
Integration of a weblogic application with an oracle backend,
connection pooling, and auditing: 2 conflicting requirements?
Problem statement:
We are in the process of maintaining a legacy client-server application where
the client is written in PowerBuilder and the backend uses an Oracle database.
Almost all business logic is implemented in stored procedures on the database.
When working in client/server mode, one PowerBuilder user has a one-to-one relation
with a connection (session) on the Oracle database.
It is a requirement that the database administrator must see the real user connected
to the database and NOT some kind of superuser; therefore in the PowerBuilder app each
user connects to the database with his own username. (Each user is configured on the
database via a separate PowerBuilder security app.)
For the PowerBuilder app all is fine, and this app can maintain conversational state
(setting and reading of global variables in Oracle packages).
Management is pushing for a web-based application where we will be using the
BEA WebLogic app server (J2EE based).
We have built a business app which is web-based and accesses the same Oracle
backend app as the PowerBuilder app does.
The first version of this web-based app uses a custom-built connector (based
on the JCA standard and derived from a template provided by the WebLogic
Integration installation).
This custom-built connector is essentially a combination of a custom realm (in
WebLogic terms) and a degraded connection pool, where each web session (browser)
has a one-to-one relation with the backend database.
The reason this custom connector combines the security functionality and the
pooling functionality is that each user must be authenticated against the Oracle
database (security requirement) and NOT against an LDAP server, and we are using
a stateful backend (Oracle packages), which would make it difficult to reuse
connections.
A problem that surfaced while doing heavy load testing with the custom connector
is that sometimes connections are closed and new ones made in the midst of a
transaction.
If you imagine a scenario where a session bean creates a business entity, and
the session bean calls one entity bean for the header and one entity bean for the
detail, then the header and detail must be created in the same transaction AND
with the same connection (there is a parent-child relationship between header and
detail enforced on the backend database via primary and foreign keys).
We have not yet found out why WebLogic is closing the connection!
A second problem that we are experiencing with the custom connector is the use
of CMP (container-managed persistence) within entity beans.
The J2EE developers state that the use of CMP decreases development time and
thus also maintenance costs.
We have not yet found a way to integrate a custom connector with the CMP
persistence scheme!
In order to solve our load-testing and CMP persistence problems, I was asked
to come up with a solution which should not use a custom connector, but standard
connection pools from WebLogic.
To resolve the authentication problem on WebLogic, I could make a custom realm
which connects to the backend database with the username and password; if the
connection is OK, I could consider this user as authenticated in WebLogic.
That still leaves me with the problem of auditing and pooling.
If I were to use a standard connection pool, then all transactions made in the
Oracle database would be done by a pool user or superuser, a solution which will
be rejected by our local security officer, because you cannot see which real
user made a transaction in the database.
I could still use the connection pool and, in the application, advise the
application developers to set an Oracle package variable with the real user;
then, on arrival of the request in the database, the logic could use this
package variable to set the transaction user.
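One way to picture this package-variable approach: the pool logs in as a single technical user, and the application stamps each borrowed connection with the real user before any work runs. The sketch below is a minimal in-memory model of that pattern (all class and method names here are hypothetical, purely for illustration); against a real Oracle backend the stamp would typically be set with something like DBMS_SESSION.SET_IDENTIFIER over JDBC.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical stand-in for a pooled DB connection: opened once as the
// technical pool user, tagged per request with the real end user.
final class PooledConnection {
    final String poolUser;       // the technical account the pool connected with
    String clientIdentifier;     // the real user, set per request for auditing

    PooledConnection(String poolUser) { this.poolUser = poolUser; }
}

final class SimplePool {
    private final Deque<PooledConnection> idle = new ArrayDeque<>();

    SimplePool(String poolUser, int size) {
        for (int i = 0; i < size; i++) idle.push(new PooledConnection(poolUser));
    }

    // Borrow a connection and stamp it with the real user before any work.
    PooledConnection borrow(String realUser) {
        PooledConnection c = idle.pop();
        c.clientIdentifier = realUser;  // audit logic in the DB would read this tag
        return c;
    }

    // Clear the tag on return so it never leaks to the next borrower.
    void release(PooledConnection c) {
        c.clientIdentifier = null;
        idle.push(c);
    }
}

public class AuditTagDemo {
    public static void main(String[] args) {
        SimplePool pool = new SimplePool("POOL_SUPERUSER", 2);
        PooledConnection c = pool.borrow("gino");
        System.out.println(c.poolUser + "/" + c.clientIdentifier); // POOL_SUPERUSER/gino
        pool.release(c);
        System.out.println(c.clientIdentifier); // null
    }
}
```

Note this does not solve the two problems listed next: the DBA still sees only the pool user's sessions, and container-generated CMP code knows nothing about the tag.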
There are still problems with this approach:
- The administrator of the database can still not see who is connected; he will
only see the superuser connection.
- This scheme cannot be used when you want to use CMP persistence, since it is
WebLogic that will generate the code to access the database.
I thought I had a solution when Oracle provided us with a connection pool known
as OracleOCIConnectionPool, where one connection is made by a superuser but
sessions are multiplexed over this physical pipe with the real user.
I cannot seem to properly integrate this OCI connection pool into WebLogic.
When using this pool, when we come into a bean (session or entity bean), WebLogic
wraps this pool with its own internal DataSource and gives me back a connection
for the superuser, but not one for the real user, thus setting me with my back
to the wall again.
I would appreciate it if anyone who has experienced the same problem could share
a possible solution with us in order to satisfy all the requirements (security,
auditing, CMP).
Many Thanks
Blyau Gino
[email protected]

Hi Blyau,
As Joe has already provided some technical advice,
I'll try to say something at the engineering-process level.
While migrating an application from one technology to
another, like client-server to n-tier in your case, customers and
stakeholders want to push into the new system as many old
requirements as possible. This approach is AKA "we must
have ALL of the features of the old system". Mostly it happens
because they don't know what they want. Add limited understanding
of the abilities of the new technology, and you will get a requirement
like the one you have in your hands.
I think "DBA must see the real user" is one of those. For this
type of requirement it can make sense to try to drop it,
or to understand its nature and suggest alternatives. In this
particular case it could be a system that logs user names,
login and logout times.
Blind copying of old features into an incompatible new architecture
may endanger the whole project and can result in its failure.
Hope this helps.
Regards,
Slava Imeshev -
Use of application module pool and ADF Busines Components
Hi to all;
Let's suppose a web application with about 10 CRUD forms and 15 to 20 reports or forms just to query and show data.
All the advantages of using the app module pool are clear to me.
But for those reports... just read-only and forward-only data?
I was wondering if it would be more effective and lightweight if we just took a JNDI JDBC connection, queried the data and showed it.
This imaginary application would make use of the application module pool to provide those 10 CRUD web forms and, on the other hand, would have JNDI data sources for reports.
What is your opinion about having these two architectural approaches working together in one application?
Many thanks;
Marcos Ortega
Brazil;

Hi Deepak;
BC4J in my opinion is great, and I am proud to share this opinion with all of you;
as a matter of fact, I posted this thread to help me better understand the BC4J architecture.
I think the main point of my doubt is:
is the application module pool's life cycle extra work when the job is just to read and show data?
Perhaps a document about stateful and/or stateless application service release would help me;
IMHO,
cached data must most of the time be discarded for reports; since we always want to query the database directly, the view object's clearCache() method would be called for reports.
I think it's different when we are talking about subsequent requests, where we need to span the session, view and entity states.
Forwards Thanks; -
On Threads, Pools and other beasts
I'm seeing some unexpected behaviours in our production system relating to
(AFAICS) threads and connection pools, and hope you could bring some info
about them.
Our system is built over 4 PIII Xeon processors under Linux.
We do currently have this configuration:
15 threads for the "default" queue and 10 for an special servlet one, we
decided to separate threads in two queues in order to assure our users get
always a thread besides what the rest of the system is "doing".
Even we do not have a lot of users (about 20 or so) they do generate a lot
of load as for the bussiness logic inherent to the application.
For sample, so you can understand what goes with practically any user
"action", think on this.
After a user "confirms" some data via a servlet and after executing the data
validation and bussines rules some messages are sent via JMS to the
"asynchronous" part of the system (that is running in the same weblogic
instance). After commiting the user transaction an thus releasing the
servlet thread, so it can be used by the same or other user, JMS messages
are delivered to MDBs that must transform information from on-line (servlet)
processes in different ways so they can be stored onto other systems, i.e.
into a mainframe, into an XML DB and possibly into another RDBMS. Our
configuration is that there can be as much as 10 MDB of each type (I mean
for each kind of "action" of a servlet) running concurrently and as you can
suppose those processes do take some time to communicate with destination
systems and perform their work.
We end at last with a lot of concurrent processes in our system that ends
some time with the users complaining about system responsiveness.
After all this explanation I would like to know if 25 threads for
"background" and on-line processes is too low (as I'm afraid they are). The
problem is we can't seem to increase the number of threads without being
very careful with JDBC connection pools.
Currently we have two connection pools. We do demarcate transactions in the
clients (servlets, batch processes) we have a "transacted" pool and a "non
transacted" one.
We are delegating persistence to the contanier (formally in our case we are
using TopLink persistence and it uses in it's deployment descriptor both
types of pools)
Our configuration is as follows:
Oracle pool NON Tx 60 connections
Oracle pool Tx 30 connections
initially we create 5 connections for each pool with an increment of 5 for
each one too.
From the tests I have made I have discovered that setting more threads than
the minimum amount of pools yields to this exception:
weblogic.common.ResourceException: No available connections in pool
myNonTxPool
at
weblogic.common.internal.ResourceAllocator.reserve(ResourceAllocator.java:57
8)
at
weblogic.common.internal.ResourceAllocator.reserve(ResourceAllocator.java:40
5)
at
weblogic.common.internal.ResourceAllocator.reserveNoWait(ResourceAllocator.j
ava:373)
at
weblogic.jdbc.common.internal.ConnectionPool.reserve(ConnectionPool.java:165
at
weblogic.jdbc.common.internal.ConnectionPool.reserveNoWait(ConnectionPool.ja
va:126)
at
weblogic.jdbc.common.internal.RmiDataSource.getPoolConnection(RmiDataSource.
java:194)
at
weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java
:219)
the behaviour I would expect is that in case a thread needs a connection and
there isn't any one available the thread may be blocked and it would receive
the connecion once one is released, of course as I can see in the stack the
ConnectionPool.reserveNoWait() method is behaving just the other way.
The main problem with this is that as you can see we are "forced" to spend
90 (60+30) connections to the DB (even we will never use more than 25
(15+10) simultaneously just because we must assure that at least there is
one "reserved" connection to each thread.
Our DBA thinks that it can't be possible that we spend such number of
connections that could be taken by another application(s) (as the DB is
shared with other apps)
Currently our DB system is not set as "multithreaded" so each connection
created against the DB is a process on the system and of course they are a
really scarce resource.
My question is. What would be a "fine" number of threads for an application
like this that is mainly "background-batch processing" but assuring on-line
users have their threads always available?
I have just another doubt (maybe this is not the right thread to ask for it
but...) how does the UserTransaction actually works? I mean, is the
connection given to the thread (and thus extracted from pool) as soon as the
thread begin it's work? or is it given in the instant of "commiting" to the
DB. I know maybe using TopLink changes default Weblogic CMP behaviour but I
would like to know what the "default" Weblogic behaviour is; and, what
happens when you don't start a transaction in the client and total execution
time exceeds 30 seconds? I have seen a rollback due to "exceeding" those 30
seconds althought I'm sure we do not open any transaction, what kind of
"transaction" is that? Is just a way of Weblogic to assure a thread is not
"locked" more than a certain period of time so the system never "stalls"?
Thanks in advance.
Regards.
Ignacio.

Hi Ignacio,
See my answer inline.
"Ignacio G. Dupont" <[email protected]> wrote in message
news:[email protected]...
I'm seeing some unexpected behaviors in our production system relating to
(AFAICS) threads and connection pools, and hope you could bring some info
about them.
Our system is built over 4 PIII Xeon processors under Linux.
We do currently have this configuration:
15 threads for the "default" queue and 10 for a special servlet one, we

Those numbers define the number of concurrent requests the services
associated with those queues can handle. If monitoring CPU utilization
shows that CPU load is not high, let's say less than 90%, you can increase
those numbers.
decided to separate threads in two queues in order to assure our users get
always a thread besides what the rest of the system is "doing".
Even though we do not have a lot of users (about 20 or so), they generate a lot
of load due to the business logic inherent to the application.
For example, so you can understand what goes on with practically any user
"action", think on this.

[eaten]
We end up at last with a lot of concurrent processes in our system, which
sometimes ends with the users complaining about system responsiveness.

You will have to run a load test in your QA environment and play with queue
sizes. In addition, you may want to run a profiler (like JProbe or
OptimizeIt) under maximum load to find out whether there are bottlenecks in the application.
After all this explanation I would like to know if 25 threads for
"background" and on-line processes is too low (as I'm afraid they are). The

It all depends on the usage pattern. I'd say that for a production
environment with any noticeable load, it's low.
problem is we can't seem to increase the number of threads without being
very careful with the JDBC connection pools.

Yes, you will have to increase the size of the pools to match the maximum
number of ongoing transactions. The minimum would be the number of execution
threads. The actual number should be determined either by load testing or
by setting it to a guaranteed high level.
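The sizing rule above (the pool minimum should cover every execution thread that might hold a connection at once) is simple arithmetic; a tiny sketch using the numbers from this thread (15 default-queue + 10 servlet-queue threads):

```java
// Back-of-the-envelope check of the sizing rule: if every execution thread
// may hold one connection from a pool at the same time, the minimum safe
// pool size is the total number of execution threads across all queues.
public class PoolSizing {
    static int minPoolSize(int... queueThreadCounts) {
        int total = 0;
        for (int n : queueThreadCounts) total += n;
        return total; // each thread may need its own connection simultaneously
    }

    public static void main(String[] args) {
        // 15 "default" threads + 10 servlet threads from this thread's setup.
        System.out.println(minPoolSize(15, 10)); // 25
    }
}
```

So a pool of 25 per pool type is the floor here; anything less risks the fail-fast "No available connections" exception under full load.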
Currently we have two connection pools. As we demarcate transactions in the
clients (servlets, batch processes), we have a "transacted" pool and a "non
transacted" one.
We are delegating persistence to the container (formally, in our case we are
using TopLink persistence, and it uses both types of pools in its deployment
descriptor).
Our configuration is as follows:
Oracle pool NON Tx 60 connections
Oracle pool Tx 30 connections
initially we create 5 connections for each pool with an increment of 5 for
each one too.
From the tests I have made, I have discovered that setting more threads than
the minimum size of the pools yields this exception:

That's quite natural.

weblogic.common.ResourceException: No available connections in pool
myNonTxPool

[eaten]
The behaviour I would expect is that in case a thread needs a connection and
there isn't one available, the thread would be blocked and would receive
the connection once one is released; of course, as I can see in the stack, the
ConnectionPool.reserveNoWait() method is behaving just the other way.

That would lock exec threads very quickly. A connection pool is a vital
resource that is to be available constantly. So weblogic uses a fail-fast
approach so that you can adjust settings to match the highest load.

The main problem with this is that, as you can see, we are "forced" to spend
90 (60+30) connections on the DB (even though we will never use more than 25
(15+10) simultaneously) just because we must assure that there is at least
one "reserved" connection for each thread.

That's right.
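The two acquisition policies being contrasted here can be sketched with a plain `java.util.concurrent.Semaphore` standing in for the pool's free-connection count: `reserveNoWait()` corresponds to `tryAcquire()` (fail fast), while the blocking behaviour the poster expected corresponds to the timed `tryAcquire`. This is only an illustration of the two policies, not WebLogic's actual implementation.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class AcquisitionPolicies {
    public static void main(String[] args) throws InterruptedException {
        // A pool with exactly one free connection.
        Semaphore freeConnections = new Semaphore(1);

        // First caller drains the pool.
        boolean first = freeConnections.tryAcquire();

        // Fail-fast: an empty pool is reported immediately, analogous to
        // weblogic.common.ResourceException "No available connections".
        boolean failFast = freeConnections.tryAcquire();

        // Blocking variant: wait up to a timeout for another caller's release;
        // nobody releases here, so this also fails, but only after waiting.
        boolean blocking = freeConnections.tryAcquire(50, TimeUnit.MILLISECONDS);

        System.out.println(first + " " + failFast + " " + blocking); // true false false

        // Once a connection is released, acquisition succeeds again.
        freeConnections.release();
        System.out.println(freeConnections.tryAcquire()); // true
    }
}
```

The fail-fast choice keeps execution threads from stalling on an exhausted pool, which is exactly why the pool must be sized for the worst case rather than the average.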
Our DBA thinks that it can't be possible that we spend such a number of
connections that could be taken by other application(s) (as the DB is
shared with other apps).

I don't think that's a correct observation. Oracle can be configured to handle
more connections. I have seen weblogic pools configured to handle 200
connections.

Currently our DB system is not set as "multithreaded", so each connection
created against the DB is a process on the system, and of course they are a
really scarce resource.

Application demand for resources should be satisfied.
My question is: what would be a "fine" number of threads for an application
like this that is mainly "background-batch processing", but assuring on-line
users always have their threads available?

It should be high enough to satisfy the requirement to handle a given number
of concurrent requests on the given hardware. Normally this is determined
by load testing and a gradual increase of this number
to the point where you see that the hardware (seen as CPU load)
cannot handle it. By the way, this point is sometimes unreachable,
as the application becomes DB-bound, i.e. the bottleneck is shifted to
the database.
I have just another doubt (maybe this is not the right thread to ask it
in, but...): how does the UserTransaction actually work? I mean, is the
connection given to the thread (and thus taken from the pool) as soon as the
thread begins its work, or is it given at the instant of "committing" to the

It's given when a connection, assuming it's obtained from a TxDataSource,
is requested.
DB. I know maybe using TopLink changes the default Weblogic CMP behavior, but I
would like to know what the "default" Weblogic behavior is; and what
happens when you don't start a transaction in the client and the total execution
time exceeds 30 seconds? I have seen a rollback due to "exceeding" those 30
seconds although I'm sure we do not open any transaction; what kind of
"transaction" is that? Is it just a way for Weblogic to assure a thread is not
"locked" for more than a certain period of time so the system never "stalls"?

For instance, stateful session beans are transactional.

Basically, no, it's not. There is no way to "unlock" a thread after a certain
period.
So when a queue has finished processing, the TX monitor checks the timeout,
and if there is one, issues a corresponding rollback. So, it's possible
for a thread to run for 10 hours even if the timeout is 30 seconds.
Since 7.0 weblogic is capable of detecting such situations, so that
the administrator can be informed about it and required actions can be
taken [on the application side].
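The timeout behaviour just described (no pre-emptive unlock of the thread, only a check against the timeout when the work completes, followed by a rollback instead of a commit) can be sketched as follows. The `complete` method is a hypothetical stand-in for the TX monitor's completion check, not WebLogic code.

```java
// Sketch: the container never interrupts a running thread; it only compares
// elapsed time against the transaction timeout when the work finishes, and
// then rolls back instead of committing if the timeout was exceeded.
public class TxTimeoutDemo {
    enum Outcome { COMMITTED, ROLLED_BACK }

    // Hypothetical stand-in for the TX monitor's check at completion time.
    static Outcome complete(long startMillis, long endMillis, long timeoutMillis) {
        long elapsed = endMillis - startMillis;
        return elapsed > timeoutMillis ? Outcome.ROLLED_BACK : Outcome.COMMITTED;
    }

    public static void main(String[] args) {
        // A 10-hour job with a 30-second timeout runs to the very end,
        // and only then is rolled back - matching Slava's description.
        long tenHours = 10L * 60 * 60 * 1000;
        System.out.println(complete(0, tenHours, 30_000)); // ROLLED_BACK
        System.out.println(complete(0, 20_000, 30_000));   // COMMITTED
    }
}
```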
Hope this helps.
Regards,
Slava Imeshev -
HttpSession vs. Stateful Session Bean -- when session state is large
I hope most people have come across this issue of where to put the state
for internet/intranet-based applications when they are using
servlets/JSPs calling session beans. WebLogic 4.5.1 does support HttpSession
in-memory replication for servlets, but stateful session beans are
not replicated in a clustered environment. Plus, with a stateful bean you get
activation/passivation overhead too. So one is tempted to use stateless session
beans and store state in the HttpSession?
Then what is the upper limit for the session state one can put in an
HttpSession with WebLogic? Is there any way to configure it?
One way to overcome the HttpSession size limitation is to use a database for
storing state and just store some unique IDs for the session info in the
HttpSession. But then there will be overhead for the database connection (a
JDBC connection pool can provide some help here). So what is the recommended
way of doing this, given a few thousand concurrent clients with a session size
exceeding, say, 4 KB per client?
Thanks
Usmani

There are no special settings in sun-ejb-jar.xml regarding cache settings. The default settings from server.xml are used:
<jdbc-connection-pool steady-pool-size="8" max-pool-size="32" max-wait-time-in-millis="60000" pool-resize-quantity="2" idle-timeout-in-seconds="300" is-isolation-level-guaranteed="false" is-connection-validation-required="false" connection-validation-method="auto-commit" fail-all-connections="false" datasource-classname="oracle.jdbc.pool.OracleDataSource" name="ebs">
<property value="jdbc:oracle:thin:@myebsdbsserver:1521:ebsdevdb" name="url"/>
<property value="ebs" name="user"/>
<property value="ebs" name="password"/>
</jdbc-connection-pool>
<ejb-container steady-pool-size="32" pool-resize-quantity="16" max-pool-size="64" cache-resize-quantity="32" max-cache-size="512" pool-idle-timeout-in-seconds="600" cache-idle-timeout-in-seconds="600" removal-timeout-in-seconds="5400" victim-selection-policy="nru" commit-option="B" monitoring-enabled="true">
</ejb-container>
The session bean uses container-managed transactions. Is it possible in this case that the bean isn't 'idle enough' to be passivated? -
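The idea floated in the question above, storing only a small unique ID in the HttpSession and keeping the large state in the database, can be sketched as follows. This is a minimal stand-alone illustration: the HttpSession and the database are stood in for by plain maps, and the names (SessionStateStore, createSession, loadState) are hypothetical, not from WebLogic or any product mentioned here.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative stand-in for a DB-backed session-state store: the HttpSession
// holds only a small key; the (potentially large) state lives elsewhere.
public class SessionStateStore {
    // Stand-in for the database table keyed by session id.
    private final Map<String, Map<String, Object>> backingStore = new HashMap<>();

    // Called at login: create a key to put in the HttpSession.
    public String createSession(Map<String, Object> largeState) {
        String key = UUID.randomUUID().toString();
        backingStore.put(key, largeState);
        return key; // only this short string goes into the HttpSession
    }

    // Called on each request: load the state by the key kept in the HttpSession.
    public Map<String, Object> loadState(String key) {
        return backingStore.get(key);
    }

    public static void main(String[] args) {
        SessionStateStore store = new SessionStateStore();
        Map<String, Object> state = new HashMap<>();
        state.put("cart", "3 items");
        String key = store.createSession(state);
        System.out.println(store.loadState(key).get("cart")); // prints 3 items
    }
}
```

With this shape, only the key is replicated with the HttpSession; the trade-off is one extra store lookup per request, which is where the connection pool mentioned in the question earns its keep.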
Hi, I need material for module pool programming and ALVs.
Hi,
I need material for module pool programming and ALVs (not object-oriented). If anyone has any, please send it to my
ID [email protected]
Thanks in advance.
Hi
Check the below link:
http://wiki.ittoolbox.com/index.php/FAQ:What_is_module_pool_program_in_abap%3F
http://help.sap.com/saphelp_46c/helpdata/en/35/26b1aaafab52b9e10000009b38f974/content.htm
http://sap.mis.cmich.edu/sap-abap/abap09/sld011.htm
http://sap.mis.cmich.edu/sap-abap/abap09/index.htm
http://www.geocities.com/ZSAPcHAT
http://www.allsaplinks.com/files/using_table_in_screen.pdf
http://help.sap.com/saphelp_webas630/helpdata/en/9f/db9cdc35c111d1829f0000e829fbfe/content.htm
http://www.sapdevelopment.co.uk/dialog/dialoghome.htm
http://www.sap-img.com/
http://help.sap.com/saphelp_46c/helpdata/en/08/bef2dadb5311d1ad10080009b0fb56/content.htm
http://www.sapgenie.com/links/abap.htm
http://help.sap.com/saphelp_nw04/helpdata/en/c9/5472fc787f11d194c90000e8353423/frameset.htm
You can also check the transaction ABAPDOCU, which gives you a lot of sample programs.
You can also see the examples below:
Go to SE38, enter DEMODYNPRO and press F4.
You will get a list of demo module pool programs.
One more T-code is ABAPDOCU.
You can find more examples there.
See the programs:
DEMO_DYNPRO_TABLE_CONTROL_1 Table Control with LOOP Statement
DEMO_DYNPRO_TABLE_CONTROL_2 Table Control with LOOP AT ITAB
ALV
ABAP List Viewer
The common features of report are column alignment, sorting, filtering, subtotals, totals etc. To implement these, a lot of coding and logic is to be put. To avoid that we can use a concept called ABAP List Viewer (ALV).
This helps us to implement all the features mentioned very effectively.
Using ALV, We can have three types of reports:
1. Simple Report
2. Block Report
3. Hierarchical Sequential Report
There are some function modules which enable you to produce the above reports without much effort.
All the definitions of internal tables, structures and constants are declared in a type-pool called SLIS.
1. SIMPLE REPORT.
The important function modules are
a. Reuse_alv_list_display
b. Reuse_alv_fieldcatalog_merge
c. Reuse_alv_events_get
d. Reuse_alv_commentary_write
e. Reuse_alv_grid_display
A. REUSE_ALV_LIST_DISPLAY : This is the function module which prints the data.
The important parameters are :
I. Export :
i. I_callback_program : report id
ii. I_callback_pf_status_set : routine where a user can set his own pf status or change the functionality of the existing pf status
iii. I_callback_user_command : routine where the function codes are handled
iv. I_structure name : name of the dictionary table
v. Is_layout : structure to set the layout of the report
vi. It_fieldcat : internal table with the list of all fields and their attributes which are to be printed (this table can be populated automatically by the function module REUSE_ALV_FIELDCATALOG_MERGE)
vii. It_events : internal table with a list of all possible events of ALV and their corresponding form names.
II. Tables :
i. t_outtab : internal table with the data to be output
B. REUSE_ALV_FIELDCATALOG_MERGE : This function module is used to populate a field catalog, which is essential to display the data in ALV. If the output data is from a single dictionary table and all the columns are selected, then we need not explicitly create the field catalog; it's enough to mention the table name as a parameter (I_structure_name) in REUSE_ALV_LIST_DISPLAY. But in other cases we need to create it.
The Important Parameters are :
I. Export :
i. I_program_name : report id
ii. I_internal_tabname : the internal output table
iii. I_inclname : include or the report name where all the dynamic forms are handled.
II Changing
ct_fieldcat : an internal table with the type SLIS_T_FIELDCAT_ALV which is
declared in the type pool SLIS.
C. REUSE_ALV_EVENTS_GET : Returns table of possible events for a list type
Parameters :
I. Import :
Et_Events : The event table is returned with all possible CALLBACK events
for the specified list type (column 'NAME'). For events to be processed by Callback, their 'FORM' field must be filled. If the field is initialized, the event is ignored. The entry can be read from the event table, the field 'FORM' filled and the entry modified using constants from the type pool SALV.
II. Export :
I_List_type :
0 = simple list REUSE_ALV_LIST_DISPLAY
1 = hierarchical-sequential list REUSE_ALV_HIERSEQ_LIST_DISPLAY
2 = simple block list REUSE_ALV_BLOCK_LIST_APPEND
3 = hierarchical-sequential block list
REUSE_ALV_BLOCK_LIST_HS_APPEND
D. REUSE_ALV_COMMENTARY_WRITE : This is used in the Top-of-page event to print the headings and other comments for the list.
Parameters :
I. it_list_commentary : internal table with the headings of the type slis_t_listheader.
This internal table has three fields :
Typ : H - header, S - selection, A - action
Key : filled only when Typ is S.
Info : the text to be printed
E. REUSE_ALV_GRID_DISPLAY : A new function module in release 4.6, to display the results in a grid rather than as a classic list.
Parameters : same as reuse_alv_list_display
This is an example for simple list.
2. BLOCK REPORT
This is used to have multiple lists continuously.
The important functions used in this report are:
A. REUSE_ALV_BLOCK_LIST_INIT
B. REUSE_ALV_BLOCK_LIST_APPEND
C. REUSE_ALV_BLOCK_LIST_HS_APPEND
D. REUSE_ALV_BLOCK_LIST_DISPLAY
A. REUSE_ALV_BLOCK_LIST_INIT
Parameters:
I. I_CALLBACK_PROGRAM
II. I_CALLBACK_PF_STATUS_SET
III. I_CALLBACK_USER_COMMAND
This function module is used to set the default gui status etc.
B. REUSE_ALV_BLOCK_LIST_APPEND
Parameters :
Export :
I. is_layout : layout settings for block
II. it_fieldcat : field catalog
III. i_tabname : internal table name with output data
IV. it_events : internal table with all possible events
Tables :
i. t_outtab : internal table with output data.
This function module adds the data to the block.
Repeat this function for all the different blocks to be displayed one after the other.
C. REUSE_ALV_BLOCK_LIST_HS_APPEND
This function module is used for hierarchical sequential blocks.
D. REUSE_ALV_BLOCK_LIST_DISPLAY
Parameters : All the parameters are optional.
This function module displays the list with the data appended by the above functions.
Here the functions REUSE_ALV_FIELDCATALOG_MERGE, REUSE_ALV_EVENTS_GET, REUSE_ALV_COMMENTARY_WRITE can be used.
3. Hierarchical reports :
Hierarchical sequential list output.
The function module is
A. REUSE_ALV_HIERSEQ_LIST_DISPLAY
Parameters:
I. Export:
i. I_CALLBACK_PROGRAM
ii. I_CALLBACK_PF_STATUS_SET
iii. I_CALLBACK_USER_COMMAND
iv. IS_LAYOUT
v. IT_FIELDCAT
vi. IT_EVENTS
vii. i_tabname_header : Name of the internal table in the program containing the
output data of the highest hierarchy level.
viii. i_tabname_item : Name of the internal table in the program containing the
output data of the lowest hierarchy level.
ix. is_keyinfo : This structure contains the header and item table field
names which link the two tables (shared key).
II. Tables
i. t_outtab_header : Header table with the data to be output
ii. t_outtab_item : Item table with the data to be output (lowest hierarchy level)
slis_t_fieldcat_alv : This internal table contains the field attributes. This internal table can be populated automatically by using REUSE_ALV_FIELDCATALOG_MERGE.
Important Attributes :
A. col_pos : position of the column
B. fieldname : internal fieldname
C. tabname : internal table name
D. ref_fieldname : fieldname (dictionary)
E. ref_tabname : table (dictionary)
F. key(1) : column with key-color
G. icon(1) : icon
H. symbol(1) : symbol
I. checkbox(1) : checkbox
J. just(1) : (R)ight (L)eft (C)ent.
K. do_sum(1) : sum up
L. no_out(1) : (O)blig.(X)no out
M. outputlen : output length
N. seltext_l : long key word
O. seltext_m : middle key word
P. seltext_s : short key word
Q. reptext_ddic : heading (ddic)
R. ddictxt(1) : (S)hort (M)iddle (L)ong
S. datatype : datatype
T. hotspot(1) : hotspot
Simple ALV report
http://www.sapgenie.com/abap/controls/alvgrid.htm
http://wiki.ittoolbox.com/index.php/Code:Ultimate_ALV_table_toolbox
ALV
1. Please give me general info on ALV.
http://www.sapfans.com/forums/viewtopic.php?t=58286
http://www.sapfans.com/forums/viewtopic.php?t=76490
http://www.sapfans.com/forums/viewtopic.php?t=20591
http://www.sapfans.com/forums/viewtopic.php?t=66305 - this one discusses which way should you use - ABAP Objects calls or simple function modules.
2. How do I program double click in ALV?
http://www.sapfans.com/forums/viewtopic.php?t=11601
http://www.sapfans.com/forums/viewtopic.php?t=23010
3. How do I add subtotals (I have problem to add them)...
http://www.sapfans.com/forums/viewtopic.php?t=20386
http://www.sapfans.com/forums/viewtopic.php?t=85191
http://www.sapfans.com/forums/viewtopic.php?t=88401
http://www.sapfans.com/forums/viewtopic.php?t=17335
4. How to add list heading like top-of-page in ABAP lists?
http://www.sapfans.com/forums/viewtopic.php?t=58775
http://www.sapfans.com/forums/viewtopic.php?t=60550
http://www.sapfans.com/forums/viewtopic.php?t=16629
5. How to print page number / total number of pages X/XX in ALV?
http://www.sapfans.com/forums/viewtopic.php?t=29597 (no direct solution)
6. ALV printing problems. The favourite is: The first page shows the number of records selected but I don't need this.
http://www.sapfans.com/forums/viewtopic.php?t=64320
http://www.sapfans.com/forums/viewtopic.php?t=44477
7. How can I set the cell color in ALV?
http://www.sapfans.com/forums/viewtopic.php?t=52107
8. How do I print a logo/graphics in ALV?
http://www.sapfans.com/forums/viewtopic.php?t=81149
http://www.sapfans.com/forums/viewtopic.php?t=35498
http://www.sapfans.com/forums/viewtopic.php?t=5013
9. How do I create and use input-enabled fields in ALV?
http://www.sapfans.com/forums/viewtopic.php?t=84933
http://www.sapfans.com/forums/viewtopic.php?t=69878
10. How can I use ALV for reports that are going to be run in background?
http://www.sapfans.com/forums/viewtopic.php?t=83243
http://www.sapfans.com/forums/viewtopic.php?t=19224
11. How can I display an icon in ALV? (Common requirement is traffic light icon).
http://www.sapfans.com/forums/viewtopic.php?t=79424
http://www.sapfans.com/forums/viewtopic.php?t=24512
12. How can I display a checkbox in ALV?
http://www.sapfans.com/forums/viewtopic.php?t=88376
http://www.sapfans.com/forums/viewtopic.php?t=40968
http://www.sapfans.com/forums/viewtopic.php?t=6919
Go through these programs; they may help you get some hands-on practice.
ALV Demo program
BCALV_DEMO_HTML
BCALV_FULLSCREEN_DEMO ALV Demo: Fullscreen Mode
BCALV_FULLSCREEN_DEMO_CLASSIC ALV demo: Fullscreen mode
BCALV_GRID_DEMO Simple ALV Control Call Demo Program
BCALV_TREE_DEMO Demo for ALV tree control
BCALV_TREE_SIMPLE_DEMO
BC_ALV_DEMO_HTML_D0100
Reward points if useful
Regards
Anji -
Oracle Application server connection pool and database links
I am using Oracle Application Server 10g with connection pools. The database used by the application connects to another Oracle database using a database link. My question: when the application starts and creates 10 connections, does it also create a corresponding number of database links?
Hi,
Is there any way to use the connection pool or DataSource while connecting to the database? If I am using a stateless session bean and a data access layer that just creates a database session to write the persistent TopLink objects, how can I make use of the application server connection pool?
Hi Vinod,
Yes, TopLink allows you to use the app server's connection pooling and transaction services. Chapter 2 of the Oracle9iAS TopLink Foundation Library Guide provides details as do the TopLink examples. The easiest way to set this up is by using the sessions.xml file. The sample XML below is from the file <toplink903>\examples\ias\examples\ejb\sessionbean\sessions.xml. Here we are adding the datasource defined in OC4J and specifying that we are using the OC4J transaction controller also.
<login>
<user-name>sa</user-name>
<password></password>
<datasource>java:comp/env/jdbc/ejbJTSDataSource</datasource>
<uses-external-transaction-controller>true</uses-external-transaction-controller>
<uses-external-connection-pool>true</uses-external-connection-pool>
</login>
<external-transaction-controller-class>oracle.toplink.jts.oracle9i.Oracle9iJTSExternalTransactionController</external-transaction-controller-class>
When using this approach you need to change your TopLink code slightly in the EJB methods:
a. Acquire the ACTIVE unit of work from the server session (again, see the EmployeeSessionEJB code example) with something like:
UnitOfWork uow = clientSession.getActiveUnitOfWork();
b. Calls to uow.commit() can be omitted or commented out because the EJB will handle this. Note that, of course, the methods you create in the EJB that use this approach must have TX Required (the default).
Hope this helps.
Pete -
Hello
I am a new user, so first of all: I am Sara, I'm Italian and
new to Flex and ActionScript development :)
I am trying to build an LDAP browser using Flex 3 and Java EE
(Tomcat, Java 6, BlazeDS).
My application is composed of two mx:states: a login state
and an application state accessible only if the user authenticates
against our LDAP server.
The authentication part looks good: I create a RemoteObject
that performs the various lookups in LDAP. Since I need to store
LDAP environment information, I also create a FlexSession object
in which this information is stored.
Through the Tomcat SessionManager I can verify that the LDAP
environment data are stored correctly.
At this point, I need to "populate" my main state, the
authState, if the user is authenticated.
I want to show to the user some data regarding his LDAP
entry, for example the principal. Using the same FlexSession class,
in the login phase I add to the session some other data such as the
user's principal string. The Tomcat session manager shows me that
the attribute "principal" is not null, but the flex client does not
retrieve this string.
While debugging I notice that the method by which the client
should retrieve the string is not called. The flex client correctly
calls the login() method but not the getPrincipal() method. If I
try to call it directly from actionscript, for example
[Bindable]
public var princ:String = "EMPTY";
public function doLogin():void {
    if (session.login(txUsername.text, txPassword.text)) {
        princ = session.getPrincipal().lastResult as String;
        lbWelcome.text = "Welcome, " + princ;
    }
}
the client throws an exception because session.getPrincipal()
returns a null object, BUT the attribute "principal" is populated
in the session.login() method (and exists in the tomcat session).
My doubt is: is FlexSession bound to a specific mx:state?
Should I retrieve the existing session in an other RemoteObject,
and then try to call attributes in the retrieved session? And how?
Thanks in advance
Sara
You have to put these statements in the result handler, because here you have just sent a request to the server and in the very next statement you are checking the result. At that point the result has not arrived yet; some time passes before the response comes back from the server.
princ = session.getPrincipal().lastResult as String;
lbWelcome.text = "Welcome, " + princ;
So handle everything in the callback.
<mx:RemoteObject id="Server" destination="ServerDestination" fault="faultHandler(event)">
<mx:method name="getPrincipal" result="getPrincipalHandler(event)"/>
</mx:RemoteObject>
private function getPrincipalHandler(event:ResultEvent):void {
    princ = event.result as String;
}
Try this, and let me know your results or concerns... -
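The timing issue described above is not Flex-specific; any asynchronous call has the same shape. As a language-neutral illustration, here is a small Java analogy using CompletableFuture as a stand-in for the RemoteObject call (the getPrincipal name and the returned principal string are invented for the demo):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AsyncCallbackDemo {
    // Stand-in for the remote getPrincipal() call: completes later, not inline.
    static CompletableFuture<String> getPrincipal() {
        return CompletableFuture.supplyAsync(() -> {
            try { TimeUnit.MILLISECONDS.sleep(50); } catch (InterruptedException e) { }
            return "uid=sara,ou=people";
        });
    }

    public static void main(String[] args) {
        CompletableFuture<String> call = getPrincipal();
        // Reading the result immediately (like lastResult right after the send)
        // typically sees nothing, because the response has not arrived yet.
        System.out.println("done immediately after send? " + call.isDone());
        // The right place to use the value is the callback:
        CompletableFuture<Void> handled =
            call.thenAccept(principal -> System.out.println("Welcome, " + principal));
        handled.join(); // demo only: wait so the callback runs before the JVM exits
    }
}
```

The Flex result handler plays exactly the role of thenAccept here: the value is consumed when it arrives, not on the line after the request is sent.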
Implicit vs explicit close of resultsets and statements?
Hi friends, I am a newbie Java developer. Okay, here goes:
I have just made a LAN-based Java application using Swing and JDBC, with MS Access as the backend. The backend is on a shared network drive.
The application is distributed as jar files on the LAN PCs.
Everywhere I have connected to the database, I have just closed the connection explicitly, like this:
con.close();
I do not close the associated ResultSet and Statement explicitly.
The specification says associated statements and result sets close when you close
the connection, even if you don't explicitly close them.
Also, I am not using a connection pool; it's a simple basic connection using DriverManager:
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
String url = "jdbc:odbc:<DSN name>";
String user = "";
String pw = "";
con = DriverManager.getConnection(url, user, pw);
Statement stmt = con.createStatement();
String select = "";
ResultSet rows = stmt.executeQuery(select);
On the net everyone says to explicitly close everything, but I did not know that earlier.
If the specification says everything closes on
closing the connection, why do people insist on closing everything explicitly?
Or is this driver-dependent? Don't the drivers follow the specification?
My driver is the Sun JDBC-ODBC bridge.
I found this method DriverManager.setLogWriter().
It prints out a trace of all JDBC operations.
So I ran a sample program with this method included
and redirected the output to a log file.
In that program I just explicitly close the connection without closing the
statements and result sets explicitly.
After running the program and looking at the log, I saw that the statements
and result sets are closed implicitly if I just close the connection explicitly.
I am including the log file and the code.
Have a look at the end of the log file.
Code
import java.sql.*;
import java.io.*;
class gc4test {
    public static void main(String args[]) {
        Connection con = null;
        try {
            // Send a trace of all JDBC operations to a log file
            FileWriter fwTrace = new FileWriter("c:\\log.txt");
            PrintWriter pwTrace = new PrintWriter(fwTrace);
            DriverManager.setLogWriter(pwTrace);
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            String url = "jdbc:odbc:pravahcon";
            String user = "admin";
            String pw = "ash123";
            con = DriverManager.getConnection(url, user, pw);
            Statement stmt = con.createStatement();
            Statement stmt1 = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
            Statement stmt2 = con.createStatement();
            Statement stmt3 = con.createStatement();
            Statement stmt4 = con.createStatement();
            Statement stmt5 = con.createStatement();
            Statement stmt6 = con.createStatement();
            Statement stmt7 = con.createStatement();
            String select = "SELECT * FROM Users";
            ResultSet rows = stmt.executeQuery(select);
            ResultSet rows1 = stmt1.executeQuery(select);
            while (rows.next()) {
                // just walk the result set
            }
            // Only the connection is closed explicitly; the statements and
            // result sets are left to be closed implicitly
            con.close();
        } catch (ClassNotFoundException f) {
            System.out.println(f.getMessage());
            System.exit(0);
        } catch (SQLException g) {
            System.out.println(g.getMessage());
            System.exit(0);
        } catch (Exception e) {
            System.out.println(e.getMessage());
            System.exit(0);
        }
    }
}
End of Log File
Setting statement option (SQLSetStmtAttr), hStmt=50275112, fOption=25
Fetching (SQLFetch), hStmt=50274224
[... the same SQLFetch line repeated 20 times ...]
End of result set (SQL_NO_DATA)
*Connection.close
8 Statement(s) to close
*Statement.close
Free statement (SQLFreeStmt), hStmt=50281544, fOption=1
deregistering Statement sun.jdbc.odbc.JdbcOdbcStatement@2e7263
*Statement.close
Free statement (SQLFreeStmt), hStmt=50277224, fOption=1
deregistering Statement sun.jdbc.odbc.JdbcOdbcStatement@1bf216a
*Statement.close
*ResultSet.close
*ResultSet has been closed
Free statement (SQLFreeStmt), hStmt=50274224, fOption=1
deregistering Statement sun.jdbc.odbc.JdbcOdbcStatement@156ee8e
*Statement.close
Free statement (SQLFreeStmt), hStmt=50280464, fOption=1
deregistering Statement sun.jdbc.odbc.JdbcOdbcStatement@c20e24
*Statement.close
Free statement (SQLFreeStmt), hStmt=50278304, fOption=1
deregistering Statement sun.jdbc.odbc.JdbcOdbcStatement@12ac982
*Statement.close
*ResultSet.close
*ResultSet has been closed
Free statement (SQLFreeStmt), hStmt=50275112, fOption=1
deregistering Statement sun.jdbc.odbc.JdbcOdbcStatement@e0e1c6
*Statement.close
Free statement (SQLFreeStmt), hStmt=50276144, fOption=1
deregistering Statement sun.jdbc.odbc.JdbcOdbcStatement@6ca1c
*Statement.close
Free statement (SQLFreeStmt), hStmt=50279384, fOption=1
deregistering Statement sun.jdbc.odbc.JdbcOdbcStatement@1389e4
Disconnecting (SQLDisconnect), hDbc=50271048
Closing connection (SQLFreeConnect), hDbc=50271048
Closing environment (SQLFreeEnv), hEnv=50270880
So how are these implicitly closed statements and result sets different from explicitly closed ones?
Please do not crosspost/doublepost the same question again. It is rude in terms of netiquette.
Stick to one topic: [http://forums.sun.com/thread.jspa?threadID=5393387&messageID=10745794#10745794].
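Whatever a particular driver does when the connection is closed, the portable idiom on current Java versions is try-with-resources, which closes each resource explicitly and in reverse order of creation. A sketch with stand-in classes, so it runs without a database (Resource and CloseOrderDemo are illustrative, not JDBC types):

```java
// Demonstrates the close ordering that try-with-resources guarantees:
// resources are closed in reverse order of creation, so the "ResultSet"
// and "Statement" are explicitly closed before the "Connection".
public class CloseOrderDemo {
    static final StringBuilder log = new StringBuilder();

    static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) { this.name = name; }
        @Override public void close() { log.append("close ").append(name).append("; "); }
    }

    public static void main(String[] args) {
        try (Resource con = new Resource("Connection");
             Resource stmt = new Resource("Statement");
             Resource rs = new Resource("ResultSet")) {
            // use the resources...
        }
        System.out.println(log); // close ResultSet; close Statement; close Connection;
    }
}
```

With real JDBC types (Connection, Statement, ResultSet all implement AutoCloseable), the same block shape removes the question of whether the driver cleans up children for you, since everything is closed explicitly even when an exception is thrown mid-block.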