Performance Test resulting in more EJB bean instances
Hi Guys,
I am trying to profile my application using OptimizeIT.
I am conducting a load test using LoadRunner, but for the test I am using only one
virtual client continuously repeating the same operation for a period of an hour
or so. I expect only one entity bean instance to cater to the needs. What I observe
from OptimizeIt is that the number of entity bean instances continuously increases.
My question is: when the same thread is doing the operation, the entity bean instance
which catered to the need during the first round should be able to process the client
request the second time. Why should the number of bean instances continuously increase?
Thanks in advance,
Kumar
Kumar Raman wrote:
Hi Rob,
I am unable to send the .snp file as the file size is coming out to be 6 MB, which
our mail server is not allowing through (we have a corporate limit of
3 MB). If you have any other way across, please let me know.
Did you try compressing it? Or just split it into multiple files and
send them separately. If none of that works, send me a private email,
and I can get you an FTP upload.
As regards the 2 questions:
1) I know why two instances are getting created, as I can see the code here.
But I really wanted to know when these instances will be released from memory.
They'll be kept in the cache at least until the transaction ends. Since
you're deleting them, they'll be removed from the cache and sent to the
pool when the tx completes.
Is this going to be there till the defined pool size is filled? I haven't defined
any pool size in our configuration. I believe the default size is 1000.
Yes, they will be in the pool, and the default pool size is 1000.
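For reference, the pool size can also be set explicitly in weblogic-ejb-jar.xml. A minimal sketch, assuming the WebLogic 8.1-era descriptor format (the bean name is illustrative, not from the poster's application):

```xml
<weblogic-enterprise-bean>
  <ejb-name>AccountBean</ejb-name>
  <entity-descriptor>
    <pool>
      <!-- caps how many instances are kept in the free pool -->
      <max-beans-in-free-pool>1000</max-beans-in-free-pool>
    </pool>
  </entity-descriptor>
</weblogic-enterprise-bean>
```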
2) As regards the 2nd question, the add/delete are running in different transactions.
I wanted to know whether the instances created during add will be used for
the delete operation as well.
They can/should be the same instance. What is your concurrency-strategy
setting for this bean? I know that in the past, exclusive concurrency
was not reusing bean instances as well as some of the other concurrency
strategies (e.g. database/optimistic).
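The concurrency strategy is likewise set in weblogic-ejb-jar.xml. A sketch, again assuming the 8.1-era descriptor (bean name illustrative):

```xml
<weblogic-enterprise-bean>
  <ejb-name>AccountBean</ejb-name>
  <entity-descriptor>
    <entity-cache>
      <!-- Exclusive | Database | ReadOnly | Optimistic -->
      <concurrency-strategy>Database</concurrency-strategy>
    </entity-cache>
  </entity-descriptor>
</weblogic-enterprise-bean>
```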
3) Also, for each of the bean instances, will there be a corresponding home instance
floating in memory? I feel the home instances should be reusable.
There's just 1 home instance for the deployment, not 1 per bean.
In the case of simple entity bean creation in WebLogic, how many objects will be
created, vis-à-vis the home object, remote object and so on?
You'll need a bean interface (local and/or remote) and a bean
implementation class.
The number of instances which OptimizeIt shows is beyond my understanding.
I wanted to know whether there is any configuration to help me optimize these creations.
Ok, let's try to get the snapshot to me so I can help you out.
-- Rob
Thanks,
Kumar
Rob Woollen <[email protected]> wrote:
Kumar Raman wrote:
Hi,
Actually we are running a scenario using the LoadRunner tool to add a row to a
DB using a container-managed entity bean. This bean is getting instantiated
via a session bean. In the workflow, after creation we are deleting the row in
the table by using the remove method of the same entity bean.
If we analyze using the profiler, the number of EJB instances increases by 2 during
add and increases by another 2 after delete.
Is your session bean only creating one bean?
There seem to be 2 questions:
1) Why are you getting 2 beans on add/delete? I'm not sure if you
expect this or not.
2) Why are the beans used for the creation not being used again when you
issue the delete?
For #2, my first question is whether the create and remove are both running
in the same transaction.
I am sending the OptimizeIT (ver5.5) snapshots to you by mail.
Haven't received them yet, but they would be very helpful.
-- Rob
Please let me know why the instances are increasing in spite of explicitly calling
the remove method in the code.
Thanks,
Kumar
Rob Woollen <[email protected]> wrote:
We'd need a little more information to diagnose this one.
First off, if you have an OptimizeIt snapshot file (the .snp extension,
not the HTML output file), I'd be willing to take a look at it and give
you some ideas. If you're interested, send me an email at rwoollen at
bea dot com.
If you're using a custom primary key class (i.e. not something like
java.lang.String), make sure its hashCode and equals methods are correct.
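For illustration, a composite key that gets this right might look like the sketch below (the class and field names are hypothetical, not from the poster's application):

```java
import java.io.Serializable;

// Hypothetical composite primary key class for a CMP entity bean.
public class AccountPK implements Serializable {
    public String branch;
    public int accountNumber;

    public AccountPK() {}

    public AccountPK(String branch, int accountNumber) {
        this.branch = branch;
        this.accountNumber = accountNumber;
    }

    // Keys with equal field values must compare equal...
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof AccountPK)) return false;
        AccountPK other = (AccountPK) o;
        return accountNumber == other.accountNumber
                && (branch == null ? other.branch == null
                                   : branch.equals(other.branch));
    }

    // ...and must hash identically, or the container's cache lookups
    // will miss and it will keep materializing new bean instances.
    public int hashCode() {
        return 31 * (branch == null ? 0 : branch.hashCode()) + accountNumber;
    }
}
```

If equals and hashCode disagree for logically identical keys, every lookup appears to be for a brand-new identity, which is one way instance counts balloon under load.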
Otherwise, it'd be helpful if you gave us some more info about your test
and what you're doing with the entity bean(s).
-- Rob
Similar Messages
-
Dear forum users.
I wonder why "New I/O" (java.nio.*) is useful.
I tested "New I/O" performance.
Please see the code below.
import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class ByteBufferPerformanceTest {
    public static void main(String[] args) {
        File fileName = new File("c:\\kandroid_book_3rd_edition[1].pdf"); // 20MB file
        // ByteBuffer usage
        long start1 = System.nanoTime();
        try {
            FileInputStream fis = new FileInputStream(fileName);
            FileChannel fc = fis.getChannel();
            ByteBuffer bf = ByteBuffer.allocateDirect(1024);
            while (fc.read(bf) != -1) {
                //System.out.print(new String(bf.array(), 0, 1024));
                bf.clear();
            }
            fis.close();
        } catch (FileNotFoundException ffe) {
            ffe.getStackTrace();
        } catch (IOException ioe) {
            ioe.getStackTrace();
        }
        long duration1 = System.nanoTime() - start1;

        // BufferedInputStream usage
        BufferedInputStream bin = null;
        long start2 = System.nanoTime();
        try {
            bin = new BufferedInputStream(new FileInputStream(fileName));
            byte[] contents = new byte[1024];
            int bytesRead;
            while ((bytesRead = bin.read(contents)) != -1) {
                //System.out.print(new String(contents, 0, bytesRead));
            }
        } catch (FileNotFoundException ffe) {
            ffe.getStackTrace();
        } catch (IOException ioe) {
            ioe.getStackTrace();
        } finally {
            try {
                if (bin != null)
                    bin.close();
            } catch (IOException e) {
                e.getStackTrace();
            }
        }
        long duration2 = System.nanoTime() - start2;

        // FileReader usage
        long start3 = System.nanoTime();
        try {
            FileReader fr = new FileReader(fileName);
            BufferedReader br = new BufferedReader(fr);
            String line;
            while ((line = br.readLine()) != null) {
                // discard
            }
            br.close();
        } catch (FileNotFoundException ffe) {
            ffe.getStackTrace();
        } catch (IOException ioe) {
            ioe.getStackTrace();
        }
        long duration3 = System.nanoTime() - start3;

        System.out.println(String.format("%20s : %12d", "ByteBuffer", duration1));
        System.out.println(String.format("%20s : %12d", "BufferedInputStream", duration2));
        System.out.println(String.format("%20s : %12d", "FileReader", duration3));
    }
}
Result (nanoTime):
ByteBuffer : 60107360
BufferedInputStream : 22748701
FileReader : 597288203
As the result shows, the best class for file I/O seems to be BufferedInputStream.
So why would one need to use ByteBuffer?
Did I test it the wrong way?
Thanks for reading. Thank you very much. :)
First of all: your test is very, very flawed, in multiple ways:
1.) You read the same file 3 times. The first read will take the cache hit, while the OS actually loads the file from disk; the others will just test how fast accessing the OS cache is.
2.) You're only doing a single read of the file, and you didn't tell us if you repeated the experiment multiple times (to avoid small timing differences influencing the result).
3.) Your three methods do different things. Specifically, the last method converts the bytes to Strings, which is meaningless for a binary file and takes additional time.
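To address points 1 and 2, a fairer harness warms up first and reports the median of several timed runs. A minimal sketch (the workload here is a stand-in for the three file-reading variants, and the class and method names are illustrative):

```java
import java.util.Arrays;

public class FairBenchmark {
    // Stand-in workload; substitute one of the I/O variants being compared.
    static long workload(byte[] data) {
        long sum = 0;
        for (byte b : data) sum += b;
        return sum;
    }

    // Warm up so the JIT compiles the hot path, then time several runs
    // and take the median, which is robust to one-off outliers such as
    // a cold OS file cache on the very first read.
    static long medianNanos(byte[] data, int warmups, int runs) {
        for (int i = 0; i < warmups; i++) workload(data);
        long[] times = new long[runs];
        for (int i = 0; i < runs; i++) {
            long t0 = System.nanoTime();
            workload(data);
            times[i] = System.nanoTime() - t0;
        }
        Arrays.sort(times);
        return times[runs / 2];
    }

    public static void main(String[] args) {
        byte[] data = new byte[1 << 20];
        System.out.println("median ns: " + medianNanos(data, 5, 11));
    }
}
```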
All that being said: NIO isn't simply "faster". It provides ways to implement non-blocking IO for tasks such as servers supporting a massive amount of connections and similar high-performance scenarios. If you simply want to read a file once, then "normal" IO will be perfectly fine for you. -
ActiveX Control recording but not playing back in a VS 2012 Web Performance Test
I am testing an application that loads an Active X control for entering some login information. While recording, this control works fine and I am able to enter information and it is recorded. However on playback in the playback window it has the error "An
add-on for this website failed to run. Check the security settings in Internet Options for potential conflicts."
Windows 7 OS, 64 bit
IE 8, recorded on the 32-bit version
I see no obvious security conflicts. This runs fine when navigating through manually and recording. It is only during playback that this error occurs.
Hi IndyJason,
Thank you for posting in MSDN forum.
As you said, you could not play back the ActiveX control successfully in the web performance test. The ActiveX controls in your Web application will fall into one of three categories, depending on how they work at the HTTP level.
Reference:
https://msdn.microsoft.com/en-us/library/ms404678%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396
I found that this confusion may come from the browser preview in the Web test result viewer. The Web Performance Test Results Viewer does not allow script or ActiveX controls to run, because the Web performance test engine does not run them, for security reasons.
For more information, please refer to the following blog (Web Tests Can Succeed Even Though It Appears They Failed):
http://blogs.msdn.com/edglas/archive/2010/03/24/web-test-authoring-and-debugging-techniques-for-visual-studio-2010.aspx
Best Regards,
We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time. Thanks for helping make community forums a great place.
Click
HERE to participate the survey. -
Load Test Results - time series request data for by URL in VS2013
I am trying to figure out how to export and then analyze the results of a load test, but after the test is over it seems I cannot find the data for each individual request by url. This data shows during the load test itself, but after it is over it seems
as if that data is no longer accessible and all I can find are totals. The data that I want is under the "Page response time" graph on the graphs window during the test. I know this is not the response time for every single request and is probably
averaged, but that would suffice for the calculations I want to make.
I have looked in the database on my local machine (LoadTest2010, where all of the summary data is stored) and I cannot find the data I'm looking for.
My goal is to plot (probably in excel) each request url against the user load and analyze the slope of the response time averages to determine which requests scale the worst (and best). During the load test I can see this data and get a visual idea but when
it ends I cannot seem to find it to export.
A) Can this data be exported from within visual studio? Is there a setting required to make VS persist this data to the database? I have, from under Run Settings, the "Results" section "Timing Details Storage" set to "All individual
details" and the Storage Type set to "Database".
B) If this data isn't available from within VS, is it in any of the tables in the LoadTest2010 database where all of the summary data is stored?
Thanks
Luke
Hi Luke,
Since the load test is used to simulate many users accessing a server at the same time, it mainly verifies a web server's load stress.
As you said, you want to find the data for each individual request by URL; generally we can analyze the URL requests from the Summary, like the following screen shot.
>> I have looked in the database on my local machine (LoadTest2010, where all of the summary data is stored) and I cannot find the data I'm looking for.
I suggest you try adding the SQL Tracing Connect String in the Run Settings properties to trace the data.
Reference:
https://social.msdn.microsoft.com/Forums/en-US/74ff1c3e-cdc5-403a-b82f-66fbd36b1cc2/sql-server-tracing-in-visual-studio-load-test?forum=vstest
In addition, you can try to create an Excel report to analyze the load test results; for more information:
http://msdn.microsoft.com/en-us/library/dd997707.aspx
Hope it helps you!
Best Regards,
-
EJB 3.0 Stateful - Limiting number of bean instances
Hello EJB Experts,
I have just started to learn EJB 3.0 and have some basic queries. The application server that I am using is Glassfish. Please find my queries below:
1. To remove a bean instance from the container, we can use the annotation '@Remove'. I basically had 2 methods and annotated the 2nd method with '@Remove'. Whenever the 2nd method is called, the container removes this instance, as expected, in my program. My problem is that I might get some invalid parameter values in the 2nd method; in that case I have to just log the error message, and only when the input parameters are correct should the instance be removed. But let's say someone calls my 2nd method with invalid parameters: I log the message and the container removes the instance, but if after some time the 2nd method is called with correct parameters, the instance will not be available. Can we programmatically tell the container when to remove a bean instance?
2. From the docs, I am clear that pooling works only for 'Stateless' beans. However (I am using a 'Stateful' bean), I wanted to limit the max number of instances to 2. I did the below-mentioned configuration in the 'sun-ejb-jar.xml' file:
<bean-cache>
<is-cache-overflow-allowed>false</is-cache-overflow-allowed>
<cache-idle-timeout-in-seconds>1</cache-idle-timeout-in-seconds>
<max-cache-size>2</max-cache-size>
<resize-quantity>0</resize-quantity>
<removal-timeout-in-seconds>2</removal-timeout-in-seconds>
<victim-selection-policy>LRU</victim-selection-policy>
</bean-cache>
But I think it is still creating more than 2 instances of this bean.
Please help me in getting answers to these questions. I will be very thankful for your replies.
Regards,
San
Edited by: SolarisUser1 on Jun 27, 2010 11:00 PM
@Remove is used for stateful EJBs, and you call it when your client has finished using that instance of the stateful EJB.
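A sketch of this pattern, assuming EJB 3.0's @Remove(retainIfException = true) flag (the bean, method, and exception names are illustrative, and the class only runs inside a Java EE container):

```java
import javax.ejb.Remove;
import javax.ejb.Stateful;

// Illustrative checked application exception.
class InvalidPaymentException extends Exception {}

@Stateful
public class OrderBean {
    // Normal business method: validate and do the work, but never remove here.
    public void submit(String data) {
        if (data == null) throw new IllegalArgumentException("invalid input");
        // ... business logic ...
    }

    // The instance is removed only if this method completes normally;
    // with retainIfException = true, throwing an application exception
    // keeps the instance alive so the client can retry.
    @Remove(retainIfException = true)
    public void checkout(String paymentInfo) throws InvalidPaymentException {
        if (paymentInfo == null) throw new InvalidPaymentException();
        // ... cleanup-related logic only ...
    }
}
```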
If you are passing in parameters to the method and letting it do some work with your parameters then perhaps it should not be a remove method at all. Make it a normal method and only put cleanup related logic in the remove method. You can also throw some application exception and rollback if the parameters are not correct. -
Remote and local interface on same ejb 3.0 bean instance
Hi,
Is it possible to get remote and local interfaces on the same EJB 3.0 bean instance?
For example, get the local interface of a bean and then pass it as remote to a client.
Both interfaces must operate on same bean instance.
Thanks
Zlaja
Yes. You can implement multiple interfaces on a single class, so you can add a local and a remote interface. One trick to avoid duplicate code is to simply make the remote interface extend the local interface; then you only have to add the @Remote annotation and you're done.
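A sketch of that trick, assuming EJB 3.0 annotations (the interface and bean names are illustrative, and this requires a Java EE container):

```java
import javax.ejb.Local;
import javax.ejb.Remote;
import javax.ejb.Stateful;

// One business contract, defined once.
@Local
public interface OrderService {
    String getData();
}

// The remote view adds nothing but the @Remote annotation.
@Remote
interface OrderServiceRemote extends OrderService {
}

// One bean class serves both views on the same instance.
@Stateful
class OrderServiceBean implements OrderServiceRemote {
    public String getData() { return "data"; }
}
```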
For example, get the local interface of a bean and then pass it as remote to a client.
You don't pass an instance to a client; a client looks up a remote instance of the bean through JNDI. -
EJB skeleton tied to a specific bean instance?
Is a EJB skeleton tied to a specific EJB instance? For example, if in an application every access to the EJB was done via the same skeleton, would they have to queue or would the skeleton share the pool of beans amongst the requests.
I'm assuming by skeleton to mean the remote RMI reference to the EJB returned from the create() method.
Thanks
Hi,
It partly depends on what kind of ejb we're talking about. In the case of a stateful session bean, it doesn't make sense to have concurrent invocations of the same bean. If we're talking about stateless session beans, then the container typically dispatches concurrent requests to different bean instances on different threads.
Note that strictly speaking, it would be within the rights of the container to serialize even stateless session bean invocations. This would certainly be a dumb thing to do, but the application would still work.
Regards,
Ken -
Performance testing of servlets / beans / jsp ?
Hi. I'd like to performance-test my applications; does anyone have a clue what software to use?
I use Forte for Java CE 3 as the IDE and Tomcat 3.23 as the servlet/JSP container.
Hopefully there are some open-source tools to use for this?
Regards,
ChrisYou can precompile JSP's, this removes the small hickup when they are requested the first time (making the server translate and compile them). Check the documentation of your specific web/application server on how to do this.
Otherwise:
- buy better hardware
- use a better application server
- make sure your network is properly configured (so packets don't get routed around the network four times before they reach their destination for example)
- make sure your program logic doesn't create bottlenecks such as
unnecessary HTTP requests, redundant loops, etc.
- optimize your database access, use connection pooling
- optimize your database queries. Create indexes, make sure the SQL queries themselves aren't doing unnecessary trips around the database, etc. -
Hi,
I have a problem binding an EJB bean (stateful bean). The bean has two business methods:
SendPacketToTRSM and GetData
When I invoke the SendPacketToTRSM method from the process, the application server creates a first instance of the bean and invokes the method SendPacketToTRSM.
Next I invoke the GetData method in the process, and the application server creates a second instance of the bean and invokes the method GetData.
Every time I invoke a method, the application server creates a new instance of the bean and doesn't remove it.
The application server removes the instance of the bean from the container only after passivation.
Environment: BPEL 10.0.2(OC4J), patch 4369818, 4406640, 4496111
EJB bean on JBoss 4.0.2
The following wsdl EJB binding:
<?xml version="1.0" ?>
<definitions targetNamespace="http://xmlns.unizeto.pl/TRSMBPEL"
xmlns:tns="http://xmlns.unizeto.pl/TRSMBPEL"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:format="http://schemas.xmlsoap.org/wsdl/formatbinding/"
xmlns:ejb="http://schemas.xmlsoap.org/wsdl/ejb/"
xmlns:plnk="http://schemas.xmlsoap.org/ws/2003/05/partner-link/"
xmlns="http://schemas.xmlsoap.org/wsdl/">
<!-- message declns -->
<message name="SendPacketToTRSMRequestMessage">
<part name="sender" type="xsd:int"/>
<part name="bufferToTRSM" type="xsd:string"/>
</message>
<message name="SendPacketToTRSMResponseMessage">
<part name="result" type="xsd:int"/>
</message>
<message name="GetDataRequestMessage">
</message>
<message name="GetDataResponseMessage">
<part name="result" type="xsd:string"/>
</message>
<message name="RemoveRequestMessage">
</message>
<message name="RemoveResponseMessage">
</message>
<message name="CreateRequestMessage">
</message>
<message name="CreateResponseMessage">
</message>
<!-- port type declns -->
<portType name="TRSMService">
<operation name="SendPacketToTRSM">
<input name="SendPacketToTRSMRequest" message="tns:SendPacketToTRSMRequestMessage"/>
<output name="SendPacketToTRSMResponse" message="tns:SendPacketToTRSMResponseMessage"/>
</operation>
<operation name="GetData">
<input name="GetDataRequest" message="tns:GetDataRequestMessage"/>
<output name="GetDataResponse" message="tns:GetDataResponseMessage"/>
</operation>
<operation name="Remove">
<input name="RemoveRequest" message="tns:RemoveRequestMessage"/>
<output name="RemoveResponse" message="tns:RemoveResponseMessage"/>
</operation>
<operation name="Create">
<input name="CreateRequest" message="tns:CreateRequestMessage"/>
<output name="CreateResponse" message="tns:CreateResponseMessage"/>
</operation>
<operation name="SSCDAuthorizedForget"/>
</portType>
<!-- binding declns -->
<binding name="EJBBinding" type="tns:TRSMService">
<ejb:binding/>
<format:typeMapping encoding="Java" style="Java">
<format:typeMap typeName="xsd:int" formatType="int"/>
<format:typeMap typeName="xsd:string" formatType="java.lang.String"/>
</format:typeMapping>
<operation name="SendPacketToTRSM">
<ejb:operation
methodName="SendBase64PacketToTRSM"
parameterOrder="sender bufferToTRSM"
interface="remote"
returnPart="result"/>
<input name="SendPacketToTRSMRequest"/>
<output name="SendPacketToTRSMResponse"/>
</operation>
<operation name="GetData">
<ejb:operation
methodName="GetBase64Data"
parameterOrder=""
interface="remote"
returnPart="result"/>
<input name="GetDataRequest"/>
<output name="GetDataResponse"/>
</operation>
<operation name="Remove">
<ejb:operation
methodName="remove"
interface="remote"/>
</operation>
<operation name="Create">
<ejb:operation
methodName="create"
interface="home"/>
</operation>
</binding>
<!-- service decln -->
<service name="TRSMService">
<port name="EJBPort" binding="tns:EJBBinding">
<ejb:address className="pl.unizeto.pki.des.ssp.trsmd.TRSMDRemoteHome"
jndiName="pl.unizeto.pki.des.ssp.trsmd.TRSMDBean"
initialContextFactory="org.jnp.interfaces.NamingContextFactory"
jndiProviderURL="192.168.129.202:1999"/>
</port>
</service>
<!-- partner links -->
<plnk:partnerLinkType name="TRSMService">
<plnk:role name="TRSMServiceProvider">
<plnk:portType name="tns:TRSMService"/>
</plnk:role>
</plnk:partnerLinkType>
</definitions>
and bpel source
<process name="TRSMBPEL" targetNamespace="http://xmlns.unizeto.pl/TRSMBPEL" xmlns="http://schemas.xmlsoap.org/ws/2003/03/business-process/" xmlns:bpws="http://schemas.xmlsoap.org/ws/2003/03/business-process/" xmlns:xp20="http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.Xpath20" xmlns:tns="http://xmlns.unizeto.pl/TRSMBPEL" xmlns:ns1="http://www.w3.org/2001/XMLSchema" xmlns:trsm="http://xmlns.unizeto.pl/TRSMBPEL" xmlns:ctask="http://services.oracle.com/bpel/task" xmlns:ldap="http://schemas.oracle.com/xpath/extension/ldap" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:taskMgr="http://services.oracle.com/bpel/task" xmlns:bpelx="http://schemas.oracle.com/bpel/extension" xmlns:ora="http://schemas.oracle.com/xpath/extension" xmlns:orcl="http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.ExtFunc"><!-- ================================================================= --><!-- PARTNERLINKS --><!-- List of services participating in this BPEL process --><!-- ================================================================= -->
<partnerLinks><!--
The 'client' role represents the requester of this service. It is
used for callback. The location and correlation information associated
with the client role are automatically set using WS-Addressing.
-->
<partnerLink name="client" partnerLinkType="tns:TRSMBPEL" myRole="TRSMBPELProvider"/>
<partnerLink name="TRSMService" partnerRole="TRSMServiceProvider" partnerLinkType="tns:TRSMService"/>
<partnerLink myRole="TaskManagerRequester" name="userTask" partnerRole="TaskManager" partnerLinkType="taskMgr:TaskManager"/>
</partnerLinks><!-- ================================================================= --><!-- VARIABLES --><!-- List of messages and XML documents used within this BPEL process --><!-- ================================================================= -->
<variables><!-- Reference to the message passed as input during initiation -->
<variable name="inputVariable" messageType="tns:TRSMBPELRequestMessage"/>
<variable name="outputVariable" messageType="tns:TRSMBPELResponseMessage"/>
<variable name="SendPacketToTRSM_SendPacketToTRSM_InputVariable" messageType="tns:SendPacketToTRSMRequestMessage"/>
<variable name="SendPacketToTRSM_SendPacketToTRSM_OutputVariable" messageType="tns:SendPacketToTRSMResponseMessage"/>
<variable name="GetData_GetData_InputVariable" messageType="tns:GetDataRequestMessage"/>
<variable name="GetData_GetData_OutputVariable" messageType="tns:GetDataResponseMessage"/>
<variable name="UserTask2.0Var1" element="ctask:task"/>
<variable name="Invoke_1_Create_InputVariable" messageType="tns:CreateRequestMessage"/>
<variable name="Invoke_1_Create_OutputVariable" messageType="tns:CreateResponseMessage"/>
<variable name="removeTRSMD_Remove_InputVariable" messageType="tns:RemoveRequestMessage"/>
<variable name="removeTRSMD_Remove_OutputVariable" messageType="tns:RemoveResponseMessage"/>
</variables><!-- ================================================================= --><!-- ORCHESTRATION LOGIC --><!-- Set of activities coordinating the flow of messages across the --><!-- services integrated within this business process --><!-- ================================================================= -->
<sequence name="main"><!-- Receive input from requestor.
Note: This maps to operation defined in TRSMBPEL.wsdl
-->
<receive name="receiveInput" partnerLink="client" portType="tns:TRSMBPEL" operation="process" variable="inputVariable" createInstance="yes"/>
<scope name="Scope_1">
<variables>
<variable name="Invoke_3_Create_InputVariable" messageType="tns:CreateRequestMessage"/>
<variable name="Invoke_3_Create_OutputVariable" messageType="tns:CreateResponseMessage"/>
<variable name="Invoke_1_Remove_InputVariable" messageType="tns:RemoveRequestMessage"/>
</variables>
<sequence name="Sequence_1">
<assign name="Init">
<copy>
<from variable="inputVariable" part="payload" query="/tns:TRSMBPELProcessRequest/tns:sender"/>
<to variable="SendPacketToTRSM_SendPacketToTRSM_InputVariable" part="sender"/>
</copy>
<copy>
<from variable="inputVariable" part="payload" query="/tns:TRSMBPELProcessRequest/tns:buffer"/>
<to variable="SendPacketToTRSM_SendPacketToTRSM_InputVariable" part="bufferToTRSM"/>
</copy>
</assign>
<invoke name="create" partnerLink="TRSMService" portType="tns:TRSMService" operation="Create" inputVariable="Invoke_3_Create_InputVariable" outputVariable="Invoke_3_Create_OutputVariable"/>
<invoke name="SendPacketToTRSM" partnerLink="TRSMService" portType="tns:TRSMService" operation="SendPacketToTRSM" inputVariable="SendPacketToTRSM_SendPacketToTRSM_InputVariable" outputVariable="SendPacketToTRSM_SendPacketToTRSM_OutputVariable"/>
<invoke name="GetData" partnerLink="TRSMService" portType="tns:TRSMService" operation="GetData" inputVariable="GetData_GetData_InputVariable" outputVariable="GetData_GetData_OutputVariable"/>
<invoke name="Remove" partnerLink="TRSMService" portType="tns:TRSMService" operation="Remove" inputVariable="Invoke_1_Remove_InputVariable"/>
</sequence>
</scope><!-- Generate reply to synchronous request -->
<assign name="Result">
<copy>
<from variable="GetData_GetData_OutputVariable" part="result"/>
<to variable="outputVariable" part="payload" query="/tns:TRSMBPELProcessResponse/tns:data"/>
</copy>
</assign>
<reply name="replyOutput" partnerLink="client" portType="tns:TRSMBPEL" operation="process" variable="outputVariable"/>
</sequence>
</process>
Could anyone explain whether it is possible to bind a stateful bean to a process?
Thanks
Norbert
Did some additional investigation and concluded:
The (embedded) OTC by default uses an empty environment to obtain the reference to a Session Bean (EJB). In my case I was using the Remote Interface and my Context was empty { }:
Hashtable ht = ic.getEnvironment();
System.out.println(ht.toString());
When I supply the missing information, obtained via the Test Client that functions correctly, a new Bean instance was created for each Client. My getInitialContext() method looks like the example below.
public InitialContext getInitialContext() throws NamingException {
Properties p = new Properties();
p.setProperty("java.naming.factory.initial", "com.evermind.server.rmi.RMIInitialContextFactory");
p.setProperty("java.naming.provider.url", "ormi://localhost:23892/current-workspace-app");
return new InitialContext(p);
}
I tried the ApplicationInitialContextFactory and again the same Bean instance was shared among all Clients. I did not try ApplicationClientInitialContextFactory, but I expect that the Remote interface will be used!
Is it a Bug that ApplicationInitialContextFactory does not create a new instance for my Stateful Session Bean? I can use the Remote interface, but that would decrease the performance and it is less elegant...
Michael -
How do I specify that I want no more than x instances of my MDB ?
How do I specify that I want no more than x instances of my MDB ?
After all, I don't want as many MDB instances as messages, what's a queue for then...
Does Max Beans In Free Pool do the trick ?
Thank you.
Hi Rosalie,
Max Beans in Free Pool is there for this purpose. Note
that MDB concurrency is also limited by the thread pool size,
and that it is often useful to give MDBs their own
thread pool to prevent them from stealing threads from other
applications. See:
http://edocs.bea.com/wls/docs81/ejb/DDreference-ejb-jar.html#dispatch-policy
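In weblogic-ejb-jar.xml, the two settings mentioned above look roughly like the sketch below (the bean name, queue name, and value of 16 are illustrative):

```xml
<weblogic-enterprise-bean>
  <ejb-name>MyMDB</ejb-name>
  <message-driven-descriptor>
    <pool>
      <!-- at most 16 MDB instances, hence at most 16 concurrent messages -->
      <max-beans-in-free-pool>16</max-beans-in-free-pool>
    </pool>
  </message-driven-descriptor>
  <!-- dedicated execute queue so the MDB cannot starve other applications -->
  <dispatch-policy>MyMDBExecuteQueue</dispatch-policy>
</weblogic-enterprise-bean>
```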
You will likely find it useful to read through the
JMS Performance Guide if you haven't already done so. The above
information, as well as related info, is included:
http://dev2dev.bea.com/products/wlserver/whitepapers/WL_JMS_Perform_GD.jsp
Tom
-
Log file sync top event during performance test -av 36ms
Hi,
During the performance test for our product before deployment into production, I see "log file sync" on top, with the Avg wait (ms) being 36, which I feel is too high.
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 208,327 7,406 36 46.6 Commit
direct path write 646,833 3,604 6 22.7 User I/O
DB CPU 1,599 10.1
direct path read temp 1,321,596 619 0 3.9 User I/O
log buffer space 4,161 558 134 3.5 Configurat
Although testers are not complaining about the performance of the application, we, as DBAs, are expected to be proactive about any bad signals from the DB.
I am not able to figure out why "log file sync" is having such a slow response.
Below is the snapshot from the load profile.
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108127 16-May-13 20:15:22 105 6.5
End Snap: 108140 16-May-13 23:30:29 156 8.9
Elapsed: 195.11 (mins)
DB Time: 265.09 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,136M Std Block Size: 8K
Shared Pool Size: 1,120M 1,168M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 1.4 0.1 0.02 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 607,512.1 33,092.1
Logical reads: 3,900.4 212.5
Block changes: 1,381.4 75.3
Physical reads: 134.5 7.3
Physical writes: 134.0 7.3
User calls: 145.5 7.9
Parses: 24.6 1.3
Hard parses: 7.9 0.4
W/A MB processed: 915,418.7 49,864.2
Logons: 0.1 0.0
Executes: 85.2 4.6
Rollbacks: 0.0 0.0
Transactions: 18.4
Some of the top background wait events:
^LBackground Wait Events DB/Inst: Snaps: 108127-108140
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 208,563 0 2,528 12 1.0 66.4
db file parallel write 4,264 0 785 184 0.0 20.6
Backup: sbtbackup 1 0 516 516177 0.0 13.6
control file parallel writ 4,436 0 97 22 0.0 2.6
log file sequential read 6,922 0 95 14 0.0 2.5
Log archive I/O 6,820 0 48 7 0.0 1.3
os thread startup 432 0 26 60 0.0 .7
Backup: sbtclose2 1 0 10 10094 0.0 .3
db file sequential read 2,585 0 8 3 0.0 .2
db file single write 560 0 3 6 0.0 .1
log file sync 28 0 1 53 0.0 .0
control file sequential re 36,326 0 1 0 0.2 .0
log file switch completion 4 0 1 207 0.0 .0
buffer busy waits 5 0 1 116 0.0 .0
LGWR wait for redo copy 924 0 1 1 0.0 .0
log file single write 56 0 1 9 0.0 .0
Backup: sbtinfo2 1 0 1 500 0.0 .0
During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddrpt.sql):
{code}
Workload Comparison
~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
DB time: 0.78 1.36 74.36 0.02 0.07 250.00
CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
Parses: 7.28 24.55 237.23 0.19 1.34 605.26
Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
Transactions: 37.99 18.36 -51.67
First Second Diff
1st 2nd
Event Wait Class Waits Time(s) Avg Time(ms) %DB time Event Wait Class Waits Time(s) Avg Time
(ms) %DB time
SQL*Net more data from client Network 2,133,486 1,270.7 0.6 61.24 log file sync Commit 208,355 7,407.6
35.6 46.57
CPU time N/A 487.1 N/A 23.48 direct path write User I/O 646,849 3,604.7
5.6 22.66
log file sync Commit 99,459 129.5 1.3 6.24 log file parallel write System I/O 208,564 2,528.4
12.1 15.90
log file parallel write System I/O 100,732 126.6 1.3 6.10 CPU time N/A 1,599.3
N/A 10.06
SQL*Net more data to client Network 451,810 103.1 0.2 4.97 db file parallel write System I/O 4,264 784.7 1
84.0 4.93
-direct path write User I/O 121,044 52.5 0.4 2.53 -SQL*Net more data from client Network 7,407,435 279.7
0.0 1.76
-db file parallel write System I/O 986 22.8 23.1 1.10 -SQL*Net more data to client Network 2,714,916 64.6
0.0 0.41
{code}
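To quantify how much worse "log file sync" got between the two runs, here is some back-of-the-envelope arithmetic over the figures in the comparison above (illustrative only, not an Oracle tool):

```python
# "log file sync" figures from the two AWR periods in the diff report above.
first_waits, first_time_s = 99_459, 129.5       # earlier perf test
second_waits, second_time_s = 208_355, 7_407.6  # current perf test

first_avg_ms = first_time_s / first_waits * 1000
second_avg_ms = second_time_s / second_waits * 1000

print(f"first run : {first_avg_ms:.1f} ms per commit wait")
print(f"second run: {second_avg_ms:.1f} ms per commit wait")
print(f"avg wait grew {second_avg_ms / first_avg_ms:.0f}x while the wait "
      f"count only grew {second_waits / first_waits:.1f}x")
```

So the commit rate roughly doubled, but each commit wait became about 27 times slower, which points at the write path (or LGWR scheduling) rather than at workload volume alone.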
To sum it up:
1. Why is the I/O response taking such a hit during the new perf test? Please suggest.
2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer, as the host has only 4 CPUs.
{code}
select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for HPUX: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
{code}
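As a rough cut at where the 36 ms goes, "log file sync" (the foreground wait) can be split into the LGWR write itself ("log file parallel write") plus everything else (posting, and CPU/run-queue time for LGWR and the foreground). A hedged sketch using the 3-hour figures above:

```python
# Foreground "log file sync" vs background "log file parallel write"
# from the 3-hour AWR report above.
sync_waits, sync_time_s = 208_327, 7_406    # log file sync (foreground)
write_waits, write_time_s = 208_563, 2_528  # log file parallel write (LGWR)

sync_avg_ms = sync_time_s / sync_waits * 1000
write_avg_ms = write_time_s / write_waits * 1000
other_ms = sync_avg_ms - write_avg_ms  # approx. scheduling/IPC component

print(f"log file sync avg           : {sync_avg_ms:5.1f} ms")
print(f"log file parallel write avg : {write_avg_ms:5.1f} ms")
print(f"unaccounted (CPU/scheduling): {other_ms:5.1f} ms")
```

If only ~12 ms of the 36 ms is actual redo I/O, the remaining ~23 ms is LGWR and foreground processes waiting to get posted and scheduled, which on a busy 4-CPU host is a plausible culprit alongside the I/O path itself.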
Please let me know if you would like to see any other stats.
Edited by: Kunwar on May 18, 2013 2:20 PM
1. A snapshot interval of 3 hours always generates meaningless results.
Below are some details from the 1 hour interval AWR report.
Platform CPUs Cores Sockets Memory(GB)
HP-UX IA (64-bit) 4 4 3 31.95
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108129 16-May-13 20:45:32 140 8.0
End Snap: 108133 16-May-13 21:45:53 150 8.8
Elapsed: 60.35 (mins)
DB Time: 140.49 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,168M Std Block Size: 8K
Shared Pool Size: 1,120M 1,120M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 2.3 0.1 0.03 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 719,553.5 34,374.6
Logical reads: 4,017.4 191.9
Block changes: 1,521.1 72.7
Physical reads: 136.9 6.5
Physical writes: 158.3 7.6
User calls: 167.0 8.0
Parses: 25.8 1.2
Hard parses: 8.9 0.4
W/A MB processed: 406,220.0 19,406.0
Logons: 0.1 0.0
Executes: 88.4 4.2
Rollbacks: 0.0 0.0
Transactions: 20.9
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 73,761 6,740 91 80.0 Commit
log buffer space 3,581 541 151 6.4 Configurat
DB CPU 348 4.1
direct path write 238,962 241 1 2.9 User I/O
direct path read temp 487,874 174 0 2.1 User I/O
Background Wait Events DB/Inst: Snaps: 108129-108133
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 61,049 0 1,891 31 0.8 87.8
db file parallel write 1,590 0 251 158 0.0 11.6
control file parallel writ 1,372 0 56 41 0.0 2.6
log file sequential read 2,473 0 50 20 0.0 2.3
Log archive I/O 2,436 0 20 8 0.0 .9
os thread startup 135 0 8 60 0.0 .4
db file sequential read 668 0 4 6 0.0 .2
db file single write 200 0 2 9 0.0 .1
log file sync 8 0 1 152 0.0 .1
log file single write 20 0 0 21 0.0 .0
control file sequential re 11,218 0 0 0 0.1 .0
buffer busy waits 2 0 0 161 0.0 .0
direct path write 6 0 0 37 0.0 .0
LGWR wait for redo copy 380 0 0 0 0.0 .0
log buffer space 1 0 0 89 0.0 .0
latch: cache buffers lru c 3 0 0 1 0.0 .0
2. The "log file sync" wait is a result of commits --> you are committing too often, maybe even every individual record.
Thanks for the explanation. Actually my question is WHY it is so slow (an average wait of 91 ms).
3. Your I/O subsystem hosting the online redo log files can be a limiting factor. We don't know anything about your online redo log configuration.
Below is my redo log configuration.
GROUP# STATUS TYPE MEMBER IS_
1 ONLINE /oradata/fs01/PERFDB1/redo_1a.log NO
1 ONLINE /oradata/fs02/PERFDB1/redo_1b.log NO
2 ONLINE /oradata/fs01/PERFDB1/redo_2a.log NO
2 ONLINE /oradata/fs02/PERFDB1/redo_2b.log NO
3 ONLINE /oradata/fs01/PERFDB1/redo_3a.log NO
3 ONLINE /oradata/fs02/PERFDB1/redo_3b.log NO
6 rows selected.
04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
04:13:26 perf_monitor@PERFDB1> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIME
1 1 40689 524288000 2 YES INACTIVE 13026185905545 18-MAY-13 01:00
2 1 40690 524288000 2 YES INACTIVE 13026185931010 18-MAY-13 03:32
3 1 40691 524288000 2 NO CURRENT 13026185933550 18-MAY-13 04:00
Edited by: Kunwar on May 18, 2013 2:46 PM -
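Given the 500 MB members shown in v$log above and the ~600 KB/s redo rate from the load profile, the log-switch interval can be estimated with some quick arithmetic (illustrative only):

```python
# Member size from v$log (500 MB) and "Redo size" per second
# from the load profile earlier in the thread.
log_size_bytes = 524_288_000
redo_bytes_per_sec = 607_512.1

switch_interval_min = log_size_bytes / redo_bytes_per_sec / 60
print(f"approx. one log switch every {switch_interval_min:.1f} minutes")
```

Roughly 14 minutes between switches is comfortable, so undersized redo logs are unlikely to be the problem here; checkpoint pressure would show up as "log file switch" waits, and those are negligible in the report.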
Hiya
We have been doing load/performance testing using the QALoad testing tool on our Forms 10g application. After about 56 virtual users (sessions) have logged in to our application, if a new user tries to log in, the Forms session crashes. As soon as we encounter the FRM-92101 error, no new Forms sessions are able to start.
The load-testing software starts each process very quickly, about one every 10 seconds.
The very first form that appears is the login form of our application, so the FRM-92101 error message appears before the login screen does.
However, those users who have already logged in to our application are able to carry on with their tasks.
We are using Application Server 10g 10.1.2.0.2. I have checked the status of the Application Server through the Oracle Enterprise Manager Console. The OC4J instance is up and running. Also, the server's configuration is pretty good: it is running on 2 CPUs (AMD Opteron 3 GHz) and has 32 GB of memory. The memory used by those 56 sessions is less than 3 GB.
The Application Server is running on Microsoft Windows Server 2003 64-bit Enterprise Edition.
Any help will be much appreciated.
Cheers
Mayur
Hi Shekhawat
In Windows Registry go to
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\SubSystems
In the right-hand panel, you will find a String Value named Windows. Double-click on it. In the pop-up window you will see a string similar to the following one:
%SystemRoot%\system32\csrss.exe ObjectDirectory=\Windows SharedSection=1024,20480,768 Windows=On SubSystemType=Windows ServerDll=basesrv,1 ServerDll=winsrv:UserServerDllInitialization,3 ServerDll=winsrv:ConServerDllInitialization,2 ProfileControl=Off MaxRequestThreads=16
Now if you read it carefully in the above string, you will find this parameter
SharedSection=1024,20480,768
Here SharedSection specifies the system and desktop heaps using the following format:
SharedSection=xxxx,yyyy,zzzz
The default values are 1024,3072,512
All the values are in Kilobytes (KB)
xxxx = System-wide Heapsize. There is no need to modify this value.
yyyy = IO Desktop Heapsize. This is the heap for memory objects in the IO Desktop.
zzzz = Non-IO Desktop Heapsize. This is the heap for memory objects in the Non-IO Desktop.
On our server the values were as follows :
1024,20480,768
We changed the size of the Non-IO desktop heapsize from 768 to 5112. With 5112 KB we managed to test our application with up to 495 virtual users.
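The session ceiling scales roughly with that third SharedSection value, since each non-interactive session takes a roughly fixed slice of the Non-IO desktop heap. A rough estimate from our numbers (illustrative only; the real per-session footprint varies by application):

```python
# Observed: ~56 sessions hit the ceiling with a 768 KB Non-IO desktop heap.
old_heap_kb, old_sessions = 768, 56
new_heap_kb = 5112  # the value we changed SharedSection's third field to

per_session_kb = old_heap_kb / old_sessions
est_sessions = new_heap_kb / per_session_kb
print(f"~{per_session_kb:.1f} KB of desktop heap per session")
print(f"estimated ceiling with {new_heap_kb} KB: ~{est_sessions:.0f} sessions")
```

We actually reached 495 users before hitting other limits, so treat this as a lower-bound estimate rather than an exact formula.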
Cheers
Mayur -
LabVIEW Embedded - Performance Testing - Different Platforms
Hi all,
I've done some performance testing of LabVIEW on various microcontroller development boards (LabVIEW Embedded for ARM) as well as on a cRIO 9122 Real-time Controller (LabVIEW Real-time) and a Dell Optiplex 790 (LabVIEW desktop). You may find the results interesting. The full report is attached and the final page of the report is reproduced below.
Test Summary

Platform       µC MIPS   Single Loop      Single Loop   Dual Loop        Dual Loop
                         Effective MIPS   Efficiency    Effective MIPS   Efficiency
MCB2300             65             31.8           49%              4.1          6%
LM3S8962            60             50.0           83%              9.5         16%
LPC1788            120             80.9           56%             12.0          8%
cRIO 9122          760            152.4           20%            223.0         29%
Optiplex 790      6114           5533.7           91%           5655.0         92%
Analysis
For microcontrollers, single-loop programming can retain almost 100% of the processing power. Such programming requires that all I/O be non-blocking and that interrupts be used. Multiple-loop programming is not recommended, except for simple applications running at loop rates below 200 Hz, since the vast majority of the processing power is taken by LabVIEW/OS overhead.
For cRIO, there is much more processing power available; however, approximately 70 to 80% of it is lost to LabVIEW/OS overhead. The end result is that what can be achieved is limited.
For the desktop, we get the best of both worlds: extraordinary processing power and high efficiency.
Speculation on why LabVIEW Embedded for ARM and LabVIEW Real-time performance is so poor puts the blame on excessive context switching. Each context switch typically takes 150 to 200 machine cycles, and these appear to be inserted for each loop iteration. This means that tight loops (fast, with not much computation) consume enormous amounts of processing power. If this is the case, an option to force a context switch only every Nth loop iteration would be useful.
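The context-switch theory is easy to sanity-check with some speculative arithmetic (the cycle and switch counts below are my assumptions, not measurements): the fraction of CPU burned is just switch cost times iteration rate divided by clock rate.

```python
def overhead_fraction(loop_hz: float, cpu_mips: float,
                      cycles_per_switch: int = 175,
                      switches_per_iter: int = 2) -> float:
    """Fraction of CPU spent on context switches for one loop.

    Assumes each iteration triggers `switches_per_iter` switches of
    `cycles_per_switch` cycles each (both are guesses, not measurements).
    """
    cycles_per_sec = cpu_mips * 1e6
    return loop_hz * switches_per_iter * cycles_per_switch / cycles_per_sec

# A 100 kHz loop on a 65 MIPS MCB2300-class part:
print(f"{overhead_fraction(100_000, 65):.0%} of the CPU goes to switching")
# The same loop on a 6114 MIPS desktop barely notices:
print(f"{overhead_fraction(100_000, 6114):.2%}")
```

At free-running loop rates the switch cost swamps the useful work on a microcontroller, which is consistent with the single-digit dual-loop efficiencies measured above.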
Conclusion

                               LabVIEW Embedded       LabVIEW Real-time    LabVIEW Desktop
                               for ARM                for cRIO/sbRIO       for Windows
Development Environment Cost   High                   Reasonable           Reasonable
Execution Platform Cost        Very low               Very High / High     Low
Processing Power               Low (current Tier 1)   Medium               Enormous
LabVIEW/OS efficiency          Low                    Low                  High
OEM friendly                   Yes+                   No                   Yes
LabVIEW Desktop has many attractive features. This explains why LabVIEW Desktop is so successful and accounts for the vast majority of National Instruments' software sales (and consequently drives the vast majority of hardware sales). It is National Instruments' flagship product and is the precursor to the other LabVIEW offerings. The execution platform is powerful, available in various form factors from various sources, and is competitively priced.
LabVIEW Real-time on a cRIO/sb-RIO is a lot less attractive. To make this platform attractive the execution platform cost needs to be vastly decreased while increasing the raw processing power. It would also be beneficial to examine why the LabVIEW/OS overhead is so high. A single plug-in board no larger than 75 x 50 mm (3” x 2”) with a single unit price under $180 would certainly make the sb-RIO a viable execution platform. The peripheral connectors would not be part of the board and would be accessible via a connector. A developer mother board could house the various connectors, but these are not needed when incorporated into the final product. The recently released Xilinx Zynq would be a great chip to use ($15 in volume, 2 x ARM Cortex A9 at 800 MHz (4,000 MIPS), FPGA fabric and lots more).
LabVIEW Embedded for ARM is very OEM friendly, with development boards that are open source and have circuit diagrams available. To make this platform attractive, new, more capable Tier 1 boards will need to be introduced, mainly to counter the large LabVIEW/OS overhead. As before, these target boards would come from microcontroller manufacturers, thereby making them inexpensive and open source. It would also be beneficial to examine why the LabVIEW/OS overhead is so high. What is required now is another Tier 1 board (e.g. the DK-LM3S9D96 (ARM Cortex-M3, 80 MHz/96 MIPS)). Further Tier 1 boards should be targeted every two years (e.g. the BeagleBoard-xM (ARM Cortex-A8, 1000 MHz/2000 MIPS)) to keep LabVIEW Embedded for ARM relevant.
Attachments:
LabVIEW Embedded - Performance Testing - Different Platforms.pdf 307 KB
I've got to say though, it would really be good if NI could further develop the ARM embedded toolkit.
In the industry I'm in, and probably many others, control algorithm development and testing occurs in LabVIEW. If you have a good LV developer or team, you'll end up with fairly solid, stable and tested code. But what happens now, once the concept is validated, is that all this is thrown away and the C programmers create the embedded code that will go into the real product.
The development cycle starts from scratch.
It would be amazing if you could strip down that code and deploy it onto ARM and expect it not to be too inefficient. Development costs and time to market would go way down. BUT, especially in the industry I presently work in, the final product's COST is extremely important. (These being consumer products: cheaper micro, cheaper product.)
These concerns weigh HEAVILY. I didn't get a warm fuzzy about the ARM toolkit for my application. I'm sure it's got its niches, but just imagine what could happen if some more work went into it to make it truly appealing to a wider market... -
[Ann] FirstACT 2.2 released for SOAP performance testing
Empirix Releases FirstACT 2.2 for Performance Testing of SOAP-based Web Services
FirstACT 2.2 is available for free evaluation immediately at http://www.empirix.com/TryFirstACT
Waltham, MA -- June 5, 2002 -- Empirix Inc., the leading provider of test and monitoring
solutions for Web, voice and network applications, today announced FirstACT™ 2.2,
the fifth release of the industry's first and most comprehensive automated performance
testing tool for Web Services.
As enterprise organizations begin to adopt Web Services, the types of Web
Services being developed and their testing needs are in a state of change. As a major
software testing solution vendor, Empirix is committed to ensuring that organizations
developing enterprise software using Web Services can continue to verify the performance
of their enterprise as quickly and cost effectively as possible regardless of the
architecture they are built upon.
Working with organizations developing Web Services, we have observed several emerging
trends. First, organizations are tending to develop Web Services that transfer a
sizable amount of data within each transaction by passing in user-defined XML data
types as part of the SOAP request. As a result, they require a solution that automatically
generates SOAP requests using XML data types and allows them to be quickly customized.
Second, organizations require highly scalable test solutions. Many organizations
are using Web Services to exchange information between business partners and have
Service Level Agreements (SLAs) in place specifying guaranteed performance metrics.
Organizations need to performance test to these SLAs to avoid financial and business
penalties. Finally, many organizations just beginning to use automated testing tools
for Web Services have already made significant investments in making SOAP scripts
by hand. They would like to import SOAP requests into an automated testing tool
for regression testing.
Empirix FirstACT 2.2 meets or exceeds the testing needs of these emerging trends
in Web Services testing by offering the following new functionality:
1. Automatic and customizable test script generation for XML data types – FirstACT
2.2 will generate complete test scripts and allow the user to graphically customize
test data without requiring programming. FirstACT now includes a simple-to-use XML
editor for data entry or more advanced SOAP request customization.
2. Scalability Guarantee – FirstACT 2.2 has been designed to be highly scalable to
performance test Web Services. Customers using FirstACT today regularly simulate
between several hundred to several thousand users. Empirix will guarantee to
performance test the numbers of users an organization needs to test to meet its business
needs.
3. Importing Existing Test Scripts – FirstACT 2.2 can now import existing SOAP requests
directly into the tool on a user-by-user basis. As a result, some users simulated
can import SOAP requests; others can be automatically generated by FirstACT.
Web Services facilitates the easy exchange of business-critical data and information
across heterogeneous network systems. Gartner estimates that 75% of all businesses
with more than $100 million in sales will have begun to develop Web Services applications
or will have deployed a production system using Web Services technology by the end
of 2002. As part of this move to Web Services, "vendors are moving forward with
the technology and architecture elements underlying a Web Services application model,"
Gartner reports. While this model holds exciting potential, the added protocol layers
necessary to implement it can have a serious impact on application performance, causing
delays in development and in the retrieval of information for end users.
"Today Web Services play an increasingly prominent but changing role in the success
of enterprise software projects, but they can only deliver on their promise if they
perform reliably," said Steven Kolak, FirstACT product manager at Empirix. "With
its graphical user interface and extensive test-case generation capability, FirstACT
is the first Web Services testing tool that can be used by software developers or
QA test engineers. FirstACT tests the performance and functionality of Web Services
whether they are built upon J2EE, .NET, or other technologies. FirstACT 2.2 provides
the most comprehensive Web Services testing solution that meets or exceeds the changing
demands of organizations testing Web Services for performance, functionality, and
functionality under load.”
Learn more?
Read about Empirix FirstACT at http://www.empirix.com/FirstACT. FirstACT 2.2 is
available for free evaluation immediately at http://www.empirix.com/TryFirstACT.
Pricing starts at $4,995. For additional information, call (781) 993-8500.
Simon,
I will admit, I almost never use SQL Developer. I have been a long time Toad user, but for this tool, I fumbled around a bit and got everything up and running quickly.
That said, I tried the new GeoRaptor tool using this tutorial (which I think is close enough to get the gist): http://sourceforge.net/apps/mediawiki/georaptor/index.php?title=A_Gentle_Introduction:_Create_Table,_Metadata_Registration,_Indexing_and_Mapping
As I stumble around it, I'll try and leave some feedback, and probably ask some rather stupid questions.
Thanks for the effort,
Bryan -
EJB beans constantly loaded?
I have recently ported my EJB 2.1 project to EJB 3.0.
However, it seems that the new EJB 3.0 beans are constantly created and loaded from the database. E.g. when I look up an entity bean I have used before during the session, a new bean instance is created and loaded from the database. One would expect that the Entity Manager would find the old instance in memory and return it. This is what ORM is all about, right?
In EJB 2.1 this behaviour can be changed by setting the commit-option to A. But, now this doesn't seem to help :(
Any help would be more than appreciated!
Best regards,
Igor Vukmirovic
You can write Java code in your JSP which looks up an EJB and invokes methods on it.
However it's better to use ordinary Java beans to do the EJB lookup and invocation, and access the Java beans from a JSP.