UnitOfWork and server session cache
Hello,
it seems that in some situations the UnitOfWork does not update the server session cache:
SessionManager manager = SessionManager.getManager();
Server server = (Server) manager.getDefaultSession();
Session client = server.acquireClientSession();
UnitOfWork unit = client.acquireUnitOfWork();
// the stored procedure call prevents caching !
StoredProcedureCall call = new StoredProcedureCall();
call.setProcedureName("security.set_current_user");
call.addNamedArgumentValue("p_user", 123);
unit.executeNonSelectingCall(call);
// these objects should always be cached (if caching is enabled) !
List list = unit.readAllObjects(Naudotojas.class);
unit.commit();
client.release();
server.release();
The code above executes a stored procedure and a read query within a single transaction. The problem is that the selected data is not cached; if the stored procedure call is removed, the records are cached. Is this expected behaviour? What workarounds could be applied to force caching?
When the first modifying call is performed in a UnitOfWork, it assumes that the transactional state of the database is transient and therefore any data read could be uncommitted. At this point it is not safe to load the objects into the shared cache.
If you are attempting to make use of VPD security, you should read up on the support offered in TopLink. It provides additional callbacks for setting the user credentials.
Doug
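Doug's suggestion can be sketched as follows. This is a minimal stand-alone model of the callback idea, not the real API: Connection, RecordingConnection, and VpdCredentialListener are hypothetical stand-ins. In actual TopLink you would subclass oracle.toplink.sessions.SessionEventAdapter and issue the credential-setting call from its connection-acquisition callback, so every client session stamps its user before running queries.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a JDBC/TopLink connection handle.
interface Connection {
    void execute(String sql);
}

// Test double that records every statement it is asked to run.
class RecordingConnection implements Connection {
    final List<String> executed = new ArrayList<>();
    public void execute(String sql) { executed.add(sql); }
}

// Listener that stamps the current user onto each connection a client
// session acquires, so VPD policies see the right identity before any query.
class VpdCredentialListener {
    private final int userId;

    VpdCredentialListener(int userId) { this.userId = userId; }

    // Modeled on TopLink's postAcquireExclusiveConnection session event.
    void postAcquireExclusiveConnection(Connection connection) {
        connection.execute("BEGIN security.set_current_user(" + userId + "); END;");
    }
}
```

With the callback in place, the stored procedure no longer needs to be run inside the UnitOfWork, so the read query stays cacheable.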
Similar Messages
-
Load balancing and server session
If I configure SLB, what will happen if the server initiates a connection back to the client, for example for authentication or a data transfer?
Will the server still use the virtual IP?
Example below: server3 initiates a session (e.g. authentication) back to the client. Will it use the virtual IP?
client ------virtual----server1----server3---client

The server will use the VIP if you configure a source group as shown below. You must have a content rule with the same VIP and services.
owner myorganization
content serverfarm
add service server1
add service server2
vip address 172.16.36.58
active
group serverfarm
vip address 172.16.36.58
add service server1
add service server2
active -
How to add a new object to Toplink session cache?
Toplink Experts,
I have a question about TopLink 9.0.4.5 (I know this is an older version of TopLink, but it is what we are using right now). Let me give you a little context before I ask the question.
The TopLink server session brokers the client sessions it spawns and receives the changes made in client sessions through transaction commits. The process for writing something to the database with TopLink is: 1) acquire a client session; 2) register the object to be written with a UnitOfWork (UOW) — registration creates two copies of the registered (original) object: a working clone for the client application to modify, and a backup clone that TopLink later compares against the working clone to detect changes; 3) make changes to the working clone; 4) commit the UOW. If the transaction commits successfully, TopLink first writes the changes from the working clone to the database and then applies the same changes to the original object in the server session cache. Oracle recommends using a UOW to write to the database; doing so involves a transaction and writes the changes to the database before the original object in the server session cache is refreshed.
Now comes my question: is there a way to add a brand-new persistent object to the TopLink server session cache without writing it to the database first? I am OK with using a UOW and a transaction, but I want to refresh the original object in the server session cache without writing anything to the database. Is there a way to do this, and what would the disadvantages be? I have read Oracle's TopLink Developer's Guide (10.1.3) and the Javadocs for UnitOfWork, Session, and related interfaces and classes, but I could not find a way. Please help!
Syam

There are a couple of things wrong with the method you posted. The first is that uow.registerObject() will cause the object to be treated as new if you are not using the TopLink cache.
There really isn't any supported mechanism to do what you are looking for. TopLink 11g has the hooks to replace the TopLink cache with a distributed cache (the TopLink Grid product uses these to make Coherence the TopLink cache), but earlier versions did not have this.
If you are willing to work outside what is officially supported, and no component of the application uses the ServerSession cache (i.e. all queries refresh), then something like the following may work for you:
public void prepareObjectForSave(Object userEditedObject, Object gigaSpacesObject) {
    UnitOfWork unitOfWork = this.dbContext.getSession().acquireUnitOfWork();
    Descriptor descriptor = this.dbContext.getSession().getDescriptor(gigaSpacesObject);
    // Extract the primary key so the clone can be registered under the right cache key.
    Object pkVector = descriptor.getObjectBuilder().extractPrimaryKeyFromObject(gigaSpacesObject, this.dbContext.getSession());
    Object cloneOfMostRecentlyUpdatedObject = unitOfWork.cloneAndRegisterObject(gigaSpacesObject, new CacheKey(pkVector), descriptor);
    unitOfWork.mergeCloneWithReferences(userEditedObject);
    unitOfWork.commit();
}

You will need to tweak the above code to match the APIs in your version of TopLink, and you should test thoroughly.
--Gordon -
Possible session caching issue in SSRS2014
Using custom Forms Authentication, User A can sign in to our main ASP.NET MVC app (WIF cookie), then into SSRS (FormsAuth cookie), and all is well. Here is where things go bad: User A signs out of our main application (WIF cookie deleted), then signs back in to the main application as User B, and then back into SSRS. An SSRS report that displays User!UserID shows User A instead of the current User B. It looks like either a session or a cookie caching issue, but I am not sure.
1. What is the proper way to sign out of SSRS and prevent session caching?
2. Do I need to worry about making my SSRS logon page non-cacheable? If so, what is the recommended way of doing this?
thanks
scott

Hi scott_m,
According to your description, you use custom Forms Authentication in Reporting Services; after user A signs out of the application and user B signs in, the SSRS built-in User field shows user A instead of user B.
Based on my research, once SSRS is configured to use custom (Forms) authentication by deploying a custom security extension, we can log on to Report Manager using the credentials of our custom security framework via a logon web page. But there is no way to log out or to expire the authentication cookie, so the browser has to be closed manually. As a workaround, we can add a logout button to Report Manager, then use code to clear the cookie and redirect to the home page.
In addition, if you extend Reporting Services to use Forms Authentication, it’s better to use Secure Sockets Layer (SSL) for all communications with the report server to prevent malicious users from gaining access to another user's cookie. SSL enables clients
and a report server to authenticate each other and to ensure that no other computers can read the contents of communications between the two computers. All data sent from a client through an SSL connection is encrypted so that malicious users cannot intercept
passwords or data sent to a report server.
Here is a relevant thread you can reference:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/5e33949d-7757-45d1-9c43-6dc3911c3ced/how-do-you-logout-of-report-manager
For more information about Forms Authentication, please refer to the following document:
https://technet.microsoft.com/en-us/library/aa902691%28v=sql.80%29.aspx?f=255&MSPPError=-2147217396
If you have any more questions, please feel free to ask.
Thanks,
Wendy Fu
TechNet Community Support -
Code to set and destroy session variables in Java Server Pages(JSP)
We have used the following statement to set a session variable:
session.setAttribute("userClient", id);
We have used the following statement to destroy a session variable:
session.setAttribute("userClient", "");
and
session.invalidate() is not working.
Please solve this problem.

> code to set and destroy session variables in Java Server Pages (JSP)
> we have used the following statement to set a session variable:
> session.setAttribute("userClient", id);
> we have used the following statement to destroy a session variable:
> session.setAttribute("userClient", "");

Perhaps if you tried using
session.setAttribute("userClient", null);
or
session.removeAttribute("userClient");
> and
> the session.invalidate() is not working

Not working how?
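For reference, the semantics under discussion can be sketched with a stand-in for HttpSession (SessionStub below is a hypothetical stub, not the servlet API, so the sketch runs standalone): setAttribute with an empty string keeps the attribute, setAttribute with null or removeAttribute deletes it, and invalidate() makes any further access fail.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for javax.servlet.http.HttpSession, modeling the three
// ways discussed above to clear session state.
class SessionStub {
    private Map<String, Object> attributes = new HashMap<>();
    private boolean valid = true;

    void setAttribute(String name, Object value) {
        checkValid();
        if (value == null) {
            attributes.remove(name);   // per the Servlet spec, null acts like removeAttribute
        } else {
            attributes.put(name, value);   // note: "" is a real value, NOT a removal
        }
    }

    Object getAttribute(String name) { checkValid(); return attributes.get(name); }

    void removeAttribute(String name) { checkValid(); attributes.remove(name); }

    void invalidate() { valid = false; attributes = new HashMap<>(); }

    private void checkValid() {
        // real HttpSession methods also throw IllegalStateException after invalidate()
        if (!valid) throw new IllegalStateException("session already invalidated");
    }
}
```

This is why setAttribute("userClient", "") does not "destroy" the variable: it just stores an empty string.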
> Please solve this problem. -
WCF-SQL Adapter and idle SQL Server sessions
Hi,
I have a BizTalk 2013 server with a send port that periodically sends SQL statements to a remote SQL Server. The remote SQL Server hosts the database of a Dynamics NAV server.
The DBA of this SQL Server warned me that a lot of SQL Server sessions remain idle for a long period of time (about 20 idle sessions coming from my BizTalk server, with an idle time of more than 10 days). He thinks we have a problem in BizTalk with closing the connections to the SQL Server.
I wondered if I can handle the situation by changing some settings in the sqlBinding used by my send port, for example MaxPoolSize or other parameters.
Any idea?
Thanks.

So, is this causing an actual problem, or is it just something the DBA happened to see?
20 idle sessions isn't a lot at all and is well below the default connection pool size (100 in most cases).
As long as the count does not keep increasing unchecked, I'd say what you're seeing is expected and desired behavior.
You may have had a burst at some point that caused 20 connections, while normal continuous operation only uses 1 or 2. -
JPA Project, TopLink Server session and ClassLoader issue
hi,
I'm working on a project where JPA annotations are used to describe the mapping between POJOs and database tables. The JpaHelper class is used to bridge the gap between toplink-essentials and toplink. The EntityManagerFactory and the oracle.toplink.threetier.Server are obtained as follows:
emf = JpaHelper.getEntityManagerFactory(Persistence.createEntityManagerFactory("myUnit"));
server = JpaHelper.getServerSession(emf);
Due to different functional and architectural constraints, the same Server session has to be shared across different web applications, so the EntityManagerFactory has been bound to the WebLogic JNDI tree and used by the web applications to obtain the Server session. This works very well for the first client that queries data, but because different web applications in the web container use different ClassLoaders, the QueryException "Missing descriptor for [class ...]" is thrown for all subsequent calls from other web applications. The same behaviour is observed after re-deploying the "first" application.
Going through different threads on the forum, I learned that this could be solved for "pure TopLink" projects, where "sessions.xml" is used, as follows:
oracle.toplink.internal.helper.ConversionManager.getDefaultManager().setShouldUseClassLoaderFromCurrentThread(true);
boolean shouldLoginSession = false;
boolean shouldRefreshSession = true;
boolean shouldCheckClassLoader = true;
session = SessionManager.getManager().getSession(new XMLSessionConfigLoader(), "mySession",
    Thread.currentThread().getContextClassLoader(), shouldLoginSession, shouldRefreshSession, shouldCheckClassLoader);
Could you please let me know if there is something similar for JPA projects where "sessions.xml" isn't used, or whether other options have to be considered?
regards,

I am not completely sure I understand which product and version you are using. TopLink Essentials and Oracle TopLink are independent products. They share the same heritage and much of the same functionality, but typically you would use only one of them in an application.
If you have multiple web applications (WAR) that need to share the same JPA persistence unit I would package all of the JPA classes in their own JAR file and co-locate all of these WARs with the persistence JAR in a single EAR file. This should give you the capability to share the classes across the web applications.
Doug -
Server Session Cache, Read-Only
Since the objects in the server cache are shared, we should not modify the objects in cache. We are thinking of enforcing this in a programmatic way.
Does TopLink have a plan to enhance the current caching capability by allowing users to configure the cache as read-only, read-write, etc., so that updates to read-only cached objects (and possibly their associations) are not allowed? Or is this just too difficult for TopLink to implement?
Thanks a lot for your information.
Haiwei

Doug,
Please excuse my ignorance, but how does setting the flag in the postBuild() descriptor method prevent anybody from modifying the server session object?
I presume I have a generic class, PersistentObject, that has a field isEditable, and I set this flag to false when the object is being built. But how does that stop me from modifying the cached object as I like? I am free to invoke the setters, am I not?
Regards,
Aswin.
Haiwei,
TopLink has made some great enhancements to caching and cache coordination in 10.1.3. Take a look at the 10.1.3 preview 3 for more details.
The server cache's objects are to be considered read-only. The UnitOfWork provides working copies where changes are made, and these are merged into the shared cache on write. They are read-only simply to prevent direct modification outside of a transactional context.
I have worked with customers who maintained a flag on their objects to know whether setters should be permitted. The flag was set to false (disallowing access) in the postBuild descriptor event and then set to true in the postClone event to allow modifications on the UOW working copies. It's a little intrusive, but more portable than injecting a bunch of proprietary TopLink code into your model.
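The flag technique described above might look roughly like this. It is a self-contained sketch with hypothetical names (PersistentObject, Employee, and setEditable are not TopLink API); in a real mapping the flag would be flipped to false in the postBuild descriptor event and to true in postClone, so only UnitOfWork working copies accept writes:

```java
// Hypothetical base class guarding setters with an "editable" flag.
// In TopLink, a DescriptorEventAdapter would call setEditable(false) in
// postBuild (shared-cache copy) and setEditable(true) in postClone
// (UnitOfWork working copy).
class PersistentObject {
    private boolean editable = false;   // cache copies start read-only

    void setEditable(boolean editable) { this.editable = editable; }

    protected void checkEditable() {
        if (!editable) {
            throw new IllegalStateException("attempt to modify a shared-cache (read-only) instance");
        }
    }
}

// Example domain class: every setter defends itself with checkEditable().
class Employee extends PersistentObject {
    private String name;

    public String getName() { return name; }

    public void setName(String name) {
        checkEditable();   // refuses writes on the shared-cache copy
        this.name = name;
    }
}
```

This answers Aswin's objection: it does not make modification impossible, it just turns an accidental write against a shared-cache instance into an immediate, visible failure.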
Going forward we are looking at some simplification solutions using AOP to eliminate the visibility of the UnitOfWork and working copies, so all of this is automated behind the scenes. We hope to preview this when it's available.
Doug -
Hi,
Should there be any issue mixing objects between multiple server sessions?
We have multiple services that each have their own ServerSession.
There are situations where objects retrieved through one ServerSession are ultimately part of an object that is persisted through another ServerSession.
Note: communication between the client and the services is via RMI, and persistence is performed using the mergeWithReferences method.
Example,
- Person object has an Address object.
- Address is retrieved from ServerSession 1 (RMI)
- Person object is retrieved from ServerSession 2 (RMI)
- Address is added to Person, and Person is persisted through ServerSession 2. (RMI)
The behavior I've observed is inconsistent, but ultimately there seem to be problems.
From the example above, occasionally ServerSession 2 will attempt to re-insert the Address object into the table.
In the case where Address is read-only, the cached version of Person in ServerSession 2 has a null Address attribute after persistence, even though the database was updated correctly.
What accounts for this behavior?
Thanks
Mark

Mark,
The problem you are seeing is related to TopLink's existence checking. When you write an object from session 1 into session 2 and it does not exist in that session's cache, TopLink assumes the object is new. This is very similar to running in a cluster, where your write request ends up in a JVM whose TopLink session has not read in the object you plan to write.
In order to make this work you can change the existence-checking setting, which I don't recommend, or you can alter your pattern for dealing with the UnitOfWork.
When you serialize the object(s) across RMI to the server that is going to do your writing I recommend the following pattern:
1. Acquire UnitOfWork from session
2. Read the objects from the database:
unitOfWork.readObject(person);
unitOfWork.readObject(address);
3. Merge objects (unitOfWork merge APIs)
4. Commit UnitOfWork
Step #2 is important here. If you are writing to a session that has the objects cached, then no database call is required. If not, the database version will be read in. The merge will copy the values onto the working copies, and the commit will calculate and write any necessary changes.
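A rough in-memory illustration of why step #2 matters (all names below are hypothetical stand-ins, not the TopLink API): merging a detached object into a session that has never read it fails the existence check and produces an insert, while reading it first turns the same merge into an update of the cached copy.

```java
import java.util.Map;

// Hypothetical domain object with a primary key.
class Address {
    final int id;
    String city;
    Address(int id, String city) { this.id = id; this.city = city; }
}

// Toy model of a UnitOfWork doing cache-based existence checking.
class UnitOfWorkSketch {
    private final Map<Integer, Address> sessionCache;  // objects the session knows
    String lastSql;                                    // what "commit" would emit

    UnitOfWorkSketch(Map<Integer, Address> sessionCache) {
        this.sessionCache = sessionCache;
    }

    // Step 2: read the object into this session (a cache hit needs no DB call).
    Address read(Address detached) {
        return sessionCache.computeIfAbsent(detached.id, id -> new Address(id, null));
    }

    // Steps 3-4: merge the detached changes and "commit".
    void mergeAndCommit(Address detached) {
        Address known = sessionCache.get(detached.id);
        if (known == null) {
            lastSql = "INSERT INTO ADDRESS ...";  // existence check failed: treated as new
        } else {
            known.city = detached.city;
            lastSql = "UPDATE ADDRESS SET CITY='" + detached.city + "' WHERE ID=" + detached.id;
        }
    }
}
```

The same detached object produces an INSERT when merged cold and an UPDATE when read first, which matches the re-insert behavior Mark observed.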
Other recent threads in this forum and the documentation detail the merge options.
Doug -
Isolated client session & caching
I have a 3-tier TopLink application that uses isolated client sessions and stateless EJBs with CMT. A separate client session is created for each EJB method call via server.acquireClientSession(). To retrieve a persistent object for read-only purposes, the application uses the client session. To retrieve a persistent object for read/write purposes, the application uses the UnitOfWork returned by clientSession.getActiveUnitOfWork(). Within the same EJB method call, retrieving the same persistent object via the client session and the UnitOfWork causes the same SELECT to be issued twice.
I noticed that the parent session of the UnitOfWork returned by clientSession.getActiveUnitOfWork() is not clientSession. My guess is that this causes objects retrieved via the UnitOfWork not to be stored in clientSession's cache.
Why is the parent session of the UnitOfWork returned by clientSession.getActiveUnitOfWork() not equal to clientSession? Could the duplicate SELECT be avoided if the parent session were clientSession?

Hello,
Instead of calling getActiveUnitOfWork() call acquireUnitOfWork on the clientSession. The acquired UnitOfWork will have the clientSession as the parent. The method getActiveUnitOfWork() goes to the TopLink CMT interface objects to acquire the UnitOfWork. This UnitOfWork is available throughout the transaction and as such is created with a different parentage.
If you would like to share the same UnitOfWork and ClientSession throughout the transaction, you could call getActiveUnitOfWork on the serverSession (instead of acquireClientSession), then get its parent and execute your read-only queries on that ClientSession.
--Gordon -
Connection Pool and Database Sessions
Hi,
Is there any way to use a connection pool or DataSource when connecting to the database? If I am using a stateless session bean and a data access layer that just creates a database session to write persistent TopLink objects, how can I make use of the application server's connection pool?
Thanks,
Vinod

Hi Vinod,
Yes, TopLink allows you to use the app server's connection pooling and transaction services. Chapter 2 of the Oracle9iAS TopLink Foundation Library Guide provides details as do the TopLink examples. The easiest way to set this up is by using the sessions.xml file. The sample XML below is from the file <toplink903>\examples\ias\examples\ejb\sessionbean\sessions.xml. Here we are adding the datasource defined in OC4J and specifying that we are using the OC4J transaction controller also.
<login>
<user-name>sa</user-name>
<password></password>
<datasource>java:comp/env/jdbc/ejbJTSDataSource</datasource>
<uses-external-transaction-controller>true</uses-external-transaction-controller>
<uses-external-connection-pool>true</uses-external-connection-pool>
</login>
<external-transaction-controller-class>oracle.toplink.jts.oracle9i.Oracle9iJTSExternalTransactionController</external-transaction-controller-class>
When using this approach you need to change your TopLink code slightly in the EJB methods:
a. Acquire the ACTIVE unit of work from the server session (again, see the EmployeeSessionEJB code example) with something like:
UnitOfWork uow = clientSession.getActiveUnitOfWork();
b. Calls to uow.commit() can be omitted or commented out because the EJB will handle this. Note that the EJB methods using this approach must, of course, have TX Required (the default).
Hope this helps.
Pete -
Http.keepAlive does not turn off SSL session cache?
Hi there,
I have a web service client that uses JSSE to make web service calls over HTTPS. In an effort to debug problems, I set http.keepAlive to false. I can see from the SSL debug output that the KeepAlive timer messages no longer show up, but I still see text such as "Cached client session" and "try to reuse cached session", etc.
Shouldn't turning off keepAlive disable the use of persistent sessions?
Thanks.
Yan

They are unrelated features.
HTTP Keep Alive allows the browser to maintain a Socket to the server and issue multiple HTTP requests over that same socket.
SSL session caching is when an SSL session is assigned an ID, and additional SSL connections may be established with the same ID. These additional sockets then do not need to perform the full SSL handshake, since much of the data has already been negotiated previously. -
Hello,
I've been scratching my head for a while over a problem concerning SSL session caching. While examining memory usage of the session cache, I noticed that the number of cached sessions is reset at some point while new sessions are created. For example, sometimes at around 300 to 1500 sessions the number drops to under 10. I have set the session cache size to unlimited (via SSLSessionContext.setSessionCacheSize()) and the session timeout to 24 hours (via setSessionTimeout()).
I noticed that this probably has something to do with garbage collection. I made a server application that accepts SSL connections and every few seconds prints the number of sessions in the cache and the amount of memory consumed (in megabytes). I also enabled garbage collection logging and ran the Java VM with a maximum heap size of 512M. I ran a client application against it that continuously initiates new sessions. Here's what I got:
Sessions: 484 Memory: 1.7913589477539062
[GC [DefNew: 503K->4K(576K), 0.0007350 secs] 2234K->1734K(3796K), 0.0008700 secs]
[GC [DefNew: 513K->63K(576K), 0.0009260 secs] 2243K->1793K(3796K), 0.0010680 secs]
Sessions: 490 Memory: 1.7832870483398438
[GC [DefNew: 569K->37K(576K), 0.0021150 secs] 2299K->1773K(3796K), 0.0022560 secs]
Sessions: 495 Memory: 2.1244659423828125
[GC [DefNew: 543K->37K(576K), 0.0019000 secs] 2279K->1775K(3796K), 0.0034750 secs]
[GC [DefNew: 549K->57K(576K), 0.0009080 secs] 2287K->1796K(3796K), 0.0010290 secs]
[Full GC [Tenured: 1739K->1635K(3220K), 0.0865340 secs] 1962K->1635K(3796K), [Perm : 3267K->3267K(8192K)], 0.0885000 secs]
Sessions: 6 Memory: 1.7752304077148438
[GC [DefNew: 512K->58K(576K), 0.0016310 secs] 2147K->1694K(3796K), 0.0017680 secs]
[GC [DefNew: 568K->37K(576K), 0.0009750 secs] 2204K->1678K(3796K), 0.0011110 secs]
Sessions: 12 Memory: 1.7010269165039062
[GC [DefNew: 549K->56K(576K), 0.0014310 secs] 2190K->1699K(3796K), 0.0015600 secs]
Notice how the number of sessions drops from 495 to 6? In between there is a garbage collection line starting with "[Full GC". Why is this? Shouldn't the session cache keep the sessions until there are either too many of them or they get too old? Only a few minutes have elapsed here, and memory consumption is under control (1 to 3 megabytes).

> In my implementation I avoid session renegotiation as much as possible to achieve high throughput. My goal is to preserve sessions for the entire session timeout time. This raises a few questions:

Understood, but you also have to protect the server against resource exhaustion. It's in the client's interest to cache lots of sessions for a long time; the server's interest is to conserve resources so it can keep itself running.
You actually don't want to cache all the sessions, just the 'hot' ones, so you're better off having the SessionContext remove sessions on an LRU basis by having a finite limit, rather than just letting them be GC'd. The fact that so many sessions were collected in your runs indicates pretty severe memory usage.
> (1) Is there any way to tweak this behavior? For example, can I make the sessions live longer by using some GC flags for the VM?

Allocate more heap space, or cache fewer sessions.
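As an alternative to GC flags, the cache itself can be bounded through the standard JSSE API, which is the finite-limit approach suggested above. This is a minimal sketch; the specific numbers (1000 entries, 4 hours) are illustrative, not recommendations:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSessionContext;

// Sketch: cap the server-side SSL session cache instead of leaving it
// unlimited, so stale sessions are evicted on an LRU basis rather than
// piling up until the garbage collector reclaims them.
class SslCacheConfig {
    static SSLSessionContext configureServerCache() {
        try {
            SSLContext ctx = SSLContext.getDefault();
            SSLSessionContext serverCache = ctx.getServerSessionContext();
            serverCache.setSessionCacheSize(1000);       // 0 would mean "unlimited"
            serverCache.setSessionTimeout(4 * 60 * 60);  // seconds
            return serverCache;
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

With a finite cache size, eviction happens deterministically at the limit instead of unpredictably at the next full GC.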
> (2) How about keeping regular references to all the sessions in my application?

That would save them from GC, of course. But then you'll quickly discover that you really do need a finite limit.
> (3) Is the behavior of the session cache, or the inner workings of the SSL API in general, documented somewhere?

Only in the source code of JSSE, and that's only a property of Sun's JRE implementation. Session caching is not even a required feature, and J2ME implementations, for example, generally don't do it at all. -
SSL session cache memory consumption
Hello,
I am implementing a high availability application that uses SSL session caching. How much memory does it consume? I mean, is setting session cache size to unlimited a 'safe' option?
Thanks.

Not for a server; limit it to something. It's in the server's interest to keep sessions few and short-lived, but conversely it's in the client's interest to keep sessions many and long-lived.
A 'session' object contains the master secret and the current session key, and references to the peer certificate chain. -
Bridge Freezing, Not Displaying Thumbnails, Says Disk Space is Low and to Purge Cache
Hello,
I'm working in Bridge CS5 on Mac OS X version 10.9.2. I have 12 GB of RAM, and the server has 240 TB capacity with 76.69 TB available.
Recently I've received the following error message while processing large photo shoots with both DNG and JPEG files: Bridge is running low on memory. It is recommended that Bridge is restarted.
I have also been manually caching large photo shoots and my cache preferences were set to Keep 100% Preview In Cache and Automatically Export Cache To Folder When Possible. My cache size was set to 500,000. Yesterday, Bridge started to freeze while attempting to open folders. The wheel would spin at the bottom with the message Compiling Criteria. The only way to close Bridge was a Force Quit. This happened several times.
Today, image thumbnails will no longer display, only the DNG/JPEG icons. I received the following error message:
I have tried restarting Bridge while holding the alt/Option key to reset the preferences and checking all three boxes - no luck. I have done the same thing with the boxes unchecked - no luck. I have tried manually purging the cache from its location in the library - no luck. I have tried resetting the cache preferences from the toolbar - no luck. I have tried deleting the cache files from the folders - nothing changes. I have tried purging the cache for each folder from the tool bar - nothing happens.
I need to restore the thumbnails and rebuild the cache for each folder. Does anybody have an idea what is wrong or how to help?
Many thanks!

Okay, so I have a solution: if, after purging the central cache on the server, Bridge still does not perform normally, try emptying the Trash. I did this and it worked.