Transactional, blocking cache operation
Are cache operation calls (e.g. create, update, remove, read) asynchronous by default, from the client's point of view?
How do I make fully transactional, blocking calls? Should I rely on an event listener?
Message was edited by: superlinear
No conflicting behavior. Just wanted to know the default mode of the calls. I suspected that I was making asynchronous calls because the operation runtime was roughly constant regardless of the number of hosts.

Yup, that is exactly why Coherence rocks! :-)
Peace,
Cameron Purdy
Tangosol Coherence: The Java Data Grid
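Since the calls block by default, any asynchrony is an explicit choice on the client side. A minimal sketch of the distinction, using a plain ConcurrentHashMap as a hypothetical stand-in for the cache (this is not the Coherence NamedCache API; all names are invented for the example):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class BlockingVsAsyncDemo {

    // Hypothetical stand-in for a cache: a plain concurrent map, not the Coherence API.
    static final Map<String, String> cache = new ConcurrentHashMap<>();

    // A blocking call: control returns only after the value is stored.
    public static String blockingPutAndGet(String key, String value) {
        cache.put(key, value);  // completes before the next line runs
        return cache.get(key);  // guaranteed to see the value just written
    }

    // The same call made explicitly asynchronous: the caller receives a
    // future immediately and decides when (or whether) to wait on it.
    public static CompletableFuture<String> asyncPutAndGet(String key, String value) {
        return CompletableFuture.supplyAsync(() -> blockingPutAndGet(key, value));
    }

    public static void main(String[] args) {
        System.out.println(blockingPutAndGet("k1", "v1")); // prints v1
        System.out.println(asyncPutAndGet("k2", "v2").join()); // prints v2
    }
}
```

With a real NamedCache the same pattern applies: the call itself blocks, and asynchrony has to be layered on top by the caller.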
Similar Messages
-
Dear experts,
Does anyone know whether in SAP there is any correlation between which transaction produces which Controlling operation (VRGNG)?
For example: when I run transaction KSV5 the system writes operation RK1V; when I run FB50, the system writes COIN.
Note: it is not the Reference Operation.
A good job for all and thanks.

Hi, as far as I know the following concept answers all your questions:
When you update or delete a row, the database creates a before-image of the original block and keeps a reference to it, while the original block is modified to the new value. A user in a different session who reads the table will read from the image block, but the user in the session that made the change reads the modified original block. Until a commit is applied, this state remains.
If a commit is applied, the image blocks log the transaction and are deleted.
If a rollback happens, the image blocks log it, overwrite the original, and are then deleted. -
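The before-image mechanism described above can be reduced to a toy model. This is an illustration of the read-consistency idea only, not Oracle's actual block management; all class and method names are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of before-image read consistency, not Oracle's internals.
public class BeforeImageDemo {

    private final Map<String, String> committed = new HashMap<>(); // "image" blocks
    private final Map<String, String> pending = new HashMap<>();   // uncommitted changes

    // Seed an already-committed value (stands in for an existing row).
    public void seed(String key, String value) { committed.put(key, value); }

    // An update stages the new value without touching the committed image.
    public void update(String key, String newValue) { pending.put(key, newValue); }

    // The session that made the change sees its own uncommitted value.
    public String readSameSession(String key) {
        return pending.containsKey(key) ? pending.get(key) : committed.get(key);
    }

    // Any other session still reads the committed before-image.
    public String readOtherSession(String key) { return committed.get(key); }

    // Commit publishes the pending changes; rollback discards them.
    public void commit()   { committed.putAll(pending); pending.clear(); }
    public void rollback() { pending.clear(); }
}
```

Until commit() runs, only the session that made the change sees it; rollback() simply discards the pending change and other sessions never observe it.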
Cross company transaction between different operating concerns
Dear Gurus
we have a scenario for a new implementation where we have to enable cross-company-code transactions between different operating concerns: stock movements and other intercompany purchases, etc.
Please advise whether this is possible or not.

Dear Mohsin,
is it only for stock transactions?
Regards,
Raman -
How do I stop or block caches?
I am now on Mountain Lion.
I have been able to block cookies by using the Safari preferences to do this. However I still get hundreds of "Caches" from God only knows what websites that still appear in Safari until I remove them.
I don't trust caches because I think they are cookies in disguise gathering data until I slip up and turn off the cookie blocker.
What are they? How can I block caches as I do for cookies? Why isn't there a "caches blocker" as is done with cookies?
Also, the "Do Not Track" website thing doesn't appear to have any effect on either cookies or caches.
Suggestions or edification welcomed.

I have not blocked any file (or done anything). I just find it odd that Safari allows me to block cookies but does not allow me to block the hundreds of caches that pop up from hundreds of websites each time I surf the web.
No one can even tell me what information is in these "caches".
Thanks for trying though. -
APP-QA-16161: Can't find the Transaction Block
When I click on the Create Service Request or View Service Request form, I get the message:
APP-QA-16161: Can't find the Transaction Block
The error is coming from the pre-form trigger of the CSXSRISR.fmb.
We have copied the .fmx of this file and all associated files from other instances, but the issue is still present.
Also, the issue is not present when we enable FRD trace.
Does anyone have any suggestions on this?

Hi,
Please verify that you have no invalid objects in the database -- See (Note: 308927.1 - APP-QA-16161 - Can't find transaction block. Please inform system administrator).
If the above does not help, see (Bug 7253526 - RCVTXERT: APP-QA-16161 CAN'T FIND TRANSACTION BLOCK), I believe you would need to log a SR as the solution is not mentioned.
Regards,
Hussein -
ST22 timeout for all LC-related transactions; liveCache start/stop not working from LC10
Hi Team
we are getting an ST22 timeout for all LC-related transactions; liveCache start/stop has stopped working from LC10.
LC version 7.9
OS AIX
SAP SCM 7
SDBVERIFY giving the following error
Checking package Redist Python
Checking package Loader
Checking package ODBC
Checking package Messages
Checking package JDBC
Checking package DB Analyzer
ERR: Failed
ERR: Checking installation "Legacy" failed
ERR: Group of directory /sapdb/data/config/install changed [0 =>
sdbregview -l output looks good.
Any idea what might have gone wrong?
I'm trying to use sdbverify -repair_permissions, but I am not sure about the exact syntax to use.
And it is not related to the timeout parameter; we tested with different timeout values, but still get the same error.
thanks
Kishore Ch

Hello Kishore,
you could check the sizing of the liveCache data:
* Report /SAPAPO/TS_LCM_REORG_SNP checks the SNP planning areas for superfluous objects.
* Delete old/temporary APO data.
* Report /SAPAPO/TS_LCM_REORG checks time series for superfluous objects.
If you didn't create the planning versions, copy planning versions, or load data into liveCache, then create an SAP message to have your system checked for data-area usage.
If you have long-running APO transactions, the performance of the SCM system has to be checked.
If you have a bottleneck in liveCache and cannot solve the case yourself, create an SAP message for component BC-DB-LVC and get SAP support.
Best regards, Natalia Khlopina -
Query on data block and Operating system block
Hi ,
Does the database request data in terms of data blocks or operating system blocks? If data blocks, how does a data block map to operating system blocks? What are the advantages of separating data blocks from operating system blocks? Can anyone please explain what happens once the database requests data?
Thank you.

sybrand_b wrote:
This place is called 'Oracle Forum'. It does not offer the Oracle University curriculum for free, nor does it offer free abstracts of the Oracle Concepts Manual.
Kindly read the Concepts Manual of your unmentioned version yourself.
Thank you.
Sybrand Bakker
Senior Oracle DBA

Dear Sybrand,
As you said, this place is called a Forum. If you know the answer, then answer the question. If you know the documentation, then refer to the documentation. If you don't know the answer, then don't answer in such a rough manner! Just stop. Just don't type anything. Just be polite. -
Adobe Flash Player pop-up screens are blocking correct operation of IE
As soon as I start up Internet Explorer, Flash Player pop-up screens start appearing. After selecting the No button, the screen disappears and reappears again, and so on. Finally a screen requesting installation permission pops up, and IE shuts down and restarts. Is there something I can do?
Sorry Pat,
I first tried another option, removing and re-installing Flash Player, and
that solved my problem so far. I am not able to send you screen views as
they don't appear any more this moment.
Thanks for your effort
Harro
2013/10/11 Pat Willener <[email protected]>
-
Select Statement Blocking Delete Operation
Hi Experts,
In a single table I am trying to do a delete operation, but it is being blocked by a simple SELECT query.
Database setting: READ COMMITTED SNAPSHOT is already set to ON for that database.
Any idea why the SELECT is blocking the DELETE?
FYI: before the SELECT statement I noticed the following setting: "set transaction isolation level read committed".
Kindly advise.
Regards,
Nandhu

FYI: before the SELECT statement I noticed the following setting: "set transaction isolation level read committed".
Hi NandhuJana,
1. Did you read Uri Dimant's response? Please try to remove this statement and check again.
2. When we ask you for DDL+DML, the idea is to give us the option to (1) reproduce the issue in some cases, and (2) get the best information, which is the real database structure, queries, etc. In this case, for example, I do not see the isolation-level statement in your code, and we need it to understand what is happening :-) The order and structure of the statements (nested transactions, for example) is what matters here.
3. You can check your isolation level using one of these options to make sure what is going on in your script:
SELECT TRANSACTION_ISOLATION_LEVEL = CASE transaction_isolation_level
           WHEN 0 THEN 'Unspecified'
           WHEN 1 THEN 'ReadUncommitted'
           WHEN 2 THEN 'ReadCommitted'
           WHEN 3 THEN 'Repeatable'
           WHEN 4 THEN 'Serializable'
           WHEN 5 THEN 'Snapshot'
       END
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID
GO
DBCC USEROPTIONS WITH NO_INFOMSGS
GO
I hope this is helpful :-)
Ronen Ariely
[Personal Site] [Blog] [Facebook] -
Can a db slowdown with write-behind cause a slowdown in cache operations?
If we have a Coherence cluster, and one cache configured with write-behind is having trouble writing to the db (i.e. it's slow), and we keep adding objects to the cache faster than the db can consume them, will flow control kick in and cause the writes to the cache to block or slow down? I.e. the classic producer-consumer problem, where we are adding objects to the cache faster than the cachestore can consume them.
What happens in this case? Will flow control kick in and block writes to the cache? Will an internal buffer just keep growing? Are there any knobs to tweak this behavior (e.g. for spikes, where the producer temporarily produces faster than the consumer can consume, but then things go back to normal)?

user9222505 wrote:
I believe we discovered that the same thread pool is used for all requests to the cache, including gets, puts and calls into the cachestore. So if the writes are slow within the cachestore, then it uses up all of the threads and slows everything down.

Hi,
This is not really correct.
If a cache in a service is configured to use write-behind then a separate thread for that service is started, which deals with write-behind store and storeAll operations.
The remove operations need to be handled synchronously to avoid corruption of the data set in the scenario of reading an entry from the cache immediately after removing it (if it were not synchronously deleted from the backing storage, reading it back could give an incorrect non-null value). Therefore remove operations are handled synchronously on the service/worker thread, and not delayed on the write-behind thread.
Gets are also synchronously handled, so they again are served on the service / worker thread.
So if the puts are slow and wait too much, that may delay other puts but should not contend with other threads. If the puts are computation intensive, then obviously they hinder other threads because of consumption of the same CPU resource, and not simply because they execute.
Best regards,
Robert -
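The back-pressure question above boils down to a bounded producer-consumer buffer. A minimal sketch, using a java.util.concurrent ArrayBlockingQueue as a hypothetical stand-in for the write-behind queue (Coherence's actual internal queueing is not shown here, and all names are invented for the example):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy write-behind queue, not Coherence's implementation: a bounded buffer
// between the cache (producer) and a slow store (consumer). When the store
// falls behind and the buffer fills, enqueueWrite() starts failing, which
// is the back-pressure the question asks about.
public class WriteBehindSketch {

    private final BlockingQueue<String> writeQueue;

    public WriteBehindSketch(int capacity) {
        writeQueue = new ArrayBlockingQueue<>(capacity);
    }

    // Producer side: stage a pending store; false means the buffer is full.
    // (Using put() here instead would block the producer, the other policy choice.)
    public boolean enqueueWrite(String entry) {
        return writeQueue.offer(entry);
    }

    // Consumer side: the write-behind thread draining one entry to the store;
    // returns null when nothing is queued.
    public String drainOne() {
        return writeQueue.poll();
    }

    public int backlog() { return writeQueue.size(); }
}
```

Whether the real write-behind queue is bounded, and what happens when it fills, depends on the Coherence configuration; the sketch only illustrates the producer-consumer dynamics the question describes.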
Transactions across cache regions
Hello,
Can a single unit of work (transaction) be guaranteed to coordinate across multiple cache regions (both with and without a CacheStore plugged into the nodes)?

Hi,
it depends on what you mean by transactions.
Coherence transactions (both via the local transaction API and the Cache Adapters) are available to gather operations on one or more caches together, to provide isolation of such transactions from one another with locking and version checking, and to commit changes to the cache conditionally, subject to version checking.
However, such operations are NOT atomically rolled back in case an exception was thrown during serialization or from a cache store.
So provided that your serialization/deserialization code is error-free, and you don't use synchronous writes or removes on cache-stores (write-through operations), then yes, such transactions would succeed or would fail atomically in the prepare phase (due to version checking detecting version mismatch).
However, in case write-through cache stores are used, it is not guaranteed that the outcome would be an atomic success or atomic rollback, as each cache-store operation is a separate backend transaction on its own, and failure in the writing of a later entry cannot rollback an already committed write from an earlier processed entry, or an entry successfully written on another cache node.
In the case of write-behind, the writes resulting from the put operations are delayed, so failures in those are not thrown back to the client (the transaction). The remove operations are handled synchronously, but I am not sure whether failures in those are thrown back to the client. My guess would be that they are not, considering that failures in other write-behind operations are NOT thrown back to the client. For operations whose failures are not thrown back to the client, from the point of view of the client transaction Coherence behaves as if no cache stores were used at all.
Best regards,
Robert -
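The version checking in the prepare phase described above can be illustrated with a compare-and-swap on a plain ConcurrentMap. This is a sketch of the idea only, not the Coherence transaction API, and the method name is invented for the example:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of version checking in a prepare phase, using a plain ConcurrentMap
// rather than the Coherence transaction API: a commit succeeds only if the
// entry still holds the value the transaction originally read.
public class VersionCheckSketch {

    public static final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

    // Atomically replace expected with newValue; false means another writer
    // got there first, so the whole transaction should fail in prepare.
    public static boolean commitIfUnchanged(String key, String expected, String newValue) {
        return cache.replace(key, expected, newValue);
    }
}
```

If replace() returns false, another writer changed the entry after it was read, which corresponds to the prepare phase failing on a version mismatch.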
Process, Block & Container Operation
Hi all,
What is the difference between process and block containers? Where would a container operation step be aptly used?
I am trying to build a BPM scenario with a send step inside a block, and I can't set multiline for a block container. I am a little confused. Any help?
- Teresa

Hi Teresa,
Block :
A block is similar to the curly braces used in most languages to define scope. For example, if you are using a loop step in BPM to loop over some variable, you can use a block step to restrict the scope of the loop step. The local variables defined in a block are visible only inside it and cannot be accessed from outside. A receive step is not required inside the block; just use a send step if it is an asynchronous message.
Block- is for logical grouping of the process steps
http://help.sap.com/saphelp_nw04/helpdata/en/f6/e1283f2bbad036e10000000a114084/content.htm
Container :
A transformation step in BPM is an abstraction of message/interface mappings, so you can use a container variable directly inside your message mappings to map a container variable to a data element.
You can look at the thread "Container object in Message Mapping" to learn how to access the container variable inside your message mapping.
Containers - variables in the integration process.
https://www.sdn.sap.com/sdn/collaboration.sdn?contenttype=url&content=https%3A//forums.sdn.sap.com/thread.jspa%3FforumID%3D44%26threadID%3D41502.
Regards
Agasthuri Doss
Message was edited by: Agasthuri Doss Baladandapani -
Non-blocking socket operations block/pause for milliseconds at a time.
The stack trace is always as follows. This operation can block for approximately 1.0 to 2.1 ms, once every 3 minutes or so, sometimes a second apart, or not at all for 8 minutes. We have Java 6 update 18 on CentOS 5.3. The server is lightly loaded.
at sun.nio.ch.NativeThread.current(Native Method)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:182)

Looking at what this function does, it's not clear why it should block/pause or how to fix it.
Any suggestions welcome.

I would try running with HotSpot disabled, in case something is optimized away and you are actually blocked on some lock, and it only looks like you are blocked on NativeThread.current().
Clear cache operation blows up the cluster
Hi,
We are running a 9-node Coherence 3.6 cluster, and every night a Java process starts up, issues a cache.clear() command on each cache, and terminates itself. This appears to have adverse effects on cluster health. Right after this process joins and leaves, several nodes run out of memory and quit. See the error below.
2011-12-07 03:05:18.786/57714.576 Oracle Coherence GE 3.6.1.0 <D5> (thread=Cluster, member=1): Service guardian is 23184ms late, indicating that this JVM may be running slowly or experienced a long GC
2011-12-07 03:05:20.518/57716.308 Oracle Coherence GE 3.6.1.0 <Error> (thread=DistributedCache:OA-DistributedCache, member=1): Terminating PartitionedCache due to unhandled exception: java.lang.OutOfMemoryError
2011-12-07 03:05:20.518/57716.308 Oracle Coherence GE 3.6.1.0 <Error> (thread=DistributedCache:OA-DistributedCache, member=1):
java.lang.OutOfMemoryError: Java heap space
at com.tangosol.coherence.component.net.memberSet.ActualMemberSet.setMember(ActualMemberSet.CDB:11)
at com.tangosol.coherence.component.net.memberSet.ActualMemberSet.add(ActualMemberSet.CDB:6)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.getOwnershipMemberSet(PartitionedService.CDB:13)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.getOwnershipSenior(PartitionedService.CDB:10)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.checkDistribution(PartitionedService.CDB:71)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:15)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Unknown Source)
This creates a ripple effect to all members, and the cluster goes down after this happens. Can you please let me know if you have seen this or what might be causing it?
Sairam
Edited by: SKR on Dec 7, 2011 9:47 AM

It is a simple Java application that iterates through a list of cache names and issues clear() on each of them.
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CacheClearUtility {

    /**
     * Permanently clears the cache specified by name <cacheName> and
     * returns the number of rows cleared.
     * @param cacheName the name of the cache to clear
     * @return the number of objects cleared by this operation
     */
    public static long clearCache(String cacheName) {
        NamedCache cache = CacheFactory.getCache(cacheName);
        long size = cache.size();
        cache.clear();
        return size;
    }

    /**
     * Main method. The first argument (args[0]) specifies the operation and
     * the second argument (args[1]) specifies the value for that operation.
     * @param args command-line arguments
     */
    public static void main(String[] args) {
        int argsCount = args.length;
        if (argsCount == 2 && args[0].equalsIgnoreCase("-clearCache")) {
            String[] tokens = args[1].split(",");
            for (int i = 0; i < tokens.length; i++) {
                String cacheName = tokens[i];
                long nbrOfRowsCleared = clearCache(cacheName);
                if (nbrOfRowsCleared > 0) {
                    System.out.println("Cleared all objects from cache: " + cacheName);
                    System.out.println("Number of objects cleared = " + nbrOfRowsCleared);
                } else {
                    System.out.println("No objects found in cache " + cacheName
                            + ". Please verify if the cache name is correct");
                }
            }
        } else {
            System.err.println("Invalid arguments. Valid arguments are \n"
                    + "-clearCache <comma separated list of cache names>: Clears the contents of the caches specified by a comma separated list of cache names without spaces. Warning: Cache names are case sensitive"
                    + " \n For Eg: -clearCache Option,Equity,Account");
        }
    }
}

Edited by: SKR on Dec 7, 2011 12:08 PM -
Sales Order Transaction Block.
Hi Expert
My client wishes to block the sales order transaction when payment is not made within the payment terms and the credit limit exceeds 75,000, as per the business partner master data, whenever both conditions are fulfilled at the same time.
For example:

Sr No.  Date        Inv No  Payment Days  Amount
1       01/04/2011  1       30            50000
2       04/04/2011  2       15            25000
If the first invoice above becomes overdue as per the payment days, then the user should not be able to create a sales order.
As per standard SAP behavior, SAP throws only an information message for the credit limit but does not block the transaction.
So please send us any solution.
Best Regard,
Srujal Patel
Edited by: Srujal Patel 77 on Apr 11, 2011 4:09 PM

Hi,
Confirm the credit limit deviation authorization, tick the sales order checkbox in the BP tab of General Settings, and activate the built-in approval procedure.
A combination of these things, I guess, will serve your purpose.
Thanks,
Pradnya