[Solved] 'Transaction specified for a non-transactional database'
I am getting started with BDB XML 2.4.14 transactions and the XQuery update functionality, and I am having some difficulty: 'insert nodes ...' queries fail with 'Transaction specified for a non-transactional database'.
Thanks for helping out.
Setup:
I have coded up a singleton manager for the XmlManager, with a ThreadLocal holding the transaction and a query method to execute XQueries. The setup goes like this:
environmentConfig = new EnvironmentConfig();
environmentConfig.setAllowCreate(true);
environmentConfig.setRunRecovery(true);
environmentConfig.setTransactional(true);
environmentConfig.setInitializeCache(true);
environmentConfig.setTxnMaxActive(0);
environmentConfig.setInitializeLocking(true);
environmentConfig.setInitializeLogging(true);
environmentConfig.setErrorStream(System.err);
environmentConfig.setLockDetectMode(LockDetectMode.MINWRITE);
environmentConfig.setJoinEnvironment(true);
environmentConfig.setThreaded(true);

xmlManagerConfig = new XmlManagerConfig();
xmlManagerConfig.setAdoptEnvironment(true);
xmlManagerConfig.setAllowAutoOpen(true);
xmlManagerConfig.setAllowExternalAccess(true);

xmlContainerConfig = new XmlContainerConfig();
xmlContainerConfig.setAllowValidation(false);
xmlContainerConfig.setIndexNodes(true);
xmlContainerConfig.setNodeContainer(true);

// initialize
instance.xmlManager = new XmlManager(instance.getEnvironment(), instance.getXmlManagerConfig());
instance.xmlContainer = instance.xmlManager.openContainer(containerName, instance.getXmlContainerConfig());
private ThreadLocal<XmlTransaction> transaction = new ThreadLocal<XmlTransaction>();

public XmlTransaction getTransaction() throws Exception {
    if (transaction.get() == null) {
        XmlTransaction t = xmlManager.createTransaction();
        log.info("Transaction created, id: " + t.getTransaction().getId());
        transaction.set(t);
    } else if (log.isDebugEnabled()) {
        log.debug("Reusing transaction, id: "
                + transaction.get().getTransaction().getId());
    }
    return transaction.get();
}
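A note on the code above: the per-thread transaction is created lazily, but the snippet never shows it being committed or cleared, and the log below shows the same transaction id (-2147483647) being reused across every operation. A minimal, runnable sketch of the missing commit-and-reset counterpart; Txn is a stand-in class of my own so the pattern runs without BDB XML, and the method names are illustrative, not from the XmlTransaction API:

```java
// Illustrative only: Txn stands in for XmlTransaction. The point is
// ThreadLocal.remove() after commit, so the next getTransaction() on
// this thread starts a fresh transaction instead of reusing a handle
// that has already finished.
public class TxnHolder {
    static class Txn {
        boolean committed = false;
        void commit() { committed = true; }
    }

    private static final ThreadLocal<Txn> transaction = new ThreadLocal<Txn>();

    public static Txn getTransaction() {
        if (transaction.get() == null) {
            transaction.set(new Txn());
        }
        return transaction.get();
    }

    public static void commitTransaction() {
        Txn t = transaction.get();
        if (t != null) {
            t.commit();
            transaction.remove(); // crucial: do not keep a finished handle
        }
    }
}
```

The essential line is transaction.remove(): without it, the ThreadLocal keeps handing back the same transaction forever, and a transaction that is never committed keeps accumulating locks, which fits the lock-exhaustion fix described later in this thread.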
private XmlQueryContext createQueryContext(String docName) throws Exception {
    XmlQueryContext context = xmlManager.createQueryContext(
            XmlQueryContext.LiveValues, XmlQueryContext.Lazy);
    // declare ddi namespaces
    List<NamespacePrefix> namespacePrefixes = documentPrefixes.get(docName);
    for (NamespacePrefix namespacePrefix : namespacePrefixes) {
        context.setNamespace(namespacePrefix.getPrefix(),
                namespacePrefix.getNamespace());
    }
    return context;
}
public XmlResults xQuery(String query, String docName) throws Exception {
    XmlQueryExpression xmlQueryExpression = null;
    XmlQueryContext xmlQueryContext = createQueryContext(docName);
    try {
        xmlQueryExpression = xmlManager.prepare(getTransaction(), query,
                xmlQueryContext);
        log.info(query);
    } catch (Exception e) {
        if (xmlQueryContext != null) {
            xmlQueryContext.delete();
        }
        throw new DDIFtpException("Error preparing query: " + query, e);
    }
    XmlResults rs = null;
    try {
        rs = xmlQueryExpression.execute(getTransaction(), xmlQueryContext);
    } catch (Exception e) {
        // catch deadlock and implement retry
        throw new DDIFtpException("Error on query execute of: " + query, e);
    } finally {
        if (xmlQueryContext != null) {
            xmlQueryContext.delete();
        }
        xmlQueryExpression.delete();
    }
    return rs;
}
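Regarding the '// catch deadlock and implement retry' comment in xQuery: a hedged sketch of the general shape such a retry loop could take. DeadlockException is a placeholder of my own for recognizing a deadlock in the thrown XmlException; in the real code the thread's transaction would have to be aborted and a fresh one started before each retry:

```java
import java.util.concurrent.Callable;

// Generic retry-on-deadlock wrapper. DeadlockException is a placeholder
// for however the real code recognizes a deadlock victim; in BDB the
// victim's transaction must be aborted and a new one begun before the
// operation is retried.
public class DeadlockRetry {
    public static class DeadlockException extends RuntimeException {}

    public static <T> T withRetry(Callable<T> op, int maxAttempts)
            throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return op.call();
            } catch (DeadlockException e) {
                if (attempt >= maxAttempts) {
                    throw e; // give up after maxAttempts tries
                }
                Thread.sleep(50L * attempt); // simple linear backoff
            }
        }
    }
}
```

One would wrap the execute call, e.g. withRetry(new Callable<XmlResults>() { ... }, 3), after mapping the BDB deadlock error onto the placeholder exception.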
<?xml version="1.0" encoding="UTF-8"?>
<Test version="0.1">
<Project id="test-project" agency="dda">
<File id="large-doc.xml" type="ddi"/>
<File id="complex-doc.xml" type="ddi"/>
</Project>
<Project id="2nd-project" agency="test.org"/>
</Test>
Problem:
All the queries are run through the xQuery method, and I do delete the XmlResults afterwards. How do I get around the 'Transaction specified for a non-transactional database' error? What are the transactions doing? How do I get state information out of a transaction? What am I doing wrong here?
1 First I insert a node:
Transaction created, id: -2147483647
Adding document: large-doc.xml to xml container
Reusing transaction, id: -2147483647
Working doc: ddieditor.xml
Root element: Test
Reusing transaction, id: -2147483647
insert nodes <Project id="JUnitTest" agency="test.org"></Project> into doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test
Reusing transaction, id: -2147483647
2 Then do a query:
Reusing transaction, id: -2147483647
doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project/@id
Reusing transaction, id: -2147483647
3 The same query again:
Reusing transaction, id: -2147483647
doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project/@id
Reusing transaction, id: -2147483647
4 Delete a node:
Reusing transaction, id: -2147483647
delete node for $x in doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project where $x/@id = '2nd-project' return $x
Reusing transaction, id: -2147483647
5 Then an error on query:
Reusing transaction, id: -2147483647
doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project/@id
Reusing transaction, id: -2147483647
Transaction specified for a non-transactional database
com.sleepycat.dbxml.XmlException: Error: Invalid argument, errcode = DATABASE_ERROR
at com.sleepycat.dbxml.dbxml_javaJNI.XmlResults_hasNext(Native Method)
at com.sleepycat.dbxml.XmlResults.hasNext(XmlResults.java:136)
Message was edited by:
jannikj
Ok, got it solved by increasing the locks, lockers and mutexes; I also increased the log buffer size:
environmentConfig = new EnvironmentConfig();
// general environment
environmentConfig.setAllowCreate(true);
environmentConfig.setRunRecovery(true); // light recovery on startup
//environmentConfig.setRunFatalRecovery(true); // heavy recovery on startup
environmentConfig.setJoinEnvironment(true); // reuse of environment: ok
environmentConfig.setThreaded(true);
// log subsystem
environmentConfig.setInitializeLogging(true);
environmentConfig.setLogAutoRemove(true);
environmentConfig.setLogBufferSize(128 * 1024); // default 32KB
environmentConfig.setInitializeCache(true); // shared memory region
environmentConfig.setCacheSize(250 * 1024 * 1024); // 250MB cache
// transaction
environmentConfig.setTransactional(true);
environmentConfig.setTxnMaxActive(0); // no cap on concurrently active transactions
// locking subsystem
environmentConfig.setInitializeLocking(true);
environmentConfig.setMutexIncrement(22);
environmentConfig.setMaxMutexes(200000);
environmentConfig.setMaxLockers(200000);
environmentConfig.setMaxLockObjects(200000); // default 1000
environmentConfig.setMaxLocks(200000);
// deadlock detection
environmentConfig.setLockDetectMode(LockDetectMode.MINWRITE);
The Oracle docs give only limited information about the impact of these settings and their options. Can you point me to somewhere I can find written answers, or share hands-on experience?
Similar Messages
-
Should I specify the Parallel parameter for an non-RAC database?
The Oracle documentation states the following:
"The Oracle Database 10g Release 2 database controls and balances all parallel operations, based upon available resources, request priorities and actual system load." This shows that Oracle can optimize the parallel degree automatically.
Should I specify the Parallel parameter for a non-RAC database? Most of the transactions are small OLTP.

What parallel parameter are you talking about?
Generally, you may benefit from parallelization in a very similar manner on RAC as on single instance system. And it is in both cases not sufficient to change the value of any initialization parameter to achieve parallelization of queries, DDL or DML.
Kind regards
Uwe
http://uhesse.wordpress.com -
Transaction code for creation of transaction keys in obyc
Dear Experts,
Can you tell me the transaction code for creating transaction keys in OBYC, asap?
Thanks & Regards,
Radhika.

Hi
There is no TCode and the path is
SPRO-Material Management-Purchasing-Conditions-Price Determination....
Cheers
Srinivas -
I have tried this!
When I use SSIS to extract data from SSAS (that is, an MDX query),
a random error occurred.
Hope some one can understand my poor English....
And the Error Info show below.
Code Snippet
Error: 0xC0202009 at Data Flow Task - For Individual User Tech Points, OLE DB Source 1 1 [31]: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80040E05.
An OLE DB record is available. Source: "Microsoft OLE DB Provider for Analysis Services 2005" Hresult: 0x00000001 Description: "Error Code = 0x80040E05, External Code = 0x00000000:.".
Error: 0xC004701A at Data Flow Task - For Individual User Tech Points, DTS.Pipeline: component "OLE DB Source 1 1" (31) failed the pre-execute phase and returned error code 0xC0202009.

I have had the same error on SQL2008 and now on SQL2012 SSIS, but have been able to eliminate / work around it.
We have a Loop Container in our Control flow that contains a data-flow task with an MDX source. The MDX query for the data-flow source is dynamically built (via an expression) on each iteration of the Loop container (however it always returns the "same shaped"
results - only the filters in the WHERE clause are different).
We've found the error to be somewhat intermittent - sometimes the package will complete successfully, other times it will fail with the 0x80040E05 error at varying iterations thru the container loop.
To alleviate the problem we setup the SQL Agent job-step for this package to re-try on failure for up to 5 retries - not an ideal workaround, but it helped to improve the success rate of the Job.
We have no idea why this error is occurring or what is causing it; however, it appears to be timing-related in some way, and I have only seen the issue when using an SSAS OLE-DB data source with a dynamically generated MDX query. I have managed to virtually eliminate the error with a not-ideal workaround in the SSIS package - no idea why this works/helps (hopefully Microsoft will be able to work it out and resolve the issue, as it's been plaguing us since SQL2008 and is still here in SQL2012
SP1...)
Workaround for MDX causing 0x80040E05 error:
Within our loop container we have added a Script task with OnSuccess precedent constraint to the data-flow task that contains the dynamically generated MDX source query. The script task simply introduces a WAIT in the processing immediately after the
data-flow task completes of about 5 seconds, before allowing SSIS to continue with the next iteration (e.g. System.Threading.Thread.Sleep(5000)).
With this delay in place we have had much more stable SSIS package executions - don't know why, but that's what we have observed. Also note that when we migrated to SQL2012 SSIS packages the 0x80040E05 error returned; however, we were able to eliminate it
once more by increasing the WAIT time to 10 seconds on this script task.
Now waiting for 10 seconds is not an ideal solution / workaround to this problem - particularly when it is contained within a Loop Container (in our case it has added nearly 30 minutes of "WAIT time" to the package execution duration), however this workaround
is better than having the package fail 80%+ of the time...
regards,
Piquet -
NON-transactional session bean access entity bean
We are currently profiling our product using the Borland OptimizeIt tool, and we have found some interesting issues. Due to our design, we have many session beans which are non-transactional, and these session beans access entity beans to do read-only operations such as getWeight and getRate. Since these are read-only, there is no need for the transaction commit work, which really takes time, as can be seen in the profile. I know WebLogic supports read-only entity beans, but it seems that this only benefits the ejbLoad call; my test program shows that WebLogic still creates a local transaction even though I specified it as transaction-not-supported, and Transaction.commit() is always called in postInvoke(). From the profile, we saw that for a single method call such as getRate(), 80% of the time is spent in postInvoke(). Any suggestion on this? BTW, most of our entity beans use Exclusive lock; that's the reason we use non-transactional session beans, to avoid deadlock problems.
Thanks

Slava,
Thanks for the link; actually, I read it before, and the following is what I extracted from the doc:
<weblogic-doc>
Do not set db-is-shared to "false" if you set the entity bean's concurrency
strategy to the "Database" option. If you do, WebLogic Server will ignore the
db-is-shared setting.
</weblogic-doc>
Thanks
"Slava Imeshev" <[email protected]> wrote:
Hi Jinsong,
You may want to read this to get more detailed explanation
on db-is-shared (cache-between-transactions for 7.0):
http://e-docs.bea.com/wls/docs61/ejb/EJB_environment.html#1127563
Let me know if you have any questions.
Regards,
Slava Imeshev
"Jinsong HU" <[email protected]> wrote in message
news:[email protected]...
Thanks.
But it's still not clear to me on the db-is-shared setting: if I specified the entity lock as database lock, I assumed db-is-shared is useless, because for each new transaction the entity bean will reload its data anyway. Correct me if I am wrong.
Jinsong
"Slava Imeshev" <[email protected]> wrote:
Jinsong,
See my answers inline.
"Jinsong Hu" <[email protected]> wrote in message
news:[email protected]...
Hi Slava,
Thanks for your reply. Actually, I agree with you: we need to review our db schema and separate the business logic to avoid db locks. I cannot just say, guys, we need to change this and that; since it's a big application, developed since the EJB 1.0 spec, I think they are afraid to make such a big change.

Total rewrite is the worst thing that can happen to an app. The better approach would be identifying the most critical piece and performing surgery on it.
Following are the questions in my mind:
(1) I think there must be many companies using WebLogic Server to develop large enterprise applications; I am just wondering what the main transaction/lock mechanism used is. Transactional session / database lock, db-is-shared entity
I can't say for the whole community; in my experience the standard usage pattern is session facades calling entity EJBs with the Required TX attribute, plus plain transacted JDBC calls for bulk reads or inserts.
is the dominant one? It seems that if you specify database lock, db-is-shared should be true, right?
Basically, that's not true. One needs db-is-shared only if there are changes made to the database from outside the app server.
(2) For an RO bean, if I specify read-idle-timeout as 0, it should only load once, at first use, right?
I assume read-timeout-seconds was meant. That's right, but if an application constantly reads new RO data, RO beans will be constantly dropped from the cache and new ones will be loaded. You may want to look at the server console to see if there's a lot of passivation for RO beans.
(3) For the clustering part, has anyone used it in a real enterprise application? My concern: since database lock is the only way to choose, what about the effect of ejbLoad on performance? Since most transactions are short-lived, if high-volume transactions are being processed, I am just scared to death about the ejbLoad overhead.
ejbLoad is a part of the bean's lifecycle; why would you be scared of it? If ejbLoads take too much time, it could be a good idea to profile the SQL used. The right index optimization can make a huge difference. Also, you may want to consider using CMP beans to let WebLogic take care of load optimization.
(4) If using optimistic locking, all the ejbStores need to do a version check or timestamp check, right? What about that overhead?
As for optimistic concurrency, it performs quite well, as you can use lighter isolation levels.
HTH,
Slava Imeshev
"Jinsong Hu" <[email protected]> wrote in message
news:[email protected]...
We are using Exclusive Lock for entity beans, because we do not want to load data in each new transaction. If we use Database lock, that means we dedicate data access calls to the database; if a database deadlock happens, it's hard to detect, while using Exclusive lock we can detect this deadlock at the container level.
The problem is, using Exclusive concurrency mode you serialize access to the data represented by the bean. This approach has a negative effect on the ability of the application to process concurrent requests. As a result the app may have performance problems under load.
Actually, at the beginning, we did use database lock and a transactional
The fact that you had database deadlocking issues tells me that the application logic / database schema may need some review. Normally, to avoid deadlocking, it's good to group database operations, mixing updates and inserts in one place, so that the db locking sequence is not spread out in time. Moving to forced serialized data access just hides design/implementation problems.
session bean, but the database deadlocks and frequent ejbLoads really killed us, so we decided to move to the Exclusive lock, and to avoid deadlock we changed some session beans to non-transactional.
Making session beans non-transactional makes the container create short-lived transactions for each call to an entity bean method. It's a costly process and it puts additional load on both the container and the database.
We could use the ReadOnly lock for some entity beans, but since WebLogic Server will always create a local transaction for an entity bean, and we found transaction commit to be expensive, I am questioning why we need to create a container-level transaction for a read-only bean.
First, read-only beans still need to load data. Also, you may see RO beans constantly loading data if db-is-shared is set to true. Another reason can be that RO semantics are not applicable to the data presented by the RO bean (for instance, you have a reporting engine that constantly produces "RO" data, while the application consuming that data retrieves only new data and never asks for "old" data). RO beans are good when there is relatively stable data accessed repeatedly for read-only access.
You may want to tell us more about your app; we may be of help.
Regards,
Slava Imeshev
I will post the performance data; let's see how costly transaction.commit is.
"Cameron Purdy" <[email protected]> wrote:
We are currently profiling our product using Borland OptimizeIt tool, and we found some interesting issues. Due to our design, we have many session beans which are non-transactional, and these session beans will access entity beans to do the reading operations, such as getWeight, getRate; since it's read only, there is no need to do transaction commit stuff which really takes time, as could be seen through the profile. I know weblogic supports readonly entity beans, but it seems that it only has benefit on the ejbLoad call; my test program shows that weblogic still creates a local transaction even when I specified it as transaction not supported, and Transaction.commit() will always be called in postInvoke(). From the profile, we got that for a single method call, such as getRate(), 80% of the time is spent on postInvoke(). Any suggestion on this? BTW, most of our entity beans are using Exclusive lock; that's the reason that we use non-transactional session beans, to avoid the deadlock problem.
I am worried that you have made some decisions based on an improper understanding of what WebLogic is doing.
First, you say "non transactional", but from your description you should have those marked as tx REQUIRED to avoid multiple transactions (since non-transactional just means that the database operation becomes its own little transaction).
Second, you say you are using exclusive lock, which you should only use if you are absolutely sure that you need it (and note that it does not work in a cluster).
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com/coherence.jsp
Tangosol Coherence: Clustered Replicated Cache for Weblogic
"Jinsong Hu" <[email protected]> wrote in message
news:[email protected]...
-
In the WebLogic 5.1 documentation it mentions that for non-transactional
datastores in a cluster you can use a custom "dictionary" object to
keep track of which WebLogic server instance is currently hosting a given
entity bean. This approach is used to ensure the integrity of entity EJBs
in the cluster.
My question is, has anyone done this or does someone at WebLogic have an
example or additional information on what all is required to implement this
"dictionary" object?
Thanks in advance,
Michael
I would suggest that you use read-only entity beans for caching
non-transactional data. Much of the data on websites is
non-transactional and can be cached for a while.
For transactional data, I would use a regular entity bean and incur the
overhead of going to the db.
-- Rob
Michael Rousey wrote:
>
> I am running WL 5.1 in a clustered environment. I have an entity bean which
> represents a complex hierarchy of objects. I would like to be able to take
> advantage of a caching mechanism so that every call to the bean does not
> force a reload from the database. In order to accomplish this I need to
> either guarantee all calls to a single instance of the bean go to a single
> machine(so the cache is in sync with the DB), or have a clustered cache(I
> don't want to try and tackle this). The documentation under "entity EJBs
> in a cluster" talked about a "dictionary" object that appears to act as a
> router. The idea is for the dictionary object to keep track of which
> instance of the bean is deployed on a specific machine in the cluster. All
> actors would use the dictionary to obtain the specific instance of the bean
> they need. If the instance is new, then WL would create a new instance in a
> normal fashion; the dictionary object would keep track of the new instance
> and forward and new request to the machine where the instance was created.
> This is similar to factory based routing in WLE.
>
> Any help on how to implement this or any additional ideas would be helpful.
>
> Thanks,
>
> Michael
>
> "Rob Woollen" <[email protected]> wrote in message
> news:[email protected]...
> > Hmm, I'm not sure what the docs were talking about here.
> >
> > Can you provide some more details about what you are trying to do?
> >
> > -- Rob
> >
> > Michael Rousey wrote:
> >
> > > In the WebLogic 5.1 documentation it mentions that for non-transactional
> > > datastores in a cluster that you can use a custom "dictionary" object to
> > > keep track of which webLogic server instance is currently hosting a
> given
> > > entity bean. This approach is used to insure the integerity for entity
> EJBs
> > > in the cluster.
> > >
> > > My question is, has anyone done this or does someone at WebLogic have an
> > > example or additional information on what all is required to implement
> this
> > > "dictionary" object?
> > >
> > > Thanks in advance,
> > >
> > > Michael
> >
-
How do non-transactional reads interact with database locking?
I find that it is possible for a non-transactional read from one PersistenceManager to return null for an object, or to try to load an object that has been deleted from the datastore by another persistence manager (causing a JDO exception).
What are the possible solutions?

Clive,
Do your PMs live a long time? While there are some benefits to long
lived PMs in situations with low interaction across PM's objects,
sometimes you can get into the situation where data can get stale since
commits in one PM won't necessarily propagate to the other PM. You can
avoid this either by allocating short lived PMs which have low overhead
in general or calling PM.refresh () appropriately.
Clive Cox wrote:
How are you getting into these situations?

Stephen,
I have a web service in which I have a persistence manager per member, in
which concurrent access is controlled with appropriate synchronization.
However,
(perhaps foolishly) I had allowed non-transactional reads and allow one
member to read
another's data from their persistence manager - there is a high degree of
interation between members.
I use Kodo JDO with mySQL.
The offending situation is with one class that has a field referencing another class, which is 2 tables in MySQL. In the persistence manager for one member I change the reference to a new object and delete the old referent, while at the same time another member in a different persistence manager is reading this object structure; I get either a null pointer for the reference, or the other member tries to fetch from the datastore an object that no longer exists and a JDO user exception is thrown.
I assume this is what can happen if one foolishy uses non-transactional
reads.
However, I would like to know how other people use JDO in a web environment
in which
users (members) interact highly and change each others data and read each
others data.
What is the best way to design a web server architecture using JDO in this
situation?
Clive
Steve Kim
[email protected]
SolarMetric Inc.
http://www.solarmetric.com -
Transaction variant for RPCPCC00
I need to create a transaction variant for the program RPCPCC00 and default the company code value so that a given person can run the report only for the given default value. I'm not able to set a default. Can anybody help me in this? Thanks in advance.
Use transaction SHD0 to create a transaction variant for the desired transaction. For your case, plug in the t-code for the report that you want to create the variant for.
See if the link helps.
http://www.mortenhjorthnielsen.dk/Security/transactionvariants.htm -
Hi All,
I have been exploring how to create/use a transaction variant for transaction code XD02. I have already done the first part of my requirement, which is to disable the screen fields of other General Data of XD02. Now, can I limit the access of the transaction variant to a specific customer account group?
Thanks in advance for your suggestions.
Regards,
Ranilo

Hi,
If you have done the transaction variant for a particular transaction, it is global (it applies to all account groups). So to meet this requirement, check whether any user exit or BAdI is available.
Regards
Ramakrishna Pathi -
Transaction key for account determination
Hi guru,
i would like to add a new transaction key for account determination to fullfil a business process .
Could you help me to find out the Transaction code in customizing?
A prompt reply is appreciated.
Thanks in advance
Vir

Dear Vir,
The G/L account determination process in G/L Accounting mostly uses the
transaction key technique to obtain the required G/L accounts for business transactions.
For Foreign currency transactions:
SPRO -> Financial Accounting -> General Ledger Accounting -> Periodic Processing -> Valuate -> Foreign Currency Valuation
Other General Ledger Transactions
SPRO -> Financial Accounting -> General Ledger Accounting -> Business Transactions -> Closing -> Carried Forward -> Define Retained Earning Account
OBA1 G/L account determination for foreign currency valuation
OBYA Maintain inter-company clearing G/L accounts
OB53 Maintain retained earnings accounts
OBBV Assign AR/AP adjustment accounts
OBXD Maintain bad debt reserve accounts
ACACAD Accrual account determination ( simple or extended )
ACACADMETASGL Accrual account determination strategy
ACACTREE01 Create accrual objects
Pricing:
V/09 Condition Types: Account Determin.
V/10 Account Determination: Access Seqnc
V/11 Conditions: Account Determin.Proced.
V/12 Account Determination: Create Table
V/13 Account Determination: Change Table
V/14 Account Determination: Display Table
G/L Accounts in MM Account Determination:
The program RM07C030 checks all the G/L accounts defined in your material master valuation classes.
The search is via company code and valuation area.
Visit the following link for more information:
http://www.sapscene.com/sapscene/accassg.html#SD-FI%20Account%20Determination
Reward points if this is helpful.
Regards,
Naveen. -
Transaction aborted (possibly due to transaction time out) - Question
This one is driving me insane!
Bellow you will find the code, the descriptor stuff and the exception. The code belongs to a CMP EJB using CMT (transactions). This gets called from a session bean. Please help.
Here is the code:
public Object ejbCreate( String id )
    throws CreateException {
    s_logger.debug( "ejbCreate()" ); // this prints
    //setId( id );
    return null;
}

public void ejbPostCreate( String id )
    throws CreateException {
    s_logger.debug( "ejbPostCreate()" ); // this also prints
}
here is descriptor stuff:
<container-transaction>
<method>
<ejb-name>AgentBean</ejb-name>
<method-name>*</method-name>
</method>
<trans-attribute>Required</trans-attribute>
</container-transaction>
here is the !@#$ exception (sorry been tracing this one for hours):
RemoteException occurred in server thread; nested exception is:
java.rmi.RemoteException: Transaction aborted (possibly due to transaction time out).; nested exception is: javax.transaction.RollbackException: Transaction marked for rollback; nested exception is:
javax.transaction.RollbackException: Transaction marked for rollback
java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
java.rmi.RemoteException: Transaction aborted (possibly due to transaction time out).; nested exception is: javax.transaction.RollbackException: Transaction marked for rollback; nested exception is:
javax.transaction.RollbackException: Transaction marked for rollback
java.rmi.RemoteException: Transaction aborted (possibly due to transaction time out).; nested exception is: javax.transaction.RollbackException: Transaction marked for rollback; nested exception is:
javax.transaction.RollbackException: Transaction marked for rollback
javax.transaction.RollbackException: Transaction marked for rollback
<<no stack trace available>>

It usually means that you're doing something which creates a loop in your locking - something like setting a value in each instance from an update method of one of the instances. Your posted code doesn't seem to be the trouble, but I could be missing something.
Success,
S. -
Got error - You must supply payment terms for your non -credit transaction
Hi Experts,
Recently we upgraded the 11.5.8 instance to 11.5.10.2.
We have one customized application related to Receivables.
When we submit the Surcharge demand generation request, it completes with an error,
and in the ra_interface_errors_all table I see the error 'You must supply payment terms for your non-credit transaction' in the message_text column.
We ran the profile option migration report concurrent request for the product-specific task after the upgrade.
Due to this issue we are unable to run the demand generation request.
Please help me to solve this issue.
Thanks
R.Sundaravel

Have a look at Note 178298.1 & 331099.1!
HTH
Prashant Pathak -
BAPI for Vendor Non-PO Invoice processed through MIR7 Transaction
Hi All,
I have a requirement where I will get data from a flat file. Using this data, vendor non-PO invoices
will be processed through the MIR7 transaction with multiple line items - uploading and parking the documents for vendor non-PO invoices.
The data is in below format in flat file:
Header fields in 1st line
Dr. Key related fields in 2nd line
Cr. Key related fields in 3rd line
Header fields in 4th line
Dr. Key related fields in 5th line
Cr. Key related fields in 6th line
etc .... in flat file
In these main fields coming in from flat file are: comp.code, doc.type, vendor no,
Debit/Credit indicatorSHKZG, G/L a/c no, inv/post dates etc..
My issues:
1. I am planning to use the function module BAPI_INCOMINGINVOICE_PARK - is it the correct FM?
But here the PO number seems to be mandatory for this FM, and it is not available in our flat file;
if so, can anybody suggest a BAPI for parking vendor non-PO invoices for the MIR7 transaction?
2. Which parameters need to be filled from the above flat file data, especially for the 2 line items (the Dr./Cr. data other than the header data)? For the header and the 2 line items (Dr. and Cr. data), which parameters will be updated with which data from the flat file? Can anybody clarify?
3. Also, a workflow number is available in the flat file; in which parameter does this field need to be filled, and by doing this will it trigger the workflow automatically?
Can anybody clarify the above points?
Thanks in advance.
Thanks,
Deep.
Hello Deep,
I know this answer might be quite late for you; nevertheless I will write it, because there are lots of questions like yours. BAPI_INCOMINGINVOICE_PARK is indeed a suitable BAPI for you. All you need to do is call it this way:
DATA: headerdata TYPE bapi_incinv_create_header,
      table_item TYPE TABLE OF bapi_incinv_create_item,
      gl_account TYPE bapi_incinv_create_gl_account,
      table_gl   TYPE TABLE OF bapi_incinv_create_gl_account,
      lv_docnr   TYPE bapi_incinv_fld-inv_doc_no,
      lv_year    TYPE bapi_incinv_fld-fisc_year,
      lt_return  TYPE TABLE OF bapiret2.

* fill headerdata and table_gl, leaving table_item empty
CALL FUNCTION 'BAPI_INCOMINGINVOICE_PARK'
  EXPORTING
    headerdata       = headerdata
*   addressdata      =
  IMPORTING
    invoicedocnumber = lv_docnr
    fiscalyear       = lv_year
  TABLES
    itemdata         = table_item
*   accountingdata   = table_acc
    glaccountdata    = table_gl
*   materialdata     =
*   taxdata          = table_tax
*   withtaxdata      =
*   vendoritemsplitdata =
    return           = lt_return.
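One follow-up step not shown above (an assumption based on standard BAPI usage, not something stated in the original reply): a document created through a BAPI only becomes persistent after an explicit commit, so checking RETURN and then calling BAPI_TRANSACTION_COMMIT would normally come next:

```abap
READ TABLE lt_return TRANSPORTING NO FIELDS WITH KEY type = 'E'.
IF sy-subrc <> 0.
  " no error messages: persist the parked invoice document
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
    EXPORTING
      wait = 'X'.
ELSE.
  " errors returned: discard the BAPI's changes
  CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
ENDIF.
```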
This works perfectly for me, creating a MIR7 document without referring to a PO. The trick here is to fill the GLACCOUNTDATA table while leaving ITEMDATA empty. -
Transaction failed for unknown reason (100): Unable to complete backup at this time. Does anyone know how to solve this issue?
Thanks.
The system is set up to back up files to iCloud at the end of the day. This has not happened for some time now, and the message I get is the backup error.
-
"rollback" for non-transaction objects
Hello,
I have the following situation:
When I work with some entity in the DAL layer it is attached, so all my changes will be flushed. After flushing, a DB constraint can be violated. My changes can be made inside a transaction or without any transaction, and here is my problem:
If it is in a transaction, I should not roll it back (the user can correct the "mistake"); if it is outside a transaction, it should be rolled back.
To handle the situation with a transaction I've made my exception an '@ApplicationException', and it works.
But that is bad for the second situation, without a transaction. In this case my data are flushed anyway (if the exception thrown is an ApplicationException). In my exception handler I tried this: "if I'm not in a transaction, then service.getEntityManager().clear();", but the data are flushed anyway...
So how do I handle this situation? Should "clear()" outside a transaction work like "rollback()" inside one?
(I think it could be done like this: throw ApplicationException if I am in a transaction, otherwise RuntimeException to roll back any changes; but maybe there is another solution?)
Thanks in advance,
Maxym
If you use CMT EJBs, then you do not have to begin, commit, or roll back.
If your client calls methods that can throw application exceptions, then you
must handle the transaction (commit or roll back) in the cases that those
application exceptions are thrown.
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com
+1.617.623.5782
WebLogic Consulting Available
"TH Lim" <[email protected]> wrote in message
news:3a3989d6$[email protected]..
>
> Hi,
>
> Part of my client code looks like this,
> try {
> ..
> trans.begin(); // transaction begins
> .. // calling some EJB methods
> ..
> trans.commit(); // transaction commits
> }
> catch(Exception e)
> {
> }
>
> My question is: do I have to explicitly call trans.rollback() if a
> RemoteException is thrown between begin() and commit()? Do I have to
> instantiate a new transaction if the current transaction is rolled back?
> Thank you.
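The explicit-rollback pattern the question asks about can be sketched as below. DemoTransaction is a hypothetical stand-in for javax.transaction.UserTransaction, used only to make the control flow runnable here; it is not a real JTA class:

```java
// Minimal sketch: roll back explicitly when an exception interrupts
// the begin()/commit() pair, as the quoted client code should do.
class DemoTransaction {
    enum Status { ACTIVE, COMMITTED, ROLLED_BACK }
    Status status = Status.ACTIVE;
    void begin()    { status = Status.ACTIVE; }
    void commit()   { status = Status.COMMITTED; }
    void rollback() { status = Status.ROLLED_BACK; }
}

public class TxRollbackSketch {
    // Runs `work` inside the transaction and returns the final status.
    static DemoTransaction.Status runInTx(DemoTransaction tx, Runnable work) {
        tx.begin();
        try {
            work.run();
            tx.commit();      // only reached if work completed normally
        } catch (RuntimeException e) {
            tx.rollback();    // explicit rollback when work fails
        }
        return tx.status;
    }

    public static void main(String[] args) {
        DemoTransaction tx = new DemoTransaction();
        System.out.println(runInTx(tx, () -> {}));
        // prints COMMITTED
        System.out.println(runInTx(tx, () -> { throw new RuntimeException("boom"); }));
        // prints ROLLED_BACK
    }
}
```

Note that after a rollback a fresh begin() is needed before further transactional work, which matches the second part of the question.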