MySQL non-transactional changed tables ...
I'm trying to test the rollback functionality of my db connection by executing 2 INSERT statements:
INSERT INTO Users (login_id,name,language_code) VALUES ('Satanka','Satanka','2')
INSERT INTO Users (tlogin_id,name,language_code) VALUES ('Satanka','Satanka','2') // this one should fail because there is no tlogin_id column in the Users table
When I run my test program I get the following error message:
java.sql.SQLException: Column not found: Unknown column 'tlogin_id' in 'field list'
at org.gjt.mm.mysql.MysqlIO.sendCommand(MysqlIO.java:508)
at org.gjt.mm.mysql.MysqlIO.sqlQueryDirect(MysqlIO.java:561)
at org.gjt.mm.mysql.MysqlIO.sqlQuery(MysqlIO.java:646)
at org.gjt.mm.mysql.Connection.execSQL(Connection.java:973)
at org.gjt.mm.mysql.Connection.execSQL(Connection.java:897)
at org.gjt.mm.mysql.Statement.executeUpdate(Statement.java:230)
at org.gjt.mm.mysql.jdbc2.Statement.executeUpdate(Statement.java:99)
at com.pylon.portal.timesheet.datahandling.DBBrokerSingleton.ExecuteSQLUpdate(DBBrokerSingleton.java:70)
at DBClient.main(DBClient.java:12)
// Up to this point I understand everything, but not the next part:
java.sql.SQLException: General error: Warning: Some non-transactional changed tables couldn't be rolled back
at org.gjt.mm.mysql.MysqlIO.sendCommand(MysqlIO.java:508)
at org.gjt.mm.mysql.MysqlIO.sqlQueryDirect(MysqlIO.java:561)
at org.gjt.mm.mysql.MysqlIO.sqlQuery(MysqlIO.java:646)
at org.gjt.mm.mysql.Connection.execSQL(Connection.java:973)
at org.gjt.mm.mysql.Connection.execSQL(Connection.java:897)
at org.gjt.mm.mysql.Connection.rollback(Connection.java:590)
at com.pylon.portal.timesheet.datahandling.DBBrokerSingleton.ExecuteSQLUpdate(DBBrokerSingleton.java:78)
at DBClient.main(DBClient.java:12)
Why is a simple insert a "non-transactional" change?
I'm creating the connection like this:
Connection con=DriverManager.getConnection("jdbc:mysql://"+url+"/TimesheetDB?user="+Name+"&password="+Pass);
con.setTransactionIsolation(java.sql.Connection.TRANSACTION_READ_UNCOMMITTED);
con.setAutoCommit( false );
I'm using JRE 1.3
JDBC driver mm.mysql-2.0.4-bin.jar
mysql server 4.0
Yes, you're right. I found that the default table type doesn't support transactions, and the MySQL server sent me that message.
I thought that
"java.sql.SQLException: General error: Warning: Some non-transactional changed tables couldn't be rolled back"
refers to a problem inside the virtual machine.
Now I'm trying to set up MySQL to let me create InnoDB tables. Not so simple, eh...
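For reference, a sketch of how one might check and convert the table (the table name is from the question; the TYPE clause is the MySQL 4.0-era syntax, which I'm assuming for this server version):

```sql
-- Show the current table type (MyISAM tables are non-transactional,
-- so ROLLBACK cannot undo changes made to them)
SHOW TABLE STATUS LIKE 'Users';

-- Convert the table to InnoDB so INSERTs participate in transactions
-- (MySQL 4.0 uses TYPE; later versions use ENGINE)
ALTER TABLE Users TYPE = InnoDB;
```

Note that InnoDB support must be enabled in the server configuration for the ALTER to take effect; otherwise MySQL 4.0 may silently fall back to MyISAM.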
Thanks for helping
Similar Messages
-
Hi.
I need to make a logger table in the database. When I insert into that table, the changes should be saved even when other changes are rolled back. How can I do it?
Best Regards, Eugene Kislov
How are you inserting records in that table? In PL/SQL?
If you are inserting in a PL/SQL block then you can try AUTONOMOUS_TRANSACTION. This will commit the child transaction irrespective of the state of the parent transaction.
E.g:
CREATE OR REPLACE PROCEDURE PROC_TEST
AS
BEGIN
PROC_INSERT_LOG; -- the call commits independently of this procedure's transaction
END;
CREATE OR REPLACE PROCEDURE PROC_INSERT_LOG
AS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO LOGER_TABLE(..) VALUES (..);
COMMIT;
END;
-
Change Profit centers in all transactional data tables
Hello Experts,
I have been assigned to change Profit center MASKTOOLS to BRION in all transaction data tables.
Is there any standard SAP programs/reports that will help me do this?
How do I identify all tables where Profit center is stored?
Bear in mind that if I make changes directly in tables (for example via ABAP programs), BW will not recognise the changed documents and therefore will not be updated accordingly.
Regards in advance, Steve
Thanks Karthik T,
What about Internal orders, Sales orders, Service orders, Production orders, PA segments etc etc. etc..
I also have to think about orders with settlement rules which hold both profit center and profitablility segments which also hold profit centers.
I am still hoping there are standard SAP reports that can help identify and change these.
Steve D -
How do non-transactional reads interact with database locking?
I find that it is possible for a non-transactional read from one PersistenceManager to return null for an object, or to try to load an object that has been deleted from the datastore by another PersistenceManager (causing a JDO exception).
What are the possible solutions?
Clive,
Do your PMs live a long time? While there are some benefits to long-lived PMs in situations with low interaction across PMs' objects, sometimes you can get into the situation where data gets stale, since commits in one PM won't necessarily propagate to another PM. You can avoid this either by allocating short-lived PMs, which have low overhead in general, or by calling PM.refresh() appropriately.
Clive Cox wrote:
How are you getting into these situations?
Stephen,
I have a web service in which I have a persistence manager per member, in which concurrent access is controlled with appropriate synchronization. However (perhaps foolishly), I had allowed non-transactional reads, and allow one member to read another's data from their persistence manager - there is a high degree of interaction between members.
I use Kodo JDO with MySQL.
The offending situation is one class with a field that references another class, which is two tables in MySQL. When, in the persistence manager for one member, I change the reference to a new object and delete the old one while another member in a different persistence manager is reading this object structure, I get either a null pointer for the reference, or the other member tries to get from the datastore an object that no longer exists and a JDO user exception is thrown.
I assume this is what can happen if one foolishly uses non-transactional reads. However, I would like to know how other people use JDO in a web environment in which users (members) interact highly and change and read each other's data.
What is the best way to design a web server architecture using JDO in this situation?
Clive
Steve Kim
[email protected]
SolarMetric Inc.
http://www.solarmetric.com -
Creating a non-transactional datasource
All,
I am using Quartz to fire Hibernate Jobs. Quartz Scheduler runs as a servlet in tomcat, the Hibernate job is a standalone app.
The Scheduler calls the app like so:
Quartz>Shell>Hibernate.
When I start the Hibernate jobs, they instantly go into a blocked state - basically DB deadlocks. When I view the connections with MySQL Admin, I can see the connections are sleeping!
I heard thru the grapevine that a non-transactional datasource would solve this problem.
How do I do this? Is it done in tomcat, or do I need to change my hibernate db connection?
Any ideas?
You're deadlocking with another process - quite possibly with one of your earlier tasks that stalled for some reason.
Turning off transactions is a dumb way to fix this - it avoids you finding out what the real problem is, thus curing the symptom but probably not the disease. Find out what's deadlocking and why, then fix that problem properly. -
Non-transactional cursor writes block reads
I'm opening a cursor without using a transaction like so:
CursorConfig conf = new CursorConfig();
Cursor cursor = db.openCursor(null, conf);
I then iterate over the items, sometimes replacing a value. I want to be able to read (and maybe write) any value from another thread while the cursor is still open, but get operations result in the following error:
com.sleepycat.je.DeadlockException: (JE 3.3.74) Lock expired. Locker 32098350 -1_NioProcessor-3_ThreadLocker: waited for lock on database=settings LockAddr:1537969 node=2623139 type=READ grant=WAIT_NEW timeoutMillis=500 startTime=1229438453225 endTime=1229438453730
Owners: [<LockInfo locker="11743647 -1_NioProcessor-1_ThreadLocker" type="WRITE"/>]
Waiters: []
I would have expected that outside a transaction any update made through a cursor would be applied immediately, but instead the record is locked until the cursor is closed. This happens whether or not je.env.isTransactional is set to true.
Setting je.env.isLocking to false gives me the cursor behaviour that I want, but I'm not sure what else is affected by this setting, and the javadoc comment is just vaguely worrying rather than helpful ("This property should be set to false only in special circumstances when it is safe to run without record locking.")
I am not planning on using any transactions in this application.
Cormac,
I was looking through the documentation for a reference page to direct you to, and I realized that probably the clearest explanation in the javadoc is in the com.sleepycat.je.LockMode page here: [http://www.oracle.com/technology/documentation/berkeley-db/je/java/com/sleepycat/je/LockMode.html]. There is a reference to it from the CursorConfig page, but I can see that the link between the two pages is not that clear.
From the LockMode javadoc, there is this paragraph.
"Locking Rules: Together with CursorConfig, TransactionConfig and EnvironmentConfig settings, lock mode parameters determine how records are locked during read operations. Record locking is used to enforce the isolation modes that are configured. Record locking is summarized below for read and write operations. For more information on isolation levels and transactions, see Writing Transactional Applications (http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/index.html). With one exception, a record lock is always acquired when a record is read or written, and a cursor will always hold the lock as long as it is positioned on the record. The exception is when READ_UNCOMMITTED (http://www.oracle.com/technology/documentation/berkeley-db/je/java/com/sleepycat/je/LockMode.html#READ_UNCOMMITTED) is specified, which allows a record to be read without any locking."
Transactional cursors, by default, hold all locks until the cursor is closed. In your case, you are using a non-transactional cursor, and locks are not collected the same way. But a lock for a modified record is still taken by a non-transactional cursor, and is held as long as the cursor is positioned at that record, to provide cursor stability. I am guessing that is what is happening in your case. If you close the cursor or move to a different position, you will release the lock.
In a second posting, you comment on your experiences using the je.env.locking property. That property is really only to be used in cases where the application is very constrained and has high performance requirements; frankly, that was put in for a particular set of power users and we really don't recommend it for general purpose use. (We should doc it better too). One characteristic is that it disables log cleaning, because the lack of locks makes it impossible for us to properly coordinate access to the data from the log cleaning threads.
Regards,
Linda
Edited by: Linda Lee on Dec 16, 2008 2:19 PM (changed " link between the two pages is that clear" -> not that clear) -
NON-transactional session bean access entity bean
We are currently profiling our product using the Borland OptimizeIt tool, and we found some interesting issues. Due to our design, we have many session beans which are non-transactional, and these session beans access entity beans to do reading operations, such as getWeight and getRate. Since these are read-only, there is no need to do the transaction commit work, which really takes time, as can be seen in the profile. I know WebLogic supports read-only entity beans, but it seems that this only benefits the ejbLoad call; my test program shows that WebLogic still creates a local transaction even when I specified transaction-not-supported, and Transaction.commit() is always called in postInvoke(). From the profile, we found that for a single method call, such as getRate(), 80% of the time is spent in postInvoke(). Any suggestion on this? BTW, most of our entity beans use Exclusive lock; that's the reason we use non-transactional session beans, to avoid deadlock problems.
Thanks
Slava,
Thanks for the link; actually I read it before, and the following is what I extracted from the doc:
<weblogic-doc>
Do not set db-is-shared to "false" if you set the entity bean's concurrency
strategy to the "Database" option. If you do, WebLogic Server will ignore the
db-is-shared setting.
</weblogic-doc>
Thanks
"Slava Imeshev" <[email protected]> wrote:
Hi Jinsong,
You may want to read this to get more detailed explanation
on db-is-shared (cache-between-transactions for 7.0):
http://e-docs.bea.com/wls/docs61/ejb/EJB_environment.html#1127563
Let me know if you have any questions.
Regards,
Slava Imeshev
"Jinsong HU" <[email protected]> wrote in message
news:[email protected]...
Thanks.
But the db-is-shared setting is still not clear to me: if I specified the entity lock as database lock, I assumed db-is-shared is useless, because for each new transaction the entity bean will reload data anyway. Correct me if I am wrong.
Jinsong
"Slava Imeshev" <[email protected]> wrote:
Jinsong,
See my answers inline.
"Jinsong Hu" <[email protected]> wrote in message
news:[email protected]...
Hi Slava,
Thanks for your reply. Actually, I agree with you: we need to review our DB schema and separate the business logic to avoid DB locks. I cannot just say "guys, we need to change this and that", since it's a big application, developed since the EJB 1.0 spec; I think they are afraid to make such a big change.
A total rewrite is the worst thing that can happen to an app. The better approach would be identifying the most critical piece and performing surgery on it.
Following are the questions in my mind:
(1) I think there must be many companies using WebLogic Server to develop large enterprise applications; I am just wondering what the main transaction/lock mechanism used is. Which of transactional session / database lock / db-is-shared entity
I can't say for the whole community; in my experience the standard usage pattern is session facades calling entity EJBs with the Required TX attribute, plus plain transacted JDBC calls for bulk reads or inserts.
is the dominant one? It seems that if you specify database lock, db-is-shared should be true, right?
Basically, that's not true. One needs db-is-shared only if there are changes to the database made from outside the app server.
(2) For an RO bean, if I set read-idle-timeout to 0, it should only load once, at first use, right?
I assume read-timeout-seconds was meant. That's right, but if an application constantly reads new RO data, RO beans will be constantly dropped from the cache and new ones will be loaded. You may want to look at the server console to see if there's a lot of passivation for RO beans.
(3) For the clustering part, has anyone used it in a real enterprise application? My concern: since database lock is the only way to choose, what about the effect of ejbLoad on performance? Most transactions are short-lived, and if high-volume transactions are in processing, I am just scared to death about the ejbLoad overhead.
ejbLoad is a part of the bean's lifecycle; how would you be scared of it? If ejbLoads take too much time, it could be a good idea to profile the SQL used. The right index optimization can make a huge difference. Also, you may want to consider using CMP beans to let WebLogic take care of load optimization.
(4) If using optimistic locking, all the ejbStore calls need to do a version check or timestamp check, right? What about this overhead?
As for optimistic concurrency, it performs quite well, as you can use lighter isolation levels.
HTH,
Slava Imeshev
"Jinsong Hu" <[email protected]> wrote in message
news:[email protected]...
We are using Exclusive lock for entity beans because we do not want to load data in each new transaction. If we use Database lock, that means we delegate data access calls to the database; if a database deadlock happens, it's hard to detect, while using Exclusive lock we can detect the deadlock at the container level.
The problem is, using the Exclusive concurrency mode you serialize access to the data represented by the bean. This approach has a negative effect on the ability of the application to process concurrent requests. As a result the app may have performance problems under load.
Actually, at the beginning, we did use database lock and transactional session beans, but the database deadlocks and frequent ejbLoads really killed us, so we decided to move to Exclusive lock and, to avoid deadlock, changed some session beans to non-transactional.
The fact that you had database deadlocking issues tells me that the application logic / database schema may need some review. Normally, to avoid deadlocking, it's good to group database operations mixing updates and inserts into one place, so that the DB locking sequence is not spread out in time. Moving to forced serialized data access just hides design/implementation problems.
Making session beans non-transactional makes the container create short-lived transactions for each call to entity bean methods. It's a costly process and it puts additional load on both the container and the database.
We could use ReadOnly lock for some entity beans, but since WebLogic Server will always create a local transaction for an entity bean, and we found that transaction commit is expensive, I am questioning why we need to create a container-level transaction for a read-only bean.
First, read-only beans still need to load data. Also, you may see RO beans constantly loading data if db-is-shared is set to true. Another reason can be that RO semantics are not applicable to the data presented by the RO bean (for instance, you have a reporting engine that constantly produces "RO" data, while the application consuming that data retrieves only new data and never asks for "old" data). RO beans are good when there is relatively stable data accessed repeatedly for read-only access.
You may want to tell us more about your app, we may be of help.
Regards,
Slava Imeshev
I will post the performance data; let's see how costly transaction.commit is.
"Cameron Purdy" <[email protected]> wrote:
I am worried that you have made some decisions based on an improper understanding of what WebLogic is doing.
First, you say "non-transactional", but from your description you should have those marked as tx REQUIRED to avoid multiple transactions (since non-transactional just means that each database operation becomes its own little transaction).
Second, you say you are using Exclusive lock, which you should only use if you are absolutely sure that you need it (and note that it does not work in a cluster).
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com/coherence.jsp
Tangosol Coherence: Clustered Replicated Cache for Weblogic
"Jinsong Hu" <[email protected]> wrote in message
news:[email protected]...
> -
Hi,
After using SQL*Loader to import information into the table fa_mass_additions, and after the functional team ran a process to post these assets, they ask me to do an update on the table fa_additions to change the value of attribute1, but I get an error:
Error report:
ORA-01779: cannot modify a column which maps to a non key-preserved table
ORA-06512: at line 11
01779. 00000 - "cannot modify a column which maps to a non key-preserved table"
*Cause: An attempt was made to insert or update columns of a join view which
map to a non-key-preserved table.
*Action: Modify the underlying base tables directly.
Please, how can I do this update?
Hi,
The "fa_additions" is a view, not a table.
You should update the base table "fa_additions_b".
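As a sketch of that direct update (the ATTRIBUTE1 column is from the question; the ASSET_ID key and the new value are assumptions for illustration, taken from the usual FA base-table layout rather than from this thread):

```sql
-- Hypothetical example: update the base table directly instead of the view.
UPDATE fa_additions_b
   SET attribute1 = 'NEW_VALUE'   -- value assumed for illustration
 WHERE asset_id  = 12345;         -- key column assumed for illustration
COMMIT;
```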
Regards,
Bashar -
Oracle CDC - Incorrect data in change tables
We are using Asynchronous autolog online CDC and the performance has been pretty good so far.
However, recently we have encountered an issue where there are a couple of invalid records in one of the change tables.
The change table seems to have update transactions for a record that had no updates made to it. In fact, the updated records in the change table indicate a primary key that does not exist in the source table. Has anyone encountered a similar issue? The data that has been brought over in one of the number columns is something like xm<<1123.
gk
Thanks Justin. On digging further into Metalink I found another bug, 6454634, which appears to be similar. Both this and the bug mentioned by you are fixed in 10.2.0.4 patch 14. I will follow up further.
-
How to change table content ...
Hi All,
Please give us a solution for the following:
1. How to change the contents of a standard table?
(e.g. MARA contents, not via MM02.)
2. How to add a field to a standard table?
Awaiting reply.
Yusuf
Hi Yusuf,
Modifying a standard table through a program using SQL statements is highly risky. However, if you wish to do it programmatically and not through MM02, you have the option of using BAPIs.
Go to transaction BAPI and, through the application hierarchy, go to the Materials Management section. There, I am sure, you will find some BAPIs to modify the material master; they are the most stable way to update it programmatically.
Hope this helps.
Regards,
Aditya -
ORA-01779: cannot modify a column which maps to a non key-preserved table
define cusname='GEORGE';
INSERT INTO (select s.prd_id, s.cus_id, s.qty_sold, s.price from sales s, customers c where s.cus_id=c.cus_id)
VALUES (102,(select cus_id from customers where upper(c_name) like '%GEORGE%'),14,(select price from product where prd_id = 102)*14)
I am getting the following error when I am trying to run the above query.
SQL Error: ORA-01779: cannot modify a column which maps to a non key-preserved table
01779. 00000 - "cannot modify a column which maps to a non key-preserved table"
*Cause: An attempt was made to insert or update columns of a join view which map to a non-key-preserved table.
*Action: Modify the underlying base tables directly.
CUSTOMERS TABLE
Name      Null      Type
CUS_ID    NOT NULL  NUMBER
C_NAME              VARCHAR2(50)
C_LIMIT             NUMBER
CITY                VARCHAR2(20)
PRODUCT TABLE
Name      Null      Type
PRD_ID    NOT NULL  NUMBER
PRICE               NUMBER
COST                NUMBER
SALES TABLE
Name      Null      Type
PRD_ID    NOT NULL  NUMBER
CUS_ID    NOT NULL  NUMBER
QTY_SOLD            NUMBER
PRICE               NUMBER
Hi,
As the error message suggested, INSERT directly into the SALES table. You can get the values from a sub-query that joins whatever tables are needed, including SALES itself. MERGE might be simpler and more efficient than INSERT.
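A minimal sketch of that direct INSERT against the tables in the question (the customer filter and the price-times-quantity computation are carried over from the original statement; this assumes exactly one matching customer and product row):

```sql
-- Insert directly into SALES instead of through the join view;
-- the values come from a sub-query joining CUSTOMERS and PRODUCT.
INSERT INTO sales (prd_id, cus_id, qty_sold, price)
SELECT 102, c.cus_id, 14, p.price * 14
  FROM customers c, product p
 WHERE UPPER(c.c_name) LIKE '%GEORGE%'
   AND p.prd_id = 102;
```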
I hope this answers your question.
If not, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all tables involved, and also post the results you want from that data.
If you're asking about a DML statement, such as INSERT, the sample data will be the contents of the table(s) before the DML, and the results will be state of the changed table(s) when everything is finished.
Explain, using specific examples, how you get those results from that data.
Always say which version of Oracle you're using (e.g., 11.2.0.2.0).
See the forum FAQ: https://forums.oracle.com/message/9362002 -
Toplink changes table and columns to uppercase
I found a way to make TopLink not change tables and columns to uppercase, with the setShouldForceFieldNamesToUpperCase=false setting (which should be false by default but for some reason isn't).
However, when TopLink is used with JSF, how can setShouldForceFieldNamesToUpperCase be set to false? Is there a way to do it in persistence.xml, or perhaps there is a better way altogether? (I know I can use @Table and @Column, but that is a lot of redundant code if all the fields in Java exactly match the DB, and I am looking for a cleaner solution.)
My setup is as follows:
toplink v2-b49
mysql 5.0 (with 5.0.5 driver)
servlet/jsp 2.5/2.1
jsf 1.2
tomcat 6.0.13
Thanks
Thanks Doug,
I created that class with:
public void customize(Session session) throws Exception {
    session.getDatasourceLogin().getPlatform().setShouldForceFieldNamesToUpperCase(false);
}
But that doesn't help. Then I checked whether it is being set correctly and saw that it is false even before I set it, so the default works as advertised and the variable is false.
So I guess this is not the problem. Any idea what can cause it?
here is the class
@Entity
@Table(schema="jpaTABLE")
public class User {
@Id
private int ID;
private String Name;
private String Password;
private String Status;
public User() {}
getters...
setters...
here is the code
Query q = em.createQuery("SELECT u FROM User u WHERE u.Name = :name AND u.Password = :password");
here is the error
[TopLink Fine]: 2007.06.07 04:20:24.636--ServerSession(9017297)--Connection(17227669)--Thread(Thread[http-8080-1,5,main])--SELECT ID, NAME, PASSWORD, STATUS FROM jpaTABLE.USER WHERE ((NAME = ?) AND (PASSWORD = ?))
bind => [abc, 123]
[TopLink Warning]: 2007.06.07 04:20:24.682--UnitOfWork(14633980)--Thread(Thread[http-8080-1,5,main])--Local Exception Stack:
Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.0 (Build b49-beta3 (05/31/2007))): oracle.toplink.essentials.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.MySQLSyntaxErrorException: Table 'jpaTABLE.USER' doesn't exist
Error Code: 1146
Call: SELECT ID, NAME, PASSWORD, STATUS FROM jpaTABLE.USER WHERE ((NAME = ?) AND (PASSWORD = ?))
bind => [abc, 123] -
Month-wise GL balances (debit and credit separately) in transaction currency tables are required; if anybody knows, please help me.
Hi Anu,
The transaction currency tables are KEKO, BKPF, and BSEG.
Changing quantity into debit or credit
IF wa_bkpfbseg-shkzg = 'H'.
wa_bkpfbseg-wrbtr = ( -1 * wa_bkpfbseg-wrbtr ).
wa_display-wrbtr = wa_bkpfbseg-wrbtr .
ELSE.
wa_display-wrbtr = wa_bkpfbseg-wrbtr.
ENDIF.
In the BSEG table, if wa_bkpfbseg-shkzg = 'H', the amount is a credit; otherwise it is a debit.
Fetch the quantity and amount in FI from the BSEG table against entries in the BKPF table.
Regards,
Prabhudas -
Oracle cdc change table column - cscn vs rsid
Hi all,
I am not able to understand the difference between the CSCN field and the RSID field. Looking at internet guides, they say:
RSID is the Unique row sequence ID within this transaction. The RSID$ column reflects an operation's capture order within a transaction, but not across transactions. The publisher cannot use the RSID$ column value by itself to order committed operations across transactions; it must be used in conjunction with the CSCN$ column value.
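The quoted rule amounts to ordering by both control columns together; for example (the change-table name is an assumption for illustration, the CSCN$/RSID$/OPERATION$/TIMESTAMP$ control columns are standard CDC columns):

```sql
-- Order committed operations across transactions:
-- CSCN$ orders transactions by commit SCN; RSID$ orders rows within one.
SELECT operation$, cscn$, rsid$, timestamp$
  FROM my_change_table
 ORDER BY cscn$, rsid$;
```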
Can anyone explain to me in simpler terms what the CSCN field stands for? Also, why can't we use the TIMESTAMP$ field to order the operations?
Also, for a row in the change table, are we able to determine which column in the row changed first?
Edited by: user8915380 on 04-May-2010 03:22
Hi,
You can wait for 11gR2 with CDC integration, or build most of the code outside OWB. To use CDC you must do these things (http://www.oracle.com/technology/oramag/oracle/03-nov/o63tech_bi.html):
1. Identify the source tables.
2. Set up a publisher.
3. Create change tables.
4. Set up a subscriber.
5. Subscribe to the source tables, and activate the subscription.
6. Set up the CDC window.
7. Prepare a subscriber view.
8. Access data from the change tables.
9. Drop the subscriber view, and purge the CDC window.
10. Repeat steps 6 through 9 to see new data.
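As a hedged sketch of the subscriber-side steps (4-9), using the DBMS_CDC_SUBSCRIBE package; the subscription, change-set, schema, table, and view names here are all assumptions for illustration, and parameter names may differ slightly between Oracle versions:

```sql
BEGIN
  -- Step 4: set up a subscriber (change set name assumed)
  DBMS_CDC_SUBSCRIBE.CREATE_SUBSCRIPTION(
    change_set_name   => 'MY_CHANGE_SET',
    description       => 'Demo subscription',
    subscription_name => 'MY_SUB');
  -- Step 5: subscribe to a source table and activate the subscription
  DBMS_CDC_SUBSCRIBE.SUBSCRIBE(
    subscription_name => 'MY_SUB',
    source_schema     => 'SCOTT',
    source_table      => 'EMP',
    column_list       => 'EMPNO, SAL',
    subscriber_view   => 'EMP_CHANGES_V');
  DBMS_CDC_SUBSCRIBE.ACTIVATE_SUBSCRIPTION(subscription_name => 'MY_SUB');
  -- Step 6: set up the CDC window
  DBMS_CDC_SUBSCRIBE.EXTEND_WINDOW(subscription_name => 'MY_SUB');
END;
/
-- Steps 7-8: access the change data through the subscriber view
SELECT * FROM emp_changes_v;
-- Step 9: purge the window when the data has been consumed
EXEC DBMS_CDC_SUBSCRIBE.PURGE_WINDOW(subscription_name => 'MY_SUB');
```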
You can do only a few of these steps inside OWB; most of them must be done outside.
Regards,
Detlef -
NEW GL's Changed Tables mapping
Hi,
I need to know more about the New GL's changed-tables mapping.
Warm Regards,
Sachin Darekar
Dear Sridher,
Please check SAP Note 812919 - mySAP ERP New G/L Migration.
Otherwise you can do it another way: with 01.04.xxxx as the go-live date, it is better to take the values as of 31.03.xxxx. In the meantime you can activate document splitting and upload the values as of 01.04.xxxx, but in this case don't do the carry-forward run for that fiscal year.
Moreover, when you upgrade to the ECC 6 version, this G/L function will be activated automatically, and your transaction data will be stored in the GLT0 and FAGLFLEXT tables; after that, if you want to deactivate the classic G/L, you can.
Regards
radha