Database commit on each instance created
Hi
I am new to BPEL, and I have a requirement as follows:
1. Read a csv/text file through file adapter
2. For each row from the file, using the DB adapter, insert data into a tableA.
3. Select the data inserted in step2, and depending on the value, do an update/insert on tableB.
BPEL creates a single instance in the console for steps 1 through 3, but I need a commit for each row processed.
It seems that BPEL commits once, after all the rows have been read from the file, rather than after each line of the CSV.
Could someone please help me with this? I would greatly appreciate it.
Thanks in Advance,
Shashi.
Found the answer in MetaLink document 460293.1. We implemented it and it works like a champ. Since the servers are all firewalled inside and only two DBAs have access to the package and trigger, we have no security exposure from the wallet password.
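For anyone hitting the same thing, the difference between one commit for the whole file and one commit per row can be sketched outside BPEL. A minimal illustration in Python with SQLite (table and row contents are invented; the actual BPEL-side fix is in the note above):

```python
import sqlite3

csv_lines = ["a,1", "b,2", "c,3"]   # stand-in for lines read from the CSV

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_a (name TEXT, val INTEGER)")

# One commit per row: each line is its own transaction, so rows already
# processed survive even if a later row fails -- the behavior asked for above.
for line in csv_lines:
    name, val = line.split(",")
    conn.execute("INSERT INTO table_a VALUES (?, ?)", (name, int(val)))
    conn.commit()

row_count = conn.execute("SELECT COUNT(*) FROM table_a").fetchone()[0]
print(row_count)  # 3
```

Committing once after the loop instead would make all three rows stand or fall together, which is what the engine was doing by default.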
Similar Messages
-
Hello All,
I have two include in my user exit of the sales order.
In one include I do some validations on the order quantity of the sales order, based on the order quantity in one of my Z tables. If every validation passes, I export this order quantity to a memory ID.
In the second include I am importing this Order Quantity and updating this Quantity in my database in my same Z table.
If I process one order at a time, everything works fine. But when I try to create two sales orders at the same time, only the second order is validated and has its order quantity updated; for the first order no validation is performed and the order quantity is not updated in the Z table.
I really do not know whether this issue is related to the database commit or something else.
Kindly help.
Regards
Sachin Yadav
Sachin,
I think I am beginning to understand your requirement a little.
In the validation I also modify the records in the Z table; that is, for each order created I update the quantity in my Z table.
So you are just updating the quantity fields in the Z table, and this doesn't have any effect on the actual order quantity whatsoever. Is that right, or is the order quantity changed or modified based on the entry in the Z table?
So for the next order the validation should be done depending upon the new order quantity not the old order quantity.
Now this is a little tricky. By "next order", do you mean the next order with respect to the number range, or the next order in general?
For example: orders 1234, 1235, 1236 in sequence; or, say, ten people around the world creating orders simultaneously, in whatever sequence their orders happen to be saved.
I mean that for the first order the validation should be done against the old quantity, and for the second order against the new order quantity.
When I save two orders at the same time, validation on both orders is done against the old quantity, because the new quantity from the first order has not yet been updated in the database when the second order tries to fetch the data from the Z table.
This is what will most probably happen even if you use the lock concept. Let me try to explain: once you save the order, in the user exit you lock the object with ENQUEUE and DEQUEUE, something like the following.
Order 1:
USEREXIT_SAVE_DOCUMENT.
  enqueue the lock object for the Z table
  select the entries from the Z table
  validate based on the entries read
  update/modify the Z table with the updated values
  dequeue the lock object
end of user exit.
The above code does not mean that your changes are available in the Z table immediately; the table is actually updated by the update modules, along with the other standard tables. In the meantime, if another order hits USEREXIT_SAVE_DOCUMENT before the update module of the previous order has completed, it will still be looking at the old values.
To overcome this situation you might be tempted to add a COMMIT WORK AND WAIT in the user exit. But this compromises the data integrity of the order and IS NOT SUGGESTED.
So you see, by locking the table you still haven't achieved your results.
Now, you need to confirm whether the entries in the Z table are used only for validation and for updating data in the Z table, or whether they have any effect on the order quantity or on other order-specific values in general.
If it is only to track the quantity in the Z table, with no effect on the order data, then I suggest you create a Z program that picks up all the orders created/changed in a given period and updates the Z table quantities accordingly; you can schedule it as a background job outside business hours.
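To make the timing problem concrete, here is a minimal sketch outside ABAP (plain Python; `committed_qty` and the deferred `pending` list are invented stand-ins for the Z table and the update task):

```python
# committed_qty stands in for the Z table; "pending" holds changes that the
# update task will apply only after the save (the deferred database commit).
committed_qty = 100
pending = []

def save_order(order_qty):
    """User exit on save: validate against what is visible in the DB *now*."""
    visible = committed_qty               # still the old value for both orders
    pending.append(visible - order_qty)   # new qty, written later by update task
    return visible

seen_by_order_1 = save_order(10)   # validates against 100
seen_by_order_2 = save_order(20)   # also validates against 100, not against 90!

committed_qty = pending[-1]        # the update task finally lands the changes
print(seen_by_order_1, seen_by_order_2, committed_qty)  # 100 100 80
```

The ENQUEUE/DEQUEUE around the read does not change this picture, because the lock is released before the deferred write actually lands.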
Let me know if I was completely off track in understanding your requirement.
Regards,
Chen -
Database commit event based dashboard page refresh
Hi All,
I have a requirement where a dashboard report page should refresh automatically whenever the fact table in the real-time data warehouse is updated with a new record. The dashboard report should then show the data from the new record in addition to everything it was already showing.
In other words, I need a database-commit-based event to trigger the report refresh. Is there a way to do this?
Also, how do I ensure that the refresh picks up only the new records from the database and serves the existing ones from the cache? I do not want the query to run against the database again for all records, since the refreshed report needs to be displayed as soon as possible.
Thanks
Gaurav
For this I am creating an iBot with a conditional request containing a report with the query:
select * from S_NQ_EPT where update_ts between sysdate - 1 and sysdate
(Note the order of the bounds: BETWEEN expects the lower bound first, so `between sysdate and sysdate - 1` would match nothing.) This will fetch any record entered into the S_NQ_EPT table in the last 24 hours.
For the iBot schedule I set Recurrence: Daily, Repeat Every 60 minutes until 23:59:59, and End Date: None.
In Delivery Content I select a report for which I want to rebuild the cache.
This is not a good approach, as the report will keep executing every hour. If you are running a daily ETL load, it should execute only once, after the ETL load completes. There are ways to automate this process; see the link below.
iBot scheduling after ETL load
You need to do some scripting to get it work.
Now I want to rebuild the caches of all the reports. Do I need to build a separate iBot for each report along the above lines, or is there a better way?
Generally, for cache seeding through iBots, you should create a detail-level report and schedule that. For example, if you have four reports coming out of one subject area, each with three columns, create a single report containing all the columns the four reports use and schedule it through an iBot. Obviously you have to decide whether to schedule all the reports or just the one detail report; if the detail report takes too long to execute, schedule the individual reports instead.
Regards,
Sandeep -
Restrict core for each instance
Can we restrict the cores for each instance while creating a database?
Note: if we have 8 cores, and I want to assign 4 of them to a single instance, is that possible?
Can anyone help on this?
There are actually two ways, depending on the OS, database version, and edition you are using:
With Linux and Solaris, Oracle most recently announced (11.2.0.4) support for cgroups (control groups on Linux with kernel >= 2.6.32) and resource pools (Solaris >= 11 SRU 4). From my perspective it is the best resource throttling you can have for an Oracle database, and it works with all editions (Enterprise, Standard, Standard One). Further information can be found on my blog (www.carajandb.com/blogs), in both English and German.
The other mechanism is instance caging (available since 11.2.0.1), which uses the cpu_count parameter (e.g. ALTER SYSTEM SET cpu_count = 4) to limit the number of CPUs for a single instance. But this is available for Enterprise Edition only, in combination with Resource Manager. -
I am working with Report Builder 3.0, and I am using a matrix to produce grouped data on separate worksheets in Excel.
The select is:
SELECT ID, Measurement, Value, [Date] FROM Measurements_Report. (please ignore the underscores they are just for formatting)
The contents of the Measurements_Report table:
ID__Measurement__Value__[Date]
1___Hot_________33_____10/1/2014
2___Hot_________44_____10/2/2014
3___Cold_________55_____10/2/2014
The matrix contains a single row group based on the field "measurement". The Measurement group has the page break option of "Between each instance of a group" selected.
There is a column group based on the field "Date".
When this matrix is exported to Excel, the first worksheet (Hot) has three columns, as shown below:
ID__10/1/2014____10/2/2014___10/2/2014
1___33
2_______________44
Notice the last column doesn't have a value.
On the second worksheet (Cold) there are also three columns as shown below:
ID__10/1/2014___10/2/2014___10/2/2014
3__________________________55
This time notice there is only one row and only a value in the last column.
I only want the columns with data for that worksheet to show up. How can I remove these empty/duplicate columns? Hopefully there is a simple fix. Thanks ahead of time.
With the following contents of the Measurements_Report table:
ID__Measurement__Value__[Date]
1___Hot_________33______10/1/2014
2___Hot_________43______10/1/2014
2___Hot_________44______10/2/2014
3___Cold________55______10/2/2014
Returns on the first tab (Hot):
ID__10/1/2014____10/1/2014____10/2/2014
1___33
2_________________43
2______________________________44
In the Excel worksheet there is a separate column for each date with a value. Thanks again!
Why is the same date repeated across multiple columns? Do you have the time part also returned from the database?
Please Mark This As Answer if it solved your issue
Please Mark This As Helpful if it helps to solve your issue
Visakh
My MSDN Page
My Personal Blog
My Facebook Page -
Program completion before database commit
Hello friends,
I am doing a mass update using a batch input program. The issue is that my program completes before the actual database commit. My requirement is to create info records; after calling the standard batch input program, I check the standard tables to see whether the info record was created. If yes, I update the number; otherwise I log an error message.
The problem is that for some 200-odd records there is no issue, but if the number of records exceeds, say, 1000, the background process of the batch input program is still running when my check for the created info records is executed. At that point the database is still being updated and no info record is found (the info record actually appears in the standard table after a few minutes).
I am pretty sure this is a database commit issue. I could use a WAIT statement, but I cannot always explicitly say WAIT UP TO 100 SECONDS, because the user may choose 1000 records or a single record at a time. How do I ensure that the database is updated before I check for the info record in the table?
Please help.
Regards,
Prem
Hi,
Try the following commands before looking for the info records in the database:
COMMIT WORK AND WAIT.
WAIT UP TO 2 SECONDS.
If this doesn't work, then I feel you should split the program:
1. First create all the info records and capture the success/error messages; once that is done,
2. call the second part, which validates the info records in the database.
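If you do keep it in one program, a bounded poll-until-found loop is more robust than a fixed WAIT, because it adapts to the data volume. A sketch of the pattern (Python for illustration; `check_exists` stands in for the actual SELECT against the info-record table):

```python
import time

def wait_until(check_exists, timeout_s=300.0, interval_s=2.0):
    """Poll check_exists() until it returns True or the timeout expires.

    Returns True if the condition was met, False on timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check_exists():
            return True
        time.sleep(interval_s)
    return check_exists()  # one final check at the deadline

# Illustrative stand-in for the table lookup: the record "appears" only on
# the third poll, as if the update task finished in between.
calls = {"n": 0}
def record_created():
    calls["n"] += 1
    return calls["n"] >= 3

ok = wait_until(record_created, timeout_s=10.0, interval_s=0.01)
print(ok)  # True -- found after a few short polls instead of one long WAIT
```

The same loop handles 1 record or 1000: it returns as soon as the data is visible, and only the timeout bounds the worst case.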
Thanks,
Best regards,
Prashant -
MIGO BAdI or User-Exit - After Database Commit
Hello,
I'm looking for a BAdI or user exit that runs after the database commit of a MIGO posting. I want to create an FI document after the MIGO posting.
I already tried the MB_MIGO_BADI BAdI, but the system gives a short dump. So I have to find an exit or BAdI that runs after the database commit.
Thanks for helping in advance.
Regards,
Burak
Hello,
This issue is solved.
I used MB_DOCUMENT_BADI / MB_DOCUMENT_UPDATE and it solved my problem.
FYI.
Regards,
Burak -
How to calculate how many CPUs/cores are needed for an amalgamation of databases on a single instance
Hi all, we have been given a project to produce a high-level hardware spec for a new Oracle Linux server that would be an amalgamation of our current 5 Windows servers, all running Oracle 11.2.0.3 on Windows 2008 R2. All our Windows boxes are Intel machines with 12 cores each. My question is: how do I measure how much CPU each of our current Oracle databases is using, to determine the minimum CPUs/cores required for the new Linux box? Hope I make sense. Thanks!
            Snap Id   Snap Time            Sessions   Cursors/Session
Begin Snap: 7967      16-Mar-13 00:00:19   120        3.4
End Snap:   7983      16-Mar-13 16:00:08   119        3.6
Elapsed:    959.81 (mins)
DB Time:    11,565.82 (mins)

Load Profile   Per Second   Per Transaction   Per Exec   Per Call
DB Time(s):    12.1         0.6               0.05       0.09
DB CPU(s):     2.5          0.1               0.01       0.02

Top 5 Timed Foreground Events
Event     Waits   Time(s)   Avg wait (ms)   % DB time   Wait Class
DB CPU            141,781                   20.43

Host CPU (CPUs: 40  Cores: 20  Sockets: 2)
Load Average Begin   Load Average End   %User   %System   %WIO   %Idle
10.05                16.90              6.4     2.3       0.0    90.6

Instance CPU
%Total CPU   %Busy CPU   %DB time waiting for CPU (Resource Manager)
6.9          73.7        0.0

Operating System Statistics
*TIME statistic values are diffed. All others display actual values. End Value is displayed if different.
Ordered by statistic type (CPU Use, Virtual Memory, Hardware Config), Name.
Statistic                  Value                 End Value
BUSY_TIME                  21,616,011
IDLE_TIME                  208,748,575
IOWAIT_TIME                20,115
NICE_TIME                  7
SYS_TIME                   5,241,553
USER_TIME                  14,766,063
LOAD                       10                    17
RSRC_MGR_CPU_WAIT_TIME     3,348
VM_IN_BYTES                2,126,163,628,032
VM_OUT_BYTES               -3,086,139,181,056
PHYSICAL_MEMORY_BYTES      202,835,083,264
NUM_CPUS                   40
NUM_CPU_CORES              20
NUM_CPU_SOCKETS            2
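As a rough sketch of how to turn the figures above into a CPU estimate (Python; the numbers are copied from the report excerpt, and the method is the usual busy-time ratio, not an official sizing formula):

```python
# CPU-sizing arithmetic from the AWR excerpt above (times in centiseconds).
busy_time = 21_616_011
idle_time = 208_748_575
num_cpus = 40
db_cpu_per_sec = 2.5   # "DB CPU(s)" per second from the Load Profile

host_busy_frac = busy_time / (busy_time + idle_time)
avg_cpus_busy_host = host_busy_frac * num_cpus

print(f"host busy: {host_busy_frac:.1%}")                  # ~9.4%, matches %Idle 90.6
print(f"avg CPUs busy on host: {avg_cpus_busy_host:.2f}")  # ~3.75 of 40
print(f"avg CPUs used by this instance: {db_cpu_per_sec}") # ~2.5 (DB CPU)
```

Doing the same for each database over a representative peak window, then summing the per-instance CPU figures (with headroom), gives a first-cut core count for the consolidated box.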
I hope the above excerpts from the AWR report show how much CPU each database is consuming; run an AWR report for each database and then compare them. -
What is Database Commit and Database Rollback.
What is Database Commit and Database Rollback.
Hi,
Please have a look below; I hope it is a suitable and simple answer to your question.
Please reward if useful.
Thanks.
This is used at the database level.
COMMIT is nothing but saving the current changes.
If you roll back before a commit, whatever was pending for saving is rolled back and the data is not stored.
For example: you are filling in a registration form, and after filling 20 fields you decide at the 21st field not to register; the pending changes are then undone with the ROLLBACK command.
In detail--->
ROLLBACK->
In a ROLLBACK, all the changes made by a transaction or a subtransaction on the database instance are reversed.
· Changes closed with a COMMIT can no longer be reversed with a ROLLBACK.
· As a result of a ROLLBACK, a new transaction is implicitly opened.
In normal database operation, the database system performs the required ROLLBACK actions independently. However, ROLLBACK can also be explicitly requested using appropriate SQL statements.
In a restart, the system checks which transactions were canceled or closed with a ROLLBACK. The actions of these transactions are undone.
COMMIT->
In a COMMIT, all the changes made by a transaction or a subtransaction on the database instance are recorded.
· Changes closed with a COMMIT can no longer be reversed with a ROLLBACK.
· As a result of a COMMIT, a new transaction is implicitly opened.
In normal database operation, the database system performs the required COMMIT actions independently. However, COMMIT can also be explicitly requested using appropriate SQL statements.
In a restart, the system checks which transactions were closed with a COMMIT. These actions are redone. Transactions not yet closed with a COMMIT are undone.
From the point of view of database programming, a database LUW is an inseparable sequence of database operations that ends with a database commit. The database LUW is either fully executed by the database system or not at all. Once a database LUW has been successfully executed, the database will be in a consistent state. If an error occurs within a database LUW, all of the database changes since the beginning of the database LUW are reversed. This leaves the database in the state it was in before the transaction started.
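The commit/rollback semantics described above can be demonstrated with any transactional database. A minimal sketch in Python with SQLite (illustrative only; the thread itself is about the ABAP/database layer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

conn.execute("INSERT INTO t VALUES (1, 'kept')")
conn.commit()        # ends the LUW; this change is now permanent

conn.execute("INSERT INTO t VALUES (2, 'discarded')")
conn.rollback()      # undoes everything since the last commit

rows = conn.execute("SELECT id, val FROM t").fetchall()
print(rows)  # [(1, 'kept')] -- the committed row survived, the other did not
```

Note that the rollback could not touch the first row: as stated above, changes closed with a COMMIT can no longer be reversed with a ROLLBACK.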
ABAP provides the statements
COMMIT WORK.
and
ROLLBACK WORK.
for confirming or undoing database updates. COMMIT WORK always concludes a database LUW and starts a new one; ROLLBACK WORK always undoes all changes back to the start of the database LUW. -
What does "No database index" mean when creating secondary indexes?
HI,
I'm creating secondary indexes in the table maintenance screen (SE11).
There are three options under "Non-unique Index":
1.Index on all database systems
2.For selected database systems
3.No database index
My question is:
What does "No database index" mean, and when is it used?
Can anybody please tell me the difference between these three options?
Here is what i found in the help:
No database index: The index is not created in the database. If you
choose this option for an index that already exists in the database,
it is deleted when you activate this option.
Hi,
It is clear from the help documentation,
Here see what the help document says:
Create the index in the database (selection)
Whether an index improves or worsens performance often depends on the database system. You can therefore set whether an index defined in the ABAP Dictionary should be created in the database.
This makes it easier to install a platform-specific customer system.
You can set this option as follows:
1. Index in all database systems: The index is always created in the database.
2. For selected database systems: The index is created depending on the database system used. With this option, you must specify the databases in which the index is to be created, either inclusively (a list of systems on which it should be created) or exclusively (a list of systems on which it should not be created). In either case, you can list up to four different database systems.
3. No database index: The index is not created in the database. If you set this option for an index that already exists in the database, the index is deleted when you activate the table in the ABAP Dictionary.
Note: Unique indexes have an extra function and must therefore always be created in the database: the database system prevents duplicate entries in the index fields. Since programs may rely on this database function, you cannot delete unique indexes from the database.
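The extra function of a unique index mentioned in the note, the database itself rejecting duplicate keys, can be demonstrated with any relational database. A minimal sketch in Python with SQLite (table and column names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mat (matnr TEXT, werks TEXT)")
conn.execute("CREATE UNIQUE INDEX ux_mat ON mat (matnr, werks)")

conn.execute("INSERT INTO mat VALUES ('M-01', '1000')")
duplicate_rejected = False
try:
    conn.execute("INSERT INTO mat VALUES ('M-01', '1000')")  # same key again
except sqlite3.IntegrityError:
    duplicate_rejected = True   # the database itself enforces uniqueness
print(duplicate_rejected)  # True
```

A non-unique index, by contrast, only affects access speed, which is why it can safely be omitted on some database systems.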
Hope it helps you,
Regards,
Abhijit G. Borkar -
Title says it all!
How do I select all instances of a clip in the timeline at once without manually shift+clicking each instance?
The reason I want to do this is that I have a music video with MANY quick cuts, so clips are cut up and scattered all over the place. I am colour-correcting now, and I want to be able to select one clip everywhere it exists in the timeline so I can drag the effect onto all of them at once, without having to find each instance of the clip by hand. This "batch select" would ensure I don't miss any instances of the clip and would also save a ton of time spent going through the timeline and shift-clicking every time the clip turns up.
I hope PP is smart enough to do this.
Thanks in advance!
-- Tim
Pick one instance of the clip, maybe even the first bit, and use the new Master Clip feature. Here is first a written explanation and then a video tutorial on doing this; it's a great feature.
Adobe Premiere Pro Help | Master Clip Effects
How to apply effects to all instances of a clip | Adobe Premiere Pro CC tutorials
Neil -
BPEL Console doesn't show any instances created
Hi,
We are using the file protocol to read files from a trading partner into B2B. The file is successfully processed by B2B; I can see that in the reports.
Then I defined a BPEL process to dequeue the messages from B2B using the AQ adapter, transform them, and place the files on the local file system.
The BPEL process is a simple flow with one receive, one transform, and one invoke activity.
I configured the AQ adapter using the WSIF browser, and I can see the deployed document definition in the browser.
I have the TP agreement deployed and the BPEL process deployed, but I don't see the 850 file processed, and the BPEL Console doesn't show any instances created. How would I know where it went wrong?
Please help.
Try using java oracle.tip.adapter.b2b.data.IPDequeue to dequeue the message and see if you can dequeue it.
Then you'll be able to pinpoint whether the issue is with your BPEL process or with the B2B queues.
Kalyan -
Executing a message mapping for each instance of a sub-message
Hi,
I have a message struct like the following.
MT_TEST (1..1)
|----
>IDOC (1..Unbounded)
There is a field in each instance of the IDoc; depending on its value, I need to perform receiver determination.
The target will be different for each IDOC instance, depending on the value of the field.
How do I write an XPath expression to achieve this?
Also, is there any other method of achieving the same goal?
Cheers,
Earlence
Hmm, so the IDoc is on the source side. OK, so multiple IDocs are coming in one bundle, but you cannot divide the payload based on conditions in receiver determination.
In your mapping you can map the source data to different targets, but that is a different thing.
Note: you can either send the whole payload to one receiver or nothing; you cannot send parts of the payload to two different receivers.
So it does not seem to be possible.
Regards,
Sarvesh -
How to reference the class-instance, created in parent class.
Hi, I have the following scenario. During compilation I am getting the error message "cannot resolve symbol - symbol : variable myZ".
CODE :
under package A.1;
ClassZ
Servlet1
ClassZ myZ = new ClassZ;
under package A.2;
Servlet2 extends Servlet1
myZ.printHi();
How do I reference the class instance created in the parent class?
Some corrections: declare myZ as a protected field of Servlet1 (not a local variable inside init(), or it will not be visible anywhere else) and initialize it in init(); the field is then inherited by Servlet2:
under package A.1;
ClassZ
Servlet1
  protected ClassZ myZ;
  init()
    myZ = new ClassZ();
under package A.2;
Servlet2 extends Servlet1
  myZ.printHi(); // compiles now: the protected field is inherited -
File deleted but no instance created
Hi,
I have created a process that uses a file adapter to read files and an FTP adapter to put the file. The file adapter is configured to delete the files after reading.
The process does delete the files after reading; however, no instance is created for the process and no file is put to the FTP location.
This phenomenon is happening only when the FTP information in the oc4j-ra.xml is:
<connector-factory location="eis/FtpAdapter" connector-name="FTP Adapter">
<config-property name="host" value="xxx.xxx.xxx.xxx"/>
<config-property name="port" value="21"/>
<config-property name="username" value="xxxxxxx"/>
<config-property name="password" value="xxxxxxx"/>
<config-property name="serverLineSeparator" value="\n"/>
<config-property name="serverLocaleLanguage" value=""/>
<config-property name="serverLocaleCountry" value=""/>
<config-property name="serverLocaleVariant" value=""/>
<config-property name="serverEncoding" value=""/>
<config-property name="useFtps" value="true"/>
<config-property name="walletLocation" value=""/>
<config-property name="walletPassword" value=""/>
<config-property name="channelMask" value="both"/>
<config-property name="securePort" value="990"/>
</connector-factory>
The above configuration is wrong. However, if the files are being deleted from the source location, then we should at least get an instance created in an errored state.
Message was edited by:
user486065
Without seeing the logs I can't help you. But I bet there was an exception that rolled back the whole transaction, including the audit trail. That is why you don't see the instance in the BPEL Console (and because the file and FTP adapters are non-transactional, you ended up in an inconsistent state).
This can be caused by the fact that your BPEL process is synchronous and you probably set inMemoryOptimization to true. That setting is valid only for synchronous (transient) processes, and it basically improves performance because the process is not dehydrated (depending on the completionPersistPolicy parameter). If inMemoryOptimization is false, then this is a good candidate for further investigation.
completionPersistPolicy says in which cases the audit trail should be saved,
and completionPersistLevel says how much data should be saved.
Have a look at the BPEL admin guide for further explanation, because this was a very, very lightweight one: http://www.oracle.com/technology/products/ias/bpel/documents/bpel_admin_10.1.3.1.0.pdf
But don't forget to check the logs (you may want to check the log4j logging level for your BPEL engine).
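For reference, a sketch of where those properties live in a 10.1.3 bpel.xml deployment descriptor. The process name and property values here are placeholders, and the exact placement should be verified against the admin guide:

```xml
<BPELSuitcase>
  <BPELProcess id="ReadAndFtp" src="ReadAndFtp.bpel">
    <configurations>
      <!-- keep dehydration and the audit trail on while debugging
           missing instances; re-tune for performance afterwards -->
      <property name="inMemoryOptimization">false</property>
      <property name="completionPersistPolicy">all</property>
      <property name="completionPersistLevel">all</property>
    </configurations>
  </BPELProcess>
</BPELSuitcase>
```

With inMemoryOptimization off, a faulted instance should at least appear in the console, which makes the root cause much easier to find.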