Audit Trace
Hi guys,
I want to enable the audit trail on a RAC database. Can anybody point me to a good procedure or document?
Hi Jic,
JIC wrote:
Hi guys...
I want to enable the audit trail on a RAC database. Can anybody point me to a good procedure or document?
I don't have a document that explains step by step how to configure database auditing, but the documentation has all the information you need.
Verifying Security Access with Auditing
http://download.oracle.com/docs/cd/E11882_01/network.112/e16543/auditing.htm#BCGHGJJH
You can also use the relevant chapter of the Oracle® Database Security Guide 11g Release 2 (11.2):
http://download.oracle.com/docs/cd/E11882_01/network.112/e16543/whatsnew.htm
P.S.: Use the documentation for the product version you are running (i.e., for Oracle Database 10.2, use the Oracle Database 10.2 documentation).
Regards,
Levi Pereira
Similar Messages
-
Hi!
I would like to import audit traces from all systems into an SAP BI system in order to build an audit history and to develop queries for auditors within SAP BI.
Has anyone developed something like this, or do you know whether SAP intends to deliver this functionality?
Hi Kumar,
Different clients implemented BI 7.0 long ago, possibly as early as 2006 in India (I'm not sure).
Now that BW 7.3 is also available, you can see many implementations happening on 7.3 as well.
Regards,
Raghu -
Hi Experts,
This is the audit script; I need to extend it to capture the RPC event as well.
Can you please help?
USE [BDADissemination]
GO
/****** Object: StoredProcedure [dbo].[prcInsertBD81] Script Date: 02/07/2014 07:38:09 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROC [dbo].[prcInsertBD81]
    @ProcessDate varchar(8),
    @BD81_acc_cde varchar(7),
    @BD81_del_id varchar(7),
    @BD81_del_seq varchar(3),
    @BD81_acc_typ_cde varchar(2),
    @BD81_brn_cde varchar(2),
    @BD81_partner_cde varchar(2),
    @BD81_age_dte varchar(8),
    @BD81_instr_typ varchar(1),
    @BD81_instr_alpha varchar(6),
    @BD81_instr_version varchar(3),
    @BD81_ps_ind varchar(1),
    @BD81_tran_amt varchar(16),
    @BD81_tran_qty varchar(12),
    @BD81_con_chg_ind varchar(1),
    @BD81_con_nte_ind varchar(1),
    @BD81_consid varchar(16),
    @BD81_prce varchar(9),
    @BD81_del_tran_cde varchar(2),
    @BD81_trade_cap varchar(1),
    @BD81_trade_typ varchar(2),
    @BD81_rvsd_del_id varchar(7),
    @BD81_DealerCode varchar(5),
    @BD81_OrderNumber varchar(12),
    @BD81_UnitTrustQty varchar(16),
    @BD81_RandIndicator varchar(1)
AS
INSERT INTO Deals (
    procs_dte, acc_cde, del_id, del_seq, acc_typ_cde, brn_cde, partner_cde,
    age_dte, instr_typ, instr_alpha, instr_version, ps_ind, tran_amt, tran_qty,
    con_chg_ind, con_nte_ind, consid, prce, del_tran_cde, trade_cap, trade_typ,
    rvsd_del_id, DealerCode, OrderNumber, UnitTrustQuantity, RandIndicator)
SELECT
    @ProcessDate,
    @BD81_acc_cde,
    @BD81_del_id,
    @BD81_del_seq,
    @BD81_acc_typ_cde,
    @BD81_brn_cde,
    @BD81_partner_cde,
    @BD81_age_dte,
    @BD81_instr_typ,
    @BD81_instr_alpha,
    @BD81_instr_version,
    @BD81_ps_ind,
    cast(@BD81_tran_amt as DECIMAL(18, 2)) / 100,
    @BD81_tran_qty,
    @BD81_con_chg_ind,
    @BD81_con_nte_ind,
    cast(@BD81_consid as DECIMAL(18, 2)) / 100,
    CASE @BD81_instr_typ
        WHEN 'E' THEN cast(@BD81_prce as DECIMAL(18, 2)) / 10000
        ELSE cast(@BD81_prce as DECIMAL(18, 2)) / 100
    END,
    @BD81_del_tran_cde,
    @BD81_trade_cap,
    @BD81_trade_typ,
    @BD81_rvsd_del_id,
    @BD81_DealerCode,
    @BD81_OrderNumber,
    cast(@BD81_UnitTrustQty as DECIMAL(18, 2)) / 100000,
    @BD81_RandIndicator
GO
Shashikala wrote:
This is the audit script; I need to extend it to capture the RPC event as well.
I'm not sure I understand your question. Do you want to capture the RPC:Completed event when this stored procedure is called, for auditing purposes? In that case, you can capture the event to a file with a filtered Extended Events session or SQL Trace, depending on the version of SQL Server you are using.
To create an unattended SQL Trace, create the desired trace in Profiler with the filter (proc name) and columns, then export the trace definition script and modify it to set the desired trace file path and properties. You can run it continuously with a SQL Agent job that (re)creates the trace every time SQL Agent starts.
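For reference, a filtered Extended Events session of the kind suggested above might look like this on SQL Server 2008 or later. This is a sketch only: the session name and target file path are assumptions, and the object_name filter is matched to the procedure posted in this thread.

```sql
-- Sketch: capture rpc_completed events for calls to [dbo].[prcInsertBD81].
-- Session name and file path are illustrative; adjust for your server.
CREATE EVENT SESSION [Audit_prcInsertBD81] ON SERVER
ADD EVENT sqlserver.rpc_completed (
    ACTION (sqlserver.client_app_name, sqlserver.server_principal_name)
    WHERE ([object_name] = N'prcInsertBD81')   -- filter to this proc only
)
ADD TARGET package0.event_file (SET filename = N'C:\Traces\Audit_prcInsertBD81.xel');
GO

ALTER EVENT SESSION [Audit_prcInsertBD81] ON SERVER STATE = START;
GO
```

The captured .xel file can later be read back with sys.fn_xe_file_target_read_file or opened in Management Studio.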
Dan Guzman, SQL Server MVP, http://www.dbdelta.com -
Hi,
I have configured audit in sys.aud$.
When I select timestamp# from sys.aud$ it gives me null values.
Please help
Rgds,
Test auditing by simply running
AUDIT SESSION;
and then logging out and logging in again.
Note : SYSDBA actions do NOT get recorded in SYS.AUD$. So if you
logged in AS SYSDBA and did a DROP USER (or a connect, if AUDIT SESSION
is enabled), Oracle would not capture anything in SYS.AUD$ for that.
However, AUDIT_SYS_OPERATIONS, which you've set, causes an audit trace file to be created (in 9i it would be under $ORACLE_HOME/rdbms/audit; I'm not sure of the location in 10gR1).
So, for SYSDBA actions look for the file if AUDIT_SYS_OPERATIONS is enabled.
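Putting the advice above together, a sketch in SQL*Plus (assuming database auditing is not yet enabled; AUDIT_TRAIL is a static parameter, so the instance must be restarted after setting it):

```sql
-- Sketch: enable the database audit trail and session auditing (10g/11g).
ALTER SYSTEM SET audit_trail = DB SCOPE = SPFILE;
ALTER SYSTEM SET audit_sys_operations = TRUE SCOPE = SPFILE;
-- restart the instance(s); on RAC the shared SPFILE applies cluster-wide

AUDIT SESSION;   -- record logon/logoff for all non-SYSDBA connections

-- after logging out and back in with a normal account:
SELECT username, timestamp, action_name
  FROM dba_audit_session   -- formatted view over SYS.AUD$
 ORDER BY timestamp DESC;
```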
For actions by other accounts, query SYS.AUD$ -
Unity Connection 8.6.2 report error and issues
Hello,
I found some other hits on this error and tried the workarounds (restarting tomcat, and report data harvester) but am still getting "Unable to find any report data based on parameter(s)" for a certain timeframe on multiple reports when I know activity occurred in that timeframe.
User Phone login report, Outcall Billing Detail Report, others.
CUC version is 8.6.2.20000-2
The problem is that during this timeframe someone dialed into the system and changed the transfer extensions to various long-distance numbers throughout an entire day, and the reports aren't showing these outcalls. Call Manager CDR data shows the calls being placed from Unity VM ports, and the Unity Conversation Manager traces I grabbed for the timeframe show the calls to those numbers too. I'm just trying to match this up with a report and figure out whether there is some other log or trace file I can check to determine for sure that the caller logged in to the voicemail box and changed the transfer extension on it (guessed the voicemail password). The failed login report doesn't even show the failed login attempt that appears in the trace output below. The transfer destination address differed between the transfers on this same mailbox. I have locked down the system since, but want to get a better grip on this with the reports and traces.
I looked over the CUC 8.6.2 SU1 and SU2 release notes and am not seeing any resolved bugs for these issues either.
Here are some samples lines from Conversation Manager traces:
00:32:46.399 HDR|02/04/2013 ,Significant
00:32:46.399 |9744,PhoneSystem-1-001,E68CA6DB3CAE4118BE9F9612BD3540A9,Arbiter,-1,Incoming Call [callerID='' callerName='' calledID='555123' redirectingID='555123' lastRedirectingID='555123' reason=4=FwdNoAnswer lastReason=8=FwdUncond] port=PhoneSystem-1-001 portsInUse=1 ansPortsFree=23 callGuid=E68CA6DB3CAE4118BE9F9612BD3540A9
00:33:55.006 |9744,PhoneSystem-1-001,E68CA6DB3CAE4118BE9F9612BD3540A9,MiuGeneral,25,Enter CAvMiuCall::TransferEx destAddr='715551231234' type=2=Release maxRings=4 mediaSwitch='00a499fa-a193-45b0-a153-564a64019eb8'
00:33:58.955 |9744,PhoneSystem-1-001,E68CA6DB3CAE4118BE9F9612BD3540A9,MiuGeneral,25,Exit CAvMiuCall::TransferEx=0x00000000=S_OK
00:34:37.090 |9832,PhoneSystem-1-001,9DB394E443A3419FA0020BC9D1A92806,Arbiter,-1,Incoming Call [callerID='' callerName='' calledID='555123' redirectingID='555123' lastRedirectingID='555123' reason=4=FwdNoAnswer lastReason=8=FwdUncond] port=PhoneSystem-1-001 portsInUse=1 ansPortsFree=23 callGuid=9DB394E443A3419FA0020BC9D1A92806
00:34:50.403 |9832,PhoneSystem-1-001,9DB394E443A3419FA0020BC9D1A92806,CDL,11,CCsCdlImsAccess::AuthenticateSubscriber: Cannot authenticate user: JOESMITH IMS Result Code: 1 on line 108 of file CdlIms/src/CsCdlImsAccess.cpp: Error: 0x80046505 Description: E_CDL_SP_EXEC_FAILURE
00:34:50.403 |9832,PhoneSystem-1-001,9DB394E443A3419FA0020BC9D1A92806,CDE,3,Authentication Failed for Subscriber 555123 (Src/CsCallSubscriber.cpp 2969)
00:34:50.404 |9832,PhoneSystem-1-001,9DB394E443A3419FA0020BC9D1A92806,-1,-1,An invalid password entered when trying to log into a user mailbox. Details - [].
00:36:29.331 |9832,PhoneSystem-1-001,9DB394E443A3419FA0020BC9D1A92806,MiuGeneral,25,Enter CAvMiuCall::TransferEx destAddr='715551231234' type=2=Release maxRings=4 mediaSwitch='00a499fa-a193-45b0-a153-564a64019eb8'
00:36:33.484 |9832,PhoneSystem-1-001,9DB394E443A3419FA0020BC9D1A92806,MiuGeneral,25,Exit CAvMiuCall::TransferEx=0x00000000=S_OK
00:44:53.444 |9743,PhoneSystem-1-001,FEB6975B82284F37A161CD66ECBA61C6,Arbiter,-1,Incoming Call [callerID='' callerName='' calledID='555123' redirectingID='555123' lastRedirectingID='555123' reason=4=FwdNoAnswer lastReason=8=FwdUncond] port=PhoneSystem-1-001 portsInUse=1 ansPortsFree=23 callGuid=FEB6975B82284F37A161CD66ECBA61C6
00:45:20.019 |9743,PhoneSystem-1-001,FEB6975B82284F37A161CD66ECBA61C6,MiuGeneral,25,Enter CAvMiuCall::TransferEx destAddr='715551231234' type=2=Release maxRings=4 mediaSwitch='00a499fa-a193-45b0-a153-564a64019eb8'
00:45:24.121 |9743,PhoneSystem-1-001,FEB6975B82284F37A161CD66ECBA61C6,MiuGeneral,25,Exit CAvMiuCall::TransferEx=0x00000000=S_OK
Thanks
Hello ebergquist,
As advised, we could try to run a report for a period before the upgrade was performed, to verify whether it pulls the information; if it does, we would be matching the bug. To check, access the CUC CLI and type the following command: file get install *
You will need an SFTP server available for the file transfer, and port 22 open on the SFTP client.
Once you receive the files, check system-history.log; below is an example.
=======================================
Product Name - Cisco Unity Connection
Product Version - 8.5.1.12900-7
Kernel Image - 2.6.9-89.0.20.EL
=======================================
02/24/2013 13:35:02 | root: Boot 8.5.1.12900-7 Start
02/24/2013 16:03:45 | root: Install 8.5.1.12900-7 Success
Once you have the time stamp and date of the upgrade, try to gather a report for a time range before the upgrade was performed; even though the data might have been overwritten, there is still a possibility that it succeeds.
If this bug is not the issue then you might want to take into consideration:
1. Make sure that the service ReportDB is [STARTED] in the output of the command: utils service list.
2. If you have Digital Networking on your server, make sure there is no stalled replication. Go to RTMT (Real Time Monitoring Tool) > Syslogs > select your server in the dropdown > double-click Application Logs > CiscoSyslog. You can then do a manual search, or sort with the filter option:
Severity: Error
App ID: CuReplicator
Hit apply. Below is an example:
Date: Feb 19 08:01:15
Machine Name: LAB1111
Severity: Error
App ID: CuReplicator
Message: : 1151681: LAB1111.corpnet2.com: Feb 19 2013 08:01:11.662 UTC : UC_UCEVNT-3-EvtReplicatorStalledReceiveReplication %[ClusterID=][NodeID=LAB1111]: Detected stalled replication receiving from location LAB1111. Waiting for missing USN 2222. This situation may indicate network connectivity problems.
3. Make sure that replication is working properly in the cluster. You can check it via the following commands: show cuc cluster status and utils dbreplication runtimestate
For the TransferEx value that you note changed to zeros (TransferEx=0x00000000=S_OK): this is just a hexadecimal return value stating that the transfer that took place during the call was successful. In this case the actual transfer was to the destination TransferEx destAddr='715551231234', during a call to calledID='555123'. This does not relate to an actual change of a transfer number performed by a user.
Those traces won't be enough; you will additionally have to set ConvSub level 3 and CDE level 16 so that you see output in the traces like the example posted in the previous thread.
Traces are the path to follow in order to verify which user changed the transfer rule, as I understand that is the goal you're pursuing.
However, if it is of any help, another way to get hold of the numbers that were updated on the transfer rule is to use the User Data Dump tool; this information reflects changes regardless of whether they were made via the TUI (Telephone User Interface) or the GUI (Graphical User Interface). For more information on the tool go to:
http://ciscounitytools.com/Applications/CxN/UserDataDump/Help/UserDataDump.htm
Traces are extensive to review manually; however, you can do a text search with a tool such as Notepad++ for keywords such as "NewTransferNumber =" or "CDE,16,Transfer allowed". Once you find a match, check the callGuid, which is the reference number of the call and exists for the entire length of the call, whether a person is leaving a voicemail or a user is accessing his or her inbox. This will speed up your search.
So as explained in the first thread you would then do another text search returning all possible matches with that callGUID.
Now if you get to the very beginning of the call you would see something like:
Line 13712: 18:58:05.156 |24720,CCIE-1-001,0C9C97BDF5F444F0B535D231C32DDCB4,Arbiter,-1,Incoming Call [callerID='2199' callerName='' calledID='6789' redirectingID='' lastRedirectingID='' reason=1=Direct lastReason=1024=Unknown] port=CCIE-1-001 portsInUse=1 ansPortsFree=1 callGuid=0C9C97BDF5F444F0B535D231C32DDCB4
In this example you see my test user, callerID='2199', calling the pilot number 6789 (the voicemail button), calledID='6789', and the type of call, whether Forward or Direct. When logging in to voicemail you will see Direct, which matches the Direct routing rules and sends the call to the attempt-sign-in conversation.
The other approach is a rule-out method between the audit traces and the User Data Dump tool.
So let's say you have a baseline document that states all of the Transfer numbers for all users.
If the User Data Dump reflects a different number and the audit logs do not show any change between the time the baseline was created and the time the number was changed, then you can deduce the change was made via the TUI (Telephone User Interface). Unfortunately, you wouldn't be able to tell who made the change in this case.
Nevertheless, remember that a user may have for example an Alternate Extension so the change might not necessarily be done from the office phone: "Alternate extensions can make calling Cisco Unity Connection from an alternate device—such as a mobile phone, a home phone, or a phone at another work site—more convenient. When you specify the phone number for an alternative extension, Connection handles all calls from that number in the same way that it handles calls from a primary extension (assuming that ANI or caller ID is passed along to Connection from the phone system). This means that Connection associates the alternate phone number with the user account, and when a call comes from that number, Connection prompts the user to enter a PIN and sign in."
Best regards,
David Rojas
Cisco TAC Support Engineer, Unity
Email: [email protected]
Phone: 1-407- 241-2965 ext 6406
Mon, Wed, and Fri 12:00 pm to 9:00 pm ET, Tue and Thu 8:00 am to 5:00pm ET
Cisco Worldwide Contact link is below for further reference.
http://www.cisco.com/en/US/support/tsd_cisco_worldwide_contacts.html -
How to Access the XI_AF_MSG table
Hi Experts,
Please let me know how I can access the different Java tables, such as the XI_AF_MSG table or the AUDIT tables.
My requirement is to trace the audit log for a particular message ID in the Adapter Engine. This audit trace is not the same as the audit log found in the PI 7.0 Runtime Workbench. For example:
a message failed in the Adapter Engine after being successfully processed by the PI Integration Engine. Now I want to trace the user ID by whom the message was resent or canceled.
Please let me know how I can achieve this and how I can access the different tables in the Java layer.
Thanks
Sugata Bagchi Majumder
These three tables are for the XI adapter in the ABAP stack.
SWFRXICNT
SWFRXIHDR
SWFRXIPRC
You can also try the following tables.
SXMSAEADPMOD XI: Adapter and Module Information
SXMSAEADPMODCHN XI: Adapter Module Chains
SXMSAEAGG XI: Adapter Runtime Data (Aggregated)
SXMSAERAW XI: Adapter Runtime Data (Raw Data)
Cheers,
Sarath.
Award if helpful. -
File to idocs - sequence of inbound processing
Hi everybody,
we have a file-to-two-IDocs scenario. In XI we map the received vendor data into two IDocs (ADRMAS and CREMAS). After building the IDocs we send them in that sequence to the receiver systems (as recommended). The problem is that in the receiver system the CREMAS IDoc very often wants to be processed before ADRMAS is ready. This leads to the error "vendor xy is blocked by user sapale". Especially with mass processing I get a lot of these error messages in BD87. As a workaround I processed all the error IDocs with report RBDMANI2 (first step ADRMAS, second step CREMAS). But then in some cases I find missing data in table LFA1, which suggests that the sequence of 1. ADRMAS, 2. CREMAS was not processed correctly.
Is there a possibility for inbound processing with the rule: process ADRMAS, wait until it is done, then process the associated CREMAS, and so on?
I checked the SAP help article on serialization, but we could not achieve a better result by queuing the outgoing messages in XI.
Thanks very much.
With kind regards
Jörg
You have to base a solution on the following concepts:
1. Do not use BPM; it is not efficient.
2. Understand the difference between an IDoc in received state and in processed state. Received state means the IDoc is saved into the IDoc tables; processed state means the IDoc has been processed into the business system.
3. You can ask the Basis team to turn on the immediate IDoc processing option in the SAP system, so that SAP processes an IDoc as soon as it arrives in the IDoc tables. This is not efficient if your SAP system has to serve online client requests and process SAP documents (inbound and outbound) at the same time.
4. Understand the concept of standards-based integration: the integration system gives the business parties the means to confirm successful message transfer.
Based on all these points, I recommend you follow the steps below.
1. Extract each record from the input files into two IDocs.
2. Send the first IDoc to the receiving system.
3. Send the second IDoc to an ESB store such as a database, a JMS queue, an MQ Series queue (if you have one available), or even another file.
4. Develop an RFC module to check the status of the IDoc sent to the receiving system. Status here means whether the IDoc data has been processed into the business system. You can do this with a custom RFC lookup using the attribute that connects the first IDoc record with the second IDoc.
5. Process the records (second IDoc) from the intermediate store: use the RFC lookup against the business system and update their status to ready-to-deliver.
6. Using another process, such as file-to-IDoc, JMS-to-IDoc, or JDBC-to-IDoc, send each record that is ready to process from the intermediate store to the receiving system.
7. Create a report using a file, JMS, or JDBC adapter module to keep track of this three-stage processing, so that in case an inconsistency happens you have an audit trace available.
This is the standard based integration approach.
ABAP and Basis guys may not get it (it is like repairing a BMW in a local auto workshop); I had to fight with them four years ago to make it happen in a Verizon supply-chain project, where I had to accomplish the same concept with the same IDocs you mention here.
BPM, turning on immediate processing of IDocs, and so on will end up in buying another 16-CPU server, with Basis or ABAP guys running bad-IDoc reprocessing reports at 350.00/hr consulting fees.
SAP is a good company and XI is a good product, as long as they are used the right way. -
JDBC sender issue no data in PI
Hello,
We have a Database-to-SAP scenario using a JDBC adapter on the sender side, and the QoS is EO.
Now the issue: the database team says they have sent the data from the database, but the data is not updating in the SAP system, and as a result no IDoc is created.
In the DB trace the DBA said that the SELECT executed successfully.
But I checked in PI, and there is no failed or scheduled message in component channel monitoring. The message ID is greyed out. All I see is that polling started and processing finished successfully.
In SXMB_MONI there are no xml messages.
SELECT * FROM sys.tables works fine, and for it I can see XML messages in SXMB_MONI and the audit trace log in RWB under CC monitoring.
Why is it not picking up data from the other user tables in the SQL Server database?
I have analyzed the issue end to end.
Where is the adapter sending the converted XML messages if it has picked up the data? If it has not, why can't I see the trace, and why is the message ID greyed out in the CC monitoring?
Please help me to resolve this problem.
Thank you,
Teresa
Dear Thomas,
I am facing the same issue in production; please advise.
My scenario is also JDBC --> XI --> SAP (proxy).
The database server sends data to XI, and the signals arrive in XI fine, but out of, say, 100 messages sent by the DB team, 1 or 2 are lost and do not show up in XI.
When I check SXMB_MONI, RWB (message monitoring, component monitoring) and the logs, there is no error message.
On the DB side it shows the XI read timestamp was set and the status changed from 'N' to 'S'.
Query:
Select Query:SELECT * FROM MCON_INTF_OPS WHERE CSTATUS = 'N' FOR UPDATE
Update Query:UPDATE MCON_INTF_OPS set CSTATUS = 'S',CXI_READ = sysdate WHERE CSTATUS = 'N'
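For what it's worth, one classic cause of this symptom is that the UPDATE above re-evaluates CSTATUS = 'N' on its own, so rows committed between the SELECT and the UPDATE are flagged 'S' without ever having been read. A commonly suggested pattern is to claim rows first and then read exactly the claimed rows. This is only a sketch: 'P' is a hypothetical intermediate status value, and everything else is taken from the queries above.

```sql
-- Sketch: mark the new rows first, then select exactly the marked rows.
UPDATE MCON_INTF_OPS
   SET CSTATUS = 'P'                    -- 'P' = picked up (hypothetical value)
 WHERE CSTATUS = 'N';

SELECT * FROM MCON_INTF_OPS WHERE CSTATUS = 'P';

-- only the rows that were actually read get their final status
UPDATE MCON_INTF_OPS
   SET CSTATUS = 'S', CXI_READ = sysdate
 WHERE CSTATUS = 'P';
```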
Why are a few signals lost? Please advise. -
DB Transaction ROLLBACK issues
Hi All,
I am running BPEL version 10.1.2.0.2, and I am trying out a transactional DB insert. It looks like it is not doing a proper rollback when an exception occurs in the BPEL process flow, and the audit trace is also not generated when an exception happens. Is this a bug in this version, and is there a patch available?
I have set the property in bpel.xml to transaction=participate and on the partner link participate=true. For transactionality, is there any other setting to be made in any config XML files such as /../../data-source.xml or oc4j-ra.xml?
Appreciate if any one can let me know if i need to do any other settings
Thank you
RM
According to Clemens in his note, there are some considerations for making this happen:
a) the process type should be sync (to ensure the process runs in one thread);
b) on each partner link, specify participate=true (as a property); and
c) the DB adapter's database connection MUST point to an XA location to participate in the global transaction.
Re: the transaction on multiple PL/SQL Web Service on once BPEL process -
Transactional control statements in trigger
Hi,
Why can a transaction control statement not be executed within a trigger body? Please explain with a reason; it would help me.
Ishan wrote:
"a way you can actually make it work" .... seriously?
Yes, I was serious. Why? What's wrong with that? Technically speaking, won't it work?
This is not a way to make it work, it's a way to break an application/process
Well!!! In all cases?
Here's a scenario
I want to audit a table where any change made by a user has to be tracked in an audit table, irrespective of whether the change has since been rolled back in the main table. I want to see what change was made. How would I do it? Going by your logic of breaking the application/process, I should never use a trigger with AUTONOMOUS_TRANSACTION. Am I right? How would I achieve it then?
Your auditing/tracing code should be in a separate procedure, and that procedure should be autonomous, so that the requirement to write autonomous data for that purpose is kept isolated from the trigger code.
Consider this scenario instead. You want to write audit/trace information from your triggers, so you make the trigger autonomous and put in your code to write the audit/trace information. Later on, someone who hasn't a clue comes along and decides they need to do something else in that trigger (well, why write a new trigger when one already exists?), and they get it to write some data related to the data being created by the trigger. Now you suddenly have a transactionally unsafe application, where this 'child data' can still be written and committed even if there's a problem in the main transaction.
By isolating auditing/tracing in its own autonomous procedure, you make it clear that only the auditing/tracing should be autonomous, and you prevent any such problems from occurring inside the trigger itself.
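The isolation described above can be sketched in PL/SQL. The table, procedure, and trigger names here are illustrative, not from the thread:

```sql
-- Sketch: only the audit write is autonomous; the trigger itself is not.
CREATE OR REPLACE PROCEDURE write_audit (p_msg IN VARCHAR2) AS
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    INSERT INTO my_audit_log (logged_at, msg)
    VALUES (SYSTIMESTAMP, p_msg);
    COMMIT;   -- commits only this autonomous transaction
END;
/

CREATE OR REPLACE TRIGGER trg_emp_audit
AFTER UPDATE ON emp
FOR EACH ROW
BEGIN
    -- the trigger runs in the main transaction; the audit row
    -- survives even if that transaction is later rolled back
    write_audit('emp ' || :OLD.empno || ' changed by ' || USER);
END;
/
```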
That's more of a way to write bad code that is transactionally unsafe; it demonstrates a lack of understanding of database transactional processing and of how the RDBMS on the server processes commits, and it could potentially lead to write contention in the writer processes, many of which would be spawned if the number of inserts (in this example) was high.
The above comment is based on the assumption (which I never stated) that, in whatever scenario my code was implemented, it would ALWAYS break the process. There are scenarios where this code could be required.
Here's a link to the Oracle documentation, which uses an example of creating a trigger with AUTONOMOUS_TRANSACTION. If it is such a bad example that it "demonstrates a lack of understanding of database transactional processing", why would Oracle even mention it in their documentation?
http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/sqloperations.htm#BABDHGAB
And the answer is that there is a way to do it, and it could actually be required, even though that's rare.
I just showed the OP that it's not allowed directly and that, if required, it can be done, as there is a way to do it. When to do it and what the consequences could be was never in the scope of the question.
Yes, and there are ways to jump off cliffs and kill yourself, but I wouldn't recommend demonstrating how to do that to people just because it's possible.
Just because it's in the Oracle documentation doesn't always mean it's best practice. A lot of the documentation was written a long time ago; best practices have changed, further functionality has been added, and Oracle has been known to make mistakes in its documentation too. In this example, however, it's used for auditing purposes, so the idea was almost there, but they still haven't met the best practice of keeping the auditing isolated, so their example is not the best. -
Infotype Log
Hi
In my company someone deleted the 90* infotype from PM01. Can I now find out who (which user ID) deleted it and when?
Please help; it is a bit urgent.
Prash.
Points are sure for any hints.
Hi,
You can use the audit trace, such as STAT, to trace who made the changes. This report may help you: RPUAUD00.
Good Luck.
Om. -
Server Side Links (Node Link): caught exception
Hi,
The following error is showing up in the System Audit Trace logs:
Server Side Links (Node Link): caught exception
I am not able to figure out why this error occurs.
Can anyone please explain the reason for this error?
Thanks
Hi
Thanks for the link; it is indeed a good example. However, whilst it has some interesting points, it seems to narrowly skirt around the issue I am having (as do most examples, in fact).
Perhaps I should have been clearer that my problem relates specifically to entities that have already been persisted, and all I am trying to do is edit them. In the example you sent, if bean validation fails then the entity is not persisted, so no problem. However, when editing an already persisted entity, form submissions trigger an automatic database update via the EntityManager (provided JSF validations pass), so my real problem is avoiding this.
Any input appreciated.
-Rich -
Tracing BPS Allocations in a Sequence ...
First off, I'd like to thank you for your contributions, and let you know that I found the discussions on this forum very informative.
I appreciate your taking the time to share your experiences
I'm working with a client who has been running allocations in R/3. Because of what they've seen of the ability to capture 'detail lists' in R/3, they expect similar functionality in BPS.
Hence, they expect to see the sender, receiver, amount, and distribution basis for EVERY allocation, stored in a way that lets them literally step through every allocation step after the entire sequence is run.
I'm aware of Trace in BPS, but I have seen that it only works when a single parameter group is executed, not in a sequence. Is that correct?
Is there any reasonable way of recreating what is available in standard SAP R/3 allocation? (For transaction KSU5 in R/3, there is a field on the cycle execution screen that, when checked, creates a 'detail list' of each segment in the cycle and each of the senders/receivers within each segment. From this list, we can select a segment and show how the senders or receivers performed.)
FYI, I'm using the standard allocation function in 'distribution' mode.
I've considered introducing a GUID-like characteristic into the cube so that it carries a unique ID for the records changed in each parameter group.
I'm not entirely sure whether this is the best way, or what it means for the amount of FOX I'll have to write (currently I'm using standard functions, and I'd like to stick to them as far as possible). There are approximately 600+ separate allocations, so it will probably be a bit of a nightmare ;0) for me to codify the rules, and for the client due to increased run times.
Is there any elegant way of recreating the trace from the Basis application log (BAL* tables)? Or is that not advisable?
Am I thinking straight? Are there other avenues within BPS that I should consider?
Please advise your thoughts.
Guidance greatly appreciated.
Best rgds
Anand
Anand,
You have to communicate to the client that the allocation function in BPS is NOT the same as the allocation functionality in R/3. R/3 allocations are auditable, traceable, and very detailed, and you can see what happened a long time later. The allocation planning function in BPS is just a standard function, and the percentages and accounts are actually configuration.
If they want that functionality in BPS, someone has to configure it, and that takes time (and money). When I explained this to my client, they decided to retract BPS planned primary costs back to R/3 and do the allocation run there. For SOX compliance, clients usually want the audit trail, and configuring an audit trace in BPS is a lot of work...
If they insist on doing it in BPS, with all the time and cost implications, you might want to look into an allocation driver cube (and the data modelling exercise), or maybe use the distribution function in order to keep the original amount in a separate key figure. But then you have to be very careful that you do not mistakenly leave something double-counted through incorrect configuration.
Hope this helps. Feel free to award points if this has proved helpful.
Mary -
OK, latest version of Oracle BPEL PM, Axis, Tomcat...
I want to create a file composed of 4 fields.
I invoke a simple WS using an int variable and receive an int back. I repeated this 4 times and put the values into the fields of a variable in order to write them to a file. That works!
I created the schema with step1, step2, step3, step4
mapped to first, second, third, fourth.
I checked the first row as a header and specified that step1, 2, 3, and 4 are int/integer (?),
but I got this error:
<bindingFault xmlns="http://schemas.oracle.com/bpel/extension">
<part name="code">
<code>null</code>
</part>
<part name="summary">
<summary>file:/C:/Programmi/OracleBPELPM1/integration/orabpel/domains/default/tmp/.bpel_Communicator_1.0.jar/Write.wsdl [ Write_ptt::Write(Root-Element) ] - WSIF JCA Execute of operation 'Write' failed due to: Conversion error. Error while converting a message to native format. ; nested exception is: ORABPEL-11017 Conversion error. Error while converting a message to native format. Check the error stack and correct the cause of the error. Contact Oracle Support if the error cannot be corrected.</summary>
</part>
<part name="detail">
<detail>null</detail>
</part>
</bindingFault>
Help!!! All values are int!!!
Thanks
Ema
Hi Emanuele,
Can you please post the BPEL process and the XSD file created by the File Adapter (NXSD) wizard? Also the audit trace from the console?
It looks like the translation is failing in the file adapter while writing the file.
Also, if possible, can you please send the BPEL process project and the web service from which you are getting the values?
Regards,
Dhaval -
Hi all
I am new to RMAN.
We have implemented it on Oracle Linux, and my target DB is 10.2.0.2 on Oracle Linux. I am using Recovery Manager and have archived logging enabled.
My problem is that the file system that holds my flash_recovery area is full, and I don't know how to get out of this problem.
I tried to delete some trace files but still get an error that it cannot write to the audit files. Can someone kindly advise?
Regards
Kamlesh
Could you run this:
sql> show parameter db_recovery
And where do you put your audit trace?
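If the recovery area really is full, the usual checks and fixes can be sketched as follows; these are standard views and commands, and the 20G size is only an example:

```sql
-- Sketch: diagnose and relieve a full flash recovery area (10g).
SELECT space_limit, space_used, space_reclaimable
  FROM v$recovery_file_dest;

-- option 1: from RMAN, delete archived logs that are already backed up, e.g.
--   RMAN> DELETE ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE disk;

-- option 2: temporarily enlarge the recovery area quota
ALTER SYSTEM SET db_recovery_file_dest_size = 20G;
```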
cheers
http://fiedizheng.blogspot.com/