JMS - retry logic
Currently we have individual queues for each destination. We are planning to move to a single queue. Even though we can carry the destination in the message header, how do we customise the retry logic?
I mean, say I have three destinations A, B and C. When the first message is targeted at A and A happens to be down, how long will the queue keep retrying that destination? Will it hold up subsequent messages to the other destinations? Can we customise the retry logic to try for some time and then move on to the next message and destination?
Hi,
Adding the properties as you have them is correct.
When I did this I also had to create a custom fault policy in the DeQueue composite that would not catch the errors.
This meant that the fault caused the message to roll back to the queue.
fault-bindings.xml:
<?xml version="1.0" encoding="UTF-8"?>
<faultPolicyBindings version="3.0"
    xmlns="http://schemas.oracle.com/bpel/faultpolicy"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <!-- Imperial Fault Policy for Reliable ASync Design Patterns -->
  <!-- Domain level Default Policy -->
  <composite faultPolicy="IMPDefaultPolicy"/>
</faultPolicyBindings>
fault-policies.xml:
<?xml version="1.0" encoding="UTF-8"?>
<faultPolicies xmlns="http://schemas.oracle.com/bpel/faultpolicy"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <faultPolicy version="3.0" id="IMPDefaultPolicy"
      xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"
      xmlns:xs="http://www.w3.org/2001/XMLSchema"
      xmlns="http://schemas.oracle.com/bpel/faultpolicy"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <!-- Imperial Fault Policy for Reliable ASync Design Patterns -->
    <!-- We simply want faults to be re-thrown so that the messages roll back to the queues/topics they came from -->
    <Conditions>
      <faultName>
        <condition>
          <action ref="ora-rethrow-fault"/>
        </condition>
      </faultName>
    </Conditions>
    <Actions>
      <!-- This action will bubble up the fault -->
      <Action id="ora-rethrow-fault">
        <rethrowFault/>
      </Action>
    </Actions>
  </faultPolicy>
</faultPolicies>
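For reference, if you want the policy itself to retry for a while before giving up (instead of relying purely on queue redelivery), fault policies also support a retry action. A minimal sketch, with illustrative counts and intervals, where the conditions would reference "ora-retry" instead of "ora-rethrow-fault":

```xml
<Action id="ora-retry">
  <retry>
    <!-- retry 5 times, 30 seconds apart, backing off exponentially -->
    <retryCount>5</retryCount>
    <retryInterval>30</retryInterval>
    <exponentialBackoff/>
    <!-- when retries are exhausted, fall back to rethrowing -->
    <retryFailureAction ref="ora-rethrow-fault"/>
  </retry>
</Action>
```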
Similar Messages
-
Retry logic for TCPIP eway, when backend system is down
I am a newbie in JCAPS. My requirement is to read a text file and then send it contents through a socket connection to backend system. The stream retrieved back from backend system (via socket connection) is to be stored in Oracle DB.
My Approach:
I am using the Batch eway to read the file and then send its contents to a JCAPS queue. I then have a JCD which reads this queue and establishes the socket connection; the contents read back are then written to Oracle.
I have 2 questions:
1. Is this approach correct? Do you recommend any optimized/better way (considering the requirement)?
2. If the backend system is down, won't the messages in the queue be lost? How can we make sure that messages are delivered whenever the backend system comes up?
Thanks, Rajan
You need to pick up a local file and shove it into an Oracle database??
For a reliable system, I would recommend a JCAPS project that consists of:
2 JCDs, one doing the batch file get and the other doing the Oracle insert
1 JMS queue between the two JCDs (if your database connection goes down, or if something else goes wrong, your messages will be safe in this JMS queue)
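That queue-in-the-middle design can be sketched in a few lines (a generic Python stand-in, not JCAPS code; `insert_fn` is a hypothetical hook standing in for the Oracle insert):

```python
from collections import deque

def deliver_with_buffer(messages, insert_fn, max_attempts=5):
    """Messages sit on a queue between the reader and the database
    writer; a message is only removed after the insert succeeds, so a
    DB outage leaves it safely queued for redelivery."""
    q = deque(messages)
    delivered, failures = [], 0
    while q and failures < max_attempts:
        try:
            insert_fn(q[0])                # attempt the insert without removing (peek)
            delivered.append(q.popleft())  # acknowledge only after success
            failures = 0
        except Exception:
            failures += 1                  # message stays queued for the next try
    return delivered, list(q)
```

If the database never comes back within `max_attempts`, the remaining messages are still on the queue rather than lost.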
I can point you towards some tutorials from Sun if you need help on this.
--Adam -
Foreign server or JMS Bridge?
Weblogic 11g
WebsphereMQ 7
Good morning,
I have a question regarding JMS architecture.
We're going to deploy a new app using JMS, but all the messaging architecture will be on WebSphere MQ.
Our middleware team recommends us to create a foreign server to communicate with MQ.
So my question is quite simple: why use a foreign server instead of a JMS bridge?
I'd have used the latter, as it is meant to communicate with other message-oriented middleware.
Thanks for your wise advice.
You can find more information about integrating with a remote or foreign JMS provider at
http://docs.oracle.com/cd/E17904_01/web.1111/e13727/interop.htm#i1009767
This doc discusses a number of approaches and best practices for each of them.
The messaging bridge does introduce an extra hop; messages are put into a local destination and then forwarded to the final destination. This is useful when the remote destination is not highly available: the bridge will accept messages even when the remote destination is down and then forward them, with built-in retry logic, once the remote destination becomes available.
If the remote destination is highly available, a foreign JMS server is preferable, since it accesses the final destination directly without the extra hop.
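The store-and-forward behaviour described above can be sketched like this (illustrative Python, not the WebLogic bridge API; `send_remote` is a hypothetical hook for the remote destination):

```python
from collections import deque

class MessagingBridge:
    """Toy store-and-forward bridge: the local side always accepts
    messages; forwarding to the remote destination is retried until
    it succeeds, so remote downtime is tolerated."""
    def __init__(self, send_remote):
        self.local = deque()
        self.send_remote = send_remote  # may raise while the remote is down

    def put(self, msg):
        self.local.append(msg)          # enqueue locally, never blocks on the remote

    def forward(self):
        """Drain the local buffer; stop at the first failure so order
        is preserved and the message is retried on the next pass."""
        sent = []
        while self.local:
            try:
                self.send_remote(self.local[0])
            except ConnectionError:
                break                   # remote still down; retry on the next pass
            sent.append(self.local.popleft())
        return sent
```

A foreign server, by contrast, would hand `put` straight to the remote and fail immediately when it is down, which is why it suits only highly available targets.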
Hope this helps.
Dongbo -
Wcf web service send port retry?
We are contacting external wcf web services. We don't control these services and they could be returning an exception for a short while (which is "ok"), so we need to retry for a given period of time, without suspending the orchestration/sendport.
It used to be simple with web services (.asmx), since the send port would handle the retries as soon as an exception was returned from the web service. With WCF services and SOAP faults, it is much more cumbersome.
I know how to do it from an orchestration, but it does add a lot of clutter (have an extra "initialization" construct block so visual studio shuts up about the uninitialized message coming back from the wcf service, define documentspecnames in
our case, define xpath expressions for the fault and the expected response, have a loop, a delay, more variables/expressions, scope, exception block, etc. etc.) for such a simple requirement. We need to contact a couple of wcf services in an orchestration
and the handling of the retries makes it quite large and unfriendly (and this is mostly the retry logic for these wcf services).
So, is there any way to have the send port adapter handle the retries when receiving an exception from a WCF service, like it used to with regular web services?
Thanks for your answer. Yes, I understand that the SOAP fault is just another message and that I need to act accordingly in the orchestration. I'm not refuting that; I'm just looking for a better solution than the one I have now (which seems overly complicated to me).
I have to specify the documentspecname because we use the built-in xmlreceive pipeline and the type of the exception that the wcf services return is standardized in our organisation... so there are many copies of the schema of this exception deployed on
the servers. I need to specify the schema of the exception coming from my wcf reference in my assembly otherwise I get the "I can't find the right schema blabla" exception.
Of course, if I can stop using the XMLReceive pipeline, I wouldn't have to specify the documentspecname... I'll have to research in detail what this pipeline does and see if I can replicate its functionality, with the addition of the check for the error in the returned message.
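Outside of pipelines, the underlying "retry for a given period, then fail normally" requirement reduces to a small deadline loop; a generic sketch, not BizTalk-specific, with illustrative names and parameters:

```python
import time

def call_with_deadline(call, deadline_s=2.0, delay_s=0.1,
                       clock=time.monotonic, sleep=time.sleep):
    """Retry `call` until it succeeds or the deadline passes, then
    re-raise the last fault so normal error handling (suspension)
    takes over. `clock` and `sleep` are injectable for testing."""
    start = clock()
    while True:
        try:
            return call()
        except Exception:
            if clock() - start >= deadline_s:
                raise               # give up: surface the fault as usual
            sleep(delay_s)          # wait before the next attempt
```

Whether implemented in an orchestration loop or a helper component, the shape is the same: bounded retries on transient faults, then the original failure path.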
I just want to be sure before proceeding... if I raise an exception in a custom pipeline, would the send port retry configuration apply? -
I have used Azure Managed Cache (Basic tier) in my web app deployed on a cloud service.
Now I am getting the exception ErrorCode<ERRCA0017>:SubStatus<ES0007>:There is a temporary failure. Please retry later. (The request failed because the server is in throttled state) in the Production environment.
I know this exception occurs when memory usage goes above 95%.
I want to handle this issue in code, so please suggest the right solution.
My clients are facing it, so please help me.
It's very urgent.
Server Error in '/' Application.
ErrorCode<ERRCA0017>:SubStatus<ES0007>:There is a temporary failure. Please retry later. (The request failed because the server is in throttled state)
Description: An unhandled exception occurred during the execution of the current
web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode<ERRCA0017>:SubStatus<ES0007>:There
is a temporary failure. Please retry later. (The request failed because the server is in throttled state)
Source Error:
An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace
below.
Stack Trace:
[DataCacheException: ErrorCode<ERRCA0017>:SubStatus<ES0007>:There is a temporary failure. Please retry later. (The request failed because the server is in throttled state)]
Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ErrStatus errStatus, Guid trackingId, Exception responseException, Byte[][] payload, EndpointID destination) +551
Microsoft.ApplicationServer.Caching.SocketClientProtocol.ExecuteApi(IVelocityRequestPacket request, IMonitoringListener listener) +287
Microsoft.ApplicationServer.Caching.SocketClientProtocol.Upsert(VelocityPacketType type, String key, Object value, DataCacheItemVersion oldVersion, TimeSpan timeout, DataCacheTag[]
tags, String region, IMonitoringListener listener) +357
Microsoft.ApplicationServer.Caching.SocketClientProtocol.Put(String key, Object value, DataCacheItemVersion oldVersion, TimeSpan timeout, DataCacheTag[] tags, String region, IMonitoringListener
listener) +74
Microsoft.ApplicationServer.Caching.DataCache.InternalPut(String key, Object value, DataCacheItemVersion oldVersion, TimeSpan timeout, DataCacheTag[] tags, String region, IMonitoringListener
listener) +288
Microsoft.ApplicationServer.Caching.<>c__DisplayClass2f.<Put>b__2e() +146
Microsoft.ApplicationServer.Caching.DataCache.Put(String key, Object value, TimeSpan timeout) +263
Microsoft.Web.DistributedCache.<>c__DisplayClass31`1.<PerformCacheOperation>b__30() +19
Microsoft.Web.DistributedCache.DataCacheRetryWrapper.PerformCacheOperation(Action action) +208
Microsoft.Web.DistributedCache.DataCacheForwarderBase.PerformCacheOperation(Func`1 func) +134
Microsoft.Web.DistributedCache.DataCacheForwarderBase.Put(String key, Object value, TimeSpan timeout) +148
Microsoft.Web.DistributedCache.BlobBasedSessionStoreProvider.SetAndReleaseItemExclusive(HttpContextBase context, String id, SessionStateStoreData item, Object lockId, Boolean newItem)
+177
System.Web.SessionState.SessionStateModule.OnReleaseState(Object source, EventArgs eventArgs) +1021
System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +80
System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +165
Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.34009
Thanks,
Hema
You should add retry logic in case of retriable exceptions. If the retries fail as well, you should reset the DataCacheFactory as suggested in this link:
http://blogs.msdn.com/b/cie/archive/2014/04/29/cache-retry-fails-what-next.aspx
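The retry-then-reset pattern from that link can be sketched generically (Python stand-in; `make_client` and `put` are hypothetical placeholders, not the DataCache API):

```python
def cache_put_with_reset(make_client, key, value, retries=3):
    """Retry a cache write on transient throttling errors; if all
    retries fail, rebuild the client (the 'reset the DataCacheFactory'
    step) and try once more. `make_client` is a factory returning an
    object with a put(key, value) method."""
    client = make_client()
    for _ in range(retries):
        try:
            return client.put(key, value)
        except TimeoutError:        # stand-in for the throttled/temporary failure
            continue                # transient: try again on the same client
    client = make_client()          # reset: drop pooled connections, start fresh
    return client.put(key, value)   # final attempt; raises if still failing
```

The reset matters because a throttled or broken connection pool can keep failing even after the server recovers; rebuilding the factory starts from a clean state.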
Thanks,
Pallav -
Hi,
I am facing this error every hour. It does not happen randomly; it triggers precisely every hour.
I have half a dozen services connecting to the cache, and all of them get this error at the same time every hour.
The cache is working properly except for this "downtime" every hour.
Any idea?
Thanks
You should add retry logic in case of retriable exceptions. If the retries fail as well, you should reset the DataCacheFactory as suggested in this link:
http://blogs.msdn.com/b/cie/archive/2014/04/29/cache-retry-fails-what-next.aspx
Thanks,
Pallav -
The server encountered an internal error. Please retry the request.
Hi,
I have a VM with Windows Server 2012. An automation script starts the VM every day at 7 o'clock and stops it at 23:30.
The point is that it worked well until yesterday, but this morning it didn't start. I've been trying manually, but so far it has been impossible to start the VM.
After a while I get the following error message: "The server encountered an internal error. Please retry the request."
I don't know what else I can do to avoid losing the productive data on the server and start it without problems.
Any help will be appreciated.
Thank you in advance,
Sergio
Hi,
In some cases, as the availability of free blades or free space within a blade in a cluster fluctuates, the constraints to find an allocation for the service may not be met. When these constraints are not satisfied, an allocation failure error message is returned.
To avoid this error, create multiple affinity groups to house services that are not required to be placed together, spin up smaller numbers of VMs at a time, ensure retry logic is in place when an operation is unsuccessful, and utilize multiple datacenters for your deployments.
Try multiple restarts for the VM. However, if the VM still does not start, please try the steps below:
Resize the virtual machine from the portal, but be sure to back up the data on the temporary drive (D:\), since that data is lost during resizing.
If the issue is still not fixed, recreate the VM, retaining the attached disks and attaching them to the new VM.
Hope this helps!
Regards,
Sowmya -
WLS9.2 MP1 jms jdbcstore
Hi
I went through the JMS and JDBC store documentation
http://e-docs.bea.com/wls/docs92/jdbc_admin/oracle_rac.html#wp1080531
we are planning to use multipools with failover algorithm configured to access the JDBC store from JMS
Is this a supported configuration ?
The docs talk about a JMS feature with Automatic Retry
-Dweblogic.store.jdbc.IORetryDelayMillis=X
At the bottom there is a note "If an automatic retry attempt is not successful, you must restart weblogic server"
I was hoping to get more clarity on that statement
If we have RAC and a multipool on top of it, won't the multipool or data source connection retry mechanism kick in after the limited JMS retry?
Or is JMS a special case where only the JMS retry mechanism counts, and if the database is down for longer than that (15000 milliseconds is the maximum value for IORetryDelayMillis) we will have to restart the WebLogic instance?
Hi. This is a known, fixed bug. Either upgrade to 9.2mp3 or ask for a patch for CR312353.
Joe -
SMTP: retry send message
Hi All,
I have to implement email alerts (SMTP) as part of my application. I have to retry multiple times if the email is not sent successfully. I assume I have to retry if I get a MessagingException or SendFailedException during Transport.send(), and that I should only retry the validUnsent addresses from the SendFailedException. Correct?
I couldn't find any info related to retry in http://www.oracle.com/technetwork/java/faq-135477.html
Thanks,
Bindu
No.
Unless you enable "partial send", a send failure means that the message wasn't sent to any
of the recipients.
However, you probably want to do more analysis of the failure. If sending failed because one
of the recipients is invalid, retrying isn't likely to make any difference.
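If you do end up retrying yourself, classifying the failure first avoids pointless resends; a rough sketch using the standard SMTP reply classes (4xx transient, 5xx permanent; `send` here is a hypothetical callable, not the JavaMail API):

```python
def should_retry(smtp_code):
    """4xx replies are transient ('try again later'); 5xx replies are
    permanent (e.g. invalid recipient), so retrying them is wasted effort."""
    return 400 <= smtp_code < 500

def send_with_retry(send, addresses, max_attempts=3):
    """Resend only to addresses that failed transiently. `send` takes a
    list of addresses and returns {address: smtp_code} for failures."""
    pending = list(addresses)
    for _ in range(max_attempts):
        if not pending:
            break
        failures = send(pending)       # unsent addresses with their reply codes
        pending = [a for a, code in failures.items() if should_retry(code)]
    return pending                     # still-unsent transient failures
```

Permanently failed addresses drop out after the first attempt, which matches the point above: retrying an invalid recipient won't make any difference.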
Generally the best approach is to make sure you're using a mail server that's at least as reliable
as your application, possibly colocated with your application, and let it handle all the messy retry
logic. If you can't talk to your local mail server, something is seriously wrong.
If you're stuck using a public mail server of questionable reliability, consider inserting your own
mail server between your application and it. -
ORAMED-03302:[Exception in oneway execution]Unexpected exception in oneway
Hi,
I have a composite which looks like below.
JMS Adapter -------> Mediator -------> WebService
The JMS adapter will get a payload from a queue and push the same to the given webservice through mediator.
My problem here starts when the given webservice is not available. The composite gets stuck when the endpoint is not available, throwing me the fault below:
Non Recoverable System Fault :
*ORAMED-03302:[Exception in oneway execution]Unexpected exception in oneway operation "CreateExceptionLog" on reference "ExceptionLogger1".Possible Fix:Check whether the reference service is properly configured and running or look at exception for analysing the reason or contact oracle support. Cause:Unable to access the following endpoint(s): http://10.103.96.161:8088/mockExceptionEventSOAP11Binding*
I thought of implementing retry logic through fault policies, but it is not working. I need to configure it in such a way that the composite retries pushing the payload until the webservice comes back ON (if it keeps failing, human intervention should be made).
How can I achieve this? Isn't it possible to handle the exception at mediator level?
With smile,
Siva Balan V
Hi,
We are using WebLogic as our application server. We want to send JMS messages
to an external MQSeries listener. We also want the JMS listener within WebLogic
to be able to receive JMS messages sent by MQSeries. Besides using the Messaging Bridge, we
can also use MQSeries classes within WebLogic JVM. For example, we can create
an MQSeries message listener inside WebLogic.
Could anyone tell us what the advantage of using Messaging Bridge instead of MQSeries
classes?
Thanks.
Ye Chen
1) MQ, of course, cannot understand a WL dest as a target. To accomplish
this, encode an MQ destination into a message property (and not ReplyTo), have the MQ
side respond to this MQ destination, and bridge from the MQ response dest
into a WL response dest. Alternatively to the bridge, encode a WL dest
into the property and have the MQ side use the WL client library directly.
2) If your WL client needs to post synchronous receives into
MQ land, what is preventing you from directly using the MQSeries JMS client
libraries to do this? Just treat it like any other MQ client.
A bridge solution improves high-availability, as the MQ side
need not be up 100% of the time for the WL side to function, but your app
may not need this. A bridge solution also is helpful in handling the
cross-vendor transactions, if you need exactly once, but again, you
can do this on your own...
Tom
Ye Chen wrote:
> Hi,
>
> Thank you for your response.
>
> In our case, we have two different systems locating in different VMs: one running
> WebLogic server, the other running MQSeries server. We want JMS messages can be
> exchanged between these two system. But the MQSeries part can only understand
> and send MQSeries' JMS messages. That means our WebLogic part must have the ability
> to send and receive MQSeries' JMS messages.
>
> We did try the Messaging Bridge approach but we need to send synchronous messages
> which means the sender will wait for the response message of receiver. We met
> the following two problems.
> 1. If we use temporary destination (temporary queue/topic), it seems Messaging
> Bridge cannot handle temporary destinations at all.
> 2. If we use a non-temporary destination and set it as the ReplyTo property of
> the JMS message. For example, the WebLogic server sends out a message and set
> the ReplyTo as the Topic "WebLogicReplyToDest". When the message is passed through
> Messaging Bridge and received by the MQSeries listener, the listener cannot understand
> that WebLogic destination. So the response cannot be sent back.
>
> Do you have any idea about how to send synchronous messages using Messaging Bridge?
>
> Thanks.
>
> Ye
>
> "Dongbo Xiao" <[email protected]> wrote:
> >The messaging bridge has built-in capability of retry, transaction, and
> >start/stop. With the retry policy, the bridge will re-establish the
> >connection
> >to the remote server after detecting a failure, therefore the bridge
> >can
> >tolerate temporary down time of the remote server. You can choose a
> >proper quality of services according to the nature and requirement of
> >your
> >application. If needed, transaction will be involved automatically. You
> >can also start and stop the bridge at any time. All of those can be done
> >in
> >your listener's code but just more work. With proper configuration you
> >get all of those automatically in the bridge.
> >
> >However, the bridge only moves messages from one destination to another.
> >You have to have a client listening to the target destination and doing
> >your
> >application logic.
> >
> >You can also use Message Driven Bean (MDB) within WebLogic to listen
> >to
> >a MQSeries destination. MDB has retry logic and transaction capability
> >as
> >well. However, I don't know if you can have the similar support from
> >MQSeries for an external MQSeries listener listening to a WebLogic Server
> >destination. You can also use a MDB in WebLogic, which listens to the
> >WebLogic server destination and sends the messages to a MQSeries
> >destination.
> >Then you have to handle the retries of connection to the MQSeries queue
> >manager yourself for sending the messages.
> >
> >Hope this helps.
> >
> >Dongbo
> >
> >"Ye Chen" <[email protected]> wrote in message
> >news:[email protected]...
> >>
> >> Hi,
> >>
> >> We are using WebLogic as our application server. We want to send JMS
> >messages
> >> to an external MQSeries listener. And we also want the JMS listener
> >within
> >WebLogic
> >> can received JMS messages sent by MQSeries. Besides using Messaging
> >Bridge, we
> >> can also use MQSeries classes within WebLogic JVM. For example, we
> >can
> >create
> >> an MQSeries message listener inside WebLogic.
> >>
> >> Could anyone tell us what the advantage of using Messaging Bridge instead
> >of MQSeries
> >> classes?
> >>
> >> Thanks.
> >>
> >> Ye Chen
> >>
> >>
> >
> >
-
Project Analytics 7.9.6.1 - Error while running a full load
Hi All,
I am performing a full load for Projects Analytics and get the following error,
=====================================
ERROR OUTPUT
=====================================
1103 SEVERE Wed Nov 18 02:49:36 WST 2009 Could not attach to workflow because of errorCode 36331 For workflow SDE_ORA_CodeDimension_Gl_Account
1104 SEVERE Wed Nov 18 02:49:36 WST 2009
ANOMALY INFO::: Error while executing : INFORMATICA TASK:SDE_ORA11510_Adaptor:SDE_ORA_CodeDimension_Gl_Account:(Source : FULL Target : FULL)
MESSAGE:::
Irrecoverable Error
Error while contacting Informatica server for getting workflow status for SDE_ORA_CodeDimension_Gl_Account
Error Code = 36331:Unknown reason for error code 36331
Pmcmd output :
The session log initialises a NULL value for mapping parameter MPLT_ADI_CODES.$$CATEGORY. This is then used in subsequent SQL and results in an ORA-00936: missing expression error. The following are the initialisation section and the load section containing the error from the log.
Initialisation
DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
DIRECTOR> VAR_27028 Use override value [ORA_11_5_10] for session parameter:[$DBConnection_OLTP].
DIRECTOR> VAR_27028 Use override value [ORA_11_5_10.DATAWAREHOUSE.SDE_ORA11510_Adaptor.SDE_ORA_CodeDimension_Gl_Account_Segments.log] for session parameter:[$PMSessionLogFile].
DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[MPLT_ADI_CODES.$$CATEGORY].
DIRECTOR> VAR_27028 Use override value [4] for mapping parameter:[MPLT_SA_ORA_CODES.$$DATASOURCE_NUM_ID].
DIRECTOR> VAR_27028 Use override value [DEFAULT] for mapping parameter:[MPLT_SA_ORA_CODES.$$TENANT_ID].
DIRECTOR> TM_6014 Initializing session [SDE_ORA_CodeDimension_Gl_Account_Segments] at [Wed Nov 18 02:49:11 2009].
DIRECTOR> TM_6683 Repository Name: [repo_service]
DIRECTOR> TM_6684 Server Name: [int_service]
DIRECTOR> TM_6686 Folder: [SDE_ORA11510_Adaptor]
DIRECTOR> TM_6685 Workflow: [SDE_ORA_CodeDimension_Gl_Account_Segments] Run Instance Name: [] Run Id: [17]
DIRECTOR> TM_6101 Mapping name: SDE_ORA_CodeDimension_GL_Account_Segments [version 1].
DIRECTOR> TM_6963 Pre 85 Timestamp Compatibility is Enabled
DIRECTOR> TM_6964 Date format for the Session is [MM/DD/YYYY HH24:MI:SS]
DIRECTOR> TM_6827 [C:\Informatica\PowerCenter8.6.1\server\infa_shared\Storage] will be used as storage directory for session [SDE_ORA_CodeDimension_Gl_Account_Segments].
DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
DIRECTOR> TM_6703 Session [SDE_ORA_CodeDimension_Gl_Account_Segments] is run by 32-bit Integration Service [node01_ASG596138], version [8.6.1], build [1218].
MANAGER> PETL_24058 Running Partition Group [1].
MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
MANAGER> PETL_24001 Parallel Pipeline Engine running.
MANAGER> PETL_24003 Initializing session run.
MAPPING> CMN_1569 Server Mode: [UNICODE]
MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
MAPPING> TM_6151 The session sort order is [Binary].
MAPPING> TM_6185 Warning. Code page validation is disabled in this session.
MAPPING> TM_6156 Using low precision processing.
MAPPING> TM_6180 Deadlock retry logic will not be implemented.
MAPPING> TM_6307 DTM error log disabled.
MAPPING> TE_7022 TShmWriter: Initialized
MAPPING> DBG_21075 Connecting to database [orcl], user [DAC_REP]
MAPPING> CMN_1716 Lookup [mplt_ADI_Codes.Lkp_Master_Map] uses database connection [Relational:DataWarehouse] in code page [MS Windows Latin 1 (ANSI), superset of Latin1]
MAPPING> CMN_1716 Lookup [mplt_ADI_Codes.Lkp_Master_Code] uses database connection [Relational:DataWarehouse] in code page [MS Windows Latin 1 (ANSI), superset of Latin1]
MAPPING> CMN_1716 Lookup [mplt_ADI_Codes.Lkp_W_CODE_D] uses database connection [Relational:DataWarehouse] in code page [MS Windows Latin 1 (ANSI), superset of Latin1]
MAPPING> TM_6007 DTM initialized successfully for session [SDE_ORA_CodeDimension_Gl_Account_Segments]
DIRECTOR> PETL_24033 All DTM Connection Info: [<NONE>].
MANAGER> PETL_24004 Starting pre-session tasks. : (Wed Nov 18 02:49:14 2009)
MANAGER> PETL_24027 Pre-session task completed successfully. : (Wed Nov 18 02:49:14 2009)
DIRECTOR> PETL_24006 Starting data movement.
MAPPING> TM_6660 Total Buffer Pool size is 32000000 bytes and Block size is 128000 bytes.
READER_1_1_1> DBG_21438 Reader: Source is [asgdev], user [APPS]
READER_1_1_1> BLKR_16051 Source database connection [ORA_11_5_10] code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
READER_1_1_1> BLKR_16003 Initialization completed successfully.
WRITER_1_*_1> WRT_8147 Writer: Target is database [orcl], user [DAC_REP], bulk mode [OFF]
WRITER_1_*_1> WRT_8221 Target database connection [DataWarehouse] code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
WRITER_1_*_1> WRT_8124 Target Table W_CODE_D :SQL INSERT statement:
INSERT INTO W_CODE_D(DATASOURCE_NUM_ID,SOURCE_CODE,SOURCE_CODE_1,SOURCE_CODE_2,SOURCE_CODE_3,SOURCE_NAME_1,SOURCE_NAME_2,CATEGORY,LANGUAGE_CODE,MASTER_DATASOURCE_NUM_ID,MASTER_CODE,MASTER_VALUE,W_INSERT_DT,W_UPDATE_DT,TENANT_ID) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
WRITER_1_*_1> WRT_8124 Target Table W_CODE_D :SQL UPDATE statement:
UPDATE W_CODE_D SET SOURCE_CODE_1 = ?, SOURCE_CODE_2 = ?, SOURCE_CODE_3 = ?, SOURCE_NAME_1 = ?, SOURCE_NAME_2 = ?, MASTER_DATASOURCE_NUM_ID = ?, MASTER_CODE = ?, MASTER_VALUE = ?, W_INSERT_DT = ?, W_UPDATE_DT = ?, TENANT_ID = ? WHERE DATASOURCE_NUM_ID = ? AND SOURCE_CODE = ? AND CATEGORY = ? AND LANGUAGE_CODE = ?
WRITER_1_*_1> WRT_8124 Target Table W_CODE_D :SQL DELETE statement:
DELETE FROM W_CODE_D WHERE DATASOURCE_NUM_ID = ? AND SOURCE_CODE = ? AND CATEGORY = ? AND LANGUAGE_CODE = ?
WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [W_CODE_D]
WRITER_1_*_1> WRT_8003 Writer initialization complete.
READER_1_1_1> BLKR_16007 Reader run started.
WRITER_1_*_1> WRT_8005 Writer run started.
WRITER_1_*_1> WRT_8158
Load section
*****START LOAD SESSION*****
Load Start Time: Wed Nov 18 02:49:16 2009
Target tables:
W_CODE_D
READER_1_1_1> RR_4029 SQ Instance [mplt_BC_ORA_Codes_GL_Account_Segments.Sq_Fnd_Flex_Values] User specified SQL Query [SELECT
FND_FLEX_VALUES.FLEX_VALUE_SET_ID,
FND_FLEX_VALUES.FLEX_VALUE,
MAX(FND_FLEX_VALUES_TL.DESCRIPTION),
FND_ID_FLEX_SEGMENTS.ID_FLEX_NUM,
FND_ID_FLEX_SEGMENTS.APPLICATION_COLUMN_NAME
FROM
FND_FLEX_VALUES,
FND_FLEX_VALUES_TL,
FND_ID_FLEX_SEGMENTS,
FND_SEGMENT_ATTRIBUTE_VALUES
WHERE
FND_FLEX_VALUES.FLEX_VALUE_ID = FND_FLEX_VALUES_TL.FLEX_VALUE_ID AND FND_FLEX_VALUES_TL.LANGUAGE ='US' AND
FND_ID_FLEX_SEGMENTS.FLEX_VALUE_SET_ID =FND_FLEX_VALUES.FLEX_VALUE_SET_ID AND
FND_ID_FLEX_SEGMENTS.APPLICATION_ID = 101 AND
FND_ID_FLEX_SEGMENTS.ID_FLEX_CODE ='GL#' AND
FND_ID_FLEX_SEGMENTS.ID_FLEX_NUM =FND_SEGMENT_ATTRIBUTE_VALUES.ID_FLEX_NUM AND
FND_SEGMENT_ATTRIBUTE_VALUES.APPLICATION_ID =101 AND
FND_SEGMENT_ATTRIBUTE_VALUES.ID_FLEX_CODE = 'GL#' AND
FND_ID_FLEX_SEGMENTS.APPLICATION_COLUMN_NAME=FND_SEGMENT_ATTRIBUTE_VALUES.APPLICATION_COLUMN_NAME AND
FND_SEGMENT_ATTRIBUTE_VALUES.ATTRIBUTE_VALUE ='Y'
GROUP BY
FND_FLEX_VALUES.FLEX_VALUE_SET_ID,
FND_FLEX_VALUES.FLEX_VALUE,
FND_ID_FLEX_SEGMENTS.ID_FLEX_NUM,
FND_ID_FLEX_SEGMENTS.APPLICATION_COLUMN_NAME]
READER_1_1_1> RR_4049 SQL Query issued to database : (Wed Nov 18 02:49:17 2009)
READER_1_1_1> RR_4050 First row returned from database to reader : (Wed Nov 18 02:49:17 2009)
LKPDP_3> DBG_21312 Lookup Transformation [mplt_ADI_Codes.Lkp_W_CODE_D]: Lookup override sql to create cache: SELECT W_CODE_D.SOURCE_NAME_1 AS SOURCE_NAME_1, W_CODE_D.SOURCE_NAME_2 AS SOURCE_NAME_2, W_CODE_D.MASTER_DATASOURCE_NUM_ID AS MASTER_DATASOURCE_NUM_ID, W_CODE_D.MASTER_CODE AS MASTER_CODE, W_CODE_D.MASTER_VALUE AS MASTER_VALUE, W_CODE_D.W_INSERT_DT AS W_INSERT_DT, W_CODE_D.TENANT_ID AS TENANT_ID, W_CODE_D.DATASOURCE_NUM_ID AS DATASOURCE_NUM_ID, W_CODE_D.SOURCE_CODE AS SOURCE_CODE, W_CODE_D.CATEGORY AS CATEGORY, W_CODE_D.LANGUAGE_CODE AS LANGUAGE_CODE FROM W_CODE_D
WHERE
W_CODE_D.CATEGORY IN () ORDER BY DATASOURCE_NUM_ID,SOURCE_CODE,CATEGORY,LANGUAGE_CODE,SOURCE_NAME_1,SOURCE_NAME_2,MASTER_DATASOURCE_NUM_ID,MASTER_CODE,MASTER_VALUE,W_INSERT_DT,TENANT_ID
LKPDP_3> TE_7212 Increasing [Index Cache] size for transformation [mplt_ADI_Codes.Lkp_W_CODE_D] from [1000000] to [4734976].
LKPDP_3> TE_7212 Increasing [Data Cache] size for transformation [mplt_ADI_Codes.Lkp_W_CODE_D] from [2000000] to [2007040].
READER_1_1_1> BLKR_16019 Read [625] rows, read [0] error rows for source table [FND_ID_FLEX_SEGMENTS] instance name [mplt_BC_ORA_Codes_GL_Account_Segments.FND_ID_FLEX_SEGMENTS]
READER_1_1_1> BLKR_16008 Reader run completed.
LKPDP_3> TM_6660 Total Buffer Pool size is 609824 bytes and Block size is 65536 bytes.
LKPDP_3:READER_1_1> DBG_21438 Reader: Source is [orcl], user [DAC_REP]
LKPDP_3:READER_1_1> BLKR_16051 Source database connection [DataWarehouse] code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
LKPDP_3:READER_1_1> BLKR_16003 Initialization completed successfully.
LKPDP_3:READER_1_1> BLKR_16007 Reader run started.
LKPDP_3:READER_1_1> RR_4049 SQL Query issued to database : (Wed Nov 18 02:49:18 2009)
LKPDP_3:READER_1_1> CMN_1761 Timestamp Event: [Wed Nov 18 02:49:18 2009]
LKPDP_3:READER_1_1> RR_4035 SQL Error [
ORA-00936: missing expression
Could you please suggest what the issue might be and how it can be fixed?
Many thanks,
Kiran
I have continued the related details in the following thread:
Mapping Parameter $$CATEGORY not included in the parameter file (7.9.6.1)
Apologies for the inconvenience.
Thanks,
Kiran -
Error While running the ETL Load in DAC (BI Financial Analytics)
Hi All,
I have installed and configured BI Applications 7.9.5 and Informatica 8.1.1. The first time we ran the ETL load in DAC it failed, even though every Test Connection was successful. We are getting the error message below.
The log file which I pasted below is from the path
/u01/app/oracle/product/Informatica/PowerCenter8.1.1/server/infa_shared/SessLogs
SDE_ORAR12_Adaptor.SDE_ORA_GL_AP_LinkageInformation_Extract_Full.log
DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
DIRECTOR> VAR_27028 Use override value [ORA_R12] for session parameter:[$DBConnection_OLTP].
DIRECTOR> VAR_27028 Use override value [9] for mapping parameter:[$$DATASOURCE_NUM_ID].
DIRECTOR> VAR_27028 Use override value ['Y'] for mapping parameter:[$$FILTER_BY_LEDGER_ID].
DIRECTOR> VAR_27028 Use override value ['N'] for mapping parameter:[$$FILTER_BY_LEDGER_TYPE].
DIRECTOR> VAR_27028 Use override value [04/02/2007] for mapping parameter:[$$INITIAL_EXTRACT_DATE].
DIRECTOR> VAR_27028 Use override value [] for mapping parameter:[$$LAST_EXTRACT_DATE].
DIRECTOR> VAR_27028 Use override value [1] for mapping parameter:[$$LEDGER_ID_LIST].
DIRECTOR> VAR_27028 Use override value ['NONE'] for mapping parameter:[$$LEDGER_TYPE_LIST].
DIRECTOR> TM_6014 Initializing session [SDE_ORA_GL_AP_LinkageInformation_Extract_Full] at [Thu Feb 12 12:49:33 2009]
DIRECTOR> TM_6683 Repository Name: [DEV_Oracle_BI_DW_Rep]
DIRECTOR> TM_6684 Server Name: [DEV_Oracle_BI_DW_Rep_Integration_Service]
DIRECTOR> TM_6686 Folder: [SDE_ORAR12_Adaptor]
DIRECTOR> TM_6685 Workflow: [SDE_ORA_GL_AP_LinkageInformation_Extract_Full]
DIRECTOR> TM_6101 Mapping name: SDE_ORA_GL_AP_LinkageInformation_Extract [version 1]
DIRECTOR> TM_6827 [u01/app/oracle/product/Informatica/PowerCenter8.1.1/server/infa_shared/Storage] will be used as storage directory for session [SDE_ORA_GL_AP_LinkageInformation_Extract_Full].
DIRECTOR> CMN_1805 Recovery cache will be deleted when running in normal mode.
DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
DIRECTOR> TM_6708 Using configuration property [SiebelUnicodeDB,apps@devr12 bawdev@devbi]
DIRECTOR> TM_6703 Session [SDE_ORA_GL_AP_LinkageInformation_Extract_Full] is run by 64-bit Integration Service [node01_oratestbi], version [8.1.1 SP4], build [0817].
MANAGER> PETL_24058 Running Partition Group [1].
MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
MANAGER> PETL_24001 Parallel Pipeline Engine running.
MANAGER> PETL_24003 Initializing session run.
MAPPING> CMN_1569 Server Mode: [ASCII]
MAPPING> CMN_1570 Server Code page: [ISO 8859-1 Western European]
MAPPING> TM_6151 Session Sort Order: [Binary]
MAPPING> TM_6156 Using LOW precision decimal arithmetic
MAPPING> TM_6180 Deadlock retry logic will not be implemented.
MAPPING> TM_6307 DTM Error Log Disabled.
MAPPING> TE_7022 TShmWriter: Initialized
MAPPING> TE_7004 Transformation Parse Warning; transformation continues...
MAPPING> TE_7004 Transformation Parse Warning; transformation continues...
MAPPING> TE_7004 Transformation Parse Warning; transformation continues...
MAPPING> TE_7004 Transformation Parse Warning; transformation continues...
MAPPING> TE_7004 Transformation Parse Warning; transformation continues...
MAPPING> TE_7004 Transformation Parse Warning; transformation continues...
MAPPING> TM_6007 DTM initialized successfully for session [SDE_ORA_GL_AP_LinkageInformation_Extract_Full]
DIRECTOR> PETL_24033 All DTM Connection Info: [<NONE>].
MANAGER> PETL_24004 Starting pre-session tasks. : (Thu Feb 12 12:49:34 2009)
MANAGER> PETL_24027 Pre-session task completed successfully. : (Thu Feb 12 12:49:34 2009)
DIRECTOR> PETL_24006 Starting data movement.
MAPPING> TM_6660 Total Buffer Pool size is 12582912 bytes and Block size is 128000 bytes.
READER_1_1_1> DBG_21438 Reader: Source is [devr12.tessco.com], user [apps]
READER_1_1_1> BLKR_16003 Initialization completed successfully.
WRITER_1_*_1> WRT_8146 Writer: Target is database [DEVBI], user [bawdev], bulk mode [ON]
WRITER_1_*_1> WRT_8106 Warning! Bulk Mode session - recovery is not guaranteed.
WRITER_1_*_1> WRT_8124 Target Table W_GL_LINKAGE_INFORMATION_GS :SQL INSERT statement:
INSERT INTO W_GL_LINKAGE_INFORMATION_GS(SOURCE_DISTRIBUTION_ID,JOURNAL_LINE_INTEGRATION_ID,LEDGER_ID,LEDGER_TYPE,DISTRIBUTION_SOURCE,JE_BATCH_NAME,JE_HEADER_NAME,JE_LINE_NUM,POSTED_ON_DT,SLA_TRX_INTEGRATION_ID,DATASOURCE_NUM_ID) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [W_GL_LINKAGE_INFORMATION_GS]
WRITER_1_*_1> WRT_8003 Writer initialization complete.
READER_1_1_1> BLKR_16007 Reader run started.
WRITER_1_*_1> WRT_8005 Writer run started.
WRITER_1_*_1> WRT_8158
*****START LOAD SESSION*****
Load Start Time: Thu Feb 12 12:49:34 2009
Target tables:
W_GL_LINKAGE_INFORMATION_GS
READER_1_1_1> RR_4029 SQ Instance [SQ_XLA_AE_LINES] User specified SQL Query [SELECT
DLINK.SOURCE_DISTRIBUTION_ID_NUM_1 DISTRIBUTION_ID,
DLINK.SOURCE_DISTRIBUTION_TYPE SOURCE_TABLE,
DLINK.ACCOUNTING_LINE_CODE LINE_CODE,
AELINE.ACCOUNTING_CLASS_CODE,
GLIMPREF.JE_HEADER_ID JE_HEADER_ID,
GLIMPREF.JE_LINE_NUM JE_LINE_NUM,
AELINE.AE_HEADER_ID AE_HEADER_ID,
AELINE.AE_LINE_NUM AE_LINE_NUM,
T.LEDGER_ID LEDGER_ID,
T.LEDGER_CATEGORY_CODE LEDGER_TYPE,
JBATCH.NAME BATCH_NAME,
JHEADER.NAME HEADER_NAME,
PER.END_DATE
FROM XLA_DISTRIBUTION_LINKS DLINK
, GL_IMPORT_REFERENCES GLIMPREF
, XLA_AE_LINES AELINE
, GL_JE_HEADERS JHEADER
, GL_JE_BATCHES JBATCH
, GL_LEDGERS T
, GL_PERIODS PER
WHERE DLINK.SOURCE_DISTRIBUTION_TYPE IN
( 'AP_INV_DIST', 'AP_PMT_DIST'
, 'AP_PREPAY')
AND DLINK.APPLICATION_ID = 200
AND AELINE.APPLICATION_ID = 200
AND AELINE.GL_SL_LINK_TABLE = GLIMPREF.GL_SL_LINK_TABLE
AND AELINE.GL_SL_LINK_ID = GLIMPREF.GL_SL_LINK_ID
AND AELINE.AE_HEADER_ID = DLINK.AE_HEADER_ID
AND AELINE.AE_LINE_NUM = DLINK.AE_LINE_NUM
AND GLIMPREF.JE_HEADER_ID = JHEADER.JE_HEADER_ID
AND JHEADER.JE_BATCH_ID = JBATCH.JE_BATCH_ID
AND JHEADER.LEDGER_ID = T.LEDGER_ID
AND JHEADER.STATUS = 'P'
AND T.PERIOD_SET_NAME = PER.PERIOD_SET_NAME
AND JHEADER.PERIOD_NAME = PER.PERIOD_NAME
AND JHEADER.CREATION_DATE >=
TO_DATE('04/02/2007 00:00:00'
, 'MM/DD/YYYY HH24:MI:SS' )
AND DECODE('Y', 'Y', T.LEDGER_ID, 1) IN (1)
AND DECODE('N', 'Y', T.LEDGER_CATEGORY_CODE, 'NONE') IN ('NONE')]
READER_1_1_1> RR_4049 SQL Query issued to database : (Thu Feb 12 12:49:34 2009)
READER_1_1_1> CMN_1761 Timestamp Event: [Thu Feb 12 12:49:34 2009]
READER_1_1_1> RR_4035 SQL Error [
ORA-01114: IO error writing block to file 513 (block # 328465)
ORA-27072: File I/O error
Linux-x86_64 Error: 28: No space left on device
Additional information: 4
Additional information: 328465
Additional information: -1
ORA-01114: IO error writing block to file 513 (block # 328465)
ORA-27072: File I/O error
Linux-x86_64 Error: 28: No space left on device
Additional information: 4
Additional information: 328465
Additional information: -1
Database driver error...
Function Name : Execute
SQL Stmt : SELECT
DLINK.SOURCE_DISTRIBUTION_ID_NUM_1 DISTRIBUTION_ID,
DLINK.SOURCE_DISTRIBUTION_TYPE SOURCE_TABLE,
DLINK.ACCOUNTING_LINE_CODE LINE_CODE,
AELINE.ACCOUNTING_CLASS_CODE,
GLIMPREF.JE_HEADER_ID JE_HEADER_ID,
GLIMPREF.JE_LINE_NUM JE_LINE_NUM,
AELINE.AE_HEADER_ID AE_HEADER_ID,
AELINE.AE_LINE_NUM AE_LINE_NUM,
T.LEDGER_ID LEDGER_ID,
T.LEDGER_CATEGORY_CODE LEDGER_TYPE,
JBATCH.NAME BATCH_NAME,
JHEADER.NAME HEADER_NAME,
PER.END_DATE
FROM XLA_DISTRIBUTION_LINKS DLINK
, GL_IMPORT_REFERENCES GLIMPREF
, XLA_AE_LINES AELINE
, GL_JE_HEADERS JHEADER
, GL_JE_BATCHES JBATCH
, GL_LEDGERS T
, GL_PERIODS PER
WHERE DLINK.SOURCE_DISTRIBUTION_TYPE IN
( 'AP_INV_DIST', 'AP_PMT_DIST'
, 'AP_PREPAY')
AND DLINK.APPLICATION_ID = 200
AND AELINE.APPLICATION_ID = 200
AND AELINE.GL_SL_LINK_TABLE = GLIMPREF.GL_SL_LINK_TABLE
AND AELINE.GL_SL_LINK_ID = GLIMPREF.GL_SL_LINK_ID
AND AELINE.AE_HEADER_ID = DLINK.AE_HEADER_ID
AND AELINE.AE_LINE_NUM = DLINK.AE_LINE_NUM
AND GLIMPREF.JE_HEADER_ID = JHEADER.JE_HEADER_ID
AND JHEADER.JE_BATCH_ID = JBATCH.JE_BATCH_ID
AND JHEADER.LEDGER_ID = T.LEDGER_ID
AND JHEADER.STATUS = 'P'
AND T.PERIOD_SET_NAME = PER.PERIOD_SET_NAME
AND JHEADER.PERIOD_NAME = PER.PERIOD_NAME
AND JHEADER.CREATION_DATE >=
TO_DATE('04/02/2007 00:00:00'
, 'MM/DD/YYYY HH24:MI:SS' )
AND DECODE('Y', 'Y', T.LEDGER_ID, 1) IN (1)
AND DECODE('N', 'Y', T.LEDGER_CATEGORY_CODE, 'NONE') IN ('NONE')
Oracle Fatal Error
Database driver error...
Function Name : Execute
SQL Stmt : SELECT
DLINK.SOURCE_DISTRIBUTION_ID_NUM_1 DISTRIBUTION_ID,
DLINK.SOURCE_DISTRIBUTION_TYPE SOURCE_TABLE,
DLINK.ACCOUNTING_LINE_CODE LINE_CODE,
AELINE.ACCOUNTING_CLASS_CODE,
GLIMPREF.JE_HEADER_ID JE_HEADER_ID,
GLIMPREF.JE_LINE_NUM JE_LINE_NUM,
AELINE.AE_HEADER_ID AE_HEADER_ID,
AELINE.AE_LINE_NUM AE_LINE_NUM,
T.LEDGER_ID LEDGER_ID,
T.LEDGER_CATEGORY_CODE LEDGER_TYPE,
JBATCH.NAME BATCH_NAME,
JHEADER.NAME HEADER_NAME,
PER.END_DATE
FROM XLA_DISTRIBUTION_LINKS DLINK
, GL_IMPORT_REFERENCES GLIMPREF
, XLA_AE_LINES AELINE
, GL_JE_HEADERS JHEADER
, GL_JE_BATCHES JBATCH
, GL_LEDGERS T
, GL_PERIODS PER
WHERE DLINK.SOURCE_DISTRIBUTION_TYPE IN
( 'AP_INV_DIST', 'AP_PMT_DIST'
, 'AP_PREPAY')
AND DLINK.APPLICATION_ID = 200
AND AELINE.APPLICATION_ID = 200
AND AELINE.GL_SL_LINK_TABLE = GLIMPREF.GL_SL_LINK_TABLE
AND AELINE.GL_SL_LINK_ID = GLIMPREF.GL_SL_LINK_ID
AND AELINE.AE_HEADER_ID = DLINK.AE_HEADER_ID
AND AELINE.AE_LINE_NUM = DLINK.AE_LINE_NUM
AND GLIMPREF.JE_HEADER_ID = JHEADER.JE_HEADER_ID
AND JHEADER.JE_BATCH_ID = JBATCH.JE_BATCH_ID
AND JHEADER.LEDGER_ID = T.LEDGER_ID
AND JHEADER.STATUS = 'P'
AND T.PERIOD_SET_NAME = PER.PERIOD_SET_NAME
AND JHEADER.PERIOD_NAME = PER.PERIOD_NAME
AND JHEADER.CREATION_DATE >=
TO_DATE('04/02/2007 00:00:00'
, 'MM/DD/YYYY HH24:MI:SS' )
AND DECODE('Y', 'Y', T.LEDGER_ID, 1) IN (1)
AND DECODE('N', 'Y', T.LEDGER_CATEGORY_CODE, 'NONE') IN ('NONE')
Oracle Fatal Error].
READER_1_1_1> CMN_1761 Timestamp Event: [Thu Feb 12 12:49:34 2009]
READER_1_1_1> BLKR_16004 ERROR: Prepare failed.
WRITER_1_*_1> WRT_8333 Rolling back all the targets due to fatal session error.
WRITER_1_*_1> WRT_8325 Final rollback executed for the target [W_GL_LINKAGE_INFORMATION_GS] at end of load
WRITER_1_*_1> WRT_8035 Load complete time: Thu Feb 12 12:49:34 2009
LOAD SUMMARY
============
WRT_8036 Target: W_GL_LINKAGE_INFORMATION_GS (Instance Name: [W_GL_LINKAGE_INFORMATION_GS])
WRT_8044 No data loaded for this target
WRITER_1__1> WRT_8043 ****END LOAD SESSION*****
MANAGER> PETL_24031
***** RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****
Thread [READER_1_1_1] created for [the read stage] of partition point [SQ_XLA_AE_LINES] has completed: Total Run Time = [0.673295] secs, Total Idle Time = [0.000000] secs, Busy Percentage = [100.000000].
Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [SQ_XLA_AE_LINES] has completed. The total run time was insufficient for any meaningful statistics.
Thread [WRITER_1_*_1] created for [the write stage] of partition point [W_GL_LINKAGE_INFORMATION_GS] has completed. The total run time was insufficient for any meaningful statistics.
MANAGER> PETL_24005 Starting post-session tasks. : (Thu Feb 12 12:49:35 2009)
MANAGER> PETL_24029 Post-session task completed successfully. : (Thu Feb 12 12:49:35 2009)
MAPPING> TM_6018 Session [SDE_ORA_GL_AP_LinkageInformation_Extract_Full] run completed with [0] row transformation errors.
MANAGER> PETL_24002 Parallel Pipeline Engine finished.
DIRECTOR> PETL_24013 Session run completed with failure.
DIRECTOR> TM_6022
SESSION LOAD SUMMARY
================================================
DIRECTOR> TM_6252 Source Load Summary.
DIRECTOR> CMN_1740 Table: [SQ_XLA_AE_LINES] (Instance Name: [SQ_XLA_AE_LINES])
Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
DIRECTOR> TM_6253 Target Load Summary.
DIRECTOR> CMN_1740 Table: [W_GL_LINKAGE_INFORMATION_GS] (Instance Name: [W_GL_LINKAGE_INFORMATION_GS])
Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
DIRECTOR> TM_6023
===================================================
DIRECTOR> TM_6020 Session [SDE_ORA_GL_AP_LinkageInformation_Extract_Full] completed at [Thu Feb 12 12:49:36 2009]
Thanks in Advance,
Prashanth
You need to increase the temp tablespace.
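The underlying error in the log is `ORA-27072 ... No space left on device`, i.e. Oracle could not extend a tempfile while executing the large hash join. The fix above can be sketched as SQL run by a DBA; the tempfile path and sizes are illustrative, not from this system, and `dba_temp_free_space` is available on Oracle 11g and later:

```sql
-- Check how much temp space is currently free (11g+):
SELECT tablespace_name, free_space
FROM   dba_temp_free_space;

-- Extend the temp tablespace by adding a tempfile
-- (path and sizes are illustrative):
ALTER TABLESPACE temp
  ADD TEMPFILE '/u01/app/oracle/oradata/orcl/temp02.dbf'
  SIZE 2G AUTOEXTEND ON NEXT 256M MAXSIZE 8G;
```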
-
Error in SDE_ORA_AbsenceEvent_Full workflow
I am getting a task failure on the SDE_ORA_AbsenceEvent task when running the Human Resources - Oracle R1212 execution plan from DAC. I have copied and pasted the session log below; the error I see within it is 'ORA-01846: not a valid day of the week', and the offending SQL is also in the session log. I am running the execution plan out of the box in order to detect any failures before beginning any configuration work. The Informatica workflow that fails is SDE_ORA_AbsenceEvent_Full in the SDE_ORAR1212_Adaptor folder; the workflow log shows the error on the third-from-last and last lines.
Please help, because I cannot find much about this online beyond reports of performance issues with this workflow caused by a certain API call within the offending SQL.
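One thing worth checking, based only on what is visible in the log below: the override for `$$HR_ABSNC_EXTRACT_DATE` uses the format mask `'YYYY-MM-D HH24:MI:SS'`. In Oracle, `D` is the day-of-the-week element (1-7), not the day of the month, so converting `'1980-01-01 00:00:00'` with that mask would plausibly raise exactly this ORA-01846. A hedged sketch of the difference (assuming the mask should be `DD`):

```sql
-- 'D' is the day-of-week format element (1-7), so this raises
-- ORA-01846: not a valid day of the week:
--   SELECT TO_DATE('1980-01-01 00:00:00', 'YYYY-MM-D HH24:MI:SS') FROM dual;

-- With the two-digit day-of-month element 'DD' the conversion succeeds:
SELECT TO_DATE('1980-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') FROM dual;
```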
Thanks and the session log now follows:
DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
DIRECTOR> VAR_27028 Use override value [ORA_R1212] for session parameter:[$DBConnection_OLTP].
DIRECTOR> VAR_27028 Use override value [ORA_R1212.DATAWAREHOUSE.SDE_ORAR1212_Adaptor.SDE_ORA_AbsenceEvent_Full.log] for session parameter:[$PMSessionLogFile].
DIRECTOR> VAR_27028 Use override value [27] for mapping parameter:[$$DATASOURCE_NUM_ID].
DIRECTOR> VAR_27028 Use override value [TO_DATE('1980-01-01 00:00:00', 'YYYY-MM-D HH24:MI:SS')] for mapping parameter:[mplt_BC_ORA_AbsenceEvent.$$HR_ABSNC_EXTRACT_DATE].
DIRECTOR> VAR_27028 Use override value [] for mapping parameter:[mplt_BC_ORA_AbsenceEvent.$$LAST_EXTRACT_DATE].
DIRECTOR> VAR_27028 Use override value [DEFAULT] for mapping parameter:[$$TENANT_ID].
DIRECTOR> TM_6014 Initializing session [SDE_ORA_AbsenceEvent_Full] at [Tue Nov 16 09:01:17 2010].
DIRECTOR> TM_6683 Repository Name: [p2]
DIRECTOR> TM_6684 Server Name: [ip2]
DIRECTOR> TM_6686 Folder: [SDE_ORAR1212_Adaptor]
DIRECTOR> TM_6685 Workflow: [SDE_ORA_AbsenceEvent_Full] Run Instance Name: [] Run Id: [979]
DIRECTOR> TM_6101 Mapping name: SDE_ORA_AbsenceEvent [version 1].
DIRECTOR> TM_6963 Pre 85 Timestamp Compatibility is Enabled
DIRECTOR> TM_6964 Date format for the Session is [MM/DD/YYYY HH24:MI:SS]
DIRECTOR> TM_6827 [mnt/disk07/informatica/server/infa_shared/Storage] will be used as storage directory for session [SDE_ORA_AbsenceEvent_Full].
DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
DIRECTOR> TM_6703 Session [SDE_ORA_AbsenceEvent_Full] is run by 64-bit Integration Service [node01_tstrdobiap168], version [8.6.1 HotFix11], build [0504].
MANAGER> PETL_24058 Running Partition Group [1].
MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
MANAGER> PETL_24001 Parallel Pipeline Engine running.
MANAGER> PETL_24003 Initializing session run.
MAPPING> CMN_1569 Server Mode: [UNICODE]
MAPPING> CMN_1570 Server Code page: [UTF-8 encoding of Unicode]
MAPPING> TM_6151 The session sort order is [Binary].
MAPPING> TM_6185 Warning. Code page validation is disabled in this session.
MAPPING> TM_6156 Using low precision processing.
MAPPING> TM_6180 Deadlock retry logic will not be implemented.
MAPPING> TM_6307 DTM error log disabled.
MAPPING> TE_7022 TShmWriter: Initialized
MAPPING> TM_6007 DTM initialized successfully for session [SDE_ORA_AbsenceEvent_Full]
DIRECTOR> PETL_24033 All DTM Connection Info: [<NONE>].
MANAGER> PETL_24004 Starting pre-session tasks. : (Tue Nov 16 09:01:17 2010)
MANAGER> PETL_24027 Pre-session task completed successfully. : (Tue Nov 16 09:01:17 2010)
DIRECTOR> PETL_24006 Starting data movement.
MAPPING> TM_6660 Total Buffer Pool size is 32000000 bytes and Block size is 128000 bytes.
LKPDP_1> DBG_21693 mplt_SA_ORA_AbsenceEvent.Lkp_Absence_Status{{SRT}}: Cache size for partition [1] = [19922944]
LKPDP_1> SORT_40419 For Transformation [mplt_SA_ORA_AbsenceEvent.Lkp_Absence_Status{{SRT}}], memory used is [19922944] bytes and row size is 300 bytes.
LKPDP_1> TE_7212 Increasing [Index Cache] size for transformation [mplt_SA_ORA_AbsenceEvent.Lkp_Absence_Status] from [1048576] to [1050000].
READER_1_1_1> DBG_21438 Reader: Source is [aebts02], user [apps]
READER_1_1_1> BLKR_16051 Source database connection [ORA_R1212] code page: [UTF-8 encoding of Unicode]
READER_1_1_1> BLKR_16003 Initialization completed successfully.
WRITER_1_*_1> WRT_8147 Writer: Target is database [orcl], user [edw], bulk mode [OFF]
WRITER_1_*_1> WRT_8221 Target database connection [DataWarehouse] code page: [UTF-8 encoding of Unicode]
WRITER_1_*_1> WRT_8124 Target Table W_ABSENCE_EVENT_DS :SQL INSERT statement:
INSERT INTO W_ABSENCE_EVENT_DS(ABSENCE_BEGIN_DT,ABSENCE_END_DT,ABSENCE_BEGIN_TM,ABSENCE_END_TM,ORIGINAL_BEGIN_DT,NOTIFIED_DT,APPROVED_DT,ABSENCE_REASON_CODE,ABSENCE_TYPE_CODE,ABSENCE_CATEGORY_CODE,APPROVAL_STATUS,STATUS_CODE,STATUS_NAME,PAID_UNPAID_CODE,DAYS_DURATION,HOURS_DURATION,DAY_DURATION_START,HOURS_DURATION_START,DAY_DURATION_MID,HOURS_DURATION_MID,DAY_DURATION_END,HOURS_DURATION_END,DAYS_NOTIFIED,ABSENCE_TYPE_RSN_ID,ABSENCE_STATUS_ID,CREATED_BY_ID,EMPLOYEE_ID,ASSIGNMENT_INTEGRATION_ID,CHANGED_BY_ID,CREATED_ON_DT,CHANGED_ON_DT,AUX1_CHANGED_ON_DT,AUX2_CHANGED_ON_DT,AUX3_CHANGED_ON_DT,AUX4_CHANGED_ON_DT,DELETE_FLG,DATASOURCE_NUM_ID,INTEGRATION_ID,TENANT_ID,X_CUSTOM) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
WRITER_1_*_1> WRT_8124 Target Table W_ABSENCE_EVENT_DS :SQL DELETE statement:
DELETE FROM W_ABSENCE_EVENT_DS WHERE DATASOURCE_NUM_ID = ? AND INTEGRATION_ID = ?
WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [W_ABSENCE_EVENT_DS]
WRITER_1_*_1> WRT_8003 Writer initialization complete.
READER_1_1_1> BLKR_16007 Reader run started.
READER_1_1_1> RR_4029 SQ Instance [mplt_BC_ORA_AbsenceEvent.SQ_PER_ABSENCE_ATTENDANCES] User specified SQL Query [SELECT
/*+ use_hash(per_absence_attendances per_all_assignments_f
per_absence_attendance_types per_abs_attendance_reasons )*/
tab.absence_attendance_id absence_attendance_id
,tab.date_start absence_begin_dt
,NVL(tab.date_end, to_date('31-DEC-4712','DD-MON-YYYY')) absence_end_dt
,to_date(to_char(tab.date_start, 'YYYYMMDD ') || NVL(tab.time_start,'00:00'), 'YYYYMMDD HH24:MI') absence_begin_tm
,to_date(to_char(tab.date_end, 'YYYYMMDD ') || NVL(tab.time_end,'00:00'), 'YYYYMMDD HH24:MI') absence_end_tm
,tab.date_start original_begin_dt
,tab.date_notification notified_dt
,to_date(null) approved_dt
,tab.name absence_reason_code
,tab.absence_attendance_type_id absence_attendance_type_id
,tab.absence_category absence_category_code
,tab.approval_status approval_status
,null status_code
,null paid_unpaid_code
,to_char(tab.absence_attendance_type_id) || '~' || NVL(tab.name,'Unspecified') absence_type_rsn_id
,'Unspecified' absence_status_id
,tab.person_id employee_id
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.days_duration IS NOT NULL
THEN tab.days_duration
WHEN tab.utl_hours_in_day > 0
THEN tab.hours_duration / tab.utl_hours_in_day
END days_duration
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.hours_duration IS NOT NULL
THEN tab.hours_duration
ELSE tab.days_duration * tab.utl_hours_in_day
END hours_duration
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.days_duration IS NOT NULL
THEN tab.days_duration / tab.no_days
WHEN tab.utl_hours_in_day > 0
THEN (tab.hours_duration / tab.utl_hours_in_day )/ tab.no_days
END day_duration_start
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.hours_duration IS NOT NULL
THEN tab.hours_duration / tab.no_days
ELSE (tab.days_duration * tab.utl_hours_in_day)/ tab.no_days
END hours_duration_start
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.days_duration IS NOT NULL
THEN tab.days_duration / tab.no_days
WHEN tab.utl_hours_in_day > 0
THEN (tab.hours_duration / tab.utl_hours_in_day )/ tab.no_days
END day_duration_mid
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.hours_duration IS NOT NULL
THEN tab.hours_duration / tab.no_days
ELSE (tab.days_duration * tab.utl_hours_in_day)/ tab.no_days
END hours_duration_mid
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.days_duration IS NOT NULL
THEN tab.days_duration / tab.no_days
WHEN tab.utl_hours_in_day > 0
THEN (tab.hours_duration / tab.utl_hours_in_day )/ tab.no_days
END day_duration_end
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.hours_duration IS NOT NULL
THEN tab.hours_duration / tab.no_days
ELSE (tab.days_duration * tab.utl_hours_in_day)/ tab.no_days
END hours_duration_end
,CASE WHEN tab.date_notification < tab.date_start
THEN tab.date_start - tab.date_notification
ELSE 0
END days_notified
,tab.assignment_id assignment_integration_id
FROM
(SELECT
paa.absence_attendance_id
,paa.date_start
,NVL(paa.date_end, to_date('31-DEC-4712','DD-MON-YYYY'))
date_end
,CASE when paa.date_start > TRUNC(sysdate) then 0
WHEN paa.date_start <= NVL(paa.date_end, trunc(sysdate))
THEN NVL(paa.date_end, trunc(sysdate)) - paa.date_start + 1
ELSE to_number(null)
END no_days
,paa.date_notification
,paa.person_id
,pat.absence_category
,par.name
,paa.absence_attendance_type_id
,paa.abs_attendance_reason_id
,paa.time_start
,paa.time_end
,paa.approval_status
,asg.assignment_id
,CASE WHEN paa.absence_days IS NOT NULL
THEN paa.absence_days
WHEN paa.absence_hours IS NOT NULL OR
(paa.date_start = paa.date_end AND
paa.time_start IS NOT NULL AND
paa.time_end IS NOT NULL)
THEN to_number(null)
WHEN paa.date_start < paa.date_end
THEN paa.date_end - paa.date_start + 1
when paa.absence_days is null and paa.absence_hours is null
then nvl(paa.date_end,trunc(sysdate))-paa.date_start + 1
END days_duration
,CASE WHEN paa.absence_hours IS NOT NULL OR
(paa.date_start = paa.date_end AND
paa.time_start IS NOT NULL AND
paa.time_end IS NOT NULL)
THEN NVL(paa.absence_hours, 24 * (to_date(NVL(paa.time_end,'00:00') , 'HH24:MI') -
to_date(NVL(paa.time_start,'00:00'), 'HH24:MI')))
END hours_duration
, case when paa.date_start > TRUNC(sysdate) then 0
else hri_bpl_utilization.convert_days_to_hours
(asg.assignment_id, asg.business_group_id, paa.date_start, TRUNC(sysdate), 1)
end utl_hours_in_day
FROM
per_absence_attendances paa
,per_absence_attendance_types pat
,per_abs_attendance_reasons par
,per_all_assignments_f asg
WHERE paa.absence_attendance_type_id = pat.absence_attendance_type_id
AND paa.abs_attendance_reason_id = par.abs_attendance_reason_id (+)
AND paa.person_id = asg.person_id
AND asg.primary_flag = 'Y'
AND asg.assignment_type IN ('E','C')
AND paa.date_start BETWEEN asg.effective_start_date
AND asg.effective_end_date
AND paa.date_start IS NOT NULL
AND NVL(paa.date_end, to_date('31-DEC-4712','DD-MON-YYYY')) >= TO_DATE('1980-01-01 00:00:00', 'YYYY-MM-D HH24:MI:SS') ) tab]
READER_1_1_1> RR_4049 SQL Query issued to database : (Tue Nov 16 09:01:18 2010)
WRITER_1_*_1> WRT_8005 Writer run started.
WRITER_1_*_1> WRT_8158
*****START LOAD SESSION*****
Load Start Time: Tue Nov 16 09:01:18 2010
Target tables:
W_ABSENCE_EVENT_DS
READER_1_1_1> CMN_1761 Timestamp Event: [Tue Nov 16 09:01:18 2010]
READER_1_1_1> RR_4035 SQL Error [
ORA-01846: not a valid day of the week
Database driver error...
Function Name : Execute
SQL Stmt : SELECT
/*+ use_hash(per_absence_attendances per_all_assignments_f
per_absence_attendance_types per_abs_attendance_reasons )*/
tab.absence_attendance_id absence_attendance_id
,tab.date_start absence_begin_dt
,NVL(tab.date_end, to_date('31-DEC-4712','DD-MON-YYYY')) absence_end_dt
,to_date(to_char(tab.date_start, 'YYYYMMDD ') || NVL(tab.time_start,'00:00'), 'YYYYMMDD HH24:MI') absence_begin_tm
,to_date(to_char(tab.date_end, 'YYYYMMDD ') || NVL(tab.time_end,'00:00'), 'YYYYMMDD HH24:MI') absence_end_tm
,tab.date_start original_begin_dt
,tab.date_notification notified_dt
,to_date(null) approved_dt
,tab.name absence_reason_code
,tab.absence_attendance_type_id absence_attendance_type_id
,tab.absence_category absence_category_code
,tab.approval_status approval_status
,null status_code
,null paid_unpaid_code
,to_char(tab.absence_attendance_type_id) || '~' || NVL(tab.name,'Unspecified') absence_type_rsn_id
,'Unspecified' absence_status_id
,tab.person_id employee_id
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.days_duration IS NOT NULL
THEN tab.days_duration
WHEN tab.utl_hours_in_day > 0
THEN tab.hours_duration / tab.utl_hours_in_day
END days_duration
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.hours_duration IS NOT NULL
THEN tab.hours_duration
ELSE tab.days_duration * tab.utl_hours_in_day
END hours_duration
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.days_duration IS NOT NULL
THEN tab.days_duration / tab.no_days
WHEN tab.utl_hours_in_day > 0
THEN (tab.hours_duration / tab.utl_hours_in_day )/ tab.no_days
END day_duration_start
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.hours_duration IS NOT NULL
THEN tab.hours_duration / tab.no_days
ELSE (tab.days_duration * tab.utl_hours_in_day)/ tab.no_days
END hours_duration_start
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.days_duration IS NOT NULL
THEN tab.days_duration / tab.no_days
WHEN tab.utl_hours_in_day > 0
THEN (tab.hours_duration / tab.utl_hours_in_day )/ tab.no_days
END day_duration_mid
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.hours_duration IS NOT NULL
THEN tab.hours_duration / tab.no_days
ELSE (tab.days_duration * tab.utl_hours_in_day)/ tab.no_days
END hours_duration_mid
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.days_duration IS NOT NULL
THEN tab.days_duration / tab.no_days
WHEN tab.utl_hours_in_day > 0
THEN (tab.hours_duration / tab.utl_hours_in_day )/ tab.no_days
END day_duration_end
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.hours_duration IS NOT NULL
THEN tab.hours_duration / tab.no_days
ELSE (tab.days_duration * tab.utl_hours_in_day)/ tab.no_days
END hours_duration_end
,CASE WHEN tab.date_notification < tab.date_start
THEN tab.date_start - tab.date_notification
ELSE 0
END days_notified
,tab.assignment_id assignment_integration_id
FROM
(SELECT
paa.absence_attendance_id
,paa.date_start
,NVL(paa.date_end, to_date('31-DEC-4712','DD-MON-YYYY'))
date_end
,CASE when paa.date_start > TRUNC(sysdate) then 0
WHEN paa.date_start <= NVL(paa.date_end, trunc(sysdate))
THEN NVL(paa.date_end, trunc(sysdate)) - paa.date_start + 1
ELSE to_number(null)
END no_days
,paa.date_notification
,paa.person_id
,pat.absence_category
,par.name
,paa.absence_attendance_type_id
,paa.abs_attendance_reason_id
,paa.time_start
,paa.time_end
,paa.approval_status
,asg.assignment_id
,CASE WHEN paa.absence_days IS NOT NULL
THEN paa.absence_days
WHEN paa.absence_hours IS NOT NULL OR
(paa.date_start = paa.date_end AND
paa.time_start IS NOT NULL AND
paa.time_end IS NOT NULL)
THEN to_number(null)
WHEN paa.date_start < paa.date_end
THEN paa.date_end - paa.date_start + 1
when paa.absence_days is null and paa.absence_hours is null
then nvl(paa.date_end,trunc(sysdate))-paa.date_start + 1
END days_duration
,CASE WHEN paa.absence_hours IS NOT NULL OR
(paa.date_start = paa.date_end AND
paa.time_start IS NOT NULL AND
paa.time_end IS NOT NULL)
THEN NVL(paa.absence_hours, 24 * (to_date(NVL(paa.time_end,'00:00') , 'HH24:MI') -
to_date(NVL(paa.time_start,'00:00'), 'HH24:MI')))
END hours_duration
, case when paa.date_start > TRUNC(sysdate) then 0
else hri_bpl_utilization.convert_days_to_hours
(asg.assignment_id, asg.business_group_id, paa.date_start, TRUNC(sysdate), 1)
end utl_hours_in_day
FROM
per_absence_attendances paa
,per_absence_attendance_types pat
,per_abs_attendance_reasons par
,per_all_assignments_f asg
WHERE paa.absence_attendance_type_id = pat.absence_attendance_type_id
AND paa.abs_attendance_reason_id = par.abs_attendance_reason_id (+)
AND paa.person_id = asg.person_id
AND asg.primary_flag = 'Y'
AND asg.assignment_type IN ('E','C')
AND paa.date_start BETWEEN asg.effective_start_date
AND asg.effective_end_date
AND paa.date_start IS NOT NULL
AND NVL(paa.date_end, to_date('31-DEC-4712','DD-MON-YYYY')) >= TO_DATE('1980-01-01 00:00:00', 'YYYY-MM-D HH24:MI:SS') ) tab
Database driver error...
Function Name : Execute
SQL Stmt : SELECT
/*+ use_hash(per_absence_attendances per_all_assignments_f
per_absence_attendance_types per_abs_attendance_reasons )*/
tab.absence_attendance_id absence_attendance_id
,tab.date_start absence_begin_dt
,NVL(tab.date_end, to_date('31-DEC-4712','DD-MON-YYYY')) absence_end_dt
,to_date(to_char(tab.date_start, 'YYYYMMDD ') || NVL(tab.time_start,'00:00'), 'YYYYMMDD HH24:MI') absence_begin_tm
,to_date(to_char(tab.date_end, 'YYYYMMDD ') || NVL(tab.time_end,'00:00'), 'YYYYMMDD HH24:MI') absence_end_tm
,tab.date_start original_begin_dt
,tab.date_notification notified_dt
,to_date(null) approved_dt
,tab.name absence_reason_code
,tab.absence_attendance_type_id absence_attendance_type_id
,tab.absence_category absence_category_code
,tab.approval_status approval_status
,null status_code
,null paid_unpaid_code
,to_char(tab.absence_attendance_type_id) || '~' || NVL(tab.name,'Unspecified') absence_type_rsn_id
,'Unspecified' absence_status_id
,tab.person_id employee_id
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.days_duration IS NOT NULL
THEN tab.days_duration
WHEN tab.utl_hours_in_day > 0
THEN tab.hours_duration / tab.utl_hours_in_day
END days_duration
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.hours_duration IS NOT NULL
THEN tab.hours_duration
ELSE tab.days_duration * tab.utl_hours_in_day
END hours_duration
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.days_duration IS NOT NULL
THEN tab.days_duration / tab.no_days
WHEN tab.utl_hours_in_day > 0
THEN (tab.hours_duration / tab.utl_hours_in_day )/ tab.no_days
END day_duration_start
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.hours_duration IS NOT NULL
THEN tab.hours_duration / tab.no_days
ELSE (tab.days_duration * tab.utl_hours_in_day)/ tab.no_days
END hours_duration_start
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.days_duration IS NOT NULL
THEN tab.days_duration / tab.no_days
WHEN tab.utl_hours_in_day > 0
THEN (tab.hours_duration / tab.utl_hours_in_day )/ tab.no_days
END day_duration_mid
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.hours_duration IS NOT NULL
THEN tab.hours_duration / tab.no_days
ELSE (tab.days_duration * tab.utl_hours_in_day)/ tab.no_days
END hours_duration_mid
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.days_duration IS NOT NULL
THEN tab.days_duration / tab.no_days
WHEN tab.utl_hours_in_day > 0
THEN (tab.hours_duration / tab.utl_hours_in_day )/ tab.no_days
END day_duration_end
,CASE when tab.date_start > TRUNC(sysdate) then 0
WHEN tab.hours_duration IS NOT NULL
THEN tab.hours_duration / tab.no_days
ELSE (tab.days_duration * tab.utl_hours_in_day)/ tab.no_days
END hours_duration_end
,CASE WHEN tab.date_notification < tab.date_start
THEN tab.date_start - tab.date_notification
ELSE 0
END days_notified
,tab.assignment_id assignment_integration_id
FROM
(SELECT
paa.absence_attendance_id
,paa.date_start
,NVL(paa.date_end, to_date('31-DEC-4712','DD-MON-YYYY'))
date_end
,CASE when paa.date_start > TRUNC(sysdate) then 0
WHEN paa.date_start <= NVL(paa.date_end, trunc(sysdate))
THEN NVL(paa.date_end, trunc(sysdate)) - paa.date_start + 1
ELSE to_number(null)
END no_days
,paa.date_notification
,paa.person_id
,pat.absence_category
,par.name
,paa.absence_attendance_type_id
,paa.abs_attendance_reason_id
,paa.time_start
,paa.time_end
,paa.approval_status
,asg.assignment_id
,CASE WHEN paa.absence_days IS NOT NULL
THEN paa.absence_days
WHEN paa.absence_hours IS NOT NULL OR
(paa.date_start = paa.date_end AND
paa.time_start IS NOT NULL AND
paa.time_end IS NOT NULL)
THEN to_number(null)
WHEN paa.date_start < paa.date_end
THEN paa.date_end - paa.date_start + 1
when paa.absence_days is null and paa.absence_hours is null
then nvl(paa.date_end,trunc(sysdate))-paa.date_start + 1
END days_duration
,CASE WHEN paa.absence_hours IS NOT NULL OR
(paa.date_start = paa.date_end AND
paa.time_start IS NOT NULL AND
paa.time_end IS NOT NULL)
THEN NVL(paa.absence_hours, 24 * (to_date(NVL(paa.time_end,'00:00') , 'HH24:MI') -
to_date(NVL(paa.time_start,'00:00'), 'HH24:MI')))
END hours_duration
, case when paa.date_start > TRUNC(sysdate) then 0
else hri_bpl_utilization.convert_days_to_hours
(asg.assignment_id, asg.business_group_id, paa.date_start, TRUNC(sysdate), 1)
end utl_hours_in_day
FROM
per_absence_attendances paa
,per_absence_attendance_types pat
,per_abs_attendance_reasons par
,per_all_assignments_f asg
WHERE paa.absence_attendance_type_id = pat.absence_attendance_type_id
AND paa.abs_attendance_reason_id = par.abs_attendance_reason_id (+)
AND paa.person_id = asg.person_id
AND asg.primary_flag = 'Y'
AND asg.assignment_type IN ('E','C')
AND paa.date_start BETWEEN asg.effective_start_date
AND asg.effective_end_date
AND paa.date_start IS NOT NULL
AND NVL(paa.date_end, to_date('31-DEC-4712','DD-MON-YYYY')) >= TO_DATE('1980-01-01 00:00:00', 'YYYY-MM-D HH24:MI:SS') ) tab].
READER_1_1_1> CMN_1761 Timestamp Event: [Tue Nov 16 09:01:18 2010]
READER_1_1_1> BLKR_16004 ERROR: Prepare failed.
WRITER_1_*_1> WRT_8333 Rolling back all the targets due to fatal session error.
WRITER_1_*_1> WRT_8325 Final rollback executed for the target [W_ABSENCE_EVENT_DS] at end of load
WRITER_1_*_1> WRT_8035 Load complete time: Tue Nov 16 09:01:18 2010
LOAD SUMMARY
============
WRT_8036 Target: W_ABSENCE_EVENT_DS (Instance Name: [W_ABSENCE_EVENT_DS])
WRT_8044 No data loaded for this target
WRITER_1_*_1> WRT_8043 *****END LOAD SESSION*****
MANAGER> PETL_24031
***** RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****
Thread [READER_1_1_1] created for [the read stage] of partition point [mplt_BC_ORA_AbsenceEvent.SQ_PER_ABSENCE_ATTENDANCES] has completed. The total run time was insufficient for any meaningful statistics.
Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [mplt_BC_ORA_AbsenceEvent.SQ_PER_ABSENCE_ATTENDANCES] has completed. The total run time was insufficient for any meaningful statistics.
Thread [WRITER_1_*_1] created for [the write stage] of partition point [W_ABSENCE_EVENT_DS] has completed. The total run time was insufficient for any meaningful statistics.
MAPPING> CMN_1793 The index cache size that would hold [0] rows in the lookup table for [mplt_SA_ORA_AbsenceEvent.Lkp_Absence_Status], in memory, is [0] bytes
MAPPING> CMN_1792 The data cache size that would hold [0] rows in the lookup table for [mplt_SA_ORA_AbsenceEvent.Lkp_Absence_Status], in memory, is [0] bytes
MANAGER> PETL_24005 Starting post-session tasks. : (Tue Nov 16 09:01:18 2010)
MANAGER> PETL_24029 Post-session task completed successfully. : (Tue Nov 16 09:01:18 2010)
MAPPING> TE_7216 Deleting cache files [PMLKUP4513_131076_0_979L64] for transformation [mplt_SA_ORA_AbsenceEvent.Lkp_Absence_Status].
MAPPING> TM_6018 The session completed with [0] row transformation errors.
MANAGER> PETL_24002 Parallel Pipeline Engine finished.
DIRECTOR> PETL_24013 Session run completed with failure.
DIRECTOR> TM_6022
SESSION LOAD SUMMARY
================================================
DIRECTOR> TM_6252 Source Load Summary.
DIRECTOR> CMN_1740 Table: [SQ_PER_ABSENCE_ATTENDANCES] (Instance Name: [mplt_BC_ORA_AbsenceEvent.SQ_PER_ABSENCE_ATTENDANCES])
Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
DIRECTOR> TM_6253 Target Load Summary.
DIRECTOR> CMN_1740 Table: [W_ABSENCE_EVENT_DS] (Instance Name: [W_ABSENCE_EVENT_DS])
Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
DIRECTOR> TM_6023
===================================================
DIRECTOR> TM_6020 Session [SDE_ORA_AbsenceEvent_Full] completed at [Tue Nov 16 09:01:19 2010].
Hi
Did you manage to find a solution to this problem?
OK, I puzzled over this one for a bit. Here is how I resolved it. Look at this line in the generated SQL:
'AND NVL(paa.date_end, to_date('31-DEC-4712','DD-MON-YYYY')) >= TO_DATE('1980-01-01 00:00:00', 'YYYY-MM-D HH24:MI:SS') ) tab]'
You will notice the format mask coming from the parameter file is missing a D: it reads 'YYYY-MM-D' instead of 'YYYY-MM-DD'.
If you take the SQL and run it with the correct mask, TO_DATE('1980-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'), the SQL works and the workflow completes. This appears to be a bug in how the DAC generates the $$HR_ABSNC_EXTRACT_DATE parameter.
Hope this helps.
Regards
Nick
Edited by: Nickho on Feb 7, 2011 7:55 AM
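As a quick sanity check along the lines Nick describes (a hypothetical helper, not part of DAC or Informatica), you can scan generated SQL for TO_DATE calls whose format mask contains a lone 'D', which is how the truncated 'YYYY-MM-D HH24:MI:SS' mask would be caught before the session fails:

```python
import re

def truncated_date_masks(sql: str) -> list[str]:
    """Return TO_DATE format masks containing a single 'D',
    e.g. the buggy 'YYYY-MM-D' instead of 'YYYY-MM-DD'."""
    masks = re.findall(r"TO_DATE\s*\([^,]*,\s*'([^']*)'\s*\)", sql, re.IGNORECASE)
    # A lone D (not part of a DD pair) signals a truncated day component.
    return [m for m in masks if re.search(r"(?<!D)D(?!D)", m)]

# The predicate from the failing session log, verbatim:
bad_sql = ("AND NVL(paa.date_end, to_date('31-DEC-4712','DD-MON-YYYY')) >= "
           "TO_DATE('1980-01-01 00:00:00', 'YYYY-MM-D HH24:MI:SS')")
print(truncated_date_masks(bad_sql))  # flags only the truncated mask
```

This is only a sketch: the regex assumes simple TO_DATE calls with literal masks, which is what the DAC-generated SQL in this log contains.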
SDE_ORAR1213_Adaptor.SDE_ORA_GLBalanceFact_Full
Dear all,
When I try to run a full load in Financial Analytics via DAC, I am getting an error.
The log file is below for your reference.
DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
DIRECTOR> VAR_27028 Use override value [ORA_R1213] for session parameter:[$DBConnection_OLTP].
DIRECTOR> VAR_27028 Use override value [ORA_R1213.DATAWAREHOUSE.SDE_ORAR1213_Adaptor.SDE_ORA_GLBalanceFact_Full.log] for session parameter:[$PMSessionLogFile].
DIRECTOR> VAR_27028 Use override value [1000] for mapping parameter:[$$DATASOURCE_NUM_ID].
DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_GL_Balance_Fact.$$FILTER_BY_LEDGER_ID].
DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_GL_Balance_Fact.$$FILTER_BY_LEDGER_TYPE].
DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_GL_Balance_Fact.$$INITIAL_EXTRACT_DATE].
DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_GL_Balance_Fact.$$LAST_EXTRACT_DATE].
DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_GL_Balance_Fact.$$LEDGER_ID_LIST].
DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_GL_Balance_Fact.$$LEDGER_TYPE_LIST].
DIRECTOR> VAR_27028 Use override value [DEFAULT] for mapping parameter:[$$TENANT_ID].
DIRECTOR> TM_6014 Initializing session [SDE_ORA_GLBalanceFact_Full] at [Fri Apr 12 22:05:36 2013].
DIRECTOR> TM_6683 Repository Name: [Oracle_BI_DW_Base]
DIRECTOR> TM_6684 Server Name: [Oracle_BI_DW_Base_Integration_Service]
DIRECTOR> TM_6686 Folder: [SDE_ORAR1213_Adaptor]
DIRECTOR> TM_6685 Workflow: [SDE_ORA_GLBalanceFact_Full] Run Instance Name: [] Run Id: [2324]
DIRECTOR> TM_6101 Mapping name: SDE_ORA_GLBalanceFact [version 1].
DIRECTOR> TM_6963 Pre 85 Timestamp Compatibility is Enabled
DIRECTOR> TM_6964 Date format for the Session is [MM/DD/YYYY HH24:MI:SS]
DIRECTOR> TM_6827 [E:\Informatica\9.0.1\server\infa_shared\Storage] will be used as storage directory for session [SDE_ORA_GLBalanceFact_Full].
DIRECTOR> CMN_1805 Recovery cache will be deleted when running in normal mode.
DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
DIRECTOR> TM_6708 Using configuration property [DisableDB2BulkMode,Yes]
DIRECTOR> TM_6708 Using configuration property [ServerPort,4006]
DIRECTOR> TM_6708 Using configuration property [SiebelUnicodeDB,apps@R12PLY baw@orcl]
DIRECTOR> TM_6708 Using configuration property [overrideMptlVarWithMapVar,Yes]
DIRECTOR> TM_6703 Session [SDE_ORA_GLBalanceFact_Full] is run by 64-bit Integration Service [node01_OBIEETESTAPP], version [9.0.1 HotFix2], build [1111].
MANAGER> PETL_24058 Running Partition Group [1].
MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
MANAGER> PETL_24001 Parallel Pipeline Engine running.
MANAGER> PETL_24003 Initializing session run.
MAPPING> CMN_1569 Server Mode: [ASCII]
MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
MAPPING> TM_6151 The session sort order is [Binary].
MAPPING> TM_6156 Using low precision processing.
MAPPING> TM_6180 Deadlock retry logic will not be implemented.
MAPPING> TM_6187 Session target-based commit interval is [10000].
MAPPING> TM_6307 DTM error log disabled.
MAPPING> TE_7022 TShmWriter: Initialized
MAPPING> TM_6007 DTM initialized successfully for session [SDE_ORA_GLBalanceFact_Full]
DIRECTOR> PETL_24033 All DTM Connection Info: [<NONE>].
MANAGER> PETL_24004 Starting pre-session tasks. : (Fri Apr 12 22:05:37 2013)
MANAGER> PETL_24027 Pre-session task completed successfully. : (Fri Apr 12 22:05:37 2013)
DIRECTOR> PETL_24006 Starting data movement.
MAPPING> TM_6660 Total Buffer Pool size is 12582912 bytes and Block size is 128000 bytes.
READER_1_1_1> DBG_21438 Reader: Source is [R12PLY], user [apps]
READER_1_1_1> BLKR_16003 Initialization completed successfully.
WRITER_1_*_1> WRT_8146 Writer: Target is database [orcl], user [baw], bulk mode [ON]
WRITER_1_*_1> WRT_8106 Warning! Bulk Mode session - recovery is not guaranteed.
WRITER_1_*_1> WRT_8124 Target Table W_ACCT_BUDGET_FS :SQL INSERT statement:
INSERT INTO W_ACCT_BUDGET_FS(GL_ACCOUNT_ID,LEDGER_ID,BUDGET_LEDGER_ID,BUDGET_ID,PERIOD_BEGIN_DT,PERIOD_END_DT,BUDGET_LOC_AMT,LOC_CURR_CODE,CHANGED_BY_ID,CHANGED_ON_DT,AUX1_CHANGED_ON_DT,AUX2_CHANGED_ON_DT,ADJUSTMENT_FLG,DATASOURCE_NUM_ID,INTEGRATION_ID,TENANT_ID,X_CUSTOM) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
WRITER_1_*_1> WRT_8146 Writer: Target is database [orcl], user [baw], bulk mode [ON]
WRITER_1_*_1> WRT_8106 Warning! Bulk Mode session - recovery is not guaranteed.
WRITER_1_*_1> WRT_8124 Target Table W_GL_BALANCE_FS :SQL INSERT statement:
INSERT INTO W_GL_BALANCE_FS(BUSN_AREA_ORG_ID,GL_ACCOUNT_ID,BALANCE_DATE,DB_CR_IND,BALANCE_ACCT_AMT,BALANCE_LOC_AMT,ACTIVITY_ACCT_AMT,ACTIVITY_LOC_AMT,ACCT_CURR_CODE,LOC_CURR_CODE,INTEGRATION_ID,DATASOURCE_NUM_ID,TENANT_ID,X_CUSTOM,ADJUSTMENT_FLAG,BALANCE_TYPE_FLAG,TRANSLATED_FLAG,SUMMARY_ACCOUNT_FLAG,LAST_UPDATE_DATE,AUX1_CHANGED_ON_DT,AUX2_CHANGED_ON_DT,LEDGER_ID) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [W_ACCT_BUDGET_FS, W_GL_BALANCE_FS]
WRITER_1_*_1> WRT_8003 Writer initialization complete.
READER_1_1_1> BLKR_16007 Reader run started.
WRITER_1_*_1> WRT_8005 Writer run started.
WRITER_1_*_1> WRT_8158
*****START LOAD SESSION*****
Load Start Time: Fri Apr 12 22:05:37 2013
Target tables:
W_ACCT_BUDGET_FS
W_GL_BALANCE_FS
READER_1_1_1> RR_4029 SQ Instance [mplt_BC_ORA_GL_Balance_Fact.SQ_GL_BALANCES] User specified SQL Query [SELECT
BAL.LEDGER_ID,
BAL.CODE_COMBINATION_ID,
BAL.CURRENCY_CODE,
LED.CURRENCY_CODE,
PER.PERIOD_NAME,
BAL.ACTUAL_FLAG,
BAL.TRANSLATED_FLAG,
BAL.TEMPLATE_ID,
BAL.PERIOD_NET_DR,
BAL.PERIOD_NET_CR,
( BAL.BEGIN_BALANCE_DR + BAL.PERIOD_NET_DR ),
( BAL.BEGIN_BALANCE_CR + BAL.PERIOD_NET_CR) ,
BAL.PERIOD_NET_DR_BEQ,
BAL.PERIOD_NET_CR_BEQ,
( BAL.BEGIN_BALANCE_DR_BEQ + BAL.PERIOD_NET_DR_BEQ ) PERIOD_END_BALANCE_DR_BEQ,
( BAL.BEGIN_BALANCE_CR_BEQ + BAL.PERIOD_NET_CR_BEQ ) PERIOD_END_BALANCE_CR_BEQ,
PER.START_DATE,
PER.END_DATE,
BAL.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_BAL,
BAL.LAST_UPDATED_BY AS LAST_UPDATED_BY_BAL,
PER.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_PERIODS,
PER.LAST_UPDATED_BY AS LAST_UPDATED_BY_PERIODS,
LED.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_SOB,
LED.LAST_UPDATED_BY AS LAST_UPDATED_BY_SOB,
BAL.BUDGET_VERSION_ID AS BUDGET_VERSION_ID,
PER.ADJUSTMENT_PERIOD_FLAG AS ADJUSTMENT_PERIOD_FLAG,
CASE WHEN
BAL.TRANSLATED_FLAG = 'Y' THEN 'TRANSLATED'
WHEN
BAL.TRANSLATED_FLAG = 'R' THEN 'ENTERED_FOREIGN'
WHEN
BAL.CURRENCY_CODE = 'STAT' THEN 'STAT'
WHEN
((BAL.PERIOD_NET_DR_BEQ = 0) OR (BAL.PERIOD_NET_DR_BEQ IS NULL)) AND
((BAL.PERIOD_NET_CR_BEQ = 0) OR (BAL.PERIOD_NET_CR_BEQ IS NULL)) AND
((BAL.BEGIN_BALANCE_DR_BEQ = 0) OR (BAL.BEGIN_BALANCE_DR_BEQ IS NULL)) AND
((BAL.BEGIN_BALANCE_CR_BEQ = 0) OR (BAL.BEGIN_BALANCE_CR_BEQ IS NULL))
THEN 'BASE'
ELSE 'ENTERED_LEDGER'
END CURRENCY_BALANCE_TYPE
FROM
GL_BALANCES BAL
, GL_LEDGERS LED
, GL_PERIODS PER
WHERE LED.LEDGER_ID = BAL.LEDGER_ID
AND PER.PERIOD_SET_NAME = LED.PERIOD_SET_NAME
AND BAL.PERIOD_NAME = PER.PERIOD_NAME
AND BAL.PERIOD_TYPE = PER.PERIOD_TYPE
AND NVL(BAL.TRANSLATED_FLAG, 'X') IN ('Y', 'X', 'R')
AND BAL.ACTUAL_FLAG IN ( 'A','B')
AND BAL.TEMPLATE_ID IS NULL
AND
BAL.LAST_UPDATE_DATE >=
TO_DATE('', 'MM/DD/YYYY HH24:MI:SS')
OR PER.LAST_UPDATE_DATE >=
TO_DATE('', 'MM/DD/YYYY HH24:MI:SS')
AND DECODE(, 'Y', LED.LEDGER_ID, 1) IN ()
AND DECODE(, 'Y', LED.LEDGER_CATEGORY_CODE, 'NONE') IN ()]
READER_1_1_1> RR_4049 SQL Query issued to database : (Fri Apr 12 22:05:37 2013)
READER_1_1_1> CMN_1761 Timestamp Event: [Fri Apr 12 22:05:37 2013]
READER_1_1_1> RR_4035 SQL Error [
ORA-00936: missing expression
Database driver error...
Function Name : Execute
SQL Stmt : SELECT
BAL.LEDGER_ID,
BAL.CODE_COMBINATION_ID,
BAL.CURRENCY_CODE,
LED.CURRENCY_CODE,
PER.PERIOD_NAME,
BAL.ACTUAL_FLAG,
BAL.TRANSLATED_FLAG,
BAL.TEMPLATE_ID,
BAL.PERIOD_NET_DR,
BAL.PERIOD_NET_CR,
( BAL.BEGIN_BALANCE_DR + BAL.PERIOD_NET_DR ),
( BAL.BEGIN_BALANCE_CR + BAL.PERIOD_NET_CR) ,
BAL.PERIOD_NET_DR_BEQ,
BAL.PERIOD_NET_CR_BEQ,
( BAL.BEGIN_BALANCE_DR_BEQ + BAL.PERIOD_NET_DR_BEQ ) PERIOD_END_BALANCE_DR_BEQ,
( BAL.BEGIN_BALANCE_CR_BEQ + BAL.PERIOD_NET_CR_BEQ ) PERIOD_END_BALANCE_CR_BEQ,
PER.START_DATE,
PER.END_DATE,
BAL.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_BAL,
BAL.LAST_UPDATED_BY AS LAST_UPDATED_BY_BAL,
PER.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_PERIODS,
PER.LAST_UPDATED_BY AS LAST_UPDATED_BY_PERIODS,
LED.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_SOB,
LED.LAST_UPDATED_BY AS LAST_UPDATED_BY_SOB,
BAL.BUDGET_VERSION_ID AS BUDGET_VERSION_ID,
PER.ADJUSTMENT_PERIOD_FLAG AS ADJUSTMENT_PERIOD_FLAG,
CASE WHEN
BAL.TRANSLATED_FLAG = 'Y' THEN 'TRANSLATED'
WHEN
BAL.TRANSLATED_FLAG = 'R' THEN 'ENTERED_FOREIGN'
WHEN
BAL.CURRENCY_CODE = 'STAT' THEN 'STAT'
WHEN
((BAL.PERIOD_NET_DR_BEQ = 0) OR (BAL.PERIOD_NET_DR_BEQ IS NULL)) AND
((BAL.PERIOD_NET_CR_BEQ = 0) OR (BAL.PERIOD_NET_CR_BEQ IS NULL)) AND
((BAL.BEGIN_BALANCE_DR_BEQ = 0) OR (BAL.BEGIN_BALANCE_DR_BEQ IS NULL)) AND
((BAL.BEGIN_BALANCE_CR_BEQ = 0) OR (BAL.BEGIN_BALANCE_CR_BEQ IS NULL))
THEN 'BASE'
ELSE 'ENTERED_LEDGER'
END CURRENCY_BALANCE_TYPE
FROM
GL_BALANCES BAL
, GL_LEDGERS LED
, GL_PERIODS PER
WHERE LED.LEDGER_ID = BAL.LEDGER_ID
AND PER.PERIOD_SET_NAME = LED.PERIOD_SET_NAME
AND BAL.PERIOD_NAME = PER.PERIOD_NAME
AND BAL.PERIOD_TYPE = PER.PERIOD_TYPE
AND NVL(BAL.TRANSLATED_FLAG, 'X') IN ('Y', 'X', 'R')
AND BAL.ACTUAL_FLAG IN ( 'A','B')
AND BAL.TEMPLATE_ID IS NULL
AND
BAL.LAST_UPDATE_DATE >=
TO_DATE('', 'MM/DD/YYYY HH24:MI:SS')
OR PER.LAST_UPDATE_DATE >=
TO_DATE('', 'MM/DD/YYYY HH24:MI:SS')
AND DECODE(, 'Y', LED.LEDGER_ID, 1) IN ()
AND DECODE(, 'Y', LED.LEDGER_CATEGORY_CODE, 'NONE') IN ()
Oracle Fatal Error
Database driver error...
Function Name : Execute
SQL Stmt : SELECT
BAL.LEDGER_ID,
BAL.CODE_COMBINATION_ID,
BAL.CURRENCY_CODE,
LED.CURRENCY_CODE,
PER.PERIOD_NAME,
BAL.ACTUAL_FLAG,
BAL.TRANSLATED_FLAG,
BAL.TEMPLATE_ID,
BAL.PERIOD_NET_DR,
BAL.PERIOD_NET_CR,
( BAL.BEGIN_BALANCE_DR + BAL.PERIOD_NET_DR ),
( BAL.BEGIN_BALANCE_CR + BAL.PERIOD_NET_CR) ,
BAL.PERIOD_NET_DR_BEQ,
BAL.PERIOD_NET_CR_BEQ,
( BAL.BEGIN_BALANCE_DR_BEQ + BAL.PERIOD_NET_DR_BEQ ) PERIOD_END_BALANCE_DR_BEQ,
( BAL.BEGIN_BALANCE_CR_BEQ + BAL.PERIOD_NET_CR_BEQ ) PERIOD_END_BALANCE_CR_BEQ,
PER.START_DATE,
PER.END_DATE,
BAL.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_BAL,
BAL.LAST_UPDATED_BY AS LAST_UPDATED_BY_BAL,
PER.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_PERIODS,
PER.LAST_UPDATED_BY AS LAST_UPDATED_BY_PERIODS,
LED.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_SOB,
LED.LAST_UPDATED_BY AS LAST_UPDATED_BY_SOB,
BAL.BUDGET_VERSION_ID AS BUDGET_VERSION_ID,
PER.ADJUSTMENT_PERIOD_FLAG AS ADJUSTMENT_PERIOD_FLAG,
CASE WHEN
BAL.TRANSLATED_FLAG = 'Y' THEN 'TRANSLATED'
WHEN
BAL.TRANSLATED_FLAG = 'R' THEN 'ENTERED_FOREIGN'
WHEN
BAL.CURRENCY_CODE = 'STAT' THEN 'STAT'
WHEN
((BAL.PERIOD_NET_DR_BEQ = 0) OR (BAL.PERIOD_NET_DR_BEQ IS NULL)) AND
((BAL.PERIOD_NET_CR_BEQ = 0) OR (BAL.PERIOD_NET_CR_BEQ IS NULL)) AND
((BAL.BEGIN_BALANCE_DR_BEQ = 0) OR (BAL.BEGIN_BALANCE_DR_BEQ IS NULL)) AND
((BAL.BEGIN_BALANCE_CR_BEQ = 0) OR (BAL.BEGIN_BALANCE_CR_BEQ IS NULL))
THEN 'BASE'
ELSE 'ENTERED_LEDGER'
END CURRENCY_BALANCE_TYPE
FROM
GL_BALANCES BAL
, GL_LEDGERS LED
, GL_PERIODS PER
WHERE LED.LEDGER_ID = BAL.LEDGER_ID
AND PER.PERIOD_SET_NAME = LED.PERIOD_SET_NAME
AND BAL.PERIOD_NAME = PER.PERIOD_NAME
AND BAL.PERIOD_TYPE = PER.PERIOD_TYPE
AND NVL(BAL.TRANSLATED_FLAG, 'X') IN ('Y', 'X', 'R')
AND BAL.ACTUAL_FLAG IN ( 'A','B')
AND BAL.TEMPLATE_ID IS NULL
AND
BAL.LAST_UPDATE_DATE >=
TO_DATE('', 'MM/DD/YYYY HH24:MI:SS')
OR PER.LAST_UPDATE_DATE >=
TO_DATE('', 'MM/DD/YYYY HH24:MI:SS')
AND DECODE(, 'Y', LED.LEDGER_ID, 1) IN ()
AND DECODE(, 'Y', LED.LEDGER_CATEGORY_CODE, 'NONE') IN ()
Oracle Fatal Error].
READER_1_1_1> CMN_1761 Timestamp Event: [Fri Apr 12 22:05:37 2013]
READER_1_1_1> BLKR_16004 ERROR: Prepare failed.
WRITER_1_*_1> WRT_8333 Rolling back all the targets due to fatal session error.
WRITER_1_*_1> WRT_8325 Final rollback executed for the target [W_ACCT_BUDGET_FS, W_GL_BALANCE_FS] at end of load
WRITER_1_*_1> WRT_8035 Load complete time: Fri Apr 12 22:05:37 2013
LOAD SUMMARY
============
WRT_8036 Target: W_ACCT_BUDGET_FS (Instance Name: [W_ACCT_BUDGET_FS])
WRT_8044 No data loaded for this target
WRT_8036 Target: W_GL_BALANCE_FS (Instance Name: [W_GL_BALANCE_FS])
WRT_8044 No data loaded for this target
WRITER_1_*_1> WRT_8043 *****END LOAD SESSION*****
MANAGER> PETL_24031
***** RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****
Thread [READER_1_1_1] created for [the read stage] of partition point [mplt_BC_ORA_GL_Balance_Fact.SQ_GL_BALANCES] has completed. The total run time was insufficient for any meaningful statistics.
Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [mplt_BC_ORA_GL_Balance_Fact.SQ_GL_BALANCES] has completed. The total run time was insufficient for any meaningful statistics.
Thread [WRITER_1_*_1] created for [the write stage] of partition point [W_ACCT_BUDGET_FS, W_GL_BALANCE_FS] has completed. The total run time was insufficient for any meaningful statistics.
MANAGER> PETL_24005 Starting post-session tasks. : (Fri Apr 12 22:05:37 2013)
MANAGER> PETL_24029 Post-session task completed successfully. : (Fri Apr 12 22:05:38 2013)
MAPPING> TM_6018 The session completed with [0] row transformation errors.
MANAGER> PETL_24002 Parallel Pipeline Engine finished.
DIRECTOR> PETL_24013 Session run completed with failure.
DIRECTOR> TM_6022
SESSION LOAD SUMMARY
================================================
DIRECTOR> TM_6252 Source Load Summary.
DIRECTOR> CMN_1740 Table: [SQ_GL_BALANCES] (Instance Name: [mplt_BC_ORA_GL_Balance_Fact.SQ_GL_BALANCES])
Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
DIRECTOR> TM_6253 Target Load Summary.
DIRECTOR> CMN_1740 Table: [W_ACCT_BUDGET_FS] (Instance Name: [W_ACCT_BUDGET_FS])
Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
DIRECTOR> CMN_1740 Table: [W_GL_BALANCE_FS] (Instance Name: [W_GL_BALANCE_FS])
Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
DIRECTOR> TM_6023
===================================================
DIRECTOR> TM_6020 Session [SDE_ORA_GLBalanceFact_Full] completed at [Fri Apr 12 22:05:38 2013].
Check this:
BI Apps load - DAC variables not substituted
This is a very common error. The empty TO_DATE('', ...), DECODE(, ...) and IN () expressions in the failing query show that DAC passed blank values for the mapping parameters (note the VAR_27027 "Use default value []" lines at the top of the log for $$LAST_EXTRACT_DATE, $$LEDGER_ID_LIST, and so on).
Ensure that you restart the Informatica services and the DAC services once you apply the above resolution.
Please mark this correct if it resolves your issue.
Regards,
Veeresh Rayan
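The failing query contains empty placeholders - TO_DATE('', ...), DECODE(, ...) and IN () - left behind where DAC should have substituted parameter values. A small pre-flight check (a hypothetical helper, not a DAC or Informatica feature) could catch such a query before it is ever sent to Oracle and fails with ORA-00936:

```python
import re

# Patterns that betray an unsubstituted DAC parameter in generated SQL.
UNSUBSTITUTED = [
    r"TO_DATE\(\s*''",   # empty date literal: e.g. $$LAST_EXTRACT_DATE was blank
    r"DECODE\(\s*,",     # missing first argument: e.g. $$FILTER_BY_LEDGER_ID was blank
    r"IN\s*\(\s*\)",     # empty IN-list: e.g. $$LEDGER_ID_LIST was blank
]

def unsubstituted_params(sql: str) -> list[str]:
    """Return the patterns found, so the session could fail fast
    with a clear message instead of an ORA-00936 from the database."""
    return [p for p in UNSUBSTITUTED if re.search(p, sql, re.IGNORECASE)]

# A fragment from the failing session log, verbatim:
fragment = ("AND DECODE(, 'Y', LED.LEDGER_ID, 1) IN () AND BAL.LAST_UPDATE_DATE >= "
            "TO_DATE('', 'MM/DD/YYYY HH24:MI:SS')")
print(unsubstituted_params(fragment))  # all three patterns match this fragment
```

The pattern list is illustrative, covering only the three symptoms visible in this particular log.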