DAC task failure: SIL_PositionDimensionHierarchy_AsIsUpdate_Full
Hi,
The incremental DAC load failed last night for two workflows:
SIL_PositionDimensionHierarchy_AsIsUpdate
SIL_PositionDimensionHierarchy_AsIsUpdate_Full
The workflows belong to the Out-Of-Box Container SIL_Vert.
The workflows populate zero records on regular days, but this is the first time they have caused an error during the DAC load.
The DAC log doesn't show any error message pointing to the root cause. But because many dependent tasks were running on top of these workflows, they were all stopped, which ultimately resulted in the DAC failure.
###########################DAC LOG######################################
=====================================
STD OUTPUT
=====================================
Informatica(r) PMCMD, version [8.6.1 HotFix10], build [412.0123], Windows 32-bit
Copyright (c) Informatica Corporation 1994 - 2010
All Rights Reserved.
Invoked at Fri Jan 28 02:15:04 2011
Connected to Integration Service: [Integration_Service_SAMPGHSBL112].
Integration Service status: [Running]
Integration Service startup time: [Sat Dec 04 18:52:14 2010]
Integration Service current time: [Fri Jan 28 02:15:04 2011]
Folder: [SIL_Vert]
Workflow: [SIL_PositionDimensionHierarchy_Type1Update] version [1].
Workflow run status: [Terminated unexpectedly]
Workflow run error code: [0]
Workflow run id [46743].
Start time: [Fri Jan 28 02:14:43 2011]
End time: [Fri Jan 28 02:14:48 2011]
Workflow log file: [G:\Informatica\PowerCenter8.6.1\server\infa_shared\WorkflowLogs\SIL_PositionDimensionHierarchy_Type1Update.log]
Workflow run type: [User request]
Run workflow as user: [Administrator]
Run workflow with Impersonated OSProfile in domain: []
Integration Service: [Integration_Service_SAMPGHSBL112]
Disconnecting from Integration Service
Completed at Fri Jan 28 02:15:04 2011
=====================================
ERROR OUTPUT
=====================================
Error Message : Unable to get Informatica workflow return code. Check Informatica workflow/session logs.
ErrorCode : -1
Re-Queue to attempt to run again or attach to running workflow
if Execution Plan is still running or re-submit Execution Plan to execute the workflow.
EXCEPTION CLASS::: com.siebel.analytics.etl.etltask.IrrecoverableException
com.siebel.analytics.etl.etltask.InformaticaTask.doExecute(InformaticaTask.java:179)
com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:410)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:306)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:213)
com.siebel.analytics.etl.etltask.GenericTaskImpl.run(GenericTaskImpl.java:585)
com.siebel.analytics.etl.taskmanager.XCallable.call(XCallable.java:63)
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
java.util.concurrent.FutureTask.run(FutureTask.java:138)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
java.util.concurrent.FutureTask.run(FutureTask.java:138)
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
java.lang.Thread.run(Thread.java:619)
12972 SEVERE Fri Jan 28 02:15:05 EST 2011
ANOMALY INFO::: Error while executing task Informatica Session Batch
MESSAGE:::Submitted task failed during execution
EXCEPTION CLASS::: com.siebel.analytics.etl.etltask.FailedTaskException
com.siebel.analytics.etl.etltask.ParallelTaskBatch.submitIncompleteTasks(ParallelTaskBatch.java:251)
com.siebel.analytics.etl.etltask.ParallelTaskBatch.doExecuteNormal(ParallelTaskBatch.java:360)
com.siebel.analytics.etl.etltask.ParallelTaskBatch.doExecute(ParallelTaskBatch.java:164)
com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:410)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:306)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:213)
com.siebel.analytics.etl.etltask.ParallelTaskBatch.doExecuteNormal(ParallelTaskBatch.java:326)
com.siebel.analytics.etl.etltask.ParallelTaskBatch.doExecute(ParallelTaskBatch.java:164)
com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:410)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:306)
com.siebel.etl.engine.core.Session.executeTasks(Session.java:2697)
com.siebel.etl.engine.core.Session.run(Session.java:3246)
java.lang.Thread.run(Thread.java:619)
12973 SEVERE Fri Jan 28 02:15:05 EST 2011
ANOMALY INFO::: Error while executing : com.siebel.analytics.etl.etltask.ParallelTaskBatch:Informatica Session Batch
MESSAGE:::com.siebel.analytics.etl.etltask.FailedTaskException: Submitted task failed during execution
EXCEPTION CLASS::: java.lang.RuntimeException
com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:469)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:306)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:213)
com.siebel.analytics.etl.etltask.ParallelTaskBatch.doExecuteNormal(ParallelTaskBatch.java:326)
com.siebel.analytics.etl.etltask.ParallelTaskBatch.doExecute(ParallelTaskBatch.java:164)
com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:410)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:306)
com.siebel.etl.engine.core.Session.executeTasks(Session.java:2697)
com.siebel.etl.engine.core.Session.run(Session.java:3246)
java.lang.Thread.run(Thread.java:619)
::: CAUSE :::
MESSAGE:::Submitted task failed during execution
EXCEPTION CLASS::: com.siebel.analytics.etl.etltask.FailedTaskException
com.siebel.analytics.etl.etltask.ParallelTaskBatch.submitIncompleteTasks(ParallelTaskBatch.java:251)
com.siebel.analytics.etl.etltask.ParallelTaskBatch.doExecuteNormal(ParallelTaskBatch.java:360)
com.siebel.analytics.etl.etltask.ParallelTaskBatch.doExecute(ParallelTaskBatch.java:164)
com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:410)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:306)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:213)
com.siebel.analytics.etl.etltask.ParallelTaskBatch.doExecuteNormal(ParallelTaskBatch.java:326)
com.siebel.analytics.etl.etltask.ParallelTaskBatch.doExecute(ParallelTaskBatch.java:164)
com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:410)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:306)
com.siebel.etl.engine.core.Session.executeTasks(Session.java:2697)
com.siebel.etl.engine.core.Session.run(Session.java:3246)
java.lang.Thread.run(Thread.java:619)
12974 SEVERE Fri Jan 28 02:15:05 EST 2011
ANOMALY INFO::: Error while executing task All Task Batches
MESSAGE:::Execution of child batch Informatica Session Batch failed.
EXCEPTION CLASS::: com.siebel.analytics.etl.etltask.FailedTaskException
#######################END OF DAC LOG#################################
After the first failure of the DAC load, I marked the task as completed and restarted the Execution Plan; the load failed again on the second workflow mentioned above. I repeated the same steps, and this time it ran to the end and completed the load.
The Informatica Log file for the particular workflow shows:
Message: Use override value [DATAWAREHOUSE.DATAWAREHOUSE.SIL_Vert.SIL_PositionDimensionHierarchy_AsIsUpdate.log] for session parameter:[$PMSessionLogFile].
Please advise whether we can bypass the above tasks and exclude them from our current Execution Plans. If the dependencies on these tasks cause issues, how do we handle them so that future DAC loads succeed? Also, please explain the significance of these particular workflows.
Thanks-
Ramaswamy Pappula
Ph: 412-320-6796
Hi gs,
This is a database-dependent behavior. Depending on the connection type, DAC reads @DAC_SOURCE_DBTYPE, which is set when you define the source database type under Physical Data Sources.
By default it invokes the Informatica workflow SIL_PositionDimensionHierarchy_AsIsUpdate_ORCL; however:
if your source is DB2, it invokes the workflow SIL_PositionDimensionHierarchy_AsIsUpdate_DB2
if your source is MSSQL, it invokes the workflow SIL_PositionDimensionHierarchy_AsIsUpdate_MSSQL
if your source is Teradata, it invokes the workflow SIL_PositionDimensionHierarchy_AsIsUpdate_TD
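The lookup Ani describes can be sketched as a simple table-driven function. The suffix map below is inferred from the workflow names listed in the reply, not taken from DAC internals, so treat it as an illustration only:

```python
# Table-driven sketch of DAC's database-type-to-workflow resolution.
# The suffix map is inferred from the workflow names in the reply above;
# it is NOT taken from DAC source code.
DBTYPE_SUFFIX = {
    "Oracle": "ORCL",
    "DB2": "DB2",
    "MSSQL": "MSSQL",
    "Teradata": "TD",
}

def resolve_workflow(base_name: str, source_dbtype: str) -> str:
    """Return the database-specific workflow name for a base task name."""
    try:
        suffix = DBTYPE_SUFFIX[source_dbtype]
    except KeyError:
        raise ValueError(f"unsupported source database type: {source_dbtype}")
    return f"{base_name}_{suffix}"

print(resolve_workflow("SIL_PositionDimensionHierarchy_AsIsUpdate", "DB2"))
# SIL_PositionDimensionHierarchy_AsIsUpdate_DB2
```

The practical point is the same one Ani makes: the workflow that actually runs depends on what you declared in Physical Data Sources, not on the base task name alone.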
Let me know if this helps.
Thanks,
Ani
Similar Messages
-
Need Help to check DAC task log in DAC
Hi there,
Can someone please help me check a DAC task log in DAC? I want to see the DAC task log.
The scenario: a DAC task has failed, but the Informatica workflow invoked by the DAC task succeeded.
So I want to check the DAC task log to find the reason for the DAC task failure.
Thanks,
Raghu

You can check this in the DAC Client: find the task that failed and click Details. You should see where it failed; for instance, since the workflow succeeded, it may have failed on an index creation step. You will see all the steps associated with the task. Once you find the step that failed, click the "Status Description" column to see additional details. If needed, you can also check the logs on the server itself, but you will probably find the issue from the DAC Client.
If this was correct, please mark the response as helpful or correct. -
Need some help in Synchronising a DAC task
Hi There,
I am a newbie to DAC and have worked on only a few tasks.
I created a DAC task; while synchronizing it, I got the error below:
MESSAGE:::Error while inserting a record!
EXCEPTION CLASS::: com.siebel.etl.gui.core.RecordManipulationException
com.siebel.analytics.etl.client.core.DACMessage.convertToRME(DACMessage.java:31)
com.siebel.analytics.etl.client.data.model.UpdatableDataTableModel.upsertNewRecord(UpdatableDataTableModel.java:141)
com.siebel.analytics.etl.infa.fileParsing.InfaDacWriter.insertTableList(InfaDacWriter.java:534)
com.siebel.analytics.etl.infa.fileParsing.InfaDacWriter.insertNodeTables(InfaDacWriter.java:407)
com.siebel.analytics.etl.infa.fileParsing.InfaDacWriter.insertNodeTables(InfaDacWriter.java:318)
com.siebel.analytics.etl.infa.fileParsing.TaskSync.sync(TaskSync.java:171)
com.siebel.analytics.etl.client.action.TaskSynchronizationAction.doOperation(TaskSynchronizationAction.java:144)
com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
Can anybody share your ideas on this issue?
Thanks,
Raghu

Restart the DAC server and then try to synchronize again. It should work.
-
DAC task with Informatica mapping and stored procedure (very slow)
Hello,
We have a DAC task that launches an Informatica workflow with a simple query and stored procedure, like this:
SQL QUERY
==========================
SELECT
W_ACTIVITY_F.ROW_WID,
W_AGREE_D.AGREE_NUM,
W_PRODUCT_D.ATTRIB_51,
W_SRVREQ_D.ATTRIB_05,
W_ORG_DH.TOP_LVL_NAME,
W_ORG_D.ATTRIB_06,
W_PRODUCT_GROUPS_D.PRODUCT_LINE,
W_PRODUCT_D.PROD_NAME
FROM
W_AGREE_D,
W_SRVREQ_F,
W_ACTIVITY_F,
W_PRODUCT_D LEFT OUTER JOIN W_PRODUCT_GROUPS_D ON W_PRODUCT_D.PR_PROD_LN = W_PRODUCT_GROUPS_D.PRODUCT_LINE,
W_ORG_D,
W_SRVREQ_D,
W_ORG_DH
WHERE
W_SRVREQ_F.AGREEMENT_WID = W_AGREE_D.ROW_WID AND
W_SRVREQ_F.SR_WID = W_ACTIVITY_F.SR_WID AND
W_SRVREQ_F.PROD_WID = W_PRODUCT_D.ROW_WID AND
W_SRVREQ_F.ACCNT_WID = W_ORG_D.ROW_WID AND
W_SRVREQ_F.SR_WID = W_SRVREQ_D.ROW_WID AND
W_ORG_D.ROW_WID = W_ORG_DH.ROW_WID
STORED PROCEDURE
===========================
ConvSubProy(W_AGREE_D.AGREE_NUM,
W_PRODUCT_D.ATTRIB_51,
W_SRVREQ_D.ATTRIB_05,
W_ORG_DH.TOP_LVL_NAME,
W_ORG_D.ATTRIB_06,
W_PRODUCT_GROUPS_D.PRODUCT_LINE,
W_PRODUCT_D.PROD_NAME)
The mapping is very simple:
Source Qualifier -> Stored procedure -> Update strategy (only two ports: ROW_WID and custom column) -> Target Table
When I launch the DAC Execution Plan, the corresponding task takes a long time (40 minutes). But when I launch the same workflow from Informatica PowerCenter Workflow Manager, it only takes 50 seconds. In the session log for the task I can see that most of the time is spent on the updates: when DAC is running, the writer updates 10,000 records every 6-7 minutes, but when Workflow Manager is running, the writer updates 10,000 records every 8-9 seconds.
So what happens in the DAC to cause such a large time difference? Is there a way to reduce the execution time when the task is launched from DAC?
Thanks
Best Regards
Benjamin Tey

Have you tried using the bulk load type?
In Workflow Manager, can you open the associated task, navigate to the Mapping tab, and select the target table?
What is the value for "Target load type", and which of the following boxes are checked: Insert, Update as Update, Update as Insert, Update else Insert, Delete? -
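The slow-writer symptom described above is typical of row-by-row commits versus batched updates, which is the kind of difference the "Target load type" setting influences. A small illustrative sketch using SQLite (not Informatica code) shows the two shapes of the same update:

```python
# Illustrative only: the same 10,000-row update done row-by-row versus
# batched. With a remote database each per-row statement also pays a
# network round trip, which multiplies the difference dramatically.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (row_wid INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, "old") for i in range(10_000)])
conn.commit()

# Row-by-row: one UPDATE statement per record.
for i in range(10_000):
    conn.execute("UPDATE t SET val = 'new' WHERE row_wid = ?", (i,))
conn.commit()

# Batched: one executemany call, one commit.
conn.executemany("UPDATE t SET val = 'newer' WHERE row_wid = ?",
                 [(i,) for i in range(10_000)])
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM t WHERE val = 'newer'").fetchone()[0])
# 10000
```

If DAC overrides the session to a normal (row-by-row) load while the standalone workflow uses bulk, that alone can explain a minutes-versus-seconds gap.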
Dears,
I am getting an error while executing the DAC task for "Change Capture for OLTP". The error is "Error creating view: ORA-00955: name is already used by an existing object". Here is the log; please help. Thanks very much!
# of prune days: 45
Capturing data modified after: 2009-05-27 16:07:51.0 in Incremental Mode.
2009-07-11 16:26:32.337 - Executing :
DBMS_STATS.GATHER_TABLE_STATS(ownname => 'e2466c', tabname => 'S_ETL_R_IMG_C4', estimate_percent => 30, method_opt => 'FOR ALL COLUMNS SIZE AUTO',cascade => true )
2009-07-11 16:26:32.446 - Executed Successfully.
after 1 attempts
2009-07-11 16:26:32.446 - Executing :
DBMS_STATS.GATHER_TABLE_STATS(ownname => 'e2466c', tabname => 'S_ETL_D_IMG_C4', estimate_percent => 30, method_opt => 'FOR ALL COLUMNS SIZE AUTO',cascade => true )
2009-07-11 16:26:32.462 - Executed Successfully.
after 1 attempts
2009-07-11 16:26:32.462 - Executing :
TRUNCATE TABLE S_ETL_I_IMG_C4
2009-07-11 16:26:32.508 - Executed Successfully.
2009-07-11 16:26:32.508 - Executing :
INSERT /*+APPEND*/ INTO S_ETL_I_IMG_C4
(ROW_ID, MODIFICATION_NUM, OPERATION, LAST_UPD)
SELECT
ROW_ID
,MODIFICATION_NUM
,'I'
,LAST_UPD
FROM
s_employee
WHERE
s_employee.LAST_UPD > TO_DATE('2009-05-27 04:07:51', 'YYYY-MM-DD HH:MI:SS')
AND NOT EXISTS
(
SELECT
ROW_ID
,MODIFICATION_NUM
,'I'
,LAST_UPD
FROM
S_ETL_R_IMG_C4
WHERE
S_ETL_R_IMG_C4.ROW_ID = s_employee.ROW_ID
AND S_ETL_R_IMG_C4.MODIFICATION_NUM = s_employee.MODIFICATION_NUM
AND S_ETL_R_IMG_C4.LAST_UPD = s_employee.LAST_UPD
)
2009-07-11 16:26:32.524 - Executed Successfully.
2009-07-11 16:26:32.524 - Executing :
INSERT /*+APPEND*/ INTO S_ETL_I_IMG_C4
(ROW_ID, MODIFICATION_NUM, OPERATION, LAST_UPD)
SELECT
ROW_ID
,MODIFICATION_NUM
,'D'
,LAST_UPD
FROM
S_ETL_D_IMG_C4
WHERE NOT EXISTS
(
SELECT
'X'
FROM
S_ETL_I_IMG_C4
WHERE
S_ETL_I_IMG_C4.ROW_ID = S_ETL_D_IMG_C4.ROW_ID
)
2009-07-11 16:26:32.524 - Executed Successfully.
2009-07-11 16:26:32.524 - Executing :
DBMS_STATS.GATHER_TABLE_STATS(ownname => 'e2466c', tabname => 'S_ETL_I_IMG_C4', estimate_percent => 30, method_opt => 'FOR ALL COLUMNS SIZE AUTO',cascade => true )
2009-07-11 16:26:32.633 - Executed Successfully.
after 1 attempts
2009-07-11 16:26:32.633 - Executing removing rowid duplicates.
2009-07-11 16:26:32.633 - Executed successfully removing rowid duplicates.
2009-07-11 16:26:32.633 - Executing :
DROP VIEW V_employee
2009-07-11 16:26:32.633 - Executed Successfully.
2009-07-11 16:26:32.649 - Executing :
CREATE VIEW V_employee AS
SELECT
S_EMPLOYEE.ROW_ID
,S_EMPLOYEE.MODIFICATION_NUM
,S_EMPLOYEE.LAST_UPD
,S_EMPLOYEE.EMP_NUM
,S_EMPLOYEE.EMP_NAME
FROM
S_EMPLOYEE,
S_ETL_I_IMG_C4
WHERE
S_EMPLOYEE.ROW_ID = S_ETL_I_IMG_C4.ROW_ID
2009-07-11 16:26:32.649 - ERROR: While executing :
CREATE VIEW V_employee AS
SELECT
S_EMPLOYEE.ROW_ID
,S_EMPLOYEE.MODIFICATION_NUM
,S_EMPLOYEE.LAST_UPD
,S_EMPLOYEE.EMP_NUM
,S_EMPLOYEE.EMP_NAME
FROM
S_EMPLOYEE,
S_ETL_I_IMG_C4
WHERE
S_EMPLOYEE.ROW_ID = S_ETL_I_IMG_C4.ROW_ID
ORA-00955: name is already used by an existing object
Error creating view: ORA-00955: name is already used by an existing object
Will try again!
2009-07-11 16:26:32.665 - ERROR: While executing :
CREATE VIEW V_employee AS
SELECT
S_EMPLOYEE.ROW_ID
,S_EMPLOYEE.MODIFICATION_NUM
,S_EMPLOYEE.LAST_UPD
,S_EMPLOYEE.EMP_NUM
,S_EMPLOYEE.EMP_NAME
FROM
S_EMPLOYEE,
S_ETL_I_IMG_C4
WHERE
S_EMPLOYEE.ROW_ID = S_ETL_I_IMG_C4.ROW_ID
ORA-00955: name is already used by an existing object
Error creating view: ORA-00955: name is already used by an existing object
Will try again!
2009-07-11 16:26:32.665 - ERROR: While executing :
CREATE VIEW V_employee AS
SELECT
S_EMPLOYEE.ROW_ID
,S_EMPLOYEE.MODIFICATION_NUM
,S_EMPLOYEE.LAST_UPD
,S_EMPLOYEE.EMP_NUM
,S_EMPLOYEE.EMP_NAME
FROM
S_EMPLOYEE,
S_ETL_I_IMG_C4
WHERE
S_EMPLOYEE.ROW_ID = S_ETL_I_IMG_C4.ROW_ID
ORA-00955: name is already used by an existing object
Error creating view: ORA-00955: name is already used by an existing object
Will try again!
2009-07-11 16:26:32.68 - ERROR: While executing :
CREATE VIEW V_employee AS
SELECT
S_EMPLOYEE.ROW_ID
,S_EMPLOYEE.MODIFICATION_NUM
,S_EMPLOYEE.LAST_UPD
,S_EMPLOYEE.EMP_NUM
,S_EMPLOYEE.EMP_NAME
FROM
S_EM

Hi Chin,
If you created containers and tables in the metadata in DAC, check whether you created the environment variable on your local system.
Thanks
Ranga -
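For what it's worth, ORA-00955 just means another object already owns the name V_employee (often a leftover table or synonym from a previous run), which is why the DROP VIEW in the log cannot clear it. A small sketch reproducing the same class of conflict with SQLite, whose catalog also shares one namespace between tables and views (the error text differs from Oracle's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simulate a leftover object that already owns the name the
# change-capture step wants to use for its view.
conn.execute("CREATE TABLE V_employee (row_id TEXT)")

try:
    conn.execute("CREATE VIEW V_employee AS SELECT 1 AS x")
except sqlite3.OperationalError as e:
    print(e)  # SQLite's equivalent of ORA-00955 (message text differs)

# DROP VIEW cannot clear the conflict because the existing object is a
# table; dropping (or renaming) the conflicting table resolves the clash.
conn.execute("DROP TABLE V_employee")
conn.execute("CREATE VIEW V_employee AS SELECT 1 AS x")
print(conn.execute("SELECT x FROM V_employee").fetchone())  # (1,)
```

In the Oracle case, querying ALL_OBJECTS for OBJECT_NAME = 'V_EMPLOYEE' would show what type of object is holding the name.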
DAC Task are queued but not running
Hi Everybody,
Currently we are running the ETL for Financials only; there are 394 tasks in total, of which 285 have completed successfully. At times, some of the tasks just get stuck on a particular action. For example, the analyze task after dimension population would remain in Queued status and never switch to Running, so we had to abort the ETL and restart it, which resulted in successful completion of the previously queued task. For the past two days the ETL has once again been stuck (queued but not running).
For example, the following task shows a status of Running:
51 SIL_InventoryProductDimension_SCDUpdate Running 2009-03-02 21:09:59.0 DataWarehouse DataWarehouse SILOS Update Slowly Changing Dimension Informatica 0 0 -2 CUST R11 5 10
But if I right click on the details
I get this
1 SIL_InventoryProductDimension_SCDUpdate_Full Running 2009-03-02 17:40:41.0 2009-03-02 21:09:09.0 3 hour(s), 28 min(s), 28 sec(s) Verifying if workflow still running on Informatica Server 'Oracle_BI_DW_Server'.
If still running will attach to the workflow. DataWarehouse Informatica 0 -2 0
And the remaining detail tasks for this are in queued status.
What could be the reason?
Does it need the Informatica Workflow Manager open? It is currently open, but it may have been accidentally closed in between.
The Informatica services have been running the whole time.
Thank you very much
Nilesh
Edited by: njethwa on Jan 6, 2010 5:39 PM

Hi
This sometimes happens with DAC. We faced the same issue and had an SR open with Siebel for almost two weeks; they couldn't help us out.
Restart your DAC server, do a dry run, and then run your full ETL.
The analyze task should not hang anymore.
I don't have a definitive reason why this happens, but my guess is that it has to do with Java heap memory, so restarting the server frees up the memory (a guess, not confirmed).
But it works. Try it.
Let me know if this helps
Thanks
H -
DAC tasks are not running in sync: parent/child ROW_WID values do not match.
After the DAC run completed, when I tested the reports, one of the reports was not populated. The root cause I found is that the child table W_PRODUCT_DX does not have the same ROW_WID values as the parent W_PRODUCT_D, so the join fails to pull the child records.
Looking at the log file, the custom task that populates the child table ran ahead of the parent dimension table's task, so the child was not populated with the new/updated ROW_WID values.
Is there a setting on each task so that they run in a particular order of execution?
Any ideas on how to resolve this problem? Help appreciated.

Did you create an entirely new custom task or modify an existing one?
If you created one from scratch, you will need to analyze the location of the other tasks and modify the order in your execution plan.
This section of the documentation covers execution order:
http://download.oracle.com/docs/cd/E10783_01/doc/bi.79/e10759/dacdesignetl.htm#i1041265 -
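In other words, the fix is to declare the parent dimension task as a predecessor of the custom child-table task. As a generic illustration (hypothetical task names, not DAC's actual scheduler), dependency-respecting execution order is just a topological sort:

```python
from collections import deque

# Hypothetical task names; the edge parent -> child is the point: the
# custom task loading W_PRODUCT_DX must list the task loading
# W_PRODUCT_D as a predecessor, or the scheduler is free to run it first.
deps = {
    "SIL_ProductDimension": [],                                    # loads W_PRODUCT_D
    "Custom_ProductDimensionExtension": ["SIL_ProductDimension"],  # loads W_PRODUCT_DX
    "PostLoad_Analyze": ["Custom_ProductDimensionExtension"],
}

def topo_order(deps):
    """Kahn's algorithm: tasks run only after all their predecessors."""
    indeg = {t: len(parents) for t, parents in deps.items()}
    children = {t: [] for t in deps}
    for t, parents in deps.items():
        for p in parents:
            children[p].append(t)
    ready = deque(t for t, d in indeg.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    if len(order) != len(deps):
        raise ValueError("cycle in task dependencies")
    return order

print(topo_order(deps))
# ['SIL_ProductDimension', 'Custom_ProductDimensionExtension', 'PostLoad_Analyze']
```

Without the declared edge, both orderings are valid to the scheduler, which is exactly the out-of-order symptom reported above.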
We sometimes see this failure intermitently when using the FlexUnit Ant task to run tests in a CI environment. The Ant task throws this exception:
java.util.concurrent.ExecutionException: could not close client/server socket
I have seen this for a while now, and still see it with the latest 4.1 RC versions.
Here is the console output seen along with the above exception:
FlexUnit player target: flash
Validating task attributes ...
Generating default values ...
Using default working dir [C:\DJTE\commons.formatter_swc\d3flxcmn32\extracted\Source\Flex]
Using the following settings for the test run:
FLEX_HOME: [C:\dev\vert-d3flxcmn32\302100.41.0.20110323122739_d3flxcmn32]
haltonfailure: [false]
headless: [false]
display: [99]
localTrusted: [true]
player: [flash]
port: [1024]
swf: [C:\DJTE\commons.formatter_swc\d3flxcmn32\extracted\build\commons.formatter.tests.unit.swf]
timeout: [1800000ms]
toDir: [C:\DJTE\commons.formatter_swc\d3flxcmn32\reports\xml]
Setting up server process ...
Entry [C:\DJTE\commons.formatter_swc\d3flxcmn32\extracted\build] already available in local trust file at [C:\Users\user\AppData\Roaming\Macromedia\Flash Player\#Security\FlashPlayerTrust\flexUnit.cfg].
Executing 'rundll32' with arguments:
'url.dll,FileProtocolHandler'
'C:\DJTE\commons.formatter_swc\d3flxcmn32\extracted\build\commons.formatter.tests.unit.swf'
The ' characters around the executable and arguments are
not part of the command.
Starting server ...
Opening server socket on port [1024].
Waiting for client connection ...
Client connected.
Setting inbound buffer size to [262144] bytes.
Receiving data ...
Sending acknowledgement to player to start sending test data ...
Stopping server ...
End of test data reached, sending acknowledgement to player ...
When the problem occurs, it is not always during the running of any particular test (that I am aware of). Recent runs where this failure was seen had the following number of tests executed (note: the total number that should be run is 45677): 18021, 18, 229.
Here is a "good" run when the problem does not occur:
Setting inbound buffer size to [262144] bytes.
Receiving data ...
Sending acknowledgement to player to start sending test data ...
Stopping server ...
End of test data reached, sending acknowledgement to player ...
Closing client connection ...
Closing server on port [1024] ...
Analyzing reports ...
Suite: com.formatters.help.TestGeographicSiteUrls
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.008 sec
Suite: com.formatters.functionalUnitTest.testCases.TestNumericUDF
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.071 sec
Results :
Tests run: 45,677, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 201.186 sec
Has anyone else run across this problem?
Thanks,
Trevor

I am not sure if this information will help everyone, but here goes...
For us, these problems with FlexUnit tests crashing the Flash Player appear to be related to couple of factors. Recently, we moved up from Flex 3.2 to Flex 4.1 as our development baseline. Many people complained that their development environment (Flash Builder, etc.) was much more unstable. Apparently, 4.1 produces SWFs that require more memory to run than 3.2 does? Anyway, we still had Flash Player 10.1 as our runtime baseline. Apparently, that version of the player was not as capable of running larger FlexUnit test SWFs, and would crash (as I posted months earlier). I upgraded to the latest 10.3 standalone player versions, and the crashes have now ceased. It would be nice to know exactly what was causing the crashes, but memory management (or lack of) is my best guess.
So, if you are seeing these issues, try upgrading to the latest Flash Player version.
Regards,
Trevor -
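For readers unfamiliar with the handshake the Ant task log describes, here is a minimal Python sketch of the same server-side flow: open a socket, wait for the player to connect, receive test data until an end marker, acknowledge, and close. The END_MARKER token and message shapes are assumptions standing in for FlexUnit's real protocol; a client that dies mid-run leaves the receive loop early, which is the situation that surfaces as "could not close client/server socket".

```python
import socket
import threading

# Assumed end-of-data token, NOT FlexUnit's actual wire protocol.
END_MARKER = b"<endOfTestRun/>"

srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))        # "Opening server socket on port ..."
port = srv.getsockname()[1]
srv.listen(1)

results = []

def serve():
    conn, _ = srv.accept()        # "Waiting for client connection ..."
    data = b""
    while END_MARKER not in data:  # "Receiving data ..."
        chunk = conn.recv(4096)
        if not chunk:             # player crashed mid-run: this path is
            break                 # what the Ant task reports as a
        data += chunk             # client/server socket failure
    conn.sendall(b"<ack/>")       # "sending acknowledgement to player ..."
    conn.close()                  # "Closing client connection ..."
    srv.close()                   # "Closing server on port ..."
    results.append(data)

t = threading.Thread(target=serve)
t.start()

cli = socket.create_connection(("127.0.0.1", port))  # the player connects
cli.sendall(b"<testResult/>" + END_MARKER)
ack = cli.recv(64)
cli.close()
t.join()
print(ack)
```

This matches Trevor's diagnosis: the server side is fine, but if the player process crashes partway through (e.g. from memory pressure), the connection drops before the end marker arrives.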
I have a function module that performs a call transaction to create a transfer order. I have two tasks that call this function module; one tasks method calls the function and passes a flag to inform the transaction to be processed in foreground, the other tasks method calls the same function, but passes a flag to inform the transaction is to be processed in background mode.
The first task tries to create the transfer order in background; if it cannot create a transfer order, an exception is raised, and in the workflow the exception path then invokes the other task to create the order in the foreground.
In the test environment, this works as desired.
In the production environment, the background task **always** fails; However:
1) Processing the foreground transaction from the inbox always succeeds (I have 50 instances of this so far without failure)
2) If I use the Business Object Builder, and invoke the method used to create the object in background mode that the task uses, it works with no errors.
So in summary:
Test environment - no problem.
Production environment - background task always fails.
Production environment - foreground task always succeeds with no errors; uses same Function module but calls transaction in foreground mode.
Production environment - Can call the method in Business Object Builder test tool and it works fine.
This is also happening with another two tasks that I wrote with a different function module to do a similar transfer order creation.
Can anybody shine some light on this madness?
- Tony

Hi Mike,
WF-BATCH is exactly the same in Test and Production, apart from the email address, which was missing in Production; I have just maintained that, and now they are identical.
There is a task before these steps that provides the workflow container with material numbers, but this is to guide the workflow, and reads from a database entry that definitely exists at time of read (there is at least a day between the creation of the sales order where it reads this from and the creation of the transfer orders).
One other difference; both of the failing tasks are in two separate parallel paths of a fork (one for each material of up to two in the order). Can't see how that could affect things, but then again I can't see what's going wrong here anyway.
I have functionality in all of my tasks to write the transaction messages back to the IDoc that started the workflow off; I'm going to activate that when I get back to work, and see if that shows anything.
Cheers,
Tony. -
SSIS Database Transfer Task failure
Hi,
The Database Trasnfer Task has failed with the following error......
failed with the following error: "Invalid object name 'dbo.exampleViewName.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
Does anybody know what this means?
Thanks,
Ben
Mr Shaw... One day I might know a thing or two about SQL Server!

That means that it can't find the object in the database. Has the table been dropped since the creation of the package?
Please check whether the table is still present in the database and is under the correct schema.
Please Mark This As Answer if it solved your issue
Please Vote This As Helpful if it helps to solve your issue
Visakh
No, the table has not been dropped.
I am running the task with the source database online.
Mr Shaw... One day I might know a thing or two about SQL Server! -
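One way to rule out this class of "Invalid object name" error before the transfer task runs is to check the catalog for the object. A hedged sketch using SQLite's sqlite_master as a stand-in for SQL Server's sys.objects (the view name comes from the error message above; the schema and query against a real SQL Server instance would differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER)")
conn.execute("CREATE VIEW exampleViewName AS SELECT id FROM orders")

def object_exists(conn, name):
    # Stand-in for SQL Server's
    #   SELECT 1 FROM sys.objects WHERE name = ?
    row = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE name = ?", (name,)
    ).fetchone()
    return row is not None

print(object_exists(conn, "exampleViewName"))  # True
print(object_exists(conn, "someDroppedView"))  # False
```

If the object exists but the check still fails under the task's credentials, the likely culprit is the schema qualifier (dbo vs. the connecting user's default schema), which is the second thing the reply above suggests verifying.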
DAC - Task Group inactivated - but Child tasks got executed.
Hi
I have inactivated a Task Group and assembled the subject area.
Built the Execution plan and executed.
When I checked for the Task Group in the execution history after completion, the Task Group is not present, but its child tasks are.
I am not very familiar with DAC; can anybody help me find the problem?
DAC Version information:
Oracle BI Data Warehouse Administration Console
Dac Build AN 10.1.3.4.1.patch.20110427.0538, Build date: April 27 2011
Management console to setup, configure, load and administer the Oracle Business Analytics Warehouse
Schema version - 34, Index version - 33, Repository version - 6, Seed data version - 9
Regards,
Suresh

You need to inactivate the tasks as well, along with the task group.
Let me know if you need any other details here. If not, please mark this post as helpful. -
Hi All,
I am seeing a username against a failure of a task that is assigned to the group "System Administrator". When I look at the "Task Assignment History", I see two rows for it, as follows:
Row 1:
Task Status: Rejected
Task Action: User
Assign Type: Group
Assigned To User: Angelo Saiano [10026374]
Assigned To Group: SYSTEM ADMINISTRATORS
Assigned By: 10026374
Assigned Date: April 18, 2012
Row 2:
Task Status: Pending
Task Action: Engine
Assign Type: Group
Assigned To User:
Assigned To Group: SYSTEM ADMINISTRATORS
Assigned By:
Assigned Date: April 18, 2012
In my understanding, it should have gone to the Engine. Why is it going to the user? Am I missing some setting in OIM that is causing this error? This is for OIM 9102 BP11.
This task pertains to changing the AD password. The issue is sporadic: it works for most users and fails for some. Once a system admin resets the user's password, it starts working for that user.
Please advise.
Edited by: devinderc on Apr 18, 2012 9:21 AM

Hi
Your JDK version is not supported; you need to use JDK 1.6.0_10 or later. Please correct this.
Another possible reason could be your environment variables. Ensure that you follow the installation guide.
Regards
Shashidhar -
DAC task shows as Running when in fact Informatica workflow is completed
Good day,
My environment:
Dac Build AN 10.1.3.4.1.20090415.0146 , Windows 2003 Server Release 2
Informatica 8.1.1. SP5 0129 135 (R117 D86) , Windows 2003 Server Release 2
I am having two separate issues with a DAC execution plan.
1. During the runtime of the DAC execution plan, some SDE mappings that have completed in Informatica are still displayed with status 'Running' in DAC. As a result, not all SIL mappings get executed. I am able to unit test these mappings (both SDE and SIL), so the problem is not with Informatica.
2. Even though the execution plan was generated with options for both full and incremental tasks, each time I submit the execution plan the full mappings get executed. The $$LAST_UPDATE_DATE variable is defined in both Informatica and DAC.
Any help is greatly appreciated.
Thank you,
Maxim

What version of BI Applications are you on? Also, have you applied all the DAC patches? I recall there was a DAC bug and an associated patch related to this. Check this Metalink note:
OBIA DAC ETL Execution Plan does not complete even though all steps are complete [ID 848117.1]
If this was helpful, please mark as answered. -
DAC tasks failing because parameter files are missing
We are unable to find any of the parameter files in our installation, although all the source files and lookup files have been copied from DAC's directory to the Informatica source and lookup directories.
I never had any issue with these source, lookup, and parameter files before; they get installed in their respective directories during installation, after which we copy SrcFiles and lookup files to a different location as per the instructions.
Please advise on the best approach. I am hoping we don't need to reinstall everything from scratch.
Below the error.
ERROR : TM_6292 : (6432|6620) Session task instance [SIL_InsertRowInRunTable]: VAR_27015 [Cannot find specified parameter file [D:\Informatica PowerCenter 7.1.4\Server\SrcFiles\SILOS.SIL_InsertRowInRunTable.txt] for session [SIL_InsertRowInRunTable].].

This error usually happens when you do not have a physical connection defined in the Workflow Manager. As part of that particular workflow, a file is created on the fly to run the test insert. Double-check your physical connections.
Go to Page 119 in the documentation: The section "To configure the database connections" (http://download-uk.oracle.com/docs/cd/E10021_01/doc/bi.79/b31979.pdf)
In the Informatica Workflow Manager, you need to create connections that match the values from the Setup tab in the DAC.
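If a parameter file really is missing, one stopgap while debugging is to regenerate it by hand. PowerCenter parameter files are plain text with a section header followed by name=value lines; the sketch below writes one in the [Folder.SessionName] shape implied by the file name in the error above. The exact header format and the parameter names shown are assumptions, so compare against a file generated by your own DAC installation before relying on it:

```python
import tempfile
from pathlib import Path

def write_param_file(directory, folder, session, params):
    """Write a session parameter file as a [Folder.SessionName] header
    plus name=value lines. Header format and parameter names here are
    assumptions; verify against a DAC-generated file."""
    path = Path(directory) / f"{folder}.{session}.txt"
    lines = [f"[{folder}.{session}]"]
    lines += [f"{name}={value}" for name, value in params.items()]
    path.write_text("\n".join(lines) + "\n")
    return path

tmpdir = tempfile.mkdtemp()
p = write_param_file(
    tmpdir, "SILOS", "SIL_InsertRowInRunTable",
    {"$$LAST_EXTRACT_DATE": "01/01/2011",
     "$DBConnection_OLAP": "DataWarehouse"},
)
print(p.name)                          # SILOS.SIL_InsertRowInRunTable.txt
print(p.read_text().splitlines()[0])   # [SILOS.SIL_InsertRowInRunTable]
```

Note the file name matches the one the session error message looks for (Folder.Session.txt in the SrcFiles directory), which is the convention visible in the error quoted above.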
Hi Gurus,
I am unable to get the primary source and primary target in the task options. How do I configure that?
Thanks

Hi Chin,
If you created containers and tables in the metadata in DAC, check whether you created the environment variable on your local system.
Thanks
Ranga