0ORGUNIT_ATT full load failed
The full load failed. When I analyzed the error, I found that the job had terminated in the source system.
I checked the job overview in the source system and there is a dump.
The dump is:
Runtime Errors ITAB_DUPLICATE_KEY
Date and Time 02.02.2009 08:44:59
Short text
A row with the same key already exists.
What happened?
Error in the ABAP Application Program
The current ABAP program "SAPLHRMS_BW_PA_OS" had to be terminated because it has come across a statement that unfortunately cannot be executed.
What can you do?
Note down which actions and inputs caused the error.
To process the problem further, contact your SAP system administrator.
Using Transaction ST22 for ABAP Dump Analysis, you can look at and manage termination messages, and you can also keep them for a long time.
Error analysis
An entry was to be entered into the table "\FUNCTION=HR_BW_EXTRACT_IO_ORGUNIT\DATA=MAIN_COSTCENTERS[]" (which should have had a unique table key (UNIQUE KEY)).
However, there already existed a line with an identical key.
The insert operation could have occurred as a result of an INSERT or MOVE command, or in conjunction with a SELECT ... INTO.
The statement "INSERT INITIAL LINE ..." cannot be used to insert several
initial lines into a table with a unique key.
How to correct the error
Probably the only way to eliminate the error is to correct the program.
If the error occurs in a non-modified SAP program, you may be able to
find an interim solution in an SAP Note.
If you have access to SAP Notes, carry out a search with the following
keywords:
"ITAB_DUPLICATE_KEY" " "
"SAPLHRMS_BW_PA_OS" or "LHRMS_BW_PA_OSU06"
"HR_BW_EXTRACT_IO_ORGUNIT"
If you cannot solve the problem yourself and want to send an error
notification to SAP, include the following information:
Please suggest <removed by moderator>.
pramod
Edited by: Siegfried Szameitat on Feb 2, 2009 11:06 AM
Hi Asish,
Runtime Errors ITAB_DUPLICATE_KEY
Date and Time 02.02.2009 06:18:44
180 i0027_flag = ' '
181 ombuffer_mode = ' '
182 TABLES
183 in_objects = in_objects
184 main_costcenters = main_co
185 EXCEPTIONS
186 OTHERS = 0.
187
188
>>>>> INSERT LINES OF main_co INTO TABLE main_costcenters.
190
191 last_plvar = orgunits-plvar.
192 REFRESH in_objects.
193 MOVE-CORRESPONDING orgunits TO in_objects.
194 APPEND in_objects.
195 ENDIF.
196
197 ENDLOOP.
198 ENDIF.
199
200 LOOP AT orgunits.
201
202 CLEAR: infty1000, main_co, infty1008, l_t_hrobject.
203 REFRESH: infty1000, main_co, infty1008, l_t_hrobject.
204
205 APPEND orgunits TO l_t_hrobject.
206
207 LOOP AT l_t_i1000_all INTO infty1000 WHERE objid = orgunits-objid.
208 APPEND infty1000.
I checked this; it reflects a structure. I cannot modify it, as this is a standard table.
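To see why the dump fires on the `INSERT LINES OF main_co INTO TABLE main_costcenters` statement, here is a minimal Python sketch (not ABAP) of the failure mode: the extractor's MAIN_COSTCENTERS table has a UNIQUE KEY, and the insert aborts the moment a new line carries a key that is already present. The field names below (objid, costcenter) are illustrative, not the real structure of the extractor's table.

```python
# Model of an ABAP internal table declared WITH UNIQUE KEY: the table is
# a dict keyed by the key fields, and a second insert of the same key
# raises, mirroring the ITAB_DUPLICATE_KEY runtime error.

def insert_lines(target, new_lines, key_fields):
    """Insert rows into `target`, a dict keyed by the unique key tuple.

    Raises ValueError on a duplicate key, as the ABAP runtime does.
    A corrected extractor (the kind of fix an SAP Note would deliver)
    would skip or merge lines whose key already exists.
    """
    for row in new_lines:
        key = tuple(row[f] for f in key_fields)
        if key in target:
            raise ValueError(f"ITAB_DUPLICATE_KEY: {key} already exists")
        target[key] = row

main_costcenters = {}
insert_lines(main_costcenters,
             [{"objid": "50000123", "costcenter": "CC01"}],
             key_fields=("objid",))
try:
    # A second line with the same key reproduces the dump's condition.
    insert_lines(main_costcenters,
                 [{"objid": "50000123", "costcenter": "CC99"}],
                 key_fields=("objid",))
except ValueError as err:
    print(err)  # duplicate key detected, as in the short dump
```

Since SAPLHRMS_BW_PA_OS is standard code, the practical fix is not to change the loop but to search SAP Notes with the keywords listed in the dump.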
pramod
Similar Messages
-
Process chain for a full load failed
Hi,
I have a process chain where I load data from an InfoCube to a DSO with the following steps:
Begin
Load data: Execute an Infopackage with full load.
Update from PSA
Activate Data: Activate data of DSO.
When I execute the process chain, the Execute InfoPackage step succeeds, but the Update from PSA step fails. Looking at the errors, I can see that when the InfoPackage finishes it needs to activate the data in the DSO, but the status of the request is yellow, and Update from PSA needs the QM status to be green. What can I do? The data transfers correctly to the DSO but is not active.
Thank you very much.
PS: Some steps and options may not be exactly as I wrote them because I'm working in Spanish, but the meaning is the same.
Hi....
If Update from PSA has failed, go to the Details tab in the InfoPackage monitor, expand the failed data packet and see the error message; there may be several reasons. Try to solve that issue, then delete the request from the target without setting the QM status to red, and reconstruct the request.
Actually, I am not following you clearly. I think your process chain is fine. Are you trying to say your ODS activation also failed? But how would ODS activation start if the load itself failed? Is the link on success or on failure?
Regards,
Debjani....
Edited by: Debjani Mukherjee on Oct 16, 2008 2:02 PM -
Full load failed with [Microsoft][ODBC SQL Server Driver]Datetime field
Hi,
we are doing a full load with RDBMS SQLServer.
It failed due to the below error.
[Microsoft][ODBC SQL Server Driver]Datetime field overflow. Can you please help
Thank you.
968651 wrote:
Hi,
we are doing a full load with RDBMS SQLServer.
It failed due to the below error.
[Microsoft][ODBC SQL Server Driver]Datetime field overflow. Can you please help
Thank you.
http://bit.ly/XUL950
Thanks,
Hussein -
Hi Experts,
When I am loading data from the ODS to the cube (full load), the load fails due to the below error.
Short dump in the Warehouse
Diagnosis
The data update was not completed. A short dump has probably been logged in BW providing information about the error.
System response
"Caller 70" is missing.
Further analysis:
Search in the BW short dump overview for the short dump belonging to the request. Pay attention to the correct time and date on the selection screen.
You get a short dump list using the Wizard or via the menu path "Environment -> Short dump -> In the Data Warehouse".
Error correction:
Follow the instructions in the short dump.
and i checked in ST22 and the error analysis is,
An exception occurred. This exception is dealt with in more detail below. The exception, which is assigned to the class 'CX_SY_OPEN_SQL_DB', was neither caught nor passed along using a RAISING clause, in the procedure "WRITE_ICFA" "(FORM)".
Since the caller of the procedure could not have expected this exception to occur, the running program was terminated.
The reason for the exception is:
The database system recognized that your last operation on the database would have led to a deadlock. Therefore, your transaction was rolled back to avoid this.
ORACLE always terminates any transaction that would result in a deadlock. The other transactions involved in this potential deadlock are not affected by the termination.
Can anyone please tell me why the error occurred and how to resolve it?
Thanks in advance
David
David,
It appears that there was a table lock when you executed your DTP. This means that there were multiple concurrent reads on one of the tables used by the DTP at the same time, which resulted in the error. Check your process chains once.
As of now, delete the request in IC and reload!
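Deleting the request and reloading works because Oracle rolls back only the deadlock victim; the competing transaction finishes, so a retry normally succeeds. As a sketch of that idea (Python, not part of the BW system; DeadlockError stands in for the driver surfacing ORA-00060):

```python
import time

class DeadlockError(Exception):
    """Stand-in for the DB driver raising a deadlock-victim error."""

def run_with_retry(transaction, attempts=3, backoff_seconds=0.0):
    """Re-run a transaction that was rolled back as a deadlock victim.

    Oracle terminates only the victim transaction; the others continue,
    so simply re-running the failed load (here, automatically) is the
    same recovery that deleting the request and reloading does by hand.
    """
    for attempt in range(1, attempts + 1):
        try:
            return transaction()
        except DeadlockError:
            if attempt == attempts:
                raise  # give up after the last attempt
            time.sleep(backoff_seconds)  # let the competing load finish

calls = {"n": 0}

def flaky_load():
    # Fails once as a deadlock victim, then succeeds on retry.
    calls["n"] += 1
    if calls["n"] == 1:
        raise DeadlockError("ORA-00060: deadlock detected")
    return "request loaded"

print(run_with_retry(flaky_load))  # → request loaded
```

The longer-term fix is still to schedule the conflicting loads so they do not read and write the same tables concurrently.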
-VA -
OBIA 7.9.5 "Financial Analytics" Full load fails
Hi All,
we are implementing OBIA 7.9.5 for an Oracle Vision instance 11.5.10.
Installation of the components was successful.
We followed the respective configuration steps for the Analytics module and configured it.
When we start the full load for Financial Analytics, out of 321 tasks, 215 ran successfully, 2 tasks failed, and the remaining were stopped.
Below are the failed tasks:
Load into Position Dimension-------------------------> Create Index INDEX W_POSITION_D_U1
TASK_GROUP_Extract_EmployeeDimension-----> Create Index INDEX W_EMPLOYEE_DS_U1
We are getting the below error log on index creation:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
EXCEPTION CLASS::: java.sql.SQLException
oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:745)
oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:210)
oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:961)
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1190)
oracle.jdbc.driver.OracleStatement.executeUpdateInternal(OracleStatement.java:1657)
oracle.jdbc.driver.OracleStatement.executeUpdate(OracleStatement.java:1626)
com.siebel.etl.database.DBUtils.executeUpdate(DBUtils.java:266)
com.siebel.etl.database.WeakDBUtils.executeUpdate(WeakDBUtils.java:357)
com.siebel.analytics.etl.etltask.SQLTask.doExecute(SQLTask.java:122)
com.siebel.analytics.etl.etltask.CreateIndexTask.doExecute(CreateIndexTask.java:90)
com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:271)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:200)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:149)
com.siebel.analytics.etl.etltask.GenericTaskImpl.run(GenericTaskImpl.java:430)
com.siebel.analytics.etl.taskmanager.XCallable.call(XCallable.java:63)
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:269)
java.util.concurrent.FutureTask.run(FutureTask.java:123)
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
java.lang.Thread.run(Thread.java:595)
459 SEVERE Tue Nov 10 20:40:50 GMT+05:30 2009 Failure detected while executing CREATE INDEX:W_POSITION_D:W_POSITION_D_U1.
Error Code: 12801.
Error Message: Error while execution : CREATE UNIQUE INDEX
W_POSITION_D_U1
ON
W_POSITION_D
(
INTEGRATION_ID ASC
,DATASOURCE_NUM_ID ASC
,EFFECTIVE_FROM_DT ASC
)
NOLOGGING PARALLEL
with error java.sql.SQLException: ORA-12801: error signaled in parallel query server P000
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
While analyzing the above error code on the Oracle forum, we got a hint from the below link:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
Even after making the changes mentioned in the above link and restarting the ETL, we are getting the same error message.
Let us know how to go about resolving this.
thanks
saran
With index failures, the first thing I typically do is check the data that's breaking the index.
So for the index:
W_POSITION_D_U1
ON
W_POSITION_D
(
INTEGRATION_ID ASC
,DATASOURCE_NUM_ID ASC
,EFFECTIVE_FROM_DT ASC
)
I issue this query to my DB:
SELECT INTEGRATION_ID, DATASOURCE_NUM_ID, EFFECTIVE_FROM_DT, COUNT(*) FROM W_POSITION_D
GROUP BY INTEGRATION_ID, DATASOURCE_NUM_ID, EFFECTIVE_FROM_DT HAVING COUNT(*) > 1;
This will at least list the records the index is failing on. -
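The GROUP BY ... HAVING COUNT(*) > 1 check above generalizes to any unique index. For reference, the same duplicate detection can be sketched in Python over extracted rows (the sample W_POSITION_D rows below are hypothetical):

```python
from collections import Counter

def duplicate_keys(rows, key_cols):
    """Same check as GROUP BY key_cols HAVING COUNT(*) > 1:
    return each key tuple that appears more than once, with its count."""
    counts = Counter(tuple(row[c] for c in key_cols) for row in rows)
    return {key: n for key, n in counts.items() if n > 1}

# Hypothetical W_POSITION_D rows: two share the full index key, so
# CREATE UNIQUE INDEX on these columns would fail with ORA-01452.
rows = [
    {"INTEGRATION_ID": "P100", "DATASOURCE_NUM_ID": 1, "EFFECTIVE_FROM_DT": "2009-01-01"},
    {"INTEGRATION_ID": "P100", "DATASOURCE_NUM_ID": 1, "EFFECTIVE_FROM_DT": "2009-01-01"},
    {"INTEGRATION_ID": "P200", "DATASOURCE_NUM_ID": 1, "EFFECTIVE_FROM_DT": "2009-01-01"},
]
print(duplicate_keys(rows, ["INTEGRATION_ID", "DATASOURCE_NUM_ID", "EFFECTIVE_FROM_DT"]))
# → {('P100', 1, '2009-01-01'): 2}
```

Whatever lists the offenders, the real work is deciding which source rows caused the double extraction before the index is rebuilt.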
Hi Experts,
One of my master data full loads failed; see the below error message in the Status tab:
Incorrect data records - error requests (total status RED)
Diagnosis
Data records were recognized as incorrect.
System response
The valid records were updated in the data target.
The request was marked as incorrect so that the data that was already updated cannot be used in reporting.
The incorrect records were not written in the data targets but were posted retroactively under a new request number in the PSA.
Procedure
Check the data in the error requests, correct the errors and post the error requests. Then set this request manually to green.
Can anyone please give me a solution?
Thanks
David
Hi,
I am loading the data from the application server; I don't have R/3 for this load. The below message is showing in the Status tab:
Request still running
Diagnosis
No errors could be found. The current process has probably not finished yet.
System response
The ALE inbox of the SAP BW is identical to the ALE outbox of the source system
and/or
the maximum wait time for this request has not yet run out
and/or
the batch job in the source system has not yet ended.
Current status
Thanks
David -
Task fails while running Full load ETL
Hi All,
I am running the full load ETL for Oracle R12 (vanilla instance) HR, but 4 tasks are failing: SDE_ORA_JobDimention, SDE_ORA_HRPositionDimention, SDE_ORA_CodeDimension_Pay_level and SDE_ORA_CodeDimensionJob. I changed the parameters for all these tasks as mentioned in the installation guide and rebuilt. Please help me out.
The log is like this for SDE_ORA_JobDimention:
DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
DIRECTOR> VAR_27028 Use override value [ORA_R12] for session parameter:[$DBConnection_OLTP].
DIRECTOR> VAR_27028 Use override value [9] for mapping parameter:[$$DATASOURCE_NUM_ID].
DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_JobDimension.$$JOBCODE_FLXFLD_SEGMENT_COL].
DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_JobDimension.$$JOBFAMILYCODE_FLXFLD_SEGMENT_COL].
DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_JobDimension.$$LAST_EXTRACT_DATE].
DIRECTOR> VAR_27028 Use override value [DEFAULT] for mapping parameter:[$$TENANT_ID].
DIRECTOR> TM_6014 Initializing session [SDE_ORA_JobDimension_Full] at [Fri Sep 26 10:52:05 2008]
DIRECTOR> TM_6683 Repository Name: [Oracle_BI_DW_Base]
DIRECTOR> TM_6684 Server Name: [Oracle_BI_DW_Base_Integration_Service]
DIRECTOR> TM_6686 Folder: [SDE_ORAR12_Adaptor]
DIRECTOR> TM_6685 Workflow: [SDE_ORA_JobDimension_Full]
DIRECTOR> TM_6101 Mapping name: SDE_ORA_JobDimension [version 1]
DIRECTOR> TM_6827 [C:\Informatica\PowerCenter8.1.1\server\infa_shared\Storage] will be used as storage directory for session [SDE_ORA_JobDimension_Full].
DIRECTOR> CMN_1805 Recovery cache will be deleted when running in normal mode.
DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
DIRECTOR> TM_6703 Session [SDE_ORA_JobDimension_Full] is run by 32-bit Integration Service [node01_HSCHBSCGN20031], version [8.1.1], build [0831].
MANAGER> PETL_24058 Running Partition Group [1].
MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
MANAGER> PETL_24001 Parallel Pipeline Engine running.
MANAGER> PETL_24003 Initializing session run.
MAPPING> CMN_1569 Server Mode: [ASCII]
MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
MAPPING> TM_6151 Session Sort Order: [Binary]
MAPPING> TM_6156 Using LOW precision decimal arithmetic
MAPPING> TM_6180 Deadlock retry logic will not be implemented.
MAPPING> TM_6307 DTM Error Log Disabled.
MAPPING> TE_7022 TShmWriter: Initialized
MAPPING> TM_6007 DTM initialized successfully for session [SDE_ORA_JobDimension_Full]
DIRECTOR> PETL_24033 All DTM Connection Info: [<NONE>].
MANAGER> PETL_24004 Starting pre-session tasks. : (Fri Sep 26 10:52:13 2008)
MANAGER> PETL_24027 Pre-session task completed successfully. : (Fri Sep 26 10:52:14 2008)
DIRECTOR> PETL_24006 Starting data movement.
MAPPING> TM_6660 Total Buffer Pool size is 32000000 bytes and Block size is 1280000 bytes.
READER_1_1_1> DBG_21438 Reader: Source is [dev], user [apps]
READER_1_1_1> BLKR_16003 Initialization completed successfully.
WRITER_1_*_1> WRT_8146 Writer: Target is database [orcl], user [obia], bulk mode [ON]
WRITER_1_*_1> WRT_8106 Warning! Bulk Mode session - recovery is not guaranteed.
WRITER_1_*_1> WRT_8124 Target Table W_JOB_DS :SQL INSERT statement:
INSERT INTO W_JOB_DS(JOB_CODE,JOB_NAME,JOB_DESC,JOB_FAMILY_CODE,JOB_FAMILY_NAME,JOB_FAMILY_DESC,JOB_LEVEL,W_FLSA_STAT_CODE,W_FLSA_STAT_DESC,W_EEO_JOB_CAT_CODE,W_EEO_JOB_CAT_DESC,AAP_JOB_CAT_CODE,AAP_JOB_CAT_NAME,ACTIVE_FLG,CREATED_BY_ID,CHANGED_BY_ID,CREATED_ON_DT,CHANGED_ON_DT,AUX1_CHANGED_ON_DT,AUX2_CHANGED_ON_DT,AUX3_CHANGED_ON_DT,AUX4_CHANGED_ON_DT,SRC_EFF_FROM_DT,SRC_EFF_TO_DT,DELETE_FLG,DATASOURCE_NUM_ID,INTEGRATION_ID,TENANT_ID,X_CUSTOM) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [W_JOB_DS]
WRITER_1_*_1> WRT_8003 Writer initialization complete.
WRITER_1_*_1> WRT_8005 Writer run started.
READER_1_1_1> BLKR_16007 Reader run started.
READER_1_1_1> RR_4029 SQ Instance [mplt_BC_ORA_JobDimension.Sq_Jobs] User specified SQL Query [SELECT
PER_JOBS.JOB_ID,
PER_JOBS.BUSINESS_GROUP_ID,
PER_JOBS.JOB_DEFINITION_ID,
PER_JOBS.DATE_FROM,
PER_JOBS.DATE_TO,
PER_JOBS.LAST_UPDATE_DATE AS CHANGED_ON_DT, PER_JOBS.LAST_UPDATED_BY, PER_JOBS.CREATED_BY, PER_JOBS.CREATION_DATE,
PER_JOB_DEFINITIONS.LAST_UPDATE_DATE AS AUX1_CHANGED_ON_DT,
PER_JOB_DEFINITIONS.JOB_DEFINITION_ID,
PER_JOBS.NAME,
PER_JOBS.JOB_INFORMATION1, PER_JOBS.JOB_INFORMATION3,
PER_JOBS. AS JOB_FAMILY_CODE, PER_JOB_DEFINITIONS. AS JOB_CODE,
'0' AS X_CUSTOM
FROM
PER_JOBS, PER_JOB_DEFINITIONS
WHERE
PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID]
WRITER_1_*_1> WRT_8158
*****START LOAD SESSION*****
Load Start Time: Fri Sep 26 10:53:05 2008
Target tables:
W_JOB_DS
READER_1_1_1> RR_4049 SQL Query issued to database : (Fri Sep 26 10:53:05 2008)
READER_1_1_1> CMN_1761 Timestamp Event: [Fri Sep 26 10:53:06 2008]
READER_1_1_1> RR_4035 SQL Error [
ORA-01747: invalid user.table.column, table.column, or column specification
Database driver error...
Function Name : Execute
SQL Stmt : SELECT
PER_JOBS.JOB_ID,
PER_JOBS.BUSINESS_GROUP_ID,
PER_JOBS.JOB_DEFINITION_ID,
PER_JOBS.DATE_FROM,
PER_JOBS.DATE_TO,
PER_JOBS.LAST_UPDATE_DATE AS CHANGED_ON_DT, PER_JOBS.LAST_UPDATED_BY, PER_JOBS.CREATED_BY, PER_JOBS.CREATION_DATE,
PER_JOB_DEFINITIONS.LAST_UPDATE_DATE AS AUX1_CHANGED_ON_DT,
PER_JOB_DEFINITIONS.JOB_DEFINITION_ID,
PER_JOBS.NAME,
PER_JOBS.JOB_INFORMATION1, PER_JOBS.JOB_INFORMATION3,
PER_JOBS. AS JOB_FAMILY_CODE, PER_JOB_DEFINITIONS. AS JOB_CODE,
'0' AS X_CUSTOM
FROM
PER_JOBS, PER_JOB_DEFINITIONS
WHERE
PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID
Oracle Fatal Error
Database driver error...
Function Name : Execute
SQL Stmt : SELECT
PER_JOBS.JOB_ID,
PER_JOBS.BUSINESS_GROUP_ID,
PER_JOBS.JOB_DEFINITION_ID,
PER_JOBS.DATE_FROM,
PER_JOBS.DATE_TO,
PER_JOBS.LAST_UPDATE_DATE AS CHANGED_ON_DT, PER_JOBS.LAST_UPDATED_BY, PER_JOBS.CREATED_BY, PER_JOBS.CREATION_DATE,
PER_JOB_DEFINITIONS.LAST_UPDATE_DATE AS AUX1_CHANGED_ON_DT,
PER_JOB_DEFINITIONS.JOB_DEFINITION_ID,
PER_JOBS.NAME,
PER_JOBS.JOB_INFORMATION1, PER_JOBS.JOB_INFORMATION3,
PER_JOBS. AS JOB_FAMILY_CODE, PER_JOB_DEFINITIONS. AS JOB_CODE,
'0' AS X_CUSTOM
FROM
PER_JOBS, PER_JOB_DEFINITIONS
WHERE
PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID
Oracle Fatal Error].
READER_1_1_1> CMN_1761 Timestamp Event: [Fri Sep 26 10:53:06 2008]
READER_1_1_1> BLKR_16004 ERROR: Prepare failed.
WRITER_1_*_1> WRT_8333 Rolling back all the targets due to fatal session error.
WRITER_1_*_1> WRT_8325 Final rollback executed for the target [W_JOB_DS] at end of load
WRITER_1_*_1> WRT_8035 Load complete time: Fri Sep 26 10:53:06 2008
LOAD SUMMARY
============
WRT_8036 Target: W_JOB_DS (Instance Name: [W_JOB_DS])
WRT_8044 No data loaded for this target
WRITER_1__1> WRT_8043 ****END LOAD SESSION*****
MANAGER> PETL_24031
***** RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****
Thread [READER_1_1_1] created for [the read stage] of partition point [mplt_BC_ORA_JobDimension.Sq_Jobs] has completed. The total run time was insufficient for any meaningful statistics.
Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [mplt_BC_ORA_JobDimension.Sq_Jobs] has completed. The total run time was insufficient for any meaningful statistics.
Thread [WRITER_1_*_1] created for [the write stage] of partition point [W_JOB_DS] has completed. The total run time was insufficient for any meaningful statistics.
MANAGER> PETL_24005 Starting post-session tasks. : (Fri Sep 26 10:53:06 2008)
MANAGER> PETL_24029 Post-session task completed successfully. : (Fri Sep 26 10:53:06 2008)
MAPPING> TM_6018 Session [SDE_ORA_JobDimension_Full] run completed with [0] row transformation errors.
MANAGER> PETL_24002 Parallel Pipeline Engine finished.
DIRECTOR> PETL_24013 Session run completed with failure.
DIRECTOR> TM_6022
SESSION LOAD SUMMARY
================================================
DIRECTOR> TM_6252 Source Load Summary.
DIRECTOR> CMN_1740 Table: [Sq_Jobs] (Instance Name: [mplt_BC_ORA_JobDimension.Sq_Jobs])
Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
DIRECTOR> TM_6253 Target Load Summary.
DIRECTOR> CMN_1740 Table: [W_JOB_DS] (Instance Name: [W_JOB_DS])
Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
DIRECTOR> TM_6023
===================================================
DIRECTOR> TM_6020 Session [SDE_ORA_JobDimension_Full] completed at [Fri Sep 26 10:53:07 2008]
To make use of the warehouse you would probably want to connect to an EBS instance in order to populate it, since the execution plan you intend to run is designed for the EBS data model. I guess if you really didn't want to connect to the EBS instance to pull data, you could build an execution plan using the Universal adapter. This allows you to load from flat files if you wish, but I wouldn't recommend making this a habit for an actual implementation, as it creates another potential point of failure (populating the flat files).
Thanks,
Austin -
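A note on the ORA-01747 in the log above: the VAR_27027 lines show $$JOBCODE_FLXFLD_SEGMENT_COL and $$JOBFAMILYCODE_FLXFLD_SEGMENT_COL defaulting to empty, and the generated SQL then contains "PER_JOBS. AS JOB_CODE" with no column name. A hedged Python sketch of that substitution (the template and the sample segment column 'JOB_INFORMATION2' are hypothetical, for illustration only):

```python
def render_job_select(jobcode_segment_col):
    """Sketch of how the mapping's extract SQL is assembled: the
    flex-field segment parameter is spliced in as a column name.
    With the empty default seen in the VAR_27027 log lines, the result
    contains 'PER_JOBS. AS JOB_CODE', which Oracle rejects with
    ORA-01747 (invalid user.table.column specification)."""
    return (f"SELECT PER_JOBS.{jobcode_segment_col} AS JOB_CODE "
            "FROM PER_JOBS")

print(render_job_select(""))
# → SELECT PER_JOBS. AS JOB_CODE FROM PER_JOBS   (invalid SQL)

# A concrete segment column produces a valid column reference instead;
# 'JOB_INFORMATION2' here is only a placeholder, not the real value.
print(render_job_select("JOB_INFORMATION2"))
```

This is consistent with the poster's mention of setting the flex-field parameters per the installation guide; if the sessions still log empty VAR_27027 defaults, the parameter change has not reached the failing tasks.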
Full load from R/3 failed due to bad data and no PSA option selected
SDN Experts,
Full load from R/3 failed loading into the ODS, and in the InfoPackage no PSA option is selected. How do I fix it? Can I rerun the InfoPackage again to load from R/3? Maybe it will fail again? Is this a design defect?
I will assign points. Thank you.
Les
Hi,
There is absolutely no problem in re-executing the package, but before that, check that your ODS and InfoSource are active and that the update rules, if any, are also active.
Also check that the target InfoProvider option is selected properly along with the PSA option, and check whether you have any subsequent process, like updating data from the ODS to a cube and so on.
You can also check the data availability in R/3 for the respective DataSource by executing RSA3.
Hope this helps.
Assign points if useful.
Cheers,
Pattan. -
Full load works, but delta fails - "Error in the Extractor"
Good morning,
We are using datasource 3FI_SL_ZZ_SI (Special Ledger line items) to load a cube, and are having trouble with the delta loads. If I run a full load, everything runs fine. If I run a delta load, it will initially fail with an error that simply states "Error in the Extractor" (no long text). If I repeat the delta load, it completes successfully with 0 records returned. If I then rerun the delta, I get the error again.
I've run extractions using RSA3, but they work fine - as I would expect since the full loads work. Unfortunately, I have not been able to find why the deltas aren't working. After searching the Forums, I've tried replicating the datasource, checked the job log in R/3 (nothing), and run the program RS_TRANSTRU_ACTIVATE_ALL, all to no avail.
Any ideas?
Thanks
We're running BW 3.5, R/3 4.71.
And it's just that easy....
Yes, it appears this is what the problem was. I'd been running the delta init without data transfer, and it was failing during the first true delta run. Once I changed the delta init so that it transferred data, the deltas worked fine. This was in our development system. I took a look in our production system where deltas have been running for quite some time, and it turns out the delta initialization there was done with data transfer.
Thank you very much! -
Execution Plan Failed for Full Load
Hi Team,
When I run the full load, the execution plan fails. I verified the log and found the below information from SEBL_VERT_8_1_1_FLATFILE.DATAWAREHOUSE.SIL_Vert.SIL_InsertRowInRunTable:
DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
DIRECTOR> VAR_27028 Use override value [SEBL_VERT_8_1_1_FLATFILE.DATAWAREHOUSE.SIL_Vert.SIL_InsertRowInRunTable.log] for session parameter:[$PMSessionLogFile].
DIRECTOR> VAR_27028 Use override value [1] for mapping parameter:[mplt_SIL_InsertRowInRunTable.$$DATASOURCE_NUM_ID].
DIRECTOR> VAR_27028 Use override value [21950495] for mapping parameter:[MPLT_GET_ETL_PROC_WID.$$ETL_PROC_WID].
DIRECTOR> TM_6014 Initializing session [SIL_InsertRowInRunTable] at [Mon Sep 26 15:53:45 2011].
DIRECTOR> TM_6683 Repository Name: [infa_rep]
DIRECTOR> TM_6684 Server Name: [infa_service]
DIRECTOR> TM_6686 Folder: [SIL_Vert]
DIRECTOR> TM_6685 Workflow: [SIL_InsertRowInRunTable] Run Instance Name: [] Run Id: [8]
DIRECTOR> TM_6101 Mapping name: SIL_InsertRowInRunTable [version 1].
DIRECTOR> TM_6963 Pre 85 Timestamp Compatibility is Enabled
DIRECTOR> TM_6964 Date format for the Session is [MM/DD/YYYY HH24:MI:SS]
DIRECTOR> TM_6827 [H:\Informatica901\server\infa_shared\Storage] will be used as storage directory for session [SIL_InsertRowInRunTable].
DIRECTOR> CMN_1805 Recovery cache will be deleted when running in normal mode.
DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
DIRECTOR> TM_6703 Session [SIL_InsertRowInRunTable] is run by 32-bit Integration Service [node01_eblnhif-czc80685], version [9.0.1 HotFix2], build [1111].
MANAGER> PETL_24058 Running Partition Group [1].
MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
MANAGER> PETL_24001 Parallel Pipeline Engine running.
MANAGER> PETL_24003 Initializing session run.
MAPPING> CMN_1569 Server Mode: [ASCII]
MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
MAPPING> TM_6151 The session sort order is [Binary].
MAPPING> TM_6156 Using low precision processing.
MAPPING> TM_6180 Deadlock retry logic will not be implemented.
MAPPING> TM_6187 Session target-based commit interval is [10000].
MAPPING> TM_6307 DTM error log disabled.
MAPPING> TE_7022 TShmWriter: Initialized
MAPPING> DBG_21075 Connecting to database [Connect_to_OLAP], user [OBAW]
MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
MAPPING> CMN_1022 Database driver error...
CMN_1022 [
Database driver error...
Function Name : Logon
ORA-12154: TNS:could not resolve service name
Database driver error...
Function Name : Connect
Database Error: Failed to connect to database using user [OBAW] and connection string [Connect_to_OLAP].]
MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
MAPPING> CMN_1076 ERROR creating database connection.
MAPPING> DBG_21520 Transform : LKP_W_PARAM_G_Get_ETL_PROC_WID, connect string : Relational:DataWarehouse
MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
MAPPING> TE_7017 Internal error. Failed to initialize transformation [MPLT_GET_ETL_PROC_WID.LKP_ETL_PROC_WID]. Contact Informatica Global Customer Support.
MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
MAPPING> TM_6006 Error initializing DTM for session [SIL_InsertRowInRunTable].
MANAGER> PETL_24005 Starting post-session tasks. : (Mon Sep 26 15:53:45 2011)
MANAGER> PETL_24029 Post-session task completed successfully. : (Mon Sep 26 15:53:45 2011)
MAPPING> TM_6018 The session completed with [0] row transformation errors.
MANAGER> PETL_24002 Parallel Pipeline Engine finished.
DIRECTOR> PETL_24013 Session run completed with failure.
DIRECTOR> TM_6022
SESSION LOAD SUMMARY
================================================
DIRECTOR> TM_6252 Source Load Summary.
DIRECTOR> CMN_1740 Table: [SQ_FILE_DUAL] (Instance Name: [SQ_FILE_DUAL])
Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
DIRECTOR> TM_6253 Target Load Summary.
DIRECTOR> TM_6023
===================================================
DIRECTOR> TM_6020 Session [SIL_InsertRowInRunTable] completed at [Mon Sep 26 15:53:46 2011].
I checked the physical data source connection in DAC and Workflow Manager, tested and verified it, but I am still facing the same issue. I connected through the Oracle MERANT ODBC driver.
Please let me know the solution for this error.
Regards,
VSR
Hi,
Did you try using the Oracle 10g/11g drivers for the data source? If not, I think you should try that, but before that, ensure that from your DAC server box you are able to tnsping the OLAP database. Hope this helps.
If this is helpful, mark it as helpful.
Regards,
BI Learner -
URGENT! Please help: DAC full load always in 'Running' status at a particular task
Hi Friends,
I started a full load yesterday. There are 257 tasks in total. The load went fine without issues until the 248th task, but the 249th task (Load into Activity Fact) is always in 'Running' status and does not complete even after running for 2 hours. I checked in the Informatica Workflow Monitor and found that the workflow is in 'Running' state and not completing. When I right-clicked the session and selected Run Properties, I could see that 0 rows were inserted into the target table. So I manually tried to stop the workflow; even after that, the task stayed in 'Stopping' status and would not stop. Then I manually aborted the workflow.
Below is the session log file. Could you please check and let me know?
Regards,
Vijay
Edited by: vijayobi on Jul 22, 2011 4:26 AM
Hi Friends,
We executed a full load again on Saturday, i.e. 23rd July 2011. This time we allowed the task 'Load into Activity Fact_CUSTOM' to execute without stopping it manually like we did in the previous data load. It executed for 3 hours and 45 minutes and then failed with error ORA-01652 (unable to extend temp segment by string in tablespace string). This task executed successfully in our dev environment. Below is what we found in the session log file; please help us resolve this issue. Please revert as soon as possible, as we have this issue in our prod environment.
2011-07-23 14:56:07 : ERROR : (8128 | LKPDP_25:READER_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : RR_4035 : SQL Error [
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
Database driver error...
Function Name : Execute
SQL Stmt : SELECT DISTINCT LOOKUP_TABLE.ROW_WID AS ROW_WID, LOOKUP_TABLE.GEO_WID AS GEO_WID, LOOKUP_TABLE.INTEGRATION_ID AS INTEGRATION_ID, LOOKUP_TABLE.DATASOURCE_NUM_ID AS DATASOURCE_NUM_ID, LOOKUP_TABLE.EFFECTIVE_FROM_DT AS EFFECTIVE_FROM_DT, LOOKUP_TABLE.EFFECTIVE_TO_DT AS EFFECTIVE_TO_DT FROM W_PARTY_D LOOKUP_TABLE, W_ACTIVITY_FS LEFT OUTER JOIN W_CUSTOMER_ACCOUNT_D ON (W_ACTIVITY_FS.CUSTOMER_ACCOUNT_ID = W_CUSTOMER_ACCOUNT_D.INTEGRATION_ID AND W_ACTIVITY_FS.DATASOURCE_NUM_ID = W_CUSTOMER_ACCOUNT_D.DATASOURCE_NUM_ID) WHERE COALESCE(W_ACTIVITY_FS.CUSTOMER_ID, W_CUSTOMER_ACCOUNT_D.PARTY_ID) = LOOKUP_TABLE.INTEGRATION_ID AND W_ACTIVITY_FS.DATASOURCE_NUM_ID = LOOKUP_TABLE.DATASOURCE_NUM_ID AND COALESCE(W_ACTIVITY_FS.PLANNED_START_DT, W_ACTIVITY_FS.CREATED_DT) >= LOOKUP_TABLE.EFFECTIVE_FROM_DT AND COALESCE(W_ACTIVITY_FS.PLANNED_START_DT, W_ACTIVITY_FS.CREATED_DT) < LOOKUP_TABLE.EFFECTIVE_TO_DT ORDER BY LOOKUP_TABLE.INTEGRATION_ID, LOOKUP_TABLE.DATASOURCE_NUM_ID, LOOKUP_TABLE.EFFECTIVE_FROM_DT, LOOKUP_TABLE.EFFECTIVE_TO_DT, LOOKUP_TABLE.ROW_WID, LOOKUP_TABLE.GEO_WID -- ORDER BY INTEGRATION_ID,DATASOURCE_NUM_ID,EFFECTIVE_FROM_DT,EFFECTIVE_TO_DT,ROW_WID,GEO_WID
Oracle Fatal Error
Database driver error...
Function Name : Execute
SQL Stmt : SELECT DISTINCT LOOKUP_TABLE.ROW_WID AS ROW_WID, LOOKUP_TABLE.GEO_WID AS GEO_WID, LOOKUP_TABLE.INTEGRATION_ID AS INTEGRATION_ID, LOOKUP_TABLE.DATASOURCE_NUM_ID AS DATASOURCE_NUM_ID, LOOKUP_TABLE.EFFECTIVE_FROM_DT AS EFFECTIVE_FROM_DT, LOOKUP_TABLE.EFFECTIVE_TO_DT AS EFFECTIVE_TO_DT FROM W_PARTY_D LOOKUP_TABLE, W_ACTIVITY_FS LEFT OUTER JOIN W_CUSTOMER_ACCOUNT_D ON (W_ACTIVITY_FS.CUSTOMER_ACCOUNT_ID = W_CUSTOMER_ACCOUNT_D.INTEGRATION_ID AND W_ACTIVITY_FS.DATASOURCE_NUM_ID = W_CUSTOMER_ACCOUNT_D.DATASOURCE_NUM_ID) WHERE COALESCE(W_ACTIVITY_FS.CUSTOMER_ID, W_CUSTOMER_ACCOUNT_D.PARTY_ID) = LOOKUP_TABLE.INTEGRATION_ID AND W_ACTIVITY_FS.DATASOURCE_NUM_ID = LOOKUP_TABLE.DATASOURCE_NUM_ID AND COALESCE(W_ACTIVITY_FS.PLANNED_START_DT, W_ACTIVITY_FS.CREATED_DT) >= LOOKUP_TABLE.EFFECTIVE_FROM_DT AND COALESCE(W_ACTIVITY_FS.PLANNED_START_DT, W_ACTIVITY_FS.CREATED_DT) < LOOKUP_TABLE.EFFECTIVE_TO_DT ORDER BY LOOKUP_TABLE.INTEGRATION_ID, LOOKUP_TABLE.DATASOURCE_NUM_ID, LOOKUP_TABLE.EFFECTIVE_FROM_DT, LOOKUP_TABLE.EFFECTIVE_TO_DT, LOOKUP_TABLE.ROW_WID, LOOKUP_TABLE.GEO_WID -- ORDER BY INTEGRATION_ID,DATASOURCE_NUM_ID,EFFECTIVE_FROM_DT,EFFECTIVE_TO_DT,ROW_WID,GEO_WID
Oracle Fatal Error].
2011-07-23 14:56:07 : ERROR : (8128 | LKPDP_25:READER_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : BLKR_16004 : ERROR: Prepare failed.
2011-07-23 14:56:07 : INFO : (8128 | WRITER_1_*_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : WRT_8333 : Rolling back all the targets due to fatal session error.
2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.LKP_W_PARTY_D_With_Geo_Wid], and the session is terminating.
2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXP_Decode_CustomerId], and the session is terminating.
2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXP_Decode_CustomerId], and the session is terminating.
2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.LKP_W_CUSTOMER_ACCOUNT_D_With_Party_ID], and the session is terminating.
2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.LKP_W_CUSTOMER_ACCOUNT_D_With_Party_ID], and the session is terminating.
2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXPTRANS], and the session is terminating.
2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXPTRANS], and the session is terminating.
2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [FIL_ETL_PROC_WID], and the session is terminating.
2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [FIL_ETL_PROC_WID], and the session is terminating.
2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [MPLT_Get_ETL_Proc_WID.Exp_Decide_Etl_Proc_Wid], and the session is terminating.
2011-07-23 14:56:07 : INFO : (8128 | WRITER_1_*_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : WRT_8325 : Final rollback executed for the target [W_ACTIVITY_F] at end of load
2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [MPLT_Get_ETL_Proc_WID.Exp_Decide_Etl_Proc_Wid], and the session is terminating.
2011-07-23 14:56:07 : INFO : (8128 | MANAGER) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : PETL_24007 : Received request to stop session run. Attempting to stop worker threads.
2011-07-23 14:56:07 : INFO : (8128 | WRITER_1_*_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : WRT_8035 : Load complete time: Sat Jul 23 14:56:07 2011
Thanks in advance.
Vinay -
Incremental load fails with the error LM_44127 Failed to prepare the task
Guys,
I have created a custom mapping and created an execution plan for it in the DAC. The full load completes successfully, but whenever the incremental load runs I get the error below and the task fails (the SDE load completes successfully; the SIL load fails).
LM_44127 Failed to prepare the task
Please help!
I googled it:
http://datawarehouse.ittoolbox.com/groups/technical-functional/informatica-l/lm_44127-failed-to-prepare-task-when-running-workflow-in-informatica-86-on-aix-3199309
You can try for better links now! -
Errors: ORA-00054 & ORA-01452 while running DAC Full Load
Hi Friends,
Previously I ran a full load and it went well, and I also built some sample reports in BI Apps 7.9.6.2.
Now I have modified a few parameters per the business requirements, and when I try to run the full load again I am stuck on a few similar errors. I have already cleared a couple of DB errors.
Please help me solve the errors below.
1. ANOMALY INFO::: Error while executing : TRUNCATE TABLE:W_SALES_BOOKING_LINE_F
MESSAGE:::com.siebel.etl.database.IllegalSQLQueryException: DataWarehouse:TRUNCATE TABLE W_SALES_BOOKING_LINE_F
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
--I checked W_SALES_BOOKING_LINE_F; it contains data.
2. ANOMALY INFO::: Error while executing : CREATE INDEX:W_GL_REVN_F:W_GL_REVN_F_U1
MESSAGE:::java.lang.Exception: Error while execution : CREATE UNIQUE INDEX
W_GL_REVN_F_U1
ON
W_GL_REVN_F
INTEGRATION_ID ASC
,DATASOURCE_NUM_ID ASC
NOLOGGING
with error DataWarehouse:CREATE UNIQUE INDEX
W_GL_REVN_F_U1
ON
W_GL_REVN_F
INTEGRATION_ID ASC
,DATASOURCE_NUM_ID ASC
NOLOGGING
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
-- Yes, I found duplicate values in table W_GL_REVN_F, but how can I rectify them? I tried some reverse engineering but failed.
Please tell me the steps to achieve this.
Thanks in advance..
Stone
Hi, please see the answers below each quoted error.
1. ANOMALY INFO::: Error while executing : TRUNCATE TABLE:W_SALES_BOOKING_LINE_F
MESSAGE:::com.siebel.etl.database.IllegalSQLQueryException: DataWarehouse:TRUNCATE TABLE W_SALES_BOOKING_LINE_F
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
--I checked W_SALES_BOOKING_LINE_F; it contains data.
Just restart the load. It seems your DB processes are busy and the table still has a lock on it, which means something has not yet been committed or rolled back.
If this issue repeats, ask your DBA to look into it.
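If the lock does not clear on its own, a quick way to see which session is holding it is to query the data dictionary. This is only an illustrative sketch (it assumes DBA access to the V$SESSION, V$LOCKED_OBJECT, and DBA_OBJECTS views; the table name is the one from the error above):

```sql
-- Sketch: list the sessions currently holding locks on the target table,
-- so the DBA can commit, roll back, or kill the blocker.
SELECT s.SID,
       s.SERIAL#,
       s.STATUS,
       s.PROGRAM
FROM   V$SESSION       s
JOIN   V$LOCKED_OBJECT lo ON lo.SESSION_ID = s.SID
JOIN   DBA_OBJECTS     o  ON o.OBJECT_ID   = lo.OBJECT_ID
WHERE  o.OBJECT_NAME = 'W_SALES_BOOKING_LINE_F';
```

Once the blocking session is gone, the TRUNCATE should succeed on restart.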
2. ANOMALY INFO::: Error while executing : CREATE INDEX:W_GL_REVN_F:W_GL_REVN_F_U1
MESSAGE:::java.lang.Exception: Error while execution : CREATE UNIQUE INDEX
W_GL_REVN_F_U1
ON
W_GL_REVN_F
INTEGRATION_ID ASC
,DATASOURCE_NUM_ID ASC
NOLOGGING
with error DataWarehouse:CREATE UNIQUE INDEX
W_GL_REVN_F_U1
ON
W_GL_REVN_F
INTEGRATION_ID ASC
,DATASOURCE_NUM_ID ASC
NOLOGGING
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
-- Yes, I found duplicate values in table W_GL_REVN_F, but how can I rectify them? I tried some reverse engineering but failed.
please tell me the steps to achieve....
Please execute the SQL below to get the duplicate values. If the count is small, you can delete the records based on ROW_WID.
How many duplicates do you have in total?
1. SELECT INTEGRATION_ID, DATASOURCE_NUM_ID, COUNT(*) FROM W_GL_REVN_F
GROUP BY INTEGRATION_ID, DATASOURCE_NUM_ID
HAVING COUNT(*) > 1
2. SELECT ROW_WID, DATASOURCE_NUM_ID, INTEGRATION_ID FROM W_GL_REVN_F
WHERE INTEGRATION_ID = (from 1st query)
3. DELETE FROM W_GL_REVN_F WHERE ROW_WID = (from 2nd query)
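If there are too many duplicates to delete one by one, steps 2 and 3 can be collapsed into a single statement. The following is only a sketch (it assumes you want to keep the row with the lowest ROW_WID for each key; take a backup of the table and verify the affected rows before running it):

```sql
-- Sketch: remove every duplicate except the lowest ROW_WID for each
-- (INTEGRATION_ID, DATASOURCE_NUM_ID) key. Back up W_GL_REVN_F first.
DELETE FROM W_GL_REVN_F f
WHERE  f.ROW_WID NOT IN (
         SELECT MIN(ROW_WID)
         FROM   W_GL_REVN_F
         GROUP  BY INTEGRATION_ID, DATASOURCE_NUM_ID
       );
```

After the duplicates are gone, re-run the failed CREATE UNIQUE INDEX task from DAC.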
Hope this helps !! -
BI Apps error while performing Full Load....
Hi Experts,
I am trying to implement BI Apps on my laptop with two VMs: one with BI Apps, OBIEE, Informatica, and DAC; the other with EBS R12.1.1 and the Vision demo database. I am following the blog post below from the Deliver BI blog...
http://deliverbi.blogspot.com/search/label/OBIA%20Setup%20Steps
PDF: http://www.box.net/shared/7q0gavzd63
I am now on page 73. I clicked Run Now, and when I check the status of the task it gives an error.
Status Description:
Some steps failed. Number of incomplete tasks whose status got updated to stopped: 0
Number of incomplete task details whose status got updated to stopped: 1721
Complete Description in the Description tab:
ETL Process Id : 21627633
ETL Name : SHA_LOAD_HR
Run Name : SHA_LOAD_HR: ETL Run - 2011-02-13 22:32:05.907
DAC Server : localhost(biapps)
DAC Port : 3141
Status: Failed
Log File Name: SHA_LOAD_HR.21627633.log
Database Connection(s) Used :
DataWarehouse jdbc:oracle:thin:@biapps.ven.com:1521:orcl10
ORA_R1211 jdbc:oracle:thin:@ebsr12.ven.com:1521:VIS10
Informatica Server(s) Used :
Oracle_BI_DW_Base_Integration-Oracle_BI_DW_Base_Integration:(10)
Start Time: 2011-02-13 22:32:05.922
Message: Some steps failed. Number of incomplete tasks whose status got updated to stopped: 0
Number of incomplete task details whose status got updated to stopped: 1721
Actual Start Time: 2011-02-13 22:32:05.922
End Time: 2011-02-13 22:34:01.503
Total Time Taken: 1 Minutes
Start Time For This Run: 2011-02-13 22:32:05.922
Total Time Taken For This Run: 1 Minutes
Log file mentioned in the above description:
http://www.mediafire.com/?j9br75v6ecga8h6
Below are the screen shots of the Tasks tab:
1st half: http://img11.imageshack.us/i/111bud.jpg/
2nd half: http://img340.imageshack.us/i/222hj.jpg/
I successfully followed each and every step in the above document up to page 72, but on page 73 I got the above errors.
For the source I used the apps/apps@VIS account, where VIS is the instance name for the Vision demo database, but the document asked me to give ebs12 as the connection string name. I am able to log in to apps/apps@VIS from my biapps VM.
Can anyone please help me run a full load against the EBS R12.1.1 Vision demo database? Please.
Thanks in advance,
DK
Edited by: user12296343 on Feb 13, 2011 9:00 PM
Hi DK,
It could be a combination of a couple of things.
Did you apply the cumulative patch 10052370?
Also check your steps from below links
http://gerardnico.com/wiki/obia/installation_7961
http://ahmedshareefuddin.blogspot.com/2010/12/installation-and-configuration-of.html
Hope this helps,
regards, -
Regarding Financial Analytics Full Load in DAC
Hi All,
Today I started a full load for Financial Analytics for the subject areas Revenue, Receivables, Payables, General Ledger, and Cost of Goods Sold. The tasks "SDE_ORA_GLBalanceFact_Full" and "SDE_ORA_GLJournals_Full" failed due to a database driver error (unable to execute the query in the Source Qualifier), and because of these two failures the remaining tasks went into "Stopped" status in DAC: out of 400 tasks, 56 succeeded, 342 stopped, and 2 failed. Can anyone please tell me how to resolve the error and restart the remaining tasks? Please also guide me if I missed any steps during configuration.
Regards
Sundar
SDE_ORA_GLBalanceFact:
I changed the NUMBER data type precision and scale from (22,7) to (28,10), but I am still getting the same error. Please find the Informatica log file below.
Severity Timestamp Node Thread Message Code Message
ERROR 10/4/2011 6:04:04 PM node01_WIN-O0IX1SFES7T READER_1_1_1 RR_4035 SQL Error [
ORA-00936: missing expression
Database driver error...
Function Name : Execute
SQL Stmt : SELECT
BAL.LEDGER_ID,
BAL.CODE_COMBINATION_ID,
BAL.CURRENCY_CODE,
LED.CURRENCY_CODE,
PER.PERIOD_NAME,
BAL.ACTUAL_FLAG,
BAL.TRANSLATED_FLAG,
BAL.TEMPLATE_ID,
BAL.PERIOD_NET_DR,
BAL.PERIOD_NET_CR,
( BAL.BEGIN_BALANCE_DR + BAL.PERIOD_NET_DR ),
( BAL.BEGIN_BALANCE_CR + BAL.PERIOD_NET_CR) ,
BAL.PERIOD_NET_DR_BEQ,
BAL.PERIOD_NET_CR_BEQ,
( BAL.BEGIN_BALANCE_DR_BEQ + BAL.PERIOD_NET_DR_BEQ ) PERIOD_END_BALANCE_DR_BEQ,
( BAL.BEGIN_BALANCE_CR_BEQ + BAL.PERIOD_NET_CR_BEQ ) PERIOD_END_BALANCE_CR_BEQ,
PER.START_DATE,
PER.END_DATE,
BAL.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_BAL,
BAL.LAST_UPDATED_BY AS LAST_UPDATED_BY_BAL,
PER.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_PERIODS,
PER.LAST_UPDATED_BY AS LAST_UPDATED_BY_PERIODS,
LED.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_SOB,
LED.LAST_UPDATED_BY AS LAST_UPDATED_BY_SOB,
BAL.BUDGET_VERSION_ID AS BUDGET_VERSION_ID,
PER.ADJUSTMENT_PERIOD_FLAG AS ADJUSTMENT_PERIOD_FLAG,
CASE WHEN
BAL.TRANSLATED_FLAG = 'Y' THEN 'TRANSLATED'
WHEN
BAL.TRANSLATED_FLAG = 'R' THEN 'ENTERED_FOREIGN'
WHEN
BAL.CURRENCY_CODE = 'STAT' THEN 'STAT'
WHEN
((BAL.PERIOD_NET_DR_BEQ = 0) OR (BAL.PERIOD_NET_DR_BEQ IS NULL)) AND
((BAL.PERIOD_NET_CR_BEQ = 0) OR (BAL.PERIOD_NET_CR_BEQ IS NULL)) AND
((BAL.BEGIN_BALANCE_DR_BEQ = 0) OR (BAL.BEGIN_BALANCE_DR_BEQ IS NULL)) AND
((BAL.BEGIN_BALANCE_CR_BEQ = 0) OR (BAL.BEGIN_BALANCE_CR_BEQ IS NULL))
THEN 'BASE'
ELSE 'ENTERED_LEDGER'
END CURRENCY_BALANCE_TYPE
FROM
GL_BALANCES BAL
, GL_LEDGERS LED
, GL_PERIODS PER
WHERE LED.LEDGER_ID = BAL.LEDGER_ID
AND PER.PERIOD_SET_NAME = LED.PERIOD_SET_NAME
AND BAL.PERIOD_NAME = PER.PERIOD_NAME
AND BAL.PERIOD_TYPE = PER.PERIOD_TYPE
AND NVL(BAL.TRANSLATED_FLAG, 'X') IN ('Y', 'X', 'R')
AND BAL.ACTUAL_FLAG IN ( 'A','B')
AND BAL.TEMPLATE_ID IS NULL
AND
BAL.LAST_UPDATE_DATE >=
TO_DATE('', 'MM/DD/YYYY HH24:MI:SS')
OR PER.LAST_UPDATE_DATE >=
TO_DATE('', 'MM/DD/YYYY HH24:MI:SS')
AND DECODE(, 'Y', LED.LEDGER_ID, 1) IN ()
AND DECODE(, 'Y', LED.LEDGER_CATEGORY_CODE, 'NONE') IN ()
Oracle Fatal Error
Database driver error...
Function Name : Execute
SQL Stmt : SELECT
BAL.LEDGER_ID,
BAL.CODE_COMBINATION_ID,
BAL.CURRENCY_CODE,
LED.CURRENCY_CODE,
PER.PERIOD_NAME,
BAL.ACTUAL_FLAG,
BAL.TRANSLATED_FLAG,
BAL.TEMPLATE_ID,
BAL.PERIOD_NET_DR,
BAL.PERIOD_NET_CR,
( BAL.BEGIN_BALANCE_DR + BAL.PERIOD_NET_DR ),
( BAL.BEGIN_BALANCE_CR + BAL.PERIOD_NET_CR) ,
BAL.PERIOD_NET_DR_BEQ,
BAL.PERIOD_NET_CR_BEQ,
( BAL.BEGIN_BALANCE_DR_BEQ + BAL.PERIOD_NET_DR_BEQ ) PERIOD_END_BALANCE_DR_BEQ,
( BAL.BEGIN_BALANCE_CR_BEQ + BAL.PERIOD_NET_CR_BEQ ) PERIOD_END_BALANCE_CR_BEQ,
PER.START_DATE,
PER.END_DATE,
BAL.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_BAL,
BAL.LAST_UPDATED_BY AS LAST_UPDATED_BY_BAL,
PER.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_PERIODS,
PER.LAST_UPDATED_BY AS LAST_UPDATED_BY_PERIODS,
LED.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_SOB,
LED.LAST_UPDATED_BY AS LAST_UPDATED_BY_SOB,
BAL.BUDGET_VERSION_ID AS BUDGET_VERSION_ID,
PER.ADJUSTMENT_PERIOD_FLAG AS ADJUSTMENT_PERIOD_FLAG,
CASE WHEN
BAL.TRANSLATED_FLAG = 'Y' THEN 'TRANSLATED'
WHEN
BAL.TRANSLATED_FLAG = 'R' THEN 'ENTERED_FOREIGN'
WHEN
BAL.CURRENCY_CODE = 'STAT' THEN 'STAT'
WHEN
((BAL.PERIOD_NET_DR_BEQ = 0) OR (BAL.PERIOD_NET_DR_BEQ IS NULL)) AND
((BAL.PERIOD_NET_CR_BEQ = 0) OR (BAL.PERIOD_NET_CR_BEQ IS NULL)) AND
((BAL.BEGIN_BALANCE_DR_BEQ = 0) OR (BAL.BEGIN_BALANCE_DR_BEQ IS NULL)) AND
((BAL.BEGIN_BALANCE_CR_BEQ = 0) OR (BAL.BEGIN_BALANCE_CR_BEQ IS NULL))
THEN 'BASE'
ELSE 'ENTERED_LEDGER'
END CURRENCY_BALANCE_TYPE
FROM
GL_BALANCES BAL
, GL_LEDGERS LED
, GL_PERIODS PER
WHERE LED.LEDGER_ID = BAL.LEDGER_ID
AND PER.PERIOD_SET_NAME = LED.PERIOD_SET_NAME
AND BAL.PERIOD_NAME = PER.PERIOD_NAME
AND BAL.PERIOD_TYPE = PER.PERIOD_TYPE
AND NVL(BAL.TRANSLATED_FLAG, 'X') IN ('Y', 'X', 'R')
AND BAL.ACTUAL_FLAG IN ( 'A','B')
AND BAL.TEMPLATE_ID IS NULL
AND
BAL.LAST_UPDATE_DATE >=
TO_DATE('', 'MM/DD/YYYY HH24:MI:SS')
OR PER.LAST_UPDATE_DATE >=
TO_DATE('', 'MM/DD/YYYY HH24:MI:SS')
AND DECODE(, 'Y', LED.LEDGER_ID, 1) IN ()
AND DECODE(, 'Y', LED.LEDGER_CATEGORY_CODE, 'NONE') IN ()
Oracle Fatal Error].
ERROR 10/4/2011 6:04:04 PM node01_WIN-O0IX1SFES7T READER_1_1_1 BLKR_16004 ERROR: Prepare failed.
INFO 10/4/2011 6:04:04 PM node01_WIN-O0IX1SFES7T WRITER_1_*_1 WRT_8333 Rolling back all the targets due to fatal session error.
INFO 10/4/2011 6:04:04 PM node01_WIN-O0IX1SFES7T WRITER_1_*_1 WRT_8325 Final rollback executed for the target [W_ACCT_BUDGET_FS, W_GL_BALANCE_FS] at end of load
SDE_ORA_GLJournals Task Log File
Severity Timestamp Node Thread Message Code Message
ERROR 10/4/2011 6:04:05 PM node01_WIN-O0IX1SFES7T READER_1_1_1 RR_4035 SQL Error [
ORA-00936: missing expression
Database driver error...
Function Name : Execute
SQL Stmt : SELECT
JEL.JE_HEADER_ID,
JEL.JE_LINE_NUM,
JEL.LAST_UPDATE_DATE,
JEL.LAST_UPDATED_BY,
JEL.LEDGER_ID,
JEL.CODE_COMBINATION_ID,
JEL.PERIOD_NAME,
JEL.STATUS,
JEL.CREATION_DATE,
JEL.CREATED_BY,
JEL.ENTERED_DR,
JEL.ENTERED_CR,
JEL.ACCOUNTED_DR,
JEL.ACCOUNTED_CR,
JEL.REFERENCE_1,
JEL.REFERENCE_2,
JEL.REFERENCE_3,
JEL.REFERENCE_4,
JEL.REFERENCE_5,
JEL.REFERENCE_6,
JEL.REFERENCE_7,
JEL.REFERENCE_8,
JEL.REFERENCE_9,
JEL.REFERENCE_10,
JEL.GL_SL_LINK_ID,
JEH.JE_CATEGORY,
JEH.JE_SOURCE,
JEH.NAME,
JEH.CURRENCY_CODE,
JEH.POSTED_DATE,
JEB.NAME,
PRDS.START_DATE,
PRDS.END_DATE,
GL.LEDGER_CATEGORY_CODE,
PRDS.ADJUSTMENT_PERIOD_FLAG
FROM
GL_JE_LINES JEL,
GL_JE_HEADERS JEH,
GL_JE_BATCHES JEB,
GL_PERIOD_STATUSES PRDS,
GL_LEDGERS GL
WHERE
JEL.JE_HEADER_ID = JEH.JE_HEADER_ID
AND JEH.ACTUAL_FLAG = 'A'
AND JEB.STATUS = 'P'
AND JEH.JE_BATCH_ID = JEB.JE_BATCH_ID (+)
AND JEL.PERIOD_NAME = PRDS.PERIOD_NAME
AND JEL.LEDGER_ID = PRDS.SET_OF_BOOKS_ID
AND JEL.LEDGER_ID = GL.LEDGER_ID
AND PRDS.APPLICATION_ID = 101
AND JEH.CURRENCY_CODE<>'STAT'
AND ( JEB.CREATION_DATE >=
TO_DATE('01/01/1753 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
AND DECODE(, 'Y', GL.LEDGER_ID, 1) IN ()
AND DECODE(, 'Y', GL.LEDGER_CATEGORY_CODE, 'NONE') IN ()
Oracle Fatal Error
Database driver error...
Function Name : Execute
SQL Stmt : SELECT
JEL.JE_HEADER_ID,
JEL.JE_LINE_NUM,
JEL.LAST_UPDATE_DATE,
JEL.LAST_UPDATED_BY,
JEL.LEDGER_ID,
JEL.CODE_COMBINATION_ID,
JEL.PERIOD_NAME,
JEL.STATUS,
JEL.CREATION_DATE,
JEL.CREATED_BY,
JEL.ENTERED_DR,
JEL.ENTERED_CR,
JEL.ACCOUNTED_DR,
JEL.ACCOUNTED_CR,
JEL.REFERENCE_1,
JEL.REFERENCE_2,
JEL.REFERENCE_3,
JEL.REFERENCE_4,
JEL.REFERENCE_5,
JEL.REFERENCE_6,
JEL.REFERENCE_7,
JEL.REFERENCE_8,
JEL.REFERENCE_9,
JEL.REFERENCE_10,
JEL.GL_SL_LINK_ID,
JEH.JE_CATEGORY,
JEH.JE_SOURCE,
JEH.NAME,
JEH.CURRENCY_CODE,
JEH.POSTED_DATE,
JEB.NAME,
PRDS.START_DATE,
PRDS.END_DATE,
GL.LEDGER_CATEGORY_CODE,
PRDS.ADJUSTMENT_PERIOD_FLAG
FROM
GL_JE_LINES JEL,
GL_JE_HEADERS JEH,
GL_JE_BATCHES JEB,
GL_PERIOD_STATUSES PRDS,
GL_LEDGERS GL
WHERE
JEL.JE_HEADER_ID = JEH.JE_HEADER_ID
AND JEH.ACTUAL_FLAG = 'A'
AND JEB.STATUS = 'P'
AND JEH.JE_BATCH_ID = JEB.JE_BATCH_ID (+)
AND JEL.PERIOD_NAME = PRDS.PERIOD_NAME
AND JEL.LEDGER_ID = PRDS.SET_OF_BOOKS_ID
AND JEL.LEDGER_ID = GL.LEDGER_ID
AND PRDS.APPLICATION_ID = 101
AND JEH.CURRENCY_CODE<>'STAT'
AND ( JEB.CREATION_DATE >=
TO_DATE('01/01/1753 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
AND DECODE(, 'Y', GL.LEDGER_ID, 1) IN ()
AND DECODE(, 'Y', GL.LEDGER_CATEGORY_CODE, 'NONE') IN ()
Oracle Fatal Error].
ERROR 10/4/2011 6:04:05 PM node01_WIN-O0IX1SFES7T READER_1_1_1 BLKR_16004 ERROR: Prepare failed.
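Note what both failing statements have in common: the generated predicates end in DECODE(, 'Y', ...) IN () and TO_DATE('', ...), i.e. the session parameters were substituted as empty strings, which is exactly what raises ORA-00936 (missing expression). As a purely illustrative sketch (the parameter names, such as $$FILTER_BY_LEDGER_ID and $$LEDGER_ID_LIST, and the sample values are assumptions, not taken from this log), a correctly resolved predicate would look more like:

```sql
-- Illustrative only: the same predicate once the DAC/Informatica parameters
-- resolve to actual values instead of empty strings.
AND DECODE('Y', 'Y', LED.LEDGER_ID, 1) IN (1001, 1002)
AND DECODE('N', 'Y', LED.LEDGER_CATEGORY_CODE, 'NONE') IN ('NONE')
```

So checking the task's source system parameters in DAC for empty values would be the first place to look.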