Cube full load failed
Hi Experts,
When I am loading data from the ODS to the cube (full load), the load fails with the below error.
Short dump in the Warehouse
Diagnosis
The data update was not completed. A short dump has probably been logged in BW providing information about the error.
System response
"Caller 70" is missing.
Further analysis:
Search in the BW short dump overview for the short dump belonging to the request. Pay attention to the correct time and date on the selection screen.
You get a short dump list using the Wizard or via the menu path "Environment -> Short dump -> In the Data Warehouse".
Error correction:
Follow the instructions in the short dump.
and i checked in ST22 and the error analysis is,
An exception occurred. This exception is dealt with in more detail below. The exception, which is assigned to the class 'CX_SY_OPEN_SQL_DB', was neither caught nor passed along using a RAISING clause, in the procedure "WRITE_ICFA" "(FORM)".
Since the caller of the procedure could not have expected this exception to occur, the running program was terminated.
The reason for the exception is:
The database system recognized that your last operation on the database would have led to a deadlock. Therefore, your transaction was rolled back to avoid this.
ORACLE always terminates any transaction that would result in a deadlock. The other transactions involved in this potential deadlock are not affected by the termination.
Can anyone please tell me why this error occurred and how to resolve it?
Thanks in advance
David
David,
It appears that there was a table lock when you executed your DTP; that is, there were multiple simultaneous reads on one of the tables used by the DTP, which resulted in the error. Check your process chains once.
For now, delete the request in the InfoCube and reload.
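As general background (outside of BW), database clients often handle an Oracle deadlock (ORA-00060) by letting the rolled-back victim transaction retry the whole unit of work, since the other transactions are unaffected. A minimal Python sketch of that retry pattern; `DeadlockError` and `with_deadlock_retry` are illustrative names, not SAP or Oracle APIs:

```python
import random
import time

class DeadlockError(Exception):
    """Stands in for ORA-00060 'deadlock detected' (illustrative only)."""

def with_deadlock_retry(txn, max_attempts=3, base_delay=0.01):
    """Run a transaction callable, retrying after a rollback on deadlock.

    Oracle rolls back only the transaction it chose as the deadlock
    victim, so the caller is free to retry the whole unit of work.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except DeadlockError:
            if attempt == max_attempts:
                raise
            # Randomized backoff reduces the chance the same two
            # transactions collide again in the same order.
            time.sleep(base_delay * attempt * (1 + random.random()))

# Usage: a transaction that deadlocks once, then succeeds on retry.
calls = {"n": 0}
def flaky_txn():
    calls["n"] += 1
    if calls["n"] == 1:
        raise DeadlockError()
    return "committed"

print(with_deadlock_retry(flaky_txn))  # committed
```

In the BW case the equivalent of the "retry" is simply deleting the failed request and reloading, as suggested above.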
-VA
Similar Messages
-
0orgunit_att full load failed
The full load failed. When I analyzed it, the error was "job terminated in source system".
i checked the job overview in source system and there is a dump.
the dump is
Runtime Errors ITAB_DUPLICATE_KEY
Date and Time 02.02.2009 08:44:59
Short text
A row with the same key already exists.
What happened?
Error in the ABAP Application Program
The current ABAP program "SAPLHRMS_BW_PA_OS" had to be terminated because it has come across a statement that unfortunately cannot be executed.
What can you do?
Note down which actions and inputs caused the error.
To process the problem further, contact your SAP system administrator.
Using Transaction ST22 for ABAP Dump Analysis, you can look
at and manage termination messages, and you can also
keep them for a long time.
Error analysis
An entry was to be entered into the table "\FUNCTION=HR_BW_EXTRACT_IO_ORGUNIT\DATA=MAIN_COSTCENTERS[]" (which should have had a unique table key (UNIQUE KEY)).
However, there already existed a line with an identical key.
The insert operation could have occurred as a result of an INSERT or MOVE command, or in conjunction with a SELECT ... INTO.
The statement "INSERT INITIAL LINE ..." cannot be used to insert several
initial lines into a table with a unique key.
How to correct the error
Probably the only way to eliminate the error is to correct the program.
If the error occurs in a non-modified SAP program, you may be able to
find an interim solution in an SAP Note.
If you have access to SAP Notes, carry out a search with the following
keywords:
"ITAB_DUPLICATE_KEY" " "
"SAPLHRMS_BW_PA_OS" or "LHRMS_BW_PA_OSU06"
"HR_BW_EXTRACT_IO_ORGUNIT"
If you cannot solve the problem yourself and want to send an error
notification to SAP, include the following information:
please suggest <removed by moderator>.
pramod
Edited by: Siegfried Szameitat on Feb 2, 2009 11:06 AM
Hi asish,
Runtime Errors ITAB_DUPLICATE_KEY
Date and Time 02.02.2009 06:18:44
180 i0027_flag = ' '
181 ombuffer_mode = ' '
182 TABLES
183 in_objects = in_objects
184 main_costcenters = main_co
185 EXCEPTIONS
186 OTHERS = 0.
187
188
>>>>> INSERT LINES OF main_co INTO TABLE main_costcenters.
190
191 last_plvar = orgunits-plvar.
192 REFRESH in_objects.
193 MOVE-CORRESPONDING orgunits TO in_objects.
194 APPEND in_objects.
195 ENDIF.
196
197 ENDLOOP.
198 ENDIF.
199
200 LOOP AT orgunits.
201
202 CLEAR: infty1000, main_co, infty1008, l_t_hrobject.
203 REFRESH: infty1000, main_co, infty1008, l_t_hrobject.
204
205 APPEND orgunits TO l_t_hrobject.
206
207 LOOP AT l_t_i1000_all INTO infty1000 WHERE objid = orgunits-objid.
208 APPEND infty1000.
I checked this; it reflects a structure. I cannot modify it, as this is a standard table.
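The failing statement in the listing above, `INSERT LINES OF main_co INTO TABLE main_costcenters`, aborts with ITAB_DUPLICATE_KEY because the target internal table has a UNIQUE KEY and the incoming lines contain a key that already exists. The effect can be sketched in Python; the field names and rows are invented for illustration, with a dict playing the role of the unique-keyed internal table:

```python
def insert_lines(target, rows, key):
    """Mimic ABAP 'INSERT LINES OF ... INTO TABLE' on a unique key:
    the first duplicate key raises, like the ITAB_DUPLICATE_KEY dump."""
    for row in rows:
        k = tuple(row[f] for f in key)
        if k in target:
            raise KeyError(f"duplicate key {k}")
        target[k] = row

main_costcenters = {}
rows = [
    {"objid": "100", "costcenter": "CC1"},
    {"objid": "100", "costcenter": "CC2"},  # same key -> would dump
]
try:
    insert_lines(main_costcenters, rows, key=("objid",))
except KeyError as e:
    print("would dump:", e)

# One possible defensive fix (in custom code) is to skip duplicates
# instead of inserting them blindly:
deduped = {}
for row in rows:
    deduped.setdefault((row["objid"],), row)
print(len(deduped))  # 1
```

Since the code here is in a standard SAP function group, the actual fix has to come via an SAP Note, as the dump text suggests.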
pramod -
Process chain for a full load failed
Hi,
I have a process chain where I load data from an InfoCube to a DSO with the following steps:
Begin
Load data: Execute an Infopackage with full load.
Update from PSA
Activate Data: Activate data of DSO.
When I execute the process chain, the Execute InfoPackage step succeeds, but the Update from PSA step fails. Looking at the errors, I can see that when the InfoPackage finishes, the DSO data needs to be activated, but the status of the request is yellow, and Update from PSA needs the QM status to be green. What can I do? The data transfers correctly to the DSO but is not activated.
Thank you very much.
PD: Maybe some steps and options are not exactly as I wrote them, because I am working in Spanish, but the meaning is the same.
Hi,
If Update from PSA has failed, then go to the Details tab in the IP monitor, expand the failed data packet and see the error message; there may be several reasons. Try to solve that issue, then delete the request from the target without making the QM status red, and reconstruct the request.
Actually, I am not following you clearly. I think your process chain is fine. Are you trying to say your ODS activation also failed? But how would the ODS activation start if the load itself failed? Is the link on success or on failure?
Regards,
Debjani....
Edited by: Debjani Mukherjee on Oct 16, 2008 2:02 PM -
Full load failed with [Microsoft][ODBC SQL Server Driver]Datetime field
Hi,
we are doing a full load with RDBMS SQLServer.
It failed due to the below error.
[Microsoft][ODBC SQL Server Driver]Datetime field overflow. Can you please help
thank you
968651 wrote:
Hi,
we are doing a full load with RDBMS SQLServer.
It failed due to the below error.
[Microsoft][ODBC SQL Server Driver]Datetime field overflow. Can you please help
thank you
http://bit.ly/XUL950
Thanks,
Hussein -
OBIA 7.9.5 "Financial Analytics" Full load fails
Hi All ,
we are implementing OBIA7.9.5 for Oracle Vision instance 11.5.10 ,
Installation of the components were successful.
We followed the respective Configuration steps for the Analytics module and configured it.
When we start the full load for Financial Analytics, out of 321 tasks, 215 ran successfully, 2 tasks failed, and the remaining were stopped.
Below are the failed tasks:
Load into Position Dimension------------------------->Create Index INDEX W_POSITION_D_U1
TASK_GROUP_Extract_EmployeeDimension----->Create Index INDEX W_EMPLOYEE_DS_U1
We were getting the below error log on index creation:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
EXCEPTION CLASS::: java.sql.SQLException
oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:745)
oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:210)
oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:961)
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1190)
oracle.jdbc.driver.OracleStatement.executeUpdateInternal(OracleStatement.java:1657)
oracle.jdbc.driver.OracleStatement.executeUpdate(OracleStatement.java:1626)
com.siebel.etl.database.DBUtils.executeUpdate(DBUtils.java:266)
com.siebel.etl.database.WeakDBUtils.executeUpdate(WeakDBUtils.java:357)
com.siebel.analytics.etl.etltask.SQLTask.doExecute(SQLTask.java:122)
com.siebel.analytics.etl.etltask.CreateIndexTask.doExecute(CreateIndexTask.java:90)
com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:271)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:200)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:149)
com.siebel.analytics.etl.etltask.GenericTaskImpl.run(GenericTaskImpl.java:430)
com.siebel.analytics.etl.taskmanager.XCallable.call(XCallable.java:63)
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:269)
java.util.concurrent.FutureTask.run(FutureTask.java:123)
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
java.lang.Thread.run(Thread.java:595)
459 SEVERE Tue Nov 10 20:40:50 GMT+05:30 2009 Failure detected while executing CREATE INDEX:W_POSITION_D:W_POSITION_D_U1.
Error Code: 12801.
Error Message: Error while execution : CREATE UNIQUE INDEX
W_POSITION_D_U1
ON
W_POSITION_D
INTEGRATION_ID Asc
,DATASOURCE_NUM_ID Asc
,EFFECTIVE_FROM_DT ASC
NOLOGGING PARALLEL
with error java.sql.SQLException: ORA-12801: error signaled in parallel query server P000
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
While analyzing the above error code in the Oracle forums, I got a hint from the below link:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
Even after making the changes mentioned in the above link and restarting the ETL, we are getting the same error message.
Let us know how to go about resolving this.
thanks
saran
With index failures, the first thing I typically do is check the data that's breaking the index.
So for the index:
W_POSITION_D_U1
ON
W_POSITION_D
INTEGRATION_ID Asc
,DATASOURCE_NUM_ID Asc
,EFFECTIVE_FROM_DT ASC
I issue this query to my DB:
SELECT INTEGRATION_ID, DATASOURCE_NUM_ID, EFFECTIVE_FROM_DT, COUNT(*) FROM W_POSITION_D
GROUP BY INTEGRATION_ID, DATASOURCE_NUM_ID, EFFECTIVE_FROM_DT HAVING COUNT(*) > 1;
This will at least list the records the index is failing on. -
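The same duplicate-key check can be done in application code before the index build. Here is an equivalent of that GROUP BY ... HAVING COUNT(*) > 1 query in Python; the sample rows are invented for illustration:

```python
from collections import Counter

def duplicate_keys(rows, key_cols):
    """Return key tuples occurring more than once, mirroring
    GROUP BY <key_cols> HAVING COUNT(*) > 1."""
    counts = Counter(tuple(r[c] for c in key_cols) for r in rows)
    return {k: n for k, n in counts.items() if n > 1}

w_position_d = [
    {"INTEGRATION_ID": "P1", "DATASOURCE_NUM_ID": 1, "EFFECTIVE_FROM_DT": "2009-01-01"},
    {"INTEGRATION_ID": "P1", "DATASOURCE_NUM_ID": 1, "EFFECTIVE_FROM_DT": "2009-01-01"},
    {"INTEGRATION_ID": "P2", "DATASOURCE_NUM_ID": 1, "EFFECTIVE_FROM_DT": "2009-01-01"},
]
dups = duplicate_keys(
    w_position_d, ("INTEGRATION_ID", "DATASOURCE_NUM_ID", "EFFECTIVE_FROM_DT")
)
print(dups)  # {('P1', 1, '2009-01-01'): 2}
```

Once the offending key values are known, you can trace them back to the source extract and decide which of the duplicate rows to remove.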
Hi Experts,
One of my master data full loads failed; see the below error message in the status tab:
Incorrect data records - error requests (total status RED)
Diagnosis
Data records were recognized as incorrect.
System response
The valid records were updated in the data target.
The request was marked as incorrect so that the data that was already updated cannot be used in reporting.
The incorrect records were not written in the data targets but were posted retroactively under a new request number in the PSA.
Procedure
Check the data in the error requests, correct the errors and post the error requests. Then set this request manually to green.
Can anyone please give me a solution?
Thanks
David
Hi,
I am loading the data from the application server; I don't have R/3 for this load. The below message is showing in the status tab:
Request still running
Diagnosis
No errors could be found. The current process has probably not finished yet.
System response
The ALE inbox of the SAP BW is identical to the ALE outbox of the source system
and/or
the maximum wait time for this request has not yet run out
and/or
the batch job in the source system has not yet ended.
Current status
Thanks
David -
Full load from r/3 failed due to bad data and no psa option selected
SDN Experts,
A full load from R/3 into an ODS failed, and in the InfoPackage the PSA option was not selected. How do I fix it? Can I rerun the InfoPackage again to load from R/3? Maybe it will fail again? Is this a design defect?
i will assign points. Thank you.
Les
Hi,
There is absolutely no problem in re-executing the package, but before that, check that your ODS and InfoSource are active and that the update rules are also active, if any. Also check that the target InfoProvider is selected properly along with the PSA option, and whether you have any subsequent process such as updating data from the ODS to the cube, and so on.
You can also check the data availability in R/3 for the respective DataSource by executing RSA3.
Hope this helps..
assign points if useful..
Cheers,
Pattan. -
Full load works, but delta fails - "Error in the Extractor"
Good morning,
We are using datasource 3FI_SL_ZZ_SI (Special Ledger line items) to load a cube, and are having trouble with the delta loads. If I run a full load, everything runs fine. If I run a delta load, it will initially fail with an error that simply states "Error in the Extractor" (no long text). If I repeat the delta load, it completes successfully with 0 records returned. If I then rerun the delta, I get the error again.
I've run extractions using RSA3, but they work fine - as I would expect since the full loads work. Unfortunately, I have not been able to find why the deltas aren't working. After searching the Forums, I've tried replicating the datasource, checked the job log in R/3 (nothing), and run the program RS_TRANSTRU_ACTIVATE_ALL, all to no avail.
Any ideas?
Thanks
We're running BW 3.5, R/3 4.71
And it's just that easy....
Yes, it appears this is what the problem was. I'd been running the delta init without data transfer, and it was failing during the first true delta run. Once I changed the delta init so that it transferred data, the deltas worked fine. This was in our development system. I took a look in our production system where deltas have been running for quite some time, and it turns out the delta initialization there was done with data transfer.
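For background, the "repeat the delta, get 0 records" behavior seen here follows from how a delta pointer works: a delta request returns only what changed since the stored pointer, then advances it. A toy Python model of that idea; this is an illustration only, not actual BW delta queue internals:

```python
class ToyDeltaQueue:
    """Toy model of full vs. delta extraction (not actual BW internals).

    Records are (timestamp, payload). A delta request returns only
    records newer than the stored pointer, then advances the pointer;
    a full request always returns everything.
    """
    def __init__(self, records):
        self.records = records   # list of (ts, payload)
        self.pointer = None      # set by delta init

    def init_delta(self, now):
        self.pointer = now       # init just positions the pointer

    def full(self):
        return [p for _, p in self.records]

    def delta(self, now):
        assert self.pointer is not None, "delta not initialized"
        new = [p for ts, p in self.records if ts > self.pointer]
        self.pointer = now
        return new

q = ToyDeltaQueue([(1, "A"), (2, "B")])
q.init_delta(now=2)
q.records.append((3, "C"))
print(q.full())        # ['A', 'B', 'C'] - a full load always reads all
print(q.delta(now=3))  # ['C']           - only changes since the pointer
print(q.delta(now=3))  # []              - repeating the delta returns 0 records
```

Why the init mode (with vs. without data transfer) mattered for this particular extractor is specific to 3FI_SL_ZZ_SI, as the poster found empirically.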
Thank you very much! -
Full load from a DSO to a cube processes less records than available in DSO
We have a scenario where every Sunday I have to make a full load from a DSO with on-hand stock information to a cube, where I register at material and store level a counter if there is stock available.
The DTP has no filters at all and has a semantic group on 0MATERIAL and 0PLANT.
The key in the DSO is:
0MATERIAL
0PLANT
0STOCKTYPE
0STOR_LOC
0BOM
of which only 0MATERIAL, 0PLANT and 0STOR_LOC are later used in the transformation.
As we had a growing number of records, we decided to delete in the START routine all records, where the inventory is not GT zero, thus eliminating zero and negative inventory records.
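The START routine change described above amounts to dropping rows whose inventory is not greater than zero before the transformation runs. Roughly, as a Python sketch; `source_package` and the `stock_qty` field name are assumptions for illustration, since the real routine works on the DTP's source package with the actual key figure name:

```python
# Sketch of the START routine filter: keep only rows with stock > 0,
# eliminating zero and negative inventory records.
source_package = [
    {"material": "M1", "plant": "P1", "stock_qty": 10},
    {"material": "M2", "plant": "P1", "stock_qty": 0},
    {"material": "M3", "plant": "P1", "stock_qty": -5},
]
source_package = [r for r in source_package if r["stock_qty"] > 0]
print([r["material"] for r in source_package])  # ['M1']
```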
Now comes the funny part of the story:
Prior to these changes I would, in a test system just copied from PROD, read some 33 million records and write out the same amount. Of course, after the change we expected to write out less. To my total surprise, I was now reading 45 million records with the same unchanged DTP, and writing out the expected smaller number.
When checking the number of records in the DSO I found the 45 million, but I cannot explain why the earlier loads only retrieved some 33 million from the same unchanged set of records.
When checking in PROD, same result: we have some 45 million records in the DSO, but when we do the full load from the DSO to the cube, the DTP only processes some 33 million.
What am I missing - is there a compression going on? Why would the amount of records in a DSO differ from the amount of records processed in the DataPackgages when I am making a FULL load without any filter restrictions and only a semantic grouping in place for parts of the DSO key?
ANY idea, thought is appreciated.
Thanks Gaurav.
I did check whether there were more/any loads done in between; there were none in the test system. As I mentioned, it was a new copy from PROD to TEST; I compared the number of entries in the DSO and that seems to match between TEST and PROD (OK, a few more in PROD, but they can be accounted for). In TEST I loaded the day before the changes were imported to have a comparison, and between that load and the one after the changes were imported, nothing in the DSO was changed.
Both DTPs in TEST and PW2 load from the activated DSO data [without archive]. The DTPs were not changed in quite a while, so I ruled that out. Same with activation of data in the DSO: this DSO gets loaded and activated in PROD daily via process chain, and we load daily deltas into the cube in question. Only on Sundays, for the beginning of the new week/fiscal period, do we need to make a full load to capture all materials per site with inventory. The deltas loaded during the week are less than 1 million records, but the difference between the number of records in the DSO and the amount processed in the DataPackages is more than 10 million per full load, even in PROD.
I really appreciated the knowledgeable answer; I just wished you had pointed out something that I missed.
Task fails while running Full load ETL
Hi All,
I am running the full load ETL for Oracle R12 (vanilla instance) HR, but 4 tasks are failing: SDE_ORA_JobDimention, SDE_ORA_HRPositionDimention, SDE_ORA_CodeDimension_Pay_level and SDE_ORA_CodeDimensionJob. I changed the parameters for all these tasks as mentioned in the installation guide and rebuilt. Please help me out.
Log is like this for SDE_ORA_JobDimention
DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
DIRECTOR> VAR_27028 Use override value [ORA_R12] for session parameter:[$DBConnection_OLTP].
DIRECTOR> VAR_27028 Use override value [9] for mapping parameter:[$$DATASOURCE_NUM_ID].
DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_JobDimension.$$JOBCODE_FLXFLD_SEGMENT_COL].
DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_JobDimension.$$JOBFAMILYCODE_FLXFLD_SEGMENT_COL].
DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_JobDimension.$$LAST_EXTRACT_DATE].
DIRECTOR> VAR_27028 Use override value [DEFAULT] for mapping parameter:[$$TENANT_ID].
DIRECTOR> TM_6014 Initializing session [SDE_ORA_JobDimension_Full] at [Fri Sep 26 10:52:05 2008]
DIRECTOR> TM_6683 Repository Name: [Oracle_BI_DW_Base]
DIRECTOR> TM_6684 Server Name: [Oracle_BI_DW_Base_Integration_Service]
DIRECTOR> TM_6686 Folder: [SDE_ORAR12_Adaptor]
DIRECTOR> TM_6685 Workflow: [SDE_ORA_JobDimension_Full]
DIRECTOR> TM_6101 Mapping name: SDE_ORA_JobDimension [version 1]
DIRECTOR> TM_6827 [C:\Informatica\PowerCenter8.1.1\server\infa_shared\Storage] will be used as storage directory for session [SDE_ORA_JobDimension_Full].
DIRECTOR> CMN_1805 Recovery cache will be deleted when running in normal mode.
DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
DIRECTOR> TM_6703 Session [SDE_ORA_JobDimension_Full] is run by 32-bit Integration Service [node01_HSCHBSCGN20031], version [8.1.1], build [0831].
MANAGER> PETL_24058 Running Partition Group [1].
MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
MANAGER> PETL_24001 Parallel Pipeline Engine running.
MANAGER> PETL_24003 Initializing session run.
MAPPING> CMN_1569 Server Mode: [ASCII]
MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
MAPPING> TM_6151 Session Sort Order: [Binary]
MAPPING> TM_6156 Using LOW precision decimal arithmetic
MAPPING> TM_6180 Deadlock retry logic will not be implemented.
MAPPING> TM_6307 DTM Error Log Disabled.
MAPPING> TE_7022 TShmWriter: Initialized
MAPPING> TM_6007 DTM initialized successfully for session [SDE_ORA_JobDimension_Full]
DIRECTOR> PETL_24033 All DTM Connection Info: [<NONE>].
MANAGER> PETL_24004 Starting pre-session tasks. : (Fri Sep 26 10:52:13 2008)
MANAGER> PETL_24027 Pre-session task completed successfully. : (Fri Sep 26 10:52:14 2008)
DIRECTOR> PETL_24006 Starting data movement.
MAPPING> TM_6660 Total Buffer Pool size is 32000000 bytes and Block size is 1280000 bytes.
READER_1_1_1> DBG_21438 Reader: Source is [dev], user [apps]
READER_1_1_1> BLKR_16003 Initialization completed successfully.
WRITER_1_*_1> WRT_8146 Writer: Target is database [orcl], user [obia], bulk mode [ON]
WRITER_1_*_1> WRT_8106 Warning! Bulk Mode session - recovery is not guaranteed.
WRITER_1_*_1> WRT_8124 Target Table W_JOB_DS :SQL INSERT statement:
INSERT INTO W_JOB_DS(JOB_CODE,JOB_NAME,JOB_DESC,JOB_FAMILY_CODE,JOB_FAMILY_NAME,JOB_FAMILY_DESC,JOB_LEVEL,W_FLSA_STAT_CODE,W_FLSA_STAT_DESC,W_EEO_JOB_CAT_CODE,W_EEO_JOB_CAT_DESC,AAP_JOB_CAT_CODE,AAP_JOB_CAT_NAME,ACTIVE_FLG,CREATED_BY_ID,CHANGED_BY_ID,CREATED_ON_DT,CHANGED_ON_DT,AUX1_CHANGED_ON_DT,AUX2_CHANGED_ON_DT,AUX3_CHANGED_ON_DT,AUX4_CHANGED_ON_DT,SRC_EFF_FROM_DT,SRC_EFF_TO_DT,DELETE_FLG,DATASOURCE_NUM_ID,INTEGRATION_ID,TENANT_ID,X_CUSTOM) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [W_JOB_DS]
WRITER_1_*_1> WRT_8003 Writer initialization complete.
WRITER_1_*_1> WRT_8005 Writer run started.
READER_1_1_1> BLKR_16007 Reader run started.
READER_1_1_1> RR_4029 SQ Instance [mplt_BC_ORA_JobDimension.Sq_Jobs] User specified SQL Query [SELECT
PER_JOBS.JOB_ID,
PER_JOBS.BUSINESS_GROUP_ID,
PER_JOBS.JOB_DEFINITION_ID,
PER_JOBS.DATE_FROM,
PER_JOBS.DATE_TO,
PER_JOBS.LAST_UPDATE_DATE AS CHANGED_ON_DT, PER_JOBS.LAST_UPDATED_BY, PER_JOBS.CREATED_BY, PER_JOBS.CREATION_DATE,
PER_JOB_DEFINITIONS.LAST_UPDATE_DATE AS AUX1_CHANGED_ON_DT,
PER_JOB_DEFINITIONS.JOB_DEFINITION_ID,
PER_JOBS.NAME,
PER_JOBS.JOB_INFORMATION1, PER_JOBS.JOB_INFORMATION3,
PER_JOBS. AS JOB_FAMILY_CODE, PER_JOB_DEFINITIONS. AS JOB_CODE,
'0' AS X_CUSTOM
FROM
PER_JOBS, PER_JOB_DEFINITIONS
WHERE
PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID]
WRITER_1_*_1> WRT_8158
*****START LOAD SESSION*****
Load Start Time: Fri Sep 26 10:53:05 2008
Target tables:
W_JOB_DS
READER_1_1_1> RR_4049 SQL Query issued to database : (Fri Sep 26 10:53:05 2008)
READER_1_1_1> CMN_1761 Timestamp Event: [Fri Sep 26 10:53:06 2008]
READER_1_1_1> RR_4035 SQL Error [
ORA-01747: invalid user.table.column, table.column, or column specification
Database driver error...
Function Name : Execute
SQL Stmt : SELECT
PER_JOBS.JOB_ID,
PER_JOBS.BUSINESS_GROUP_ID,
PER_JOBS.JOB_DEFINITION_ID,
PER_JOBS.DATE_FROM,
PER_JOBS.DATE_TO,
PER_JOBS.LAST_UPDATE_DATE AS CHANGED_ON_DT, PER_JOBS.LAST_UPDATED_BY, PER_JOBS.CREATED_BY, PER_JOBS.CREATION_DATE,
PER_JOB_DEFINITIONS.LAST_UPDATE_DATE AS AUX1_CHANGED_ON_DT,
PER_JOB_DEFINITIONS.JOB_DEFINITION_ID,
PER_JOBS.NAME,
PER_JOBS.JOB_INFORMATION1, PER_JOBS.JOB_INFORMATION3,
PER_JOBS. AS JOB_FAMILY_CODE, PER_JOB_DEFINITIONS. AS JOB_CODE,
'0' AS X_CUSTOM
FROM
PER_JOBS, PER_JOB_DEFINITIONS
WHERE
PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID
Oracle Fatal Error
Database driver error...
Function Name : Execute
SQL Stmt : SELECT
PER_JOBS.JOB_ID,
PER_JOBS.BUSINESS_GROUP_ID,
PER_JOBS.JOB_DEFINITION_ID,
PER_JOBS.DATE_FROM,
PER_JOBS.DATE_TO,
PER_JOBS.LAST_UPDATE_DATE AS CHANGED_ON_DT, PER_JOBS.LAST_UPDATED_BY, PER_JOBS.CREATED_BY, PER_JOBS.CREATION_DATE,
PER_JOB_DEFINITIONS.LAST_UPDATE_DATE AS AUX1_CHANGED_ON_DT,
PER_JOB_DEFINITIONS.JOB_DEFINITION_ID,
PER_JOBS.NAME,
PER_JOBS.JOB_INFORMATION1, PER_JOBS.JOB_INFORMATION3,
PER_JOBS. AS JOB_FAMILY_CODE, PER_JOB_DEFINITIONS. AS JOB_CODE,
'0' AS X_CUSTOM
FROM
PER_JOBS, PER_JOB_DEFINITIONS
WHERE
PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID
Oracle Fatal Error].
READER_1_1_1> CMN_1761 Timestamp Event: [Fri Sep 26 10:53:06 2008]
READER_1_1_1> BLKR_16004 ERROR: Prepare failed.
WRITER_1_*_1> WRT_8333 Rolling back all the targets due to fatal session error.
WRITER_1_*_1> WRT_8325 Final rollback executed for the target [W_JOB_DS] at end of load
WRITER_1_*_1> WRT_8035 Load complete time: Fri Sep 26 10:53:06 2008
LOAD SUMMARY
============
WRT_8036 Target: W_JOB_DS (Instance Name: [W_JOB_DS])
WRT_8044 No data loaded for this target
WRITER_1__1> WRT_8043 ****END LOAD SESSION*****
MANAGER> PETL_24031
***** RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****
Thread [READER_1_1_1] created for [the read stage] of partition point [mplt_BC_ORA_JobDimension.Sq_Jobs] has completed. The total run time was insufficient for any meaningful statistics.
Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [mplt_BC_ORA_JobDimension.Sq_Jobs] has completed. The total run time was insufficient for any meaningful statistics.
Thread [WRITER_1_*_1] created for [the write stage] of partition point [W_JOB_DS] has completed. The total run time was insufficient for any meaningful statistics.
MANAGER> PETL_24005 Starting post-session tasks. : (Fri Sep 26 10:53:06 2008)
MANAGER> PETL_24029 Post-session task completed successfully. : (Fri Sep 26 10:53:06 2008)
MAPPING> TM_6018 Session [SDE_ORA_JobDimension_Full] run completed with [0] row transformation errors.
MANAGER> PETL_24002 Parallel Pipeline Engine finished.
DIRECTOR> PETL_24013 Session run completed with failure.
DIRECTOR> TM_6022
SESSION LOAD SUMMARY
================================================
DIRECTOR> TM_6252 Source Load Summary.
DIRECTOR> CMN_1740 Table: [Sq_Jobs] (Instance Name: [mplt_BC_ORA_JobDimension.Sq_Jobs])
Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
DIRECTOR> TM_6253 Target Load Summary.
DIRECTOR> CMN_1740 Table: [W_JOB_DS] (Instance Name: [W_JOB_DS])
Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
DIRECTOR> TM_6023
===================================================
DIRECTOR> TM_6020 Session [SDE_ORA_JobDimension_Full] completed at [Fri Sep 26 10:53:07 2008]
To make use of the warehouse you would probably want to connect to an EBS instance in order to populate it, since the execution plan you intend to run is designed for the EBS data model. I guess if you really didn't want to connect to the EBS instance to pull data, you could build an execution plan using the Universal Adapter. This allows you to load out of flat files if you wish, but I wouldn't recommend making this a habit for an actual implementation, as it does create another potential point of failure (populating the flat files).
Thanks,
Austin -
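Separately, one hypothesis consistent with the session log above: the mapping parameters $$JOBCODE_FLXFLD_SEGMENT_COL and $$JOBFAMILYCODE_FLXFLD_SEGMENT_COL resolved to empty default values ("Use default value [] ..."), which would turn the column reference in the generated query into the invalid `PER_JOBS. AS JOB_FAMILY_CODE` that Oracle rejects with ORA-01747. A small Python sketch of that substitution failure; the template text and the substituted column name are invented for illustration:

```python
def render_query(template, params):
    """Naive placeholder substitution, standing in for how a mapping
    parameter ends up inside the generated SQL text."""
    for name, value in params.items():
        template = template.replace(name, value)
    return template

template = ("SELECT PER_JOBS.$$JOBFAMILYCODE_FLXFLD_SEGMENT_COL"
            " AS JOB_FAMILY_CODE FROM PER_JOBS")

# An empty default produces 'PER_JOBS. AS JOB_FAMILY_CODE', which is
# exactly the invalid column specification seen in the ORA-01747 log.
bad = render_query(template, {"$$JOBFAMILYCODE_FLXFLD_SEGMENT_COL": ""})
print(bad)  # SELECT PER_JOBS. AS JOB_FAMILY_CODE FROM PER_JOBS

# With the parameter set (this column name is purely hypothetical),
# the statement becomes syntactically valid again:
good = render_query(
    template, {"$$JOBFAMILYCODE_FLXFLD_SEGMENT_COL": "JOB_INFORMATION1"}
)
print(good)
```

This matches the poster's remark about having to set those task parameters per the installation guide; the fix would be to supply real flex-field segment column names rather than leaving the defaults empty.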
Loading through Process Chains 2 Delta Loads and 1 Full Load (ODS to Cube).
Dear All,
I am loading through Process chains with 2 Delta Loads and 1 Full load from ODS to Cube in 3.5. Am in the development process.
My loading process is:
Start - 2 Delta Loads - 1 Full Load - ODS Activation - Delete Index - Further Update - Delete overlapping requests from infocube - Creating Index.
My question is:
When I load for the first time I get some data, and for the next load I should get zero, as there is no new data, but I am getting the same number of records for the next load. Maybe it is taking data from the full upload, I guess. Please guide me.
Krishna.
Hi,
The reason you are getting the same number of records is, as you said, the full load: after running the deltas you got all the changed records, but after those two deltas you again have a full load step, which will pick up the whole of the data all over again.
Other possible reasons you are getting the same number of records:
1> You are running the chain for the first time.
2> You ran these delta InfoPackages for the first time; while initializing the deltas you might have chosen "Initialization without data transfer", so when you ran them for the first time they picked up the whole of the data. Running a full load after that will also pick up the same records.
If the two deltas you are talking about run one after another, then I'd say you got the data because of some changes; since you are loading from a single ODS to a cube, both your delta and your full will pick the same data for the first time during data marting, because they have the same DataSource (the ODS).
Hopefully this serves your purpose.
Thanks & Regards
Vaibhave Sharma
Edited by: Vaibhave Sharma on Sep 3, 2008 10:28 PM -
Inventory cube load: can I use full load?
hi,
I have an inventory cube with non-cumulative key figures. Can I use the full load option, or do I have to load with delta for non-cumulative KFs?
Regards,
Andrzej
Hi Aksik,
The load type does not matter for the aggregation option: you can load by full or delta.
Now the report creation has to be done with this in mind, so that the figures come out correctly. Specify the aggregation required in the properties of the key figure in the query. If required, use an ODS in the data flow.
regards
Happy Tony -
Delta and Full Load question for cube and ODS
Hi all,
I need to push full load from Delta ODS.
I have process chain.... in which the steps are like below,
1. R/3 extractor for ODS1 (delta)
2. ODS1 to ODS2 (delta)
3. ODS2 to Cube ---> needs to be full load
Now when I run the process chain with automatic further processing, ODS2 does an init/delta into the cube.
How can I make that step a full load instead?
can any one guide anything in this ?
Thanks,
KS
Hi,
1. R/3 extractor for ODS1 (delta): This is OK; normally you can put the delta InfoPackage in the process chain.
2. ODS1 to ODS2 (delta): It automatically flows from ODS1 to ODS2 (you need to select "Update data automatically" in the targets at the time of ODS creation).
3. ODS2 to Cube ---> needs to be full load:
For this, create update rules from ODS2 to the cube, then create an InfoPackage between ODS2 and the cube and do full loads. You can delete the data in the cube before the load and then do a full load to the cube.
Note: In ODS2, don't select "Update data automatically to data targets".
Thanks
Reddy
Edited by: Surendra Reddy on Nov 21, 2008 1:57 PM -
Execution Plan Failed for Full Load
Hi Team,
When I ran the full load, the execution plan failed. I verified the log and found the below information from SEBL_VERT_8_1_1_FLATFILE.DATAWAREHOUSE.SIL_Vert.SIL_InsertRowInRunTable:
DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
DIRECTOR> VAR_27028 Use override value [SEBL_VERT_8_1_1_FLATFILE.DATAWAREHOUSE.SIL_Vert.SIL_InsertRowInRunTable.log] for session parameter:[$PMSessionLogFile].
DIRECTOR> VAR_27028 Use override value [1] for mapping parameter:[mplt_SIL_InsertRowInRunTable.$$DATASOURCE_NUM_ID].
DIRECTOR> VAR_27028 Use override value [21950495] for mapping parameter:[MPLT_GET_ETL_PROC_WID.$$ETL_PROC_WID].
DIRECTOR> TM_6014 Initializing session [SIL_InsertRowInRunTable] at [Mon Sep 26 15:53:45 2011].
DIRECTOR> TM_6683 Repository Name: [infa_rep]
DIRECTOR> TM_6684 Server Name: [infa_service]
DIRECTOR> TM_6686 Folder: [SIL_Vert]
DIRECTOR> TM_6685 Workflow: [SIL_InsertRowInRunTable] Run Instance Name: [] Run Id: [8]
DIRECTOR> TM_6101 Mapping name: SIL_InsertRowInRunTable [version 1].
DIRECTOR> TM_6963 Pre 85 Timestamp Compatibility is Enabled
DIRECTOR> TM_6964 Date format for the Session is [MM/DD/YYYY HH24:MI:SS]
DIRECTOR> TM_6827 [H:\Informatica901\server\infa_shared\Storage] will be used as storage directory for session [SIL_InsertRowInRunTable].
DIRECTOR> CMN_1805 Recovery cache will be deleted when running in normal mode.
DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
DIRECTOR> TM_6703 Session [SIL_InsertRowInRunTable] is run by 32-bit Integration Service [node01_eblnhif-czc80685], version [9.0.1 HotFix2], build [1111].
MANAGER> PETL_24058 Running Partition Group [1].
MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
MANAGER> PETL_24001 Parallel Pipeline Engine running.
MANAGER> PETL_24003 Initializing session run.
MAPPING> CMN_1569 Server Mode: [ASCII]
MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
MAPPING> TM_6151 The session sort order is [Binary].
MAPPING> TM_6156 Using low precision processing.
MAPPING> TM_6180 Deadlock retry logic will not be implemented.
MAPPING> TM_6187 Session target-based commit interval is [10000].
MAPPING> TM_6307 DTM error log disabled.
MAPPING> TE_7022 TShmWriter: Initialized
MAPPING> DBG_21075 Connecting to database [Connect_to_OLAP], user [OBAW]
MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
MAPPING> CMN_1022 Database driver error...
CMN_1022 [
Database driver error...
Function Name : Logon
ORA-12154: TNS:could not resolve service name
Database driver error...
Function Name : Connect
Database Error: Failed to connect to database using user [OBAW] and connection string [Connect_to_OLAP].]
MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
MAPPING> CMN_1076 ERROR creating database connection.
MAPPING> DBG_21520 Transform : LKP_W_PARAM_G_Get_ETL_PROC_WID, connect string : Relational:DataWarehouse
MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
MAPPING> TE_7017 Internal error. Failed to initialize transformation [MPLT_GET_ETL_PROC_WID.LKP_ETL_PROC_WID]. Contact Informatica Global Customer Support.
MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
MAPPING> TM_6006 Error initializing DTM for session [SIL_InsertRowInRunTable].
MANAGER> PETL_24005 Starting post-session tasks. : (Mon Sep 26 15:53:45 2011)
MANAGER> PETL_24029 Post-session task completed successfully. : (Mon Sep 26 15:53:45 2011)
MAPPING> TM_6018 The session completed with [0] row transformation errors.
MANAGER> PETL_24002 Parallel Pipeline Engine finished.
DIRECTOR> PETL_24013 Session run completed with failure.
DIRECTOR> TM_6022
SESSION LOAD SUMMARY
================================================
DIRECTOR> TM_6252 Source Load Summary.
DIRECTOR> CMN_1740 Table: [SQ_FILE_DUAL] (Instance Name: [SQ_FILE_DUAL])
Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
DIRECTOR> TM_6253 Target Load Summary.
DIRECTOR> TM_6023
===================================================
DIRECTOR> TM_6020 Session [SIL_InsertRowInRunTable] completed at [Mon Sep 26 15:53:46 2011].
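As an aside, when digging through long session logs like the one above, it can help to pull out the first Oracle error code programmatically instead of scanning by eye. A minimal sketch (the log text here is just an excerpt of the output above, and the helper name is my own):

```python
import re

def first_ora_error(log_text):
    """Return the first Oracle error code (e.g. 'ORA-12154') found in a log, or None."""
    match = re.search(r"ORA-\d{5}", log_text)
    return match.group(0) if match else None

log = """MAPPING> CMN_1022 Database driver error...
Function Name : Logon
ORA-12154: TNS:could not resolve service name"""

print(first_ora_error(log))  # ORA-12154
```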
I checked the physical data source connection in DAC and Workflow Manager, tested and verified it, but I am still facing the same issue. I am connecting through the Oracle MERANT ODBC driver.
Please let me know the solution for this error.
Regards,
VSR

Hi,
Did you try using the native Oracle 10g/11g drivers for the data source? If not, I think you should try that, but first make sure that you can tnsping the OLAP database from your DAC server box. Hope this helps.
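For what it's worth, ORA-12154 usually means the Oracle client on the Integration Service machine cannot resolve the alias used in the connect string. Assuming the connect string [Connect_to_OLAP] from the log is meant to be a TNS alias (the host, port, and service name below are placeholders, not values from this thread), the tnsnames.ora entry would look something like:

```
Connect_to_OLAP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = olap-db-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = OLAPDB))
  )
```

The entry has to live in the tnsnames.ora that the server process actually reads (check TNS_ADMIN, or ORACLE_HOME\network\admin on the Informatica/DAC box); once it does, `tnsping Connect_to_OLAP` from that box should succeed before you retry the session.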
If this is helpful, please mark it as helpful.
Regards,
BI Learner -
Error in Transaction Data - Full Load
Hello All,
This is the current scenario that I am working on:
There is a process chain with two transaction data load (full load) processes to the same cube. In the process monitor everything seems okay (the data loads seem fine), but the overall status for both loads failed due to 'Error in source system/extractor', and it says 'error in data selection'.
Processing is set to data targets only.
In Manage on the cube, I found 3 old requests that were red but NOT set to QM status red. So I set them to QM status red and deleted them, and the subsequent requests then became available for reporting.
Now this full load takes forever. I also do not see an 'Initialize Delta Update' option there; can anyone tell me why that option is missing?
And, coming to the main question: how do I get the process chain to complete? Will I have to repeat the data loads, or what options do I have to get a successfully running process chain, or at least these 2 full loads of transaction data?
Thank you - points will be assigned for helpful answers
- DB
Edited by: Darshana on Jun 6, 2008 12:01 AM
Edited by: Darshana on Jun 6, 2008 12:05 AM

One interesting discovery I just made in R/3 was this job log with respect to the above process chain:
It says that the job was cancelled in R/3 because the material ledger currencies were changed. The process chain is for inventory management, and the data load processes whose jobs get cancelled in the source system are:
1. Material Valuation: period ending inventories
2. Material Valuation: prices
The performance assistant says the following, but I am not sure how far I can go on the R/3 side to rectify this:
Material ledger currencies were changed
Diagnosis
The currencies currently set for the material ledger and the currency types set for valuation area 6205 differ from those set at conversion of the data (production startup).
System Response
The system does not allow you to post transactions after changing the currency settings to ensure consistency.
Procedure
Replace the current settings with those entered at production start-up.
If you wish to change the currency settings, you must use programs to convert data from the old to the new currencies.
Inform your system administrator
Anyone knowledgeable in this area, please give your inputs.
- DB