DATA ARCHIVING JOB Error
Hi All,
I am getting the error below (highlighted in bold) for the monthly scheduled write job for the object SD_VBRK. I have tried running the job with a split variant, but the same error keeps occurring.
All available file names are already being used. Job cancelled after system exception ERROR_MESSAGE
Can you please help me find a resolution as soon as possible?
Thanks in advance.
Regards,
Narendra
Hi Karin,
Thanks for your response. I resolved this by splitting the variant by date; earlier I had tried splitting by sales organization. The job finished successfully.
Regards,
Narendra
Similar Messages
-
Hi friends,
I have completed the archive configuration setup. In 'Define Interfaces for Archiving' I entered one interface (the one I want to archive) with a retention period of 1 day under asynchronous XML messages, and scheduled an archiving job for the same day. After all this, I triggered one successful message. The job ARV_BC_XMB_WRP* gets cancelled with the error message "Error when accessing the archive data", and I am not able to see the archive file in the physical path given in the configuration.
Where does this message get archived?
Could anyone help me understand what the problem is and how to correct it?
Thanks,
Kumar

Hi Sumit,
Thanks for your reply.
I got the message ID from the table, found that cancelled message in Moni, and it was archived when the job ran today.
Cancelled messages are archived only if I maintain the following entry
Category   Parameter                      Current value
RUNTIME    PERSIST_ARCH_MANUAL_CHANGES    1
in Integration Engine Configuration ---> Specific Configuration.
But in this case, every cancelled message gets archived, irrespective of the interfaces given in 'Define Interfaces for Archiving'. I need to archive cancelled messages only for the defined interfaces.
To do this, I selected the "Manually Cancelled Messages" checkbox for the interface given in 'Define Interfaces for Archiving', but it is not working.
Please help me out on this again.
Thanks,
Kumar.
Message was edited by:
ms kumar -
How to fix archiving job errors
Hello Experts,
We have daily batch runs for archiving. The Basis team force-killed some of these jobs due to performance issues.
The situation now is that some spawned jobs (store and delete jobs) were also cancelled (a ripple effect).
Now, when I start the delete job manually for one of the objects (RV_LIKP), I get the following error.
Any suggestions on how to resolve this issue?
Job log overview for job: ARV_RV_LIKP_DEL20140311053442 / 05344200
Date Time Message text Message class Message no. Message type
03/11/2014 05:34:42 Job started 00 516 S
03/11/2014 05:34:42 Step 001 started (program S3LIKPDLS, variant Z_ARCHIVE_PROD, user ID ARCH-BATCH) 00 550 S
03/11/2014 05:34:42 Archive file 013513-001RV_LIKP is being verified BA 238 S
03/11/2014 05:34:43 Archive file 013513-001RV_LIKP contains errors from offset 25,504,229 BA 192 S
03/11/2014 05:34:43 1 archive file(s) were not successfully verified BA 195 E
03/11/2014 05:34:43 Job cancelled after system exception ERROR_MESSAGE 00 564 A

Thank you all for taking the time to reply.
This was actually an old file that had already been moved to storage; hence the program issued the error, as it could not check the data and proceed with deletion.
Are there best practices for fixing archiving errors?
Case 1: the archive program was cancelled, e.g. the batch job consumed too much CPU time, so it was killed.
For example, here's the log:
03/10/2014 10:55:03 Job started
03/10/2014 10:55:03 Step 001 started (program S3LIKPWRS, variant 1100_43, user ID ARCH-BATCH)
03/10/2014 10:55:24 Archiving session 052589 is being created
03/10/2014 11:22:43 25 of 109,116 deliveries processed (0 can be archived)
03/10/2014 11:24:14 Path: /archive/x11/zzzarchive/csd/
03/10/2014 11:24:14 Name for new archive file: RV_LIKP_20140310_112414_0.ARCHIVE
03/10/2014 11:52:56 9,550 of 109,116 deliveries processed (816 can be archived)
03/10/2014 12:22:58 14,075 of 109,116 deliveries processed (1,577 can be archived)
03/10/2014 12:53:01 17,375 of 109,116 deliveries processed (2,239 can be archived)
03/10/2014 13:23:04 23,125 of 109,116 deliveries processed (2,664 can be archived)
03/10/2014 13:53:04 38,375 of 109,116 deliveries processed (2,678 can be archived)
03/10/2014 14:23:05 49,250 of 109,116 deliveries processed (2,799 can be archived)
03/10/2014 14:53:07 60,075 of 109,116 deliveries processed (2,916 can be archived)
03/10/2014 15:23:07 73,500 of 109,116 deliveries processed (2,918 can be archived)
03/10/2014 15:46:15 Internal session terminated with a runtime error SYSTEM_CANCELED (see ST22)
03/10/2014 15:46:15 Job cancelled
What do we do in such a situation?
These are daily batch runs for archiving.
Is there any note that describes situations and actions for "archiving sessions with errors"? -
Error default version Archive job
Hello All
Runtime error standard job
see screenshot
Regards,
Rinaz

Hi Rinaz, I don't think that works.
If you are trying to configure archiving on the AE, you should follow the blog below:
AAE Archiving in SAP PI 7.11 with XMLDAS
To archive interfaces in the Adapter Framework (AFW), the XML DAS (Data Archiving Store) has to be set up. If no archiving has been set up, XML messages will be deleted from the AFW by the standard deletion job after a default time of 1 day (releases 7.10 and higher). You can define which interfaces should be archived via archiving rules.
Reference: note 872388 - Troubleshooting Archiving and Deletion in PI -
ORA-39097: Data Pump job encountered unexpected error -12801
Hello! I am running an Oracle RAC 11.2.0.3.0 database on the IBM AIX 7.1 OS platform.
We normally take Data Pump expdp backups. We created an OS-authenticated user and have had non-DBA users run expdp with it (instead of '/ as sysdba', which is only used by DBAs). This OS-authenticated user had been working fine until it started giving the error below:
Export: Release 11.2.0.3.0 - Production on Fri Apr 5 23:08:22 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, Oracle Label Security,
OLAP, Data Mining, Oracle Database Vault and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "OPS$COPBKPMG"."SYS_EXPORT_SCHEMA_16": /******** DIRECTORY=COPKBFUB_DIR dumpfile=COPKBFUB_Patch35_PreEOD_2013-04-05-23-08_%U.dmp logfile=COPKBFUB_Patch35_PreEOD_2013-04-05-23-08.log cluster=n parallel=4 schemas=BANKFUSION,CBS,UBINTERFACE,WASADMIN,CBSAUDIT,ACCTOPIC,BFBANKFUSION,PARTY,BFPARTY,WSREGISTRY,COPK
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 130.5 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/VIEW/GRANT/CROSS_SCHEMA/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/VIEW/COMMENT
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "WASADMIN"."BATCHGATEWAYLOGDETAIL" 2.244 GB 9379850 rows
. . exported "WASADMIN"."UBTB_TRANSACTION" 13.71 GB 46299982 rows
. . exported "WASADMIN"."INTERESTHISTORY" 2.094 GB 13479801 rows
. . exported "WASADMIN"."MOVEMENTSHISTORY" 1.627 GB 13003451 rows
. . exported "WASADMIN"."ACCRUALSREPORT" 1.455 GB 18765315 rows
ORA-39097: Data Pump job encountered unexpected error -12801
ORA-39065: unexpected master process exception in MAIN
ORA-12801: error signaled in parallel query server PZ99, instance copubdb02dc:COPKBFUB2 (2)
ORA-01460: unimplemented or unreasonable conversion requested
Job "OPS$COPBKPMG"."SYS_EXPORT_SCHEMA_16" stopped due to fatal error at 23:13:37
Please assist.

Have you seen this?
Bug 13099577 - ORA-1460 with parallel query [ID 13099577.8] -
ERROR: Invalid data pump job state; one of: DEFINING or EXECUTING.
Hi all.
I'm using Oracle 10g (10.2.0.3). I had some scheduled jobs to export the DB with Data Pump. Some days ago they began to fail with the message:
ERROR: Invalid data pump job state; one of: DEFINING or EXECUTING.
Wait a few moments and check the log file (if one was used) for more details.
EM says that I can't monitor the job because there is no master table. How can I find the log?
I also found in the bdump directory that exceptions had occurred:
*** 2008-08-22 09:04:54.671
*** ACTION NAME:(EXPORT_ITATRD_DIARIO) 2008-08-22 09:04:54.671
*** MODULE NAME:(Data Pump Master) 2008-08-22 09:04:54.671
*** SERVICE NAME:(SYS$USERS) 2008-08-22 09:04:54.671
*** SESSION ID:(132.3726) 2008-08-22 09:04:54.671
kswsdlaqsub: unexpected exception err=604, err2=24010
kswsdlaqsub: unexpected exception err=604, err2=24010
* kjdrpkey2hv: called with pkey 114178, options x8
* kjdrpkey2hv: called with pkey 114176, options x8
* kjdrpkey2hv: called with pkey 114177, options x8
Some help would be appreciated.
Thanks.

Delete any incomplete or failed jobs and try again.
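For reference, this is roughly how I'd find and clear the leftovers (a sketch; the owner and job names below are illustrative, use whatever DBA_DATAPUMP_JOBS actually reports on your system):

```sql
-- List Data Pump jobs; an orphaned job typically shows STATE = 'NOT RUNNING'
-- with no attached sessions, while its master table still exists.
SELECT owner_name, job_name, state, attached_sessions
  FROM dba_datapump_jobs;

-- Dropping the leftover master table removes the stuck job definition.
-- (Owner and job name here are illustrative.)
DROP TABLE "SCOTT"."SYS_EXPORT_SCHEMA_01" PURGE;
```

Once the orphaned master tables are gone, the scheduled expdp jobs can create fresh master tables and should run again.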
-
ORA-39097: Data Pump job encountered unexpected error -39076
Hi Everyone,
Today I tried to take a table-specific Data Pump export from my test database (version 10.2.0.4 on Solaris 10, 64-bit) and got the following error message:
Job "SYSTEM"."SYS_EXPORT_TABLE_23" successfully completed at 09:51:36
ORA-39097: Data Pump job encountered unexpected error -39076
ORA-39065: unexpected master process exception in KUPV$FT_INT.DELETE_JOB
ORA-39076: cannot delete job SYS_EXPORT_TABLE_23 for user SYSTEM
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 934
ORA-31632: master table "SYSTEM.SYS_EXPORT_TABLE_23" not found, invalid, or inaccessible
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 1079
ORA-20000: Unable to send e-mail message from pl/sql because of:
ORA-29260: network error: Connect failed because target host or object does not exist
ORA-39097: Data Pump job encountered unexpected error -39076
ORA-39065: unexpected master process exception in MAIN
ORA-39076: cannot delete job SYS_EXPORT_TABLE_23 for user SYSTEM
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 934
ORA-31632: master table "SYSTEM.SYS_EXPORT_TABLE_23" not found, invalid, or inaccessible
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 1079
ORA-20000: Unable to send e-mail message from pl/sql because of:
ORA-29260: network error: Connect failed because target host or object does not exist
I hope the export dump file is a valid one, but I don't know why I am getting this error message. Has anyone faced this kind of problem? Please advise.
Thanks
Shan

Once you see this:
Job "SYSTEM"."SYS_EXPORT_TABLE_23" successfully completed at 09:51:36

the Data Pump job is done with the dump file. There is some cleanup needed afterwards, and it looks like something in that cleanup failed. I'm not sure what it was, but your dump file should be fine. One easy way to test it is to run impdp with SQLFILE. This does everything an import would do, but instead of creating objects it writes the DDL to the SQL file.
impdp user/password sqlfile=my_test.sql directory=your_dir dumpfile=your_dump.dmp ...
If that works, then your dump file should be fine. The last thing the export does is write the Data Pump master table to the dump file, and the first thing an import does is read that table back in. So if you can read it in (which impdp with SQLFILE does), your dump is good.
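A slightly fuller version of that check, as a sketch (the user, directory, and file names are placeholders, not from this thread):

```shell
# Dry-run the dump: impdp reads the master table and writes the DDL it
# WOULD execute into test_ddl.sql, creating no database objects.
impdp system/password DIRECTORY=dump_dir DUMPFILE=your_dump.dmp SQLFILE=test_ddl.sql

# If the master table in the dump were unreadable, impdp would fail while
# reading it; otherwise the generated DDL lands here:
head -20 test_ddl.sql
```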
Dean -
Getting error while data archiving in BWP.
Dear All,
I am getting the following error while archiving data in BWP:
Value 'NP' in field COMPOP not permitted.
Can anybody help me solve this error?
Thanks in advance.
Regards,
Vaibhav

Hi,
May I know the field description for COMPOP?
Rgds,
Ravi -
Error delete archive job - BROKEN_URI_EXISTS
Hello!
I'm having trouble removing archiving jobs with status Warning. I created a new home path, added a new folder under Archive Path Properties, and restarted the job, which worked. But I cannot remove a background job with status Warning; I get the error BROKEN_URI_EXISTS. In XML DAS Administration, when I click Unassign, an error occurs:
Error while unassigning archive path /px1/xi_af_msg/ from archive store ARCHIVE MESSAGES; check application log of back-end system: java.lang.Exception: 598 _ASSIGN_ARCHIVE_STORES: _ASSIGN_ARCHIVE_STORES: I/O-Error while deleting collection 3iwjupqcfmi6jcuzaaaaazzg2g: Archive store returned following response: 598 I/O Error java.io.IOException: Error while deleting collection //sapmnt/PX1/global/xi/px1/xi_af_msg/2014/07/3iwjupqcfmi6jcuzaaaaazzg2g/ java.io.IOException: Error while deleting collection //sapmnt/PX1/global/xi/px1/xi_af_msg/2014/07/3iwjupqcfmi6jcuzaaaaazzg2g/
But at this path, //sapmnt/PX1/global/xi/px1/xi_af_msg/2014/, there is no folder 07 and no files.
Please tell me how to stop and delete jobs with the status Warning.
Best regards,
Rinaz

Hi Rinaz,
Check whether your PI system is on the SP indicated in note 1624448 - BROKEN_URI_EXISTS error when editing archiving job.
Regards. -
"No list available" - Spool message - Data Archiving
Hi ,
I am facing a problem in CRM data archiving.
I am archiving the archiving object PRODUCT_MD.
I have given all the parameters correctly.
When I execute the job, the spool message says <b>"No list available"</b>.
I thought the problem was with the input, so I executed transaction COMMPR01 (based on help.sap.com for this archiving object).
I selected some of the "Material" entries there and set the product status to <b>"To archive"</b>.
Any idea on this?
Expecting your valuable inputs.
Regards
Ganesh

Whenever we execute logic in batch mode, the ALV list is written to the spool. The error says that no data is coming from the list output of the report; hence no spool is created.
Regards,
Chiranjeevi. -
Repository A2 is already used for document area DATAARCH (Data Archiving).
Dear ALL,
Actually, I am having a problem with photo configuration in the back end.
I am using transaction SM31 to maintain the table TOAAR_C. For 'Content Repository Identification' (A2), I get the error below:
Repository A2 is already used for document area DATAARCH (Data Archiving).
Please look into this issue.
Regards,
Venkat

Jürgen, your answer was very helpful and I'm sure I'm now headed in the right direction!
However I still have an issue:
When the write job has finished, and before the delete and storage phases, the archive file is not accessible, so I am not able to continue with deletion and storage.
An example:
Filename: RMM_EKKO11180104855_CLL
Logical path: ARCHIVE_GLOBAL_PATH_WITH_ARCHIVE_LINK
Physical file name: E:\SAPContRep\Z5\RMM_EKKO11180104855_CLL
E:\SAPContRep\Z5 is the root of the content repository and should, according to the documentation, contain the file during the delete phase and until final storage in the repository. But there is no file there.
In the log of the write job there are the following entries:
Job started
Step 001 started (program RM06EW70, variant Z_DEMO_7, user ID CLL)
Reading purchasing documents
Archiving session 000913 is being created
Path:
Name for new archive file: RMM_EKKO11180104855_CLL
1 of 1 purchasing documents processed
Job finished
I am a little concerned that no path is written to the job log; that could mean something.
Any ideas?
Thanks,
Claus. -
Data Warehouse Jobs stuck at running - Since February!
Folks,
My incidents have not been groomed out of the console since February. I ran Get-SCDWJob and found that most of the jobs are disabled (see below). I've tried to enable all of them using PowerShell, but they are never set back to Enabled.
No errors are present in the Event log. In fact, the Event log shows successfully starting the jobs.
I've restarted the three services. Rebooted the server.
I've been using this blog post as a guide.
http://blogs.msdn.com/b/scplat/archive/2010/06/07/troubleshooting-the-data-warehouse-data-warehouse-isn-t-getting-new-data-or-jobs-seem-to-run-forever.aspx
Anyone have any ideas?
Win 08 R2 and SQL 2008 R2 SP 1.
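For what it's worth, this is the sort of loop I used to try to re-enable them (a sketch; the module path varies by install, and the commands must run on the DW management server):

```powershell
# Load the Service Manager DW cmdlets (adjust the path to your install)
Import-Module "C:\Program Files\Microsoft System Center\Service Manager\Microsoft.EnterpriseManagement.Warehouse.Cmdlets.psd1"

# Re-enable every job that Get-SCDWJob reports as disabled
Get-SCDWJob |
    Where-Object { -not $_.IsEnabled } |
    ForEach-Object { Enable-SCDWJob -JobName $_.Name }

# Verify whether the IsEnabled flag actually changed
Get-SCDWJob | Select-Object Name, Status, IsEnabled
```

Even after this, the jobs never show IsEnabled = True.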
BatchId Name                                                 Status  CategoryName    StartTime             EndTime               IsEnabled
13810   DWMaintenance                                        Running Maintenance     3/22/2013 4:26:00 PM                        True
13807   Extract_DW_ServMgr_MG                                Running Extract         2/28/2013 7:08:00 PM                        False
13808   Extract_ServMgr_MG                                   Running Extract         2/28/2013 7:08:00 PM                        False
13780   Load.CMDWDataMart                                    Running Load            2/28/2013 7:08:00 PM                        False
13784   Load.Common                                          Running Load            2/28/2013 7:08:00 PM                        False
13781   Load.OMDWDataMart                                    Running Load            2/28/2013 7:08:00 PM                        False
13809   MPSyncJob                                            Running Synchronization 2/28/2013 8:08:00 PM                        True
3405    Process.SystemCenterChangeAndActivityManagementCube  Running CubeProcessing  1/31/2013 3:00:00 AM  2/10/2013 2:59:00 PM  True
3411    Process.SystemCenterConfigItemCube                   Running CubeProcessing  1/31/2013 3:00:00 AM  2/10/2013 2:59:00 PM  True
3407    Process.SystemCenterPowerManagementCube              Running CubeProcessing  1/31/2013 3:00:00 AM  2/10/2013 2:59:00 PM  True
3404    Process.SystemCenterServiceCatalogCube               Running CubeProcessing  1/31/2013 3:00:00 AM  2/10/2013 2:59:00 PM  True
3406    Process.SystemCenterSoftwareUpdateCube               Running CubeProcessing  1/31/2013 3:00:00 AM  2/10/2013 2:59:00 PM  True
3410    Process.SystemCenterWorkItemsCube                    Running CubeProcessing  1/31/2013 3:00:00 AM  2/10/2013 2:59:00 PM  True
13796   Transform.Common                                     Running Transform       2/28/2013 7:08:00 PM                        False

Okay, I've done too much work without writing it down. I've gotten it to show me a new error using Marcel's script. The error is below.
It looks like a Cube issue. Not sure how to fix it.
There is no need to wait anymore for Job DWMaintenance because there is an error in module ManageCubeTranslations and the error is: <Errors><Error EventTime="2013-07-29T19:03:30.1401986Z">The workitem to add cube translations was aborted because a lock was unavailable for a cube.</Error></Errors>
Also running the command Get-SCDWJobModule | fl >> c:\temp\jobs290.txt shows the following errors.
JobId : 302
CategoryId : 1
JobModuleId : 6350
BatchId : 3404
ModuleId : 5869
ModuleTypeId : 1
ModuleErrorCount : 0
ModuleRetryCount : 0
Status : Not Started
ModuleErrorSummary : <Errors><Error EventTime="2013-02-10T19:58:30.6412697Z">The connection either timed out or was lost.</Error></Errors>
ModuleTypeName : Health Service Module
ModuleName : Process_SystemCenterServiceCatalogCube
ModuleDescription : Process_SystemCenterServiceCatalogCube
JobName : Process.SystemCenterServiceCatalogCube
CategoryName : CubeProcessing
Description : Process.SystemCenterServiceCatalogCube
CreationTime : 7/29/2013 12:57:39 PM
NotToBePickedBefore :
ModuleCreationTime : 7/29/2013 12:57:39 PM
ModuleModifiedTime :
ModuleStartTime :
ManagementGroup : DW_Freeport_ServMgr_MG
ManagementGroupId : f61a61f2-e0fe-eb37-4888-7e0be9c08593
JobId : 312
CategoryId : 1
JobModuleId : 6436
BatchId : 3405
ModuleId : 5938
ModuleTypeId : 1
ModuleErrorCount : 0
ModuleRetryCount : 0
Status : Not Started
ModuleErrorSummary : <Errors><Error EventTime="2013-02-10T19:58:35.1028411Z">Object reference not set to an instance of an object.</Error></Errors>
ModuleTypeName : Health Service Module
ModuleName : Process_SystemCenterChangeAndActivityManagementCube
ModuleDescription : Process_SystemCenterChangeAndActivityManagementCube
JobName : Process.SystemCenterChangeAndActivityManagementCube
CategoryName : CubeProcessing
Description : Process.SystemCenterChangeAndActivityManagementCube
CreationTime : 2/10/2013 7:58:31 PM
NotToBePickedBefore : 2/10/2013 7:58:35 PM
ModuleCreationTime : 2/10/2013 7:58:31 PM
ModuleModifiedTime : 2/10/2013 7:58:35 PM
ModuleStartTime : 2/10/2013 7:58:31 PM
ManagementGroup : DW_Freeport_ServMgr_MG
ManagementGroupId : f61a61f2-e0fe-eb37-4888-7e0be9c08593
JobId : 331
CategoryId : 1
JobModuleId : 6816
BatchId : 3406
ModuleId : 6242
ModuleTypeId : 1
ModuleErrorCount : 0
ModuleRetryCount : 0
Status : Not Started
ModuleErrorSummary : <Errors><Error EventTime="2013-02-10T19:58:38.7064180Z">Object reference not set to an instance of an object.</Error></Errors>
ModuleTypeName : Health Service Module
ModuleName : Process_SystemCenterSoftwareUpdateCube
ModuleDescription : Process_SystemCenterSoftwareUpdateCube
JobName : Process.SystemCenterSoftwareUpdateCube
CategoryName : CubeProcessing
Description : Process.SystemCenterSoftwareUpdateCube
CreationTime : 2/10/2013 7:58:35 PM
NotToBePickedBefore : 2/10/2013 7:58:39 PM
ModuleCreationTime : 2/10/2013 7:58:35 PM
ModuleModifiedTime : 2/10/2013 7:58:39 PM
ModuleStartTime : 2/10/2013 7:58:35 PM
ManagementGroup : DW_Freeport_ServMgr_MG
ManagementGroupId : f61a61f2-e0fe-eb37-4888-7e0be9c08593
JobId : 334
CategoryId : 1
JobModuleId : 6822
BatchId : 3407
ModuleId : 6246
ModuleTypeId : 1
ModuleErrorCount : 0
ModuleRetryCount : 0
Status : Not Started
ModuleErrorSummary : <Errors><Error EventTime="2013-02-10T19:58:42.2943950Z">Object reference not set to an instance of an object.</Error></Errors>
ModuleTypeName : Health Service Module
ModuleName : Process_SystemCenterPowerManagementCube
ModuleDescription : Process_SystemCenterPowerManagementCube
JobName : Process.SystemCenterPowerManagementCube
CategoryName : CubeProcessing
Description : Process.SystemCenterPowerManagementCube
CreationTime : 2/10/2013 7:58:39 PM
NotToBePickedBefore : 2/10/2013 7:58:42 PM
ModuleCreationTime : 2/10/2013 7:58:39 PM
ModuleModifiedTime : 2/10/2013 7:58:42 PM
ModuleStartTime : 2/10/2013 7:58:39 PM
ManagementGroup : DW_Freeport_ServMgr_MG
ManagementGroupId : f61a61f2-e0fe-eb37-4888-7e0be9c08593
JobId : 350
CategoryId : 1
JobModuleId : 6890
BatchId : 3410
ModuleId : 6299
ModuleTypeId : 1
ModuleErrorCount : 0
ModuleRetryCount : 0
Status : Not Started
ModuleErrorSummary : <Errors><Error EventTime="2013-02-10T19:58:45.8355723Z">Object reference not set to an instance of an object.</Error></Errors>
ModuleTypeName : Health Service Module
ModuleName : Process_SystemCenterWorkItemsCube
ModuleDescription : Process_SystemCenterWorkItemsCube
JobName : Process.SystemCenterWorkItemsCube
CategoryName : CubeProcessing
Description : Process.SystemCenterWorkItemsCube
CreationTime : 2/10/2013 7:58:42 PM
NotToBePickedBefore : 2/10/2013 7:58:46 PM
ModuleCreationTime : 2/10/2013 7:58:42 PM
ModuleModifiedTime : 2/10/2013 7:58:46 PM
ModuleStartTime : 2/10/2013 7:58:42 PM
ManagementGroup : DW_Freeport_ServMgr_MG
ManagementGroupId : f61a61f2-e0fe-eb37-4888-7e0be9c08593
JobId : 352
CategoryId : 1
JobModuleId : 6892
BatchId : 3411
ModuleId : 6300
ModuleTypeId : 1
ModuleErrorCount : 0
ModuleRetryCount : 0
Status : Not Started
ModuleErrorSummary : <Errors><Error EventTime="2013-02-10T19:58:49.6887476Z">Object reference not set to an instance of an object.</Error></Errors>
ModuleTypeName : Health Service Module
ModuleName : Process_SystemCenterConfigItemCube
ModuleDescription : Process_SystemCenterConfigItemCube
JobName : Process.SystemCenterConfigItemCube
CategoryName : CubeProcessing
Description : Process.SystemCenterConfigItemCube
CreationTime : 2/10/2013 7:58:46 PM
NotToBePickedBefore : 2/10/2013 7:58:50 PM
ModuleCreationTime : 2/10/2013 7:58:46 PM
ModuleModifiedTime : 2/10/2013 7:58:50 PM
ModuleStartTime : 2/10/2013 7:58:46 PM
ManagementGroup : DW_Freeport_ServMgr_MG
ManagementGroupId : f61a61f2-e0fe-eb37-4888-7e0be9c08593 -
We are experiencing slowing performance in OTL due to the volume of historic data we are holding and would like to archive some of it. I have read the OTL whitepaper and followed the instructions; however, when I run "Validate Data Set", the set completes with a warning and the log file contains the following:
+---------------------------------------------------------------------------+
Start of log messages from FND_FILE
+---------------------------------------------------------------------------+
Validation Warnings
Timecards Not Retrieved - Status = Approved >
Timecard Period Worker
10-APR-10-16-APR-10 11 - 1234 - BLOGGS, JOE
...with a long list of employees. Running the "Archive Data Set" job does the same (completes with a warning and similar log file messages); however, all the data is transferred into the _AR tables as expected. I think the issue might be that we don't use Oracle Payroll/Projects, so we never actually retrieve time into another application. We transfer the time to BEE and then it stays there; reporting and payroll are done through bespoke reports and processes looking at the HXT tables.
Will this cause any issues in archiving? Is it something to worry about? Can I flag the timecards as retrieved to prevent completion with a warning?
Thanks in advance,
John.

We have over 6 years of timecards in Oracle Time and Labor and are looking into archiving them. Following the OTL implementation guide, I created a data set to archive the Jan 1-31, 2007 timecards and then ran the "Validate Data Set" process. It completes with a warning; the log says "Timecards Not Retrieved - Status = Approved" with a long list of names. I'm not finding anything on My Oracle Support related to this. Did anyone find out what this means?
Also, the implementation guide mentions setting the profile "OTL: Max Errors in Validate Data Set", but I'm not able to find it. I see one for "OTL: Max warnings in Validate Data Set" that is set to 500 at the site level. Does anyone know if Oracle may have changed the name of that profile, or where I can find the "OTL: Max Errors in Validate Data Set" profile?
We are on R12.1.3.
Any input would be greatly appreciated.
Melinda -
XI Archiving: Restarting a terminated archiving job
Hi All,
Due to some constraints, our PI server (development) has only 9.5 GB of disk space for the archive directory. We have 300,000 messages flagged for archiving, and the archiving job terminates with a DATASET_WRITE_ERROR, which we assume is caused by insufficient disk space in the archive folders.
After moving the previous archive files out of the directory to reclaim disk space, we restarted the archiving job. Will restarting allow the job to pick up from where it was terminated, or will it restart from the beginning?
Has anyone experienced something similar before?

Dear Lugman,
The behaviour of the new archiving job depends on the configuration of archiving object BC_XMB (transaction AOBJ). If the option "Do Not Before End of Write Phase" is set (the default), the next archiving job will start from the very beginning; in this case all archive files written by the previous job can be deleted. If the option is deselected, the new archiving job will start with the last archive file, and all files that were written successfully before the error occurred are safe.
In general, you might also want to increase the retention period temporarily. This reduces the number of messages to be archived in a single run and gives you better control over the data volume written to file.
Best regards,
Harald Keimer
XI Development Support
SAP AG, Walldorf -
Business Partner Data Archiving - Help Required
Hi,
I am new to data archiving and need help archiving business partners in CRM. I tried to archive some BPs, but they were not archived. Kindly shed some light on this.
The problems we face are:
1) When we try to write a business partner to an archive file, the job log shows Finished, but no files are created in the system.
2) When we try to delete the BPs from the database, it doesn't show any archived BPs that could be deleted (I guess this is because there are no archive files).
The archiving object used is CA_BUPA. We have created a variant and set the time as Immediate and the spool to LOCL.
Kindly let us know if there is any step we are missing here or if we are on the wrong track.
All answers are appreciated.
Thanks,
Prashanth
Message was edited by:
Prashanth KR

Hi,
To archive a BP, the following steps are to be followed:
A. Mark the BP to be archived in transaction BP > status tab.
B. Run dacheck.
FYI: steps on how to perform dacheck:
1. In transaction DACONTROL, change the following parameters to these values for CA_BUPA:
No. Calls/Packages: 1
Number of Objects: 50
Package Size RFC: 20
Number of Days: 1 (if you use the mobile client it should be more)
2. If a business partner should be archived directly after the archiving flag was set, this can be done by resetting the check date with report CRM_BUPA_ARCHIVING_FLAGS.
Here, only check (set) the options:
- Archiving Flag
- Reset Check Date
All other reset options should be unchecked.
3. Go to dacheck and run the process.
4. This will change the status of the BPs to 'Archivable'.
*Only those BPs which are not involved in any business transactions, install base, or Product Partner Range (PPR) will be set to archivable.
The BPs with status 'Archivable' will be used by the archiving run.
Kindly refer to note 646425 before going to step C.
C. Then run transaction CRM_BUPA_ARC.
- Make sure that the "selection program" in transaction DACONTROL is maintained as CRM_BUPA_DA_SELECTIONS.
- Also create a variant, which can be done by executing CRM_BUPA_DA_SELECTIONS and entering the variant's name in the Variant field. This variant is the business partner range.
- Also please check note 554460.
Regards, Gerhard