Job locked
Hi,
I am trying to drop a job but get the following error:
ORA-27468: "." is locked by another processI try to disable it and get the same error.
I looked for a blocking session in v$lock but found none.
Is there any way to resolve this issue?
Thanks.
Solomon,
It's a scheduler job; its current state is 'SCHEDULED'. If it were running, wouldn't there be a session blocking it in the v$lock view? As I mentioned, I did not find any.
Anyway, I ran
exec dbms_scheduler.stop_job() just in case the job was running but got the same error.
Thanks.
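One commonly suggested way out of ORA-27468 is to force-stop and then force-drop the job. This is a hedged sketch, not a confirmed fix for this thread: MY_JOB is a hypothetical job name, and STOP_JOB with force requires the MANAGE SCHEDULER privilege.

```sql
-- Force-stop the job in case it is somehow still running
-- (ignore "job is not running" errors), then force-drop it
-- so the lock held by the scheduler is released.
BEGIN
  DBMS_SCHEDULER.STOP_JOB(job_name => 'MY_JOB', force => TRUE);
EXCEPTION
  WHEN OTHERS THEN NULL;  -- e.g. the job was not actually running
END;
/
BEGIN
  DBMS_SCHEDULER.DROP_JOB(job_name => 'MY_JOB', force => TRUE);
END;
/
```

Checking DBA_SCHEDULER_RUNNING_JOBS (rather than v$lock) is also worth a try, since a running scheduler job shows up there.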
Similar Messages
-
Does MM_EKKO archive write job lock the tables?
Archive experts,
We run MM_EKKO archiving twice a week. From my understanding, the write job just reads the data and writes it to archive files. But we run Replenishment jobs which hit the EKPO table; those jobs run slowly, and I noticed that the archive write job was holding locks on this table. As soon as I cancelled the write job, the replenishment jobs moved faster. Why does this happen? Archive write jobs should not cause any performance issues; only the delete jobs should impact performance. Am I correct? Is anyone else experiencing similar issues?
Sam
Hi Sam,
Interesting question! Your understanding is correct: the write job just reads the data from tables and writes it into archive files... but the write job of MM_EKKO (and MM_EBAN) is a bit different. The MM_EKKO write job also takes care of setting the deletion indicator (depending on whether it is one-step or two-step archiving). So it is possible that it takes a lock while setting the deletion indicator, as that is a change to the database.
Please have a look at the following link for an explanation of one-step and two-step archiving:
http://help.sap.com/saphelp_47x200/helpdata/en/9b/c0963457889b37e10000009b38f83b/frameset.htm
Hope this explains the reason for the performance problem you are facing.
Regards,
Naveen -
Archive data of using DART : Job lock problem in table TSP01
Hi ,
I'm facing problem while archiving from Production system to UNIX using DART.
Using transaction FTW1A for the data extract: once the data has been extracted, we need to run the verification process through transaction FTWE1 (background job RTXWCHK4) and FTWD (background job RTXWCHK2).
When I run transaction FTWD (background job RTXWCHK2) to verify, it holds an extensive lock on the TSP01 table for a long period of time, which blocks other processing on this table, so we have to terminate the job. For the time being the workaround is to run this job during weekends, but I want a proper solution for this.
Can anybody help me with this problem?
Regards,
Nupur S Jaipuriyar
Locking a row that does not exist can be difficult.
On most databases you can lock an entire table through "LOCK TABLE <table>", but this may be extreme. Alternatively, you could insert an empty row into the table with the id that you want to lock; you would then hold a write lock on that row until you commit the transaction. -
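The two approaches above might look like this (a hedged sketch; the ORDERS table and the id value are hypothetical):

```sql
-- Approach 1: lock the entire table (often too coarse).
LOCK TABLE orders IN EXCLUSIVE MODE;

-- Approach 2: insert a placeholder row for the id you want to "lock".
-- The row lock is held until COMMIT or ROLLBACK, so concurrent sessions
-- trying to create or lock the same id will wait.
INSERT INTO orders (order_id) VALUES (42);
-- ... do the protected work here ...
COMMIT;  -- releases the lock
```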
Hello All,
We have a JOB that is sending e-mails.
JOB runs every 15 minutes and is sending 15 emails within this time.
After sending the e-mails, the JOB should await its next run 15 minutes later, but the JOB stays locked by its session, preventing the next execution.
Job is being scheduled with the following command:
EXEC CRIA_JOB(USER||'.ENVIA_FILA_EMAIL;', 'SYSDATE +1/288','13/03/2012 10:00');
We have performed tests PING / TELNET on smtp during the executions of the JOB and no failures occur.
Any idea?
Thank you.
Sorry guys.
You have what version of the Oracle database?
Oracle Database 10.2.0.1.0 running on Windows Server 2003 64bits
You have a job scheduled using what mechanism? (cron? DBMS_JOB?, DBMS_SCHEDULER?, a third-party product?)
DBMS_JOB
You are sending out emails using what technology?
SMTP on Port 25 - utl_smtp
This is the procedure of the JOB.
CREATE OR REPLACE PROCEDURE ENVIA_EMAIL(
REMETENTE IN VARCHAR2,
DESTINATARIO IN VARCHAR2,
ASSUNTO IN VARCHAR2,
MENSAGEM1 IN LONG,
MENSAGEM2 IN LONG,
MENSAGEM3 IN LONG,
MODULO IN VARCHAR2,
ASSINATURA1 IN VARCHAR,
ASSINATURA2 IN VARCHAR,
ASSINATURA3 IN VARCHAR)
IS
MAILHOST VARCHAR2(30);
rPORTA_SMTP LICENCA.PORTA_SMTP%TYPE;
mail_conn utl_smtp.connection;
crlf VARCHAR2( 2 ):= CHR( 13 ) || CHR( 10 );
mesg LONG;
BEGIN
SELECT SERVIDOR_SMTP, PORTA_SMTP INTO MAILHOST, rPORTA_SMTP
FROM LICENCA;
-- := '192.168.220.1' (SMTP server address taken from the SELECT on LICENCA)
mail_conn := utl_smtp.open_connection(mailhost, rPORTA_SMTP);
mesg:= 'Date: ' || TO_CHAR( SYSDATE, 'dd Mon yy hh24:mi:ss' ) || crlf ||
'From: <'||REMETENTE||'>' || crlf ||
'Subject: '||ASSUNTO || crlf ||
'To: '||DESTINATARIO || crlf ||
' '||crlf ;
IF MENSAGEM1 IS NOT NULL THEN
mesg := mesg || crlf || MENSAGEM1 || crlf || crlf;
END IF;
IF MENSAGEM2 IS NOT NULL THEN
mesg := mesg || MENSAGEM2 || crlf || crlf;
END IF;
IF MENSAGEM3 IS NOT NULL THEN
mesg := mesg || MENSAGEM3 || crlf || crlf;
END IF;
mesg := mesg || crlf || crlf;
IF ASSINATURA1 IS NOT NULL THEN
mesg := mesg || ASSINATURA1 || crlf ;
END IF;
IF ASSINATURA2 IS NOT NULL THEN
mesg := mesg || ASSINATURA2 || crlf ;
END IF;
IF ASSINATURA3 IS NOT NULL THEN
mesg := mesg || ASSINATURA3 || crlf ;
END IF;
mesg := mesg || crlf || crlf ||
'Email enviado eletronicamente - Tecnologia Bohm,Interal (powered by Oracle)'
|| crlf ||
MODULO || ' ' || mailhost || ' ' || TO_CHAR(SYSDATE,'DDMONYYYY');
utl_smtp.helo(mail_conn, mailhost);
utl_smtp.mail(mail_conn, REMETENTE);
utl_smtp.rcpt(mail_conn, DESTINATARIO);
utl_smtp.data(mail_conn, mesg);
utl_smtp.quit(mail_conn);
Exception
WHEN OTHERS THEN
RAISE_APPLICATION_ERROR(-20002,'NAO FOI POSSIVEL ENVIAR EMAIL.'||SQLERRM);
END;
I noticed that the session of the waiting job is from the NETWORK class and its status is ACTIVE.
Is there a possibility of setting a timeout?
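On the timeout question: utl_smtp.open_connection accepts a tx_timeout parameter (in seconds), so a sketch of the change in ENVIA_EMAIL might be (the 30-second value is an arbitrary assumption):

```sql
-- With a transmit timeout, a stalled SMTP conversation raises an
-- exception instead of leaving the job's session waiting on the
-- network indefinitely.
mail_conn := utl_smtp.open_connection(mailhost, rPORTA_SMTP, tx_timeout => 30);
```

The WHEN OTHERS handler would then catch the timeout, so the job fails fast rather than blocking its next run.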
If you guys need any more information, let me know.
Thanks. -
Hello,
I’ve finally managed to deploy my first guest cluster with a shared VHDX using a service template.
So, I now want to try and update my service template. However, whenever I try to do anything with it, in the services section, I receive the error:
Unable to perform the job because one or more of the selected objects are locked by another job. To find out which job is locking the object, in the jobs view, group by status, and find the running or cancelling job for the object. ID 2606
Well I tried that and there doesn’t seem to be a job locking the object. Both the cluster nodes appear to be up and running, and I can’t see a problem with it at all. I tried running the following query in SQL:
SELECT * FROM [VirtualManagerDB].[dbo].[tbl_VMM_Lock] where TaskID='Task_GUID'
but all this gives me is an error that says "Conversion failed when converting from a character string to uniqueidentifier", Msg 8169, Level 16, State 2, Line 1.
I'm no SQL expert as you can probably tell, but I'd prefer not to deploy another service template in case this issue occurs again.
Can anyone help?
No one else had this?
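The Msg 8169 error only means that the literal 'Task_GUID' is not a valid GUID; with a real task GUID taken from the Jobs view, the query would look something like this (the GUID below is a made-up placeholder):

```sql
-- Replace the placeholder with the actual task GUID of the stuck job.
SELECT *
FROM [VirtualManagerDB].[dbo].[tbl_VMM_Lock]
WHERE TaskID = CAST('12345678-90AB-CDEF-1234-567890ABCDEF' AS uniqueidentifier);
```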
-
RFC_USER getting locked
Hi,
RFC_USER is frequently getting locked.
How do I find in which RFC connection the RFC_USER password is wrongly maintained?
Which background job is locking RFC_USER?
Why is RFC_USER getting locked frequently?
Regards
M
You may not get sufficient information from SM21. Try checking the RFC trace; it might help you a bit. Another possible solution is to manually check the RFC connections that use this user and maintain the correct password.
Previously we had the same scenario: the RFC user was getting locked due to an incorrect password being maintained. (After investigating for a long time, we created a new user for this RFC and maintained the password.)
Thanks Regards,
Avinash I -
Global Enqueue Services Deadlock detected while executing job
Hi Gurus,
Need your help in analyzing the situation below.
I got a Global Enqueue Services deadlock in our 2-node RAC, stating a deadlock was detected during auto-execution of a job. But when I checked the job status, it had executed successfully. Also, the trace file gives a session ID and client ID which I am not able to map to any particular instance. The trace file output is below. What would be the approach for such a situation, and how do I find the relevant SQL which caused such deadlocks?
ENV - 11gR2 RAC 2-node with ASM
Linux RHEL 6.2
Trace File output
Trace file /diag/app/ora11g/diag/rdbms/remcorp/REMCORP1/trace/REMCORP1_j001_25571.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
ORACLE_HOME = /u01/app/ora11g/product/11.2.0/db_1
System name: Linux
Node name: remedy-ebu-db1
Release: 2.6.32-220.el6.x86_64
Version: #1 SMP Wed Nov 9 08:03:13 EST 2011
Machine: x86_64
Instance name: REMCORP1
Redo thread mounted by this instance: 1
Oracle process number: 38
Unix process pid: 25571, image: oracle@remedy-ebu-db1 (J001)
*** 2013-05-17 02:00:34.174
*** SESSION ID:(1785.23421) 2013-05-17 02:00:34.174
*** CLIENT ID:() 2013-05-17 02:00:34.174
*** SERVICE NAME:(SYS$USERS) 2013-05-17 02:00:34.174
*** MODULE NAME:() 2013-05-17 02:00:34.174
*** ACTION NAME:() 2013-05-17 02:00:34.174
ORA-12012: error on auto execute of job 83
ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "ARADMIN.CMS_CUSTOMER_DETAILS", line 3
ORA-06512: at line 1
Regards,
Nikhil Mehta.
Look at your alert.log, search for messages such as this:
Wed Jun 16 15:05:58 2010
Global Enqueue Services Deadlock detected. More info in file
Search for the .trc file mentioned and open it; it should have all the info you need.
Even on the error you posted, you already have some very important information:
ORA-06512: at "ARADMIN.CMS_CUSTOMER_DETAILS", line 3In short, you have another job, or a query that runs at the same time as your job which wants to do some DML on rows of that table and at the same time your Job wants to do some DML on another set of rows tha table the query has locked first.
As in:
a) Query/Job X Locks some rows of ARADMIN.CMS_CUSTOMER_DETAILS.
b) Your job locks some rows of ARADMIN.CMS_CUSTOMER_DETAILS.
c) Your job wants the lock on the rows that X has locked first.
d) X wants the lock on the rows your job has locked first.
e) Deadlock.
So if I have something you want, and you have something I want, we are both "deadlocked", and LMD comes in and kills one of us to resolve the impasse. It's probably killing the X job/query, and that's why your job still executes successfully.
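Steps a) through e) can be reproduced with two sessions updating the same two rows in opposite order (a hedged sketch; the table and id values are illustrative):

```sql
-- Session 1:
UPDATE cms_customer_details SET status = 'A' WHERE id = 1;
-- Session 2:
UPDATE cms_customer_details SET status = 'B' WHERE id = 2;
-- Session 1 (blocks, waiting for session 2's row lock):
UPDATE cms_customer_details SET status = 'A' WHERE id = 2;
-- Session 2 (now each waits on the other -> deadlock;
-- one session gets ORA-00060 and its statement is rolled back):
UPDATE cms_customer_details SET status = 'B' WHERE id = 1;
```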
So you have two options:
1) Identify the X job/query -- either stop it or change its execution time
2) Your job -- either stop it or change its execution time -
/SAPAPO/SDP221, selection locks ?
Dear DP experts
Two selection conditions loaded in different sessions of SDP94.. NO common CVC's at detailed level. Did all looks ups from MC62.
Lock arguments conflict. Dialog says selected data is locked. I do not see anything common. Stale locks are deleted. Still arguments appear. I can only see the lock GUID's in SM12
What can be possibly wrong.?
I don't see the message details of 221 in SAP notes
Lock is set at a live cache level.
Thanks
BS
No wildcards, no ranges, no exclusions in ranges, no special chars.
I dug these notes out
1722188 - DP mass processing job: locking conflict (detailed lock)
1988958 - Data is locked: How to find out locked planning objects when using liveCache locking logic
/SAPAPO/TS_LC_DISPLAY_LOCKS is a report delivered in this note.
You can check locks when you see lock GUIDs in SM12, which is always the case when the liveCache lock is used. Authorizations are the same for this report as for SM12. The note says you can navigate between the two.
But yes, there is no point wasting time finding out what is locked when there is a report like the above. I have not downloaded it, and I don't know if this note is part of the latest patch; I need to check that.
Makes sense. As with SAP, little things of no economic consequence and so much struggle.
Thanks
BS -
Job dependency with a "not running" condition on other jobs
Hi,
I have a job chain which should run with the below conditions
1) at 19:00 GMT.
2) It also has another dependency: it should check job x and job y, and both of those jobs must not be running.
I looked into creating job locks, but a lock applies to both jobs and makes them exclude each other, so it cannot be used in this case.
How do I set up this dependency in this case?
Thanks.
Hello,
Then I don't understand the scenario.
So the 19:00 job can only start when job x and y are not running. But job x and y can start when the 19:00 job is running?
In that case create a pre-running action on the 19:00 job that checks if job x or y are running and waits for them to finish.
Regards Gerben -
Job dependency **** urgent
Hi All,
I need help immediately.
A job definition needs to run on an hourly basis from 6:00 to 18:00 GMT.
Once the job scheduled at 18:00 has completed, another job definition needs to run; if the job scheduled at 18:00 has not completed, the second job should not run.
If I put it in a job chain, how can we do this?
Thanks
Ramkumar
1) Create a time window restricting the time from 06:00 to 18:00 hrs and submit job1 with hourly submit frames.
2) between job1 and job2 create a job lock
Under Definitions you will see Locks; in the name field give the lock some appropriate name and save it.
Add this lock to job1 and job2.
The lock will make sure these two jobs do not step on each other: if job2 is scheduled while job1 is running, job2 will wait until job1 completes and then start running. Vice versa is also true.
3) Schedule job2 at 18:00 hrs
hope this is clear
As per the above scenario, job2 should be scheduled at 18:00. What happens if the job is scheduled at 17:00?
Will the second job wait for the completion of job1, or will job2 start running at 17:00?
Thanks
Ram -
Hi all,
We have just migrated our Java based application from Crystal Report 9 Report Application Server to BO Enterprise 3.1.
We imported the existing CR9 report into Crystal Report 2008 and have it saved to BOE 3.1 repository.
However, we observed the time required to generate a particular report is much longer than the existing production system.
The same report can be generated in the production environment within 10 seconds.
However, the same report at the new BOE 3.1 platform will run over 40 seconds.
The said report is frequently generated by users, and the lengthened processing time (from 10 seconds to 40 seconds) is not acceptable.
We then added the "-trace" option in the BO RAS server and try to see what goes wrong.
We observe the particular delay in the trace log as follow:
TraceLog 2010 11 5 18:40:01.616 3572 6068 (Administrator:60) (..\cdtsagent.cpp:3117): doOneRequest saRequestId_getPromptConnInfos in
TraceLog 2010 11 5 18:40:01.616 3572 6068 (Administrator:60) (..\cdtsagent.cpp:658): JobSharing: getMatchingPromptingState: begin
TraceLog 2010 11 5 18:40:01.616 3572 6068 (Administrator:60) (..\cdtsagent.cpp:666): JobSharing: getMatchingPromptingState: job is bound, or metadata not retrieved. Do not search for a prompt state.
TraceLog 2010 11 5 18:40:01.632 3572 6068 (Administrator:60) (..\cdtsagent.cpp:858): JobSharing: getMatchingReportHandler: begin.
TraceLog 2010 11 5 18:40:01.632 3572 6068 (Administrator:60) (..\cdtsagent.cpp:866): JobSharing: getMatchingReportHandler: job bound, null doc, or job locked and not a write action. returning.
TraceLog 2010 11 5 18:40:15.194 3572 6068 (Administrator:60) (..\cdtsagent.cpp:658): JobSharing: getMatchingPromptingState: begin
TraceLog 2010 11 5 18:40:15.210 3572 6068 (Administrator:60) (..\cdtsagent.cpp:666): JobSharing: getMatchingPromptingState: job is bound, or metadata not retrieved. Do not search for a prompt state.
There seems the RAS server is busy processing something from 18:40:01 to 18:40:15.
We have been playing around different server parameters and the report options to try to see how to improve the performance.
It seems we got stuck at this point and we have no concrete direction on how to shorten the report processing time.
Although the particular report has a large number of subreports (52 subreport objects),
we don't expect the performance of BO XI is much worse than CR9 RAS.
Any help / suggestions are much appreciated.
Best regards,
Ivan
Edited by: Ivan Wong on Nov 16, 2010 4:50 PM
Hello Ted Ueda,
The report are generated via our web application using Java SDK.
Is there any reference available such that we could use the Crystal Reports Page Server to complete a similar task?
In the meantime, we have already tried to reduce the report complexity and were able to cut the report generation time down to 20s.
However, it is still way too long for the user to accept.
Based on the trace log, I am also thinking RAS is trying to look for a cached report.
Do you know if I can force the system not to cache, in order to try to speed up the processing?
I don't see any report option that can turn caching off.
Thanks and regards,
Ivan -
Hello,
I am using Firefox version 31. I have a problem with page setup margins. I need to use left 30mm, right 10mm, top 20mm, bottom 20mm. I am using VMware floating-assignment linked-clone virtual desktops. After users log off, the machines are deleted, and the next time users log on they get brand new VDIs. Users get their printers on the VDI via a login script from Active Directory.
Option Explicit
Dim strPrinterUNC, objNetwork
strPrinterUNC = "\\some_server\printer_number_1"
Set objNetwork = CreateObject("WScript.Network")
objNetwork.AddWindowsPrinterConnection strPrinterUNC
WScript.Sleep (20000)
objNetwork.SetDefaultPrinter strPrinterUNC
Set objNetwork = Nothing
Each user has a different logon script, because they use different printers (different printer names, e.g. \\some_server\printer_number_2, \\some_server\printer_number_3, etc.). Page setup margins in Internet Explorer are OK, but how do I make the Firefox page setup margins what I need? For other options I have used the CCK2 Wizard 2.0.4 tool; it worked fine. Maybe I can put some information in C:\Program Files\Mozilla Firefox\defaults\pref\autoconfig.js; I have some useful data in it already. I have found info that "Setting print margins is done in the printer settings". I have a lot of printers, so I cannot set printer margins individually for each of them. Currently Firefox shows 12.7 mm for each of top, bottom, right and left. What should I do if I have a lot of printers in an enterprise environment?
Firefox has a profile folder that has preferences to save this. But the config that would need to be changed is:
print.save_print_settings = true (default): save the print settings after each print job
Locking that preference: [http://kb.mozillazine.org/Locking_preferences]
Or done manually:
1. In order to check the margins, go to ''File'' > ''Page Setup''.
2. Once this is done, switch to the ''Margins & Header/Footer'' tab.
3. Check what's set there under ''Margins''.
The following are the default values for ''Margins'':
Check these values accordingly and change them if necessary. -
Incoming Email in SharePoint 2013
Trevor,
Side question: I need to configure incoming email. Since I have multiple WFE and APP servers, and based on the Q&A at the link below, is the hotfix part of the December cumulative update 15.0.4551.1511, or do I have to apply it separately?
http://social.technet.microsoft.com/Forums/sharepoint/en-US/f9f1d254-0f9e-4eec-a1c7-a94252668680/sharepoint-2013-incoming-mail-with-nlb?forum=sharepointgeneral
Note that the SPLockJobType will be changed to Job in the December 2013 CU for load balancing purposes.
http://sharepoint.nauplius.net/2013/08/update-on-incoming-email-job-lock-type-change-between-sharepoint-2010-and-2013/
It is not recommended to run Incoming Email on more than one SharePoint 2013 server due to a synchronization issue Microsoft identified (hence the job lock type change).
Thanks
Davinder
As of the December 2013 Cumulative Update, you can have more than one server in a SharePoint 2013 farm running the Incoming Email service, as the LockJobType was changed back to None. The change is part of the Dec 2013 CU.
Trevor Seward
Follow or contact me at...
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs. -
SharePoint 2013 : Incoming Mail with NLB
I am trying to configure Incoming Email and I have (2) of each WFE & APP Servers
Based on Q&A link below
http://social.technet.microsoft.com/Forums/sharepoint/en-US/f9f1d254-0f9e-4eec-a1c7-a94252668680/sharepoint-2013-incoming-mail-with-nlb?forum=sharepointgeneral
Note that the SPLockJobType will be changed to Job in the December 2013 CU for load balancing purposes.
http://sharepoint.nauplius.net/2013/08/update-on-incoming-email-job-lock-type-change-between-sharepoint-2010-and-2013/
It is not recommended to run Incoming Email on more than one SharePoint 2013 server due to a synchronization issue Microsoft identified (hence the job lock type change).
My question: is the patch part of SharePoint Cumulative Update 15.0.4551.1511? If not, I will apply it separately (do I apply it to all the servers, and would I have to run SharePoint Products and Configuration)?
Thanks
Davinder
The patch is part of the December 2013 CU. You can use Incoming Email in a load-balanced or high-availability MX/SMTP routing setup, given you have the Dec 2013 CU installed on SharePoint 2013.
Trevor Seward
Follow or contact me at...
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs. -
What is the impact on BI of upgrading R/3 from 4.7 to ECC 6?
Dear all,
Can anyone tell me what the impact of the R/3 upgrade will be on BI, as we are shortly going to upgrade our R/3 from 4.7 to ECC 6?
Do we need to take any precautions in R/3 as well as in BI?
Please give the information... points will be given.
Regards
Venu
Hi,
Please refer to this, as it will give you the pros and cons of the upgrade.
Refer
http://wiki.ittoolbox.com/index.php/Upgrade_BW_to_Netweaver_2004s_from_v3.0B
This Wiki contains Rob Moore's (BW Manager, Edwards Limited) team's experiences in upgrading their Business Warehouse system from 3.0B to BW 7.0.
Contents
1 Upgrading from BW 3.0B to BW 7.0 (Netweaver 2004s)
2 Introduction
3 Overview & Scope
4 Drivers
5 Environment
6 Resource & Timescales
7 High Level Plan
8 Summary Task List
8.1 #Support Pack Hike
8.2 #Plug-in installation
8.3 #PREPARE process
8.4 #Dbase upgrades
8.5 #System Upgrade
9 Lessons Learnt
10 Issues & Fixes
10.1 Unfixed Issues
10.2 Fixed Issues
11 Regression Testing Process
11.1 Introduction
11.2 Set up
11.3 Actions
11.4 Security
12 Transport Freeze
13 Web Applications
13.1 Dashboards
13.2 Internet Graphics Server (IGS)
14 Detailed Task Lists
14.1 Support Pack Hike (detail)
14.2 Plug-in installation (detail)
14.3 Dbase upgrades (detail)
14.4 PREPARE Process (detail)
14.5 System Upgrade (detail)
Upgrading from BW 3.0B to BW 7.0 (Netweaver 2004s)
Introduction
This Wiki contains my team's experiences in upgrading our Business Warehouse system from 3.0B to BW 7.0.
Hopefully it will be useful to anyone else who's about to embark on this. If there's anything I've missed or got wrong, then please feel free to edit it or contact me and I'll try to explain.
Rob Moore - BW Manager, Edwards Limited.
Overview & Scope
This was to be a technical upgrade of BW only. The new BW 7.0 web functionality & tool suite which requires the Java stack rather than the ABAP stack was out of scope. We had heard that the latter was where most of the problems with the upgrade lay. Our plan is to wait for this part of BW 7.0 to become more stable. Also it has a big front end change and the business didn't have sufficient resource to cope with that much change management.
Drivers
3.0B at the end of its maintenance
Opportunities to do better reporting
Options to utilise BI Accelerator
Environment
Our R/3 system was at 4.6C and was not going to be upgraded. We have APO at version SCM4.0.
Our BW system is approximately 300 GB, with 125 global users. It was at version 3.0B SP 18
We have Development, Acceptance and Production environments.
Resource & Timescales
The Project ran for 3.5 months from Feb to May 2007. We used the following resources. The percentages are the approx. amount of their time spent on the project.
Project Manager * 1 70%
BW technical team * 3 50%
ABAP coder * 1 10%
SAP Systems Development expert * 1 20%
Basis * 1 25%
High Level Plan
These are the basic areas. We planned to complete this process for each environment in turn, learning our lessons at each stage and incorporating into revised plans for the next environment. However we did the Support Packs and Plug-Ins in quick succession on all environments to keep our full transport path open as long as possible.
Upgrade BW to the minimum support pack.
Install R/3 Plug-ins PI 2004.1
Run PREPARE on BW
Dbase upgrades (Database & SAP Kernel upgrade)
System Upgrade
Summary Task List
This list contains all the basic tasks that we performed. A more detailed check list is shown below for each of the headings.
#Support Pack Hike
We moved only to the minimum acceptable SP as this seemed most likely to avoid any problems with the 3.0B system prior to the upgrade.
Apply OSS 780710 & 485741
Run Baseline for Regression tests
Full Backup
Apply SP's (we were at SP18, going to SP20)
SPAU list review
Regression testing
#Plug-in installation
Apply SAP note 684844
Import and patch Basis plugin PI 2004.1
SPAU list review
#PREPARE process
BW Pre-Prepare tasks
BW - Inconsistent Data fix
Run PREPARE
Review of results
Any showstoppers from PREPARE?
#Dbase upgrades
Database Upgrade (FixPak)
SAP Kernel Upgrade
#System Upgrade
Reverse Transport of Queries
Reconnect DAB & SAB to AAE
Run Baseline for Regression tests
Full Backup
Run the Upgrade
SPAU list review
Regression testing
Lessons Learnt
Testing is all! We picked up on a lot of issues, but would have picked up more if we'd had a full copy of our production environment to test over.
Our approach of doing a full upgrade on each environment before moving to the next one paid dividends in giving us experience of issues and timescales.
Write everything down as you go, so that by the time you get to upgrading production you've got a complete list of what to do.
We succeeded because we had people on our team who had enough experience in Basis and BW to be able to troubleshoot issues and not just read the manual.
The SAP upgrade guide is pretty good, if you can understand what they're on about...
Remember the users! The fact that the loads have been successful doesn't count for anything unless the users can see the data in Excel! There's a tendency to get caught up in the technology and forget that it's all just a means to an end.
Issues & Fixes
I've listed the main issues that we encountered. I have not listed the various issues where Transfer rules became Inactive, DataSources needed replication or we had to reinstall some minor Business Content.
Unfixed Issues
We could not fix these issues, seems like SP 13 will help.
Can't delete individual requests from ODS
After PREPARE had been run, if we had a load failure and needed to set an ODS request to Red and delete it, we found that we could not. We raised an OSS with SAP but although they tried hard we couldn't get round it. We reset the PREPARE and still the issue persisted. Ultimately we just lived with the problem for a week until we upgraded production.
Error when trying to save query to itself
Any query with a re-usable structure cannot be saved (more than once!).
OSS 975510 fixed the issue in Dev and Acc, but NOT in Production! SP 13 may solve this once it's released.
Warning message when running some queries post-upgrade
Time of calculation Before Aggregation is obsolete. Not a big issue so we haven't fixed this one yet!
Process Chain Scheduling Timing error
Process chains get scheduled for the NEXT day sometimes and has to be manually reset.
See OSS 1016317. Implement SP13 to fix this. We will live with it for now.
Fixed Issues
Duplicate Fiscal Period values in Query
If you open up a drop down box ("Select Filter Value") for Fiscal Year/Period to filter your query, you are presented with duplicate entries for Month & Year.
Due to Fiscal Year Period InfoObject taking data from Master Data not InfoProvider. Thus it picks up all available periods not just Z2.
Auto-Emails being delayed
Emails coming from BW from process chains are delayed 2 hours on BW before being released
Due to the user IDs that send these emails (e.g. ALEREMOTE) being registered in a different timezone (i.e. CET) from the BW system (i.e. GMT).
Pgm_Not_Found short dump
Whenever a query is run via RRMX or RSRT
Call transaction RS_PERS_ACTIVATE to Activate History and Personalisation
Characteristics not found
When running a query the warning message Characteristic does not exist is displayed for the following: 0TCAACTVT, 0TCAIPROV, 0TCAVALID
We activated the three characteristics listed and the warnings stopped. NO need to make them authorisation-relevant at this stage.(also did 0TCAKYFNM)
System generated Z pgms have disappeared
Post-upgrade the system Z-pgms ceased to exist
Discovered in Development, so we compared with pre-upgrade Production and then recreated them or copied them from Production.
Conversion issues with some Infoobjects
Data fails to Activate in the ODS targets
For the InfoObjects in question, set the flag so as not to convert the Internal values for these infoobjects
InfoObject has Conversion routine that fails, causing load to fail
The routine prefixes numeric PO Numbers with 0s. SD loads were failing as it was not able to convert the numbers. Presumably the cause of the failure was the running of the Pre-Prepare RSMCNVEXIT pgm.
Check the Tick box in the Update rule to do the conversion prior to loading rather than the other way round.
Requests fail to Activate on numeric data
Request loads OK (different from above issue) but fails to Activate
Forced conversion within the update rules using Alpha routine. Deleted Request and reloaded from PSA.
Database views missing after pre-PREPARE work
Views got deleted from database, although not from data dictionary
Recreated the views in the database using SE14.
Workbook role assignations lost
We lost a few thousand workbook assignments when we transported the role they were attached to into Production
The workbooks did not exist in Development, thus they all went AWOL. We wrote an ABAP program to re-assign them in production
Regression Testing Process
Introduction
We were limited to what we could do here. We didn't have a sandbox environment available. Nor did we have the opportunity to have a replica of our production data to test with, due to lack of disk space in Acceptance and lack of sufficient Basis resource.
Set up
We manually replicated our production process chains into test. We didn't have any legacy InfoPackages to worry about. We asked our super-users for a list of their "Top 10" most important queries and did a reverse transport of the queries from Production back into test (as we do not generally have a dev/acc/prodn process for managing queries, and they are mostly created solely in prodn). We made sure every application was represented. In retrospect we should have done some Workbooks as well, although that didn't give us any problems.
Actions
Prior to the various changes we loaded data via the Process chains and ran the example queries to give ourselves a baseline of data to test against. After the change we ran the same queries again and compared the results against the baseline. We tried to keep R/3 test environments as static as possible during this, although it wasn't always the case & we often had to explain away small changes in the results. After upgrading BW Development we connected it to Acceptance R/3, so that we had pre-upgrade (BW Acceptance) and post-upgrade (BW Development) both taking data from the same place so we could compare and contrast results on both BW systems. We did the same thing once BW Acceptance had been upgrading by connecting it (carefully!) to Production R/3. To get round the lack of disk space we tested by Application and deleted the data once that Application had been signed off. Once we got to System test we involved super-users to sign off some of the testing.
Security
We chose to implement the new security schema rather than sticking with the old one. With a relatively small number of users we felt we could get away with this; if it all went wrong, we would just temporarily give users a higher-level role than they needed. Our security roles are not complex: we have end-user, power-user and InfoProvider roles for each BW application, together with some common default roles for all. In the event we simply modified the default "Reports" role that all our users are assigned, transported it, and it all went smoothly. Apart from the fact that everyone's workbooks are assigned to this role, so we "lost" them all!
Transport Freeze
Once you've upgraded Development you've lost your transport path. We planned around this as best we could and when absolutely necessary, developed directly in Acceptance or Production, applying those changes back to Development once the project was complete. Depending on what other BW projects you have running this may or may not cause you pain!
Web Applications
Dashboards
We had various dashboards designed in Web Application Designer. All of these continued to function on the upgraded system, although various formatting changes occurred, e.g. bar graphs changed to line graphs, text formats on axes changed, etc. SAP provides an upgrade path for migrating your web applications by running various functions, but we took the view that we would simply re-format our dashboards manually, as we didn't have many to do. Also, the external IGS (see below) powered all our environments and needs upgrading separately as part of the SAP method, so we couldn't have tested the SAP path in Development without risking Production. Sticking with manual modifications was a lower-risk approach for us. We did find we had to re-activate some templates from BC to get some of the reports to continue working.
Internet Graphics Server (IGS)
We had an external IGS server with v3.0B. Post-upgrade, the IGS becomes part of the internal architecture of BW and the external server is redundant. We found no issues with this; after the upgrade BW simply stops using the external IGS and no separate configuration was needed.
Detailed Task Lists
Support Pack Hike (detail)
Apply OSS 780710 & 485741
Communicate outage to users
Warning msg on screen
Stop the jobs which extract data into delta queues
Clear the delta queues by loading into BW
Check RSA7 in PAE that delta queues are now empty
Run Baseline for Regression tests
Stop Delta queues
Lock Out users
Full Backup
Apply SP's
Upgrade the SAP kernel from 620 to 640
SPAU list review
Apply OSS Notes 768007 & 861890
Unlock Test users (inc. RFC id's on R/3)
Regression testing
Regression sign-off
Remove warning msg
Unlock users
User communication
Plug-in installation (detail)
Communicate outage to users
Warning msg on screen
Apply SAP note 684844
Lock out users
Full Backup
Empty CRM queues
Import and patch Basis plugin PI 2004.1
SPAU list review
Apply OSS 853130
Switch back on flag in TBE11 (app. BC-MID)
Remove warning msg
Unlock users
User communication
Dbase upgrades
Dbase upgrades (detail)
Communicate outage to users
Warning msg on screen
Run Baseline for Regression tests
Stop the Data extract jobs
Full Backup
Lock Out users
Apply FixPak 13SAP to the DB2 database
Upgrade the SAP kernel from 620 to 640
Apply OSS 725746 - prevents RSRV short dump
Unlock Test users (inc. RFC id's on R/3)
Regression testing
Regression sign-off
Remove warning msg
Unlock users
User communication
PREPARE Process (detail)
Pre-PREPARE Process
RSDG_ODSO_ACTIVATE
Repair Info objects and recreate the views
Communicate outage to users
Warning msg on screen
Run Baseline for Regression tests
Stop the Data extract jobs
Lock Out users
Full Backup
Run RSMDCNVEXIT Using BSSUPPORT ID
If the conversion process runs long, delay the regular backup
Re-run Baselines and sign off
If the conversion process fails, repeat the steps on Sunday after the regular backup
BW work
Back up customer-specific entries in EDIFCT (note 865142)
Activate all ODS objects
Execute report RSUPGRCHECK with flag "ODS objects" (note 861890)
Check Inconsistent InfoObjects - Upgr Guide 4.4; OSS 46272; Convert Data Classes of InfoCubes - Upgr Guide 4.3
Execute report SAP_FACTVIEWS_RECREATE (note 563201)
Make sure Delta queues are empty (RSA7)
Basis Work
Full Backup of BW
Lock users
Unlock RFC id's and designated test users
Confirm backup complete OK
Apply OSS 447341 Convert Inconsistent Characteristic Values - Upgr Guide 4.5
Confirm OK to PREPARE
Run PREPARE
Review of results
System Upgrade (detail)
Communicate outage to users
Process errors from PREPARE
Check disk space availability for PAB
Warning msg on screen
Reverse Transport Queries from PAB to SAB & DAB
Change BW to R/3 connection
Get backup put on hold, for Ops to release later
Ensure Saturday night backup is cancelled
Final run of PREPARE
Run Baseline for Regression tests
Clear Delta Queues
Delete Local Transports
Remove process chains from schedule
Lock Out users (with some exceptions)
Confirm to Ops and Angie that we're ready
Full Backup
Incremental backup of Unix files
UPGRADE "START"
Back up kernel
Unpack latest 700 kernel to upgrade directory
Check ids required for upgrade are unlocked
Check no outstanding updates
Turn off DB2 archiving
Open up the client for changes
Stop saposcol, & delete from exe directory
Run the Upgrade
Execute the saproot.sh script
Perform the database-specific actions
Perform follow-up activities for the SAP kernel
Reimport additional programs
Import Support Packages
Call transaction SGEN to generate ABAP loads
Transport Management System (TMS)
Handover to BW Team
SPAU list review
Apply OSS Notes from SPAU
Process Function Module XXL_FULL_API (part of SPAU)
Restore table EDIFCT (if required)
Transport Fixes from our BW Issue list
Convert the chart settings - manually
Perform activities in the Authorization Area
Activate hierarchy versions
Update the where-used list
Execute the conversion program for the product master
Transport New roles to PAB
Unlock selected users
Regression testing
Regression sign-off
Go / No Go decision
Restore if required
Remove warning msg
Tell Ops that the CR is now complete
Lock down PAB
Unlock users
User communication
Perform the follow-up activities for SAP Solution Manager
Reschedule background jobs
Reschedule weekly backup on Ctrl-M
Drinks all round.
Hope this helps.