Guidelines to reduce downtime during an upgrade from R/3 to ECC 6

Hi all,
We implemented SAP R/3 4.6 in 2005 and are now planning to upgrade our system to ECC 6 Unicode.
We plan to upgrade our production and development systems to SAP
ERP 6.0 EHP4. As part of this upgrade, we set up a test system with the
following details:
Sun Fire 480R (UltraSPARC III+ 1.05 GHz x 4 processors)
Memory: 16 GB; Storage: Sun CSM 200 SATA HDD
SAP R/3 4.71, single code page 1100, Oracle 9.2.0.8.0, Solaris 9
Test system with 1.2 TB of data copied from the production system.
We upgraded the test system using the CU&UC method (Upgrade Guide SAP ERP 6.0 EHP4 SR1 ABAP). The full upgrade took 2 days of downtime, and the Unicode conversion took 8 days.
Requirement:
Upgrade the current systems to SAP ERP 6.0 EHP4 Unicode without any
additional hardware.
As part of this, the cluster has to be upgraded to Sun Cluster 3.2 and
the operating system to Solaris 10 (5.10).
We have the following questions:
1. With only 1.3 TB of data, we feel a downtime of 10 days is not
realistic. Although we did not deviate from the above-mentioned upgrade
guide, what could be the reasons for this 10-day downtime, and what are
the contributing factors? Kindly suggest ways to reduce the downtime.
We have referred to Notes 784118, 855772, 952514, 936441, 954268, and
864861, but we are still unable to pinpoint the cause. A breakdown of
this downtime is given below for reference.
2. Are there any alternative approved upgrade methods to reduce
downtime in our present setup? (Details of the present setup are given below.)
3. How do we calculate the downtime for our setup with reference to
SAP Note 857081?
Current production system details:
System: Sun Fire V890
Processor: 1.5 GHz UltraSPARC IV+ x 8
Memory: 64 GB
Storage: Sun StorageTek 6180, FC-AL HDDs (16 x 450 GB, 15,000 rpm),
4 Gb/sec, 4 GB data cache (storage connected to the server via FC cable)
SAP: R/3 4.71, single code page 1100
Database: Oracle 9.2.0.8; data size is around 1.3 TB
OS: Solaris 9 (5.9)
Cluster: Sun Cluster 3.1
Network: 1 Gb Ethernet (1000BASE-SX)
Architecture: application and database server on the same system
The SAP development system has the same specification as above, with
187 GB of data. The PRD and DEV systems are clustered; PRD fails over
to the DEV system.
Time taken for the test upgrade (brief details):
1. Kernel upgrade 6.20 to 6.40 (2 hours)
2. Oracle upgrade to 10.2.0.4.0 (3.5 hours)
3. SAP upgrade preparation, applying Support Packages (18 hours)
4. Downtime phase plus pre-processing and finalization phases (20 hours)
5. Unicode conversion: taking the DB export of 109 GB (almost 10% of
total DB size) (98 hours)
6. Installation of the Unicode system using the export (95 hours)
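(For reference: steps 1-4 add up to 2 + 3.5 + 18 + 20 = 43.5 hours, the quoted 2 days of upgrade downtime, while steps 5 and 6 add up to 98 + 95 = 193 hours, roughly 8 days; together that gives the total of about 10 days.)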
Looking forward to hearing from you.
Best Regards,
Rajeesh K.P

Hi,
I am very sorry for the late reply.
To recap, the time taken for the test upgrade (brief details):
1. Kernel upgrade 6.20 to 6.40 (2 hours)
2. Oracle upgrade to 10.2.0.4.0 (3.5 hours)
3. SAP upgrade preparation, applying Support Packages (18 hours)
4. Downtime phase plus pre-processing and finalization phases (20 hours)
5. Unicode conversion: taking the DB export of 109 GB (almost 10% of
total DB size) (98 hours)
6. Installation of the Unicode system using the export (95 hours)
The 2 days of downtime is the sum of the time used for the first 4 activities.
As you suggested, we can take these as 4 separate downtime windows.
The Unicode conversion DB export of 109 GB (almost 10% of the total DB size) takes a continuous downtime
of 98 hours, about 4 days (activity 5), and the installation of the Unicode system using that export takes 95 hours, again about 4 days
(activity 6).
I think we have to do activities 5 and 6 in a single stretch without users, so together they need a continuous downtime of roughly 8 days.
With this in mind, I have a few questions for you to clarify the problem a little more.
Q1: Did you use more parallel jobs during downtime than the default? I mean real downtime, where the SAP system is taken down.
No, we used the default number (3) of parallel jobs during downtime.
Q2: Did the system allow you to use ICNV (incremental conversion) for some tables?
Yes, we can use ICNV.
Q3: Can you separate the Unicode conversion onto a different weekend, or is doing both together a must for some business reason?
Yes, we can separate the Unicode conversion onto a different weekend; it will not affect our business.
Q4: Did you save the post-upgrade report so you can see which phases were the most time-consuming?
I am not clear about this question. We made an activity chart with durations based on the test upgrade. How can we save this report from the system?
Looking forward to hearing from you.
Best Regards,
Rajeesh K.P

Similar Messages

  • Minimizing Downtime during Upgrade

    Hi All,
    We are in the process of upgrading from R/3 4.7 EE 2.00 to ECC 6.0 SR3.
    The details of the legacy and new landscapes are as follows:
    Legacy Landscape
    OS       Solaris 9
    DB       Oracle 9.2.0.4
    SAP     R/3 4.7 EE 2.00
    New Landscape
    OS       Solaris 10
    DB       Oracle 10g
    SAP     ECC 6.0 SR3
    We have taken an offline backup of the live PRD and restored it onto a standby server. We have also upgraded Oracle 9i to 10g, which took around 20 hours.
    My question: we want to minimize the downtime by directly installing Oracle 10g with R/3 4.7 EE, then taking an offline/online backup of the live PRD system and restoring it into the Oracle 10g environment. By doing this we can reduce the downtime by approximately 15 hours.
    Is this scenario possible? Please give your suggestions based on your experience.
    Thanks,
    Kshitiz Goyal

    > My question: we want to minimize the downtime by directly installing Oracle 10g with R/3 4.7 EE, then taking an offline/online backup of the live PRD system and restoring it into the Oracle 10g environment. By doing this we can reduce the downtime by approximately 15 hours.
    Yes - this is possible.
    I suggest you get the newest installation CD set for 4.7, which supports Oracle 10g (using sapinst, not R3SETUP):
    See note "969519 - 6.20/6.40 Patch Collection Installation : Oracle/UNIX", Section 3:
    3.) Retroactive Release of Oracle 10.2
    Markus
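    One caveat to add here, which is my reading rather than part of the reply above: a backup taken from the 9.2.0.8 database is still a 9i-format database after it is restored, so before SAP can start it has to be brought up under the 10g home with the upgrade scripts. A minimal SQL*Plus sketch, assuming the 10.2 software is already installed and the restored database's environment (ORACLE_HOME, ORACLE_SID, init/password files) has been prepared:
    -- Run from the new 10.2 ORACLE_HOME after restoring the 9.2 backup.
    STARTUP UPGRADE;            -- open the 9.2 datafiles in upgrade mode
    SPOOL catupgrd.log
    @?/rdbms/admin/catupgrd.sql -- 9.2 -> 10.2 data dictionary upgrade
    SPOOL OFF
    SHUTDOWN IMMEDIATE;
    STARTUP;
    @?/rdbms/admin/utlrp.sql    -- recompile invalid objects
    The saving therefore comes from doing the software installation and restore preparation outside the downtime window; the dictionary upgrade itself still runs during downtime.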

  • XPRA to minimize downtime during upgrade

    Hello Experts,
    I am looking for information about XPRAs and system downtime during an upgrade. Is it possible to modify the XPRAs that have to run during the upgrade in order to reduce the downtime, and if so, how? Please also give me other suggestions for reducing the downtime associated with XPRAs.
    Thanks...
    Viral

    Hi Viral,
    When you run PREPARE, it reports how many XPRA objects there are for conversion. You can work on the XPRA objects during the uptime period.
    Regarding parallel processing: when you run the upgrade, it asks for the number of background processes and the number of parallel processes. You can choose a larger value to reduce the downtime, provided you have sufficient hardware.
    Regards
    Ashok Dalai

  • Best practices to reduce downtime for Database releases(rolling changes)

    Hi,
    What are the best practices for reducing downtime for database releases on 10.2.0.3? Which DB changes can be made rolling and which can't?
    Thanks in advance.
    Regards,
    RJiv.

    I would be very dubious about any sort of universal "best practices" here. Realistically, your practices need to be tailored to the application and the environment.
    You can invest a lot of time, energy, and resources into minimizing downtime if that is the only goal. But you'll generally pay for that goal in terms of developer and admin time and effort, environmental complexity, etc. And you generally need to architect your application with rolling upgrades in mind, which necessitates potentially large amounts of redesign to existing applications. It may be perfectly acceptable to go full-bore into minimizing downtime if you are running Amazon.com and any downtime is unacceptable. Most organizations, however, need to balance downtime against other needs.
    For example, you could radically minimize downtime by having a second active database, configuring Streams to replicate changes between the two master databases, and configuring the middle tier environment so that you can point different middle tier servers at one or the other database. When you want to upgrade, you point all the middle tier servers at database A except one that lives on a special URL. You upgrade database B (making sure to deal with the Streams replication environment properly, depending on requirements) and do the smoke test against the special URL. When you determine that everything works, you point all the app servers at B (with the Streams replication process configured to replicate changes from the old data model to the new data model), upgrade A, repeat the smoke test, and then return the middle tier environment to its normal state of balancing between the databases.
    This lets you upgrade with 0 downtime. But you've got to license another primary database. And configure Streams. And write the replication code to propagate the changes on B during the time you're smoke testing A. And you need the middle tier infrastructure in place. And you're obviously going to be involving more admins than you would for a simpler deploy where you take things down, reboot, and bring things up. The test plan becomes more complicated as well since you need to practice this sort of thing in lower environments.
    Justin

  • Options for fast recovery - Reducing Downtime

    OS: OEL 5.7
    Database : 11.2.0.3-EE (non-RAC)
    I'm looking for options, using only Oracle features, to reduce downtime during scheduled outages for application changes and upgrades.
    In this particular case I have only one application installed on this database (ERP).
    We already know the default full backup and restore operations, but I'm looking for other options that reduce downtime.
    I need a rollback plan with a short turnaround.
    Any help is welcome.

    Hi,
    Data Guard is the best option for short downtime, but you will need double the storage space.
    Two things you must consider:
    * What is the database size?
    * How much data will be updated/deleted/inserted during this application change?
    Choose one of these options:
    * Data Guard
    The best option, and really fast: near-zero downtime (as mentioned by mseberg with a nice example).
    * Flashback Database with RESTORE POINT
    Oracle Flashback Database and restore points are related data protection features that enable you to rewind data back in time to correct problems caused by logical data corruption or user errors within a designated time window. These features provide a more efficient alternative to point-in-time recovery and do not require a backup of the database to be restored first.
    Restore points provide capabilities related to Flashback Database and other media recovery operations. In particular, a guaranteed restore point created at a system change number (SCN) ensures that you can use Flashback Database to rewind the database to this SCN. You can use restore points and Flashback Database independently or together.
    You will need to open the database with RESETLOGS after the Flashback Database operation.
    * Guaranteed restore point (with Flashback Database disabled)
    Like a normal restore point, a guaranteed restore point serves as an alias for an SCN in recovery operations. A principal difference is that guaranteed restore points never age out of the control file and must be explicitly dropped. In general, you can use a guaranteed restore point as an alias for an SCN with any command that works with a normal restore point.
    A guaranteed restore point ensures that you can use Flashback Database to rewind a database to its state at the restore point SCN, even if the generation of flashback logs is disabled. Note that flashing back to a restore point still ends with opening the database RESETLOGS. (A SQL sketch of this option follows the list.)
    * Edition-based redefinition for online application maintenance and upgrades
    Edition-based redefinition enables you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating downtime. This is accomplished by changing (redefining) database objects in a private environment known as an edition.
    To upgrade an application while it is in use, you copy the database objects that comprise the application and redefine the copied objects in isolation. Your changes do not affect users of the application; they continue to run the unchanged application. When you are sure that your changes are correct, you make the upgraded application available to all users.
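    To make the last two options concrete, here is a minimal SQL sketch. The restore point name, edition name, and procedure are illustrative placeholders, not objects from any real system.
    -- Before the application change: create a guaranteed restore point
    -- (works even with flashback logging disabled).
    CREATE RESTORE POINT before_change GUARANTEE FLASHBACK DATABASE;
    -- Rollback plan: rewind the whole database to that point.
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    FLASHBACK DATABASE TO RESTORE POINT before_change;
    ALTER DATABASE OPEN RESETLOGS;
    -- Once the release is accepted, drop the restore point so its
    -- flashback logs can be reclaimed.
    DROP RESTORE POINT before_change;
    And for edition-based redefinition (11gR2 onwards, assuming the schema has already been enabled with ALTER USER ... ENABLE EDITIONS):
    -- Redefine a procedure in a private edition while users keep
    -- running the unchanged version in ora$base.
    CREATE EDITION release_2 AS CHILD OF ora$base;
    ALTER SESSION SET EDITION = release_2;
    CREATE OR REPLACE PROCEDURE calc_price AS
    BEGIN
      NULL;  -- new implementation goes here
    END;
    /
    -- After testing, expose the new edition to all new sessions.
    ALTER DATABASE DEFAULT EDITION = release_2;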

  • Downtime for upgrade from SP14 to SP23 - PI 7.0

    Hello All,
    What is the estimated downtime for upgrading PI 7.0 from SP14 to SP23? We only have one application server in the environment, and the decision to upgrade will be made based on the downtime information. I appreciate any response. Thanks, Rahul.

    Hi Rahul,
    We have recently upgraded our PI 7.0 system from SP 21 to SP 21.
    The Basis team requested 19 hours of downtime, but the actual downtime required was less than 10 hours.
    During the upgrade, there is a downtime-minimization approach that lets you perform a few activities online without bringing down the system.
    To follow this approach, go to transaction SAINT --> Extras --> Settings --> Import Queue --> Import Mode.
    There is an option called "Downtime-minimized"; check it to reduce the downtime during the upgrade.
    Regards,
    Subbu

  • Disk space not enough during upgrade of CUCM 9.1(2)SU1 to 9.1(2)SU2a

    Hi,
    I am facing an issue while upgrading my current CUCM 9.1(2)SU1 to 9.1(2)SU2a. It shows this error message: "There is not enough disk space in the common partition to perform the upgrade, for steps to resolve this condition please refer to CUCM 9.1(1) release notes or view defect CSCuc63312 in bug toolkit on Cisco.com". I checked the bug ID and found 2 ways to handle it:
    1. Reduce the amount of traces on the system, but I am not ready to do this.
    2. Install a COP file named ciscocm.free_common_space_v1.0.cop.sgn.
    If I select option 2, are there any risks? Does it mean only the inactive 9.1(2) version will be cleared from disk, or will both the active and inactive versions be cleared?
    I need some advice on this before I proceed.
    Thanks.

    Hi,
    As per the following link
    http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucm/rel_notes/9_1_1/CUCM_BK_R6F8DBD4_00_release-notes-for-cucm-91/CUCM_BK_R6F8DBD4_00_release-notes-for-cucm-91_chapter_011.html
    "You will not be able to switch to the previous version after the COP file is installed. For example, if you are upgrading from Cisco Unified Communications Manager 9.0(1) to Cisco Unified Communications Manager 9.1(1) and the previous version is Cisco Unified Communications Manager 8.6, the COP file clears the space by removing the 8.6 version data that resides in the common partition. So after you apply the COP file, you will not be able to switch to the 8.6 version."
    Additionally, regarding the first option: if you do not want to reduce the tracing levels, you can still delete some old traces using RTMT, which lets you delete the traces from the server while simultaneously transferring/downloading them to your PC, by checking the 'delete from server' option on the last page of the log file collection dialog.
    HTH
    Manish

  • Ways to reduce downtime for filling up setup tables

    Hi Experts,
    Can anyone tell me the step-by-step process to reduce downtime while filling the setup tables?
    I know that the setup tables can be filled by ranges of sales document numbers, but the further steps are not clear to me, especially loading the data to the PSA and then on to the ODS/cube.
    So please throw some light on this.
    Regards,
    Vaishnavi.

    Hi,
    You need to fill the setup tables in a 'no postings' period, in other words, when no transactions are being posted for that area in R/3; otherwise those records will not come to BW. Discuss this with the end users and decide. Weekends are a common choice for this activity.
    You can run the fill jobs after business hours or at night, when there are no transactions, or on weekends, so that no downtime is needed.
    Fill the setup tables with already-closed periods first and then fill again with the open periods. This will reduce the downtime.
    Initialize closed periods first, in which users won't enter data (for example 2006 or 2007); this initialization can be done while users are working. Then initialize the last period at night, on weekends, on holidays, etc.
    If you know which documents are in closed periods, and you are sure that these documents can no longer be changed, you can fill the setup tables only for these documents or only for these periods while continuing to post in open periods. You then initialize only for these intervals, delete the setup table, and only then fill the setup table with the rest of the documents. This procedure can drastically reduce the downtime of your system.
    However, there is a risk that user exits (and in LIS, formulas and conditions) can be used to retrieve documents that are in periods that are already 'closed'.
    One more thing to bear in mind: check whether any scheduled jobs are updating the transaction tables, as these would definitely cause data reconciliation issues.
    Try Early Delta Initialization
    With early delta initialization, you have the option of writing the data into the delta queue or into the delta tables for the application during the initialization request in the source system. This means that you are able to execute the initialization of the delta process (the init request), without having to stop the posting of data in the source system. The option of executing an early delta initialization is only available if the DataSource extractor called in the source system with this data request supports this.
    Extractors that support early delta initialization are delivered with Plug-Ins as of Plug-In (-A) 2002.1.
    You cannot run an initialization simulation together with an early delta initialization.
    Hope these links make early delta initialization clearer:
    http://help.sap.com/saphelp_nw04s/helpdata/en/80/1a65dce07211d2acb80000e829fbfe/frameset.htm
    http://www.allinterview.com/showanswers/2907.html
    http://sap.ittoolbox.com/groups/technical-functional/sap-bw/early-delta-initialization-459379
    http://books.google.co.in/books?id=qYtz7kEHegEC&pg=PA293&lpg=PA293&dq=early+delta&source=web&ots=AM1PtX6wcZ&sig=xKOF85Gb8UtszY44zt06K6R0n3M&hl=en#PPA290,M1
    EARLY DELTA
    Early delta Initialization
    How To… Minimize Downtime For Delta Initialization
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5d51aa90-0201-0010-749e-d6b993c7a0d6
    How To Minimize Effects of Planned Downtime (NW7.0)
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/901c5703-f197-2910-e290-a2851d1bf3bb
    Note 753654 - How can downtime be reduced for setup table update
    Note 602260 - Procedure for reconstructing data for BW
    Note 437672 - LBWE: Performance for setup of extract structures
    Note 436393 - Performance improvement for filling the setup tables
    Note 739863
    Re: How to Setup and INIT from 2LIS_13_VDITM with millions of records
    How downtime can be reduced for setup table update.
    Fill setup tables without locking users
    Initialization Setup Tables.
    Hope this helps.
    Thanks,
    JituK

  • Need to convince manager on posting downtime during build of setup tables

    Can anyone suggest some pointers for writing a business case for downtime during the build-up of setup tables?
    This is for the Sales application component.

    Delta loads, especially for LO extraction, are governed by the V3 jobs scheduled in the source system.
    These V3 jobs post into the delta queue even while there is activity in the R/3 system.
    To ensure that no transactions are missed, downtime is taken in the source system so that no documents are changed.
    If you want to avoid downtime, you need to know which documents changed during the period the extractors were being refreshed, in order to do a full repair request.
    Typical procedure involves :
    1. Take downtime.
    2. Clear delta queues into SAP BI
    3. Deschedule V3 jobs
    4. Move enhancements into the production R/3 system.
    5. Do an init without data transfer into SAP BI to create the delta queues
    6. Schedule the V3 collection jobs
    7. Open the system for users to post/change documents, etc.
    8. Continue deltas into SAP BI
    If you want to avoid downtime then
    1. Clear delta queues into SAP BI
    2. Deschedule V3 jobs
    3. Move enhancements into R/3 production
    4. Init without data transfer
    5. Schedule V3 jobs
    6. Continue deltas
    7. Perform a full repair request into SAP BI for the documents that changed between steps 1 and 5.
    I have given the steps for an enhancement, but any activity involving changes to V3 extraction, or upgrade activity in SAP BI, would fall into the same category.

  • Anybody know the estimated downtime for upgrading a 10 TB 9i DB to 10gR2?

    Does anybody know the estimated downtime for upgrading a 10 TB 9i database to 10gR2?

    It depends on the chosen method.
    It depends on whether you move the database or stay on the same server box.
    It depends on the processors you have.
    In fact, the size of the database is not directly relevant during an upgrade; what matters more is the number of objects.
    If I told you one hour, that would not be much help. A manual upgrade may be very quick, but estimating the time is not an easy task. Test on your test server and estimate for your production system from that.
    Nicolas.
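    Since the upgrade time scales with the number of dictionary objects rather than the raw data volume, one rough way to size the effort before the test run (a heuristic, not an official formula) is to count the objects the upgrade and recompile phases will have to touch:
    -- Rough workload indicators for the dictionary upgrade (heuristic only).
    SELECT COUNT(*) AS total_objects FROM dba_objects;
    -- Invalid objects lengthen the post-upgrade recompile (utlrp.sql) phase.
    SELECT COUNT(*) AS invalid_objects
    FROM   dba_objects
    WHERE  status = 'INVALID';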

  • Best practice to reduce downtime for a full load in the production system

    Hi guys,
    We have options like 'initialize without data transfer' and 'initialization with data transfer'.
    To reduce the production system downtime for the setup-table load, I would first trigger an InfoPackage for initialization without data transfer so that the delta pointer is set on the table; from that point onwards, any new record is captured as a delta record. I would then trigger a delta InfoPackage to bring the delta records into BW, and once the delta is successful, trigger an InfoPackage for a repair full request to get all the historical data, so that the downtime of the production system is reduced.
    Please let me know your thoughts and correct me if I am wrong.
    Please also let me know about the 'early delta initialization' option.
    Kind regards,
    Hari

    Hi,
    You have some incorrect information.
    An InfoPackage just loads data from the setup tables to the PSA.
    The setup tables need to be filled manually using the related transaction codes.
    I am assuming you are using an LO DataSource.
    In that case a source-system lock is mandatory; otherwise you need to go with the early delta initialization option.
    Early delta initialization is useful for loading data into BW without downtime at the source.
    It sets the delta pointer and, at the same time, loads according to your settings (init with or without data transfer).
    If the source system cannot be locked, given the client's needs, then it is better to go with early delta initialization.
    Thanks

  • Error while running autoconfig on apps tier during upgrade

    Hi,
    Error while running AutoConfig on the apps tier during an upgrade from 11.5.9 to 11.5.10.2.
    Below is the error message:
    Enter the APPS user password :
    The log file for this session is located at: /u01/app/tinst/tinstappl/admin/TINST_dba5/log/05031134/adconfig.log
    AutoConfig is configuring the Applications environment...
    AutoConfig will consider the custom templates if present.
    Using APPL_TOP location : /u01/app/tinst/tinstappl
    Classpath : /u01/app/tinst/tinstcomn/util/java/1.6/jdk1.6.0_18/jre/lib/rt.jar:/u01/app/tinst/tinstcomn/util/java/1.6/jdk1.6.0_18/lib/dt.jar:/u01/app/tinst/tinstcomn/util/java/1.6/jdk1.6.0_18/lib/tools.jar:/u01/app/tinst/tinstcomn/java/appsborg2.zip:/u01/app/tinst/tinstcomn/java
    Using Context file : /u01/app/tinst/tinstappl/admin/TINST_dba5.xml
    Context Value Management will now update the Context file
    Exception in thread "main" java.lang.NoClassDefFoundError: oracle/jdbc/OracleDriver
    at oracle.apps.ad.util.DBUtil.registerDriver(DBUtil.java:153)
    at oracle.apps.ad.util.DBUtil.<init>(DBUtil.java:102)
    at oracle.apps.ad.tools.configuration.FileSysDBCtxMerge.getDBConnection(FileSysDBCtxMerge.java:759)
    at oracle.apps.ad.tools.configuration.FileSysDBCtxMerge.initializeParams(FileSysDBCtxMerge.java:147)
    at oracle.apps.ad.tools.configuration.FileSysDBCtxMerge.setParams(FileSysDBCtxMerge.java:128)
    at oracle.apps.ad.context.CtxValueMgt.mergeCustomInFiles(CtxValueMgt.java:1762)
    at oracle.apps.ad.context.CtxValueMgt.processCtxFile(CtxValueMgt.java:1579)
    at oracle.apps.ad.context.CtxValueMgt.main(CtxValueMgt.java:709)
    Caused by: java.lang.ClassNotFoundException: oracle.jdbc.OracleDriver
    at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
    ... 8 more
    ERROR: Context Value Management Failed.
    Terminate.
    The logfile for this session is located at:
    /u01/app/tinst/tinstappl/admin/TINST_dba5/log/05031134/adconfig.log
    Please let me know your suggestions to resolve this issue.
    Regards,
    Sreenivasulu.

    Hi,
    The DB tier AutoConfig completed successfully.
    We have already checked the above-mentioned Metalink IDs, but the issue is still the same.
    Can you please advise any other solutions?
    Regards,
    Sreenivasulu.

  • Error during Upgrade from 4.6c to ECC 6.0

    Hi All,
    We are facing an error when upgrading from 4.6C to ECC 6.0, on table COEP: a runtime object inconsistency. We found that the ERP upgrade has created new extra fields in the table. In the log file the error is reported as 'Duplicate field name', but we are not able to find a duplicate field name in the table. Please help as soon as possible; the upgrade process is stuck.
    Regards
    Anil Kumar K

    Hi Anil,
    Is this issue fixed? Can I know how you fixed it?
    I replied to your message 'Re: How to adopt the index changes during upgrade'.
    Thanks,
    Somar

  • Error in transaction KEA0 during upgrade

    Dear All,
    While upgrading the customer system from 4.7 to ECC 6.0, we found an error in transaction KEA0 while trying to activate the cross-client part of an operating concern.
    The activation log says that there are syntax errors in the generated subroutine pool RK2O0100.
    I corrected the error and activated the code, but when I try to execute it again I get the same error, and when I go to the subroutine pool I find that the changes have been undone and the previous version is active again. This happens repeatedly: I correct and activate the code and it reports no errors, but once I log off and log in again, the same error returns.
    Help will be appreciated.
    Thanks in advance,
    Abhi

    Dear Raymond,
    Thanks for your help.
    When I try to run RKEAGENV, it shows an error message with a STOP button saying that the field WTGBTR is not contained in the nametab for table CE1E_B1, and when I run RKEAGENF it gives a short dump saying that the entry CE10100 is not allowed for TTABS.
    Please help.

  • Encountering a problem during upgrade of SAP R/3 4.7 to ECC 6

    Hi experts,
    I encountered a problem during an upgrade of SAP R/3 4.7 to ECC 6,
    in the START_SHDI_FIRST step.
    Here is the error message:
    MSSCRESPDEFN.LOG
    Msg 2812, Level 16, State 62, Server Jack, Line 6
    Could not find stored procedure 'dbo.sap_upg_getrelease'.
    setuser
    I checked the MS SQL database and found that I only have 'erp.sap_upg_getrelease'; there is no dbo-owned copy.
    Can anyone tell me how to solve this problem?
    Thanks.
    Regards
    Jack Lee

    Is the kernel for the shadow instance the correct one?
    Regards,
    Alfredo.
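    Going by the error text alone (this is a reading of the symptom, not a confirmed SAP procedure), the stored procedure exists under the erp schema while the upgrade tool looks for it under dbo. A hedged sketch of transferring ownership on SQL Server; verify with SAP support before changing upgrade objects:
    -- Illustrative only: move erp.sap_upg_getrelease to the dbo schema.
    EXEC sp_changeobjectowner 'erp.sap_upg_getrelease', 'dbo';
    -- On SQL Server 2005 and later, the supported equivalent is:
    -- ALTER SCHEMA dbo TRANSFER erp.sap_upg_getrelease;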
