Best practice to reduce downtime for full load in Production system

Hi Guys ,
We have options like "Initialization without data transfer" and "Initialization with data transfer".
To reduce the downtime of the production system for the setup-table load, I will first trigger an InfoPackage for "Initialization without data transfer" so that the delta pointer is set on the table; from that point onwards any added record is captured as a delta record. I will then trigger a delta InfoPackage to get the delta records into BW. Once the delta is successful, I will trigger an InfoPackage for a repair full request to get all the historical data into the setup tables, so that the downtime of the production system is reduced.
Please let me know your thoughts and correct me if I am wrong.
Please also let me know about the "Early Delta Initialization" option.
Kind regards.
hari

Hi,
You have some incorrect information there.
InfoPackage - it just loads data from the setup tables into the PSA; it does not fill them.
Setup tables - these have to be filled manually using the related setup t-codes.
I am assuming you are using an LO DataSource.
In this case a source system lock is mandatory while the setup tables are filled; otherwise you need to go with the early delta init option.
Early delta init - it is useful for initializing the delta and loading data into BW without downtime at the source.
That means it sets the delta pointer and at the same time loads data according to your settings (init with or without data transfer).
If the source system cannot be locked as per the client's needs, then it is better to go with the early delta init option.
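To make it concrete, the classic (posting-lock) sequence, which is also described further down in this thread, looks roughly like this:
1. Lock postings/users in the source system and let the last delta run so that LBWQ and RSA7 are empty.
2. Delete the old setup tables (LBWG) and run the InfoPackage with "Initialize without data transfer" so the delta queue is active.
3. Fill the setup tables with the application's setup transaction (e.g. OLI9BW for billing).
4. Unlock the source system - from now on, changes flow into the delta queue.
5. Load the history from the setup tables with full repair InfoPackages, then schedule the regular delta loads.
With early delta init you can avoid the hard posting lock for the initialization itself.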
Thanks

Similar Messages

  • Best practices to reduce downtime for Database releases(rolling changes)

    Hi,
    What are best practices to reduce downtime for database releases on 10.2.0.3? What DB changes can be rolling and what can't?
    Thanks in advance.
    Regards,
    RJiv.

    I would be very dubious about any sort of universal "best practices" here. Realistically, your practices need to be tailored to the application and the environment.
    You can invest a lot of time, energy, and resources into minimizing downtime if that is the only goal. But you'll generally pay for that goal in terms of developer and admin time and effort, environmental complexity, etc. And you generally need to architect your application with rolling upgrades in mind, which necessitates potentially large amounts of redesign to existing applications. It may be perfectly acceptable to go full-bore into minimizing downtime if you are running Amazon.com and any downtime is unacceptable. Most organizations, however, need to balance downtime against other needs.
    For example, you could radically minimize downtime by having a second active database, configuring Streams to replicate changes between the two master databases, and configuring the middle tier environment so that you can point different middle tier servers at one or the other database. When you want to upgrade, you point all the middle tier servers at database A, other than one that lives on a special URL. You upgrade database B (making sure to deal with the Streams replication environment properly, depending on requirements) and do the smoke test against the special URL. When you determine that everything works, you point all the app servers at B (with the Streams replication process configured to replicate changes from the old data model to the new data model), upgrade A, repeat the smoke test, and then return the middle tier environment to its normal state of balancing between the databases.
    This lets you upgrade with 0 downtime. But you've got to license another primary database. And configure Streams. And write the replication code to propagate the changes on B during the time you're smoke testing A. And you need the middle tier infrastructure in place. And you're obviously going to be involving more admins than you would for a simpler deploy where you take things down, reboot, and bring things up. The test plan becomes more complicated as well since you need to practice this sort of thing in lower environments.
    Justin

  • Best practice to define length for varchar field of table in sql server

    What is the best practice for defining the length of a varchar field in a table,
    for example a field like "Remarks By Person" - varchar(max) or varchar(4000)?
    Could it affect optimization in the future?
    Experts, please reply...
    Dilip Patil..

    Hi Dilip,
    Varchar(n | max) is variable-length, non-Unicode character data. n defines the string length and can be a value from 1 through 8,000. max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size is the actual length of the data entered + 2 bytes. We use varchar when the sizes of the column data entries vary considerably, and if the field's data size might exceed 8,000 bytes, we should use varchar(max).
    So the conclusion is, just like Uri said: whether to use varchar(max) or varchar(4000) depends on how many characters we are going to store.
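    To put rough numbers on it: a 500-character remark costs about 502 bytes on the page whether the column is varchar(4000) or varchar(max); the declared limit mainly matters once individual values can exceed 8,000 bytes.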
    The following document about varchar in SQL Server is for your reference:
    http://technet.microsoft.com/en-us/library/ms176089.aspx
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Best practice on Oracle VM for Sparc System

    Dear All,
    I want to test Oracle VM for SPARC, but I don't have a new-model server to test it on. What is the best practice for Oracle VM for SPARC?
    I have a Dell laptop with the following specs:
    - Intel® Core™ i7-2640M (2.8 GHz, 4 MB cache)
    - RAM: 8 GB DDR3
    - HDD: 750 GB
    - 1 GB AMD Radeon
    I want to install Oracle VM VirtualBox on my laptop and then install Oracle VM for SPARC inside VirtualBox - is that possible?
    Please kindly give advice,
    Thanks and regards,
    Heng

    Heng Horn wrote: "How about a desktop or workstation computer with a recent CPU that supports Oracle VM for SPARC?"
    Nope. The only place you find SPARC T4 processors is in Sun servers (and some Fujitsu servers, I think).

  • How to reduce downtime for setup table

    Scenario - According to system data, the setup table will normally take 5 days to fill, but the client agreed to a maximum of only 2 days of downtime. Users can change only the last 3 months' documents, not older ones. Filling 3 months of data into the setup table requires 1 day, so I have to manage the options accordingly.
    DataSource - 2LIS_13_VDITM -> DSO - ZBIllIG -> InfoCube
    I have to reduce the downtime for the setup table, so I am planning the following options -
    1. First run the InfoPackage for "Initialization without data transfer". Then start filling the setup table without blocking the users. If users change any document while the setup table is being filled, those changes will move to the delta queue. Once the setup table is filled, execute a full repair request and then the delta InfoPackage.
    2. Early delta initialization - no idea how to perform the steps.
    Please share your views with detailed steps.
    OLI*BW doesn't have a date range in the selection criteria, so I will manually find the documents for particular dates and use that document range.
    I have checked a lot of posts on SDN but am still expecting a final answer before going ahead in production.

    Hi ,
    Your requirement is the billing ODS and cube - a re-setup in the R/3 system and an initialization in the BW system.
    Before starting, find the previous data load volume and size.
    1. Go to LBWG with application = 13 to delete the setup tables (always schedule the job in background mode).
    2. Verify using t-code SE16 that there are NO records in the MC13VD0ITMSETUP table after the above delete job is complete.
    3. Suspend the process chain job in BW. This is to avoid it getting kicked off while the reload process is still in progress.
    4. Check LBWQ in the R/3 system for MCEX13 (unprocessed outbound queue records). This should be empty, as the last delta would have processed everything.
    5. Delete the init flag in BW.
    6. Check RSA7 in the R/3 system to verify that there is NO record for 2LIS_13_VDITM (to be done right before the setup job).
    7. Create a new InfoPackage for InfoSource 2LIS_13_VDITM with the 'Initialize without Data Transfer' option and execute the package. This re-establishes the delta processing flags in R/3 and BW for the billing transaction data load.
    8. Save the record count of table VBRP using SE16 right before the setup job.
    9. Schedule the billing data setup job OLI9BW in the R/3 system.
    10. After the billing setup job is complete in the R/3 system, get the record count of table VBRP again using SE16 and compare it with the count from step 8.
    Expected time in R/3: 5 to 7 hrs (setup jobs)
    Expected time for init and full load: 6 hrs
    ODS activation: 3 hrs
    Cube and aggregate fill: 8 hrs.
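    Adding these up, the end-to-end reload is roughly 22 to 24 hours, but note that in this classic approach only the R/3-side window (the setup jobs and the init) typically needs the posting stop; the full load, ODS activation and cube/aggregate fills can run while users are posting again, since deltas are already being collected by then.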
    Thanks,
    naidu.

  • Ways to reduce downtime for filling up setup table

    Hi Experts,
    Can anyone tell me the step-by-step process so that I can reduce the downtime for filling the setup tables?
    I know that setup tables can be filled by selecting on sales document numbers, but the further steps are not clear to me, especially the data loading up to the PSA and then to the ODS/cube.
    So please throw some light on this.
    Regards,
    Vaishnavi.

    Hi,
    You will need to fill the setup tables in a 'no postings' period, in other words when no transactions are posted for that area in R/3; otherwise those records will not come to BW. Discuss this with the end users and decide; weekends are a common choice for this activity.
    You can run the setup jobs after business hours so that there won't be any transactions, or at night, or on weekends, so that there is no need to take downtime.
    Fill the setup tables with already-closed periods first and then fill again with the open periods. This will reduce the downtime.
    Initialize the closed periods first, in which users won't enter data (for example 2006 or 2007); this initialization can be done while users are working. Then initialize the last period at night, over a weekend, on holidays, etc.
    If you know which documents are in closed periods, and you are sure that these documents can no longer be changed, you can fill the setup tables only for those documents or periods while posting continues in the open periods. You then initialize only for these intervals, delete the setup table, and only then fill the setup table with the rest of the documents. This procedure can drastically reduce the downtime of your system.
    However, there is a risk that user exits (and, in LIS, formulas and conditions) can still touch documents in periods that are already "closed".
    One more thing to bear in mind: check whether there are any scheduled jobs updating the transaction tables, as those would definitely cause data reconciliation issues.
    Try Early Delta Initialization
    With early delta initialization, you have the option of writing the data into the delta queue or into the delta tables for the application during the initialization request in the source system. This means that you are able to execute the initialization of the delta process (the init request), without having to stop the posting of data in the source system. The option of executing an early delta initialization is only available if the DataSource extractor called in the source system with this data request supports this.
    Extractors that support early delta initialization are delivered with Plug-Ins as of Plug-In (-A) 2002.1.
    You cannot run an initialization simulation together with an early delta initialization.
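    As an example, with early delta initialization you could run the init request on a Friday evening while users keep posting: documents posted during the init are written straight into the delta queue or delta tables and are picked up by the first delta load, so no posting stop is needed for the initialization itself.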
    Hope these links make Early Delta Initialization clear:
    http://help.sap.com/saphelp_nw04s/helpdata/en/80/1a65dce07211d2acb80000e829fbfe/frameset.htm
    http://www.allinterview.com/showanswers/2907.html
    http://sap.ittoolbox.com/groups/technical-functional/sap-bw/early-delta-initialization-459379
    http://books.google.co.in/books?id=qYtz7kEHegEC&pg=PA293&lpg=PA293&dq=early+delta&source=web&ots=AM1PtX6wcZ&sig=xKOF85Gb8UtszY44zt06K6R0n3M&hl=en#PPA290,M1
    EARLY DELTA
    Early delta Initialization
    How To… Minimize Downtime For Delta Initialization
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5d51aa90-0201-0010-749e-d6b993c7a0d6
    How To Minimize Effects of Planned Downtime (NW7.0)
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/901c5703-f197-2910-e290-a2851d1bf3bb
    Note 753654 - How can downtime be reduced for setup table update
    602260 - Procedure for reconstructing data for BW 
    437672 - LBWE: Performance for setup of extract structures 
    436393 - Performance improvement for filling the setup tables 
    Note 739863
    /thread/756626 [original link is broken]
    Re: How to Setup and INIT from 2LIS_13_VDITM with millions of records
    How downtime can be reduced for setup table update.
    Fill setup tables without locking users
    Initialization Setup Tables.
    Hope this helps.
    Thanks,
    JituK

  • Feasibility for a new production system in the same environment

    Dear friends,
    Our management has come up with a requirement to expand the SAP environment to cover users in a different country for their franchise there. The options we are considering are below:
    1. Implementing a whole new setup (ECC, EP, BI, PI, TREX).
    2. Expanding the leased lines between the sites in both countries and providing access to the present setup itself, with enhancements to the current hardware configuration.
    Or is it good practice to keep a new production system in the other country and add a new delivery route for that production system within the current transport route? And how would licensing be taken care of in these cases?
    Your suggestions are welcomed.
    Many thanks,
    SUJIT

    Hi Sujit,
    The answer to this question really depends on whether the new country is going to share much of the same configuration, with some tailoring to local laws and requirements. If the system is going to be the same, then there is no reason to deploy a new system unless you are reaching technical limits of the hardware. For ease of management and usage, having one system is preferred. You can then upgrade your network to cope with the increased load, and the hardware as required.
    Hope this helps,
    Graham

  • ChaRM scenario for only one productive system

    Hi CHaRM-experts!
    I would like to know whether it is possible to use the Change Request Management scenario for only one productive system.
    That is, we have systems and cases where we need to apply an emergency correction only in the productive system, without any transport from the DEV or QAS system.
    Does ChaRM support such a case?
    Thank you very much!
    regards
    Thom

    Hi Don,
    thank you very much for your answer.
    No, I do not plan any transport in the productive system.
    I would just make administrative changes, without any transport.
    Does ChaRM support such scenarios?
    Thank you!
    regards
    Thom

  • Best Practice setting up NICs for Hyper V 2008 r2

    I am looking for suggestions on best practice for setting up a Hyper-V 2008 R2 host at a remote location with 5 NICs: one for the management VLAN and the other 4 on the data VLAN. This server will host 2 virtual machines: one is a DC and the other is a member server running the local DHCP server. The server is set up now with one NIC on the management VLAN and the other NICs set to get their IPs from the local DHCP server on the host. We have the virtual networks set up in Hyper-V to point to each of the NICs using an "external connection". The virtual servers (DHCP and AD) have their own IPs set within them. The issue we are seeing: when the site loses its external connection for a while, machines on site can no longer get IP addresses from the local DHCP server.
    1. NIC on management Vlan -- IP Static -- Physical host
    2. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V  -- virtual server DHCP
    3. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- Virtual server domain controller
    4. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- extra
    5. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- extra
    Thanks in advance

    Looks like you may be overcomplicating things here. More and more of the recommendations from Microsoft at this point would be to create a logical switch and then layer on logical networks for your management layers, but here is what I would do for your simple remote office.
    Management NIC: looks good. (Teaming would be better, but only if you had two different switches to protect against link failures at the switch level; that doesn't seem relevant in this case.)
    NIC for the data network VLAN: I would use one NIC in your case, if you have the ability to trunk multiple VLANs at the switch level to that NIC. That way you set, on each VM's NIC, the VLAN you want it to access, and your virtual switch configuration stays very simple. On this virtual switch, however, I would uncheck IPv4 and IPv6; there is no need to give this NIC an address, as you are just passing traffic through it from the VMs that are marked with VLAN tags. Again, if you have multiple physical switches in the building, teaming could be an option, but it probably adds more complexity than is necessary for a small office.
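    For example (VLAN numbers made up): trunk VLANs 10 and 20 to that single data NIC at the physical switch, bind one external virtual switch to it with IPv4/IPv6 unchecked on the host, and then set VLAN 10 on the DHCP server's and domain controller's virtual NICs in their VM settings.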
    Even if you keep your virtual switches linked to separate NICs, unchecking IPv4 and IPv6 makes sense.
    Disable all the other NICs.
    Beyond that, check your routing. Can you ping between all hosts when there is no interruption? Which DHCP server are the clients normally getting their addresses from? Where are your name resolution servers (DNS, WINS)?
    No silver bullet here, but maybe a step in the right direction.
    Rob McShinsky (VirtuallyAware.com)
    VirtuallyAware - Experiences in a Virtual World (Microsoft MVP - Virtual Machine)

  • Best Practice : Creating Custom Renderer for Standard Component

    I've been reading the docs and a few threads about Custom Renderers. The best practice seems to be to create a Custom Component where you need a Custom Renderer. Is this the case?
    See [this post|http://forums.sun.com/thread.jspa?forumID=427&threadID=520422]
    I've created several Custom Renderers to override the HTML provided by the Standard Components; however, I can't see the benefit in also creating a Custom Component when the behaviour of the standard component is just fine.
    Thanks,
    Damian.

    It all depends on what you are trying to accomplish. Generally speaking if all you need is for the user interface output to be changed then a renderer will work just fine. A new component is usually made in order to provide some fundamental change in server side functionality not related to the user interface. - Ponderator

  • Best Practice Advice - Using ARD for Inventorying System Resources Info

    Hello All,
    I hope this is the place I can post a question like this. If not please direct me if there is another location for a topic of this nature.
    We are in the process of utilizing ARD reporting for all the Macs in our district (3500 +/- a few here and there). I am looking for advice and would like some best practices ideas for a project like this. ANY and ALL advice is welcome. Scheduling reports, utilizing a task server as opposed to the Admin workstation, etc. I figured I could always learn from those with experience rather than trying to reinvent the wheel. Thanks for your time.

    Hey, I am also interested in any tips. We are gearing up to use ARD for all of our Macs, current and future.
    I am having a hard time with entering the user/pass for each machine - is there an easier way to do so? We don't have nearly as many Macs running as you do, but it's still a pain to do each one over and over. Any hints? Or am I doing it wrong?
    thanks
    -wilt

  • BPC 5 - Best practices - Sample data file for Legal Consolidation

    Hi,
    We are following the steps indicated in the SAP BPC Best Practices: http://help.sap.com/bp_bpcv151/html/bpc.htm
    A Legal Consolidation prerequisite is a sample data file that we do not have: "Consolidation Finance Data.xls".
    Does anybody have this file or know where to find it?
    Thanks for your time!
    Regards,
    Santiago

    Hi,
    From this address [https://websmp230.sap-ag.de/sap/bc/bsp/spn/download_basket/download.htm?objid=012002523100012218702007E&action=DL_DIRECT] you can obtain the .zip file for the Best Practice, including all scenarios and the CSV files (under the misc directory) used in these scenarios.
    Consolidation Finance Data.txt is in there as well.
    Regards,
    ergin ozturk

  • Best Practices: iPad/MacBookPro synching for video production in education

    My organization just bought 14 MacBook Pros and 14 iPad Minis. Our goal is to have students in single-day classes use the iPads to film something, then sync/export the video to a MacBook Pro where they can then edit that video in iMovie. Once that single-day class is over, all of the video will (likely) be deleted, and new students come in a couple of days later and start fresh. I'm trying to figure out the best practices for this to make it as painless as possible for all involved.
    So, matching AppleIDs for each pair? One AppleID for all devices and manual synch through iTunes? Dropbox/cloud synching instead of iTunes?
    All of these devices are brand new. I have already started prepping the MacBook Pros, but have not even turned on the iPads since I'm not sure which AppleID I should attach to the iPads -- I assume the first AppleID on an iPad will accept the iLife apps much the same way they do on the MacBook Pros.
    Any help is appreciated.
    Thanks
    Jack

    Well, the most important fact to accept is that ALL DRIVES WILL FAIL; it's just a matter of when. I can tell you about a nightmare situation with G-Drives (before Hitachi bought them). What format are you shooting? If you shoot on tape, you can always recapture, as long as you captured with "abort capture on dropped frames" and "make new clip on timecode break" enabled. But that's gonna take "real time." If you shot on a chip-based format, backing up the chips in multiple places (and I mean multiple) can provide a sense of security. But if you need to be able to get back to work immediately after a drive failure, having a backup of your media, or having stored it on a redundant RAID, is crucial. I also seriously recommend having a clone of your startup drive, so if your startup (boot) drive fails you can get back to work quickly.
    https://discussions.apple.com/docs/DOC-2494

  • Best practices when carry forward for audit adjustments

    Dear experts,
    I would like to know if someone can share his best practices when performing carry forward for audit adjustments.
    We are actually doing legal consolidation for one customer and we are facing one issue.
    The accounting team needs to pass audit adjustments around April-May for last year.
    So from January to April / May, the opening balance must be based on December closing of prior year.
    Then from May / June to December, the opening balance must be based on Audit closing of prior year.
    We originally planned to create two members for December period, XXXX.DEC and XXXX.AUD
    Once the accountants would know their audit closing balance, they would have to input it on the XXXX.AUD period and a business rule could compute the difference between the closing of AUD and DEC periods and store the result on an opening flow.
    The opening flow hierarchy would be as follow:
    F_OPETOT (Opening balance Total)
        F_OPE (Opening balance from December)
        F_OPEAUD (Opening balance from the difference between closing balance of Audit and December periods)
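    For example, if the December closing balance is 100 and the audited closing balance is 120, F_OPE would carry forward 100 and F_OPEAUD the 20 difference, so that F_OPETOT opens the new year at the audited 120.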
    Now, assume that we are in October and, for whatever reason, the accountant runs a carry forward for February: he is going to impact the opening balance, because at this time (October) we already have the audit adjustments.
    How to avoid such a thing? What are the best practices in this case?
    I guess it is something you may have encountered if you have done a consolidation project.
    Any help will be greatly appreciated.
    Thanks
    Antoine Epinette

    Cookman and I have been arguing about this since the paleozoic era. Here's my logic for capturing everything.
    Less wear and tear on the tape and the deck.
    You've got everything on the system. Can't tell you how many times a client has said "I know that there was a better take." The only way to disabuse them of this notion is to look at every take. If it's not on the system, you've got to spend more time finding the tape, adding "wear and tear on the tape and the deck." And then there's the moment where you need to replace the audio for one word from another take. You can quickly check all the other takes (particularly if you've done a thorough job logging the material - see below).
    Once it's on the system, you still need to log and learn the material. You can scan thru material much faster once it's captured. Jumping around the material is much easier.
    There's no question that logging the material before you capture makes you learn the material in a more thorough way, but with enough self-discipline, you can learn the material as thoroughly once it's been captured.

  • Best practices with LDIF Development for RBAC?

    I'm currently working on enforcing RBAC (Role Based Access controls) in OID that may be subject to change every few months. What I've currently been doing is writing LDIF files to make changes to the existing RBAC once the changes have been finalized.
    Unfortunately, now we have ended up with a growing list of LDIF files that must be run in sequential order if we were to build a new environment. Any defects or development errors that slip through developer unit testing must be handled in the same manner.
    What is the best practice process for performing this type of development? Would it make more sense to have one LDIF file that removes all of the RBAC enforcement (via ldapmodify -c), and then a separate file that installs the latest and most up-to-date version? I've also considered just using one LDIF file, appending any updates to the end of it, and running the ldapmodify command with the -c parameter.
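    To make the "teardown + reinstall" variant concrete, here is a rough sketch of the driver script I have in mind (Python; the host, port, credentials and file names are placeholders, and the two LDIF files would hold the actual RBAC entries):

        import subprocess
        from pathlib import Path

        # Placeholder connection details and file names - adjust for the real OID instance.
        OID_ARGS = ["-h", "oid.example.com", "-p", "3060", "-D", "cn=orcladmin", "-w", "changeme"]
        TEARDOWN_LDIF = Path("rbac_teardown.ldif")   # removes the currently enforced RBAC entries
        CURRENT_LDIF = Path("rbac_current.ldif")     # the full, up-to-date RBAC definition

        def apply_ldif(ldif: Path) -> int:
            """Apply one LDIF file with ldapmodify -c so per-entry errors
            ('already exists', 'no such object') do not abort the whole run."""
            return subprocess.run(["ldapmodify", "-c", *OID_ARGS, "-f", str(ldif)]).returncode

        # Rebuild from scratch instead of replaying a growing list of delta LDIFs.
        apply_ldif(TEARDOWN_LDIF)
        apply_ldif(CURRENT_LDIF)

    The single ever-growing LDIF alternative would keep the history in one place, but it stays order-dependent and gets slower to replay, which is exactly what I am trying to get away from.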

    With regard to the 29.97/30 thing, you'll find that video people are idiosyncratically imprecise about that. We say 60 when we mean 59.94, we say 30 when we mean 29.97 and we say 24 when we mean 23.976.
    We're quirky.
    Whenever somebody says one of those nice, round numbers, you can assume they're really talking about the corresponding ugly fraction.
    Unless they're film people, in which case 24 means 24, dangit.
