Reg: Downtime for MM implementation

Dear Users,
We are already using SAP ECC 6.0 for day-to-day business operations and are planning to implement part of the Materials Management (MM) module soon.
Does anyone here know whether we need downtime on SAP for this implementation?
Warm regards,
Vijay.

Hi,
I don't know which submodule you are going to implement. If it is only additional functionality, such as Version Management or Confirmations, then there is no need for downtime.
You can make the changes in the Golden/Development client and then move them to Quality. Test them there, and then move the changes to Production.
Anand

Similar Messages

  • Downtime for EBCDIC to ASCII conversion

    Hello,
    we have successfully performed an EBCDIC to ASCII conversion for a client's development system.
    The total downtime was about 24 hours.
    The customer, though, refuses to accept downtime of their production system, since it severely affects their operations.
    Is there any possibility of performing an EBCDIC to ASCII conversion without downtime?
    As an alternative, we were thinking of building a second production system, doing the conversion there, and afterwards applying journal receivers from the real production system. This would reduce their downtime to the backup time of the production system plus the time needed to apply the journal receivers.
    Has anyone performed such a task?
    Would this be feasible (we would have to apply journals from an EBCDIC system to an ASCII system)?
    Thank you very much
    Katerina Psalida

    Hi Katerina,
    Applying journal changes from the EBCDIC system to the ASCII system will not work, for several reasons. Primarily, the journal keeps track of the journaled tables through an internal journal ID, which will not be the same after the EBCDIC to ASCII conversion. It would also fail technically because the data in the journal entries is kept very low-level, so a conversion from EBCDIC to ASCII during the apply is not implemented. Finally, the journal entries are based on the relative record numbers in the table, and after the conversion the relative record numbers will not necessarily be the same.
    I am not aware of a zero-downtime conversion option. You can speed up the conversion if you use the "Inplace" conversion option. Did you use that when you measured the 24 hours downtime at the test system? If not, you should give the Inplace option a try. Depending on your data, it could reduce the downtime significantly.
    Kind regards,
    Christian Bartels.

  • Downtime for portal

    Hi All,
    I want to schedule downtime for my portal during a particular time window, during which users should not be able to access my server and should see a downtime page like SDN's.
    But my administrator should still be able to access the portal. How can I do this?
    Thanking you ,
    rukmini

    Hi Devi,
    check out this thread:
    https://forums.sdn.sap.com/click.jspa?searchID=13124687&messageID=5629005
    Hope this fits your scenario.
    Cheers-
    Pramod
    reward points if helpful.

  • How to set reg.cgi for VideoPhoneLabs

    I am starting with Stratus. I am trying to set it up to work with my dedicated Apache server and SQL database, but I have no clue how to do it or where to put this file on my server.
    I am really having problems with reg.cgi.
    For now I have placed it at www/cgi-bin/reg.cgi, and in the XML I set my websiteurl = "http://88..../cgi-bin/"
    Also, how do I complete reg.cgi so that it talks to my DB?
    Here is the reg.cgi file.
    Help please...
    #! /usr/bin/python --
    # reg.cgi by Michael Thornburgh.
    # This file is in the public domain.
    #
    # IMPORTANT: This script is for illustrative purposes only. It does
    # not have user authentication or other access control measures that
    # a real production service would have.
    #
    # This script should be placed in the cgi-bin location according to
    # your web server installation. The database is an SQLite3 database.
    # Edit the location of the database in variable "dbFile".
    # Create it with the following schema:
    #
    # .schema
    # CREATE TABLE registrations (
    #     m_username VARCHAR COLLATE NOCASE,
    #     m_identity VARCHAR,
    #     m_updatetime DATETIME,
    #     PRIMARY KEY (m_username) );
    # CREATE INDEX registrations_updatetime ON registrations (m_updatetime ASC);
    # CHANGE THIS
    dbFile = '.../registrations.db'
    import cgi
    import sqlite3
    import xml.sax.saxutils
    query = cgi.parse()
    db = sqlite3.connect(dbFile)
    user = query.get('username', [None])[0]
    identity = query.get('identity', [None])[0]
    friends = query.get('friends', [])
    print 'Content-type: text/plain\n\n<?xml version="1.0" encoding="utf-8"?>\n<result>'
    if user:
        try:
            c = db.cursor()
            c.execute("insert or replace into registrations values (?, ?, datetime('now'))", (user, identity))
            print '\t<update>true</update>'
        except sqlite3.Error:
            print '\t<update>false</update>'
    for f in friends:
        print "\t<friend>\n\t\t<user>%s</user>" % (xml.sax.saxutils.escape(f), )
        c = db.cursor()
        c.execute("select m_username, m_identity from registrations where m_username = ? and m_updatetime > datetime('now', '-1 hour')", (f, ))
        for result in c.fetchall():
            eachIdent = result[1]
            if not eachIdent:
                eachIdent = ""
            print "\t\t<identity>%s</identity>" % (xml.sax.saxutils.escape(eachIdent), )
            if f != result[0]:
                print "\t\t<registered>%s</registered>" % (xml.sax.saxutils.escape(result[0]), )
        print "\t</friend>"
    db.commit()
    print "</result>"


  • What are the thread safety requirements for container implementation?

    The TopLink documentation rarely mentions thread-safety requirements, and container implementations are no different.
    The default TopLink implementation for:
    - List is Vector
    - Set is HashSet
    - Collection is Vector
    - Map is HashMap
    Half of them (List/Collection) are thread-safe implementations, and the other half (Set/Map) are not.
    So if I choose my own implementation, do I need a thread-safe implementation for:
    - List ?
    - Set ?
    - Collection ?
    - Map ?
    Our application always reads and writes via a UnitOfWork (UOW). So if TopLink synchronizes updates on client session objects, we should be safe with non-thread-safe implementations for any type; does TopLink synchronize updates on client session objects?
    The only thing we are certain of is that it is not thread-safe to read client session objects, or read-only UOW objects, if they are ever expired or refreshed.
    We got the stack dump below in an application that always reads and writes objects via a UOW, so we believe that TopLink does not synchronize correctly when it updates the client session objects.
    java.util.ConcurrentModificationException
    at java.util.AbstractList$Itr.checkForComodification(AbstractList.java:449)
    at java.util.AbstractList$Itr.next(AbstractList.java:420)
    at oracle.toplink.internal.queryframework.InterfaceContainerPolicy.next(InterfaceContainerPolicy.java:149)
    at oracle.toplink.internal.queryframework.ContainerPolicy.next(ContainerPolicy.java:460)
    at oracle.toplink.internal.helper.WriteLockManager.traverseRelatedLocks(WriteLockManager.java:140)
    at oracle.toplink.internal.helper.WriteLockManager.acquireLockAndRelatedLocks(WriteLockManager.java:116)
    at oracle.toplink.internal.helper.WriteLockManager.checkAndLockObject(WriteLockManager.java:349)
    at oracle.toplink.internal.helper.WriteLockManager.traverseRelatedLocks(WriteLockManager.java:144)
    at oracle.toplink.internal.helper.WriteLockManager.acquireLockAndRelatedLocks(WriteLockManager.java:116)
    at oracle.toplink.internal.helper.WriteLockManager.checkAndLockObject(WriteLockManager.java:349)
    at oracle.toplink.internal.helper.WriteLockManager.traverseRelatedLocks(WriteLockManager.java:144)
    at oracle.toplink.internal.helper.WriteLockManager.acquireLockAndRelatedLocks(WriteLockManager.java:116)
    at oracle.toplink.internal.helper.WriteLockManager.acquireLocksForClone(WriteLockManager.java:56)
    at oracle.toplink.publicinterface.UnitOfWork.cloneAndRegisterObject(UnitOfWork.java:756)
    at oracle.toplink.publicinterface.UnitOfWork.cloneAndRegisterObject(UnitOfWork.java:714)
    at oracle.toplink.internal.sessions.UnitOfWorkIdentityMapAccessor.getAndCloneCacheKeyFromParent(UnitOfWorkIdentityMapAccessor.java:153)
    at oracle.toplink.internal.sessions.UnitOfWorkIdentityMapAccessor.getFromIdentityMap(UnitOfWorkIdentityMapAccessor.java:99)
    at oracle.toplink.internal.sessions.IdentityMapAccessor.getFromIdentityMap(IdentityMapAccessor.java:265)
    at oracle.toplink.publicinterface.UnitOfWork.registerExistingObject(UnitOfWork.java:3543)
    at oracle.toplink.publicinterface.UnitOfWork.registerExistingObject(UnitOfWork.java:3503)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.registerIndividualResult(ObjectLevelReadQuery.java:1812)
    at oracle.toplink.internal.descriptors.ObjectBuilder.buildWorkingCopyCloneNormally(ObjectBuilder.java:455)
    at oracle.toplink.internal.descriptors.ObjectBuilder.buildObjectInUnitOfWork(ObjectBuilder.java:419)
    at oracle.toplink.internal.descriptors.ObjectBuilder.buildObject(ObjectBuilder.java:379)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.buildObject(ObjectLevelReadQuery.java:455)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.conformIndividualResult(ObjectLevelReadQuery.java:622)
    at oracle.toplink.queryframework.ReadObjectQuery.conformResult(ReadObjectQuery.java:339)
    at oracle.toplink.queryframework.ReadObjectQuery.registerResultInUnitOfWork(ReadObjectQuery.java:604)
    at oracle.toplink.queryframework.ReadObjectQuery.executeObjectLevelReadQuery(ReadObjectQuery.java:421)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.executeDatabaseQuery(ObjectLevelReadQuery.java:811)
    at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:620)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:779)
    at oracle.toplink.queryframework.ReadObjectQuery.execute(ReadObjectQuery.java:388)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.executeInUnitOfWork(ObjectLevelReadQuery.java:836)
    at oracle.toplink.publicinterface.UnitOfWork.internalExecuteQuery(UnitOfWork.java:2604)
    at oracle.toplink.publicinterface.Session.executeQuery(Session.java:993)
    at oracle.toplink.publicinterface.Session.executeQuery(Session.java:950)


  • How to reduce downtime for setup table

    Scenario: According to system data, the setup table will normally take 5 days to fill, but the client has agreed to a maximum of only 2 days of downtime. Users can change only documents from the last 3 months, not older ones. Filling 3 months of data into the setup table requires 1 day, so I have to manage the options accordingly.
    DataSource 2LIS_13_VDITM -> DSO ZBIllIG -> InfoCube
    I have to reduce the downtime for the setup table, so I am planning the following options:
    1. First run the InfoPackage for initialization without data transfer. Then start filling the setup table without blocking users. If users change any documents while the setup table is being filled, those changes will move to the delta queue. Once the setup table is filled, execute a full repair request and then the delta InfoPackage.
    2. Early delta initialization: I have no idea how to perform the steps.
    Please share your views with detailed steps.
    OLI*BW doesn't have any date range in its selection criteria, so I will manually find the documents for the relevant dates and use that document range.
    I have checked a lot of posts on SDN but am still hoping for a final answer before going ahead in Production.

    Hi,
    Your requirement is a re-setup of the Billing ODS and Cube in the R/3 system and a re-initialization in the BW system.
    Before starting, find the volume and size of the previous data load.
    1. Run LBWG with application value = 13 (always schedule the job in background mode).
    2. Verify using transaction SE16 that there are NO records in table MC13VD0ITMSETUP after the above delete job is complete.
    3. Suspend the process chain job in BW. This is to avoid it being kicked off while the reload process is still in progress.
    4. Check LBWQ in the R/3 system for MCEX13, the unprocessed outbound queue (records). This should be empty, as the last delta would have processed everything.
    5. Delete the init flag in BW.
    6. Check RSA7 in the R/3 system to verify that there is NO record for 2LIS_13_VDITM (to be done right before the setup job).
    7. Create a new InfoPackage for InfoSource 2LIS_13_VDITM with the 'Initialize without Data Transfer' option and execute the package. This re-establishes the delta processing flags in R/3 and BW for the billing transaction data load.
    8. Save the record count of table VBRP using SE16 right before the setup job.
    9. Schedule the billing data setup job (OLI9BW) in the R/3 system.
    10. After the billing setup job is complete in the R/3 system, get the record count of table VBRP again using SE16.
    Expected time in R/3: 5 to 7 hrs (setup jobs)
    Expected time for init and full load: 6 hrs
    ODS activation: 3 hrs
    Cube and aggregate fill: 8 hrs
    Thanks,
    naidu.

  • Best practices to reduce downtime for Database releases(rolling changes)

    Hi,
    What are the best practices for reducing downtime for database releases on 10.2.0.3? Which DB changes can be made in a rolling fashion and which can't?
    Thanks in advance.
    Regards,
    RJiv.

    I would be very dubious about any sort of universal "best practices" here. Realistically, your practices need to be tailored to the application and the environment.
    You can invest a lot of time, energy, and resources into minimizing downtime if that is the only goal. But you'll generally pay for that goal in terms of developer and admin time and effort, environmental complexity, etc. And you generally need to architect your application with rolling upgrades in mind, which necessitates potentially large amounts of redesign to existing applications. It may be perfectly acceptable to go full-bore into minimizing downtime if you are running Amazon.com and any downtime is unacceptable. Most organizations, however, need to balance downtime against other needs.
    For example, you could radically minimize downtime by having a second active database, configuring Streams to replicate changes between the two master databases, and configuring the middle tier environment so that you can point different middle tier servers at one or the other database. When you want to upgrade, you point all the middle tier servers at database A, other than one that lives on a special URL. You upgrade database B (making sure to deal with the Streams replication environment properly, depending on requirements) and do the smoke test against the special URL. When you determine that everything works, you point all the app servers at B (with the Streams replication process configured to replicate changes from the old data model to the new data model), upgrade A, repeat the smoke test, and then return the middle tier environment to its normal state of balancing between the databases.
    This lets you upgrade with 0 downtime. But you've got to license another primary database. And configure Streams. And write the replication code to propagate the changes made on B while you're smoke testing A. And you need the middle tier infrastructure in place. And you're obviously going to involve more admins than you would for a simpler deploy where you take things down, reboot, and bring things up. The test plan becomes more complicated as well, since you need to practice this sort of thing in lower environments.
    Justin
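    To make the repointing idea above concrete, here is a minimal sketch, in Python, of a middle tier that re-reads its target database from a small config file on each new connection, so that individual app servers can be flipped between the two masters without a redeploy. The DSNs, config file name, and driver choice are illustrative assumptions only, not part of the setup described above:
    import json
    import cx_Oracle  # assumed DB driver; any DB-API module works the same way
    # Hypothetical connect strings for the two master databases.
    DSNS = {
        'A': 'dbhost-a:1521/PRODA',
        'B': 'dbhost-b:1521/PRODB',
    }
    def get_connection(config_path='db_target.json'):
        # Repointing a middle tier server is just rewriting this one-line
        # config file, e.g. {"target": "B"}; no application restart needed.
        with open(config_path) as f:
            target = json.load(f)['target']
        return cx_Oracle.connect('appuser', 'secret', DSNS[target])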

  • What are appropriate directives for array implementation in hls

    Dear friends,
    I have tried a hardware implementation in Vivado HLS by passing arrays to the function, in a MicroBlaze-based SoC. The problem I am facing is that the RTL is not being implemented properly. Although the implementation was successful for register inputs and outputs, no functions were generated in the header files in the include directory of the pcores (through which inputs are given to the hardware) when the input and output arguments are arrays in SDK. I need help to know the appropriate directives for this implementation, the functions to be used to give inputs after generating the hardware, and how the final outputs are taken from the hardware. Please clarify the implementation of this basic example.
    void array_add(int z[4], int x[4])
    {
    #pragma HLS INTERFACE ap_fifo port=z
    #pragma HLS INTERFACE ap_fifo port=x
    #pragma HLS RESOURCE variable=z core=AXIS
    #pragma HLS RESOURCE variable=x core=AXIS
    #pragma HLS RESOURCE variable=return core=AXI4LiteS
        int i;
    label0:
        for (i = 0; i < 4; i++)
            z[i] = x[i];
        return;
    }
    thanks and regards
    sasidhar

    Hi
    There are a couple of directives for this; they determine the way the array is implemented or partitioned.
    I found a good guide:
    http://users.ece.utexas.edu/~gerstl/ee382v_f14/soc/vivado_hls/VivadoHLS_Improving_Performance.pdf
    Hope this helps.
    Regards
    Sikta

  • Setup script document for Sourcing implementation

    Hi,
    Please, can someone share a setup script document for a Sourcing implementation?
    Regards.

    Hi Ram,
    See wiki page:
    General PPDS wiki page
    http://wiki.sdn.sap.com/wiki/display/SCM/APO-PPDS
    Setup Matrix Generation in a Complex Manufacturing Environment
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/00a618c4-8aad-2b10-6ebb-f70cb4470195
    Official doc
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/00a618c4-8aad-2b10-6ebb-f70cb4470195?quicklink=index&overridelayout=true
    Hope this helps.
    Luiz Giani

  • Ways to reduce downtime for filling up setup table

    Hi Experts,
    Can anyone tell me the step-by-step process to reduce downtime for filling setup tables?
    I know that setup tables can be filled by restricting on sales document numbers, but the further steps are not clear to me, especially the data load to PSA and then on to the ODS/Cube.
    So please throw some light on this.
    Regards,
    Vaishnavi.

    Hi,
    You will need to fill the setup tables in a 'no postings' period, in other words, when no transactions are being posted for that area in R/3; otherwise those records will not come over to BW. Discuss this with the end users and decide. Weekends are a common choice for this activity.
    You can run the jobs after business hours so that there won't be any transactions, or at night, or on weekends, so that no downtime needs to be taken.
    Fill the setup tables with already-closed values first and then fill them again with open values. This will reduce the downtime.
    Initialize closed periods first, in which users won't enter data (for example 2007 or 2006); this initialization can be done while users are working. Then initialize the last period at night, on weekends, during holidays, etc.
    If you know which documents are in closed periods, and you are sure that these documents can no longer be changed, you can fill the setup tables only for these documents or only for these periods while continuing to post in open periods. You then initialize only for these intervals, delete the setup table, and only then fill the setup table with the rest of the documents. This procedure can drastically reduce the downtime of your system.
    However, there is a risk that user exits (and, in LIS, formulas and conditions) can be used to retrieve documents that are in periods that are already 'closed'.
    One more thing you need to bear in mind is to check whether there are any scheduled jobs updating the transaction tables, as these would definitely cause data reconciliation issues.
    Try early delta initialization.
    With early delta initialization, you have the option of writing the data into the delta queue, or into the delta tables for the application, during the initialization request in the source system. This means you can execute the initialization of the delta process (the init request) without having to stop the posting of data in the source system. Executing an early delta initialization is only possible if the DataSource extractor called in the source system with this data request supports it.
    Extractors that support early delta initialization are delivered with Plug-Ins as of Plug-In (-A) 2002.1.
    You cannot run an initialization simulation together with an early delta initialization.
    These links should make early delta initialization clearer:
    http://help.sap.com/saphelp_nw04s/helpdata/en/80/1a65dce07211d2acb80000e829fbfe/frameset.htm
    http://www.allinterview.com/showanswers/2907.html
    http://sap.ittoolbox.com/groups/technical-functional/sap-bw/early-delta-initialization-459379
    http://books.google.co.in/books?id=qYtz7kEHegEC&pg=PA293&lpg=PA293&dq=early+delta&source=web&ots=AM1PtX6wcZ&sig=xKOF85Gb8UtszY44zt06K6R0n3M&hl=en#PPA290,M1
    http://www.blackwellpublishing.com/journal.asp?ref=1069-6563&site=1
    EARLY DELTA
    Early delta Initialization
    How To… Minimize Downtime For Delta Initialization
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5d51aa90-0201-0010-749e-d6b993c7a0d6
    How To Minimize Effects of Planned Downtime (NW7.0)
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/901c5703-f197-2910-e290-a2851d1bf3bb
    Note 753654 - How can downtime be reduced for setup table update
    602260 - Procedure for reconstructing data for BW 
    437672 - LBWE: Performance for setup of extract structures 
    436393 - Performance improvement for filling the setup tables 
    Note 739863
    /thread/756626 [original link is broken]
    Re: How to Setup and INIT from 2LIS_13_VDITM with millions of records
    How downtime can be reduced for setup table update.
    Fill setup tables without locking users
    Initialization Setup Tables.
    Hope this helps.
    Thanks,
    JituK

  • "Case use" technique for SAP implementation?

    Hello all,
    has anyone used the "use case" technique for an SAP implementation? I mean for standard processes, not only development.
    Any recommendation about it?
    Thanks in advance and best regards,
    Adolfo

    It would be helpful to go through the help documentation of the ASAP Methodology.
    The following information may clarify some of the doubts/requirements you have.
    Generating the Project IMG through ASAP:
    After you have set the project scope, the next step is to generate the Project IMG. From the Business Process Master List (BPML), you can directly access the IMG activities relevant for configuring each process.
    BPML: The Business Process Master List, along with the Business Blueprint, is a key result of the second phase of the Roadmap. Microsoft Excel tables contain the SAP scenarios, process groups, and processes that have been set in scope in the SAP Reference Structure, and are crucial for configuring your SAP System. In Realization, the third phase of the Roadmap, the BPML provides the basis for monitoring and steering test activities and for configuring your SAP System. It contains the titles of the structure items, and displays the status, the owner, links to documentation and links to the SAP System. Amongst other things, the BPML allows you to:
    1) Set your baseline and final scope. These are used for baseline and final configuration.
    2) Access the Project IMG and specific IMG activities assigned to structure items.
    3) Access integration test plans, which help you carry out all required integration tests.
    The Prerequisite is you have set the project scope.
    Process Flow to use the Business Blueprint as a basis for configuring your SAP System:
    1) Set the project scope.
    2) Generate the Project IMG.
    3) Generate the BPML.
    4) From a specific process in the BPML, you can go to the relevant IMG activities and make Customizing settings.

  • Is SAP Solution manager mandatory for SRM7 implementation?

    Dear experts,
    I have been told that use of SAP Solution Manager is mandatory for an SRM 7 implementation. Is it true?
    What is the best practice for managing an SAP SRM implementation project? Should we manage the project in SAP Solution Manager or outside of it?
    Thanks and regards,
    Ranjan
    Ranjan Sutradhar

    Hi Ranjan,
    Solution Manager is not mandatory for an SRM 7 implementation.
    The only thing is that Solution Manager contains best practices for project implementation, e.g. as-is process, to-be process, blueprint, etc.
    It is just a guide; not everything may be required for your project.
    We did it without Solution Manager, except that we used it for generating the installation key and for the maintenance optimizer.
    Best Wishes,
    Tushar.

  • What could be the Approximate time for HRMS Implementation?

    Dear Friends,
    Any clue as to what the approximate time for an HRMS implementation could be?
    Modules: Core HR, Payroll, SSHR & OLM
    Bahrain legislation
    2 functional resources
    1 technical resource
    Integration with a third-party system for Financials.
    Any approximate time please.
    Guru

    Hi Guru,
    It is difficult to estimate, as each legislation is different.
    I also presume Bahrain is not a seeded legislation,
    so you will have to write all the tax rules and reports.
    But as the tax/NI rules for the GCC are fairly simple, it should not be difficult.
    The best way to go about it is to do it in phases.
    Core HR - BG configuration, person migration, absences etc., and finally any reports for HR.
    Payroll - elements, balances, formulas and all the logic, and then all the reports.
    Once HR and Payroll are done, SSHR and OLM should take a third (or half) of the time you spent on HR/Payroll.
    Cheers,
    Vignesh

  • Re:Downtime for a Resource

    Dear Experts
    I wanted to know how to enter downtime for a resource in PP-PI.
    When I went through the SAP Library, it explained downtime for a resource, and the menu path for entering downtime was given as Resource -> Reporting -> Enter Downtime.
    But when I follow that menu path in ECC 6.0, I cannot find it; there is no downtime node under Resource -> Reporting.
    Please explain the scenario to me in detailed steps.
    Regards
    Surender

    Dear Satya,
    I have a resource and I have scheduled it for a particular operation for 24 hrs.
    Suddenly there is a breakdown of the resource, due to which I cannot use it.
    So I need to enter the breakdown time for that resource; also, where can I see the entered time?
    Surender

  • Development Standards for BW implementation

    Hi all,
    Does anyone have any materials on development standards for a BW implementation, with special attention to BI 7.0?
    Thanks for your assistance. Contributions will be rewarded with points.
    Regards,
    Uche

    hi Uche,
    check if these help:
    Multi-Dimensional Modeling with SAP NetWeaver BI
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/6ce7b0a4-0b01-0010-52ac-a6e813c35a84
    Modeling the Data Warehouse Layer with BI
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3668618d-0c01-0010-1ab5-aa75c3a4dfc2
    Frontend Design Guidelines - SAP BI in SAP NetWeaver 2004s
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/58fd9183-0e01-0010-f183-fdc9019f77ab
    Enterprise Reporting, Query, and Analysis - Developers Guide
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/0901c9bb-0601-0010-49ab-c1770c527673
    Also check the 'key topics' section in the BI developer area:
    https://www.sdn.sap.com/irj/sdn/developerareas/bi
