Performance problems on SEM-BPS

Hello,
I am seeing long response times in a web application containing a Web Excel layout. I analyzed BPS_STAT0, and it indicates 2 seconds for the selection of a variable, whereas my stopwatch indicates 10 seconds.
Where does this delta of 8 seconds come from?
Thank you for your help.
Catherine Bellec

Hi Catherine,
A web application consists of a number of interface elements such as variables, functions, and layouts. If you change a variable selection and the layout refreshes after the selection, then BPS_STAT0 shows the total time of the variable selection plus the layout refresh under the variable selection event. This becomes even more misleading if you have a function running on layout refresh. That explains the delta.
If the triggering of an event is activated in the attributes for the selectors, the whole page is updated after each selection. In this case, it is preferable to deactivate the events and add a “Refresh” pushbutton. This means that the page is only updated once upon completion of the entire selection.
Regards
Tarun

Similar Messages

  • Performance concerns when uploading a flat file into SEM-BPS

    Hi,
    We are using the how-to document to upload a flat file into SEM-BPS.
    In the same exit function, we need to derive missing characteristic values from reference data.
    So we are reading reference data using API_SEMBPS_GETDATA.
    The upload takes around 10 minutes in the test system. Our concern is: if it takes this long in a test system with little data, what will it take in production with more data?
    From what I can see, most of the time is consumed reading the reference data.
    I'm dealing with around 14,000 records of reference data and about the same number of uploaded records. Initially the system status is "number of cells to be formed: 33092"; after about 5 seconds it changes to "formed cells: 33000", and this status stays for around 8 minutes.
    I'm not using any input/output layouts, just the exit planning function in the planning folder. So I assume the above status comes from reading the reference data, not from a huge number of uploaded records.
    When I ran the same exit function with just the "read reference data" code commented out, the upload execution time came to 2 minutes.
    What is the best bet for dealing with this scenario?
    In general, what is the best approach to read reference data / to derive missing characteristic values?
    I wasn't able to use "characteristic relationships using reference data", as it might not be suitable in my case. Even if it were, I am missing documentation/examples on characteristic relationships; the documentation on help.sap.com is not enough here.
    PS: Initially, when I tried to read the 14,000 records at the detailed level, the exit function returned no reference data, and in debug mode I could see the message "too many records".
    Can a given layout read only a maximum of 9999 records / Excel rows?
    The records are at CALMONTH level. Since I don't need CALMONTH to derive the characteristic values, I deleted CALMONTH from the "read reference data" level to avoid the above message. Does this relate to performance in any way?
    Appreciate any help

    Hello Hari,
    it is tough to say what exactly causes this performance problem. Since you are dealing with custom coding (the how-to plus your derivation logic), I suggest running an ABAP trace (SE30) to see where the time is really spent.
    The API call to read data should not take more than a few seconds. Test it separately by putting the API call into a simple ABAP program.
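    A minimal sketch of such a test program, just as a timing harness: the actual CALL FUNCTION 'API_SEMBPS_GETDATA' and its parameter list should be copied from SE37, since the exact interface depends on your release.
    report z_getdata_timing.
    data: lv_t0 type i,
          lv_t1 type i,
          lv_us type i.
    get run time field lv_t0.
    * Put the CALL FUNCTION 'API_SEMBPS_GETDATA' of your application
    * (planning area/level/package plus selection tables) in this form.
    perform read_reference_data.
    get run time field lv_t1.
    * GET RUN TIME returns microseconds.
    lv_us = lv_t1 - lv_t0.
    write: / 'Read took', lv_us, 'microseconds'.
    form read_reference_data.
    * Stub: copy the API call from the exit function here.
    endform.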
    As Mary pointed out already, there's a 9999 line limit for layouts and therefore the GETDATA API as well.
    Note: The file upload/download how-to solution was never meant for mass data loads. This needs to be done using regular BW functionality.
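    On the derivation step itself, one general ABAP pattern (a sketch only, with hypothetical field names, not tied to your data model): read the reference data once, buffer it in a hashed internal table, and do the per-record lookups from memory instead of reading reference data repeatedly.
    types: begin of ty_ref,
             product(18)    type c,  " hypothetical characteristic
             costcenter(10) type c,  " hypothetical derived value
           end of ty_ref.
    data: lt_ref type hashed table of ty_ref with unique key product,
          ls_ref type ty_ref.
    * Fill lt_ref once from the GETDATA result. Then, in the loop over
    * the uploaded records, each lookup is a constant-time memory read:
    read table lt_ref into ls_ref with table key product = 'P10'.
    if sy-subrc = 0.
    * use ls_ref-costcenter as the derived value
    endif.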
    Regards
    Marc
    SAP NetWeaver RIG

  • Performance Problems with Web Layouts in web interface

    Hello Gurus,
    We have a BPS web interface tool which has the following design:
    1> A web interface with several tabs
    2> Each tab has around 3-4 input layouts which are dependent on each other
    3> In all there are 120-140 layouts that the tool uses...
    My questions are in terms of performance:
    1> Is there a limit to how many web layouts you can use per page/tab/view, or does SAP recommend a specific number of web layouts per page/tab/view?
    2> If there is a limitation, our intention is to convert all the display layouts into BW reports so as to improve the performance of the tool.
    3> I would also like to know the restriction on the number of users who can log into the tool at a given point in time. We may have a minimum of 50-60 users on this tool.
    I would appreciate your help in this regard.
    Thanks in advance

    Hello Rashmi,
    Have you had a chance to look at the performance guide and the SAP notes on BPS performance? If not, here are the details:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/documents/a1-8-4/performance guide - sap sem bw bps.pdf
    Here are a few SAP notes related to improving performance:
    358921 - Oracle database parameterization for SEM
    459897 - SEM-CPM: Performance when reading transaction data
    566713 - Required information for the analysis of performance problems
    560369 - Proposals BW aggregates for SEM-BPS
    180605 - Oracle database parameter settings for BW
    124361 - Oracle parameterization (R/3 >= 4.x, Oracle 8.x / 9.x)
    358529 - Overview of performance notes
    350011 - Technical performance: Using the business content
    340246 - Techn. performance: Overview of statistics   
    417091 - Optimize execution time of planning functions
    Some of them are Oracle-specific; ignore those if you are not on an Oracle database. Hope this helps.
    Thanks,
    Praveen
    PS: Don't forget to reward points.

  • Refresh current SEM-BPS layout

    Hi,
    I have a need to refresh the current SEM-BPS layout after executing a planning sequence.
    How do I do that?
    The problem is:
    A planning sequence reposts the current layout data, and the data disappears from the layout, but the document icons still appear in the layout. (These are just ghost icons: they don't let me delete/edit the document; if I click on them, nothing happens, but the icons remain.)
    After executing this planning sequence, I switched the layout (via another button in the folder) and then came back to the first layout, and the icons were gone.
    So I expect that refreshing the current layout should solve this problem.
    Appreciate any kind of help.

    Hi,
    I guess the problem lies in the way the Excel macros transmit the data from the BPS sheet to your sheet. The best way to handle the problem is to insert a delete operation in your VBA coding before the new data is transferred to the second sheet. What I mean is: in the macro SAPAfterDataPut, first delete all data in the second sheet (including the icons) and then transfer everything to the second sheet.
    You cannot trigger a "restart" of the Excel layout via a planning function. If you are in a layout and trigger a planning function, the system notices that the same layout should be shown and thus does not restart Excel (for performance reasons). If you switch to another layout, the Excel application is stopped and a new Excel is started. There is no way to influence this behavior without a system modification.
    Best regards,
    Gerd Schoeffl
    SAPNetWeaver BI RIG EMEA

  • Analysis Authorization with SEM-BPS

    Hi,
    We have performed a technical upgrade from BW 3.5 to BI 7.0. We want to migrate to the BI 7.0 functionality in phases.
    We have SEM-BPS, and now we want to migrate to the analysis authorizations of BI 7.0.
    Once we have migrated to analysis authorizations, will there be any impact on SEM-BPS? Can we still use SEM-BPS with the new analysis authorizations? We do not want to move to BI-IP in the near future.
    Please advise.
    Best Regards,
    UR

    Dear UR,
    I'm going to try to help you.
    In contrast to the reporting functionality, in planning the data of an InfoCube is not just read; it is also changed or created.
    There are two planning tools in BI: BW-BPS (Business Planning and Simulation) and BI Integrated Planning.
    There are two main transaction codes: BPS0 and RSPLAN.
    There are three authorization objects to manage Integrated Planning:
    S_RS_PL_ADMIN - Planning Administrator
    S_RS_PL_PLANNER - Planner
    S_RS_PL_PLANMOD_D - Planning Modeler (Development System)
    The main object in the planning scenario is the real-time InfoCube, which allows writing in small packages that arrive in parallel. In some cases the security requirements for reporting and planning are merged. In that case you need the planning authorization objects above, plus authorization object S_RS_COMP for using a query for planning.
    In addition to the authorizations for displaying data, for changing data you need an analysis authorization (the analysis authorization refers to the InfoProvider, not to the aggregation level).
    In your analysis authorization design, for reporting you should use value 03 in characteristic 0TCAACTVT; for planning, you should use values 03 and 02. As explained in the documentation:
    Using the characteristic 0TCAACTVT (activity), you can restrict the authorization to different activities. Read (03) is set as the default activity; you must also assign the activity Change (02) for integrated planning.
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/b1/0c9441b8972e7be10000000a1550b0/frameset.htm
    I hope this suggestion helps answer your question.
    Luis

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take over 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable but interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN varchar2(20),
      XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
      CODE varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
      ssn varchar2(20);
      xmlrec xmltype;
      i integer;
    BEGIN
      xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
      <Id>123456789</Id>
      <Element>
        <Subelement1><Code>11</Code></Subelement1>
        <Subelement2><Code>21</Code></Subelement2>
        <Subelement3><Code>31</Code></Subelement3>
      </Element>
      <Element>
        <Subelement1><Code>11</Code></Subelement1>
        <Subelement2><Code>21</Code></Subelement2>
        <Subelement3><Code>31</Code></Subelement3>
      </Element>
      <Element>
        <Subelement1><Code>11</Code></Subelement1>
        <Subelement2><Code>21</Code></Subelement2>
        <Subelement3><Code>31</Code></Subelement3>
      </Element>
    </Root>');
      for i IN 1..100000 loop
        insert into records(ssn, xmlrec) values (i, xmlrec);
      end loop;
      commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
      description varchar2(100);
      i integer;
    BEGIN
      description := 'This is the code description ';
      for i IN 1..3000 loop
        insert into codes(code, description) values (to_char(i), description);
      end loop;
      commit;
    END;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
      <Id>123456789</Id>
      {for $e in $r/Element
       return
       <Element>
         <Subelement1>
           {$e/Subelement1/Code}
           <Description>
             {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
           </Description>
         </Subelement1>
         <Subelement2>
           {$e/Subelement2/Code}
           <Description>
             {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
           </Description>
         </Subelement2>
         <Subelement3>
           {$e/Subelement3/Code}
           <Description>
             {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
           </Description>
         </Subelement3>
       </Element>}
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.

  • Webinar: Business Planning in SAP BW - The SEM-BPS transition to BW-BPS

    SAP NetWeaver Know-How Network Webinar:
    Business Planning in SAP BW - The SEM-BPS transition to BW-BPS
    Wednesday 14 July 2004
    11 a.m. EDT
    On Wednesday 14 July, Lori Vanourek, a NetWeaver BI RIG Consultant, hosts the webinar titled Business Planning in SAP BW - The SEM-BPS transition to BW-BPS as part of the ongoing SAP NetWeaver Know-How Network Webinar Series.
    Here’s how Lori describes her webinar presentation:
    “Business Planning is an integral part of SAP NetWeaver '04.  Join us as we share SAP's product roadmap for business planning and discuss the transition of BPS from SAP SEM to SAP NetWeaver '04.”
    SDN invites you to post your questions to the presenter prior to the webinar and continue the online discussion afterward.
    How to Participate
    (Please go to the SDN Events page to see the article and download the PDF presentation)
    Dial-in Information:
    Date: Wednesday 14 July 2004
    Time: 11 a.m. EDT
    Within the U.S., call: +1.888.428.4473
    Outside the U.S., call: +1.651.291.0618
    Password: NetWeaver04
    WebEx Information:
    Topic: SAP NetWeaver Know-How Network
    Date: Wednesday 14 July 2004
    Time: 11 a.m. EDT
    Meeting Number: 742391500
    Meeting Password: netweaver04 (lowercase)
    WebEx Link: sap.webex.com
    Replay Information:
    A recorded replay of this call will be available for approximately three months after the webinar. Access this recording by dialing the appropriate number and using the replay access code 720149.
    Toll-free: +1.800.475.6701
    International: +1.320.365.3844
    About the SAP NetWeaver Know-How Webinar Series
    The SAP NetWeaver Know-How Webinar Series is driven by the SAP NetWeaver Regional Implementation Group (RIG), part of the SAP Development organization. The mission of the SAP NetWeaver RIG is to enable customers, employees, and partners to successfully implement the SAP NetWeaver solution. This SAP RIG has expertise in BI, EP, XI, and WebAS. They contribute their implementation expertise to the SDN implementation forums as well as to the SAP NetWeaver Know-How Webinar Series.
    Disclaimer
    SDN is not responsible for any changes to the webinar schedule. The webinar schedule may be changed or cancelled without prior notice.

    Hello Marc and Sander,
    I checked the document status and tested the link myself using a PC external to the SAP environment and did not have any problem. You do have to be logged in to do this. If you are not, you should see a page that says a login is required to use that feature. On some occasions, users see an error message instead. I apologize for that.
    Kind Regards,
    David

  • Report on process chains using SEM-BPS

    We are looking to develop a tool to:
    1. Monitor data loads at the process chain level
    2. Within the same tool, enter error resolution / action steps for all failed loads
    3. Report out % of successful or failed loads, using the data captured by the tool.
    We have studied the BW tables RSPCCHAIN, RSPCCHAINATTR, RSPCLOGCHAIN, RSPCPROCESSLOG, RSPCLOGS, RSEVENTCHAIN, and RSPCCHAINT.
    We are considering the following options:
    1. Develop an ABAP report and allow the process chain monitor to manually enter the error resolutions (perhaps in a Z-table)
    2. Create an InfoCube in BW using the above tables and then write a query; but we cannot let the user enter error resolutions in the query.
    3. Create a transactional InfoCube in SEM-BPS using the above tables and use the input functionality of the transactional InfoCube to facilitate entry of the error resolutions.
    Can anyone suggest an alternative solution, or has anyone encountered a similar requirement on a project?
    One more thing to check: is there any way to create an InfoObject of type CHAR that can hold 1,000 or more characters? Or is there another way to solve this? This is to accommodate the text entry for the error resolution.
    Milind Vad

    Just some thoughts:
    Have you considered using the document service in BW, or even KM (Portal) technology, rather than CHAR 1000+ fields? This would also give you more flexibility regarding the type of document.
    For loading, there are monitors available (based on tables like RSMONMESS). If you are interested in loads only (and not other process types), using information from the BW load monitor could be more beneficial than the process chain tables.
    Maybe it is even worth having an InfoObject with a resolution code that links, via a key, to a document describing the resolution; whenever a new solution becomes available, create a new key with a new document. That would be most beneficial for reporting (e.g., how many times a given problem/resolution has occurred, and the like).
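    For the ABAP report in option 1, a rough sketch of the statistics part could look like the one below. Treat the field names (CHAIN_ID, ANALYZED_STATUS) and the status value 'G' for a successful run as assumptions to be verified in SE11 first; only the table RSPCLOGCHAIN itself is taken from the list above.
    report z_chain_success_rate.
    types: begin of ty_run,
             chain_id(25)       type c,
             analyzed_status(1) type c,
           end of ty_run.
    data: lt_runs  type standard table of ty_run,
          ls_run   type ty_run,
          lv_total type i,
          lv_ok    type i.
    * One row per chain run; the field list is an assumption.
    select chain_id analyzed_status
      from rspclogchain
      into table lt_runs.
    describe table lt_runs lines lv_total.
    * Count the runs flagged green ('G' assumed to mean success).
    loop at lt_runs into ls_run where analyzed_status = 'G'.
      add 1 to lv_ok.
    endloop.
    if lv_total > 0.
      write: / 'Successful runs:', lv_ok, 'of', lv_total.
    endif.
    The error resolutions entered by the monitoring team could then be joined in from the Z-table of option 1.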
    Regards, Klaus

  • Changes are not reflected in the SEM-BPS 1 layout

    Hi,
    First, I've included code in SAPAfterDataPut that copies data from the SEM-BPS 1 sheet into New Sheet.
    Now I've inserted a method in the sheet module of New Sheet. This method takes care of reflecting changes from New Sheet back into the SEM-BPS 1 layout. But in the end I am not getting the changes back into the SEM-BPS 1 sheet.
    I've tested this in Excel (outside of BPS) and it works fine.
    In BPS, I went into VBA debug mode (after inserting a breakpoint) and I can see that the change made in New Sheet is reflected back to SEM-BPS 1. But when I test the same thing outside of debug mode, I don't get the same result.
    It looks like, after the changes in New Sheet, SAPAfterDataPut is executed again and overwrites the changes I made in New Sheet.
    Do I need to include the code in SAPBeforeDataPut instead, to reflect changes from New Sheet into SEM-BPS 1?
    In Excel (outside of BPS), this method works in such a way that changes made in Sheet2 are reflected in Sheet1 as soon as I make them, without running any macro.
    But with the same code, this doesn't work in BPS (instead of the changes being reflected to SEM-BPS 1, the changed value in New Sheet is reverted to the original value, i.e. the same value as in SEM-BPS 1).
    Appreciate any ideas/thoughts on this.

    Hi,
    Usually you work with a second sheet in the following way: write a macro that copies the data from the SEM-BPS 1 sheet to the new sheet. To have it run automatically, include it in the macro SAPAfterDataPut and set the corresponding flag in the popup on the third screen of the layout builder. The system will then call the macro after the data has been written to the sheet SEM-BPS 1.
    There is a second predefined macro that the system executes, provided the corresponding flag is set in the popup in the layout builder. This macro is called SAPBeforeDataGet. It is called before BPS reads the data from SEM-BPS 1 and should be used for transferring the data back to the first sheet (similar to the coding you have done in SAPAfterDataPut). Using this macro should solve your problem.
    I did not understand how the changes are transferred from the new sheet in your (standalone) case. Unfortunately, it sometimes makes a difference whether you test a macro in standalone Excel or in Excel inplace. Some features are disabled in Excel inplace (by Microsoft!).
    Best regards,
    Gerd Schoeffl
    SAPNetWeaver RIG BI EMEA

  • Re: SEM BPS Upgrade from version 3.1 to 6.0

    Hello,
    Has anyone been involved in an SEM-BPS upgrade, especially from 3.1 to 6.0? Could you please give a checklist of the processes/methods to be followed?
    Many thanks
    Rj

    Hello Rj,
    in general this upgrade should not cause much pain, since BPS has basically remained the same and no migration is required. Assuming you don't use any other SEM components, you should perform a test run through all of your planning applications.
    The devil is in the details (like custom coding or JavaScript)...
    Regards,
    Marc
    SAP NetWeaver RIG

  • SEM-BPS 6.0 error message "requested data cannot be locked", but at 999 rows?

    Hi all,
    We are using SEM-BPS with the WIB and EP 7.0, and I have encountered this issue for the first time. I normally avoid configuring layouts with large hierarchies, but on this one the client insisted. They are in testing right now, and selecting a relatively small node (approx. 10-20 cost centers) still gives us an error message.
    I know layouts have a limit of 9,999 rows, and we should have been okay, since it is about 200-300 lines per cost center; but we are getting the error message below, and it mentions a limit of 999 rows in the selection table. Is there any way we can change that?
    Thanks,
    mary
    Requested data cannot be locked (-> see long text)
    Notification Number RSPLS092
    Diagnosis
    In order to edit data from InfoProvider 'ZYEE_C02', the requested data has to be locked exclusively for user 'YEESU01'. The data that is currently being requested is specified by a very large selection table. In order to lock the data exclusively, the system has to store a compressed version of this selection table on the SAP standard lock server. However, the compressed selection table still has more than 999 rows. So that a reasonable number of users are able to change data at the same time, the system limits the number of records allowed in a selection table to 999 records.
    Information on Context of Lock Request:
    Lock requests can come from BI-Planning or from BW-BPS. The context specifies the following information:
    BI-Planning: for a lock request from a query, {Planning Function} for a lock request from a planning function.
    BW-BPS: {Planning Area}{Planning Level}{Planning Package}{Planning Function}{Parameter Group}. For lock requests from manual planning the following is true: Planning function = '0-MP' and the parameter group is the technical name of the planning layout.
    InfoProvider 'ZYEE_C02' is always a Basis InfoProvider. The current context information is:
    '{0-ADHOC}{0-MP}'
    System Response
    The requested data cannot be locked.
    Procedure
    You can normally only avoid this by simplifying your selections. For more information, see the documentation on this.
    Procedure for System Administration

    Hi Mary,
    in RSPLSE you use the option that the BI lock table is maintained in the SAP enqueue server. Here there is a limit - as explained in the long text of the message - that the compressed selection table for one enqueue request must not have more than 999 rows. This limit cannot be changed.
    The selection table seems to be very big, since the compression factor is at least 5. So please check your selection table:
    1. Check whether you can make the characteristic causing the problem not lock-relevant, or
    2. Use the second option, where the BI lock table is maintained in a shared memory area of the central instance of the system.
    But it is better to keep the selection tables small, e.g. by making some characteristics not lock-relevant.
    Check sizing note 928044, especially in case you want to use option 2. The default setting is very small.
    Regards,
    Gregor

  • SEM-BPS 6.0 BPS_WB generate Web Interface BSP

    Hi all,
    We recently had a redirect of the portal connected to our NW04S BW development system and have worked through the issues on the portal side, so the existing iViews work (SEM-BPS 6.0).
    We are now creating some new iViews for a new application, but I cannot generate the BSP page. BPS_WB says that the BSP was generated correctly, but when I try to test it, it says that the HTTP entry is missing and asks whether I would like to add it. When I say yes, it fails, since I cannot generate a node under SICF: I do not have the authorization. I did an SU53 check, which asked for activity 01 (create) on S_ICF_ADM, but I am getting a lot of questions, since 01 on S_ICF_ADM is reserved for Basis only on this project.
    Another thing we are concerned about is that some system settings that need to be made on the BW side might not have been made, although Basis said all entries were changed.
    I have to admit that the last time I worked with someone to set up a portal connection to BPS was back in 2003, so I have to dig pretty deep into my memory. I have been doing some quick searches, but have not found anything obvious yet that would solve the generation issue, except for the authorization addition, which they are very hesitant about.
    Any information or thoughts on this issue / problem would be appreciated.
    Thanks,
    Mary

    Hello Mary,
    I still work on BCS, so I don't have BPS here. What I can do is give you a summary of the things I have done in the past. Maybe it helps?
    I guess you know that there are two different types of nodes: the system nodes that enable the connection to the web, and one node per web interface.
    The required system nodes are described in SAP note 517484. I guess you have activated them? Since BW release 3.5 the web interface works with HTMLB; I guess you have activated this too? So from my point of view, the last possibility is the web interface itself.
    Does the node for the web interface already exist?
    /default_host/sap/bc/bsp/sap/<your_web_interface>
    In a development system, the web interfaces are normally activated manually; in production this happens automatically, because customizing is not allowed there. If the web interface has already created such an entry, I think the only options are to give the web interface another name in the development system, or to delete the existing node in the production system (dangerous).
    If this is not your problem, maybe another consultant has an idea?
    regards
    Eckhard Lewin

  • SEM-BPS functionality

    Hi to everybody.
    I'm searching for a particular piece of SEM-BPS functionality, but I don't know whether it exists or whether I have to develop it myself (e.g. in ABAP).
    Here is the issue: I have a product hierarchy, the two bottom levels of which are PRODUCT and, under it, SKU (PRODUCT is the parent of SKU).
    I want to spread (i.e. distribute) the values of particular PRODUCTs down to their respective SKUs following the hierarchy definition. An example: the value of PRODUCT P10 down to SKU 19, SKU 20 and SKU 21; the value of PRODUCT P11 down to SKU 22, SKU 23, SKU 24; and so on.
    I have tried the ALLOCATION function and it works perfectly, but there are too many parameters to define (sender, receiver, distributor and so on), and it is not related to the predefined hierarchy (I mean, there is no way to define a hierarchical reference).
    This problem arises because of the large number of PRODUCTs and SKUs (it's hard work defining every single parameter...).
    So I'm searching for a function that allows me to refer to the product hierarchy and execute the spread.
    Any advice?
    Thanx a lot.
    Luca.

    Hi Luca,
    I am afraid I may not be able to help you with your query! But if you don't mind, may I ask for your help? I am new to SEM-BPS. I have done the certification in SEM, but you know how it is: the pace of the academy is so fast and furious that it is impossible to retain most of the material afterwards. Now I am working on an SEM-BPS project. The company wants me to show them what BPS can offer. They are currently using their own bespoke planning software but are not very happy with it. I need to show them the functionality of SEM-BPS. Are there standard examples of hierarchies in BPS? Or is there something you can send me to get started with hierarchies in BPS, so that I can map their organisation structure to a BPS planning layout and proceed from there? And last but not least, can you advise me on the structure of the InfoCube? What do I need to take into consideration before I design my InfoCube? Thanks a lot.

  • User-exit in SEM-BPS

    In SEM-BPS, I have a layout for manually entering data. The purpose is to enter values for a list of cost centers. This list of cost centers should change according to an SKF (statistical key figure) entered in the layout's header.
    The list of cost centers linked to a given SKF is read from a database table.
    This whole process is achieved with the use of a variable of type user-exit.
    However, I have two problems:
    1) If I change the SKF in the layout's header, I am not able to force another read of the database table in order to refresh the cost center list;
    2) I am not able to detect the SKF selected in the header, although I am using the function API_SEMBPS_VARIABLE_GETDETAIL to do so, which returns all the SKFs (statistical key figures).
    Can anybody help me?
    Thank you,
    Ricardo
    PS: Here is the exit, in its present state:
    <b>FUNCTION ZLACT_SFK_CC.
    ""Interface local:
    *"  IMPORTING
    *"     VALUE(I_AREA) TYPE  UPC_Y_AREA
    *"     VALUE(I_VARIABLE) TYPE  UPC_Y_VARIABLE
    *"     VALUE(I_CHANM) TYPE  UPC_Y_CHANM OPTIONAL
    *"     VALUE(ITO_CHANM) TYPE  UPC_YTO_CHA
    *"  EXPORTING
    *"     REFERENCE(ETO_CHARSEL) TYPE  UPC_YTO_CHARSEL
      DATA: ls_charsel TYPE upc_ys_charsel,
            seqno type i,
            tab_val_sel LIKE UPC_YS_API_VARSEL occurs 0 with header line,
            tab_val_sel_all LIKE UPC_YS_API_VARSEL occurs 0 with header line,
            head like UPC_YS_API_HEAD occurs 0 with header line.
      CLEAR:
      eto_charsel, eto_charsel[] ,
      tab_val_sel, tab_val_sel[] ,
      ls_charsel, char_value, seqno,
      t_ccusto, t_ccusto[],
      t_iest, t_iest[].
      case i_variable.
        WHEN 'PRVCCIE'.
    Get value of statistical key figure entered in layout's header****
    CALL FUNCTION 'API_SEMBPS_VARIABLE_GETDETAIL'
       EXPORTING
         I_AREA                   = 'PRRVS'
         I_VARIABLE               = 'PRVSK'
       TABLES
         ETK_VARSEL              = tab_val_sel.
    select cost center list from database table ***********************
        select
          from ZPR_CC_IE_CG
          appending corresponding fields of table t_ccusto
          where iest = tab_val_sel-low.
          clear:seqno.
          loop at tab_val_sel.
            ADD 1 TO seqno.
            ls_charsel-low   = tab_val_sel-low.
            ls_charsel-seqno = seqno.
            ls_charsel-opt   = 'EQ'.
            ls_charsel-sign  = 'I'.
            ls_charsel-chanm = '0STKEYFIG'.
            INSERT ls_charsel INTO TABLE eto_charsel.
          endloop.
          loop at t_ccusto.
            ADD 1 TO seqno.
            ls_charsel-low   = t_ccusto-ccusto.
            ls_charsel-seqno = seqno.
            ls_charsel-opt   = 'EQ'.
            ls_charsel-sign  = 'I'.
            ls_charsel-chanm = i_chanm.
            INSERT ls_charsel INTO TABLE eto_charsel.
          ENDLOOP.
      when others.
    *do nothing
      ENDCASE.
    endfunction.</b>

    Hello,
    I use the exit in the planning level, and I link the exit to a variable for "0FISCPER". I copy the data from a non-transactional cube to a transactional cube directly at period-end closing. I still need the data of the previous month for calculations in BPS. Maybe I can use your code.
    Thanks and best regards
    Constant

  • DYNAMIC SIMULATION MENU IN SEM BPS PLEASE ANSWER

    Hi everybody,
    I cannot see the menu "dynamic simulation" or the planning function "dynamic simulation" in our company's SEM system. The installation details are as follows:
    Component version : Netweaver 04
    Database: Oracle 9.2.0.5.0
    Component   Release     Level  Support Package
    SAP_ABA     640         0010   SAPKA64010
    SAP_BASIS   640         0010   SAPKB64010
    PI_BASIS    2004_1_640  0006   SAPKIPYI66
    SAP_BW      350         0010   SAPKW35010
    FINBASIS    300         0012   SAPK-30012INFINBASIS
    SEM-BW      400         0012   SAPKGS4012
    BI_CONT     353         0005   SAPKIBIFP5
    Does SAP no longer support integration with Powersim in this new version of SEM, or is there another way to integrate Powersim with BPS? I have seen these menus in SEM 3.20 on an underlying BW version 3.10.
    Thank you very much.
    With best regards.

    Hi again,
    I know that Powersim is sold separately and has its own license, but as I read, there is a delivered planning function in SEM-BPS to connect Powersim and BPS, and in the "Planning" menu (in transaction BPS0) there was an entry named "dynamic simulation" in SEM 3.2; I have seen this menu option there myself.
    I think these menus are necessary to call a simulation model prepared in Powersim; otherwise, how can I connect my Powersim model to BPS?
    Could the problem be the Support Package level, or something that is not installed in the system?
    Thanks very much
