Performance of source variables versus maintaining 2 areas

Hello,
I have a 'wrapper' function module which calls the API_SEMBPS_VARIABLE_SET function to set several variables in a multi area. I also need to maintain the same variables in a basic area. Would calling this function twice for each variable (once in the multi area and once in the basic area) be faster or slower than using the method described in the 'How to... BPS Variables of Type Exit' document, i.e. having a source variable in the multi area and then defining variables of type exit in the basic area, whose exit function calls 'Z_VARIABLE_GET_DETAIL' to set their values?
Many thanks, Linda

Hi Linda,
It does not make a big difference. However, using an exit variable will make sure that things stay in sync. Check the 'How To... Use Reporting Variables in BW-BPS' guide, which also describes how to implement global BPS variables.
http://service.sap.com/nw-howtoguides > BI
Regards,
Marc
SAP NetWeaver RIG
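
For illustration, a minimal sketch of what such a type-exit variable in the basic area could look like, assuming it simply re-reads the multi-area source variable. The interface (i_area, i_variable, i_chanm, ito_chanm, eto_charsel) follows the standard call quoted further down in this thread; the parameters passed to the 'Z_VARIABLE_GET_DETAIL' wrapper and the literal area/variable names are placeholders, not a documented signature.

FUNCTION z_basic_area_exit_var.
*  Exit-variable function module for the basic-area variable.
*  Assumed interface (as in the standard call quoted below):
*    IMPORTING i_area, i_variable, i_chanm, ito_chanm
*    EXPORTING eto_charsel TYPE upc_yto_charsel

* Re-read the selection of the source variable in the multi area via the
* wrapper from the How-to document (parameter names are assumptions).
  CALL FUNCTION 'Z_VARIABLE_GET_DETAIL'
    EXPORTING
      i_area      = 'MULTIAREA'     " multi planning area (placeholder)
      i_variable  = 'SOURCEVAR'     " source variable name (placeholder)
    IMPORTING
      eto_charsel = eto_charsel.    " return the same selection for the basic-area variable

ENDFUNCTION.

Performance-wise this adds one extra function call per variable, which matches Marc's point that the difference is small while the two areas stay in sync.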

Similar Messages

  • Variable type exit with two source variables

    I have a case where I need to determine the value of a variable (A) from two other variables (B) and (C). For example:
    - IF (B)=x AND (C)=y THEN (A)=z
    - IF (B)=i AND (C)=j THEN (A)=k
    It seems I cannot call the user exit with more than one exported variable; the standard call is below:
    " Standard call of the exit-variable function module; note that it
    " returns only one selection table (eto_charsel) per variable.
    call function m_user_exit
      exporting
        i_area      = m_area
        i_variable  = m_variable
        i_chanm     = l_chanm
        ito_chanm   = mto_chanm
      importing
        eto_charsel = rto_value.
    Any suggestions?
    Regards, Aki

    Thanks to everyone, especially Natarajan.
    Both source variables are of type exit, so I did the following:
    - The A-variable was the "standard source".
    - The B-variable was determined by a function call (source below).
    - The C-variable was then determined by a combination of A and B (sketched further below).
    I added this simple code at the beginning of "your own logic", with source_var_B holding the B-variable's selection.
    DATA: source_var_B TYPE upc_yto_charsel.
    " Read the selection of the B-variable via its own exit function.
    call function 'ZDETERMINE_B'
      exporting
        i_area      = 'AREA'
        i_variable  = 'VAR_B'
        i_chanm     = 'CHAR_B'
        ito_chanm   = ito_chanm
      importing
        eto_charsel = source_var_B.
    Do you see any risks, inconsistencies, or anything else I should take into account?
    Aki
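    To make the combination step concrete, here is a minimal sketch of how the C-variable's selection could be derived once the A and B selections are available, e.g. inside the exit function of the C-variable. The name source_var_A, the characteristic 'CHAR_C' and the literal values are placeholders, and the selection fields (chanm, sign, opt, low) are assumed to follow the standard BPS selection structure:
    DATA: source_var_A TYPE upc_yto_charsel.  " selection of A, assumed filled analogously to source_var_B
    DATA: ls_sel_a LIKE LINE OF source_var_A,
          ls_sel_b LIKE LINE OF source_var_B,
          ls_sel_c LIKE LINE OF eto_charsel.
    " Take the single value delivered for A and B (assumes one entry each).
    READ TABLE source_var_A INTO ls_sel_a INDEX 1.
    READ TABLE source_var_B INTO ls_sel_b INDEX 1.
    " Example combination rule: IF A = 'X' AND B = 'Y' THEN C = 'Z'.
    IF ls_sel_a-low = 'X' AND ls_sel_b-low = 'Y'.
      ls_sel_c-chanm = 'CHAR_C'.    " characteristic of the C-variable (placeholder)
      ls_sel_c-sign  = 'I'.
      ls_sel_c-opt   = 'EQ'.
      ls_sel_c-low   = 'Z'.
      INSERT ls_sel_c INTO TABLE eto_charsel.
    ENDIF.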

  • Package variable versus application context variables

    Is there any difference between using package-level variables and setting attribute-value pairs in a session-based context?
    Thanks

    Package variable - can be used only by a user who has execute privilege on the package. That user can modify the values of the package variable by simple assignment while logged on to the session.
    Context variable - cannot be set by a mere value assignment; it requires a call to a procedure (a trusted program) to set the value, and is therefore more secure.
    Both are global in scope and can be accessed throughout the session.
    Regards,

  • How to pass a parameter into a source variable of an invoke activity

    I'm a new BPELer. I created an invoke activity to submit an Oracle Applications concurrent program, but I don't know how to pass a parameter into the source variable.
    BTW, I have already created the mapper (.xsl) file.
    Could anyone tell me how to do that?
    Thanks,
    Victor

    Hi.
    How do you start the application? I think you send a message to a web service (a BPEL process is a web service too), so construct the message with the variable and its value.
    But I have only created processes where the input value doesn't matter, and I haven't used the mapper yet either.

  • Data source for a planning area

    Hi Experts,
    I have created a data source 9AZDP_APAP for my planning area ZDP_APAP. When I create this data source, SAP itself prefixes the data source name with '9A'.
    I'm wondering whether this behaviour (starting the data source name with '9A') is specific to our system or happens in every system.
    If it happens in every system, is there any way I can override it?
    The reason I'm asking is that my data source gives the error 'M version is different from A version' in RSA1, and when I try to activate the data source I get an authorization error. When I contacted our security people about this, they told me I'm not following the naming standards and that the data source name should start with Z.
    All the inputs on this will be highly appreciated.
    Regards,
    Brahma Reddy

    Hi,
    Generating a data source on a planning area in APO is a really helpful function; it saves a lot of time in generating the cube, rules and source.
    When the system generates the data source it follows its own naming logic with the '9A' prefix, which we cannot change.
    You could create a DataSource manually if required and follow your own naming logic, but that is not as consistent as what the system generates, because during generation the system checks the time characteristics and so on based on the storage bucket profile.
    I hope this helps you understand the value of the automatic generation of the data source.
    Yarala

  • Source and Program monitor are black/blank...Just installed CS5

    I drag my first video clip over to CS5 and put it in the timeline. I play the video in the Program Monitor and all I get is sound and no video. When I hit the spacebar to pause, I can then see the video in the Program Monitor, but when I go back to play mode it goes black again while I still hear the audio.
    The Source and Program Monitors are giving me the problems. Any help for a rookie?

    Work through all of the steps (ideas) listed at http://ppro.wikia.com/wiki/Troubleshooting
    If your problem isn't fixed after you follow all of the steps, report back with ALL OF THE DETAILS asked for in the FINALLY section (the questions at the end of the troubleshooting link), most especially the codec used... see Question 1

  • Variables from workbook are not visible in the Planning function

    Hi,
    We're using a workbook with a bunch of queries in it. Every query declares a lot of variables that are calculated from two input-ready variables that users are required to fill the first time.
    Everything works as expected until the save-data function is called, which in turn calls the planning function.
    At that point the user-exit variables are recalculated. Unfortunately, the planning function loses track of the two input-ready variables that were on the workbook, so the user-exit variables have no values.
    Can you please suggest a method so that either:
    a) the user-exit processing in the planning function "sees" the two input-ready variables, or
    b) the user-exit processing is not triggered in the planning function and it takes the already calculated values?
    Let me know if you require further information.
    Best regards,
    George

    Hi,
    the variables in the filter of the query live in one variable container and the variables in the filter of the planning sequence live in another variable container. They do not know each other, so you need to establish a relationship between them. Usually this is done via variable binding for regular variables; for exit variables the binding can be done as written above.
    If, as in your case, you have variables in the filter of the query and you want to use their values in a variable of the planning sequence, then you need to store the values somewhere where you can retrieve them in the exit, e.g. in shared memory or in session memory. To do this you add an extra exit variable to the filter of the query. This exit variable reads the real values and stores them somewhere (e.g. shared memory, a global parameter of a function group, a class member or something similar; a small sketch follows below). Once the value is stored, it can be read in the implementation of the exit variable in the filter of the planning sequence.
    In total we have 3 variables:
    Variable 1 contains the value which the user keys in and exists in the filter of the query.
    Variable 2 reads variable 1 and stores the value. It needs to be defined in the filter of the query as well.
    Variable 3 reads the stored value and needs to be defined in the filter of the planning sequence.
    Regards Matthias Nutt
    SAP Consulting Switzerland
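    As an illustration of that store-and-read-back idea, here is a minimal sketch that uses a class with a static attribute as the session-wide buffer, which is one of the storage options named above. The class and method names are invented for the example, and the exit implementations themselves (CMOD exit or BAdI) are not shown:
    CLASS zcl_var_buffer DEFINITION.
      PUBLIC SECTION.
        CLASS-DATA gv_value TYPE string.    " static attribute: kept for the whole session
        CLASS-METHODS set_value IMPORTING iv_value TYPE string.
        CLASS-METHODS get_value RETURNING VALUE(rv_value) TYPE string.
    ENDCLASS.
    CLASS zcl_var_buffer IMPLEMENTATION.
      METHOD set_value.
        " Called from the exit of variable 2 (query filter): buffer the value
        " the user entered for variable 1.
        gv_value = iv_value.
      ENDMETHOD.
      METHOD get_value.
        " Called from the exit of variable 3 (planning sequence filter): return
        " the buffered value so the sequence sees the user entry.
        rv_value = gv_value.
      ENDMETHOD.
    ENDCLASS.
    The exit of variable 2 would call zcl_var_buffer=>set_value( ) with the value read from variable 1, and the exit of variable 3 would call zcl_var_buffer=>get_value( ) and return it as its selection; this relies on the query and the planning sequence running in the same session, which is the session-memory option described above.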

  • Error saving map. Stored procedure returned non-zero result. Check if source and target schemas are present.

    I am using VS 2012 and BizTalk 2013 and attempting to deploy an application to BizTalk when I get these errors:
    Error 47
    at Microsoft.BizTalk.Deployment.Assembly.BtsMap.Save()
       at Microsoft.BizTalk.Deployment.Assembly.BtsArtifactCollection.Save()
       at Microsoft.BizTalk.Deployment.Assembly.BtsAssembly.Save(String applicationName)
       at Microsoft.BizTalk.Deployment.BizTalkAssembly.PrivateDeploy(String server, String database, String assemblyPathname, String applicationName)
       at Microsoft.BizTalk.Deployment.BizTalkAssembly.Deploy(Boolean redeploy, String server, String database, String assemblyPathname, String group, String applicationName, ApplicationLog log)
    0 0
    Error 49
    Failed to add resource(s). Change requests failed for some resources. BizTalkAssemblyResourceManager failed to complete end type change request. Failed to deploy map "XXX.BTS2013.XXX.Maps.map_XXXX_R01_InsLabProc".
    Error saving map. Stored procedure returned non-zero result. Check if source and target schemas are present. Error saving map. Stored procedure returned non-zero result. Check if source and target schemas are present.
    0 0
    Error 46
    Failed to deploy map "XXX.BTS2013.XXX.Maps.map_XXXX_R01_InsLabProc".
    Error saving map. Stored procedure returned non-zero result. Check if source and target schemas are present.
    0 0
    I also tried to import an MSI file from our test environment to see if that would work... got the same errors. After spending hours (not kidding) looking for an answer, all I could find is that a hotfix would work. So I got the hotfix from Microsoft Support, applied it, then rebooted. Still getting the same errors. I'm absolutely at a standstill. Interestingly, I got this application to deploy yesterday, and then the next time I deployed it I started getting these errors. I'm ready to pull my hair out!
    Is there an answer for this out there somewhere?  Any help would be appreciated.
    Thanks,
    Dave

    Hi Dave,
    Which hotfix have you applied? I don't think a hotfix for this issue is available for BizTalk 2013 yet. You should create a support ticket with Microsoft to get a solution.
    If this answers your question please mark as answer. If this post is helpful, please vote as helpful by clicking the upward arrow mark next to my reply.

  • GRC 5.3: CUP asks to perform risk analysis even when there are no risks in request

    Hi All,
    We recently upgraded from GRC 5.3 SP13 to SP22.
    The one issue we are facing after the upgrade is that CUP now forces approvers to do risk analysis even when there are no risks in the CUP request, i.e. the Risk tab is green.
    Previously, approvers were able to approve requests without doing risk analysis if there were no risks in the request; CUP only forced them to run risk analysis when risks were associated with the request.
    But now it forces approvers to perform risk analysis even if there are no risks, so approvers cannot approve risk-free requests without first running risk analysis.
    Please advise.
    Thanks
    Aditi

    Hi,
    Can you check whether any change was made under Configuration -> Workflow -> Stage -> Approvers?
    Regards,
    Claudio

  • Variable 4MKTHI01 (Planning area 4CRM0001) is not available

    Hi All,
    I have activated the planning profiles and levels from client 000.
    Now, when I try to open the planning level, it says "Variable 4MKTHI01 (Planning area 4CRM0001) is not available".
    Please advise how to activate the variable 4MKTHI01 under planning level 4MP70000 of planning area 4CRM001.
    Thanks
    Santosh

    Hi,
    take a look at notes 886388 and 1083129.
    kind regards Michael

  • When trying to perform app updates, it says the apps are no longer available. Microsoft Office and Chrome?

    When trying to perform app updates, it says the apps are no longer available: Microsoft Office Word, Excel, PowerPoint and Chrome?

    Not sure how I got in AppleWorks. I was having slow internet issues as well. I checked the apps later, and they were able to update.

  • VIPM: pkg build aborted: "The following source VIs or Libraries are missing" (xpost from JKI forums)

    Hey folks-
    I hope this is not 'illegal' with respect to forum rules or anything, but I was thinking that some of you who might be able to provide some insight on this might not necessarily be lurking on the JKI forums as much as NI's...
    http://forums.jki.net/topic/1718-pkg-build-aborted-the-following-source-vis-or-libraries-are-missing...
    I've wasted many hours trying to figure this out... 
    thx
    -pat

    Re: Rules
    You're OK in my book since you declared it a cross-post.
    It stops us from wasting our time trying to answer what has already been answered.
    Sorry I can't help with the QSM issue.
    Take care,
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Report Performance with Bind Variable

    Getting some very odd behaviour with a report in APEX v 3.2.1.00.10
    I have a complex query that takes 5 seconds to return via TOAD, but takes from 5 to 10 minutes in an APEX report.
    I've narrowed it down to one particular bind variable. If I hard-code the date, it returns in 6 seconds, but if I let the date be passed in from a parameter it takes 5+ minutes again.
    Relevant part of the query (an inline view) is:
    ,(select rglr_lect lect
    ,sum(tpm) mtr_tpm
    ,sum(enrols) mtr_enrols
    from ops_dash_meetings_report
    where meet_ev_date between to_date(:P35_END_DATE,'DD/MM/YYYY') - 363 and to_date(:P35_END_DATE,'DD/MM/YYYY')
    group by rglr_lect) RPV
    I've tried replacing the "to_date(:P35_END_DATE,'DD/MM/YYYY') - 363" with another item which is populated with the date required (and verified by checking session state). If I replace the :P35_END_DATE with an actual date the performance is fine again.
    The weird thing is that a trace file shows me exactly the same Explain Plan as the TOAD Explain where it runs in 5 seconds.
    Another odd thing is that another page in my application has the same inline view and doesn't hit the performance problem.
    The trace file did show some control characters (circumflex M) after each line of this report's query, where these weren't anywhere else in the trace queries. I wondered if there was some sort of corruption in the source.
    No problems due to pagination as the result set is only 31 records and all being displayed.
    Really stumped here. Any advice or pointers would be most welcome.
    Jon.

    Don't worry about the Time column; the cost and cardinality are more important for seeing whether the CBO is making different decisions for whatever reason.
    Remember that the explain plan shows the expected execution plan and a trace shows the actual execution plan. So what you want to do is compare the query with bind variables from an APEX page trace to a trace from TOAD (or sqlplus or whatever). You can do this outside APEX like this...
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 1';
    Enter and run your SQL statement...;
    ALTER SESSION SET sql_trace=FALSE;
    This will create a trace file in the directory returned by...
    SELECT value FROM v$parameter WHERE name = 'user_dump_dest'
    ...which you can then format with tkprof.
    I am assuming that you're not going over DB links or anything else slightly unusual?
    Cheers
    Ben

  • SLOW report performance with bind variable

    Environment: 11.1.0.7.2, Apex 4.01.
    I've got a simplified report page where the report runs slowly compared to running the same query in sqldeveloper. The report region is based on a pl/sql function returning a query. If I use a bind variable in the query inside apex it takes 13 seconds to run, and if I hard code a string it takes only a few hundredths of a second. The query returns one row from a table which has 1.6 million rows. Statistics are up-to-date and the columns in the joins and where clause are indexed.
    I've run traces using p_trace=YES from Apex for both the bind variable and hard coded strings. They are below.
    The sqldeveloper explain plan is identical to the bind variable plan from the trace, yet the query runs in 0.0x seconds in sqldeveloper.
    What is it about bind variable syntax in Apex that is causing the bad execution plan? Apex Bug? 11g bug? Ideas?
    tkprof output from Apex trace with bind variable is below...
    select p.master_id link, p.first_name||' '||p.middle_name||' '||p.last_name||' '||p.suffix personname,
    p.gender||' '||p.date_of_birth g_dob, p.master_id||'*****'||substr(p.ssn,-4) ssn, p.status status
    from persons p
    where
       p.person_id in (select ps.person_id from person_systems ps where ps.source_key  like  LTRIM(RTRIM(:P71_SEARCH_SOURCE1)))
    order by 1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.01          0          1         27           0
    Fetch        2     13.15      13.22      67694      72865          0           1
    total        4     13.15      13.23      67694      72866         27           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 62  (ODPS_PRIVACYVAULT)   (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT ORDER BY (cr=72869 pr=67694 pw=0 time=0 us cost=29615 size=14255040 card=178188)
          1   FILTER  (cr=72869 pr=67694 pw=0 time=0 us)
          1    HASH JOIN RIGHT SEMI (cr=72865 pr=67694 pw=0 time=0 us cost=26308 size=14255040 card=178188)
          1     INDEX FAST FULL SCAN IDX$$_0A300001 (cr=18545 pr=13379 pw=0 time=0 us cost=4993 size=2937776 card=183611)(object id 68485)
    1696485     TABLE ACCESS FULL PERSONS (cr=54320 pr=54315 pw=0 time=21965 us cost=14958 size=108575040 card=1696485)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (ORDER BY)
          1    FILTER
          1     HASH JOIN (RIGHT SEMI)
          1      INDEX   MODE: ANALYZED (FAST FULL SCAN) OF
                     'IDX$$_0A300001' (INDEX)
    1696485      TABLE ACCESS   MODE: ANALYZED (FULL) OF 'PERSONS' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       1276        0.00          0.16
      db file sequential read                       812        0.00          0.02
      direct path read                             1552        0.00          0.61
    ********************************************************************************
    Here's the tkprof output with a hard coded string:
    select p.master_id link, p.first_name||' '||p.middle_name||' '||p.last_name||' '||p.suffix personname,
    p.gender||' '||p.date_of_birth g_dob, p.master_id||'*****'||substr(p.ssn,-4) ssn, p.status status
    from persons p
    where
       p.person_id in (select ps.person_id from person_systems ps where ps.source_key  like  LTRIM(RTRIM('0b')))
    order by 1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.02       0.04          0          0          0           0
    Execute      1      0.00       0.00          0          0         13           0
    Fetch        2      0.00       0.00          0          8          0           1
    total        4      0.02       0.04          0          8         13           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 62  (ODPS_PRIVACYVAULT)   (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT ORDER BY (cr=10 pr=0 pw=0 time=0 us cost=9 size=80 card=1)
          1   FILTER  (cr=10 pr=0 pw=0 time=0 us)
          1    NESTED LOOPS  (cr=8 pr=0 pw=0 time=0 us)
          1     NESTED LOOPS  (cr=7 pr=0 pw=0 time=0 us cost=8 size=80 card=1)
          1      SORT UNIQUE (cr=4 pr=0 pw=0 time=0 us cost=5 size=16 card=1)
          1       TABLE ACCESS BY INDEX ROWID PERSON_SYSTEMS (cr=4 pr=0 pw=0 time=0 us cost=5 size=16 card=1)
          1        INDEX RANGE SCAN IDX_PERSON_SYSTEMS_SOURCE_KEY (cr=3 pr=0 pw=0 time=0 us cost=3 size=0 card=1)(object id 68561)
          1      INDEX UNIQUE SCAN PK_PERSONS (cr=3 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 68506)
          1     TABLE ACCESS BY INDEX ROWID PERSONS (cr=1 pr=0 pw=0 time=0 us cost=2 size=64 card=1)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (ORDER BY)
          1    FILTER
          1     NESTED LOOPS
          1      NESTED LOOPS
          1       SORT (UNIQUE)
          1        TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                       'PERSON_SYSTEMS' (TABLE)
          1         INDEX   MODE: ANALYZED (RANGE SCAN) OF
                        'IDX_PERSON_SYSTEMS_SOURCE_KEY' (INDEX)
          1       INDEX   MODE: ANALYZED (UNIQUE SCAN) OF 'PK_PERSONS'
                      (INDEX (UNIQUE))
          1      TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                     'PERSONS' (TABLE)

    Patrick, interesting insight. Thank you.
    The optimizer must be peeking at my bind variables with its eyes closed. I'm the only one testing and I've never passed %anything as a bind value. :)
    Here's what I've learned since my last post:
    I don't think that sqldeveloper is actually using the explain plan it says it is. When I run explain plan in sqldeveloper (with a bind variable) it shows me the exact same plan as Apex with a bind variable. However, when I run autotrace in sqldeveloper, it takes a path that matches the hard coded values, and returns results in half a second. That autotrace run is consistent with actually running the query outside of autotrace. So, I think either sqldeveloper isn't really using bind variables, OR it is using them in some other way that Apex does not, or maybe optimizer peeking works in sqldeveloper?
    Using optimizer hints to tweak the plan helps. I've tried both /*+ FIRST_ROWS */ and /*+ index(ps pk_persons) */ and both drop the query to about a second. However, I'm loath to use hints because of the very dynamic nature of the query (and Tom Kyte doesn't like them either). The hints may end up hurting other variations on the query.
    I also tested the query by wrapping it in a select count(1) from ([long query]) and testing the performance in sqldeveloper and in Apex. The performance in that case is identical with both bind variables and hard coded variables for both Apex and SqlDeveloper. That to me was very interesting and I went so far as to set up two bind variable report regions on the same page. One region wrapped the long query with select count(1) from (...) and the other didn't. The wrapped query ran in 0.01 seconds, the unwrapped took 15ish seconds with no other optimizations. Very strange.
    To get performance up to acceptable levels I have changed my function returning query to:
    1) Set the equality operator to "=" for values without wildcards and "like" for user input with wildcards. This makes a HUGE difference IF no wildcard is used.
    2) Insert a /*+ FIRST_ROWS */ hint when users chose the column that requires the sub-query. This obviously changes the optimizer's plan and improves query speed from 15 seconds to 1.5 seconds even with wildcards.
    I will NOT be hard coding any user supplied values in the query string. As you can probably tell by the query, this is an application where sql injection would be very bad.
    Jeff, regarding your question about "like '%' || :P71_SEARCH_SOURCE1 || '%'": I've found that putting wildcards around values, particularly at the beginning, will negate any indexing on the column in question and slow performance even more.
    I'm still left wondering if there isn't something in Apex that is breaking the optimizer "peeking" that Patrick describes. Perhaps something in the way it switches contexts from apex_public_user to the workspace schema?

  • Source List for MRP Area

    Is there a possibility to maintain a source list for MRP purposes at the storage location level? We receive requirements from a couple of our subsidiaries, and we used to consolidate these into a single requirement for our suppliers; however, this has changed and we now have to show each subsidiary's requirements individually. We currently use only a single plant; previously we created additional plants for this and maintained a separate source list for each. Is this the best way to represent and run MRP for each subsidiary? Thanks.

    Dear Kennedy,
    A source list at storage location level is not possible!
    An MRP area represents an organizational unit for which you can perform material requirements planning separately.
    An MRP area can include one or several storage locations of a plant or a subcontractor. You can define MRP areas in a plant.
    By defining MRP areas, you can perform MRP for each area. This means the right quantity of materials can be provided on time for each area, such as a production line, a storage location for spare parts, or subcontractor stock.
    Bye,
    Muralidhara
