Bad performance of a view on views

We have two views with identical key fields, and each performs very well on its own (in the order of less than one second).
Each view adds its own extra fields from a different data set.
I want to combine those extra fields into one record per key combination. Because some key combinations can be missing from either set, I created an extra view that is a union of only the key fields of both views, and I use that view as the driver for the combination: I outer join both original views to this driver. When I now query the resulting view, the performance is bad (in the order of 3.5 minutes).
(This is independent of the number of result records.)
The explain plan shows no full table scans.
Any suggestions to improve this construction?
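Simplified, the construction looks like this (the real views have more columns; c_contr_keys is just an illustrative name for the driver view):
create or replace view c_contr_keys as
select contract_no, agc, salesgroup from c_contr_opb_B
union
select contract_no, agc, salesgroup from c_contr_opb_CD;
select k.contract_no
,      k.agc
,      k.salesgroup
,      tB.B_pricing          -- plus the other B_ columns
,      tCD.C_pricing         -- plus the other C_ and D_ columns
from   c_contr_keys   k
,      c_contr_opb_B  tB
,      c_contr_opb_CD tCD
where  tB.contract_no(+)  = k.contract_no
and    tB.agc(+)          = k.agc
and    tB.salesgroup(+)   = k.salesgroup
and    tCD.contract_no(+) = k.contract_no
and    tCD.agc(+)         = k.agc
and    tCD.salesgroup(+)  = k.salesgroup;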

I have split the two parts of the union query and analysed both parts separately; see below.
The subviews perform as follows:
c_contr_opb_CD with a restriction on contract_no: 7 rows in 0:02 seconds
               without restrictions: 2216 records in 3:34 minutes
c_contr_opb_B  with a restriction on contract_no: 5 rows in 0:01 seconds
               without restrictions: 1567 records in 0:05 minutes
(both views share 1332 records with common keys)
When I build the view over an outer join of the two views and use c_contr_opb_B as the driver, it performs the same with and without the restriction on contract_no, which suggests it executes the 'group by' clause in the view c_contr_opb_CD before applying the restriction.
When, on the other hand, I use the view c_contr_opb_CD as the driver, it seems to apply the restriction before the 'group by' clause in that view.
Is there a way to always apply the restriction before the 'group by' clause?
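One idea I have not tried yet is to explicitly ask for join predicate pushdown into the grouped view, roughly like this (untested; this assumes an optimizer version that supports pushing join predicates into a view containing a GROUP BY):
select /*+ PUSH_PRED(tCD) */
       tB.contract_no
,      tB.agc
,      tB.salesgroup
,      tB.B_pricing           -- plus the other columns as in the query below
,      tCD.C_pricing
from   c_contr_opb_CD tCD
,      c_contr_opb_B  tB
where  tCD.contract_no(+) = tB.contract_no
and    tCD.agc(+)         = tB.agc
and    tCD.salesgroup(+)  = tB.salesgroup
and    tB.contract_no     = lpad('20042002',20);
Below are the two test queries with their timings and execution plans, followed by the view definitions.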
select tB.Contract_no
, tB.agc
, tB.salesgroup
, tB.B_pricing
, tB.B_No_of_elements
, tB.B_m3_elements
, tB.C_complete
, tCD.C_pricing
, tCD.C_No_of_elements
, tCD.C_m3_elements
, tCD.D_pricing
, tCD.D_No_of_elements
, tCD.D_m3_elements
from c_contr_opb_CD tCD
, c_contr_opb_B tB
where (tCD.Contract_no(+) = tB.Contract_no
and tCD.agc(+) = tB.agc
and tCD.salesgroup(+) = tB.salesgroup)
and tB.contract_no=lpad('20042002',20);
Query runs in 3:40 minutes and returns 5 rows.
Execution plan (operation, options, object):
SELECT STATEMENT
  MERGE JOIN (OUTER)
    SORT (JOIN)
      VIEW C_CONTR_OPB_B
        SORT (GROUP BY)
          VIEW
            UNION-ALL
              NESTED LOOPS
                INDEX (UNIQUE SCAN) PK_CMHD
                TABLE ACCESS (BY INDEX ROWID) CMMT
                  INDEX (RANGE SCAN) PK_CMMT
              NESTED LOOPS
                INDEX (UNIQUE SCAN) PK_CMHD
                TABLE ACCESS (BY INDEX ROWID) CMSV
                  INDEX (RANGE SCAN) PK_CMSV
    SORT (JOIN)
      VIEW C_CONTR_OPB_CD
        SORT (GROUP BY)
          VIEW
            UNION-ALL
              NESTED LOOPS
                INDEX (RANGE SCAN) X_SHP_CM
                TABLE ACCESS (BY INDEX ROWID) CMMT
                  INDEX (RANGE SCAN) PK_CMMT
              NESTED LOOPS
                INDEX (RANGE SCAN) X_SHP_CM
                TABLE ACCESS (BY INDEX ROWID) CMSV
                  INDEX (RANGE SCAN) PK_CMSV
28 rows selected.
select tCD.Contract_no
, tCD.agc
, tCD.salesgroup
, tB.B_pricing
, tB.B_No_of_elements
, tB.B_m3_elements
, tB.C_complete
, tCD.C_pricing
, tCD.C_No_of_elements
, tCD.C_m3_elements
, tCD.D_pricing
, tCD.D_No_of_elements
, tCD.D_m3_elements
from c_contr_opb_B tB
, c_contr_opb_CD tCD
where (tB.Contract_no(+) = tCD.Contract_no
and tB.agc(+) = tCD.agc
and tB.salesgroup(+) = tCD.salesgroup)
and tCD.contract_no=lpad('20042002',20);
Query runs in 0:05 minutes and returns 7 rows.
Execution plan (operation, options, object):
SELECT STATEMENT
  MERGE JOIN (OUTER)
    SORT (JOIN)
      VIEW C_CONTR_OPB_CD
        SORT (GROUP BY)
          VIEW
            UNION-ALL
              NESTED LOOPS
                INDEX (UNIQUE SCAN) PK_CMHD
                TABLE ACCESS (BY INDEX ROWID) CMMT
                  INDEX (RANGE SCAN) PK_CMMT
              NESTED LOOPS
                INDEX (UNIQUE SCAN) PK_CMHD
                TABLE ACCESS (BY INDEX ROWID) CMSV
                  INDEX (RANGE SCAN) PK_CMSV
    SORT (JOIN)
      VIEW C_CONTR_OPB_B
        SORT (GROUP BY)
          VIEW
            UNION-ALL
              NESTED LOOPS
                INDEX (RANGE SCAN) X_SHP_CM
                TABLE ACCESS (BY INDEX ROWID) CMMT
                  INDEX (RANGE SCAN) PK_CMMT
              NESTED LOOPS
                INDEX (RANGE SCAN) X_SHP_CM
                TABLE ACCESS (BY INDEX ROWID) CMSV
                  INDEX (RANGE SCAN) PK_CMSV
create or replace view c_contr_opb_B as
select     Contract_no
     ,     agc
     ,     salesgroup
     ,     min(C_complete)          C_complete
     ,     sum(B_pricing)          B_pricing
     ,     sum(B_No_of_elements)     B_No_of_elements
     ,     sum(B_m3_elements)     B_m3_elements
     from (     select     h.cm_num               Contract_no
          ,     ms.agc
          ,     ms.sales_group               salesgroup
          ,     decode(ms.closed_reason,'C001','*',' ')     C_complete
          ,     ms.unit_price*ms.ord_qty     B_pricing
          ,      decode(ms.agc,'PROJ',fnc_struct_m3(ms.ccn,ms.structure_id,ms.structure_rev)
                    ,0)*ms.ord_qty          B_m3_elements
          ,     ms.ord_qty                B_No_of_elements
          from     cmmt     ms
          ,     cmhd     h
          where     ms.ccn          = h.ccn
          and     ms.cm_num     = h.cm_num
          and     ms.price_bucket     = 'B'
          and     ms.closed_reason in ('C001',' ')
          and     h.ccn          = 'SPAN'
          UNION ALL
          select     h.cm_num               Contract_no
          ,     ms.agc
          ,     ms.sales_group               salesgroup
          ,     decode(ms.closed_reason,'C001','*',' ')     C_complete
          ,     ms.unit_price*ms.ord_qty     B_pricing
          ,      0                    B_m3_elements
          ,     ms.ord_qty                B_No_of_elements
          from     cmsv     ms
          ,     cmhd     h
          where     ms.ccn          = h.ccn
          and     ms.cm_num     = h.cm_num
          and     ms.price_bucket     = 'B'
          and     ms.closed_reason in ('C001',' ')
          and     h.ccn          = 'SPAN'
     ) group by Contract_no
     ,     agc
     ,     salesgroup;
create or replace view c_contr_opb_CD as
select     Contract_no
     ,     agc
     ,     salesgroup
     ,     sum(C_pricing)          C_pricing
     ,     sum(C_No_of_elements)     C_No_of_elements
     ,     sum(C_m3_elements)     C_m3_elements
     ,     sum(D_pricing)          D_pricing
     ,     sum(D_No_of_elements)     D_No_of_elements
     ,     sum(D_m3_elements)     D_m3_elements
     from (     select     h.cm_num               Contract_no
          ,     ms.agc
          ,     ms.sales_group               salesgroup
          ,     ms.unit_price*ms.ord_qty     C_pricing
          ,     ms.ord_qty                C_No_of_elements
          ,      decode(ms.agc,'PROJ',fnc_item_m3(ms.ccn,ms.item,ms.revision),0)
                    *ms.ord_qty          C_m3_elements
          ,     ms.unit_price*ms.ord_qty     D_pricing
          ,     decode(ms.agc,'PROJ',fnc_item_aantal_D(ms.ccn,ms.item,ms.revision),0)
                                   D_No_of_elements
          ,      decode(ms.agc,'PROJ',fnc_item_m3_D(ms.ccn,ms.item,ms.revision),0)
               * decode(ms.agc,'PROJ',fnc_item_aantal_D(ms.ccn,ms.item,ms.revision),0)
                                   D_m3_elements
          from     cmmt     ms
          ,     cmhd     h
          where     ms.ccn          = h.ccn
          and     ms.cm_num     = h.cm_num
          and     ms.price_bucket     = 'C'
          and     ms.closed_reason in ('F001','F003',' ')
          and     h.ccn          = 'SPAN'
          UNION ALL
          select     h.cm_num               Contract_no
          ,     ms.agc
          ,     ms.sales_group               salesgroup
          ,     ms.unit_price*ms.ord_qty     C_pricing
          ,     ms.ord_qty                C_No_of_elements
          ,      0                    C_m3_elements
          ,     ms.unit_price*ms.ord_qty     D_pricing
          ,     0                    D_No_of_elements
          ,      0                    D_m3_elements
          from     cmsv     ms
          ,     cmhd     h
          where     ms.ccn          = h.ccn
          and     ms.cm_num     = h.cm_num
          and     ms.price_bucket     = 'C'
          and     ms.closed_reason in ('F001','F003',' ')
          and     h.ccn          = 'SPAN'
     ) group by Contract_no
     ,     agc
     ,     salesgroup;
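Another thing I may try is to repeat the restriction explicitly for the outer-joined view, so the optimizer gets a constant predicate that it can push inside that view before the 'group by' (untested sketch, based on the slow variant above):
select tB.contract_no
,      tB.agc
,      tB.salesgroup
,      tB.B_pricing           -- etc., columns as before
,      tCD.C_pricing
from   c_contr_opb_CD tCD
,      c_contr_opb_B  tB
where  tCD.contract_no(+) = tB.contract_no
and    tCD.agc(+)         = tB.agc
and    tCD.salesgroup(+)  = tB.salesgroup
and    tCD.contract_no(+) = lpad('20042002',20)   -- extra constant filter on the grouped view
and    tB.contract_no     = lpad('20042002',20);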

Similar Messages

  • Performance problem on view with spatial column - resolved

    I have had a problem with queries on a view that had a spatial column, where the view did not belong to the logged-in user. When my spatial window was retrieved by a sub-query, the query did a full table scan instead of using the spatial index.
    I have found that the problem can be resolved by granting MERGE VIEW on the view to the querying user.
    The view can be as simple as SELECT * FROM table.
    The badly performing query could be as simple as
    select id from T1.tstview
    where SDO_RELATE(coordinates,
    (SELECT coordinates FROM T1.tstWINDOW WHERE ID = '1')
    ,'mask=INSIDE+COVEREDBY querytype=WINDOW') = 'TRUE'  ;
    I think this is a bug, and have raised an SR - MERGE VIEW is supposed to override issues with the "security intent" of a view.
    The workaround is simple enough once you're aware of it and I thought it was worth passing on.
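    Concretely, using the example objects above, the grant is of this form (query_user stands for whichever account runs the query):
    GRANT MERGE VIEW ON t1.tstview TO query_user;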

    Thanks for sharing this workaround!
    Which Oracle version did you test?

  • Performance point of view

    Hi every one,
    I am new to XI and have a small doubt about mappings.
    In XI we have 4 types of mappings:
    1.Graphical mapping
    2.XSLT mapping,
    3.Java Mapping
    4.ABAP mapping
    From a performance point of view, which one is best?
    Please explain this to me in detail.
    I will give full points for correct answers.
    Thanks and Regards,
    P.Naganjana Reddy

    Hi,
    refer to these links:
    Re: Why xslt  mapping?
    http://searchsap.techtarget.com/tip/0,289483,sid21_gci1217018,00.html
    /people/r.eijpe/blog/2006/02/20/xml-dom-processing-in-abap-part-iiib150-xml-dom-within-sap-xi-abap-mapping
    /people/sravya.talanki2/blog/2006/12/27/aspirant-to-learn-sap-xiyou-won-the-jackpot-if-you-read-this-part-iii
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/8a57d190-0201-0010-9e87-d8f327e1dba7
    Regards,
    Nithiyanandam

  • Which design is best from a performance point of view?

    Hello
    I'm writing a small system that needs to track changes to certain columns on 4 different tables. I'm using triggers on those columns to write the changes to a single "change register" table, which has 12 columns. Because the majority of tracked data is not shared between the tables, most of the columns will have null values. From a design point of view it is apparent that having 4 separate change register tables (one for each main table that is being tracked) would be better in terms of avoiding lots of null columns for each row, but I was trying to trade this off against having a single table to see all changes that have been made across the tracked tables.
    From a performance point of view though, will there be any real difference whether there are 4 separate tables or 1 single register table? I'm only ever going to be inserting into the register table, and then reading back from it at a later date and there won't be any indexes on it. Someone I work with suggested that there would be more overhead on the redo logs if a single table was used rather than 4 separate tables.
    Any help would be appreciated.
    David

    "The volumes of data are going to be pretty small, maybe a couple of thousand records each day; it's an OLTP environment with 150 concurrent users max."
    Consider also the growth of the data, and whether you will regularly move data to a historical DB or whether the same tables will hold the ever-increasing number of records.
    The point that my colleague raised was that multiple
    inserts into a single table across multiple
    transactions could cause a lot of redo contention,
    but I can't see how inserting into one table from
    multiple triggers would result in more redo
    contention that inserting into multiple tables. The
    updates that will fire the triggers are only ever
    going to be single row updates, and won't normally
    cause more than one trigger to fire within a single
    transaction. Is this a fair assumption to make?
    David
    I agree with you. The only thing I would consider, rather than a redo problem, is the locking that could occur when logs from different tables all have to go into a single table; I mean, if after inserting a log record you might need to update it...
    In that case, if two or more users have to update the same log row, you could have problems.
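    For concreteness, a minimal sketch of the single change-register approach being discussed (all table and column names here are invented):
    create table change_register (
      table_name   varchar2(30)
    , pk_value     varchar2(100)
    , column_name  varchar2(30)
    , old_value    varchar2(4000)
    , new_value    varchar2(4000)
    , changed_by   varchar2(30) default user
    , changed_on   date         default sysdate
    );
    create or replace trigger trg_orders_track
    after update of status on orders      -- one such trigger per tracked table/column set
    for each row
    begin
      insert into change_register (table_name, pk_value, column_name, old_value, new_value)
      values ('ORDERS', to_char(:old.order_id), 'STATUS', :old.status, :new.status);
    end;
    /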

  • Please validate my logic performance point of view:

    Please validate my logic from a performance point of view:
    logic I wrote :
       LOOP AT i_mara INTO wa_mara.
    *-----For material description, go to makt table.
          SELECT SINGLE maktx
            FROM makt
            INTO l_maktx
       WHERE matnr = wa_mara-matnr
             AND SPRAS = 'E'.
          IF sy-subrc = 0.
            wa_mara-MAKTX = l_maktx.
          ENDIF.        " IF sy-subrc = 0.
    *-----For Recurring Inspection, go to marc table.
          SELECT prfrq
            FROM marc
            INTO l_prfrq
            UP TO 1 ROWS
       WHERE matnr = wa_mara-matnr.
          ENDSELECT.
          IF sy-subrc = 0.
            wa_mara-prfrq = l_prfrq.
          ENDIF.          " IF sy-subrc = 0.
          MODIFY TABLE i_mara FROM wa_mara
                 TRANSPORTING maktx.
          CLEAR : wa_mara.
       ENDLOOP.   " LOOP AT i_mara INTO wa_mara.
    Or is the approach below better?
    To SELECT all the maktx values from makt and all prfrq values from marc
    in two internal tables and
    Loop at i_mara.
      LOOP at all maktx itab
    and pass corresponding maktx values into i_mara table
    and pass corresponding prfrq values into i_mara table
    ENDLOOP.
    OR
    Is there any better-performing logic you can suggest?
    THANKS IN ADVANCE.

    OK, this is very funny: so if someone finds a good way to code, he should wait until he has 1198 points before he writes a performance wiki?
    That would mean only people with high SDN points can write a wiki.
    For your information, the definition of a wiki is here:
    [http://en.wikipedia.org/wiki/Wiki |http://en.wikipedia.org/wiki/Wiki]
    It is all about contribution and sharing.
    Did you try that code on a production or a quality server? If you did, you would not say that, because the results I showed in that blog are what I myself tested on a quality system of our client.
    And for your information, I did my internship at an SAP AFS consultancy firm and created this account at that time. I have since joined that company and now work there as a developer.
    If you have worked on client system development in SD and MM, you will know that most of the time we use header and item tables like
    likp, lips
    vbak, vbap
    vbrk, vbrp
    and most of the time we come across nested loops with similar kinds of conditions.
    In this question he has MATNR as the reference.
    If you look at it properly, you can see that both tables are sorted,
    and the select statement uses FOR ALL ENTRIES.
    For your information, a delivery document item without a header should not exist; if you are aware of DB concepts, you know that would be a foreign key violation.
    But let's think about a situation like that: even in that case, if there isn't any header data, the client simply won't ask for that record (you would know this if you have worked with clients).
    Last but not least, I don't care about my points at SDN; I just wanted to share what I know, because I have a very good job here anyway. Don't try to put people down just because they are new.
    Thomas Zloch: I never said it was my code. I saw it somewhere, checked it, and blogged it so I could find it again when I want it, and I checked it in SE30 (not SE38). I know most ABAP developers don't check that much, so I just wanted to help.
    Rui Pedro Dantas: yes, you are correct, we don't need to use it most of the time since a sorted table is easy, but there are programs that work with bulky data loads and we can use it in places like that. Thanks for telling the truth.
    Nafran
    Sorry if I said anything to hurt anyone.

  • Bad performance updating purchase order (ME22N)

    Hello!
    Recently we have been facing bad performance when updating purchase orders using transaction ME22N. The problem has occurred since we implemented change documents for a custom table T. T is used to store additional data for purchase order positions using the BAdIs ME_PROCESS_PO_CUST and ME_GUI_PO_CUST.
    I've created a change document C_T for T using transaction SCDO. The update module of the change document is triggered in the method POST of BAdI ME_PROCESS_PO_CUST.
    Checking transaction SM13, I noticed that the update requests of ME22N have status INIT for several minutes before they are processed. I also tried excluding the call of the update module for change document C_T (in method POST) - the performance problem still occurs!
    The problem only occurs with transaction ME22N, so I assume that the reason is the new change document C_T.
    Thanks for your help!
    Greetings,
    Wolfgang

    I agree with Vikram: we don't have enough information, not even a small hint about the usage of this field, so which answer do you expect? (The quality of an answer depends ...) This analysis must be executed on your system...
    From a technical point of view, BAPI_PO_CHANGE has an EXTENSIONIN table parameter; fill it using structure BAPI_TE_MEPOITEM[X], already containing CI_EKPODB (*) and CI_EKPODBX (**)
    Regards,
    Raymond
    (*) I guess you have used this include
    (**) I guess you forgot this one (same field names but data element always BAPIUPDATE)

  • Bad performance quadro 4800 and premiere pro

    Could someone please help me understand if something's wrong with my settings or if my 4800 is defective?
    I run OSX Lion 10.7.2 and have the quadro fx 4800 installed in a:
    Mac Pro 4,1
    2 x 2.93Ghz Quad-core Intel Xeon
    12GB of Ram
    with two additional Geforce GT 120 installed too.
    the latest drivers for the 4800
    GPU driver version: 7.12.9 270.05.10f03
    CUDA driver version: 4.0.50
    I run Premiere 5.5.2 and have tested the performance with the two different settings:
    Mercury Playback Engine GPU Acceleration
    Mercury Playback Engine Software Only
    I did this test because I didn’t feel my 4800 did the work I’ve read everywhere it should. Adding simple text titles to AVCHD footage made my playback drop frames.
    Anyhow, I tested my machine with 1920x1080 AVCHD and added video layers until I started to see stutter during playback. First with “Mercury Playback Engine GPU Acceleration”: with 14 video layers sized down so you could see them all beside one another, playback started to drop frames. The yellow line at the top of the timeline was still yellow. Shouldn’t it turn red if the footage needs rendering?
    I then switched to “Mercury Playback Engine Software Only” and the yellow line turned red. The strange thing is that when I played back the same 14 layers of video, the dropped frames were gone!! Isn’t this beyond strange??? Shouldn’t everything run more smoothly with the “Mercury Playback Engine GPU Acceleration”?
    Has it got anything to do with my Geforce GT 120 installed? Should I get rid of those? My two 24 inch apple displays are both connected to the 4800.
    PLEASE help me or redirect me to some good forums!

    After testing Final Cut (which I love) and Premiere back and forth with the exact same media, I notice that editing ProRes in Final Cut is the best when it comes to just editing the film. Sure, Premiere does take native MTS files, but what good is that when it doesn't run fluidly? Comparing playback and editing with ProRes in Final Cut and Premiere makes me realize that on my computer, don't ask me why, a rendered sequence with ProRes material in Premiere is far more jerky than a sequence with ProRes that "does not need rendering" in Final Cut. Shouldn't they be the same?? Movements like camera pans that were originally shot really smoothly aren't as smooth as they should be when playing back in Premiere (and yes, I know I have the right sequence settings and all). Final Cut, however, gives me what I want.
    To me it all comes down to how good the editing App presents what you currently are editing. Adobe has many advantages with the dynamic link and so on but playback is a such an important part of editing!! Has anyone got the same problem with "not as smooth playback as you would really want" even though the sequence is rendered displaying a green line??
    Thanks for the links lasvideo but I practically read every article there is about CUDA, premiere and mercury playback engine already.
    Anyone knows anything about Adobe CS 6 release dates?
    On 12 Dec 2011 at 22:03, lasvideo wrote in Premiere Pro CS5 & CS5.5:
    Some answers for you right from the horse's mouth:
    http://blogs.adobe.com/premiereprotraining/2011/02/cuda-mercury-playback-engine-and-adobe-premiere-pro.html
    http://forums.adobe.com/message/3804386#3804386
    http://forums.adobe.com/community/premiere/faq_list

  • Is the best practice to use database views or view objects?

    Hi everyone,
    If the option is available, is it preferable to consolidate as much data as possible into a database view instead of doing this through view objects? It seems the answer would be yes, but I would like to hear the pros and cons related to performance, etc.
    While I do not mind a detailed discussion, practical "rule-of-thumb" advice is what I am after; I am a newbie that needs general guidelines - not theories.
    James

    Performance is the main driver behind the question because I am wondering if it is faster to send a single large record set across a network or several small ones and "assemble" them at the client level.
    It is probably better to send one large record set, but you will need to take into account the time required to create this one large record set in the DB (maybe Oracle object types, or arrays of Oracle object types).
    Check this for some VO performance advices: Advanced View Object Techniques  (especially property: "In Batches Of" which defines number of roundtrips between app server and db)
    As far as creating an updatable database view, I know there are minor tricks that are required to make that happen from a strictly SQL standpoint. But I am curious about the best way to go in JDeveloper.
    Some solutions:
    Using Updatable Views with ADF | Real World ADF
    Andrejus Baranovskis's Blog: How to Update Data from DB View using ADF BC DoDML Method
    Dario
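    Regarding the "minor tricks" for an updatable database view mentioned above: on the Oracle side the usual approach is an INSTEAD OF trigger. A minimal sketch with made-up EMP/DEPT-style names:
    create or replace view emp_dept_v as
    select e.empno, e.ename, e.sal, d.dname
    from   emp e, dept d
    where  d.deptno = e.deptno;
    -- Columns of a join view are only updatable under key-preservation rules,
    -- so route the DML to the base table yourself:
    create or replace trigger emp_dept_v_upd
    instead of update on emp_dept_v
    for each row
    begin
      update emp
      set    ename = :new.ename
      ,      sal   = :new.sal
      where  empno = :old.empno;
    end;
    /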

  • View of views, what will speed faster or slower why ?

    We can create a view of a view, but will it work faster or slower?
    Can anyone tell me the reasons?

    So it is (I think) generally accepted best practice
    to always write views which work only with tables and
    not other views. Of course this can lead to lots of
    duplication, if you have many views which are
    variations on a theme. That seems a bit excessive to me, but perhaps I'm in the minority here.
    I, for example, would have no problem creating multiple layers of views in order to prevent identical logic from getting spread across lots of different views, because it's almost inevitable that when the logic changes, the "variation on a theme" views are going to get out of sync.
    If you have some concept of "pipeline orders" for orders that are in various stages of the pipeline, I'd have no problem creating a PIPELINE_ORDERS view and having the CUSTOMER_PIPELINE, SALESMAN_PIPELINE, and WAREHOUSE_PIPELINE views reference that view rather than embedding the "what qualifies as a pipeline order" logic in all the views in order to ensure that everyone is using the same definition. I've seen way too many cases where a simple test got coded in slightly different ways in different views resulting in subtly different output that caused subtly different bugs.
    Of course, depending on the logic, you might be able to create an IS_PIPELINE function that could be put in all the views in order to centralize the same logic, but that may cause bigger problems if Oracle decides it now has to call the function a bazillion times for the bazillion closed orders in your table. To which, of course, you might decide to create a function-based index on the IS_PIPELINE function, which may or may not be an option depending on the function signature and whether all the criteria are columns in the same table. And you'll probably need to ensure the optimizer knows the return value for this function is skewed (which may be more or less difficult depending on the Oracle version). But you're pretty quickly dealing with performance issues that are at least as complicated as the performance issues you've created for the optimizer by having nested views.
    Justin
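    As a small illustration of the layering Justin describes (the ORDERS columns and status codes here are hypothetical):
    create or replace view pipeline_orders as
    select o.*
    from   orders o
    where  o.status in ('BOOKED', 'SCHEDULED', 'PICKED');   -- the one place this definition lives
    create or replace view customer_pipeline as
    select customer_id
    ,      count(*)         open_orders
    ,      sum(order_total) open_value
    from   pipeline_orders
    group  by customer_id;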

  • Optimizing SQL for views on views

    I'm trying to optimize the performance of a view which joins two other views:
    select *
    from v1, v2
    where v2.xxx = v1.xxx
    when i select data from the view, i set a where clause that
    results in only one matching row in the view v1 that can be
    accessed by rowid (unique index). There is also a (non-unique)
    index on the column xxx of that v2 view which should by used by
    the optimizer (rule-based, 7.3.4.3.0).
    But it isn't. Instead the database performs a full table scan of
    the driving table of the v2 view, finds some rows and merges the
    data with those from the v1 view. But as the v2 view is very
    large it takes very long....
    When i type
    select * from v2 where xxx='abc'
    the query executes quickly because the index on xxx is used.
    What can prevent the optimizer from using the index on xxx in my
    view?
    I even tried to force use of the index by the INDEX hint but it
    didn't work.
    any help appreciated
    thanks

    Thanks kgronau,
    My Oracle gateway for SQL Server is on a Linux box and, obviously, the SQL Server is on a Windows box.
    In that case, how would I execute dg4msql_cvw.sql, which is under the $ORACLE_HOME/dg4msql/admin path on the Linux server?
    Regards
    Satish

  • Embed view in view container dynamically

    Hello Experts,
    I have links in my Web Dynpro ABAP view; when the user clicks on a link I need to show another view in a view container. For example, I have two links in my view: when the user clicks on link1 I need to show VIEW1 in the view container UI element, and when the user clicks on link2 I need to show VIEW2. I have written the code below in the link action method, but it is not working. Can anyone help with this?
      DATA : lo_window_controller TYPE REF TO if_wd_window_controller.
      DATA : lo_view_controller TYPE REF TO if_wd_view_controller.
      DATA : lo_window_rr TYPE REF TO if_wd_rr_window.
      lo_view_controller   = wd_this->wd_get_api( ).
      lo_window_controller = lo_view_controller->get_embedding_window_ctlr( ).
      lo_window_rr         = lo_window_controller->get_window_info( ).
    Embedding view
      lo_window_rr->embed_view( used_view_name     = 'V_EMP_TERMINATION'
                                embedding_position = 'V_ACTION/VCU_CONTAINER'
                                used_component_name = 'ZHR_MSS_APPL' ).
    with best regards
    K. Mohan Reddy

    Hi Mohan,
    You have written the code for embedding the view; now you have to create the navigation link, because only then can you navigate.
    Look at the sample code below for creating the navigation link dynamically:
    DATA lo_navi_services TYPE REF TO if_wd_navigation_services_new.
    DATA lo_api_v_main_wf TYPE REF TO if_wd_view_controller.
    DATA lo_view_usage TYPE REF TO if_wd_rr_view_usage.
    DATA lo_window TYPE REF TO if_wd_rr_window.
      lo_api_v_main_wf = wd_this->wd_get_api( ).
      lo_navi_services ?= lo_api_v_main_wf.
      lo_view_usage = lo_api_v_main_wf->get_view_usage( ).
      lo_window = lo_view_usage->get_window( ).
    CONSTANTS lc_target TYPE string VALUE 'VIEW_MAIN_WF/VC_WF'. "viewname /view vontainer name
    *Write the code for getting the view name here
    if view_name is not initial
       TRY.
         lv_window_name = lo_window->get_name( ).
         wd_comp_controller->fire_unactivate_all_pro_event( ).
            wd_this->m_navi_repository_handle = lo_navi_services->do_dynamic_navigation(
                source_window_name        = 'lv_window_name'
                source_vusage_name        = lo_view_usage->name
                source_plug_name          =  'source_out_plug'
             plug_parameters           = lv_plug_parameter
                target_view_name          = lv_view_name
                target_plug_name          = 'FROM_Plug'
             target_embedding_position = lc_target )."lv_target ).
          CATCH cx_wd_runtime_repository INTO lr_error.
        ENDTRY.
    Hope this piece of code is helpful.
    Regards,
    Chinnaiya

  • CMP 6.1 Entity bad performance.

    I'm using entity 1.1 EJBs on WL 6.1 and facing very bad performance:
    around 150 ms for an insert (I have 20 columns).
    When accessing an order interface to read 2 fields in a session bean method: around 90 ms.
    I'm very disappointed and confused. What should I look at to increase the performance? Any important tuning or parameters? Should I use EJB 2.0 to get significantly better performance?
    Thanks for any advice, because we are thinking of switching the whole application to stored procedures - a solution without entity beans and with fewer stateless session beans.
    My config:
    WL: 6.1 on Sun sparc
    SGBD: Sybase
    Entity: WebLogic 6.0.0 EJB 1.1 RDBMS (weblogic-rdbms11-persistence-600.dtd)
    Thanks

    Historically it's hard to get good performance & scalability out of Sybase
    without using stored procs. Using dynamic SQL on Sybase just doesn't do as
    well as procs. Oracle, on the other hand, can get very close to stored proc
    speed out of well written dynamic SQL.
    As far as weblogic goes, my experience is the focus of their testing for db
    related stuff is Oracle, then DB2, then MSSQLServer. Sybase is usually last
    on the list.
    As far as the 6.1 cmp, haven't used it much, but because of these other
    things I would be cautious about using it with Sybase.
    Joel
    "Antoine Bas" <[email protected],> wrote in message
    news:3cc7cdcf$[email protected]..
    >
    I'am using entity 1.1 EJB on WL 6.1 and facing very bad performances:
    around 150ms for an insert (i have 20 columns).
    When accessing an order interface to read 2 fields in a session beanmethod: around
    90 ms.
    I'am very disapointed and confused. What should I look up for
    to increase the performance ? Any important tuning or parameters ? ShouldI use EJB
    2.0 to have significant perf ?
    Thanks for any advice because we are thinking to switch all theapplication on stored
    procedures. A solution without Entity and fewer stateless session beans.
    My config:
    WL: 6.1 on Sun sparc
    SGBD: Sybase
    Entity: WebLogic 6.0.0 EJB 1.1 RDBMS(weblogic-rdbms11-persistence-600.dtd)
    >
    Thanks

  • Bad performance when open a bi publisher report in excel

    We use BI Publisher (XML Publisher) to create a customized report. For a small report, users like it very much. But for a bigger report, users complain about the performance when they open the file.
    I know it is not a native Excel file, and that may cause the bad performance. So I asked my users to save it to a new file in native Excel format. The new file is still worse than a normal Excel file when we open it.
    I did a test. When we save a BI Publisher report to Excel format, the size shrinks to 4 MB. But if we "copy all" and "Paste Special" values only to a new Excel file, the size is only 1 MB.
    Is there any way to improve that? Users are complaining every day. Thanks!
    I did a test today.
    I created a test report.
    Test 1: The original file from BIP in EBS is 10 MB. We save it to my local disk; when we open the file, it takes 43 sec.
    Test 2: We save the file in native Excel format; the file size is 2.28 MB and it takes 7 sec. to open.
    Test 3: We copy all cells and "Paste Special" to a new Excel file with values only. The file size is 1.66 MB and it takes only 1 sec. to open.
    Edited by: Rex Lin on 2010/3/31 11:26 PM

    EBS or Standalone BIP?
    If EBS, see this thread for suggestions on performance tuning and hints and tips:
    EBS BIP Performance Tuning - Definitive Guide?
    Note also that I did end up rewriting my report as PL/SQL producing a CSV file, and have done so with several large reports in BIP on EBS.
    Cheers,
    Dave
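    For reference, the PL/SQL-to-CSV route Dave mentions can be as simple as this sketch (the REPORT_DIR directory object and big_report_v view are invented names):
    declare
      l_file utl_file.file_type;
    begin
      l_file := utl_file.fopen('REPORT_DIR', 'big_report.csv', 'w', 32767);
      utl_file.put_line(l_file, 'CONTRACT_NO,AMOUNT,QTY');   -- header row
      for r in (select contract_no, amount, qty from big_report_v) loop
        utl_file.put_line(l_file, r.contract_no || ',' || r.amount || ',' || r.qty);
      end loop;
      utl_file.fclose(l_file);
    end;
    /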

  • Month view, week view and working week view in outlook calendar in wpf

    Hello everybody!
    I posted here
    http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/7d927ca0-a110-4ede-bb2c-fa0070625722/ about how to make an Outlook calendar sheet view, and tried
    http://www.codeproject.com/Articles/30881/Creating-an-Outlook-Calendar-Using-WPF-Part-2 this sample. Now I want it in week view, month view and working week view (without Sat & Sun); does anybody have any idea for this further requirement?
    Thanks & Regards, dhampall

    Hi dhampall_79,
    OK, open the project you shared and open the RudiGrobler.Controls/Calendar/Themes/Generic.xaml file; there you will find the following part of the code:
    <Style TargetType="{x:Type local:Calendar}">
    <Setter Property="Template">
    <Setter.Value>
    <ControlTemplate TargetType="{x:Type local:Calendar}">
    <Border Background="#E3EFFF"
    BorderBrush="#6593CF"
    BorderThickness="2,2,2,2">
    <Grid>
    <Grid.ColumnDefinitions>
    <ColumnDefinition Width="50" />
    <ColumnDefinition Width="*" />
    </Grid.ColumnDefinitions>
    <Grid.RowDefinitions>
    <RowDefinition Height="38" />
    <RowDefinition Height="22" />
    <RowDefinition Height="*" />
    </Grid.RowDefinitions>
    <StackPanel Orientation="Horizontal" Grid.Row="0" Grid.Column="0" Grid.ColumnSpan="2" Margin="5,0,0,0">
    <Button Content="Previous" Height="25" Command="{x:Static local:Calendar.PreviousDay}" Background="{x:Null}" BorderBrush="{x:Null}">
    </Button>
    <Button Content="Next" Height="25" Command="{x:Static local:Calendar.NextDay}" Background="{x:Null}" BorderBrush="{x:Null}">
    </Button>
    </StackPanel>
    <Border BorderBrush="#6593CF" BorderThickness="0,0,1,1" Grid.Column="0" Grid.Row="1" />
    <local:CalendarDayHeader Grid.Column="1" Grid.Row="1" x:Name="PART_DayHeader"/>
    <ScrollViewer Grid.Row="2" Grid.Column="0" Grid.ColumnSpan="2" x:Name="PART_ScrollViewer">
    <Grid>
    <Grid.ColumnDefinitions>
    <ColumnDefinition Width="50" />
    <ColumnDefinition Width="*" />
    </Grid.ColumnDefinitions>
    <local:CalendarLedger Grid.Column="0" x:Name="PART_Ledger"/>
    <local:CalendarDay Grid.Column="1" x:Name="PART_Day" />
    </Grid>
    </ScrollViewer>
    </Grid>
    </Border>
    </ControlTemplate>
    </Setter.Value>
    </Setter>
    </Style>
    The code above defines the appearance of the view. You could create your own view and replace
    <local:CalendarLedger Grid.Column="0" x:Name="PART_Ledger"/>
    <local:CalendarDay Grid.Column="1" x:Name="PART_Day" />
    Change the above two lines to your own control, and then you will get what you want.
    best regards,
    Sheldon _Xiao[MSFT]
    MSDN Community Support | Feedback to us
    Microsoft
    Please remember to mark the replies as answers if they help and unmark them if they provide no help.

  • View application pages - view forms views and application pages. enumerate lists

    View Application Pages - "View forms, views, and application pages. Enumerate lists." If we disable this permission in SharePoint, the user is blocked from getting into application pages, which is good. But now I have a few list view web parts on a page, and the user is not able to see those reports based on a view; it just shows "Working on it". As soon as I enable the View Application Pages permission, it works.
    I need a permission level for "view forms and views" only.
    MCTS Sharepoint 2010, MCAD dotnet, MCPDEA, SharePoint Lead

    Hi Amit,
    SharePoint has a feature called “ViewFormPagesLockDown” at site collection scope. After enabling the feature, all groups / users not having the “View Application Pages” permission will not be able to navigate to pages like “_layouts/viewlsts.aspx”
    or “pages/forms/allitems.aspx”.
    So, for your issue, please disable the ViewFormPagesLockDown feature via PowerShell command:
    $lockdownFeature = get-spfeature viewformpageslockdown
    disable-spfeature $lockdownFeature -url [the URL of your site]
    More information:
    http://sharepointtechie.blogspot.jp/2011/06/blocking-access-to-application-pages.html
    http://sureshpydi.blogspot.jp/2013/12/viewformpageslockdown-feature-in.html
    Best Regards,
    Wendy
    Forum Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
    [email protected]
    Wendy Li
    TechNet Community Support

  • Embedding view in view container UI element

    Hello,
       I have a requirement which is given below:
    I have a MAIN view with a view container UI element. This container holds one of 3 views (VIEW1, VIEW2 and VIEW3) depending on user input. Initially VIEW1 is displayed (this is set as the default view). Then the user navigates to VIEW2, enters some selection criteria and confirms. Then VIEW3 is displayed with the entered selection criteria. Then the user clicks on search in the MAIN view and the RESULT view is displayed. When the user uses the back functionality in the RESULT view, the MAIN view is displayed again but the view container shows VIEW1. The user wants to see VIEW3 in the view container in the MAIN view.
    Please let me know if there is anyway to achieve this.
    Regards
    Nilanjan

    Hi Nilanjan,
    Create three context attributes V1,V2,V3 of type char1.
    Default value for V1 is 'X'. 
    Bind the visible property of each view container to the above attributes like
    View1-V1
    View2-V2
    View3-V3
    When you run the application, View1 is displayed by default, because its default value is set to 'X'.
    Now, depending on the logic, change the value of each attribute to 'X' or SPACE.
    Eg:
        DATA lo_el_context TYPE REF TO if_wd_context_element.
        DATA ls_context TYPE wd_this->Element_context.
        DATA lv_v1 TYPE wd_this->Element_context-v1.
        DATA lv_v2 TYPE wd_this->Element_context-v2.
        DATA lv_v3 TYPE wd_this->Element_context-v3.
    *   get element via lead selection
        lo_el_context = wd_context->get_element( ).
    *   @TODO handle not set lead selection
        IF lo_el_context IS INITIAL.
        ENDIF.
    *   set single attribute
        lo_el_context->set_attribute(
          name =  `V1`
          value = 'X' ).
    *   set single attribute
        lo_el_context->set_attribute(
          name =  `V2`
          value = '' ).
    *   set single attribute
        lo_el_context->set_attribute(
          name =  `V3`
          value = '' ).
    or
    *   set single attribute
        lo_el_context->set_attribute(
          name =  `V1`
          value = '' ).
    *   set single attribute
        lo_el_context->set_attribute(
          name =  `V2`
          value = 'X' ).
    *   set single attribute
        lo_el_context->set_attribute(
          name =  `V3`
          value = '' ).
    or
    *   set single attribute
        lo_el_context->set_attribute(
          name =  `V1`
          value = '' ).
    *   set single attribute
        lo_el_context->set_attribute(
          name =  `V2`
          value = '' ).
    *   set single attribute
        lo_el_context->set_attribute(
          name =  `V3`
          value = 'X' ).
    Regards,
    Amarnath S
