How to use the delta index in TREX?

We have been using TREX with EP KM for 2 years.
However, we have never used the delta index functionality.
Could you share your experience on how to use delta indexes?
Thanks!

Hi,
Please check the link below:
http://help.sap.com/saphelp_nw70/helpdata/EN/d9/0418418291a854e10000000a1550b0/frameset.htm
Regards,
Yoganand.V

Similar Messages

  • How to use the index method for pathpoints object in illustrator through javascripts

    Hi,
    I am using Illustrator CS2 with JavaScript.
    How do I use the index method for the pathPoints object in Illustrator through JavaScript?

    Hi, what are you trying to do with path points?
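    For reference, a minimal ExtendScript sketch that addresses pathPoints by index (a hedged example: it assumes the active document contains at least one path item):
    // Walk the pathPoints collection of the first path item by index.
    var doc = app.activeDocument;
    var path = doc.pathItems[0];
    for (var i = 0; i < path.pathPoints.length; i++) {
        var pt = path.pathPoints[i];                  // indexed access
        $.writeln("point " + i + ": " + pt.anchor);   // anchor is an [x, y] array
    }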
    CarlosCanto

  • How to use the change log in ODS to track Delta change?

    People say that historical data (like delta changes) in an ODS can be tracked in the change log. How can the change log be used to track historical data?
    Thanks

    Kevin
    See if this helps:
    Every ODS object is represented on the database by three transparent tables:
    Active data: A table containing the active data (A table)
    Activation queue: For saving ODS data records that are to be updated but that have not yet been activated. The data is deleted after the records have been activated.
    Change log: Contains the change history for delta updating from the ODS object into other data targets, such as ODS objects or InfoCubes.
    An exception is the transactional ODS object, which is only made up of the active data table.
    The tables containing active data are constructed according to the ODS object definition, meaning that key fields and data fields are specified when the ODS object is defined. The activation queue and the change log have the same table structure: they have the request ID, package ID and record number as their key.
    Database structure changes:
    http://help.sap.com/saphelp_nw04/helpdata/en/d2/d53ec3efdc9b47a9502c3a4565320c/frameset.htm
    Hope this helps
    Thanks
    Sat

  • How to use the table maintenance events for validating the input entries..?

    Hi,
    I have created a Z table with 6 fields, all of which are KEY fields. All are of CHAR type. I have created the table maintenance generator for it. While maintaining entries in the table, it saves an entry even when I leave a field blank. But I don't want it that way: all the fields in my table are mandatory, one should enter values for all of them, and otherwise it should not allow the entry to be saved. I think this can be done using the table maintenance events. Can someone tell me how to use the table maintenance events, which event to use for my requirement, and what logic needs to be written?
    Or is there any other way to solve my problem?
    Please share your inputs. Thanks in advance.
    Best regards,
    paddu.

    In the table maintenance generator, choose Environment --> Modifications --> Events. On the screen that appears, we need to create the events. On the EVENTS screen, press New Entries, enter 01 (Before Saving the Data in the Database) and a name (this will become a PERFORM routine). Then click the Editor pushbutton to the right of the entry; in the popup that appears you can create an include program, and inside that include program you write your code.
    Here is the documentation for event 01 (Before Saving the Data in the Database):
    Event 01: Before Saving the Data in the Database
    Use
    This event occurs before new, changed or deleted entries are written to the database. Other activities can be performed, for example:
    hidden entry processing
    fill hidden fields
    flag data to be written to hidden tables after the database change.
    To have the changes saved by the central maintenance dialog routines, SY-SUBRC must be set to 0 at the end of the routine.
    Realization
    This event has no standard routine. The following global data is available for the realization of the user routine:
    internal table TOTAL
    field symbols <ACTION> and <ACTION_TEXT>
    <STATUS>-UPD_FLAG
    If internal table data are to be changed before saving, the changes should be made in both the internal table TOTAL and in the internal table EXTRACT.
    FORM abc.
      DATA: F_INDX LIKE SY-TABIX.        "Index to note the line found
      LOOP AT TOTAL.
        IF <ACTION> = desired constant.  "replace with the required action code
          READ TABLE EXTRACT WITH KEY <vim_xtotal_key>.
          IF SY-SUBRC EQ 0.
            F_INDX = SY-TABIX.
          ELSE.
            CLEAR F_INDX.
          ENDIF.
    *     ... make the desired changes to the line TOTAL ...
          MODIFY TOTAL.
          CHECK F_INDX GT 0.
          EXTRACT = TOTAL.
          MODIFY EXTRACT INDEX F_INDX.
        ENDIF.
      ENDLOOP.
      SY-SUBRC = 0.                      "signal success to the central maintenance routines
    ENDFORM.
    Regards,
    Joy.

  • How to use the CMS functionality in Sun Portal Server 7.2

    Hi All,
    How do I use the CMS functionality via the ccd.war portlet that is available in the library? I could add it to my channel, but I am not able to see the functionality; it shows the error message "You are currently not logged in. Please login." Should I create user IDs and their respective roles in order to use the CMS functionality?
    Has anyone used this? I could do this in GlassFish Server.
    Any Input is appreciated.
    Thanks & regards
    Srikanth

    Have a look at the "Roles" section of the Portal Server 7.2 content management system guide:
    http://docs.sun.com/source/820-4275/index.html . You can also look at project mirage (https://mirage.dev.java.net) for some screencasts.
    Alternatively,
    1. ccd.war has 3 portlets in it:
    (a) custom content definition portlet
    (b) custom content portlet
    (c) workflow portlet
    2. In order to work with these portlets, the user needs to be in any one of the roles below:
    (a)Consumer (b) Editor (c) Approver (d) Administrator (e) Submitter (f) Contributor (g) Publisher
    3. By default ccd.war gets deployed using a default roles file (/var/opt/SUNWportal/tmp/ccd.roles.properties)
    Note: On Windows, you may not find this file.
    4. Access the portlets as a user in any of the roles mentioned in ccd.roles.properties
    (OR)
    you can use a new roles file that maps to your custom roles. For this, undeploy the existing ccd.war and deploy it again with a new roles.properties file.
    Hope this helps!

  • How to get the column index inside a dataTable

    Hello,
    before I get staked: there are multiple threads handling topics similar to the one I'm asking about, but none gives an answer. If there is one, I'm probably not skilled enough to see it.
    So here is my problem: I've built a web interface to a time-recording system. The hours worked on a certain project are displayed in a dataTable component that is generated from a MySQL query. Each entry (column/row) contains an inputText component to display and edit the specific value.
    Editing one of these inputText elements fires a valueChangeEvent, which reads the new value and stores it in the database. For that I need to know the row and column index of the inputText component that fired the event.
    Using the getClientId method of the valueChangeEvent I get some information that makes it possible to calculate the row/column index. A typical clientId looks like "form_table:mainTable:0:_id14". "form_table" is the id of the form the dataTable is in. "mainTable" is the id of the dataTable component. "0" is the row the component is in. And finally "_id14" stands for the id randomly given to the inputText component by JSF.
    My problem is now that, though I can calculate the column out of the "_id14", this calculation is hardcoded. So every time I add a component before the dataTable, the calculation needs to be adjusted in the code.
    The Questions:
    - How can I force a meaningful id indicating a column index for the inputText components inside the columns of a dataTable?
    - Nicer, since it is not a workaround: how can I get the column index inside the dataTable in a natural way (e.g., from the valueChangeEvent the specific inputText component fires)?
    After some investigation here on the board and on the net I know multiple ways to get the row index (things like component binding and so on), but I can't find an answer on how to get the column index.
    Thanks to all answers and/or links to things my eyes missed while searching for one.

    ...then index 0 becomes index 1 and my program doesn't work properly
    The program works properly, just not as you expect it to.
    As you've noticed the table gives you the flexibility to move columns around. So if you move column 0 to column 1, why would you expect to still use 0 as the index? The table manages the reordering of columns for you to make sure the data being displayed in each table column comes from the correct column in the data model.
    You can manage this yourself using one of the following methods (I forget which one):
    table.convertColumnIndexToModel(int viewColumnIndex)
    table.convertColumnIndexToView(int modelColumnIndex)
    Or, you can get data from the data model directly:
    table.getModel().getValueAt(row, 0);

  • How to use the array elements

    Hi, I'm trying to automate a questionnaire using an array.
    // In the compositionReady for the stage
    sym.actual = 0;
    sym.arr = [ "Hi","Love","You" ]              // This is my question array.
    sym.checkArr = function(){                  // this is the function I use to check which question of the array I'm going to use
              sym.Question = arr[actual];      // I want to use the value "Hi", then increase my "actual" var and use the value "Love" next
    // In my symbol timeline I use a trigger; in that trigger I use:
    sym.stop();
    sym.quest = sym.getComposition().getStage().Question;         // I'm getting the Question value;
    sym.$("Ask").html(quest );                                                         //"Ask" is an empty textfield;
    sym.getComposition().getStage().actual ++;                    // Increase "actual" for the next time.
    But the code doesn't work; I think the problem is the way I use the array index.
    Please help!
    Thanks =)

    Hi there,
    chino_10 wrote:
    in the timeline trigger, why did you use a local variable?
    It's good practice to use local variables whenever possible. The quest variable will only apply to that trigger, and it won't carry over to another trigger in the same timeline. So when using a trigger to set the HTML of a text element in the same timeline, I find it cleaner to assign a new HTML value from a local variable.
    chino_10 wrote:
    and if you are calling the global variable "actual" from the compositionReady handler, why didn't you call it with the full path, like "sym.getComposition().getStage().actual ++;"?
    I think maybe I didn't express it as clearly as I could have. When you declare sym.actual in compositionReady, you actually have declared a variable scoped to the main stage symbol - it's not a global variable. To declare a global variable (which is not always best practice), your code in compositionReady would be:
    actual = 0;
    This actual variable would now be accessible from any element/symbol in the composition simply by assigning a value to actual. You may see how this could quickly get messy.
    So getting back to variable scope: if you declare a symbol variable using sym.varName, then any element in the same scope can call it using sym.varName. So your main timeline triggers can call it without using the full addressing of sym.getComposition().getStage().actual++. Instead, you could just use sym.actual++. Less code, and easier to read. But if you wanted to call it from another symbol (outside of the scope of the main stage), then you'd have to use the full addressing to actually address the variable, i.e., sym.getComposition().getStage().actual ++.
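    Putting that together, a minimal sketch of the pattern described above (a hedged example: it assumes the trigger runs on the main timeline and that "Ask" is the text element from the question):
    // compositionReady (stage scope): symbol-scoped variables
    sym.actual = 0;
    sym.arr = ["Hi", "Love", "You"];
    // trigger on the main timeline: same scope, so no full addressing needed
    sym.stop();
    var quest = sym.arr[sym.actual];   // local variable, lives only in this trigger
    sym.$("Ask").html(quest);          // write the current question into "Ask"
    sym.actual++;                      // advance to the next question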
    hth,
    Joe

  • Query not using the bitmap index

    Hi,
    Please have a look at the query below:
    SELECT
    A.flnumber,
    A.fldate,
    SUBSTR(C.sec,1,3) sect,
    D.element,
    C.class,
    SUM(C.qty) qty,
    A.indicator,
    DECODE(A.indicator, 'I', B.inrt, 'O', B.outrt, 'R', B.rting, NULL) direction,
    B.rting
    FROM
    Header A,
    Paths B,
    PathData C,
    ElementData D
    WHERE
    (D.category='N') AND
    (A.rt=B.rt) AND
    (C.element=D.element) AND
    (A.fldate=C.fldate AND
    A.flnumber=C.flnumber) AND
    C.element IN (SELECT codes FROM Master_codes WHERE type='F')
    GROUP BY A.flnumber,
         A.fldate,
         SUBSTR(C.sec, 1, 3),
         D.element,
         C.class,
         A.indicator,
         DECODE(A.indicator,'I', B.inrt, 'O', B.outrt,'R', B.rting, NULL),
    B.rting
    UNION ALL
    SELECT
    A.flnumber,
    A.fldate,
    SUBSTR(C.sec,1,3) sect,
    D.element,
    C.class,
    SUM(C.qty) qty,
    A.indicator,
    DECODE(A.indicator, 'I', B.inrt, 'O', B.outrt, 'R', B.rting, NULL) ROUTE_direction,
    B.rting
    FROM
    Header A,
    Paths B,
    PathData C,
    ElementData D
    WHERE
    (D.category='N') AND
    (A.rt=B.rt) AND
    (C.element=D.element) AND
    (A.fldate=C.fldate AND
    A.flnumber=C.flnumber) AND
    C.element NOT IN (SELECT codes FROM Master_codes WHERE type='F')
    GROUP BY A.flnumber,
         A.fldate,
         SUBSTR(C.sec, 1, 3),
         D.element,
         C.class,
         A.indicator,
         DECODE(A.indicator,'I', B.inrt, 'O', B.outrt,'R', B.rting, NULL),
    B.rting
    The cost in the explain plan is very high. The table PathData has 42,710,366 records and there is a bitmap index on the flnumber and fldate columns, but the query above does not use the indexes. The other tables in the list are fine, as their respective PKs and indexes are used, but the table PathData is going for a "Table Access by Local Index Rowid". I don't know what that means, but its cost is 7126, which is high. I can't figure out why the query is not using the bitmap indexes for this table.
    Please let me know what should be done.

    Thread: HOW TO: Post a SQL statement tuning request - template posting
    SELECT a.flnumber,
           a.fldate,
           Substr(c.sec, 1, 3)       sect,
           d.element,
           c.class,
           SUM(c.qty)                qty,
           a.INDICATOR,
           Decode(a.INDICATOR, 'I', b.inrt,
                               'O', b.outrt,
                               'R', b.rting,
                               NULL) direction,
           b.rting
    FROM   header a,
           paths b,
           pathdata c,
           elementdata d
    WHERE  ( d.category = 'N' )
           AND ( a.rt = b.rt )
           AND ( c.element = d.element )
           AND ( a.fldate = c.fldate
                 AND a.flnumber = c.flnumber )
           AND c.element IN (SELECT codes
                             FROM   master_codes
                             WHERE  TYPE = 'F')
    GROUP  BY a.flnumber,
              a.fldate,
              Substr(c.sec, 1, 3),
              d.element,
              c.class,
              a.INDICATOR,
              Decode(a.INDICATOR, 'I', b.inrt,
                                  'O', b.outrt,
                                  'R', b.rting,
                                  NULL),
              b.rting
    UNION ALL
    SELECT a.flnumber,
           a.fldate,
           Substr(c.sec, 1, 3)       sect,
           d.element,
           c.class,
           SUM(c.qty)                qty,
           a.INDICATOR,
           Decode(a.INDICATOR, 'I', b.inrt,
                               'O', b.outrt,
                               'R', b.rting,
                               NULL) route_direction,
           b.rting
    FROM   header a,
           paths b,
           pathdata c,
           elementdata d
    WHERE  ( d.category = 'N' )
           AND ( a.rt = b.rt )
           AND ( c.element = d.element )
           AND ( a.fldate = c.fldate
                 AND a.flnumber = c.flnumber )
           AND c.element NOT IN (SELECT codes
                                 FROM   master_codes
                                 WHERE  TYPE = 'F')
    GROUP  BY a.flnumber,
              a.fldate,
              Substr(c.sec, 1, 3),
              d.element,
              c.class,
              a.INDICATOR,
              Decode(a.INDICATOR, 'I', b.inrt,
                                  'O', b.outrt,
                                  'R', b.rting,
                                  NULL),
              b.rting
    Edited by: sb92075 on Mar 13, 2011 7:58 AM

  • How to use the function module /IRM/IPBB_AGREEMENT_CREATE.

    Hi all,
    Please help me with how to use the function module /IRM/IPBB_AGREEMENT_CREATE.
    It is a Vistex function module used to create sales contracts in SAP Vistex. If anyone has used this function module and has sample code, please share it.
    Thanks.

    FORM create_agreement TABLES pt_agreement
                                           CHANGING po_agreement .
      CONSTANTS: c_strt_knumh TYPE knumh VALUE '0000000000'.
      DATA: lc_kona  TYPE /irm/s_gkona,
            lc_cbasp TYPE /irm/s_ipcbasp,
          lt_cbapr TYPE /irm/t_ipcbapr,         "Partners
          lc_cbapr TYPE /irm/s_ipcbapr,
          lt_cbadt TYPE  /irm/t_ipcbadt,        "Dates
          lc_cbadt TYPE  /irm/s_ipcbadt,
          lt_cbafs TYPE  /irm/t_ipcbafs,
          lc_cbafs TYPE  /irm/s_ipcbafs,
          lt_cbacn TYPE  /irm/t_ipcbacn,
          lc_cbacn TYPE  /irm/s_ipcbacn,
          lt_cbacl TYPE  /irm/t_ipcbacl,
          lc_cbacl TYPE  /irm/s_ipcbacl,
          lt_cbtpv TYPE  /irm/t_ipagtpv,
          lc_cbtpv TYPE  /irm/s_ipagtpv,
          lt_texts TYPE  text_lh,
          lc_texts TYPE  itclh,
          lt_cbasd TYPE  /irm/t_ipcbasd,
          lc_cbasd TYPE  /irm/s_ipcbasd,
          lc_agreement    TYPE  /irm/s_ipcbasp_doc,
          lc_e_log_number TYPE  balognr,
          lt_messages     TYPE  /irm/t_gprolog.
      DATA: lt_vake      TYPE  cond_vakevb_t,
            lc_vake      TYPE LINE OF cond_vakevb_t,
            lt_konh      TYPE  /irm/t_gkonh,
            lc_konh      TYPE LINE OF /irm/t_gkonh,
            lt_konp      TYPE  /irm/t_gkonp,
            lc_konp      TYPE LINE OF /irm/t_gkonp,
            lt_konw      TYPE  /irm/t_gkonwu,
            lc_konw      TYPE LINE OF /irm/t_gkonwu,
            lt_konm      TYPE  /irm/t_gkonmu,
            lc_konm      TYPE LINE OF  /irm/t_gkonmu,
            lt_komg      TYPE  /irm/t_gkomg_index,
            lc_komg      TYPE LINE OF /irm/t_gkomg_index,
            lt_user_data TYPE  /irm/t_gpraxfu_index,
            lc_user_data TYPE LINE OF /irm/t_gpraxfu_index.
      DATA: lc_updt(1) TYPE c.
      DATA: lc_knumh TYPE knumh.
      DATA: BEGIN OF lc_str_knumh,
             hd(2) TYPE c VALUE '$$',
             inc_num(8) TYPE c,
            END OF lc_str_knumh.
      DATA: blank_agree_key TYPE knuma VALUE '~~~~~~~~~~'.
      FIELD-SYMBOLS <konh_line> LIKE LINE OF lt_konh.
      FIELD-SYMBOLS <konp_line> LIKE LINE OF lt_konp.
      DATA: lc_rule TYPE type_key_rule.
      READ TABLE pt_agreement INTO lc_rule INDEX 1.
    SELECT SINGLE * FROM  kona
            WHERE  vkorg   = lc_rule-vkorg
            AND    vtweg   = '10'
            AND    spart   = '10'
            AND    boart   = 'ZPS1'
            AND    botext  = lc_rule-sap_agkey.
    IF sy-subrc = 0.
       lc_updt = 'U'.
    ELSE.
      lc_updt = 'I'.
    ENDIF.
      LOOP AT pt_agreement INTO lc_rule.
        MOVE sy-tabix TO lc_str_knumh-inc_num.
        CONDENSE lc_str_knumh-inc_num NO-GAPS.
        WHILE lc_str_knumh-inc_num+7(1) = ' '.
          CONCATENATE '0' lc_str_knumh-inc_num INTO lc_str_knumh-inc_num.
          CONDENSE lc_str_knumh-inc_num NO-GAPS.
        ENDWHILE.
        CONCATENATE '$$' lc_knumh INTO lc_knumh.
        MOVE lc_str_knumh TO lc_knumh.
    *   MOVE c_strt_knumh TO lc_knumh.  "presumably disabled: would overwrite the generated key
        CLEAR: lc_konh, lc_konp, lc_komg.
        MOVE: lc_rule-vkorg    TO lc_komg-komg-vkorg,
              '10'             TO lc_komg-komg-vtweg,
              '10'             TO lc_komg-komg-spart,
              p_waers          TO lc_komg-komg-waerk,
              '1300'           TO lc_komg-komg-bukrs,
              lc_rule-lifnr    TO lc_komg-komg-lifnr,
              lc_knumh         TO lc_komg-knumh,
              lc_knumh         TO lc_konh-knumh,
              lc_knumh         TO lc_konp-knumh,
              lc_rule-datab    TO lc_konh-datab,
              lc_rule-datbi    TO lc_konh-datbi.
        CASE lc_rule-tablnam.
          WHEN 'A701'.                   "Every Agreement will have an A701 rule -
            "Therefore we can set up the header using A701
            MOVE: 'New       '  TO lc_kona-knuma,
                  lc_rule-vkorg TO lc_kona-vkorg,
                  '10'          TO lc_kona-vtweg,
                  '10'          TO lc_kona-spart,
                  'ZPS1'        TO lc_kona-boart,
                  'C'           TO lc_kona-abtyp,
                  'V'           TO lc_kona-kappl,
                  p_waers       TO lc_kona-waers,
                  lc_rule-knuma_ag TO lc_kona-abrex,
                  'ZPS2'        TO lc_kona-kobog,
                  lc_rule-datab TO lc_kona-datab,
                  lc_rule-datbi TO lc_kona-datbi,
                  lc_rule-sap_agkey TO lc_kona-botext,
                  '1300'        TO lc_kona-bukrs,
                  'I'           TO lc_kona-updkz.
            MOVE: 'New       '  TO lc_cbasp-knuma_ag,
                  'ZPS1'        TO lc_cbasp-boart_ag,
                  p_waers       TO lc_cbasp-waers,
                  'A'           TO lc_cbasp-setl_mth,
                  'B'           TO lc_cbasp-setl_typ,
                  'A2'          TO lc_cbasp-ident,
                  'E'           TO lc_cbasp-setlm,
                  'ZPDA'        TO lc_cbasp-pargr,
                  'X'           TO lc_cbasp-npric,
                  'LF'          TO lc_cbasp-stprl,
                  lc_rule-lifnr TO lc_cbasp-stpar,
                  lc_rule-contract_rev  TO lc_cbasp-rvnum,
                  'I'           TO lc_cbasp-updkz.
            CONCATENATE: blank_agree_key
                   lc_rule-lifnr        INTO lc_konh-vakey.
            MOVE: lc_rule-lifnr TO lc_konp-lifnr.
          WHEN 'A703'.
            CONCATENATE: blank_agree_key
                         lc_rule-kunnr      INTO lc_konh-vakey.
            MOVE lc_rule-kunnr TO lc_komg-komg-kunnr.
          WHEN 'A709'.
            CONCATENATE: blank_agree_key
            lc_rule-zzprodh1  lc_rule-zzprodh2  lc_rule-zzprodh3
                   lc_rule-zzprodh4  lc_rule-zzprodh5 INTO lc_konh-vakey.
            CONCATENATE: lc_rule-zzprodh1  lc_rule-zzprodh2  lc_rule-zzprodh3
                   lc_rule-zzprodh4  lc_rule-zzprodh5 INTO lc_komg-komg-prodh.
          WHEN 'A710'.
            CONCATENATE: blank_agree_key
                lc_rule-matkl   INTO lc_konh-vakey.
            MOVE lc_rule-matkl TO lc_komg-komg-matkl.
          WHEN 'A711'.
            CONCATENATE: blank_agree_key
                 lc_rule-matnr   INTO lc_konh-vakey.
            MOVE lc_rule-matnr TO lc_komg-komg-matnr.
            IF lc_rule-kschl = 'ZPPL'.
              MOVE: 'C'              TO lc_konp-krech,
                    'CAD'              TO lc_konp-konwa.
              lc_konp-kbetr = lc_rule-net_po_price * 1.
            ENDIF.
          WHEN 'A717'.
          WHEN 'A718'.
            CONCATENATE: blank_agree_key
                 lc_rule-zzextwg INTO lc_konh-vakey.
            MOVE lc_rule-zzextwg TO lc_komg-komg-zzextwg.
         WHEN 'A719'.
           CONCATENATE: blank_agree_key
                lc_rule-werks INTO lc_konh-vakey.
           MOVE lc_rule-werks TO lc_komg-komg-werks.
         WHEN 'A721'.
           CONCATENATE: blank_agree_key
                 lc_rule-kunnr lc_rule-werks INTO lc_konh-vakey.
           MOVE: lc_rule-kunnr TO lc_konp-kunnr,
                 lc_rule-kunnr TO lc_komg-komg-kunnr.
          WHEN 'A722'.
            CONCATENATE: blank_agree_key
                         lc_rule-vkbur INTO lc_konh-vakey.
            MOVE lc_rule-vkbur TO lc_komg-komg-vkbur.
          WHEN 'A724'.
            CONCATENATE: blank_agree_key
                          lc_rule-kunnr lc_rule-vkbur INTO lc_konh-vakey.
            MOVE: lc_rule-kunnr TO lc_konp-kunnr,
                  lc_rule-kunnr TO lc_komg-komg-kunnr,
                  lc_rule-vkbur TO lc_komg-komg-vkbur.
        ENDCASE.
        MOVE: 'A'                  TO lc_konh-kvewe,
              lc_rule-tablnam+1(3) TO lc_konh-kotabnr,
              lc_rule-kappl        TO lc_konh-kappl,
              lc_rule-kschl        TO lc_konh-kschl.
        REPLACE ALL OCCURRENCES OF '~' IN lc_konh-vakey WITH ' '.
        APPEND lc_konh TO lt_konh.
        CLEAR lc_konh.
    *--- Add in the KONP. Do we need to add?
        MOVE: lc_rule-kappl        TO lc_konp-kappl,
              lc_rule-kschl        TO lc_konp-kschl,
              'G'                  TO lc_konp-krech.
        IF lc_rule-kschl+3(1) = '%'.
          MOVE: 'A'              TO lc_konp-krech,
                '%'              TO lc_konp-konwa.
          lc_konp-kbetr = lc_rule-rebate_perc * 1.
        ENDIF.
        APPEND lc_konp TO lt_konp. CLEAR lc_konp.
        APPEND lc_komg TO lt_komg. CLEAR lc_komg.
      ENDLOOP.
      IF  lc_updt = 'I'.
        CALL FUNCTION '/IRM/IPCB_AGREEMENT_CREATE'
          EXPORTING
          I_MESSAGES_DISPLAY        = ' '
          I_SAVE_MESSAGES           = ' '
          I_COMMIT_WORK             = 'X'
          I_CALL_FROM_WS            = ' '
            is_kona                   = lc_kona
            is_cbasp                  = lc_cbasp
            it_cbapr                  = lt_cbapr
            it_cbadt                  = lt_cbadt
            it_cbafs                  = lt_cbafs
            it_cbacn                  = lt_cbacn
            it_cbacl                  = lt_cbacl
            it_cbtpv                  = lt_cbtpv
            it_texts                  = lt_texts
            it_cbasd                  = lt_cbasd
          IMPORTING
            es_agreement              = lc_agreement
            e_log_number              = lc_e_log_number
          TABLES
            t_messages                = lt_messages
         CHANGING
          CT_VAKE                   = lt_vake
          ct_konh                   = lt_konh
          ct_konp                   = lt_konp
          CT_KONW                   = lt_konw
          CT_KONM                   = lt_konm
           ct_komg                   = lt_komg
      CT_USER_DATA              = lt_user_data
          EXCEPTIONS
            no_documents_to_process   = 1
            no_authorization          = 2
            creation_failed           = 3
            new_pricing_not_maitained = 4
            OTHERS                    = 5.
        IF sy-subrc <> 0.
    * Implement suitable error handling here
        ELSE.
          MOVE: lc_agreement-knuma_ag TO po_agreement,
                lc_agreement-knuma_ag TO lc_kona-knuma.
        ENDIF.
        APPEND LINES OF lt_messages TO gt_messages.
      ELSE.
        MOVE-CORRESPONDING kona TO lc_kona.
      ENDIF.
      LOOP AT lt_konh ASSIGNING <konh_line>.
        MOVE lc_kona-knuma TO <konh_line>-vakey+0(10).
       move '&'           to <konh_line>-knumh+0(1).
      ENDLOOP.
      LOOP AT lt_konp ASSIGNING <konp_line>.
       MOVE lc_kona-knuma TO <konp_line>-vakey+0(10).
        move '&'           to <konp_line>-knumh+0(1). "was <konh_line>; presumably a copy-paste slip
      ENDLOOP.
      lc_kona-updkz = 'U'.
      lc_cbasp-updkz = 'U'.
      CLEAR lt_messages.
      CALL FUNCTION '/IRM/IPCB_AGREEMENT_CHANGE'
        EXPORTING
        I_MESSAGES_DISPLAY      = ' '
        I_SAVE_MESSAGES         = ' '
        I_COMMIT_WORK           = 'X'
        I_INIT_DATA             = 'X'
          is_kona                 = lc_kona
          is_cbasp                = lc_cbasp
         it_cbapr                = lt_cbapr
         it_cbadt                = lt_cbadt
         it_cbafs                = lt_cbafs
         it_cbacl                = lt_cbacl
         it_cbacn                = lt_cbacn
    *    IT_FIELDS               =
         it_texts                = lt_texts
       IMPORTING
         e_log_number            = lc_e_log_number
         TABLES
           t_messages              = lt_messages
        CHANGING
          cs_agreement            = lc_agreement
        CT_VAKE                 = lt_vake
          ct_konh                 = lt_konh
          ct_konp                 = lt_konp
        CT_KONW                 = lt_konw
        CT_KONM                 = lt_konm
          ct_komg                 = lt_komg
      CT_USER_DATA            = lt_user_data
        EXCEPTIONS
          no_documents_to_process = 1
          no_authorization        = 2
          change_failed           = 3
          agreement_locked        = 4
          OTHERS                  = 5.
    IF sy-subrc <> 0.
    * Implement suitable error handling here
    ENDIF.
      APPEND LINES OF lt_messages TO gt_messages.
    ENDFORM.                    " CREATE_AGREEMENT

  • How to download the ORACLE INDEX TUNING

    Hi,
    I am using Oracle 10g. I want to know how to download the Oracle Index Tuning wizard.
    Thank you!

    Hi,
    Thank you for your reply. I am asking about the Oracle Index Tuning Wizard 9.2.0.1.0 Production; the Oracle tuning book mentions it. That's why I am asking how to download the Oracle Index Tuning Wizard 9.2.0.1.0 Production, or where to find it in Oracle 10g.

  • Optimizer is not using the right index

    Hi gurus,
    there's something I don't understand. If someone can explain, it'll be greatly appreciated.
    Env:
    10gR2 on Redhat AS
    The table:
    SQL> desc stock_detail
    Name Null? Type
    NO NOT NULL NUMBER(15)
    BP_CODE NOT NULL VARCHAR2(10)
    STOC_CAT_CODE NOT NULL VARCHAR2(6)
    BUIL_CODE NOT NULL VARCHAR2(8)
    LOCA_CODE NOT NULL VARCHAR2(8)
    LOCA_SUB_CODE NOT NULL VARCHAR2(6)
    ITEM_NO NOT NULL NUMBER(8)
    QTY NOT NULL NUMBER(6)
    DEFAULT_SHELF NOT NULL VARCHAR2(1)
    CREATION_DATE NOT NULL DATE
    CREATION_USER NOT NULL VARCHAR2(8)
    CM_NO NUMBER(15)
    LANDING_COST NUMBER(11,2)
    SUPPLI_COST NUMBER(11,2)
    RMA_DEADLINE DATE
    MOD_USER VARCHAR2(8)
    MOD_DATE DATE
    RECEP_DATE DATE
    NOTE VARCHAR2(2000)
    FLAG VARCHAR2(1)
    REFUS VARCHAR2(1)
    STOC_MOVE_REAS_CODE VARCHAR2(6)
    I have many indexes on this table (5 or 6).
    There's one with item + business_unit (let's say INDEX_A)
    And there's one with item + category + business_unit (let's say INDEX_B)
    The following SQL always uses the wrong index:
    select nvl(sum(sd.qty),0)
    from stock_detail sd, location lo
    where sd.item_no = 419261 <- In INDEX_A & INDEX_B
    and sd.STOC_CAT_CODE='REG' <- In INDEX_B
    and sd.bp_code = 'TECMTL' <- In INDEX_A & INDEX_B
    and sd.buil_code <> 'TRANSIT'
    and sd.buil_code = lo.buil_code
    and sd.loca_code = lo.code
    and sd.loca_sub_code = lo.sub_code
    and nvl(lo.restricted, 'N') = 'Y';
    This SQL always uses INDEX_A. INDEX_B is far better.
    Stats of the index actually used (INDEX_A):
    Last Analyzed 2007-10-18 22:04:38
    Blevel 1
    Distinct Keys 72124
    Clustering Factor 105368
    Leaf Blocks 339
    Average Leaf Blocks Per Key 1
    Average Data Blocks Per Key 1
    Number of Rows 110285
    Sample Size 110285
    Stats of the index I want to be used (INDEX_B)
    Last Analyzed 2007-10-18 22:04:46
    Blevel 2
    Distinct Keys 77407
    Clustering Factor 103472
    Leaf Blocks 551
    Average Leaf Blocks Per Key 1
    Average Data Blocks Per Key 1
    Number of Rows 110285
    Sample Size 110285
    Is there a way to use the right index without adding a hint?
    Thanks in advance.
    Message was edited by:
    (made a mistake on the stats of index B)

    I assume the execution path is a nested loop driving of the table with the constant inputs.
    The key difference in the stats is that the second index has a blevel of 2. I'd guess that the cost of using the first index is 1, and the cost of using the second index is three.
    The basic cost of accessing a table through an index is:
    blevel +
    index selectivity (ix_sel) * leaf blocks +
    table selectivity (ix_sel_with_filtering) * clustering_factor.
    However, if the blevel is 1, then Oracle ignores it.
    Your index and table selectivities in both cases are 1/distinct_keys (since this is 10.2)
    The numbers involved with the leaf block and clustering factor calculations are so small (and similar) that the difference of 2 in the add-on for the blevel is the deciding factor.
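    As a rough illustration (ignoring the per-component rounding the optimizer applies), plugging the posted statistics into that formula and treating each selectivity as 1/distinct_keys gives:
    cost(INDEX_A) ≈ 0 + 339/72124 + 105368/72124 ≈ 0 + 0.005 + 1.46
    cost(INDEX_B) ≈ 2 + 551/77407 + 103472/77407 ≈ 2 + 0.007 + 1.34
    with INDEX_A's blevel of 1 ignored; the gap of roughly 2 between the two estimates is almost entirely the blevel term.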
    According to the statistics, though, the choice of index shouldn't make much difference to the performance, since the number of rows (and blocks) visited is likely to be the same. However, if you have an uneven distribution of values for individual columns, you may need a histogram on that column so that the optimizer can see the effect it has on the expected work.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    P.S. I suppose it's probably fair to mention that I wrote a pretty good book about how the optimizer works: http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html

  • How to track BIA delta index merge and rebuild

    Hi Gurus,
    I have set up separate process chains that load the BIA delta index and rebuild the full index.
    Is there any way to check whether the BIA is behaving as desired? I know about checking the RSDDSTATTREX table.
    Regards,
    Anil

    Not a reply from a guru,
    but here are a few takeaways you may wish to look into.
    PS - The BI Accelerator is still fairly virgin territory: there is little documentation and few people have worked with it.
    You can execute some performance checks with tcode - RSRV to validate the same.
    <b>BI Accelerator Performance Checks</b>
    1)Size of Delta Index
    If you have chosen delta mode for an index of a table, new data is not written to the main index but to the delta index. This can significantly improve performance during indexing. However, if the delta index is large, this can have a negative impact on performance when you execute queries. When the delta index reaches 10% of the main index, the system displays a warning.
    The system performs a merge for the index in repair mode. The settings are retained.
    2) Propose Delta Index for Indexes
    It is useful to create a delta index for large indexes that are often updated with new data. New data is not written to the main index, but to the delta index. This can significantly improve the performance of indexing, since the system only performs the optimize step on the smaller set of data from the delta index. The data from the delta index is used at the runtime of the query.
    The system determines proposals from the statistics data: Proposals are those indexes that received new data more than 10 times during the last 10 days. A prerequisite for these proposals is that the statistics for the InfoCube are switched on.
    Data in the main index and delta index should be merged at regular intervals (see test Size of Delta Index).
    In repair mode, the system sets the Has Delta Index property for the proposed indexes. The delta index is created when data is next loaded for this index.
    3)Compare Size of Fact Tables with Fact Index
    The system calculates the number of records in both fact tables (E and F tables) for the InfoCube and compares them with the number of records in the fact index of the BI accelerator index. If the number of records in the BI accelerator index is significantly greater than the number in the InfoCube (more than a 10% difference), you can improve query performance by rebuilding the BIA index.
    The following circumstances can result in differences in the numbers of records:
    ○ The InfoCube was compressed after the BI accelerator index was built. Since the BI accelerator index is not compressed, it may contain more records than the InfoCube.
    ○ Requests were deleted from the InfoCube after the BI accelerator index was built. The requests are deleted from the BIA index in the package dimension only. The records in the fact index are therefore no longer referenced and no longer taken into account when the query is executed; however, they are not deleted.
    http://help.sap.com/saphelp_nw2004s/helpdata/en/6b/cda64246c6c96ae10000000a155106/content.htm
    Hope it Helps
    Chetan
    @CP..

  • How to use the same POWL query for multiple users

    Hello,
    I have defined a POWL query which executes properly. But if I map the same POWL query to 2 portal users and they try to access the same page simultaneously, one of the users gets the error message
    "Query 'ABC' is already open in another session."
    where 'ABC' is the query name.
    Can you please tell me how to use the same POWL query for multiple users?
    A fast reply would be highly appreciated.
    Thanks and Regards,
    Sandhya

    Batch processing usually involves using actions you have recorded. Into an action you can insert a path that is used while processing documents. A path has a fixed size, so you may want to process only documents that have the same size. Look in the Actions palette fly-out menu for Insert Path. It inserts/records the current document's work path into the action being worked on; when the action is played, it inserts that path into the document as the current work path.

  • Can Oracle be forced to use the spatial index for sdo_filter in combination with an or clause? Difference between Enterprise and SE?

    We’re seeing the following issue: sql - Can Oracle be forced to use the spatial index for sdo_filter in combination with an or clause? - Stack Overflow (posted by a colleague of mine). We are curious to know whether this behaviour is due to a difference between Standard and Enterprise Edition, or whether we are doing something wrong in our DB config.
    We have also reproduced the issue on the following stacks:
    Oracle SE One 11.2.0.3 (with Spatial enabled)
    Redhat Linux 2.6.32-358.6.2.el6.x86_64 #1 SMP Thu May 16 20:59:36 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
    11.2.0.3.0 Standard Edition and 11.2.0.4.0 Standard Edition (both with Spatial enabled)
    Microsoft Windows Server 2003R2 Standard x64 Edition
    However, the SQL works fine if we try it on Oracle 11.2.0.3.0 Enterprise Edition.
    Any help or advice would be much appreciated.
    Kindest Regards,
    Kevin

    In my experience sdo_filter ALWAYS uses the spatial index, so that's not the problem. Since you did not provide the explain plans we can't say for sure, but I think yhu is right: Standard Edition can't use the bitmap operations, and thus it will take longer to combine the results of the two queries (because the optimizer will surely split this OR up into two parts, then combine them).
    BTW: when asking questions about queries here, it would be nice if you posted the queries as well, so that we do not have to check another website to see what you are doing. It will probably also get you more answers, because not everyone can be bothered to click that link. It would also have been nice if you had posted your own answer from the other post here as well; my recommendation would have been to use UNION ALL, but since you already found that out for yourself, my recommendation would have been a little late.

  • Okay so I set up my Time Capsule already and is now backing up 2 of my iMacs. Works great. What I want to know is how to use the TC to directly store files? I want to do this to delete some files but still have them on the TC for future reference..

    Okay, so I already set up my Time Capsule and it is now backing up 2 of my iMacs. Works great. What I want to know is how to use the TC to directly store files. I want to do this so I can delete some files on the 20-inch iMac but still have them on the TC for future reference, e.g. some movies in iTunes. I want to save them directly on the drive so I can delete them from iTunes and regain some storage. (P.S. On the 20-inch iMac (it's almost full: 320 GB), when I enter Time Machine, an item comes up in the Finder which reads "Time Machine Backups"; it can be ejected like a disc or a connected device. On the 20-inch iMac, I dragged some files onto it as if using it like a hard drive. Is this the correct method? Then I went to my 27-inch iMac and looked at "Time Machine Backups", hoping to see the files I dragged from the 20-inch iMac. But the files were not there, except for a folder that said "Backups.backupdb".) Can someone help me?

    It's not a good idea to use a network disk for both Time Machine backups and other things.  By design Time Machine will eventually consume all the space on its output disk, which will then cause problems for your other files.  I'd store those other files on an external disk connected to the Time Capsule.  The problem with that is that Time Machine will only back up files that are local to your Mac, which means you'll only have one copy of the files on or attached to your Time Capsule.
    By the way, you've been misled by poor field labeling on this forum into typing a large part of your message into the field intended for the subject.  In the future just type a short summary of your post into that field and type the whole message into the field below that.
