Question about workspace-enabled tables and triggers

Hello,
We created our table structures from Oracle Designer, and we also have triggers and table APIs for almost all of the tables. While reading about Workspace Manager, it seems that after I execute dbms_wm.enableversioning, it would rename my table, replace it with a view of the same name, and create other supporting views and tables. I have the following questions:
1) What happens to the triggers that are defined on the original tables? Would the triggers be attached to the view that replaces the original table?
2) Let's assume I have customer and cust_address tables, and customer id = 100 has two address rows in LIVE. I have a workspace based on the cust_address table. Would it be possible for a user to see all the address data (LIVE + workspace) for customer = 100 in one SQL query?
3) If I have two live customer address rows in cust_address and 3 rows for the same customer in a workspace, does Workspace Manager allow me to conditionally merge workspace rows into LIVE, i.e. could I add one row from the workspace to LIVE and update one LIVE row from the workspace?
Thanks for your answers.
Syed
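P.S. Based on the documentation, I believe the calls would look roughly like this (table, owner and workspace names are made up for illustration; the where_clause parameter of DBMS_WM.MergeTable is what I think would allow the conditional merge asked about in question 3) -- please correct me if I am off:

REM Version-enable the table: Workspace Manager renames CUST_ADDRESS to
REM CUST_ADDRESS_LT and creates a view named CUST_ADDRESS in its place.
EXECUTE DBMS_WM.EnableVersioning('CUST_ADDRESS');

REM Create a workspace and make changes in it.
EXECUTE DBMS_WM.CreateWorkspace('ADDR_REVIEW');
EXECUTE DBMS_WM.GotoWorkspace('ADDR_REVIEW');
REM ... DML against CUST_ADDRESS here is isolated to ADDR_REVIEW ...

REM Merge only selected rows back to the parent (LIVE) workspace.
BEGIN
  DBMS_WM.MergeTable(
    workspace    => 'ADDR_REVIEW',
    table_id     => 'SCOTT.CUST_ADDRESS',   -- hypothetical owner.table
    where_clause => 'customer_id = 100');
END;
/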

Ben,
Here is our business problem. We receive data about our customers as text files. The files contain customer name and address information as well as customer payment and debt information. For the sake of simplicity, let's assume I have customer, address, payment, debt and referral tables. From the file we load only the customer, address, payments and debts tables; once the data is loaded, an end user creates referral data, which is stored in the referral table.
When we receive data, we are not sure whether it contains any unique customer key (customer id, SSN, etc.), so we try to match the incoming data on last name, first name, date of birth, SSN, address and so on. If the system finds a match, it creates rows in all four tables (customer, address, payments and debts). If the system does not find a match, we have another table called potential_match (with columns similar to the customer table, plus matched_customer_id); the system inserts the customer rows into this table and also inserts the other data, such as address, payments and debts, into the respective tables. I want to process all of this data immediately, because the business needs to create referral data right away.
When end users find some time, they go to the potential_match table and try to determine whether two customers are really the same. If they decide the two customers are the same, we have to associate the address, payments, debts and referral data (plus some data in other tables created by other processes) with the original customer. So I need to write PL/SQL that essentially reassigns all the child records of customer B to customer A when A and B are the same.
I was looking into Workspace Manager and found that it might be a possible solution, but I am not sure how to use it so that end users do not have to change the way they work. If I do not add the customer's other data (address, payments, debts, etc.) to LIVE immediately, users will not be able to search for a customer's debt while that debt sits in a workspace. I want to give them a transparent view of LIVE and workspace data at all times (I do not want to give them the GotoWorkspace feature). Only a few users will have access to the potential_match table and GotoWorkspace, and they will merge/associate data with the original customers, i.e. remove it from the workspace and add it to LIVE. Sometimes users may also want to unmatch: if two customers were merged into one and the merge later turns out to be wrong, they should be able to unmerge them back into two different customers with their associated data. Users also want to run temporal queries, e.g. go back in time and see a customer's debt data as of Dec 2004.
Based on your experience, how should I design my workspaces? Do you think Workspace Manager is the way to go, or should I implement this with my own custom PL/SQL? I have never used it, so I need to understand its implications. Currently we are on Oracle 8.1.7, but we are moving to 10g very soon.
Thanks for all your help.
Syed
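P.S. In case it helps frame the question, the design I was considering (all names hypothetical) is one workspace per potential match, merged or discarded after review, with DBMS_WM.GotoDate for the temporal queries:

REM One workspace per potential match.
EXECUTE DBMS_WM.CreateWorkspace('MATCH_12345');
EXECUTE DBMS_WM.GotoWorkspace('MATCH_12345');
REM ... insert the candidate's address, payment and debt rows here ...
EXECUTE DBMS_WM.GotoWorkspace('LIVE');

REM Reviewer confirms the match: publish the workspace and drop it.
EXECUTE DBMS_WM.MergeWorkspace('MATCH_12345', remove_workspace => TRUE);

REM Reviewer rejects the match: discard the changes instead.
REM EXECUTE DBMS_WM.RollbackWorkspace('MATCH_12345');

REM Temporal query: the session sees the data as of Dec 1, 2004.
EXECUTE DBMS_WM.GotoDate('01-DEC-2004', 'DD-MON-YYYY');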

Similar Messages

  • Questions about database application tables connector

    Hi all,
    I need to manage several databases with OIM (each one with its own custom tables).
    Also, one of them will be my trusted target for the initial load of users. The structure of our tables is not similar to the one included in the example.
    The documentation explains how OraAppX.xml can be modified, but it does not say anything about its relationship to "DBTable_nonTrusted.xml" and/or "DBTable_trusted.xml". For example, if I don't have a column called USR_COMM_LANG, can I replace this column name with another one that I have in my database? And what about the references in the "DBTable_nonTrusted.xml" file? The connector documentation doesn't explain how references to that field (USR_COMM_LANG---xel_usr_comm_lang) are handled by the connector, or whether normal operation of the connector could be affected if a variable name is changed in the XML configuration file.
    Could it be that the adapters of this connector are hardcoded with the reference to xel_usr_comm_lang (???)
    The connector documentation includes a section titled "Adding Custom Database Columns for Provisioning and Reconciliation". The first impression from reading it is that it is possible to extend the predefined table, rather than create and define a completely new table structure.
    Has anyone rebuilt the "DBTable_nonTrusted.xml", "DBTable_trusted.xml" and "OraAppX.xml" files, changing all the default variables, in order to manage a customized table?
    Many thanks in advance,
    Claudia

    Hi Claudia,
    1) Yes, you must replace the column names in the "column" tag of your XML file with the column names from your database tables. Also remember to configure the other parameters (data_type, data_typ_size, etc.) according to your database table.
    2) The "xel_data_source" parameter is the attribute name that is recognized by OIM; these attributes can be mapped to form fields (UD_DBAPP_XXXX) in your process definition in the OIM Design Console.
    3) You can customize your Database Application Tables connector for provisioning as you wish, but some issues exist. In the Known Issues section of the documentation you can see that this connector doesn't work with more than two tables.
    4) For customizing Database Application Tables for trusted reconciliation there are further restrictions, because OIM users have required fields that must be filled in to create a user successfully.
    Hope that helps, and please let me know if you have more questions.
    Regards.

  • Questions about authorizations of tables/change requests/badis/locks/lang

    Hi,
    A few questions I have not been able to find answers to:
    1) How can we ensure that every time we make any change to a table, including adding or changing content, a change request is generated? Basically, how do we ensure that any changes being made are recorded in a change request?
    2) How do we grant authorizations to/for a database table?
    3) Can we add watermarks to SAPscripts and Smart Forms, and if so, how?
    4) Can we create and place our own BAdIs in SAP standard code?
    5) What are the different lock types/categories, with the differences clearly explained (not the standard SAP help, please)?
    6) Any tips on handling two table controls on one screen?
    7) What is required if we want to use objects (SAPscripts, texts, Smart Forms) in different languages?
    8) How do multilingual SAPscripts work?
    9) How can we provide a search help in a module pool without using Process On Value-Request?
    Moderator message - Please - one question per thread and please ask a specific question - post locked
    Edited by: Rob Burbank on Dec 3, 2009 4:29 PM

    FSKB (G/L Account Posting): this transaction is not working.

  • ICR Process 003 - question about data selection (table FBICRC003A)

    Hello, I am implementing ICR process 003. We are doing several tests and I have some questions that I hope you can help me with:
    1 - If I run transaction FBICS3 - Customer/Vendor (Select Documents) and then FBICA3 - Customer/Vendor (Document Assignment) several times with the same selection criteria, will the same documents be selected again and stored redundantly in table FBICRC003A? I expected that this would not happen, but it seems to happen in my test environment. (?)
    2 - If I need to delete the data stored by the ICR '003' functionality, I have to use transaction GCDE. The problem is that, as I am using ledger '0L' for the '003' process, I cannot use the "delete data of one ledger" option (which allows selection criteria) and have to use "delete the data of an entire data group", which deletes all data stored in the FBICRC003A and FBICRC003T tables. Should I set up another ledger for the '003' process so that I can delete data using selection criteria? Is it recommended not to use '0L' and to create a new one?
    I have read in the reference documentation that it "is not necessary to set up an SL", but since all my productive companies run in the same client as the ICR client, I am wondering whether it would be better to create and set up an SL.
    Thanks in advance
    Rafael Barreda
    Edited by: Rafael Barreda on Sep 14, 2009 1:27 PM

    Hi Ralph,
    we have created an RFC in order to get data from client B into client A (where the ICR system lives). In summary:
    I need to import vendor/customer data from client B that belongs to company code "0001". The company is called "X" in both clients A and B, although it exists as a company code (FI) only in client B, and only as a company (and trading partner) in client A.
    Client A:
    1 - I have set company "0001" as a "Company to be reconciled" in FBIC032:
    RFC destination = ""
    RFC destination for data selection = "ZRFC0001"
    Local company= ""
    Data Source="Documents of Current Process"
    Separate Selection Process= "X"
    Data Transfer Type="Asynchronous via Direct RFC Connection"
    Sender field for reference number = "XBLNR"
    Client B:
    1 - I have created the companies (V_T880) that I need in order to maintain the trading partner on the vendors' master data.
    2 - I have assigned company code '0001' to company "X".
    3 - I have assigned the trading partners created in step 1 to the vendors.
    4 - I have posted a few FI documents with the trading partner filled in.
    Then I run FBICS3 - Customer/Vendor: Select Documents in the background, but the program takes a long time and does not select any documents.
    Do you think that I am missing something?
    Thanks very much in advance.

  • Basic question about querying georaster tables

    Hello all,
    I have a table with a GeoRaster column containing georeferenced images covering a large area. Can I query the table to get an image covering any user-given coordinates? The given coordinates may span more than one image in the table. Is this possible?
    Thanks

    Hi,
    Yes, it is possible to work with several GeoRaster images in one or several tables. But before this you should create the specific GeoRaster metadata structure that joins those separate images. You can find information about this procedure in the GeoRaster documentation.
    Regards,
    Andrejus
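    As a starting point, one common approach is a query along these lines, which finds the GeoRaster rows whose footprint intersects a user-supplied window (table and column names are hypothetical; SDO_FILTER is evaluated against each raster's spatialExtent footprint geometry):

    SELECT t.georid
      FROM raster_images t
     WHERE SDO_FILTER(
             t.image.spatialExtent,
             SDO_GEOMETRY(2003, 4326, NULL,                 -- 2D polygon, SRID 4326 assumed
                          SDO_ELEM_INFO_ARRAY(1, 1003, 3),  -- optimized rectangle
                          SDO_ORDINATE_ARRAY(:xmin, :ymin, :xmax, :ymax))
           ) = 'TRUE';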

  • BRF+ - NW 7.01 - questions about a decision table

    In my test case I have set up a decision table with multiple columns. As long as the number of rows is low it is easy to maintain, but we are looking into using this kind of business rule in our current project, and it would end up with at least 1000 rows, probably more.
    Is there an easy, user-friendly way to let business users maintain such a decision table?
    Do you need to activate the decision table every time something is changed? Is it possible to make the decision table local and keep the rest of the parts (application, data objects, function) transportable? What is the best way to deal with such a situation?
    Is there a query/filter functionality for searching a decision table?
    Best regards,
    Elly van de Wouw
    IT - IFF

    Hi,
    You cannot use such big decision tables in NW 701, only in NW 702. The reason is a change in the internal data model.
    In NW 701 you will run into timeouts.
    I did some internal testing in our development systems:
    1. programmatically create a decision table with many rows (10 columns)
    2. activate and save
    3. execute it (includes code generation)
    In NW 701 I got to around 800 rows and then hit a timeout; I have to admit the test system is also quite weak.
    In NW 702 it took about 100 seconds. In NW 702 you can also maintain the decision table entries in MS Excel and upload them into the table.
    You need to activate changes so that they are used in rule execution. As of NW 702 you also have options for combining local and transported content.
    BR,
    Carsten

  • Question about global temp tables

    I have a global temporary table with the ON COMMIT PRESERVE ROWS setting, e.g.:
    CREATE GLOBAL TEMPORARY TABLE admin_work_area
            (startdate DATE,
             enddate   DATE,
             class     CHAR(20))
          ON COMMIT PRESERVE ROWS;
    On application start, a procedure inserts data into the table; on application end, a DELETE statement is used to empty it.
    Interestingly, if the application is started again (in the same session!), the deleted rows reappear in the table before the insert procedure is called. So after the insert procedure runs, the data is doubled... :(
    So my question is:
    Does COMMIT in this situation roll back the deleted rows?
    That sounds illogical to me, but it appears to be what happens...
    Message was edited by:
    Faust
    Edit: ON COMMIT setting
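    For reference, a quick sanity check of the expected semantics (one session, using the table above): a committed DELETE should stay deleted for the rest of the session, which is why reappearing rows normally point at the application rather than at the table definition:

    INSERT INTO admin_work_area VALUES (SYSDATE, SYSDATE + 1, 'TEST');
    COMMIT;
    SELECT COUNT(*) FROM admin_work_area;  -- 1: rows survive the COMMIT
    DELETE FROM admin_work_area;
    COMMIT;
    SELECT COUNT(*) FROM admin_work_area;  -- 0, and it stays 0 in this session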

    Are you sure that the rows somehow just appear back, and it's not the application which inserts them twice?
    Yes, I'm sure; there is only one call of the insert procedure (on application start).
    Are you using autonomous transactions for those inserts by any chance?
    No.
    SID is just an index into the session fixed array, so the only way to get the same SID in an instance is when the previous session ends.
    Each session array slot contains a SERIAL# field which is zero at instance start and is incremented every time the slot is reused by the next session.
    So, as long as your session exists, it is impossible for someone else to get the same SID + SERIAL# combination in that instance.
    Note that SESSION_ADDR and SESSION_NUM give you the address and SERIAL# of the session owning a temporary segment.
    The original session still exists...
    Thank you, Tanel, for your reply!
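    As an aside, the SID and SERIAL# discussed above can be checked directly against the documented V$SESSION view, e.g. for the current session:

    SELECT sid, serial#, status, logon_time
      FROM v$session
     WHERE audsid = SYS_CONTEXT('USERENV', 'SESSIONID');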

  • Question about sorted, hashed tables, mindset when using OO concepts...

    Hello experts,
    I just want to make sure my understanding of sorted and hashed tables is correct. Please give tips and suggestions.
    In one of my reports, I declared a structure and an itab.
    TYPES: BEGIN OF t_mkpf,
            mblnr           LIKE mkpf-mblnr,
            mjahr           LIKE mkpf-mjahr,
            budat           LIKE mkpf-budat,
            xblnr(10)       TYPE c,
            tcode2          LIKE mkpf-tcode2,
            cputm           LIKE mkpf-cputm,
            blart           LIKE mkpf-blart,
          END OF t_mkpf.
    DATA: it_mkpf TYPE SORTED TABLE OF t_mkpf
                  WITH NON-UNIQUE KEY mblnr mjahr
                  WITH HEADER LINE.
    Now, I declared it as a sorted table with a non-unique key of MBLNR and MJAHR. Suppose I have 1000 records in my itab: how will it search for a particular record?
    2. Is it faster than sorting a standard table and then reading it with BINARY SEARCH?
    3. How do I use a hashed table effectively? Let's say I want to use a hashed table instead of the sorted table in my example above.
    4. I am currently practicing ABAP Objects, and my problem is that my mindset when programming a report is still a procedural one. How does one use ABAP OO concepts effectively?
    Again, thank you guys and have a nice day!

    Hi Viray,
    The different ways to fill an internal table:
    append & sort
    This is the simplest one: do APPENDs on a standard table and then a SORT.
    data: lt_tab type standard table of ...
    do n times.
    ls_line = ...
    append ls_line to lt_tab.
    enddo.
    sort lt_tab.
    The interesting thing here is the fast appends versus the slow sort, so it will be interesting to see how this compares to the following one.
    read binary search & insert index sy-tabix
    In this variant I also use a standard table, but I use READ ... BINARY SEARCH to find the correct insert index, so that the table stays sorted.
    data: lt_tab type standard table of ...
    do n times.
    ls_line = ...
    read table lt_tab transporting no fields with key ... binary search.
    if sy-subrc <> 0.
      insert ls_line into lt_tab index sy-tabix.
    endif.
    enddo.
    sorted table with non-unique key
    Here I used a sorted table with a non-unique key and did inserts...
    data: lt_tab type sorted table of ... with non-unique key ...
    do n times.
    ls_line = ...
    insert ls_line into table lt_tab.
    enddo.
    sorted table with unique key
    The coding is the same, except the sorted table has a unique key.
    data: lt_tab type sorted table of ... with unique key ...
    do n times.
    ls_line = ...
    insert ls_line into table lt_tab.
    enddo.
    hashed table
    The last one is the hashed table (always with a unique key).
    data: lt_tab type hashed table of ... with unique key ...
    do n times.
    ls_line = ...
    insert ls_line into table lt_tab.
    enddo.
    You can use this program to test:
    types:
      begin of local_long,
        key1 type char10,
        key2 type char10,
        data1 type char10,
        data2 type char10,
        data3 type i,
        data4 type sydatum,
        data5 type numc10,
        data6 type char32,
        data7 type i,
        data8 type sydatum,
        data9 type numc10,
        dataa type char32,
        datab type i,
        datac type sydatum,
        datad type numc10,
        datae type char32,
        dataf type i,
        datag type sydatum,
        datah type numc10,
        datai type char32,
        dataj type i,
        datak type sydatum,
        datal type numc10,
        datam type char32,
        datan type i,
        datao type sydatum,
        datap type numc10,
        dataq type char32,
        datar type i,
        datas type sydatum,
        datat type numc10,
        datau type char32,
        datav type i,
        dataw type sydatum,
        datax type numc10,
        datay type char32,
        dataz type i,
        data11 type numc10,
        data21 type char32,
        data31 type i,
        data41 type sydatum,
        data51 type numc10,
        data61 type char32,
        data71 type i,
        data81 type sydatum,
        data91 type numc10,
        dataa1 type char32,
        datab1 type i,
        datac1 type sydatum,
        datad1 type numc10,
        datae1 type char32,
        dataf1 type i,
        datag1 type sydatum,
        datah1 type numc10,
        datai1 type char32,
        dataj1 type i,
        datak1 type sydatum,
        datal1 type numc10,
        datam1 type char32,
        datan1 type i,
        datao1 type sydatum,
        datap1 type numc10,
        dataq1 type char32,
        datar1 type i,
        datas1 type sydatum,
        datat1 type numc10,
        datau1 type char32,
        datav1 type i,
        dataw1 type sydatum,
        datax1 type numc10,
        datay1 type char32,
        dataz1 type i,
      end of local_long.
    data:
      ls_long type local_long,
      lt_binary type standard table of local_long,
      lt_sort_u type sorted table of local_long with unique key key1 key2,
      lt_sort_n type sorted table of local_long with non-unique key key1 key2,
      lt_hash_u type hashed table of local_long with unique key key1 key2,
      lt_apsort type standard table of local_long.
    field-symbols:
      <ls_long> type local_long.
    parameters:
      min1 type i default 1,
      max1 type i default 1000,
      min2 type i default 1,
      max2 type i default 1000,
      i1 type i default 100,
      i2 type i default 200,
      i3 type i default 300,
      i4 type i default 400,
      i5 type i default 500,
      i6 type i default 600,
      i7 type i default 700,
      i8 type i default 800,
      i9 type i default 900,
      fax type i default 1000.
    types:
      begin of measure,
        what(10) type c,
        size(6) type c,
        time type i,
        lines type i,
        reads type i,
        readb type i,
        fax_s type i,
        fax_b type i,
        fax(6) type c,
        iter type i,
      end of measure.
    data:
      lt_time type standard table of measure,
      lt_meantimes type standard table of measure,
      ls_time type measure,
      lv_method(7) type c,
      lv_i1 type char10,
      lv_i2 type char10,
      lv_f type f,
      lv_start type i,
      lv_end type i,
      lv_normal type i,
      lv_size type i,
      lv_order type i,
      lo_rnd1 type ref to cl_abap_random_int,
      lo_rnd2 type ref to cl_abap_random_int.
    get run time field lv_start.
    lo_rnd1 = cl_abap_random_int=>create( seed = lv_start min = min1 max = max1 ).
    add 1 to lv_start.
    lo_rnd2 = cl_abap_random_int=>create( seed = lv_start min = min2 max = max2 ).
    ls_time-fax = fax.
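    " The outer DO 5 TIMES loop repeats the whole measurement five times;
    " the inner DO 9 TIMES loop walks through the table sizes i1..i9.
    " For each size the same fill/read measurements are run against the
    " APSORT, BINARY, SORT_N, SORT_U and HASH_U table variants.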
    do 5 times.
      do 9 times.
        case sy-index.
          when 1. lv_size = i1.
          when 2. lv_size = i2.
          when 3. lv_size = i3.
          when 4. lv_size = i4.
          when 5. lv_size = i5.
          when 6. lv_size = i6.
          when 7. lv_size = i7.
          when 8. lv_size = i8.
          when 9. lv_size = i9.
        endcase.
        if lv_size > 0.
          ls_time-iter = 1.
          clear lt_apsort.
          ls_time-what = 'APSORT'.
          ls_time-size = lv_size.
          get run time field lv_start.
          do lv_size times.
            perform fill.
            append ls_long to lt_apsort.
          enddo.
          sort lt_apsort by key1 key2.
          get run time field lv_end.
          ls_time-time = lv_end - lv_start.
          ls_time-reads = 0.
          ls_time-readb = 0.
          ls_time-lines = lines( lt_apsort ).
          get run time field lv_start.
          do.
            add 1 to ls_time-readb.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_apsort
              assigning <ls_long>
              with key key1 = lv_i1
                       key2 = lv_i2
              binary search.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do.
            add 1 to ls_time-reads.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_apsort
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_apsort
              assigning <ls_long>
              with key key1 = lv_i1
                       key2 = lv_i2
              binary search.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_b = lv_end - lv_start.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_apsort
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_s = lv_end - lv_start.
          collect ls_time into lt_time.
          clear lt_binary.
          ls_time-what = 'BINARY'.
          ls_time-size = lv_size.
          get run time field lv_start.
          do lv_size times.
            perform fill.
            read table lt_binary
              transporting no fields
              with key key1 = ls_long-key1
                       key2 = ls_long-key2
              binary search.
            if sy-subrc <> 0.
              insert ls_long into lt_binary index sy-tabix.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-time = lv_end - lv_start.
          ls_time-reads = 0.
          ls_time-readb = 0.
          ls_time-lines = lines( lt_binary ).
          get run time field lv_start.
          do.
            add 1 to ls_time-readb.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_binary
              assigning <ls_long>
              with key key1 = lv_i1
                       key2 = lv_i2
              binary search.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do.
            add 1 to ls_time-reads.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_binary
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_binary
              assigning <ls_long>
              with key key1 = lv_i1
                       key2 = lv_i2
              binary search.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_b = lv_end - lv_start.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_binary
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_s = lv_end - lv_start.
          collect ls_time into lt_time.
          clear lt_sort_n.
          ls_time-what = 'SORT_N'.
          ls_time-size = lv_size.
          get run time field lv_start.
          do lv_size times.
            perform fill.
            insert ls_long into table lt_sort_n.
          enddo.
          get run time field lv_end.
          ls_time-time = lv_end - lv_start.
          ls_time-reads = 0.
          ls_time-readb = 0.
          ls_time-lines = lines( lt_sort_n ).
          get run time field lv_start.
          do.
            add 1 to ls_time-readb.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_n
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do.
            add 1 to ls_time-reads.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_n
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_n
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_b = lv_end - lv_start.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_n
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_s = lv_end - lv_start.
          collect ls_time into lt_time.
          clear lt_sort_u.
          ls_time-what = 'SORT_U'.
          ls_time-size = lv_size.
          get run time field lv_start.
          do lv_size times.
            perform fill.
            insert ls_long into table lt_sort_u.
          enddo.
          get run time field lv_end.
          ls_time-time = lv_end - lv_start.
          ls_time-reads = 0.
          ls_time-readb = 0.
          ls_time-lines = lines( lt_sort_u ).
          get run time field lv_start.
          do.
            add 1 to ls_time-readb.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_u
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do.
            add 1 to ls_time-reads.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_u
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_u
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_b = lv_end - lv_start.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_u
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_s = lv_end - lv_start.
          collect ls_time into lt_time.
          clear lt_hash_u.
          ls_time-what = 'HASH_U'.
          ls_time-size = lv_size.
          get run time field lv_start.
          do lv_size times.
            perform fill.
            insert ls_long into table lt_hash_u.
          enddo.
          get run time field lv_end.
          ls_time-time = lv_end - lv_start.
          ls_time-reads = 0.
          ls_time-readb = 0.
          ls_time-lines = lines( lt_hash_u ).
          get run time field lv_start.
          do.
            add 1 to ls_time-readb.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_hash_u
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do.
            add 1 to ls_time-reads.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_hash_u
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_hash_u
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_b = lv_end - lv_start.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_hash_u
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_s = lv_end - lv_start.
          collect ls_time into lt_time.
        endif.
      enddo.
    enddo.
    sort lt_time by what size.
    write: / ' type      | size   | time        | tab-size    | directread  | std read    | time direct | time std read'.
    write: / sy-uline.
    loop at lt_time into ls_time.
      write: / ls_time-what, '|', ls_time-size, '|', ls_time-time, '|', ls_time-lines, '|', ls_time-readb, '|', ls_time-reads, '|', ls_time-fax_b, '|', ls_time-fax_s.
    endloop.
    form fill.
      lv_i1 = lo_rnd1->get_next( ).
      lv_i2 = lo_rnd2->get_next( ).
      ls_long-key1 = lv_i1.
      ls_long-key2 = lv_i2.
      ls_long-data1 = lv_i1.
      ls_long-data2 = lv_i2.
      ls_long-data3 = lv_i1.
      ls_long-data4 = sy-datum + lv_i1.
      ls_long-data5 = lv_i1.
      ls_long-data6 = lv_i1.
      ls_long-data7 = lv_i1.
      ls_long-data8 = sy-datum + lv_i1.
      ls_long-data9 = lv_i1.
      ls_long-dataa = lv_i1.
      ls_long-datab = lv_i1.
      ls_long-datac = sy-datum + lv_i1.
      ls_long-datad = lv_i1.
      ls_long-datae = lv_i1.
      ls_long-dataf = lv_i1.
      ls_long-datag = sy-datum + lv_i1.
      ls_long-datah = lv_i1.
      ls_long-datai = lv_i1.
      ls_long-dataj = lv_i1.
      ls_long-datak = sy-datum + lv_i1.
      ls_long-datal = lv_i1.
      ls_long-datam = lv_i1.
      ls_long-datan = sy-datum + lv_i1.
      ls_long-datao = lv_i1.
      ls_long-datap = lv_i1.
      ls_long-dataq = lv_i1.
      ls_long-datar = sy-datum + lv_i1.
      ls_long-datas = lv_i1.
      ls_long-datat = lv_i1.
      ls_long-datau = lv_i1.
      ls_long-datav = sy-datum + lv_i1.
      ls_long-dataw = lv_i1.
      ls_long-datax = lv_i1.
      ls_long-datay = lv_i1.
      ls_long-dataz = sy-datum + lv_i1.
      ls_long-data11 = lv_i1.
      ls_long-data21 = lv_i1.
      ls_long-data31 = lv_i1.
      ls_long-data41 = sy-datum + lv_i1.
      ls_long-data51 = lv_i1.
      ls_long-data61 = lv_i1.
      ls_long-data71 = lv_i1.
      ls_long-data81 = sy-datum + lv_i1.
      ls_long-data91 = lv_i1.
      ls_long-dataa1 = lv_i1.
      ls_long-datab1 = lv_i1.
      ls_long-datac1 = sy-datum + lv_i1.
      ls_long-datad1 = lv_i1.
      ls_long-datae1 = lv_i1.
      ls_long-dataf1 = lv_i1.
      ls_long-datag1 = sy-datum + lv_i1.
      ls_long-datah1 = lv_i1.
      ls_long-datai1 = lv_i1.
      ls_long-dataj1 = lv_i1.
      ls_long-datak1 = sy-datum + lv_i1.
      ls_long-datal1 = lv_i1.
      ls_long-datam1 = lv_i1.
      ls_long-datan1 = sy-datum + lv_i1.
      ls_long-datao1 = lv_i1.
      ls_long-datap1 = lv_i1.
      ls_long-dataq1 = lv_i1.
      ls_long-datar1 = sy-datum + lv_i1.
      ls_long-datas1 = lv_i1.
      ls_long-datat1 = lv_i1.
      ls_long-datau1 = lv_i1.
      ls_long-datav1 = sy-datum + lv_i1.
      ls_long-dataw1 = lv_i1.
      ls_long-datax1 = lv_i1.
      ls_long-datay1 = lv_i1.
      ls_long-dataz1 = sy-datum + lv_i1.
    endform.
    Thanks & Regards,
    YJR.

  • VAT Tax determination- question about "Requirement" for Table-Is mandatory?

    Hi everybody,
    I am having problems determining the appropriate VAT percentage in some cases in Mexico.
    In Mexico, for billing at the border with other countries the VAT rate is 11%, and for billing inside the country the VAT rate is 16%. To determine the VAT percentage, the access sequence MWST is assigned to the condition type MWST. To cover the border cases we created table 989 with the fields Distribution Channel/Plant, and VK11 is maintained with this information. With these parameters it is possible to determine the 11% VAT successfully at border sales points.
    The problem I have is related to the "requirement" assigned to table 989 (Distribution Channel/Plant). The table has the standard requirement number 7 (Domestic Business) assigned. When billing a customer whose master data has MX (Mexico) as LAND1 in the general data section, the 11% VAT is determined successfully, because all the fields of the table are filled. But when billing a customer with US (United States of America) as LAND1, it is not possible to obtain the correct 11% VAT, even though all the fields of the table are filled.
    I do not know if the problem is requirement 7 (Domestic Business), but if I change it to requirement 8 (Export Business) the opposite happens: for US customers the VAT percentage is determined successfully, and for MX customers it is not, because the fields of the "requirement" are not filled.
    So, I am wondering:
    1. Would it be necessary to create a new requirement combining 7 (Domestic Business) and 8 (Export Business), or does one like this already exist? I do not need any additional conditions.
    2. What happens if I leave the requirement field blank, i.e. do not assign anything? I tested this for both customers and the VAT percentage was determined correctly, but I do not know whether it is correct not to assign a "requirement" to the table of access sequence MWST.
    Thank you in advance for your help!

    OK, I think I understand, but here is where I am at now. I originally created a regulation that requires a license when shipping to certain countries out of the EU. I set the regulation up as an export regulation, did all the necessary config, and was able to assign licenses automatically where needed.
    As I was testing the solution I could not find sales docs or deliveries for some of the materials, and finally found out they are being moved on NB POs.
    So I changed the regulation to include imports etc. and classified the materials with an import control class, but I am seeing a couple of things that don't make sense and tell me my config isn't quite right.
    I now see that the system at least recognizes the new legal regulation in the screening (before this it had not), but it is still telling me it can't find any relevant country.
    The other thing is that the document itself only shows under Display Existing Export Documents, even though the log says it's an import document, and when I go to the import log there is no data.
    Before I realized there was a note about changing Import to Export on transfer, I assigned the PO types to export orders, and I am thinking that's why they are showing in the export list; but I would have thought it would then say it's an export doc, which it doesn't.
    Any ideas?

  • Question about system catalog tables in oracle 11g database

    We have a customer issue, and in order to troubleshoot it we need the following information; we would really appreciate any input. I opened an SR with support and they asked me to open a discussion here. Please see the following.
    What is the meaning of the part# column of the sys.tabpart$ table, and what are the possible values that can appear in this system catalog table? Our findings are that Oracle may have changed the meaning of this column, as we are now seeing values greater than 2 billion, which does not make sense as a partition number.
    Our stored procedures use this information to retrieve data from the Oracle system tables when processing a partitioned Oracle table. Our research found that the limited description for this table (and its columns) is in the following Oracle file:
    dpart.bsq --> ...... oracle database path... \RDBMS\ADMIN folder
    The comment within this file notes only that this value is the "partition number (see discussion below)".
    However, there is no additional discussion describing what the value means and its possible variations. We need this information to move forward with our problem diagnosis and to see what kind of updates we need to make to our stored procedures.
    Thanks a lot in advance.
    Avni
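    For comparison, the documented dictionary view DBA_TAB_PARTITIONS exposes partition ordering without relying on the internal part# encoding; a sketch (owner and table name are placeholders):

    SELECT table_owner, table_name, partition_name, partition_position
      FROM dba_tab_partitions
     WHERE table_owner = 'SCOTT'
       AND table_name  = 'SALES'
     ORDER BY partition_position;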

    I ran the following:
    list expired archivelog all; - this showed all of the archive logs that we were not able to validate.
    delete expired archivelog all; - this deleted the expired archive logs reported by the previous command.
    I then crosschecked the archivelogs by running the following:
    crosscheck archivelog all;
    Thank you for your help!

  • Question about import all tables except one

    I have exported all tables for a user. While importing, I want to import all tables except one whose name I know. How do I do this? Example: if I have exported the tables Employee, Address and Salary, on import I want to bring in only Employee and Address. The schema I exported contains over 20 tables.

    I don't think it is possible at all; you just need to explicitly specify all the tables you want.
    If you really don't want to include the single table, you can import all the tables and then drop the unwanted one.
    hare krishna
    Alok

  • Question about creating new tables using SQL script in WebLogic Server

    Hi,
    I am new to WebLogic and I am following the book Java EE Development with Eclipse, published by Packt, to learn Java EE. I have installed Oracle Enterprise Pack for Eclipse on the PC, and I am able to log into the WebLogic Server Administration Console and set up a data source. However, the next step is to create tables for the database. The book says that the tables can be created by running a SQL script from the SQL command line.
    I cannot see any way of entering SQL in the WebLogic Server Administration Console. Also, there is no SQL command line in DOS.
    Thanks for your help.
    Brian.

    Sounds like you are meant to run the scripts provided by the tutorial to create the tables, right? In that case, you may need to install an Oracle client to connect to your database. The client is automatically installed with the database, so if you have access to the server that hosts the database, you should be able to run SQL*Plus from there.
    As far as I know, there is no way to run a script from the Admin Console. I could be wrong, however.
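    For example, a typical SQL*Plus invocation looks like this (user, password, connect string and script name are placeholders for your own data source settings):

    sqlplus brian/secret@//dbhost:1521/ORCL @create_tables.sql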

  • Question about Workspace and Trash

    Hi,
    does anybody know whether it is possible to create a Workspace where files are not moved to the Trash when somebody deletes them?
    regards
    Peter


  • Question about Workspace

    Hi,
    When I log on to Workspace, I get the message "Could not find a Hyperion Reporting and Analysis - System 9 server running on localhost at port 6800. Please verify your server connect string and confirm that the server is up."
    But I checked the Hyperion services and they are all up. What could be the cause? Thank you.

    Hi,
    Sounds like your BI+ core services are not up and running (Hyperion S9 BI+ 9.3 Core Services 1).
    Have a look at \Hyperion\BIPlus\logs\stdout_console.log.
    If everything is up and running, the log will contain "stdout - giving control to the ORB to process the requests"; otherwise there will be errors.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Question about gaining enable access

    I recently upgraded the code on one of our 6500 switches. Ever since the upgrade, I can log into the switch with no problem; however, when I go to enable mode, it prompts me once again for the username before the enable password. Not sure if anyone has ever seen this. It would be nice to only have to enter my username once instead of again when I go to enable mode. Thanks!
    Switch>en
    User Access Verification
    Username:

    Hi,
    Can you share the aaa-related configuration of your switch?
    Ganesh.H
