Why do we use NAST/TNAPR tables

Hi,
Can anyone please tell me for what purpose we use these tables?
With Best Regards
Mamatha.

Hi Mamatha,
1. NAST: this table stores the messages (output records) that are generated by various applications, e.g. when a PO is printed.
2. Programs like RSNAST00 scan this table to determine what has to be done further for each message.
---> How is the NAST table filled?
1. NAST is used to deposit/collect the messages produced by various applications. (These messages can be used later to perform specific activities.)
2. NAST is populated from various standard programs (MIGO, purchase orders, sales orders, etc.) via standard function modules. The function modules insert a new record into the table with the appropriate information.
3. For further action, program RSNAST00 is available: it scans the NAST table and calls the appropriate function modules for further processing (based on the message type, application, etc.).
TNAPR is the companion table: it holds the processing routine for each output type and transmission medium (print program, FORM routine and form name), and RSNAST00 reads it to find out which program to call for a given NAST record.
Hope the above helps.
regards,
amit m.
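
A minimal ABAP sketch of the kind of scan described above (illustrative only, not the actual RSNAST00 logic; it assumes the standard NAST fields KAPPL, VSTAT, OBJKY, KSCHL and NACHA):

* Pick up unprocessed output records for purchasing documents and list
* the keys RSNAST00 would use to look up the processing routine in TNAPR.
DATA: lt_nast TYPE STANDARD TABLE OF nast,
      ls_nast TYPE nast.

SELECT * FROM nast
  INTO TABLE lt_nast
  WHERE kappl = 'EF'        "application: purchasing documents
    AND vstat = '0'.        "processing status: not yet processed

LOOP AT lt_nast INTO ls_nast.
  " RSNAST00 reads TNAPR with the output type (KSCHL) and the medium
  " (NACHA) to find the print program / FORM routine for this record.
  WRITE: / ls_nast-objky, ls_nast-kschl, ls_nast-nacha.
ENDLOOP.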

Similar Messages

  • Why do we use Setup tables in LO

    Hi All,
Can anybody tell me why we use Setup tables in LO?
Please reply to me at [email protected]
    Regards,
    Kiran

    Hi Kiran
Go through the following earlier forum thread:
    Re: LO Setup tables
    Regards
    RMK
    ***Assigning pointz is the only way of saying thanx in SDN***

Why do we use cluster tables, mainly in HR

Can anyone please tell me why we use cluster tables, mainly in HR?

    Nice question -
    I am making my guess based on whatever little I know.
    PCLn are file systems and not a traditional (RDBMS) table system. Each file may contain one or more data clusters. SAP defines data clusters thus:
    "A data cluster is a grouping of several data objects. Elementary fields, field strings and internal tables can be grouped in a data cluster."
    Example: "Clusters B1 and B2 in files PCL1 and PCL2 are relevant to time evaluation, as is cluster PS, which stores the generated schema."
    The reason why a file system of data storage is used instead of a DB table system may be the need to store voluminous data (payroll and time) and the ease of retrieval during processing (an RDBMS may have a tough time with this). It is also probably due to SAP's origin on mainframes.
    Why are data clusters used? Probably data clusters are an offshoot (or part) of the file system.
    Please feel free to contradict the above. Actually, DB experts can throw more light on this.
    Regards
    Chandra
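
    For what a data cluster looks like in code, here is a minimal sketch using the generic cluster table INDX (illustrative only; the HR clusters in PCL1/PCL2 are normally read through the standard macros and function modules rather than directly like this):

    DATA: lt_values TYPE STANDARD TABLE OF i,
          lv_value  TYPE i,
          lv_text   TYPE string.

    DO 3 TIMES.
      lv_value = sy-index.
      APPEND lv_value TO lt_values.
    ENDDO.
    lv_text = 'example cluster'.

    " Write several data objects as ONE cluster record with key 'DEMOKEY'
    EXPORT tab  = lt_values
           text = lv_text
           TO DATABASE indx(zz) ID 'DEMOKEY'.

    " Read the whole cluster back into the same variables
    IMPORT tab  = lt_values
           text = lv_text
           FROM DATABASE indx(zz) ID 'DEMOKEY'.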

  • How to use PL/SQL table

    Hi all,
Can you guys suggest how I can use PL/SQL tables for the query below to increase its performance?
    DECLARE
        TYPE cur_typ IS REF CURSOR;
        c           cur_typ;
        total_val varchar2(1000);
        sql_stmt varchar2(1000);
        freeform_name NUMBER;
        freeform_id NUMBER;
        imgname_rec EMC_FTW_PREVA.EMC_Image_C_Mungo%rowtype;
        imgval_rec  EMC_FTW_PREVA.EMC_Content_C_Mungo%rowtype;
        CURSOR imgname_cur IS
            select * from EMC_FTW_PREVA.EMC_Image_C_Mungo
            where cs_ownerid in (
                        select id from EMC_FTW_PREVA.EMC_Image_C
                        where updateddate > '01-JUN-13'
                        and path is not null
                        and createddate != updateddate)
            and cs_attrid = (select id from EMC_FTW_PREVA.EMC_ATTRIBUTE where name = 'Image_Upload');
    BEGIN
        OPEN imgname_cur;
        LOOP
          FETCH imgname_cur INTO imgname_rec;
          EXIT WHEN imgname_cur%NOTFOUND;
          total_val := 'EMC_Image_C_' || imgname_rec.cs_ownerid;
          sql_stmt := 'SELECT instr(textvalue,''' || total_val || '''), cs_ownerid FROM EMC_FTW_PREVA.EMC_Content_C_Mungo a Where cs_attrid = (select id from EMC_FTW_PREVA.EMC_ATTRIBUTE where name = ' || '''' || 'Body_freeform' || '''' || ')';
            OPEN c FOR sql_stmt;
            LOOP
              FETCH c INTO freeform_id,freeform_name;
              EXIT WHEN c%NOTFOUND;
                                      IF freeform_id > 0 THEN
                dbms_output.put_line (imgname_rec.cs_ownerid || ',' || total_val || ',' || freeform_id || ',' || freeform_name);
                                      END IF;
            END LOOP;
            CLOSE c;     
       END LOOP;
       CLOSE imgname_cur;
    END;
    Thanks in Advance.

    Can you guys suggest how I can use PL/SQL tables for the query below to increase its performance?
    There would be absolutely no point at all in improving the performance of code that has NO benefit.
    The only result of executing that code is to possibly produce some lines of output AFTER the entire procedure is finished:
    dbms_output.put_line (imgname_rec.cs_ownerid || ',' || total_val || ',' || freeform_id || ',' || freeform_name);
    So first you need to explain:
    1. what PROBLEM you are trying to solve?
    2. why you are trying to use PL/SQL code to solve it.
    3. why are you using 'slow by slow' (row by row) processing and then, for each row, opening a new cursor to query more data?
    You should be using a single query rather than two nested cursors. But that begs the question of what the code is even supposed to be doing since the only output is going to a memory buffer.
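
    To make the single-query suggestion concrete, here is a hedged sketch that folds the two cursors into one statement (table and column names are taken from the post; the date literal, the output column names and the exact join condition are assumptions that would need to be verified against the real requirement):

    SELECT img.cs_ownerid,
           'EMC_Image_C_' || img.cs_ownerid                        AS total_val,
           INSTR(cont.textvalue, 'EMC_Image_C_' || img.cs_ownerid) AS freeform_pos,
           cont.cs_ownerid                                         AS freeform_owner
    FROM   EMC_FTW_PREVA.EMC_Image_C_Mungo   img
    JOIN   EMC_FTW_PREVA.EMC_Content_C_Mungo cont
           ON INSTR(cont.textvalue, 'EMC_Image_C_' || img.cs_ownerid) > 0
    WHERE  img.cs_attrid  = (SELECT id FROM EMC_FTW_PREVA.EMC_ATTRIBUTE
                             WHERE name = 'Image_Upload')
    AND    cont.cs_attrid = (SELECT id FROM EMC_FTW_PREVA.EMC_ATTRIBUTE
                             WHERE name = 'Body_freeform')
    AND    img.cs_ownerid IN (SELECT id
                              FROM   EMC_FTW_PREVA.EMC_Image_C
                              WHERE  updateddate > DATE '2013-06-01'
                              AND    path IS NOT NULL
                              AND    createddate <> updateddate);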

  • Function module: why do we use FM and what is the purpose of using FM

    hi,
    Can anyone please explain: Why do we use FMs?
    What is the purpose of using an FM?
    Where do we use FMs, and for what tables in R/3?
    I would be thankful if anyone could answer the above questions.
    Arun

    Hi,
    We go for creating an FM when there is a chance of using the same code in different reports in R/3.
    Suppose I have a requirement, say, to display the payer name for every sales order.
    This is a very common requirement in any project.
    You can create an FM, say READ_CUSTOMER_NAME, in SE37.
    Write a SELECT on the partner table VBPA (and KNA1 for the name) to fetch the payer name for the sales order.
    Now you can activate the FM and it is ready to be used across all the reports in R/3.
    Need: to avoid redundant coding and to modularize the code.
    If you want to see the list of standard FMs, go to SE37 --> press F4 and you'll get all the SAP standard FMs.
    For customized (user-defined) FMs, type Z* or Y* and press F4.
    Hope this helps a bit !!!
    Regards,
    Balaji V
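
    A rough ABAP sketch of what such a function module could look like (Z_READ_CUSTOMER_NAME, its interface and the use of VBPA/KNA1 are illustrative assumptions, not a standard SAP object):

    FUNCTION z_read_customer_name.
    *"  IMPORTING
    *"     VALUE(IV_VBELN) TYPE VBAK-VBELN
    *"  EXPORTING
    *"     VALUE(EV_PAYER_NAME) TYPE KNA1-NAME1

      DATA lv_kunnr TYPE kna1-kunnr.

      " Payer partner function ('RG') of the sales order header
      SELECT SINGLE kunnr FROM vbpa
        INTO lv_kunnr
        WHERE vbeln = iv_vbeln
          AND posnr = '000000'
          AND parvw = 'RG'.

      IF sy-subrc = 0.
        " Name from the customer master
        SELECT SINGLE name1 FROM kna1
          INTO ev_payer_name
          WHERE kunnr = lv_kunnr.
      ENDIF.

    ENDFUNCTION.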

Use of internal table with header line in ABAP OO

    Hi,
    I have a very basic question regarding the use of internal tables with header lines in ABAP OO.
    I consider the internal table with a header line one of the most useful features of ABAP, one which is not there in any of the other programming languages.
    I accept that OO is a very powerful and good concept, and we should also implement OO concepts in ABAP.
    But my concern is why, in the process of moving to OO, the concept of an internal table with a header line is no longer used.
    Can anyone tell me the main reason for this? Is there a technical reason, by which I mean anything to do with memory or efficiency?
    Thanx,
    Srinivas

    Hi Srinivas,
       here is something more which i found about the same
    follow this link
    https://sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5ac31178-0701-0010-469a-b4d7fa2721ca
    and search for header line.
    it says:
    Tables with header lines not allowed
    Only tables without header lines can be declared in ABAP Objects.
    Error message in ABAP Objects if the following syntax is used:
    DATA itab TYPE TABLE OF ... WITH HEADER LINE.
    Correct syntax:
    DATA: itab TYPE TABLE OF ... ,
          wa   LIKE LINE OF itab.
    Reason:
    With a header line, it depends on the statement whether the body or the header line of the table is accessed, so the table name no longer identifies the table uniquely. Programs without header lines are easier to read, and tables with header lines do not improve performance anyway.
    i hope this will help you.
    regards,
    Kinshuk
    PS mark helpful answers
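
    For illustration, a minimal sketch of the pattern ABAP Objects expects, a table without a header line plus an explicit work area (the SFLIGHT demo table is used purely as an example):

    DATA: lt_flights TYPE STANDARD TABLE OF sflight,
          ls_flight  TYPE sflight.

    SELECT * FROM sflight
      INTO TABLE lt_flights
      UP TO 10 ROWS.

    " The work area is always named explicitly - no implicit header line
    LOOP AT lt_flights INTO ls_flight.
      WRITE: / ls_flight-carrid, ls_flight-connid, ls_flight-fldate.
    ENDLOOP.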

  • Query plan changes when query is used in CREATE TABLE AS

    We're puzzled by the fact that EXPLAIN PLAN gives a much different output for a SELECT statement than it does when the same statement is used for CREATE TABLE ... AS SELECT.
    The bad part is that the CREATE TABLE version performs very badly, and that's what we want the query for.
    Why does this happen? Is there a difference (from the database's point of view) between retrieving a set of rows to display to the user and putting that same set into a new table? Doesn't this make it harder to diagnose and fix query performance problems?
    Here's our query:
    create table query_test AS
    select term, parentTerm,
           apidb.tab_to_string(cast(collect(trim(to_char(internal))) as apidb.varchartab), ', ') as internal
    from (
          select distinct ga.organism as term,
                          ga.species as parentTerm,
                          tn.taxon_id as internal
          from apidb.GeneAttributes ga, SRES.TaxonName tn, sres.Taxon t,
               dots.AaSequence aas, dots.SecondaryStructure ss
          where ga.organism = tn.name
            and tn.taxon_id = t.taxon_id
            and t.taxon_id = aas.taxon_id
            and aas.aa_sequence_id = ss.aa_sequence_id
            and t.rank != 'species'
          union
          select distinct ga.species as term,
                          '' as parentTerm,
                          ts.taxon_id as internal
          from apidb.GeneAttributes ga, SRES.TaxonName tn, apidb.taxonSpecies ts,
               dots.aasequence aas, dots.SecondaryStructure ss
          where ga.organism = tn.name
            and tn.taxon_id = ts.taxon_id
            and ts.taxon_id = aas.taxon_id
            and aas.aa_sequence_id = ss.aa_sequence_id
         )
    group by term, parentTerm;
    With the CREATE TABLE, the plan looks like this:
    | Id  | Operation                             | Name                      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | CREATE TABLE STATEMENT                |                           |  2911 |  5986K|       | 18840   (1)| 00:03:47 |
    |   1 |  LOAD AS SELECT                       | QUERY_TEST                |       |       |       |            |          |
    |   2 |   VIEW                                |                           |  2911 |  5986K|       | 18669   (1)| 00:03:45 |
    |   3 |    SORT GROUP BY                      |                           |  2911 |   332K|       | 18660   (1)| 00:03:44 |
    |   4 |     VIEW                              |                           |  2911 |   332K|       | 18659   (1)| 00:03:44 |
    |   5 |      SORT UNIQUE                      |                           |  2911 |   292K|       | 18659   (6)| 00:03:44 |
    |   6 |       UNION-ALL                       |                           |       |       |       |            |          |
    |*  7 |        HASH JOIN                      |                           |  2907 |   292K|  2160K| 17762   (1)| 00:03:34 |
    |   8 |         TABLE ACCESS FULL             | GENEATTRIBUTES10650       | 40957 |  1679K|       |   795   (1)| 00:00:10 |
    |*  9 |         HASH JOIN                     |                           | 53794 |  3204K|  1552K| 16675   (1)| 00:03:21 |
    |* 10 |          HASH JOIN                    |                           | 37802 |  1107K|       | 12326   (1)| 00:02:28 |
    |* 11 |           HASH JOIN                   |                           | 37945 |   629K|       | 10874   (1)| 00:02:11 |
    |  12 |            INDEX FAST FULL SCAN       | SECONDARYSTRUCTURE_REVIX9 | 37945 |   222K|       |    33   (0)| 00:00:01 |
    |  13 |            INDEX FAST FULL SCAN       | AASEQUENCEIMP_REVIX6      |  7886K|    82M|       | 10816   (1)| 00:02:10 |
    |* 14 |           TABLE ACCESS FULL           | TAXON                     |   514K|  6530K|       |  1450   (1)| 00:00:18 |
    |  15 |          TABLE ACCESS FULL            | TAXONNAME                 |   760K|    22M|       |  2721   (1)| 00:00:33 |
    |* 16 |        HASH JOIN                      |                           |     4 |   380 |       |   886   (1)| 00:00:11 |
    |  17 |         NESTED LOOPS                  |                           |   730 | 64970 |       |   852   (1)| 00:00:11 |
    |* 18 |          HASH JOIN                    |                           |     1 |    78 |       |   847   (1)| 00:00:11 |
    |  19 |           NESTED LOOPS                |                           |       |       |       |            |          |
    |  20 |            NESTED LOOPS               |                           |    17 |   612 |       |    51   (0)| 00:00:01 |
    |  21 |             TABLE ACCESS FULL         | TAXONSPECIES10646         |    12 |    60 |       |     3   (0)| 00:00:01 |
    |* 22 |             INDEX RANGE SCAN          | TAXONNAME_IND01           |     1 |       |       |     2   (0)| 00:00:01 |
    |  23 |            TABLE ACCESS BY INDEX ROWID| TAXONNAME                 |     1 |    31 |       |     4   (0)| 00:00:01 |
    |  24 |           TABLE ACCESS FULL           | GENEATTRIBUTES10650       | 40957 |  1679K|       |   795   (1)| 00:00:10 |
    |* 25 |          INDEX RANGE SCAN             | AASEQUENCEIMP_REVIX6      |   768 |  8448 |       |     5   (0)| 00:00:01 |
    |  26 |         INDEX FAST FULL SCAN          | SECONDARYSTRUCTURE_REVIX9 | 37945 |   222K|       |    33   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       7 - access("GA"."ORGANISM"="TN"."NAME")
       9 - access("TN"."TAXON_ID"="T"."TAXON_ID")
      10 - access("T"."TAXON_ID"="TAXON_ID")
      11 - access("AA_SEQUENCE_ID"="SS"."AA_SEQUENCE_ID")
      14 - filter("T"."RANK"<>'species')
      16 - access("AA_SEQUENCE_ID"="SS"."AA_SEQUENCE_ID")
      18 - access("GA"."ORGANISM"="TN"."NAME")
      22 - access("TN"."TAXON_ID"="TS"."TAXON_ID")
      25 - access("TS"."TAXON_ID"="TAXON_ID")
    46 rows selected.
    Without the CREATE TABLE, the plan for the SELECT alone looks like this:
    | Id  | Operation                           | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                    |                           |     2 |   234 |  1786   (1)| 00:00:22 |
    |   1 |  SORT GROUP BY                      |                           |     2 |   234 |  1786   (1)| 00:00:22 |
    |   2 |   VIEW                              |                           |     2 |   234 |  1785   (1)| 00:00:22 |
    |   3 |    SORT UNIQUE                      |                           |     2 |   198 |  1785  (48)| 00:00:22 |
    |   4 |     UNION-ALL                       |                           |       |       |            |          |
    |*  5 |      HASH JOIN                      |                           |     1 |   103 |   949   (1)| 00:00:12 |
    |   6 |       NESTED LOOPS                  |                           |   199 | 19303 |   915   (1)| 00:00:11 |
    |   7 |        NESTED LOOPS                 |                           |    13 |  1118 |   850   (1)| 00:00:11 |
    |   8 |         NESTED LOOPS                |                           |    13 |   949 |   824   (1)| 00:00:10 |
    |   9 |          VIEW                       | VW_DTP_E387155E           |    13 |   546 |   797   (1)| 00:00:10 |
    |  10 |           HASH UNIQUE               |                           |    13 |   546 |   797   (1)| 00:00:10 |
    |  11 |            TABLE ACCESS FULL        | GENEATTRIBUTES10650       | 40957 |  1679K|   795   (1)| 00:00:10 |
    |  12 |          TABLE ACCESS BY INDEX ROWID| TAXONNAME                 |     1 |    31 |     3   (0)| 00:00:01 |
    |* 13 |           INDEX RANGE SCAN          | TAXONNAME_IND02           |     1 |       |     2   (0)| 00:00:01 |
    |* 14 |         TABLE ACCESS BY INDEX ROWID | TAXON                     |     1 |    13 |     2   (0)| 00:00:01 |
    |* 15 |          INDEX UNIQUE SCAN          | PK_TAXON                  |     1 |       |     1   (0)| 00:00:01 |
    |* 16 |        INDEX RANGE SCAN             | AASEQUENCEIMP_REVIX6      |    15 |   165 |     5   (0)| 00:00:01 |
    |  17 |       INDEX FAST FULL SCAN          | SECONDARYSTRUCTURE_REVIX9 | 37945 |   222K|    33   (0)| 00:00:01 |
    |  18 |      NESTED LOOPS                   |                           |     1 |    95 |   834   (1)| 00:00:11 |
    |  19 |       NESTED LOOPS                  |                           |     1 |    89 |   833   (1)| 00:00:10 |
    |* 20 |        HASH JOIN                    |                           |     1 |    78 |   828   (1)| 00:00:10 |
    |  21 |         NESTED LOOPS                |                           |       |       |            |          |
    |  22 |          NESTED LOOPS               |                           |    13 |   949 |   824   (1)| 00:00:10 |
    |  23 |           VIEW                      | VW_DTP_2AAE9FCE           |    13 |   546 |   797   (1)| 00:00:10 |
    |  24 |            HASH UNIQUE              |                           |    13 |   546 |   797   (1)| 00:00:10 |
    |  25 |             TABLE ACCESS FULL       | GENEATTRIBUTES10650       | 40957 |  1679K|   795   (1)| 00:00:10 |
    |* 26 |           INDEX RANGE SCAN          | TAXONNAME_IND02           |     1 |       |     2   (0)| 00:00:01 |
    |  27 |          TABLE ACCESS BY INDEX ROWID| TAXONNAME                 |     1 |    31 |     3   (0)| 00:00:01 |
    |  28 |         TABLE ACCESS FULL           | TAXONSPECIES10646         |    12 |    60 |     3   (0)| 00:00:01 |
    |* 29 |        INDEX RANGE SCAN             | AASEQUENCEIMP_REVIX6      |   768 |  8448 |     5   (0)| 00:00:01 |
    |* 30 |       INDEX RANGE SCAN              | SECONDARYSTRUCTURE_REVIX9 |     1 |     6 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       5 - access("AA_SEQUENCE_ID"="SS"."AA_SEQUENCE_ID")
      13 - access("ITEM_1"="TN"."NAME")
      14 - filter("T"."RANK"<>'species')
      15 - access("TN"."TAXON_ID"="T"."TAXON_ID")
      16 - access("T"."TAXON_ID"="TAXON_ID")
      20 - access("TN"."TAXON_ID"="TS"."TAXON_ID")
      26 - access("ITEM_1"="TN"."NAME")
      29 - access("TS"."TAXON_ID"="TAXON_ID")
      30 - access("AA_SEQUENCE_ID"="SS"."AA_SEQUENCE_ID")
    50 rows selected.

    Charles Hooper wrote a series of blog entries on a similar topic some time ago: http://hoopercharles.wordpress.com/2010/12/15/select-statement-is-fast-insert-into-using-the-select-statement-is-brutally-slow-1/ (including a lot of useful comments) and two follow-up articles. I have to confess that I did not read the posts again, but I think you will find some good ideas there on how to analyze the problem.
    Regards
    Martin Preiss
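
    Not from the thread, but a common next step when two plans differ this much is to compare estimated and actual row counts. A generic sketch (the query here is only a stand-in for the real statement):

    -- Enable rowsource statistics, run the statement, then fetch the
    -- actual plan (E-Rows vs A-Rows) from the cursor cache.
    ALTER SESSION SET statistics_level = ALL;

    SELECT /*+ gather_plan_statistics */ owner, object_type, COUNT(*) AS cnt
    FROM   all_objects
    GROUP  BY owner, object_type;

    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

    -- For the CREATE TABLE ... AS SELECT version, look up its sql_id in
    -- v$sql and pass it explicitly to DBMS_XPLAN.DISPLAY_CURSOR.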

Why can I not use RowID in the WHERE clause but can use it in the ORDER BY clause

    I am on SQL Server 2008.
    1. If I use
    SELECT (ROW_NUMBER()  over
    (order by ImportId, ScenarioId, SiteID, AssetID, LocalSKUID, WEEKID, MonthID)) RowID, * 
      FROM [JnJ_Version1].[dbo].[td_Production_Week]
      order by RowID
    Statement works
    But
    2. If I use
    SELECT (ROW_NUMBER()  over
    (order by ImportId, ScenarioId, SiteID, AssetID, LocalSKUID, WEEKID, MonthID)) RowID, * 
      FROM [JnJ_Version1].[dbo].[td_Production_Week]
      where  RowID > 10000
    I get an error: RowID is an invalid column name. Why? How do I correct query 2?

    This is due to the logical evaluation order of a SELECT statement. Logically, a SELECT statement is computed in the order:
    FROM (which includes JOIN)
    WHERE
    GROUP BY
    HAVING
    SELECT
    ORDER BY
    OFFSET
    Thus, you can use what is defined in the SELECT list in the ORDER BY clause, but not in the WHERE clause.
    In the case of row_number(), this has immediate repercussions. row_number() is computed from the rows as they arrive at the SELECT clause, and if you could then filter on that value in the WHERE clause you would be going round in circles.
    To do what you are looking for, you nest the query, for instance with a CTE:
    WITH numbering AS (
       SELECT (ROW_NUMBER() OVER
              (ORDER BY ImportId, ScenarioId, SiteID, AssetID, LocalSKUID, WEEKID, MonthID)) AS RowID, *
       FROM [JnJ_Version1].[dbo].[td_Production_Week]
    )
    SELECT *
    FROM   numbering
    WHERE  RowID > 10000
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Use of Association Tables in Place of FK's  --  O/R Mapping

    We are currently in the process of redesigning one of our systems following an OO model, to be implemented in Java. Our current Oracle database is 8.1.7. We are coding SQL in JDBC directly, without the use of mapping tools. My role in the project is jr. schema DBA.
    The OO designer on our team objects to holding foreign key attributes in the Java classes when (from the object model's perspective) they aren't needed (i.e. using collections will indicate parent-child relationships).
    To get around the "foreign key problem" the OO designer is pushing for the use of "association tables" in the database to replace foreign key relationships. The designer and other members of the team (the sr. schema DBA) cite literature on O/R mapping which suggests that this technique is a valid and viable approach.
    So as a simple example...
    DEP              EMP
    =======          =======
    # dep_no         # emp_no
    dep_name         dep_no
                     emp_name
    Becomes...
    DEP              EMP              A_DEP_EMP
    =======          =======          =========
    # dep_no         # emp_no         # dep_no
    dep_name         emp_name         # emp_no
    Where A_DEP_EMP is the association table showing the relationship between department and employee.
    To me it seems like we are making poor and unnecessary database design decisions to support the object model. My objection to this association-table approach is that we are failing to use the inherent strengths of a relational database:
    1) Both the dep and emp must exist before the association is built (assuming the association has FK's to the DEP and EMP tables) -- and yes I'm aware delaying the enforcement of FKs until commit time is an option
    2) The model does not protect against inserting orphaned emps into the EMP table (assuming we'd want to protect against this)
    3) The association table indicates a N:M cardinality. Assuming the design calls for a 1:M cardinality a UK on the emp_no column in the association table would be required. When I brought this up, the sr. schema dba raised concerns about the overhead of the UK and suggested that the application would be responsible for enforcing the cardinality requirements.
    The alternative that seems more appealing is managing the FK details in a persistence layer of the Java classes (currently the notion of separate view and entity objects is absent in our design -- the database interface is directly in each low-level java class).
    If I am correct in my understanding of Oracle's implementation of BC4J, the foreign keys are handled in the persistence layer (view object) without the need to actively maintain them in the entity object. Other O/R mapping models also seem to suggest that this is the way to go (e.g. Martin Fowler's recommendations, http://www.martinfowler.com/isa/).
    So my question for the community at large:
    - Which approach would you use?
    - Has anyone been working with association tables in place of foreign keys? If so what are your experiences?
    - am i off-base with my objections?
    any feedback / comments much appreciated

    Martin,
    I have seen these kinds of discussions over and over again, and now you happen to be right in the middle of one. Without going into technical details, some statements that might help:
    1) object model designers and relational model designers will NEVER 'agree', since the techniques used are, by nature, too different (they solve different problems, which is fine, but when you start to talk about them together, problems arise)
    2) my experience is that they do not have to 'agree', as long as they can 'work alongside each other'. You are not there to tell the object designers how to do their work, the same way they are not there to tell you how to do your job.
    3) as a senior DBA, I regard it as my primary task to enforce regulations that will make the stored data as meaningful and correct as possible. If I'm following relational principles to do so, I will use all of them where I find them applicable, regardless of how that data is used later on. Any suggestion by any person that undermines this principle will not get implemented. Exceptions (<0.1%) not taken into account. Also, there is a 'politically correct' way of telling this and a 'bold' way.
    4) choose the 'politically correct' one since it will save you a lot of stress :)
    5) if you violate rule 3, and shit happens, you will be held accountable (and rightfully so).
    6) if you have a more senior person telling you what to do, do it, but state your concerns very clearly in an email and save that mail + reply somewhere safe for future discussions. Refer to this if shit happens and you are, unrightfully so, held accountable.
    7) as a DBA, never trust application logic to enforce data integrity. It is your job, not the application developer's, and you have facilities that make it much easier for you than for the app dev. Application developers don't have testing data integrity as a primary task; if it looks OK on the screen they pass their tests. You, on the other hand, perform special tests they don't. This is your additional value to the project.
    8) in Oracle 8i and above I found the use of views and 'instead of' triggers very useful for mimicking objects. Object designers/developers think they get simple tables they can map objects to, and underneath you yourself check that everything is OK. The disadvantage is that you need to do some extra work not easily accounted for, and you might run into some performance issues, but that is your challenge as you want to become a senior DBA ;). The advantage is that the discussion is over :) and reporting and other, non-object applications can interface more easily (which might be an actual decisive design argument; I've used it successfully multiple times).
    9) try to stay away from discussions of how things are done best between the two methods and why one method 'is better' than the other. Try to find a workable solution that will violate the least of the two designing principles and implement that.
    A lot is dependent on your project and it might be that in your case these senior people are quite right in their suggestions, but I've found that the above 'rules' work most of the time for me. I hope they at least give you something to think about. From what I read from your post, it sounds like you know the technical pros and cons but you have a tough time communicating to your peers. Perhaps these rules help a bit.
    Success,
    Lennert
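
    To make point 3 of the original question concrete, here is a hedged DDL sketch of the poster's DEP/EMP example with the unique key that restricts the association table to a 1:M relationship (constraint names and data types are assumptions):

    CREATE TABLE dep (
        dep_no    NUMBER       PRIMARY KEY,
        dep_name  VARCHAR2(60) NOT NULL
    );

    CREATE TABLE emp (
        emp_no    NUMBER       PRIMARY KEY,
        emp_name  VARCHAR2(60) NOT NULL
    );

    -- Association table replacing the FK column on EMP; the unique key
    -- on emp_no is what restricts the natural N:M shape to 1:M.
    CREATE TABLE a_dep_emp (
        dep_no  NUMBER NOT NULL REFERENCES dep (dep_no),
        emp_no  NUMBER NOT NULL REFERENCES emp (emp_no),
        CONSTRAINT a_dep_emp_pk PRIMARY KEY (dep_no, emp_no),
        CONSTRAINT a_dep_emp_uk UNIQUE (emp_no)
    );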

  • Best options to use in Temp Table

    Hello,
    I was just trying to figure out the best option to choose when we come across a scenario where we need to use a temp table, a table variable, or a temp table created on the fly.
    However, I could not see any big difference between those options. As per my understanding, a table variable is more convenient if the query logic is small and the result set is also comparatively small. Creating a temp table is also an easy option, but it takes more time and we cannot create any indexes on it. I am working on a query optimization task in which plenty of temp tables are used and the query takes more than five minutes to execute. We have created a few indexes on a few tables and reduced the query execution time to about 2 minutes. Can anyone give me more suggestions on this?
    I have gone through various articles and came to know that there is no single solution, and I am aware of the basic criteria: use SET NOCOUNT ON, order the tables in which the indexes are created, do not use SELECT * but only the columns which are really required, create indexes, and so on. Other than these I am stuck with the usage of temp tables. There are some limitations on how far I can convert all the temp table logic to CTEs (I am not saying it's not possible, I really don't have time to spend on the conversion). Any suggestions are welcome.
    Actual Query
    SELECT Code, dbo.GetTranslatedText(Name, 'en-US') AS Name
    FROM ProductionResponse.ProductionResponse
    00.00.02
    5225 rows
    With Table Variable
    DECLARE @General TABLE (Code NVARCHAR(MAX), Name NVARCHAR(MAX))
    INSERT INTO @General
    SELECT Code, dbo.GetTranslatedText(Name, 'en-US') AS Name
    FROM ProductionResponse.ProductionResponse
    SELECT * FROM @General
    00.00.03
    5225 rows
    With an Identity Column
    DECLARE @General TABLE (Id INT IDENTITY(1,1), Code NVARCHAR(MAX), Name NVARCHAR(MAX))
    INSERT INTO @General (Code, Name)
    SELECT Code, dbo.GetTranslatedText(Name, 'en-US') AS Name
    FROM ProductionResponse.ProductionResponse
    SELECT * FROM @General
    00.00.04
    5225 rows
    With Temp Table:
    CREATE TABLE #General (Id INT IDENTITY(1,1) PRIMARY KEY, Code NVARCHAR(MAX), Name NVARCHAR(MAX))
    INSERT INTO #General (Code, Name)
    SELECT Code, dbo.GetTranslatedText(Name, 'en-US') AS Name
    FROM ProductionResponse.ProductionResponse
    SELECT * FROM #General
    DROP TABLE #General
    00.00.04
    5225 rows
    With Temp Table on the Fly
    SELECT G.Code, G.Name
    INTO #General
    FROM (
        SELECT Code, dbo.GetTranslatedText(Name, 'en-US') AS Name
        FROM ProductionResponse.ProductionResponse
    ) G
    SELECT * FROM #General
    00.00.04
    5225 rows

    >> I was just trying to figure out the best options we can choose with when we come across a scenario where we need to use a Temp Table/Table Variable/Create a Temp Table on the fly. <<
    Actually, we want to avoid all of those things in a declarative/functional language. The goal is to write the solution in a single statement. What you are doing is mimicking a scratch tape in a 1950's tape file system. 
    Another non-declarative technique is to use UDFs, to mimic 1950's procedural code or OO style methods. Your sample code is full of COBOL-isms! In RDBMS we follow ISO-11179 rules, so we have “<something in particular>_code” rather than just “code” like
    a field within a COBOL record. The hierarchical record structure provides context, but in RDBMS, data elements are global.  Or better, they are universal names. 
    >> I am aware of the basic criteria like use SET NO COUNT ON, Order the table in which the indexes are created, Do not use SELECT * instead use only columns which are really required, CREATE INDEXes and all.<<
    All good, but you missed others. Never use the same name for a data element (scalars) and a table (sets). Think about what things like “ProductionResponse.production_response” mean. A set with one element is a bit weird, but that is what you said. Also, what
    is this response? A code? A count? It lacks what we call an attribute property. 
    This was one of the flaws we inherited when ANSI standardized SQL and we should have fixed it. Oh well, too late now. 
    Never use NVARCHAR(MAX). Why do you need to put all of the Soto Zen sutras in Chinese Unicode? When you use over-sized data elements, you eventually get garbage data. 
    >> Other than these I am stuck with the usage of temp tables. There are some limitations where I can convert all the Temp table logic to CTE (I am not saying its not possible, I really do not have time to spend for the conversion). Any suggestions are
    welcome.<<
    Yes! This is how we do declarative/functional programming! Make the effort, so the optimizer can work, so you can use parallelism and so you can port your code out of T-SQL dialect. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
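
    To make the "single statement" advice concrete, here is a hedged sketch of the table-variable test above rewritten as a CTE (the final filter is only a placeholder for whatever downstream logic the temp tables feed; it is not from the thread):

    WITH General AS (
        SELECT Code,
               dbo.GetTranslatedText(Name, 'en-US') AS Name
        FROM   ProductionResponse.ProductionResponse
    )
    SELECT Code, Name
    FROM   General
    WHERE  Name IS NOT NULL;  -- placeholder for the real downstream logic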

  • Why do we use Allowed Operations in DML Process

    Hello,
    Why do we use Allowed Operations in DML Process ??
    Can you please clear this confusion:
    I am using apex 4.1. oracle 11g R2 SOE ...
    Using the Wizard, I created a Form and IR on Dept Table...
    In the form page:
    - Create Button
    The name is "CREATE"
    NO Database Action
    - DML Process
    Allowed Operations: nothing is checked
    This will insert a new row in the Dept table
    In the form page:
    - Create Button
    The name is "CREATE2"
    Database Action : insert
    - DML Process
    Allowed Operations: nothing is checked
    This will insert a new row in the Dept table
    So, what difference does it make if the INSERT check box in Allowed Operations of the DML Process is TICKED OR NOT?
    Regards,
    Fateh

    kdm7 wrote:
    Okay.
    So can we keep a web button to access the www.ni.com ? So that web site opens only when button pressed?
    P.S. I'm a newbie.
    Yes, you can also, e.g. include a help file or manual as html and open that in the browser.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • Use global temp table for DML error logging

    our database is 11.2.0.4 enterprise edition on solaris 10
    We are wondering if anyone has an opinion on, or has tried, using a global temporary table for DML error logging. We have a fairly busy transactional database with two hot tables for inserts. The regular error table created with dbms_errlog has caused many deadlocks which we don't quite understand yet. We have thought about using a global temporary table for the purpose, and that seemed to work, but we can't read the errors from the GTT: the table is empty even when reading from the same session that does the inserts. Does anyone have an idea why?
    Thanks

    The insert into the error logging table is done in a recursive transaction, so it is separate from the transaction in your session that is doing the actual insert.
    Adapted from http://oracle-base.com/articles/10g/dml-error-logging-10gr2.php
    INSERT INTO dest
    SELECT *
    FROM  source
    LOG ERRORS INTO err$_dest ('INSERT') REJECT LIMIT UNLIMITED;
    99,998 rows inserted.
    select count(*) from dest;
      COUNT(*)
        99998
    SELECT *
    FROM  err$_dest
    WHERE  ora_err_tag$ = 'INSERT';
    1400    "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")"        I    INSERT    1000        Description for 1000
    1400    "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")"        I    INSERT    10000        Description for 10000
    1400    "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")"        I    INSERT    1000        Description for 1000
    1400    "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")"        I    INSERT    10000        Description for 10000
    rollback;
    select count(*) from dest;
      COUNT(*)
            0
    SELECT *
    FROM  err$_dest
    WHERE  ora_err_tag$ = 'INSERT';
    1400    "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")"        I    INSERT    1000        Description for 1000
    1400    "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")"        I    INSERT    10000        Description for 10000
    1400    "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")"        I    INSERT    1000        Description for 1000
    1400    "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")"        I    INSERT    10000        Description for 10000

CSV output (export to Excel) doesn't use the right code table

    CSV output (export to Excel) doesn't use the right code table. We use code table 8859-2 (reports and forms are OK). Why doesn't the export to Excel use this code table?
    Thanks for the answers

    You could also take the example from my blog that Denes mentioned and instrument it to output either CSV or HTML. This way, you could have a single procedure that works for both formats.
    All you would need to do is handle each field differently: pad it with a TD and /TD (with proper brackets, of course) if it's HTML, or just append a comma if it's CSV.
    Thanks,
    - Scott -
    http://spendolini.blogspot.com/
    http://sumnertech.com/
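
    A rough PL/SQL sketch of that "one procedure, two formats" idea (the procedure name, the p_format parameter and the use of htp.prn are illustrative assumptions, not code from the thread):

    CREATE OR REPLACE PROCEDURE emit_field (
        p_value  IN VARCHAR2,
        p_format IN VARCHAR2 DEFAULT 'CSV'   -- 'CSV' or 'HTML'
    ) AS
    BEGIN
        IF p_format = 'HTML' THEN
            -- wrap the value in a table cell
            htp.prn('<td>' || p_value || '</td>');
        ELSE
            -- plain CSV: value plus separator
            htp.prn(p_value || ',');
        END IF;
    END emit_field;
    /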

Why can't I use a hidden or display-only item to store a value for insert?

    hi, Gurus:
    I have a question:
    I implemented a form with a report region on a page. The update works OK, but the add function has a problem:
    There is a column, offender_ID, which is a foreign key to another table and should not be null during insert. However, even though I pass the offender ID from the master page when the user clicks the create button, and it does show in the form, the item must be a text field for the insert to succeed. Why can't I use a hidden or display-only item to store this value for the insert? (If I use a hidden or display-only item, the insert won't be successful; APEX reports that I tried to insert a null value into the offender_ID column.)
    Many thanks in advance.
    Sam

    Hi,
    There is a column, offender_ID, which is a foreign key to another table and should not be null during insert. However, even though I pass the offender ID from the master page when the user clicks the create button, and it does show in the form, the item must be a text field for the insert to succeed. Why can't I use a hidden or display-only item to store this value for the insert?
    I think both hidden and display-only items have attributes that can cause problems, because these items function differently from ordinary page items. Display-only items have a "Setting" of "Save Session State" Yes/No; that can be a problem.
    Would you do this? Make these items regular items instead and see if you can get those working. Then, we will try to change the fields back to hidden or display only.
    Howard
    Congratulations. Glad you found the solution.

  • How to write - Perform using variable changing tables

    hi Gurus,
    I am facing an issue while writing a perform statement in my code.
    PERFORM get_pricing(zvbeln) USING nast-objky
                                  CHANGING gt_komv
                                           gt_vbap
                                           gt_komp
                                           gt_komk.
    in program zvbeln :-
    FORM get_pricing  USING    p_nast_objky TYPE nast-objky
                      tables   p_gt_komv type table komv
                               p_gt_vbap type table vbapvb
                               p_gt_komp type table komp
                               p_gt_komk type table komk.
      BREAK-POINT.
      DATA: lv_vbeln TYPE vbak-vbeln.
      MOVE : p_nast_objky  TO lv_vbeln.
      CALL FUNCTION '/SAPHT/DRM_ORDER_PRC_READ'
        EXPORTING
          iv_vbeln = lv_vbeln
        TABLES
          et_komv  = p_gt_komv
          et_vbap  = p_gt_vbap
          et_komp  = p_gt_komp
          et_komk  = p_gt_komk.
    ENDFORM.                    " GET_PRICING
    But it's giving an error. Please let me know how I can solve this.

    Hi,
    Please incorporate these changes and try:
    PERFORM get_pricing(zvbeln) TABLES gt_komv
                                       gt_vbap
                                       gt_komp
                                       gt_komk
                                USING  nast-objky.
    In program zvbeln:
    FORM get_pricing TABLES p_gt_komv STRUCTURE komv
                            p_gt_vbap STRUCTURE vbapvb
                            p_gt_komp STRUCTURE komp
                            p_gt_komk STRUCTURE komk
                     USING  p_nast_objky TYPE nast-objky.
    * rest of the code stays the same
    ENDFORM.
    Note: Please check lv_vbeln after the MOVE statement.
    Hope this will help you.
    Regards,
    Smart Varghese

    Hi All, I m installing SOA suite in high availability environment (cluster) and I want to understand why I do need to configure Server Migration for SOA and OSB managed servers, what will happened if one of the managed serves fails and I didn't migra