Tkprof list each sql statement

I think I posted this question once before, but I cannot find it now, so I'll ask it again.
I have a 10046 trace with a statement executed twice in the same session (same bind values), and I get a different plan for each execution.
When the trace is formatted, the SQL is shown only once.
I wonder if I can make tkprof list each single execution individually. I checked the aggregate option, but if I understand it
correctly it only splits the SQL up by user.
Does anyone know a tool that can do what I want, or can I do this with tkprof?
Version is 11.2.0.3
Appreciate your feedback.
Thanks
Magnus

Hi Magnus,
a SQL profiler can only use and interpret the information that was traced. In your case you have captured the following: "As of 11.1 an execution plan is written to the trace file only after the first execution of every cursor. The execution statistics associated with it are the ones of the first execution only." TKPROF itself is also able to regenerate and show an execution plan for a SQL statement (via its explain option, if no plan was captured at all), but this is unreliable due to various limitations (e.g. cardinality feedback and the general limits of EXPLAIN PLAN).
You need to use the following trace options if you want an execution plan for every execution in the raw SQL trace file for further interpretation:
SQL> exec DBMS_MONITOR.SESSION_TRACE_ENABLE(NULL,NULL,TRUE,TRUE,'ALL_EXECUTIONS');
Regards
Stefan
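For completeness, a minimal sketch of wrapping such a trace up, assuming it is taken in the current session: locate the trace file via v$diag_info (available from 11g onwards) and switch tracing off again before formatting the file with TKPROF.
SQL> SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
SQL> exec DBMS_MONITOR.SESSION_TRACE_DISABLE(NULL,NULL);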

Similar Messages

  • Procedure not checking each sql statement.

    Hi All,
    I have created 2 tables named A1 and B1. Table A1 has the columns no, sal, comm and load_date; NO is the primary key.
    The second table is B1. It has the columns id, phone_no and load_date, and ID has a NOT NULL constraint.
    After that I created 2 procedures, one for A1 and one for B1. Within those procedures I used SQL INSERT statements.
    In the procedures I used some valid SQL statements and some invalid ones (invalid meaning they violate the constraints, because I supplied duplicate and null values). While executing the procedures, a procedure raises an error because of an invalid statement, and in those procedures I did not specify any exceptions.
    If I specify an exception handler, the procedure executes successfully but some records are not loaded because the procedure exits early. How do I make the server check each and every INSERT statement?
    EX:
    Say I issue 6 inserts: records 1 to 3 are valid statements, record 4 is a copy of a previous record (duplicate), and records 5 and 6 are valid.
    The procedure executes successfully. It loaded records 1 to 4 but after that did not load records 5, 6 and 7. How do we get record 7 inserted? Record 7 is actually a valid statement and should be inserted. Please tell me how to handle each and every SQL statement, whether it succeeds or not.
    create or replace procedure a_proc as
    --declare
    begin
    insert into a1 values (100,2000,300,sysdate);
    insert into a1 values(200,1000,400,sysdate);
    insert into a1 values(300,3000,500,sysdate);
    insert into a1 values(400,6000,600,sysdate);
    insert into a1 values(400,900,700,sysdate);
    insert into a1 values(400,10000,1200,sysdate);
    insert into a1 values(900,11000,1300,sysdate);
    commit;
    EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.PUT_LINE('ERROR - '||sqlerrm);
    end a_proc;
    In table B1 the columns are ID, PHONE_NO and LOAD_DATE; ID is the NOT NULL column.
    To populate B1 I created the following procedure:
    create or replace procedure b_proc as
    --declare
    begin
    insert into b1 values(1,123456,sysdate); -- 1 record
    insert into b1 (phone_no,load_date) values (7896538,sysdate); --2 record
    insert into b1 (phone_no) values(6763723458); ----3 record
    insert into b1 (phone_no) values(453465778); --4 record
    insert into b1 values(400,72894894,sysdate); --5 record
    insert into b1 values(500,72894894,sysdate); --6 record
    commit;
    EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.PUT_LINE('ERROR - '||sqlerrm);
    end b_proc;
    If I execute the above procedure it completes successfully, but it inserts only the first record and does not insert the 5th and 6th records. How do we handle the exceptions so that the 5th and 6th records are inserted as well?
    Thanks and Regards,
    Venkat

    Hi,
    Please find the answer to your question below:
    Venkat: The procedure executes successfully. It loaded records 1 to 4, after that it did not load records 5, 6 and 7. Record 7 is actually a valid statement and should be inserted. Please tell me how to handle each and every SQL statement, whether it succeeds or not.
    Guna: The procedure hits an exception after the 4th record and does not process any more rows, as it exits out of the procedure; I believe the data is not committed as well. You need to handle exceptions per statement if the processing has to continue. The same applies to the case below as well.
    Regards,
    Guna
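    For illustration, a minimal sketch of that per-statement exception handling, based on the b_proc example above (column names as described in the thread); a failing row is logged and the remaining inserts still run:
    create or replace procedure b_proc as
      procedure try_insert (p_id number, p_phone number) is
      begin
        insert into b1 (id, phone_no, load_date) values (p_id, p_phone, sysdate);
      exception
        when others then dbms_output.put_line('row skipped - ' || sqlerrm);
      end try_insert;
    begin
      try_insert(1, 123456);
      try_insert(null, 7896538);   -- violates the NOT NULL constraint, but processing continues
      try_insert(400, 72894894);
      try_insert(500, 72894894);
      commit;
    end b_proc;
    /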

  • Tkprof : 0  user  SQL statement in trace file

    Hello!
    Please explain: what is the difference between running tkprof on a user session trace file and on a trace from an internal/background process?
    Am I getting the message "0 user SQL statements in trace file" because of this difference?
    In my target session I issued :
    begin
    sys.dbms_system.set_ev(...., .., 10046, 12, '');
    end;
    begin
    sys.dbms_system.set_ev(..., .., 10053, 1, '');
    end;
    /
    Thanks and regards,
    Pavel

    Did you try to tkprof a 10053 trace ?
    That is not going to show anything. 10053 traces are meant to be read as they are. tkprof is there to deal with 10046 traces.

  • How to find redo generated per each SQL Statement?

    Database version is 10.2.0.4. OS is AIX.
    We have excessive redo generation during a specific time of day. We would like to identify which statements are contributing to this excessive redo.
    ADDM shows this as first finding
    Waits on event "log file sync" while performing COMMIT and ROLLBACK operations were consuming significant database time.
    Is it possible to find out which sql statement is generating how much redo in bytes during a specific time?
    Thank You
    Sarayu

    user11181920 wrote:
    > Is it possible to find out which sql statement is generating how much redo in bytes during a specific time?
    > It will not help you. For example, you will see a bunch of UPDATE, DELETE and INSERT statements issued by the apps. So? What are you going to do with them? If you can modify the app to eliminate unnecessary DMLs it would be great of course, but it is unlikely.
    From what I've seen, namely developers putting way more commits in than the business rules would justify, it is not so unlikely.
    > Also, when DML statements generate redo they do not cause too many "log file sync" waits, because the log writer flushes the buffer as soon as it fills more than 30%, or after some time, or on commit/rollback.
    This may or may not be true, as there can be several reasons for log file sync: insufficient I/O (which is the only one you are addressing); CPU starvation (because the CPU ultimately controls I/O requests, and even fixing some unrelated SQL that hogs the CPU could change this); lgwr priority (on some systems it makes sense to simply raise the priority, though it can vary widely as to whether this makes sense); and other things like private redo, file option flags and the IMU relationship with undo that are too complicated to put in a partial sentence.
    > Waits on event "log file sync" while performing COMMIT and ROLLBACK operations were consuming significant database time.
    So the root cause can be too many commits, which you should evaluate and fix if possible. You can also use Tanel Poder's snapper script, including on lgwr, to see more closely what it is waiting on. See page 28: http://files.e2sn.com/slides/Tanel_Poder_log_file_sync.pdf
    > Right. It is because the log writer writes synchronously and reliably: it waits until the disk has written the data and responded that it is written OK. And hard disks are not the fastest things in a computer; they are mechanical, that is why.
    > I would recommend you:
    > 1. Place redo log files on fast disks, preferably dedicated to this purpose only. If other I/O frequently disturbs these drives, the heads have to change tracks, which causes the longest disk waits.
    > 2. Consider using faster technology such as SAN or SSD.
    Read this instant classic: http://www.pythian.com/news/28797/de-confusing-ssd-for-oracle-databases/
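    For reference, Oracle tracks redo volume as a session-level statistic rather than per statement, so the usual first approximation is to find the sessions generating the most redo and then look at what they are running. A minimal sketch against the standard v$ views (requires SELECT privilege on them):
    SELECT   s.sid, s.username, st.value AS redo_bytes
    FROM     v$sesstat  st
    JOIN     v$statname n ON n.statistic# = st.statistic#
    JOIN     v$session  s ON s.sid        = st.sid
    WHERE    n.name = 'redo size'
    ORDER BY st.value DESC;
    From the heaviest sessions, v$session.sql_id (or AWR, if licensed) can then point at the statements involved.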

  • SQL statement difference?

    I have just upgraded an Access database to MSSQL and all of the data was transferred successfully to the correct tables etc.
    So far so good..
    However, the pages that run from the Access db have sql statements to only show various fields for certain advertisements. This works fine, see here -
    http://www.fuerteventura.com/Furnishing/index.asp
    The pages that run from the MSSQL db have the same sql statements but for some reason don't show all of the data??
    See here -
    http://www.fuerteventura.com/Furnishing/index2.asp
    I have tried just creating a simple page to display all data and this works fine, so I'm guessing there's something I need to change in the SQL statements for the MSSQL database pages. As this is my very first bash at a mssql db I'm pretty much in the dark.
    Can anyone help?
    Here's 2 jpgs of each sql statement -
    Access - http://www.fuerteventura.com/access.jpg
    mssql - http://www.fuerteventura.com/mssql.jpg
    Thanks for any help
    Gary Woodward

    Great!
    Thanks very much!
    Works fine now. Thank God for this forum..
    Gary
    "Lionstone" <[email protected]> wrote in message news:[email protected]...
    > Make sure there are no other BLOB-type fields (text, ntext, image, varbinary) in your table.
    > All of these go to the end of the SELECT list, and if there is more than one, arrange them in the order they're defined.
    >
    > If you still have problems, post DDL from the table (in Query Analyzer, right click on the table and choose "script to new window as Create") and the query you use.
    >
    > "woodywyatt" <[email protected]> wrote in message news:[email protected]...
    >> Hi
    >> The tables have been changed from "text" to "varchar" in the conversion.
    >> I tried the listing idea anyway, but got the same results.
    >> Confused..
    >>
    >> "Lionstone" <[email protected]> wrote in message news:[email protected]...
    >>> If you incorrectly chose "text" instead of "varchar" for your text data, then you must abandon SELECT * and list every field you want to select, making sure to place all "text" type fields at the end of the list in the order they're defined in your table.
    >>> "woodywyatt"
    <[email protected]> wrote in message
    >>> news:[email protected]...
    >>>>I have just upgraded an Access database to
    MSSQL and all of the data was
    >>>>transferred successfully to the correct
    tables etc.
    >>>> So far so good..
    >>>> However, the pages that run from the Access
    db have sql statements to
    >>>> only show various fields for certain
    advertisements. This works fine,
    >>>> see here -
    >>>>
    http://www.fuerteventura.com/Furnishing/index.asp
    >>>> The pages that run from the MSSQL db have
    the same sql statements but
    >>>> for some reason don't show all of the data??
    >>>> See here -
    http://www.fuerteventura.com/Furnishing/index2.asp
    >>>>
    >>>> I have tried just creating a simple page to
    display all data and this
    >>>> works fine, so I'm guessing there's
    something I need to change in the
    >>>> SQL statements
    >>>> for the MSSQL database pages. As this is my
    very first bash at a mssql
    >>>> db I'm pretty much in the dark.
    >>>> Can anyone help?
    >>>> Here's 2 jpgs of each sql statement -
    >>>> Access -
    http://www.fuerteventura.com/access.jpg
    >>>> mssql -
    http://www.fuerteventura.com/mssql.jpg
    >>>>
    >>>> Thanks for any help
    >>>> Gary Woodward
    >>>>
    >>>>
    >>>>
    >>>
    >>>
    >>
    >>
    >
    >
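    For illustration only, a hypothetical shape of the fix described above (the table and column names here are invented): name the columns explicitly and keep any text/ntext/image columns at the end of the SELECT list, in the order they are defined:
    SELECT AdvertID, Company, Town, Phone, AdvertText
    FROM   Adverts
    WHERE  Category = 'Furnishing'
    -- AdvertText is the only text-type column, so it is listed last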

  • How to set the Query SQL Statement parameter dynamically in a Sender JDBC Adapter

    Hi All,
    I have one scenario in which we are using JDBC Sender Adapter.
    Now in this case, we need to set the Query SQL Statement to a SELECT statement based on some fields.
    This SQL statement is not constant; it will need to change.
    Sometimes the receiver will want to execute the SQL statement with these fields, and sometimes with different fields.
    We could create a separate channel for each SQL statement, but that is not an optimum solution.
    So I am looking for a way to set these parameters dynamically, or to set the SQL statement at runtime.
    Can you all please help me to get this?

    Shweta ,
    Sometimes the receiver will want to execute the SQL statement dynamically....
    How will you get the query dynamically? OK, let me assume they send the query through a file; then it is definitely possible. But you need a BPM, and not a sender JDBC adapter - use a receiver JDBC adapter instead.
    SQL Query File -> BPM -> Synchronous send [fetch data from DB] -> Response -> ...
    Do you think the above design will suit your case?
    Best regards,
    raj.

  • How to monitor worst performing sql statements

    Hi,
    I am new to Oracle 9i release 2. I come from the Windows world, where we used SQL Server.
    When we performance tested our product, we always monitored the worst performing SQL statements using the SQL Profiler. At the end of a 24 hour test, the profiler lists the SQL statements with the longest execution time. What is the equivalent Oracle 9i tool that will allow me to monitor the worst performing SQL during a test that lasts between 10 and 24 hours?
    Thanks,
    Paul0al

    Besides Statspack and OEM you have a few other options.
    If an SQL statement has been identified as performing poorly, or you have a job that you can extract the SQL from, then you have the option of explaining all the SQL statements and just reviewing the plans for reasonableness. You can also trace actual execution of the task or of individual statements (alter session set sql_trace = true for a basic 10046 event trace).
    When the SQL has not been identified in advance, you can query the shared pool SQL areas for SQL statements that have relatively high physical, logical, or combined I/O counts. Then you can perform tuning activities for these statements.
    HTH -- Mark D Powell --
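    As a minimal sketch of that shared-pool query (v$sql is available in 9iR2; order by physical or logical I/O as needed):
    SELECT *
    FROM  (SELECT hash_value, executions, disk_reads, buffer_gets, sql_text
           FROM   v$sql
           ORDER BY disk_reads DESC)
    WHERE ROWNUM <= 20;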

  • Look for history of SQL statements executed in the database

    Is there a way to look at the history, or a list, of SQL statements executed in the database?
    Something similar to the history command in Linux or the bash shell.

    The newer v$sqlstats view (10g, http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_2131.htm) is recommended over v$sql as (according to the documentation) it's "faster, more scalable, and has a greater data retention (the statistics may still appear in this view, even after the cursor has been aged out of the shared pool)", although it's missing a couple of the columns v$sql has.
    The history version (if you are licenced for AWR, which is part of the extra-cost Diagnostics Pack - you may not be licenced to use it even if the dictionary views are installed) is DBA_HIST_SQLSTAT.
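    For illustration, a small query against that history view (assuming the Diagnostics Pack licence mentioned above; the columns are the standard ones):
    SELECT   sql_id,
             SUM(executions_delta)         AS execs,
             SUM(elapsed_time_delta)/1e6   AS elapsed_seconds
    FROM     dba_hist_sqlstat
    GROUP BY sql_id
    ORDER BY elapsed_seconds DESC;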

  • Dynamically Identifying user issuing SQL statement

    Client wants to provide security to certain data by first capturing the identity of every user issuing a SQL statement, then, based on the user and a security table, allow access to certain data. Is this doable? TIA...

    Oracle has a whole product centered around this called "Label Security", which I'm guessing may be too much for your needs. Check out this paper for info about "virtual private databases": http://technet.oracle.com/deploy/security/oracle8i/pdf/vpd_wp6.pdf
    Basically, the idea is that the "old school", but still perfectly fine, way to do it is to create views for each group of users and grant permissions on the views to the appropriate users, optionally using synonyms in their schemas to give users the same name for the different views.
    The virtual private database and similar stuff is hard to explain. I think of it as the db engine auto-adding a where clause to each sql statement based upon who you are, if that makes any sense.
    I've tried this a couple of different ways, but have yet to hit upon one that seems easy & generally applicable.
    Good Luck -d
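    To make the VPD idea concrete, a hypothetical sketch (the schema, table and function names are invented; DBMS_RLS.ADD_POLICY is the real API): the policy function returns the predicate that the engine silently appends to each statement against the table.
    create or replace function user_rows_fn (p_schema varchar2, p_object varchar2)
      return varchar2
    as
    begin
      -- restrict rows to the logged-in user; a lookup against your security table would go here
      return 'owner_name = SYS_CONTEXT(''USERENV'',''SESSION_USER'')';
    end;
    /
    begin
      dbms_rls.add_policy(
        object_schema   => 'APP',
        object_name     => 'SENSITIVE_DATA',
        policy_name     => 'SENSITIVE_DATA_POLICY',
        function_schema => 'APP',
        policy_function => 'USER_ROWS_FN',
        statement_types => 'SELECT');
    end;
    /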

  • SQL Statement ignored performing List of Values query

    Hi, new user just learning the basics. I have created a simple table PERSON with the columns ID, firstname, lastname, phone, city, State_ID.
    Then I clicked "create Lookup table", giving STATE_LOOKUP with columns STATE_ID and STATE_NAME.
    I create a page and include all columns from PERSON. For State, the field is a select list that should do a lookup from the STATE_LOOKUP table. (I have entered 4 states in the table.)
    I am getting the following error however:
    Error: ORA-06550: line 1, column 14: PL/SQL: ORA-00904: "STATE_ID": invalid identifier ORA-06550: line 1, column 7: PL/SQL: SQL Statement ignored performing List of Values query: "select STATE_ID d, STATE_ID v from STATE_ID_LOOKUP order by 1".
    I have not entered any sql, just selected all of my options using defaults and dropdowns. What is causing the error and what do I need to change?
    Thanks

    Okay, learned something: The database link name used, must not contain a dash. The DB_DOMAIN is appended automatically when you create a DB link, so if IT contains a dash, the db link name does as well. Check DBA_DB_LINKS to make sure you don't hit this well-hidden feature.
    Regards
    Martin Klier
    [http://www.usn-it.de|http://www.usn-it.de]

  • Capture all SQL statements and archive to file in real time

    Want to Capture all SQL statements and archive to file in real time?
    Oracle Session Manager is the tool just you need.
    Get it at http://www.wangz.net
    This tool monitors how connected sessions use database instance resources in real time. You can obtain an overview of session activity sorted by a statistic of your choosing. For any given session, you can then drill down for more detail. You can further customize the information you display by specifying manual or automatic data refresh and the rate of automatic refresh.
    In addition to these useful monitoring capabilities, OSM allows you to send LAN pop-up messages to users of Oracle sessions.
    Features:
    --Captures all SQL statement text and archives it to files in real time
    --Pinpoints problematic database sessions and displays detailed performance and resource consumption data
    --Dynamically lists sessions holding locks and the sessions that are waiting for them
    --Supports killing several selected sessions
    --Sends LAN pop-up messages to users of Oracle sessions
    --Gives hit/miss ratios for the library cache, dictionary cache and buffer cache periodically, to help tune memory
    --Exports necessary data into a file
    --Modifies the dynamic system parameters on the fly
    --Syntax highlighting for SQL statements
    --An overview of the currently connected instance information, such as version, SGA, license, etc.
    --Finds objects by file id and block id
    Gudu Software
    http://www.wangz.net

    AnkitV wrote:
    Hi All
    I have 3 statements and I am writing something to a file using UTL_FILE.PUT_LINE after each statement is over. Each statement takes the time mentioned below to complete.
    I am opening file in append mode.
    statement1 (takes 2 mins)
    UTL_FILE.PUT_LINE
    statement2 (takes 5 mins)
    UTL_FILE.PUT_LINE
    statement3 (takes 10 mins)
    UTL_FILE.PUT_LINE
    I noticed that I am able to see the contents written by UTL_FILE.PUT_LINE only after statement3 is over, not immediately after statement1 and statement2 are done.
    Can anybody tell me if this is correct behavior or am I missing something here?
    The calling procedure must terminate before the data is actually written to the file.
    It is expected & correct behavior.
  • SQL Statement -- Throws GROUP BY issue on NVL

    Hey All,
    This is pretty much my first time tackling a very complex SQL Statement but the boss is adamant about it.
    Anyways, here is the information. I have 3 tables. Table 1 contains work order information that will be listed or grouped. Tables 2 and 3 house information that needs to be summed (estimate hours and actual hours).
    So my SQL Statement looked something like this... (Its a mess)
    select workorder.wolo3, workorder.wojp1, workorder.wonum, workorder.description, workorder.location, workorder.status, workorder.wopriority, workorder.targstartdate, workorder.reportdate, workorder.pmnum, workorder.schedstart, sum(labtrans.regularhrs + labtrans.premiumpayhours) as actualhours, sum(wplabor.quantity * wplabor.laborhrs) as estimatehrs
    from maximo.workorder left outer join maximo.labtrans on labtrans.regularhrs > 0 and labtrans.refwo = workorder.wonum and labtrans.siteid = workorder.siteid or labtrans.premiumpayratetype = 'MULTIPLIER' and labtrans.refwo = workorder.wonum and labtrans.siteid = workorder.siteid left outer join maximo.wplabor on wplabor.wonum = workorder.wonum and wplabor.siteid = workorder.siteid
    where workorder.istask=0 and workorder.siteid='NTS' and workorder.status in ('INPRG','SCHED') and workorder.crewid in ('MAINT','DAF') and workorder.schedstart <= TO_DATE('12-13-2009','MM-DD-YYYY') and wolo3 is not null group by workorder.wonum, workorder.status, workorder.wopriority, workorder.pmnum, workorder.location, workorder.description, workorder.targstartdate, workorder.reportdate, workorder.schedstart, workorder.wolo3, workorder.wojp1
    Now the problem is that the 2nd sum (Estimate Hrs) gets HUGE numbers if there are multiple rows in Actual Hours. So I did some searching. Seems that if I used the NVL function in the SQL Statement it would fix the issue...
    select workorder.wolo3, workorder.wojp1, workorder.wonum, workorder.description, workorder.location, workorder.status, workorder.wopriority, workorder.targstartdate, workorder.reportdate, workorder.pmnum, workorder.schedstart, sum(labtrans.regularhrs + labtrans.premiumpayhours) as actualhours,
    nvl((select sum(wplabor.quantity * wplabor.laborhrs) from maximo.wplabor where wplabor.wonum = workorder.wonum and wplabor.siteid = workorder.siteid),0)
    from maximo.workorder left outer join maximo.labtrans on labtrans.regularhrs > 0 and labtrans.refwo = workorder.wonum and
    labtrans.siteid = workorder.siteid or labtrans.premiumpayratetype = 'MULTIPLIER' and labtrans.refwo = workorder.wonum and
    labtrans.siteid = workorder.siteid
    where workorder.istask=0 and workorder.siteid='NTS' and workorder.status in ('INPRG','SCHED') and
    workorder.crewid in ('MAINT','DAF') and workorder.schedstart <= TO_DATE('12-13-2009','MM-DD-YYYY') and wolo3 is not null
    group by workorder.wonum, workorder.status, workorder.wopriority, workorder.pmnum, workorder.location, workorder.description, workorder.targstartdate, workorder.reportdate, workorder.schedstart, workorder.wolo3, workorder.wojp1
    However, I get the error "not a GROUP BY expression".
    I really need this to work. How can I get the NVL to work without having to define a group for it? The problem is that it is an alias column and you cannot GROUP BY those fields. Frustration sets in... UGH!
    Thanks in Advance, Ben.

    Hi, Ben,
    Welcome to the forum!
    user12273726 wrote:
    So my SQL Statement looked something like this... (Its a mess)
    You're right. Fix that first.
    Put each SELECT item, each condition, and each GROUP BY expression on a separate line. Indent to show where the clauses (SELECT, FROM, WHERE) are.
    For example:
    SELECT    workorder.wolo3
    ,       workorder.wojp1
    ,       workorder.wonum
    ,       workorder.description
    ,       workorder.location
    ,       workorder.status
    ,       workorder.wopriority
    ,       workorder.targstartdate
    ,       workorder.reportdate
    ,       workorder.pmnum
    ,       workorder.schedstart
    ,       SUM (labtrans.regularhrs + labtrans.premiumpayhours)     AS actualhours
    ,       SUM (wplabor.quantity * wplabor.laborhrs)          AS estimatehrs
    FROM                  maximo.workorder
    LEFT OUTER JOIN   maximo.labtrans     ON      labtrans.regularhrs         > 0
                                       AND     labtrans.refwo              = workorder.wonum
                             AND     labtrans.siteid              = workorder.siteid
                             OR     labtrans.premiumpayratetype = 'MULTIPLIER'     -- WARNING!  Don't mix AND and OR
                             AND     labtrans.refwo              = workorder.wonum
                             AND     labtrans.siteid              = workorder.siteid
    LEFT OUTER JOIN   maximo.wplabor     ON     wplabor.wonum              = workorder.wonum
                                       AND     wplabor.siteid              = workorder.siteid
    WHERE     workorder.istask      = 0
    AND       workorder.siteid       = 'NTS'
    AND       workorder.status       IN ('INPRG','SCHED')
    AND       workorder.crewid       IN ('MAINT','DAF')
    AND       workorder.schedstart   <= TO_DATE ('12-13-2009','MM-DD-YYYY')
    AND       wolo3                       IS NOT NULL
    GROUP BY  workorder.wonum
    ,            workorder.status
    ,       workorder.wopriority
    ,       workorder.pmnum
    ,       workorder.location
    ,       workorder.description
    ,       workorder.targstartdate
    ,       workorder.reportdate
    ,       workorder.schedstart
    ,       workorder.wolo3
    ,       workorder.wojp1
    ;
    When you post formatted text on this site, type these 6 characters:
    {code}
    (small letters only, inside curly brackets) before and after sections of formatted text, to preserve spacing.
    Now the problem is that the 2nd sum (Estimate Hrs) gets HUGE numbers if there are multiple rows in Actual Hours. Huge numbers are not necessarily wrong. Do you mean the numbers computed by your query are much bigger than they are supposed to be?
    So I did some searching. Seems that if I used the NVL function in the SQL Statement it would fix the issue...
    select workorder.wolo3, workorder.wojp1, workorder.wonum, workorder.description, workorder.location, workorder.status, workorder.wopriority, workorder.targstartdate, workorder.reportdate, workorder.pmnum, workorder.schedstart, sum(labtrans.regularhrs + labtrans.premiumpayhours) as actualhours,
    nvl((select sum(wplabor.quantity * wplabor.laborhrs) from maximo.wplabor where wplabor.wonum = workorder.wonum and wplabor.siteid = workorder.siteid),0)
    ...It looks like you have a Chasm Trap, where you have multiple, independent one-to-many relationships to the same table, and the solution is to aggregate them in separate queries.
    It looks like you're trying to aggregate wplabor in scalar sub-queries, which makes the GROUP BY more complicated (as you discovered), and is also very inefficient.
    It would be easier to code and more efficient to run if you aggregated wplabor in a sub-query, and then joined to this sub-query (and not the actual wplabor table) in your main query.
    If you'd like help, post a little sample data (CREATE TABLE and INSERT statements) for all three tables, and the results you want from that data.
    Simplify as much as possible. Instead of GROUPing BY 11 columns, just GROUP BY, say, 3 columns: the 2 involved in the join conditions, and one more. Once you understand how to do this, adding the other expressions will be trivial.
    You can also simplify by ignoring, for now, the WHERE clause, and, therefore, the columns involved in the WHERE clause. It looks like that has nothing to do with the problem. Again, adding the conditions later will be trivial.
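    A sketch of the suggested rewrite, using the table and column names from the thread (untested and stripped down to show only the shape of pre-aggregating wplabor and then joining to it):
    SELECT    w.wonum
    ,         SUM (l.regularhrs + l.premiumpayhours)   AS actualhours
    ,         NVL (MAX (wp.estimatehrs), 0)            AS estimatehrs
    FROM      maximo.workorder  w
    LEFT OUTER JOIN  maximo.labtrans  l   ON  l.refwo   = w.wonum
                                          AND l.siteid  = w.siteid
    LEFT OUTER JOIN (  SELECT   wonum, siteid
                       ,        SUM (quantity * laborhrs)  AS estimatehrs
                       FROM     maximo.wplabor
                       GROUP BY wonum, siteid
                    )  wp                 ON  wp.wonum  = w.wonum
                                          AND wp.siteid = w.siteid
    WHERE     w.istask = 0
    GROUP BY  w.wonum
    ;
    Because wplabor is already aggregated to one row per wonum/siteid, MAX simply picks up that single value, so it is no longer multiplied by the number of labtrans rows.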

  • SQL Statements in ABAP and meaning

    Hello Friends,
    Please, can anybody provide me with documentation on the different ABAP SQL statements and their usage/meanings?
    Thanks,
    Shreekant

    hi,
    Go to abapdocu -> ABAP Database access -> Open SQL; you will get examples there.
    For documentation go to SE38, specify the command and press F1.
    SELECT:
    Put the cursor on that word and press F1. You can see the whole documentation for SELECT statements.
    SELECT result
    FROM source
    INTO|APPENDING target
    [[FOR ALL ENTRIES IN itab] WHERE sql_cond]
    Effect
    SELECT is an Open-SQL-statement for reading data from one or several database tables into data objects.
    The select statement reads a result set (whose structure is determined in result ) from the database tables specified in source, and assigns the data from the result set to the data objects specified in target. You can restrict the result set using the WHERE addition. The addition GROUP BY compresses several database rows into a single row of the result set. The addition HAVING restricts the compressed rows. The addition ORDER BY sorts the result set.
    The data objects specified in target must match the result set result. This means that the result set is either assigned to the data objects in one step, or by row, or by packets of rows. In the second and third case, the SELECT statement opens a loop, which must be closed using ENDSELECT. For every loop pass, the SELECT-statement assigns a row or a packet of rows to the data objects specified in target. If the last row was assigned or if the result set is empty, then SELECT branches to ENDSELECT. A database cursor is opened implicitly to process a SELECT-loop, and is closed again when the loop is ended. You can end the loop using the statements from section leave loops.
    Up to the INTO resp. APPENDING addition, the entries in the SELECT statement define which data should be read by the database in which form. This requirement is translated in the database interface for the database system's programming interface and is then passed to the database system. The data are read in packets by the database and are transported to the application server by the database server. On the application server, the data are transferred to the ABAP program's data objects in accordance with the data specified in the INTO and APPENDING additions.
    System Fields
    The SELECT statement sets the values of the system fields sy-subrc and sy-dbcnt.
    sy-subrc Relevance
    0 The SELECT statement sets sy-subrc to 0 for every pass by value to an ABAP data object. The ENDSELECT statement sets sy-subrc to 0 if at least one row was transferred in the SELECT loop.
    4 The SELECT statement sets sy-subrc to 4 if the result set is empty, that is, if no data was found in the database.
    8 The SELECT statement sets sy-subrc to 8 if the FOR UPDATE addition is used in result, without the primary key being specified fully after WHERE.
    After every value that is transferred to an ABAP data object, the SELECT statement sets sy-dbcnt to the number of rows that were transferred. If the result set is empty, sy-dbcnt is set to 0.
    Notes
    Outside classes, you do not need to specify the target area with INTO or APPENDING if a single database table or a single view is specified statically after FROM, and a table work area dbtab was declared with the TABLES statement for the corresponding database table or view. In this case, the system supplements the SELECT-statement implicitly with the addition INTO dbtab.
    Although the WHERE-condition is optional, you should always specify it for performance reasons, and the result set should not be restricted on the application server.
    SELECT-loops can be nested. For performance reasons, you should check whether a join or a sub-query would be more effective.
    Within a SELECT-loop you cannot execute any statements that lead to a database commit and consequently cause the corresponding database cursor to close.
    SELECT - result
    Syntax
    ... lines columns ... .
    Effect The data in result defines whether the resulting set consists of multiple rows (table-like structure) or a single row ( flat structure). It specifies the columns to be read and defines their names in the resulting set. Note that column names from the database table can be changed. For single columns, aggregate expressions can be used to specify aggregates. Identical rows in the resulting set can be excluded, and individual rows can be protected from parallel changes by another program.
    The data in result consists of data for the rows lines and for the columns columns.
    SELECT - lines
    Syntax
    ... { SINGLE }
    | { { } } ... .
    Alternatives:
    1. ... SINGLE
    2. ... { }
    Effect
    The data in lines specifies that the resulting set has either multiple lines or a single line.
    Alternative 1
    ... SINGLE
    Effect
    If SINGLE is specified, the resulting set has a single line. If the remaining additions to the SELECT command select more than one line from the database, the first line that is found is entered into the resulting set. The data objects specified after INTO may not be internal tables, and the APPENDING addition may not be used.
    An exclusive lock can be set for this line using the FOR UPDATE addition when a single line is being read with SINGLE. The SELECT command is used in this case only if all primary key fields in logical expressions linked by AND are checked to make sure they are the same in the WHERE condition. Otherwise, the resulting set is empty and sy-subrc is set to 8. If the lock causes a deadlock, an exception occurs. If the FOR UPDATE addition is used, the SELECT command circumvents SAP buffering.
    Note
    When SINGLE is being specified, the lines to be read should be clearly specified in the WHERE condition, for the sake of efficiency. When the data is read from a database table, the system does this by specifying comparison values for the primary key.
    Alternative 2
    Effect
    If SINGLE is not specified and if columns does not contain only aggregate expressions, the resulting set has multiple lines. All database lines that are selected by the remaining additions of the SELECT command are included in the resulting list. If the ORDER BY addition is not used, the order of the lines in the resulting list is not defined and, if the same SELECT command is executed multiple times, the order may be different each time. A data object specified after INTO can be an internal table and the APPENDING addition can be used. If no internal table is specified after INTO or APPENDING, the SELECT command triggers a loop that has to be closed using ENDSELECT.
    If multiple lines are read without SINGLE, the DISTINCT addition can be used to exclude duplicate lines from the resulting list. If DISTINCT is used, the SELECT command circumvents SAP buffering. DISTINCT cannot be used in the following situations:
    If a column specified in columns has the type STRING, RAWSTRING, LCHAR or LRAW
    If the system tries to access pool or cluster tables and single columns are specified in columns.
    Note
    When specifying DISTINCT, note that you have to carry out sort operations in the database system for this.
    SELECT - columns
    Syntax
    | { {col1|aggregate( col1 )}
    {col2|aggregate( col2 )} ... }
    | (column_syntax) ... .
    Alternatives:
    1. ... *
    2. ... {col1|aggregate( col1 )}
    {col2|aggregate( col2 )} ...
    3. ... (column_syntax)
    Effect
    The input in columns determines which columns are used to build the resulting set.
    Alternative 1
    Effect
    If * is specified, the resulting set is built based on all columns in the database tables or views specified after FROM, in the order given there. The columns in the resulting set take on the name and data type from the database tables or views. Only one data object can be specified after INTO.
    Note
    If multiple database tables are specified after FROM, you cannot prevent multiple columns from getting the same name when you specify *.
    Alternative 2
    ... {col1|aggregate( col1 )}
    {col2|aggregate( col2 )} ...
    Effect
    A list of column labels col1 col2 ... is specified in order to build the resulting list from individual columns. An individual column can be specified directly or as an argument of an aggregate function aggregate. The order in which the column labels are specified is up to you and defines the order of the columns in the resulting list. Only if a column of the type LCHAR or LRAW is listed does the corresponding length field also have to be specified directly before it. An individual column can be specified multiple times.
    The addition AS can be used to define an alternative column name a1 a2 ... with a maximum of fourteen digits in the resulting set for every column label col1 col2 .... The system uses the alternative column name in the additions INTO|APPENDING CORRESPONDING FIELDS and ORDER BY. .
    http://help.sap.com/saphelp_nw04/helpdata/en/62/10a423384746e8bf5f15ccdd36e8b1/content.htm

  • The column name "PERNR" has two meanings. ABAP/4 Open SQL statement.

    Hi All,
    Could anyone advise on the errors I encountered in the code below?
    I get the error "The column name "PERNR" has two meanings. ABAP/4 Open SQL statement." This error happens for all the key fields I have selected in the code below (e.g. pernr, subty, objps, sprps, begda, endda, seqnr).
    The field ZEIH also raises an error: "Unknown column name "ZEIH". not determined until runtime, you cannot specify a field list."
      SELECT  pernr
                   subty
                   objps
                   sprps
                   begda
                   endda
                  seqnr
                  zlsch
                  ZEIH
      SELECT * INTO CORRESPONDING FIELDS
              OF TABLE  i_pa0009
              FROM pa0001 INNER JOIN pa0009
              ON pa0001pernr = pa0009pernr
              WHERE bukrs IN s_code AND
              banks NE space.

    Hi,
    In this query
    SELECT pernr
    subty
    objps
    sprps
    begda
    endda
    seqnr
    zlsch
    ZEIH
    if you have used a join and both database tables pa0001 and pa0009 have the above-mentioned fields, you will get that error message because the system cannot tell which table's field you are referring to. Instead, use them in this way:
    SELECT pa0001~pernr
    For each field, state the table it belongs to followed by ~ before the field name, in the above fashion; this will remove the error.
    Regards,
    Siddarth

  • SQL Statement / Count & Percentage

    Hello All,
    Below is the SQL statement that I'm using with an MS Access database. Now, I'm looking for an SQL statement that will get the same results using an SQL Compact database.
    Here is what I am using with MS Access that works great:
    queryString = "SELECT DStatus, Count(DStatus) As CountOfSummaryItem, (Count(*)/(Select count(*) FROM tbl_Records_DR)) as Percentage From tbl_Records_DR GROUP BY DStatus Order BY Count(DStatus) DESC;"
    Results are something like this out of a total 50 records in the database:
    Status   -   Count  -   Percentage
    Broken   -   5     -   10%
    Stopped -   10   -   20%
    Return   -   20   -   40%
    Closed   -   15   -   30%
    Thanks,
    ADawn

    SQL Server Compact has limited support for subqueries. I would suggest:
    SELECT DStatus, Count(*) FROM tbl_Records_DR GROUP BY DStatus
    to get the record count per "DStatus", and load the results into a generic List<> object, where you would do the math in code. Alternatively, first get the total record count into an int variable by executing a scalar command with the following instruction
    (C#):
    SqlCeCommand totalCountCommand = new SqlCeCommand("SELECT Count(*) FROM tbl_Records_DR", myConnection);
    int i = (int)totalCountCommand.ExecuteScalar();
    ...and then use that value in the following instruction, using either string concatenation or a command parameter, to divide each value by the total count...
    SqlCeCommand rowCountCommand = new SqlCeCommand("SELECT DStatus, Count(*), Count(*)/" + i.ToString() + " * 100 AS Percentage FROM tbl_Records_DR GROUP BY DStatus", myConnection);
    SqlCeDataReader r = rowCountCommand.ExecuteReader();
    This is not tested code, it's just meant to give you an idea on how to get the result.
    Alberto Silva Microsoft MVP - Device Application Development - http://msmvps.com/AlbertoSilva moving2u - R&D Manager - http://www.moving2u.pt
