Logical Reads for a SQL ID

Dear Experts,
Any idea how we can find the logical reads for a SQL ID over a 24-hour period? The AWR report shows physical read info; is there a data dictionary view that captures logical I/O?
Thanks

Figured it out.. This is it:
col Time_Taken for 999999999999
col log_reads for 9999999999999
select sql_id,
       sum(disk_reads_delta)             as phy_reads,
       sum(buffer_gets_delta)            as log_reads,
       sum(elapsed_time_delta / 1000000) as time_taken   -- microseconds to seconds
from   dba_maint.db_hist_sqlstats
where  parsing_schema_name = 'USERNAME'
and    begin_interval_time >= to_date('03-JAN-13', 'DD-MON-RR')  -- explicit conversion instead of an implicit date compare
and    sql_id in ('abcdefghijkl')
group  by sql_id
order  by phy_reads desc, log_reads desc, time_taken desc;
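For reference, a hedged equivalent against the standard AWR views (assuming dba_maint.DB_HIST_SQLSTATS is a local copy of DBA_HIST_SQLSTAT): joining DBA_HIST_SQLSTAT to DBA_HIST_SNAPSHOT bounds the window to exactly the last 24 hours without relying on the session date format.

select st.sql_id,
       sum(st.disk_reads_delta)             as phy_reads,
       sum(st.buffer_gets_delta)            as log_reads,
       sum(st.elapsed_time_delta / 1000000) as time_taken
from   dba_hist_sqlstat  st
join   dba_hist_snapshot sn
       on  sn.snap_id         = st.snap_id
       and sn.dbid            = st.dbid
       and sn.instance_number = st.instance_number
where  st.parsing_schema_name = 'USERNAME'
and    sn.begin_interval_time >= systimestamp - interval '1' day
and    st.sql_id in ('abcdefghijkl')
group  by st.sql_id
order  by log_reads desc;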

Similar Messages

  • SQL logic help for pl/sql block

    Hi All,
    I need your suggestions and comments for the below issue:
    I have two tables: table A and table B
    Table A has two columns as id and counts:
    Id Counts
    99 10
    999 13
    9999 7
    Table B has two columns as Id and order:
    Id Order
    99 1
    999 2
    9999 3
We need to update Order in Table B such that the Id with the highest count in Table A gets Order 1 in Table B, with the order increasing for the other Ids as their counts in Table A decrease. This will be a job that runs daily, looks at the counts in Table A, and updates Order in Table B accordingly.
It seems simple, but I am not getting it. Please help me out by writing a PL/SQL block.
    I will really appreciate your all comments and responses.
    Regards
    Dev

    Hi,
    Keen2Learn wrote:
    Hi All,
I am really grateful to all of you for your replies and comments. I changed ORDER to ORDERS. All your replies worked for me, but there is a slight change in the scenario which I need to discuss with you all.
    E.g: Table A has 10 rows like below:
    Id Counts Type
    99 10 A
    999 13 A
    9999 7 C
    99 4 B
    999 2 C
    88 2 A
    77 1 C
    777 3 B
    777 5 A
888 2 C
If you'd like help, please post CREATE TABLE and INSERT statements for your sample data (including table b as it is before the UPDATE or MERGE).
I populate the Orders column in Table B for each Id based on the grouped sum of counts for that Id in Table A.
Id 999 has the highest sum(count), 15, so it gets Orders 1 in Table B, and the same is done for the descending counts of each Id.
    Table B has 5 rows like below:
    Id Orders
    99 2
    999 1
    88 4
    777 3
    555 5
Assume Table B is static (no new record comes into it); only its Orders value changes for each Id, based on the counts for that Id in Table A. As you can see, Table B has Id 555, which is not in Table A, so we need to update its Orders to the highest number by taking its count as 0 (zero).
Sorry, it's unclear what you want to do.
    Post what you'd like table b to look like after the UPDATE or MERGE.
Right now I am doing it like this:
declare
    cursor c1 is
        select Id, SUM(COUNT), RANK() OVER (ORDER BY SUM(COUNT)) rnk
        from TableA
        where Id IN (SELECT Id FROM TableB)
        GROUP BY Id
        ORDER BY rnk DESC;
    i NUMBER := 1;
begin
    for curr in c1
    loop
        update TableB
        set orders = i
        where id = curr.id;
        i := i + 1;
    end loop;
end;
I'm not sure what you're trying to do, but I'll bet you don't need PL/SQL to do it. Use a single UPDATE or MERGE statement (inside PL/SQL if necessary).
But it is not updating Orders for Id 555 in TableB.
There is no row for id=555 in table b, and, according to your requirements, there never will be, because "Table B is static (no new record comes into it)". It's behaving exactly as you said you wanted it to. What's the problem?
Please provide your suggestions on what needs to be done to take care of this scenario. I will really appreciate all your suggestions and comments. Please let me know if you need more explanation.
Whenever you have a problem, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only), and the results you want from that data.
    In the case of a DML operation (such as UPDATE) the sample data should show what the tables are like before the DML, and the results will be the contents of the changed table(s) after the DML.
    Explain, using specific examples, how you get those results from that data.
    Always say what version of Oracle you're using.
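    As a hedged sketch of that single-statement approach (the column names counts and orders are assumed from the discussion above; Ids missing from Table A rank last because their count is treated as 0):
    MERGE INTO tableb b
    USING (
        SELECT tb.id,
               RANK() OVER (ORDER BY NVL(ta.total_counts, 0) DESC) AS rnk
        FROM   tableb tb
               LEFT JOIN (SELECT id, SUM(counts) AS total_counts
                          FROM   tablea
                          GROUP  BY id) ta
                      ON ta.id = tb.id
    ) r
    ON (b.id = r.id)
    WHEN MATCHED THEN UPDATE SET b.orders = r.rnk;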

  • About Logical Reads

    Problem:
    1.
If you run some SQL code and you read the logical reads, is it always better for performance to have a low value of logical reads, no matter what?
    2.
What is the definition of logical reads?
I'm a beginner in performance tuning.

Logical reads is a metric that can be used to evaluate query plans. Fewer logical reads is usually better. However, it's not the best metric; that is wallclock time, in my opinion. In particular, looking blindly at logical reads may lead you to think that a plan with lots of CPU-expensive hashing operations is better than a simpler plan with nested loop joins.
The nice thing about logical reads compared to wallclock time is that logical reads are not affected by other operations on the server.
OP, overall I agree with Erland: all other things being equal, fewer logical reads are better than more logical reads.
But sometimes other things are not equal - it does you little good to trade logical reads for a ton of CPU instead!
However, logical reads *are* affected, dramatically, by other operations on the server - they are not as purely "logical" as one might like! If your buffers are empty and you have to do physical IO to get the data, even the *logical* reads show higher
numbers! Exactly why this is the case is obscure, or complicated, or something, but it is. Similarly, even contention with other processes, especially for buffer space, can affect the "logical" reads to a small or large extent.
But the good news is that sometimes logical reads are really, really cheap, so sometimes you can get a query that does 100,000 logical reads but runs in a tenth of a second anyway. Whether it's worth a battle to cut the logical reads down then is up
to you.
    HTH,
    Josh
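    To see both metrics side by side, here is a minimal sketch (the SELECT is a placeholder for whatever query you are tuning):
    SET STATISTICS IO ON;    -- per-table logical/physical read counts
    SET STATISTICS TIME ON;  -- CPU time and elapsed (wallclock) time
    SELECT COUNT(*) FROM sys.objects;  -- substitute your query of interest here
    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;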

  • Infospoke - How to single Logical Filename for all infospokes

    Hello all,
    I want to only have one logical path and logical filename for all of my infospokes.
    For example if I have infospokes ZAAA, ZBBB, ZCCC, I want to have only one common setting in the destination properties of TC:RSBO but three different distinguishable files on my OS.
    I think with the standard settings, I have to make one logical file for every infospoke I have.
    I am aware that I can put <DATE> or <TIME> etc in the logical file settings but that will not distinguish which infospoke the file came from.
    I want my file to look more like <INFOSPOKE>_<DATE><TIME>.csv
    I've already seen the document "How to Extract Data with OPEN HUB to a Logical Filename"  and this is not it because I have BW3.5 and 3.5 already does what this article says.
    Any help would be greatly appreciated.
    Just in case, I am on BW3.5 SP17 on Windows/SQL.

    Jasprit,
    Thanks for your input.
The functionality of the FM you mentioned is similar to the InfoSpoke functionality...
Is it the same thing?
I wonder if using it means it is not part of the InfoSpoke license...
I will have our programmer look at it and see what he thinks, but in the meantime any other information is greatly appreciated, especially if I can stick with using the standard InfoSpoke functionality...
    Isn't this kind of functionality something that everyone wants?
    I wonder why SAP didn't make it like this?

  • How to avoid db file parallel read for nestloop?

After upgrading to 11gR2, one job took more than twice as long as it did on 10g and 11gR1 with compatibility set to 10.2.0.
Same hardware (see AWR summary below). My analysis points to the nested loop doing an index range scan on the inner table's index segment
and then using db file parallel read to fetch data from the table segment; for reasons I don't know, the parallel read is very slow.
The average wait is more than 300 ms. How can I influence the optimizer to choose db file sequential read to fetch data blocks from the inner table by tweaking
parameters? Thanks. YD
    Begin Snap: 13126 04-Mar-10 04:00:44 60 3.9
    End Snap: 13127 04-Mar-10 05:00:01 60 2.8
    Elapsed: 59.27 (mins)
    DB Time: 916.63 (mins)
    Report Summary
    Cache Sizes
    Begin End
    Buffer Cache: 4,112M 4,112M Std Block Size: 8K
    Shared Pool Size: 336M 336M Log Buffer: 37,808K
    Load Profile
    Per Second Per Transaction Per Exec Per Call
    DB Time(s): 15.5 13.1 0.01 0.01
    DB CPU(s): 3.8 3.2 0.00 0.00
    Redo size: 153,976.4 130,664.3
    Logical reads: 17,019.5 14,442.7
    Block changes: 848.6 720.1
    Physical reads: 4,149.0 3,520.9
    Physical writes: 16.0 13.6
    User calls: 1,544.7 1,310.9
    Parses: 386.2 327.7
    Hard parses: 0.1 0.1
    W/A MB processed: 1.8 1.5
    Logons: 0.0 0.0
    Executes: 1,110.9 942.7
    Rollbacks: 0.2 0.2
    Transactions: 1.2
    Instance Efficiency Percentages (Target 100%)
    Buffer Nowait %: 99.99 Redo NoWait %: 100.00
    Buffer Hit %: 75.62 In-memory Sort %: 100.00
    Library Hit %: 99.99 Soft Parse %: 99.96
    Execute to Parse %: 65.24 Latch Hit %: 99.95
    Parse CPU to Parse Elapsd %: 91.15 % Non-Parse CPU: 99.10
    Shared Pool Statistics
    Begin End
    Memory Usage %: 75.23 74.94
    % SQL with executions>1: 67.02 67.85
    % Memory for SQL w/exec>1: 71.13 72.64
    Top 5 Timed Foreground Events
    Event Waits Time(s) Avg wait (ms) % DB time Wait Class
    db file parallel read 106,008 34,368 324 62.49 User I/O
    DB CPU 13,558 24.65
    db file sequential read 1,474,891 9,468 6 17.21 User I/O
    log file sync 3,751 22 6 0.04 Commit
    SQL*Net message to client 4,170,572 18 0 0.03 Network

It's not possible to say anything just by looking at the events. You must understand that Statspack and AWR actually aggregate the data and then show the results. There may well be other areas that also need to be looked at, rather than just focusing on one event.
You have not mentioned any other information about the wait events, like their timings and so on. Please provide that too.
And if I understood your question correctly, you asked:
How to avoid these wait events?
What may be the cause?
I am afraid it's not possible to discuss each of these wait events here in complete detail, nor what to do when you see them. Please read the Performance Tuning guide, which describes these wait events and the corresponding actions.
    Please read and follow this link,
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/instance_tune.htm#i18202
    Aman....
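    A hedged lead worth testing (not from the reply above): in 11gR2, "db file parallel read" on the inner table of a nested loop usually comes from nested-loop join batching, an optimization introduced in that release. If the batched reads are slow on your storage, the NO_NLJ_BATCHING hint disables batching for a specific join, so the inner-table access reverts to single-block "db file sequential read". A minimal illustration against the demo SCOTT schema (your real tables and aliases go in its place):
    SELECT /*+ LEADING(d) USE_NL(e) NO_NLJ_BATCHING(e) */
           e.ename, d.dname
    FROM   dept d
           JOIN emp e ON e.deptno = d.deptno;
    Whether this helps depends entirely on your I/O subsystem, so compare timings before and after.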

  • [Logical Standby] Which table/SQL caused paging-out

    We have a Primary-Logical DR configuration.
Recently, it has had a problem with the logical standby: it is continuously paging out data from some transactions:
    SELECT SUBSTR(name, 1, 40) AS NAME, SUBSTR(value,1,32) AS VALUE FROM GV$LOGSTDBY_STATS;
    number of preparers     3
    number of appliers     18
    maximum SGA for LCR cache     4095
    parallel servers in use     24
    maximum events recorded     1000000
    preserve commit order     TRUE
    transaction consistency     FULL
    record skip errors     Y
    record skip DDL     Y
    record applied DDL     N
    record unsupported operations     Y
    coordinator state     IDLE
    transactions ready     7
    transactions applied     0
    coordinator uptime     9646
    realtime logmining     Y
    apply delay     0
    Log Miner session ID     1
    txns delivered to client     1068651
    DML txns delivered     1017135
    DDL txns delivered     15
    CTAS txns delivered     0
    Recursive txns delivered     51501
    Rolled back txns seen     23463
    LCRs delivered to client     11682189
    bytes of redo processed     14475529508
    bytes paged out     1482524624
    seconds spent in pageout     8922
    bytes checkpointed     0
    seconds spent in checkpoint     0
    bytes rolled back     7500032
    seconds spent in rollback     90
    seconds system is idle     0
    SELECT SID, SERIAL#, SPID, TYPE, HIGH_SCN, STATUS_CODE, STATUS
    FROM GV$LOGSTDBY_PROCESS
    ORDER BY TYPE, SPID;
    ANALYZER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    APPLIER     16116     ORA-16116: no work available
    BUILDER     16243     ORA-16243: paging out 4752 bytes of memory to disk
    COORDINATOR     16116     ORA-16116: no work available
    PREPARER     16127     ORA-16127: stalled waiting for additional transactions to be applied
    PREPARER     16127     ORA-16127: stalled waiting for additional transactions to be applied
    PREPARER     16127     ORA-16127: stalled waiting for additional transactions to be applied
    READER     16127     ORA-16127: stalled waiting for additional transactions to be applied
    select xidusn, xidslt, xidsqn, count(*) from system.logmnr_spill$
    group by xidusn, xidslt, xidsqn;
    996     46     249     254
    710     37     838     825
    623     3     706     254
    478     7     42564     254
    765     38     649     824
    42     6     415494     3729
    264     35     4817     3738
How can we identify the table/SQL to skip and instantiate later, so the logical database will not lag far behind?
Thank you.

    Hi,
The best way to find the SQL is to mine the current archive log being applied on the standby and check for the SQL. You might not get the exact SQL, but you will get the object that is being updated.
Or
You can use an AWR report from the logical standby for this time period to find the update statement that is resource-intensive.
There is no way to find the exact SQL on the primary which is causing the issue on the standby.
    Regards
    Anudeep
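    A hedged sketch of that log-mining approach (the archived log path is a placeholder; run it on the standby against a log currently being applied):
    EXEC DBMS_LOGMNR.ADD_LOGFILE('/u01/arch/1_1234_567890.arc', DBMS_LOGMNR.NEW);
    EXEC DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    -- which objects the redo in this log touches, busiest first
    SELECT seg_owner, table_name, COUNT(*) AS lcr_count
    FROM   v$logmnr_contents
    GROUP  BY seg_owner, table_name
    ORDER  BY COUNT(*) DESC;
    EXEC DBMS_LOGMNR.END_LOGMNR;
    Once the hot table is known, DBMS_LOGSTDBY.SKIP can exclude it and DBMS_LOGSTDBY.INSTANTIATE_TABLE can re-synchronize it later.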

  • Possibility to register Pre-/Post-Procedures for an SQL Template Handler

I would appreciate the possibility to register pre-/post-procedures for an SQL template handler in ORDS 3.0.
    Why:
    We use Oracle VPD/Row-Level-Security to secure data access. Hence a trigger sets a couple of attributes in the database session context at login time which are then used in static RLS predicates to limit which records the user can see/modify.
    With ORDS 3.0 all sessions are opened under the same technical user (e.g. APEX_REST_PUBLIC_USER), hence all users have the same/no attributes in the session context and could see/modify all data.
    To avoid this situation, I need to set the attributes (e.g. the authenticated user) in the database session context before the actual query/plsql handler is executed.
    Also, resetting the session context after the handler is executed would be good.
    This scenario is in line with scenarios 'One Big Application User' and 'Web-based applications' in http://docs.oracle.com/cd/B28359_01/network.111/b28531/vpd.htm#DBSEG98291.
    Different solution approach:
Kris suggested writing a PL/SQL handler where the pre-procedure is called before the business-logic procedure/query. This is OK for me as long as I modify data and only need to return little or no data.
As soon as I need to return a lot of data (e.g. select c1, c19, c30 from t1), this approach forces me to write a lot of code in the PL/SQL handler in order to marshal c1, c19 and c30 to JSON and put it in the HTTP response.
    But this is the beauty of ORDS - I can simply define a template, write a GET SQL Handler 'select c1, c19, c30 from t1'  and have the data available as REST service, without writing any code to write JSON marshaled data in the HTTP response.

    I tried to log the request at Oracle REST Data Services (ORDS) but I could only start a new discussion: Possibility to register Pre-/Post-Procedures for an SQL Template Handler
    As I mentioned there, the PL/SQL handler approach works for me as long as I have no or only little data to send back to the client (e.g. put/post/delete succeeded or an error message why the call failed).
If I need to return a lot of data from the PL/SQL handler I would, as far as I understand, need to marshal the data to JSON and write it to the response body in the PL/SQL handler.
    I don't want to do the marshaling, because ORDS does it better.
    However, this works for me:
I write a pipelined table function that takes as input the attributes I need to set in the session context. I can then reference it in the SQL handler:
select * from table(my_pipelined_function(:USER, ...))
    Now the JSON/HTTP response is created by ORDS again.
    I still needed to code a couple of lines, but it is way better than duplicating the functionality already existing in ORDS.
    With the hooks it would be perfect because I would not have to write any code (apart from the procedure to set the session context attributes), just configure the REST services in ORDS.
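    A hedged sketch of that pipelined-function workaround (t1, its three columns, and the set_session_ctx procedure are assumed names based on the discussion above):
    CREATE OR REPLACE TYPE t1_row_t AS OBJECT (
        c1  NUMBER,
        c19 VARCHAR2(100),
        c30 DATE
    );
    /
    CREATE OR REPLACE TYPE t1_tab_t AS TABLE OF t1_row_t;
    /
    CREATE OR REPLACE FUNCTION my_pipelined_function(p_user IN VARCHAR2)
        RETURN t1_tab_t PIPELINED
    AS
    BEGIN
        set_session_ctx(p_user);  -- set the VPD attributes before the query runs
        FOR r IN (SELECT c1, c19, c30 FROM t1) LOOP
            PIPE ROW (t1_row_t(r.c1, r.c19, r.c30));
        END LOOP;
        RETURN;
    END;
    /
    ORDS then marshals the rows of select * from table(my_pipelined_function(:USER)) to JSON as usual.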

  • Stored Procedures for Simple SQL statements

    Hi Guys,
    We are using Oracle 10g database and Web logic for frontend.
The product was previously developed in .NET and SQL Server, and it is now being redeveloped in Java (WebLogic) with an Oracle 10g database.
Since the project was developed in SQL Server, there are a great many procedures written for simple SQL queries. Now I would like to gather your suggestions/pointers on using procedures for simple select statements or inserts from Java.
    I have gathered some list for using PL/SQL procedure for simple select queries like
    Cons
If we use procedures for select statements, a great many ref cursors are opened for simple select statements (cursors opened at a huge rate)
Simple select statements are much faster than executing them from a procedure
    Pros
    Code changes for modifying select query in PL/SQL much easier than in Java
    Your help in this regard is more valuable. Please post your points / thoughts here.
    Thanks & Regards
    Srinivas
    Edited by: Srinivas_Reddy on Dec 1, 2009 4:52 PM

    Srinivas_Reddy wrote:
    Cons
If we use procedures for select statements, a great many ref cursors are opened for simple select statements (cursors opened at a huge rate)
Not entirely correct. All SQLs that hit the SQL engine are stored as cursors.
    On the client side, you have an interface that deals with this SQL cursor. It can be a Java class, a Delphi dataset, or a PL/SQL refcursor.
Yes, cursors are created/opened at a huge rate by the SQL engine. But it is capable of doing that. What you need to do to facilitate that is send it SQLs that use bind variables. This enables the SQL engine to simply re-use the existing cursor for that SQL.
Simple select statements are much faster than executing them from a procedure
Also not really correct. SQL performance is SQL performance. It has nothing to do with how you create the SQL on the client side and what client interface you use. The SQL engine does not care whether you use a PL/SQL ref cursor or a Java class as your client interface. That does not change the SQL engine's performance.
    Yes, this can change the performance on the client side. But that is entirely in the hands of the developer and how the developer selected to use the available client interfaces to interface with the SQL cursor in the SQL engine.
    Pros
Code changes for modifying a select query in PL/SQL are much easier than in Java
This is not a pro merely for ref cursors, but for using PL/SQL as the abstraction layer for the data model implemented, and having it provide a "business function" interface to clients, instead of having the clients deal with the complexities of the data model and SQL.
I would seriously consider ref cursors in your environment. With PL/SQL serving as the interface, there is a single place to tune SQL and a single place to update SQL. It allows one to make data model changes without changing or even recompiling the client. It allows one to add new business logic and processing rules, again without having to touch the client.
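    A hedged sketch of that ref-cursor interface pattern (the package, table and columns are hypothetical):
    CREATE OR REPLACE PACKAGE emp_api AS
        PROCEDURE get_employees(p_dept IN NUMBER, p_rc OUT SYS_REFCURSOR);
    END emp_api;
    /
    CREATE OR REPLACE PACKAGE BODY emp_api AS
        PROCEDURE get_employees(p_dept IN NUMBER, p_rc OUT SYS_REFCURSOR) IS
        BEGIN
            OPEN p_rc FOR
                SELECT employee_id, last_name
                FROM   employees
                WHERE  department_id = p_dept;  -- bind variable, so the cursor is shared
        END get_employees;
    END emp_api;
    /
    The Java client calls the procedure and reads the ref cursor as an ordinary result set; the SQL lives in one tunable place.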

  • Understanding Statistics io and Logical reads - is logical reads information correct

    Hi,
    This question arises during a performance test - on SQL Server 2012 with SP2.
In the following example, the table has only one column, and it is of data type INT.
When I insert 592 records, the SP does only 1 logical read, but as soon as another record is inserted, the SP reports 2 logical reads. Why?
In the code, I have highlighted the difference between
statistics io logical reads and sys.dm_exec_procedure_stats.total_logical_reads
to understand the difference between these two counters.
    set nocount on
    GO
create table dbo.test_storage_and_logical_reads
(
employee_number int --primary key
)
go
    CREATE procedure dbo.test_sp_logical_reads
    as
    begin
    select
    employee_number
    from dbo.test_storage_and_logical_reads
    order by employee_number desc
    end
    go
    insert into dbo.test_storage_and_logical_reads
    (employee_number)
    VALUES (1)
    GO 592
    EXEC sp_spaceused 'dbo.test_storage_and_logical_reads'
    --set statistics io on
    --GO
    exec dbo.test_sp_logical_reads
    GO
    ----Table 'test_storage_and_logical_reads'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    ---- But sys.dm_exec_procedure_stats.total_logical_reads is reporting 3 instead.
    truncate table dbo.test_storage_and_logical_reads
    GO
    insert into dbo.test_storage_and_logical_reads
    (employee_number)
    VALUES (1)
    GO 593
    EXEC sp_spaceused 'dbo.test_storage_and_logical_reads'
    exec dbo.test_sp_logical_reads
    GO
    ----Table 'test_storage_and_logical_reads'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    ---- But sys.dm_exec_procedure_stats.total_logical_reads is reporting 4 instead.
    --drop procedure dbo.test_sp_logical_reads
    --drop table dbo.test_storage_and_logical_reads
    GO
    NB: I do understand the logical and physical reads. Thanks.
    For quick review of new features, try virtual labs: http://msdn.microsoft.com/en-us/aa570323

    Hi.
    I still need to test the scenario but if you read definition of this DMV it says 'Returns aggregate performance statistics for cached stored procedures' so I guess, I am not sure 3 can be due to this aggregated output given by this DMV. Will test it
    on SS 2012 SP2 will get back to you.
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
    My TechNet Wiki Articles
Hi Shanky / Sean Gallardy,
I think Sean Gallardy created the "test_sp_logical_reads" table in the master DB. I had the same issue when I accidentally created the table "test_sp_logical_reads" in master, but when I create the same table in a user DB like "sample1",
it only allocates 1 page for 592 rows.
Hi Asam,
Let me narrow down your question, if my understanding is correct:
<<When you have free space in a page of a heap, why is SQL Server assigning a new page for a new record?>>
The answer is in the PFS bytes.
PFS bytes are not fully reset until the page is reallocated. On deallocation, the only bit in the PFS byte that's changed is the allocation status bit - this makes it very easy to roll back a deallocation.
--Before inserting the 593rd record, please execute the query below
--Note: replace "database1" with your DB name
    DBCC TRACEON (3604);
    DBCC IND ('database1', 'test_storage_and_logical_reads', 1);
    --Result
    PageFID PagePID IAMFID IAMPID ObjectID IndexID PartitionNumber PartitionID iam_chain_type PageType IndexLevel NextPageFID NextPagePID PrevPageFID PrevPagePID
    1 2770 NULL NULL 517576882 0 1 72057594039828480 In-row data 10 NULL 0 0 0 0
    1 2769 1 2770 517576882 0 1 72057594039828480 In-row data 1 0 0 0 0 0
--you're interested in the second record (page)
    DBCC PAGE ('database1', 1, 2769,3) WITH TABLERESULTS;
--you can find the 38th row as
    --PFS (1:1) = 0x64 MIXED_EXT ALLOCATED 100_PCT_FULL
    --which means your page is full
    --you can try inserting 300 rows it will show you PFS (1:1) = 0x61 MIXED_EXT ALLOCATED 50_PCT_FULL
please refer to the links below
    http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/07/08/under-the-covers-gam-sgam-and-pfs-pages.aspx
    http://aboutsqlserver.com/2013/12/17/sql-server-storage-engine-heap-tables/
    "This page has a PFS byte value of 0x04 - how can it be full when its not allocated?"
    The answer is that PFS bytes are not fully reset until the page is reallocated. On deallocation, the only bit in the PFS byte that's changed is the allocation status bit - this makes it very easy to rollback a deallocation.
    Here's an example. Using a database with a simple table with one row.
    A DBCC PAGE of the IAM page includes:
    PFS (1:1) = 0x70 IAM_PG MIXED_EXT ALLOCATED 0_PCT_FULL
    If I run the following:
BEGIN TRANSACTION
DROP TABLE T1
GO
    And then do the DBCC PAGE again, the output now includes:
    PFS (1:1) = 0x30 IAM_PG MIXED_EXT 0_PCT_FULL
And if I roll back the transaction, the DBCC PAGE output reverts to:
    PFS (1:1) = 0x70 IAM_PG MIXED_EXT ALLOCATED 0_PCT_FULL
    Thanks
    Saravana Kumar C

  • Logical reads per second

I have two databases - one is a clone of the other, made a few months ago. Database A has somewhat more data, since it's the active production database, but not significantly more - perhaps 10% greater. They are on different boxes: Database A is on a Sun 280R 2-processor box, and Database B is on a Dell 2950 with 2 dual-core processors. So this isn't exactly comparing apples to apples. However, when I run the same query on the two databases, I get radically different results. Against Database A, the query takes about 7 minutes. On Database B, it takes about 2 seconds. Logical reads per second on Database A reach 80,000-90,000; on Database B, they're about 3,000. There are a few configuration differences (both databases use automatic memory management):
Parameter                       Database A    Database B
db_file_multiblock_read_count   64            16
log_buffer                      14290432      2104832
open_cursors                    1250          300
sga_max_size                    4194304000    536870912
sga_target                      2634022912    536870912
shared_pool_reserved_size       38587596      7340032
    The timings were taken off-hours so neither database would be busy. I'm baffled by the extreme difference in execution times. Any help appreciated!
    Thanks,
    Harry
    Edited by: harryb on Apr 8, 2009 7:26 PM

    OK, let's start here....
    Database A (TEMPOP)
    SQL> show parameter optimizer
    NAME TYPE VALUE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 10.2.0.3
    optimizer_index_caching integer 0
    optimizer_index_cost_adj integer 100
    optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging boolean TRUE
    SQL> show parameter db_file_multi
    NAME TYPE VALUE
    db_file_multiblock_read_count integer 64
    SQL> show parameter db_block_size
    NAME TYPE VALUE
    db_block_size integer 8192
    ===================================================
    Database B (TEMPO11)
    SQL> show parameter optimizer
    NAME TYPE VALUE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 10.2.0.1
    optimizer_index_caching integer 0
    optimizer_index_cost_adj integer 100
    optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging boolean TRUE
    SQL> show parameter db_file_multi
    NAME TYPE VALUE
    db_file_multiblock_read_count integer 16
    SQL> show parameter db_block_size
    NAME TYPE VALUE
    db_block_size integer 8192
    =================================================================
    Now for the query that's causing the problem:
    SELECT dsk_document_attribute.value_text inspect_permit_no,
          NVL (activity_task_list.revised_due_date,
               activity_task_list.default_due_date)
                  inspect_report_due_date,
              agency_interest.master_ai_id agency_interest_id,
              agency_interest.master_ai_name agency_interest_name,
              get_county_code_single (agency_interest.master_ai_id)
                 parish_or_county_code,
              agency_interest_address.physical_address_line_1 inspect_addr_1,
              agency_interest_address.physical_address_line_2 inspect_addr_2,
              agency_interest_address.physical_address_line_3 inspect_addr_3,
              agency_interest_address.physical_address_municipality inspect_city,
              agency_interest_address.physical_address_state_code state_id,
              agency_interest_address.physical_address_zip inspect_zip,
              person.master_person_first_name person_first_name,
              person.master_person_middle_initial person_middle_initial,
              person.master_person_last_name person__last_name,
              SUBSTR (person_telecom.address_or_phone, 1, 14) person_phone,
              activity_task_list.requirement_id
       FROM dsk_document_attribute,
            agency_interest,
            activity_task_list,
            agency_interest_address,
            dsk_central_file dsk_aaa,
            dsk_central_file dsk_frm,
            person,
            person_telecom
       WHERE agency_interest.int_doc_id = 0
             AND agency_interest.master_ai_id =
                   agency_interest_address.master_ai_id
             AND agency_interest.int_doc_id = agency_interest_address.int_doc_id
             AND agency_interest.master_ai_id = dsk_frm.master_ai_id
             AND dsk_aaa.int_doc_id = activity_task_list.int_doc_id
             AND dsk_frm.int_doc_id = dsk_document_attribute.int_doc_id
             AND dsk_frm.doc_type_specific_code =
                   dsk_document_attribute.doc_type_specific_code
             AND dsk_frm.activity_category_code = 'PER'
             AND dsk_frm.activity_class_code = 'GNP'
             AND dsk_frm.activity_type_code IN ('MAB', 'NAB', 'REB')
             AND dsk_frm.program_code = '80'
             AND dsk_frm.doc_type_general_code = 'FRM'
             AND dsk_frm.doc_type_specific_code = 'PERSET'
             AND dsk_aaa.doc_template_id = 2000
             AND dsk_frm.master_ai_id = dsk_aaa.master_ai_id
             AND dsk_frm.activity_category_code = dsk_aaa.activity_category_code
             AND dsk_frm.program_code = dsk_aaa.program_code
             AND dsk_frm.activity_class_code = dsk_aaa.activity_class_code
             AND dsk_frm.activity_type_code = dsk_aaa.activity_type_code
             AND dsk_frm.activity_year = dsk_aaa.activity_year
             AND dsk_frm.activity_num = dsk_aaa.activity_num
             AND dsk_document_attribute.doc_attribute_code = 'PERMIT_NO'
             AND activity_task_list.requirement_id IN ('3406', '3548', '3474')
             AND activity_task_list.reference_task_id = 0
             AND NVL (activity_task_list.status_code, '$$$') <> '%  '
             AND person.master_person_id(+) =
                   f_get_gp_contact (agency_interest.master_ai_id)
             AND person.int_doc_id(+) = 0
             AND person.master_person_id = person_telecom.master_person_id(+)
             AND person.int_doc_id = person_telecom.int_doc_id(+)
         AND person_telecom.telecom_type_code(+) = 'wp';
Here's the explain plan for Database A, where the query takes 7-8 minutes or more:
    PLAN_TABLE_OUTPUT
    | Id  | Operation                           | Name                       | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                    |                            |     1 |   253 |    34   (3)|
    |   1 |  NESTED LOOPS                       |                            |     1 |   253 |    34   (3)|
    |   2 |   NESTED LOOPS                      |                            |     1 |   224 |    32   (0)|
    |   3 |    NESTED LOOPS OUTER               |                            |     1 |   169 |    31   (0)|
    |   4 |     NESTED LOOPS OUTER              |                            |     1 |   144 |    29   (0)|
    |   5 |      NESTED LOOPS                   |                            |     1 |   122 |    27   (0)|
    |   6 |       NESTED LOOPS                  |                            |     1 |    81 |    26   (0)|
    |   7 |        NESTED LOOPS                 |                            |     1 |    48 |    19   (0)|
    |   8 |         INLIST ITERATOR             |                            |       |       |            |
    |*  9 |          TABLE ACCESS BY INDEX ROWID| ACTIVITY_TASK_LIST         |     1 |    21 |    17   (0)|
    |* 10 |           INDEX RANGE SCAN          | ACTIVITY_TASK_LIST_FK11    |   106 |       |     4   (0)|
    |* 11 |         TABLE ACCESS BY INDEX ROWID | DSK_CENTRAL_FILE           |     1 |    27 |     2   (0)|
    |* 12 |          INDEX UNIQUE SCAN          | PK_DSK_CENTRAL_FILE        |     1 |       |     1   (0)|
    |* 13 |        TABLE ACCESS BY INDEX ROWID  | DSK_CENTRAL_FILE           |     1 |    33 |     7   (0)|
    |* 14 |         INDEX RANGE SCAN            | CF_MASTER_AI_ID_IND        |     9 |       |     2   (0)|
    |  15 |       TABLE ACCESS BY INDEX ROWID   | AGENCY_INTEREST            |     1 |    41 |     1   (0)|
    |* 16 |        INDEX UNIQUE SCAN            | PK_AGENCY_INTEREST         |     1 |       |     0   (0)|
    |  17 |      TABLE ACCESS BY INDEX ROWID    | PERSON                     |     1 |    22 |     2   (0)|
    |* 18 |       INDEX UNIQUE SCAN             | PK_PERSON                  |     1 |       |     1   (0)|
    |  19 |     TABLE ACCESS BY INDEX ROWID     | PERSON_TELECOM             |     1 |    25 |     2   (0)|
    |* 20 |      INDEX UNIQUE SCAN              | PK_PERSON_TELECOM          |     1 |       |     1   (0)|
    |  21 |    TABLE ACCESS BY INDEX ROWID      | AGENCY_INTEREST_ADDRESS    |     1 |    55 |     1   (0)|
    |* 22 |     INDEX UNIQUE SCAN               | PK_AGENCY_INTEREST_ADDRESS |     1 |       |     0   (0)|
    |  23 |   TABLE ACCESS BY INDEX ROWID       | DSK_DOCUMENT_ATTRIBUTE     |     1 |    29 |     1   (0)|
    |* 24 |    INDEX UNIQUE SCAN                | PK_DSK_DOCUMENT_ATTRIBUTE  |     1 |       |     0   (0)|
    Predicate Information (identified by operation id):
       9 - filter("ACTIVITY_TASK_LIST"."REFERENCE_TASK_ID"=0 AND
                  NVL("ACTIVITY_TASK_LIST"."STATUS_CODE",'$$$')<>'%  ')
      10 - access("ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3406 OR
                  "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3474 OR "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3548)
      11 - filter("DSK_AAA"."DOC_TEMPLATE_ID"=2000 AND "DSK_AAA"."ACTIVITY_CLASS_CODE"='GNP' AND
                  "DSK_AAA"."PROGRAM_CODE"='80' AND "DSK_AAA"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  ("DSK_AAA"."ACTIVITY_TYPE_CODE"='MAB' OR "DSK_AAA"."ACTIVITY_TYPE_CODE"='NAB' OR
                  "DSK_AAA"."ACTIVITY_TYPE_CODE"='REB'))
      12 - access("ACTIVITY_TASK_LIST"."INT_DOC_ID"="DSK_AAA"."INT_DOC_ID")
      13 - filter("DSK_FRM"."ACTIVITY_CLASS_CODE"='GNP' AND "DSK_FRM"."PROGRAM_CODE"='80' AND
                  "DSK_FRM"."DOC_TYPE_SPECIFIC_CODE"='PERSET' AND "DSK_FRM"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  "DSK_FRM"."DOC_TYPE_GENERAL_CODE"='FRM' AND ("DSK_FRM"."ACTIVITY_TYPE_CODE"='MAB' OR
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"='NAB' OR "DSK_FRM"."ACTIVITY_TYPE_CODE"='REB') AND
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"="DSK_AAA"."ACTIVITY_TYPE_CODE" AND
                  "DSK_FRM"."ACTIVITY_YEAR"="DSK_AAA"."ACTIVITY_YEAR" AND
                  "DSK_FRM"."ACTIVITY_NUM"="DSK_AAA"."ACTIVITY_NUM")
      14 - access("DSK_FRM"."MASTER_AI_ID"="DSK_AAA"."MASTER_AI_ID")
      16 - access("AGENCY_INTEREST"."MASTER_AI_ID"="DSK_FRM"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST"."INT_DOC_ID"=0)
      18 - access("PERSON"."MASTER_PERSON_ID"(+)="F_GET_GP_CONTACT"("AGENCY_INTEREST"."MASTER_AI_ID
                  ") AND "PERSON"."INT_DOC_ID"(+)=0)
      20 - access("PERSON"."MASTER_PERSON_ID"="PERSON_TELECOM"."MASTER_PERSON_ID"(+) AND
                  "PERSON_TELECOM"."TELECOM_TYPE_CODE"(+)='wp' AND
                  "PERSON"."INT_DOC_ID"="PERSON_TELECOM"."INT_DOC_ID"(+))
      22 - access("AGENCY_INTEREST"."MASTER_AI_ID"="AGENCY_INTEREST_ADDRESS"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST_ADDRESS"."INT_DOC_ID"=0)
      24 - access("DSK_FRM"."INT_DOC_ID"="DSK_DOCUMENT_ATTRIBUTE"."INT_DOC_ID" AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_ATTRIBUTE_CODE"='PERMIT_NO' AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_TYPE_SPECIFIC_CODE"='PERSET')============================================================================
Here's the explain plan output for Database B, where the query takes 2-3 seconds:
    PLAN_TABLE_OUTPUT
    | Id  | Operation                           | Name                       | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                    |                            |     1 |   289 |    39   (0)|
    |   1 |  NESTED LOOPS OUTER                 |                            |     1 |   289 |    39   (0)|
    |   2 |   NESTED LOOPS                      |                            |     1 |   260 |    37   (0)|
    |   3 |    NESTED LOOPS                     |                            |     1 |   205 |    36   (0)|
    |   4 |     NESTED LOOPS OUTER              |                            |     1 |   172 |    35   (0)|
    |   5 |      NESTED LOOPS                   |                            |     1 |   145 |    34   (0)|
    |   6 |       NESTED LOOPS                  |                            |     1 |   104 |    33   (0)|
    |   7 |        NESTED LOOPS                 |                            |     1 |    61 |    26   (0)|
    |   8 |         INLIST ITERATOR             |                            |       |       |            |
    |*  9 |          TABLE ACCESS BY INDEX ROWID| ACTIVITY_TASK_LIST         |     1 |    25 |    24   (0)|
    |* 10 |           INDEX RANGE SCAN          | ACTIVITY_TASK_LIST_FK11    |   145 |       |     4   (0)|
    |* 11 |         TABLE ACCESS BY INDEX ROWID | DSK_CENTRAL_FILE           |     1 |    36 |     2   (0)|
    |* 12 |          INDEX UNIQUE SCAN          | PK_DSK_CENTRAL_FILE        |     1 |       |     1   (0)|
    |* 13 |        TABLE ACCESS BY INDEX ROWID  | DSK_CENTRAL_FILE           |     1 |    43 |     7   (0)|
    |* 14 |         INDEX RANGE SCAN            | CF_MASTER_AI_ID_IND        |     9 |       |     2   (0)|
    |  15 |       TABLE ACCESS BY INDEX ROWID   | AGENCY_INTEREST            |     1 |    41 |     1   (0)|
    |* 16 |        INDEX UNIQUE SCAN            | PK_AGENCY_INTEREST         |     1 |       |     0   (0)|
    |  17 |      TABLE ACCESS BY INDEX ROWID    | PERSON                     |     8 |   216 |     1   (0)|
    |* 18 |       INDEX UNIQUE SCAN             | PK_PERSON                  |     1 |       |     0   (0)|
    |  19 |     TABLE ACCESS BY INDEX ROWID     | DSK_DOCUMENT_ATTRIBUTE     |     1 |    33 |     1   (0)|
    |* 20 |      INDEX UNIQUE SCAN              | PK_DSK_DOCUMENT_ATTRIBUTE  |     1 |       |     0   (0)|
    |  21 |    TABLE ACCESS BY INDEX ROWID      | AGENCY_INTEREST_ADDRESS    |     1 |    55 |     1   (0)|
    |* 22 |     INDEX UNIQUE SCAN               | PK_AGENCY_INTEREST_ADDRESS |     1 |       |     0   (0)|
    |  23 |   TABLE ACCESS BY INDEX ROWID       | PERSON_TELECOM             |     1 |    29 |     2   (0)|
    |* 24 |    INDEX UNIQUE SCAN                | PK_PERSON_TELECOM          |     1 |       |     1   (0)|
    Predicate Information (identified by operation id):
       9 - filter("ACTIVITY_TASK_LIST"."REFERENCE_TASK_ID"=0 AND
                  NVL("ACTIVITY_TASK_LIST"."STATUS_CODE",'$$$')<>'%  ')
      10 - access("ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3406 OR
                  "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3474 OR "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3548)
      11 - filter("DSK_AAA"."DOC_TEMPLATE_ID"=2000 AND "DSK_AAA"."ACTIVITY_CLASS_CODE"='GNP' AND
                  "DSK_AAA"."PROGRAM_CODE"='80' AND "DSK_AAA"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  ("DSK_AAA"."ACTIVITY_TYPE_CODE"='MAB' OR "DSK_AAA"."ACTIVITY_TYPE_CODE"='NAB' OR
                  "DSK_AAA"."ACTIVITY_TYPE_CODE"='REB'))
      12 - access("ACTIVITY_TASK_LIST"."INT_DOC_ID"="DSK_AAA"."INT_DOC_ID")
      13 - filter("DSK_FRM"."DOC_TYPE_SPECIFIC_CODE"='PERSET' AND
                  "DSK_FRM"."ACTIVITY_CLASS_CODE"='GNP' AND "DSK_FRM"."PROGRAM_CODE"='80' AND
                  "DSK_FRM"."DOC_TYPE_GENERAL_CODE"='FRM' AND "DSK_FRM"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  ("DSK_FRM"."ACTIVITY_TYPE_CODE"='MAB' OR "DSK_FRM"."ACTIVITY_TYPE_CODE"='NAB' OR
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"='REB') AND "DSK_FRM"."ACTIVITY_TYPE_CODE"="DSK_AAA"."ACTIVITY_TY
                  PE_CODE" AND "DSK_FRM"."ACTIVITY_YEAR"="DSK_AAA"."ACTIVITY_YEAR" AND
                  "DSK_FRM"."ACTIVITY_NUM"="DSK_AAA"."ACTIVITY_NUM")
      14 - access("DSK_FRM"."MASTER_AI_ID"="DSK_AAA"."MASTER_AI_ID")
      16 - access("AGENCY_INTEREST"."MASTER_AI_ID"="DSK_FRM"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST"."INT_DOC_ID"=0)
      18 - access("PERSON"."MASTER_PERSON_ID"(+)="F_GET_GP_CONTACT"("AGENCY_INTEREST"."MASTER_AI_ID
                  ") AND "PERSON"."INT_DOC_ID"(+)=0)
      20 - access("DSK_FRM"."INT_DOC_ID"="DSK_DOCUMENT_ATTRIBUTE"."INT_DOC_ID" AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_ATTRIBUTE_CODE"='PERMIT_NO' AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_TYPE_SPECIFIC_CODE"='PERSET')
      22 - access("AGENCY_INTEREST"."MASTER_AI_ID"="AGENCY_INTEREST_ADDRESS"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST_ADDRESS"."INT_DOC_ID"=0)
      24 - access("PERSON"."MASTER_PERSON_ID"="PERSON_TELECOM"."MASTER_PERSON_ID"(+) AND
                  "PERSON_TELECOM"."TELECOM_TYPE_CODE"(+)='wp' AND
                  "PERSON"."INT_DOC_ID"="PERSON_TELECOM"."INT_DOC_ID"(+))===============================================================================
    Edited by: harryb on Apr 9, 2009 3:29 PM
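    A hedged next step for a comparison like this (valid on both 10.2 databases): run the statement once with rowsource statistics enabled, then pull the plan with actual row counts, so the estimate-vs-actual gap on each box becomes visible.
    -- run once with statistics collection (substitute the full problem query)
    SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM dual;
    -- then, in the same session, show estimated vs. actual rows per plan step
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));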

  • Buffer_gets and 'session logical reads'

    When I compare the following values:
    select sum(buffer_gets) from v$sqlarea
    and
    select value from v$sysstat where name='session logical reads'
    it usually happens that sum(buffer_gets) > 'session logical reads'.
Does buffer_gets include both logical and physical reads? As long as the SGA keeps all the SQL statements, I guess sum(buffer_gets) will be bigger than 'session logical reads'. Is this correct?
    Thxs

    Hi,
buffer gets = the number of times a block was requested from the buffer cache. A buffer get always results in a logical read. Depending on whether or not a copy of the block is available in the buffer cache, a logical read may or may not involve a physical read. So "buffer gets" and "logical reads" are basically synonyms and are often used interchangeably.
    Oracle doesn't have a special "undo buffer". Undo blocks are stored in rollback segments in UNDO tablespace, and are managed in the same way data blocks are (they're even protected by redo). If a consistent get requires reading from UNDO tablespace, then statistics counters will show that, i.e. there will be one more consistent get in your autotrace.
    For more information and some examples, see a thread at askTom:
    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:549546900346542976
    Best regards,
    Nikolay
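    A hedged sketch for watching these counters in your own session ('session logical reads' is the sum of 'db block gets' and 'consistent gets'):
    SELECT n.name, s.value
    FROM   v$mystat s
           JOIN v$statname n ON n.statistic# = s.statistic#
    WHERE  n.name IN ('session logical reads', 'db block gets', 'consistent gets');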

  • STATISTICS IO: Scan Count vs Logical reads

    Hello
I'm doing some performance research. I have an index with the following column order: ClientId, Active, ProductId. Active is a bit field telling whether the product is active or not; there can be more inactive products than active, but there is always at least one active product.
    When I'm executing
    SELECT * FROM [table] WHERE ClientId = [id] AND ProductId IN (1,2,3,5,7,9,20)
    I'm getting following result: Scan count 1, logical reads 490
When I lead SQL Server down the right path by including the two possible values of Active, executing the following SQL:
    SELECT * FROM [table] WHERE ClientId = [id] AND ProductId IN (1,2,3,5,7,9,20) AND Active IN (0,1)
    I'm getting following results: Scan count 14, logical reads 123
With this information, which version would you say is fastest, and why?
When I ran this query 1000 times with different ClientIds, I got an average time of 172 ms for the first query and 155 ms for the second one. I have been told that scan count is very expensive... from this example it seems that the cost of 1 scan count is about 20 logical reads?

When I ran this query 1000 times with different ClientIds, I got an average time of 172 ms for the first query and 155 ms for the second one. I have been told that scan count is very expensive... from this example
it seems that the cost of 1 scan count is about 20 logical reads?
Typically the pages are forced out of the buffer cache (DBCC DROPCLEANBUFFERS) when doing such a measurement:
    http://www.sqlusa.com/bestpractices2005/executioninms/
    BOL: "Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server."
    LINK: http://technet.microsoft.com/en-us/library/ms187762.aspx
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
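    A hedged sketch of such a cold-cache measurement, reusing the thread's placeholders ([table], and 42 standing in for [id]) - run it on a test server only, since it empties the buffer cache for everyone:
    CHECKPOINT;             -- write dirty pages so DROPCLEANBUFFERS can evict everything
    DBCC DROPCLEANBUFFERS;  -- start from a cold buffer cache
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    SELECT * FROM [table] WHERE ClientId = 42 AND ProductId IN (1,2,3,5,7,9,20);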

  • Logical Reads are very high when run as sproc and very less logical reads when run as a script

Hello,
I have a question:
when I execute a sproc, I get a very high logical reads count, but when I run the same sproc converted into a script, it has very low logical reads. What does that mean?

I would like you to check the query plan during the ad-hoc run versus the stored procedure execution. As others pointed out, it could be due to parameter sniffing.
    Balmukund Lakhani
    Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
    This posting is provided "AS IS" with no warranties, and confers no rights.
    My Blog |
    Team Blog | @Twitter
    | Facebook
    Author: SQL Server 2012 AlwaysOn -
    Paperback, Kindle
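    A hedged sketch for the plan comparison suggested above (dbo.my_proc is a hypothetical name): this pulls the cached plan the stored procedure is reusing, which you can then compare with the actual plan of the ad-hoc script.
    SELECT qp.query_plan
    FROM   sys.dm_exec_procedure_stats AS ps
    CROSS  APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
    WHERE  ps.object_id = OBJECT_ID('dbo.my_proc');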

  • Advice needed for pl/sql

    Hey guys,
I am looking for books/articles for learning PL/SQL. I am new to PL/SQL and I want books/articles that help me learn PL/SQL from scratch. I have seen many books oriented towards DBAs; I don't want that type of book. Please let me know which books/articles a beginner needs, preferably articles/books with lots of developer examples. Also, let me know any books/articles for learning Forms and Reports.
Please advise me on this.
The response will be highly appreciated.
Thanks
You can mail me at: [email protected]

It depends what you mean when you call yourself 'new'. There are two main concepts to learn for PL/SQL: the SQL language, and procedural logic. If you have previously been a programmer in any 3GL, you will be familiar with the procedural part, which works on pretty much the same principle in any programming language. If you have no experience with SQL, you really need to become proficient with that, otherwise there is a danger of writing everything procedurally when you should be doing it with SQL.
    I find the Steven Feuerstein book very good, combined with Oracle manuals.

  • ALE logical systems for business service or process

    Hi,
I am trying to send an IDoc into my R/3 system. It is the same IDoc type that I have successfully sent out to parties. All the config is in place for communicating with the system, because I have sent other IDocs in the past. However, they were always sent from another R/3 business system, and they processed successfully.
I am now trying to send IDocs from a BPM, as well as from a business service. They are under the "With Party" section of the directory. These obviously do not have logical systems, so when I try to send the IDoc I get
"Unable to convert the sender service Purchase_Orders to an ALE logical system", where Purchase_Orders is the name of the business service.
    How can I handle this problem?
    Should I hard code values somewhere?
    Which ones?
    I have read the other posts for the error above, but my situation seems to be different.
    Help!
    Chris

    Hi,
    To solve your problem, here my solution:
1/ In the Business Process of your ID (configurator), go to the menu "Service > Adapter-specific identifiers" and, in the "IDoc Adapter" part, enter your Logical System.
2/ Create this Logical System on your target system (R/3).
WARNING: XI allows you to use a Logical System for only one service (Business Process). Thus, if you have several interfaces, you must use different Logical Systems.
    Mickael.

Maybe you are looking for

After upgrading to Firefox 4, all my bookmarks are now gone. Is there a way to get them back?

I was using the latest version of Firefox and had a ton of bookmarks. When I upgraded to Firefox 4, the installation wizard lost all my bookmarks. Is there a way to get my bookmarks back from the previous version of Firefox?

  • PC to Mac, transfer itunes

I am waiting for my first Mac (27" i7) and want to transfer my iTunes library from my PC to the Mac. What is the easiest way to do this? Go to My Music, copy the iTunes library to disk, and install it on the Mac?

  • Is the FCP crossgrade still available?

We just upgraded one of our systems to a Mac Pro from an older G4 setup. I looked all over the Apple site and can't find a place to buy the FCS1 crossgrade for Intel Macs. Is it still available, or do I need to go all the way with FCS2?

  • How to Search a Knowledge Article

Hi Experts, I have created a Knowledge Article and released it. When I try to search for the KA, I am unable to find it. Do I need to make any settings for SAF? Do I need to have a separate TREX server? Please advise and guide me in setting up the SAF, i

  • Arabic Characters data is retrieved ??? from database

Hello. On entering Arabic characters, data is retrieved as ??? from the database (10g Release 2). Any suggestions please? My PC language is Arabic by default, and my NLS_LANG is ARABIC_SAUDI ARABIA.AR8MSWIN1256 for both KEY_DevSuiteHome1 and KEY_OraDb10g_home1. Regards, Abdetu