Performance: query forced to parallel and thus cost-based

I'm on a customer site and they have a problem with a query; I've broken it down to one offending table.
Putting a SELECT * FROM that table through EXPLAIN PLAN shows that parallel execution is being chosen and
that the parser is pulling stats from somewhere.
Yet:
OPTIMIZER GOAL is RULE
PARALLEL SERVER is FALSE
There are no stats on the table
What is happening?

Maybe partitioning?
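
Another thing worth checking (an assumption from the symptoms, not confirmed in the thread): a parallel DEGREE set on the table itself. A table-level degree greater than 1 switches the optimizer to cost-based even when the goal is RULE, and with no statistics gathered the CBO falls back on defaults - which would match both symptoms. Note also that PARALLEL SERVER refers to Oracle Parallel Server (multi-instance), not parallel query, so FALSE there does not rule parallel query out. A quick check and fix, with OFFENDING_TABLE standing in for the real name:

SELECT degree, instances
FROM   all_tables
WHERE  table_name = 'OFFENDING_TABLE';

ALTER TABLE offending_table NOPARALLEL;  -- only if a degree > 1 turns out to be the cause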

Similar Messages

  • Ad-Hoc Query for OU's and their Cost Center

    Cost Centers are often inherited from parent OU's. If I try to make an Ad-Hoc query, select some OU's and show the Cost Centers via the relationship A11, the fields are empty unless the Cost Center is hardcoded for the OU.
    Is there a way to create an Ad-Hoc query that shows the Cost Centers even if they are inherited? I can't make any changes to system tables or other customizing.
    Thanks in advance!

    Unfortunately I'm not authorized to use SE16.
    This is my selection:
    Object ID: xxx (Selection)
    Object Type: O (Selection)
    Plan Version: 01 (Selection)
    Relationship Between Objects: 011 (IT1001)
    Relationship Specification: A (IT1001)
    Type of Related Object: K (IT1001)
    Output:
    Object ID (IT1000)
    Object Abbreviation (IT1000)
    Object Name (IT1000)
    ID of Related Object (IT1001)
    Any other way?

  • What is the difference between est. cost, CPU cost, and IO cost

    When looking at an execution plan there are est. cost, CPU cost, and IO cost.
    Can someone explain these and, more importantly, which one is the most
    important to tune, and the steps to take to reduce these costs to a respectable level?
    I have seen many programs hit 10,000 cost and 1 million CPU cost, and I need to get that down to improve performance.
    I am running on 10.2.
    Thanks
    Mikie

    Hi Olivier,
    sorry to disagree but:
    > They're both important and you'll have to balance
    > both. For instance a program accessing the database
    > too much (lots of I/O) may run for ages. You can tune
    > this program and lower your I/Os, therefore
    > increasing your CPU time; the quantity of work is
    > still the same!
    is just not right.
    The amount of work the database has to do to get the job done (that is, to deliver the requested results) may vary A LOT!
    That is why there is an optimizer at all.
    It's true: I/O costs model the time required to perform the necessary read/write operations, and CPU costs do the same for handling the data in memory.
    Depending on the indexes, the active database features and the database version, the optimizer can choose between different access, filter and sorting mechanisms to answer a request.
    Usually there's not much you can do about the CPU costs - the database has its internal rating of how "expensive" a sort-merge join or a hash-distinct function will be.
    What you can (and usually should) do is lower the main contributor to processing and I/O costs: the volume of data.
    Therefore one tries to reduce the number of rows to be processed as early as possible in the query. That can be done either by changing the selection itself or via the indexing scheme. Furthermore, physical I/O might be reduced by using specific db features (compression, datablock sizing, bitmap indexes, partitioning etc.).
    Anyhow, to come back to the original post: it's important to think of costs as time-to-execute, modelled as CPU activity and I/O operations.
    The goal is to deliver the result as fast as possible - so nobody should worry about the absolute numbers. The question has to be: what execution time do these costs stand for?
    KR Lars
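
    For reference - this is the documented 10g costing model, not something stated in the thread - the cost column is normalized to single-block read time:

    Cost ~= (#SRds * sreadtim + #MRds * mreadtim + #CPUCycles / cpuspeed) / sreadtim

    where #SRds and #MRds are single- and multiblock reads, sreadtim and mreadtim their read times, and cpuspeed the CPU cycles per second. So a cost of 10,000 roughly means "as long as 10,000 single-block reads on this system", which is exactly why absolute cost numbers are not comparable across systems.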

  • How to improve query performance at the report level and the designer level

    How can query performance be improved at the report level and at the designer level?
    Please explain in detail.

    First, it's all based on the design of the database, the universe and the report.
    At the universe level, check your contexts very well to get optimal performance out of the universe, and check your joins too: keeping your joins on key fields will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters and so on),
    and when you create a parameter try to match it to a key field in the database.
    Good luck
    Amr

  • Dear all, I have a MacBook Pro bought in April 2011. Last April, it got stuck while the wheel was spinning and I had to force it shut. That cost me my hard drive. Now, it's stuck again. I tried esc, cmd tab, moving the trackpad on the apple, but no. HELP!


    Dear John,
    thank you for your prompt reply. After I sent the email, I left the wheel spinning to see if it would eventually stop. It didn't, but after a while the screen turned blue, so I decided to switch everything off with a forced shut-down. I could not restart with the OS DVDs because I am currently away from home, but fortunately everything was more or less fine after restarting. It then got stuck again, I forced it shut again, restarted again, and now it seems to work, but I won't overdo it and will switch off smoothly after writing this (unless it gets stuck again). Last April I replaced the HD, so it's now just under 6 months old. And, of course, since the crash occurred when I was abroad and I had backed up just before I left, I have lost only a few days' work. Still, and I know this will make me hugely unpopular here, with my 5 previous laptops running MS Windows I never had a problem. Which does not mean I loved the OS, or I would not have switched to Mac.
    Anyway, and this is the main reason for this message, thank you very much again for your speedy intervention.
    Claudia

  • How does CEF perform equal and unequal cost load balancing?

    hello
    How does CEF perform equal and unequal cost load balancing?
    thanks

    Hello Wang,
    it is only EIGRP that can perform load balancing over unequal-cost links.
    For equal-cost links, CEF allocates 16 buckets and maps them to the physical links.
    The result of a binary operation is used to associate a packet with an outgoing interface:
    source IP address XOR destination IP address XOR hash seed
    The hash seed changes only at every reload.
    The last 4 bits of the result are used, so each flow is classified into one bucket.
    The outgoing interface is then the one associated with the result of the XOR operation.
    Another way to see it: m bits are used so that 2^m equals the number of links N (when N is a power of two).
    The rule is simple and pre-established.
    Hope to help
    Giuseppe
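
    A toy sketch of the bucket selection described above (illustration only; the real hash inside IOS is more involved, and the interface names are invented):

    # Toy model of CEF equal-cost path selection - not the actual IOS code.
    def cef_bucket(src_ip: int, dst_ip: int, seed: int) -> int:
        # XOR the addresses with the per-reload seed; keep the last 4 bits.
        return (src_ip ^ dst_ip ^ seed) & 0xF          # one of 16 buckets

    # Hypothetical set of equal-cost links; the 16 buckets map onto them.
    links = ["Gi0/0", "Gi0/1", "Gi0/2", "Gi0/3"]
    bucket = cef_bucket(0x0A000001, 0xC0A80101, seed=0x9)
    print(links[bucket % len(links)])                  # same flow -> same link

    Because every packet of a flow yields the same bucket, per-flow packet order is preserved while different flows spread across the links.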

  • Infoset query of vendor payments at the cost distribution level

    We would like an infoset query of vendor payments at the cost distribution level of the document.  The issue seems to be joining vendor to the document cost distribution lines. 
    1.) BSAK + BSIK can be combined with an infoset data structure but only contain the vendor line of a document;  the cost distribution lines are not in the tables. 
    2.) BSIS + BSAS can be combined with an infoset data structure but lack vendor data and joins (to a vendor source) are not an option with data structures.  Vendor data added with an additional field is too slow to be a primary selection field. 
    3.) Logical data base KDF in an infoset returns only the vendor line of a document, not the cost distribution lines. 
    4.) Logical data base BRM in an infoset can have vendor from BSAK/BSIK attached by an additional field but performance is too slow to be useful.  Joins are not an option in a logical data base infoset.
    5.) Complete data is lacking when table joins between document cost distribution tables and vendor data tables are possible, (SPL actual line item table & BSIP or FMIFIIT & FMIFIHD).  BSIP lacks AB documents (reversals).  FM tables lack general ledger only documents. 
    6.) BSAK and BSIK together have complete vendor data but joins of both to a basis table do not work well.  Left outer joins are too slow, inner joins won’t work since the tables have mutually exclusive data. 
    It would be ideal to have vendor in BKPF, like FMIFIHD has, but it isn't a field.
    Does anyone know of any other options?  I have seen the helpful thread on How to Read BSEG Efficiently

    Hi,
    This is the SAP Business One reporting and printing forum. Please find the correct forum and repost the discussion above to get a quick response.
    Please close this thread here with a helpful answer.
    Thanks & Regards,
    Nagarajan

  • Infoset query of vendor payments at the cost distribution level of the document

    We would like an infoset query of vendor payments at the cost distribution level of the document.  The issue seems to be joining vendor to the document cost distribution lines. 
    1.) BSAK + BSIK can be combined with an infoset data structure but only contain the vendor line of a document;  the cost distribution lines are not in the tables. 
    2.) BSIS + BSAS can be combined with an infoset data structure but lack vendor data and joins (to a vendor source) are not an option with data structures.  Vendor data added with an additional field is too slow to be a primary selection field. 
    3.) Logical data base KDF in an infoset returns only the vendor line of a document, not the cost distribution lines. 
    4.) Logical data base BRM in an infoset can have vendor from BSAK/BSIK attached by an additional field but performance is too slow to be useful.  Joins are not an option in a logical data base infoset.
    5.) Complete data is lacking when table joins between document cost distribution tables and vendor data tables are possible, (SPL actual line item table & BSIP or FMIFIHD & FMIFIIT).  BSIP lacks AB documents (reversals).  FM tables lack general ledger only documents. 
    6.) BSAK and BSIK together have complete vendor data but joins of both to a basis table are not an option.  Left outer joins are too slow, inner joins won’t work since the tables have mutually exclusive data. 
    Does anyone know of any other options?

    Hi,
    This is the SAP Business One reporting and printing forum. Please find the correct forum and repost the discussion above to get a quick response.
    Please close this thread here with a helpful answer.
    Thanks & Regards,
    Nagarajan

  • How do I improve performance while doing pull, push and delete from Azure Storage Queue

    Hi,
    I am working on a distributed application that uses Azure Storage Queue for message queuing. The queue will be used by multiple clients around the clock, so it is expected to be heavily loaded most of the time. The business case is typical: pull a message from the queue, process it, then delete it from the queue. The module also sends a notification back to the user indicating that processing is complete. The functions/modules work fine, as in they meet the logical requirements. A pretty typical queue scenario.
    Now, coming to the problem statement: since the queue is expected to be heavily loaded most of the time, I am pushing to speed up the overall message lifetime. The faster I can clear messages, the better the overall experience for everyone, system and users.
    To improve performance I ran multiple cycles of profiling and then improved each identified "HOT" path/function.
    It came down to the point where the Azure queue pull and delete are the two most time-consuming calls left. I improved the pull by batch-pulling 32 messages at a time (the maximum message count that can be pulled from an Azure queue in one call at the time of writing this question), and that cut processing time by a big margin. All good up to this point as well.
    I am processing these messages in parallel so as to improve overall performance.
    pseudo code:
    // AzureQueue class encapsulates calls to Azure Storage Queue.
    // Assume nothing fancy inside; vanilla calls to the queue for pull/push/delete.
    var batchMessages = AzureQueue.Pull(32);
    Parallel.ForEach(batchMessages, bMessage =>
    {
        try
        {
            DoSomething(bMessage);   // DoSomething does some background processing
        }
        catch (Exception ex)
        {
            // Log exception
        }
        AzureQueue.Delete(bMessage);
    });
    With this change, profiling now shows that up to 90% of the time is taken by the Azure message delete calls alone. As it is good to delete a message as soon as its processing is done, I remove it right after "DoSomething" finishes.
    What I need now are suggestions on how to further improve the performance of this function when 90% of the time is eaten up by the Azure queue delete call itself. Is there a better, faster way to perform a delete/bulk delete?
    With the implementation mentioned here I get a throughput of close to 25 messages/sec. Right now the Azure queue delete calls are choking application performance, so is there any hope of pushing it further?
    Does it also make a difference to performance which queue delete overload I call? As of now the queue has an overloaded method for deleting a message: one accepts a message object, the other accepts a message identifier and pop receipt. I am using the latter, with the message identifier and pop receipt.
    Let me know if you need any additional information or clarification on the question.
    Inputs/suggestions are welcome.
    Many thanks.

    The first thing that came to mind was to run a parallel delete at the same time you run the work in DoSomething.  If DoSomething fails, add the message back into the queue.  This won't work for every application, and work that was near the head of the queue could be pushed back to the tail, so you'd have to think about how that may affect your workload.
    Or, queue the delete on a threadpool after the work succeeds.  Fire and forget.  However, if you're loading the processing at 25/sec and 90% of the time sits on the delete, you'd quickly accumulate delete calls for the threadpool until you'd never catch up.  At a 70-80% duty cycle this may work, but the closer you get to always being busy, the more dangerous this becomes.
    I wonder if calling the delete REST API yourself may offer any improvement.  If you find the delete sets up a TCP connection each time, this may be all you need.  Try to keep the connection open, or see if the REST API can delete more at a time than the SDK API can.
    Or, if you have the funds, just add more VM instances doing the work in parallel, so the first machine handles 25/sec, the second 25/sec also - and you just live with the slow delete.  If that's still not good enough, add more instances.
    Darin R.
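
    A minimal sketch of the fire-and-forget idea, using a worker pool for the deletes. AzureQueue and do_something here are hypothetical stand-ins for the wrapper and worker from the original pseudocode, not the real SDK API:

    import time
    from concurrent.futures import ThreadPoolExecutor

    class AzureQueue:
        # Stand-in for the queue wrapper; replace with real SDK calls.
        @staticmethod
        def delete(message):
            time.sleep(0.04)                   # simulate the delete round-trip

    def do_something(message):
        pass                                   # existing business processing

    delete_pool = ThreadPoolExecutor(max_workers=8)   # concurrent deletes

    def handle(message):
        do_something(message)
        # Fire and forget: this worker moves on while the pool performs the
        # slow delete. Note submit() queues without bound once all workers
        # are busy, so at a high duty cycle the backlog must be monitored
        # (Darin's caveat above).
        delete_pool.submit(AzureQueue.delete, message)

    # usage: for m in the pulled batch: handle(m)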

  • Performance: query is taking a long time

    Hi,
    A query on a cube jumps to a query on an ODS, and the query on the ODS takes a very long time. How can we optimize/improve this?
    Rgds,
    C.V.
    Message was edited by:
            C.V. P

    Hi,
    well, I am sure you are aware that Data Stores are not optimized for reporting.
    The Data Store active table can become very large, and thus reporting on that table means reporting on a HUGE amount of data.
    The common solution is to create additional indexes on the ODS table to speed up reporting performance. In 3.x this can be done from the ODS maintenance.
    Also make sure the DB statistics are up to date (put the ODS active table in DB20).
    Look at this thread for the options you have:
    ODS Performance?
    Please assign points if useful,
    Gili

  • BPF configuration: parallel and dependent steps

    Hi all,
    I have a BPF of, let's say, 5 steps covering 5 input forms.
    Step 1 is the main step, entered by the budget team. It is the rates input form.
    Step 2 is the sales volume step; it depends on step 1 (like the other steps) and is entered by the sales team.
    Step 3 is the sales discount step; it depends on steps 1-2 and is entered by the budget team.
    Step 4 is the cost step; it depends on steps 1-2, can be performed regardless of step 3, and is entered by the finance team.
    Step 5 is the distribution of costs to the sales cube to see profitability; it depends on steps 1-2-4.
    As you can see, there are both parallel and dependent steps. Of course there are many other steps, but they are similar, so I summarized them as 5 steps.
    Note that the sales cube contains sales group and the cost cube contains cost center (candidates for driver dimensions).
    Let me depict the scheme for that: [diagram not reproduced]
    I could not achieve this by creating steps with opening criteria "matched" or by creating more than one template and instance. I want to know whether it is possible or whether there is a workaround.

    I am joking, Güneş. Of course I have an idea. First of all, I have some questions about your modelling.
    If you make these activities in the same model, it is possible:
    1 - all or matched, it does not matter
    2 - matched
    4 - matched
    5 - all
    3 - matched
    I hope it will work.
    Good luck, bro

  • Is this the best-performing query?

    Hi Guys,
    Is this the best-performing query, or can I still improve it?
    I am new to SQL performance tuning; please help me get the best performance out of this query.
    SQL> EXPLAIN PLAN SET STATEMENT_ID = 'ASH' FOR
    SELECT /*+ FIRST_ROWS(30) */ PSP.PatientNumber, PSP.IntakeID, U.OperationCenterCode OpCenterProcessed,
           PSP.ServiceCode, PSP.UOMcode, PSP.StartDt, PSP.ProvID, PSP.ExpDt, NVL(PSP.Units, 0) Units,
           PAS.Descript, PAS.ServiceCatID, PSP.CreatedBy AuthCreatedBy, PSP.CreatedDateTime AuthCreatedDateTime,
           PSP.AuthorizationID, PSP.ExtracontractReasonCode, PAS.ServiceTypeCode,
           NVL(PSP.ProvNotToExceedRate, 0) ProvOverrideRate,
           prov.ShortName ProvShortName, PSP.OverrideReasonCode, PAS.ContractProdClassId,
           prov.ProvParentID ProvParentID, prov.ProvTypeCd ProvTypeCd
    FROM   tblPatServProv psp, tblProductsAndSvcs pas, tblProv prov, tblUser u, tblGlMonthlyClose glmc
    WHERE  GLMC.AUTHORIZATIONID >= 239
    AND    GLMC.AUTHORIZATIONID < 11039696
    AND    PSP.AuthorizationID = GLMC.AUTHORIZATIONID
    AND    PSP.AuthorizationID < 11039696
    AND    (PSP.ExpDt >= to_date('01/03/2000','MM/DD/YYYY') OR PSP.ExpDt IS NULL)
    AND    PSP.ServiceCode = PAS.ServiceCode(+)
    AND    prov.ProvID(+) = PSP.ProvID
    AND    U.UserId(+) = PSP.CreatedBy;
    Explained.
    Elapsed: 00:00:00.46
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    Plan hash value: 3602678330
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 8503K| 3073M| 91 (2)| 00:00:02 |
    |* 1 | HASH JOIN RIGHT OUTER | | 8503K| 3073M| 91 (2)| 00:00:02 |
    | 2 | TABLE ACCESS FULL | TBLPRODUCTSANDSVCS | 4051 | 209K| 16 (0)| 00:00:01 |
    | 3 | NESTED LOOPS | | 31 | 6200 | 75 (2)| 00:00:01 |
    | 4 | NESTED LOOPS OUTER | | 30 | 5820 | 45 (3)| 00:00:01 |
    |* 5 | HASH JOIN RIGHT OUTER | | 30 | 4950 | 15 (7)| 00:00:01 |
    | 6 | TABLE ACCESS FULL | TBLUSER | 3444 | 58548 | 12 (0)| 00:00:01 |
    |* 7 | TABLE ACCESS FULL | TBLPATSERVPROV | 8301K| 585M| 2 (0)| 00:00:01 |
    | 8 | TABLE ACCESS BY INDEX ROWID| TBLPROV | 1 | 29 | 1 (0)| 00:00:01 |
    |* 9 | INDEX UNIQUE SCAN | PK_TBLPROV | 1 | | 0 (0)| 00:00:01 |
    |* 10 | INDEX UNIQUE SCAN | PK_W_GLMONTHLYCLOSE | 1 | 6 | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - access("PSP"."SERVICECODE"="PAS"."SERVICECODE"(+))
    5 - access("U"."USERID"(+)="PSP"."CREATEDBY")
    7 - filter(("PSP"."EXPDT">=TO_DATE('2000-01-03 00:00:00', 'yyyy-mm-dd hh24:mi:ss') OR
    "PSP"."EXPDT" IS NULL) AND "PSP"."AUTHORIZATIONID">=239 AND "PSP"."AUTHORIZATIONID"<11039696)
    9 - access("PROV"."PROVID"(+)="PSP"."PROVID")
    10 - access("PSP"."AUTHORIZATIONID"="GLMC"."AUTHORIZATIONID")
    filter("GLMC"."AUTHORIZATIONID">=239 AND "GLMC"."AUTHORIZATIONID"<11039696)
    28 rows selected.
    Elapsed: 00:00:00.42

    Thanks a lot for your reply.
    Here are the indexes on those tables.
    table --> TBLPATSERVPROV ---> index PK_TBLPATSERVPROV ---> column AUTHORIZATIONID
    table --> TBLPRODUCTSANDSVCS ---> index PK_TBLPRODUCTSANDSVCS ---> column SERVICECODE
    table --> TBLUSER ---> index PK_TBLUSER ---> column USERID
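
    One general 10g technique for judging a plan like this (not something from the thread): some of the estimates look inconsistent - a FULL scan of an 8.3M-row table costed at 2, for instance - so it is worth comparing estimated to actual row counts. Run the statement once with rowsource statistics:

    SELECT /*+ GATHER_PLAN_STATISTICS FIRST_ROWS(30) */ ... same statement ...
    -- then, in the same session:
    SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));

    The steps where E-Rows and A-Rows diverge most are the ones worth tuning.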

  • MAIN DIFFERENCES BETWEEN PARALLEL AND SEQUENTIAL PROCESSING???

    HI PALS,
    I WANT THE COMPLETE DIFFERENCES BETWEEN PARALLEL AND SEQUENTIAL PROCESSING!
    IN THE CONTEXT OF RFC.

    Hi
    Parallel Processing
    To achieve a balanced distribution of the system load, you can use destination additions to execute function modules in parallel tasks in any application server or in a predefined application server group of an SAP system.
    Parallel processing is implemented with a special variant of asynchronous RFC. It's important that you use only the correct variant for your own parallel processing applications: the CALL FUNCTION STARTING NEW TASK DESTINATION IN GROUP keyword. Using other variants of asynchronous RFC circumvents the built-in safeguards in the correct keyword, and can bring your system to its knees.
    Details are discussed in the following subsections:
    ·        Prerequisites for Parallel Processing
    ·        Function Modules and ABAP Keywords for Parallel Processing
    ·        Managing Resources in Parallel Processing
    Prerequisites for Parallel Processing
    Before you implement parallel processing, make sure that your application and your SAP system meet these requirements:
    ·        Logically-independent units of work:
    The data processing task that is to be carried out in parallel must be logically independent of other instances of the task. That is, the task can be carried out without reference to other records from the same data set that are also being processed in parallel, and the task is not dependent upon the results of other parallel operations. For example, parallel processing is not suitable for data that must be processed sequentially or in which the processing of one data item depends upon the processing of another item of the data.
    By definition, there is no guarantee that data will be processed in a particular order in parallel processing or that a particular result will be available at a given point in processing. 
    ·        ABAP requirements:
    -        The function module that you call must be marked as externally callable. This attribute is specified in the Remote function call supported field in the function module definition (transaction SE37).
    -        The called function module may not include a function call to the destination "BACK".
    -        The calling program should not change to a new internal session after making an asynchronous RFC call. That is, you should not use SUBMIT or CALL TRANSACTION in such a report after using CALL FUNCTION STARTING NEW TASK.
    -        You cannot use the CALL FUNCTION STARTING NEW TASK DESTINATION IN GROUP keyword to start external programs.
    ·        System resources: 
    In order to process tasks from parallel jobs, a server in your SAP system must have at least 3 dialog work processes. It must also meet the workload criteria of the parallel processing system: Dispatcher queue less than 10% full, at least one dialog work process free for processing tasks from the parallel job.
    Function Modules and ABAP Keywords for Parallel Processing
    You can implement parallel processing in your applications by using the following function modules and ABAP keywords:
    ·        SPBT_INITIALIZE: Optional function module. 
    Use to determine the availability of resources for parallel processing. 
    You can do the following:
    -        check that the parallel processing group that you have specified is correct.
    -        find out how many work processes are available so that you can more efficiently size the packets of data that are to be processed.
    ·        CALL FUNCTION Remotefunction STARTING NEW TASK Taskname DESTINATION IN GROUP:
    With this ABAP statement, you are telling the SAP system to process function module calls in parallel. Typically, you'll place this keyword in a loop in which you divide up the data that is to be processed into work packets. You can pass the data that is to be processed in the form of an internal table (EXPORT, TABLE arguments). The keyword implements parallel processing by dispatching asynchronous RFC calls to the servers that are available in the RFC server group specified for the processing.
    Note that your RFC calls with CALL FUNCTION are processed in work processes of type DIALOG. The DIALOG limit on the processing of one dialog step (by default 300 seconds, system profile parameter rdisp/max_wprun_time) applies to these RFC calls. Keep this limit in mind when you divide up data for parallel processing calls.
    ·        SPBT_GET_PP_DESTINATION: Optional function module. 
    Call immediately after the CALL FUNCTION keyword to get the name of the server on which the parallel processing task will be run. 
    ·        SPBT_DO_NOT_USE_SERVER: Optional function module. 
    Excludes a particular server from further use for processing parallel processing tasks. Use in conjunction with SPBT_GET_PP_DESTINATION if you determine that a particular server is not available for parallel processing (for example, COMMUNICATION FAILURE exception if a server becomes unavailable).
    ·        WAIT: ABAP keyword
    WAIT UNTIL
    Required if you wish to wait for all of the asynchronous parallel tasks created with CALL FUNCTION to return. This is normally a requirement for orderly background processing. May be used only if the CALL FUNCTION includes the PERFORMING ON RETURN addition.
    ·        RECEIVE: ABAP keyword
    RECEIVE RESULTS FROM FUNCTION Remotefunction
    Required if you wish to receive the results of the processing of an asynchronous RFC. RECEIVE retrieves IMPORT and TABLE parameters as well as messages and return codes.
    Managing Resources in Parallel Processing
    You use the following destination additions to perform parallel execution of function modules (asynchronous calls) in the SAP system:
    In a predefined group of application servers:
    CALL FUNCTION Remotefunction STARTING NEW TASK Taskname
    DESTINATION IN GROUP Groupname
    In all currently available and active application servers:
    CALL FUNCTION Remotefunction STARTING NEW TASK Taskname
    DESTINATION IN GROUP DEFAULT
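    Schematically, the pieces above combine as in the following sketch (the function module Z_PROCESS_PACKET, the group name and the data objects are invented for illustration; error handling omitted):
    DATA: lv_task(8) TYPE c,
          lv_total   TYPE i,
          gv_done    TYPE i.
    DESCRIBE TABLE gt_packets LINES lv_total.
    LOOP AT gt_packets INTO gs_packet.
      lv_task = sy-tabix.
      CALL FUNCTION 'Z_PROCESS_PACKET'          " hypothetical RFC-enabled module
        STARTING NEW TASK lv_task
        DESTINATION IN GROUP 'PARALLEL_GRP'     " hypothetical RFC server group
        PERFORMING packet_done ON END OF TASK
        TABLES
          it_data = gs_packet-data.
    ENDLOOP.
    WAIT UNTIL gv_done >= lv_total.             " resume once all tasks returned
    FORM packet_done USING p_task.
      RECEIVE RESULTS FROM FUNCTION 'Z_PROCESS_PACKET'
        TABLES
          et_result = gt_result.
      gv_done = gv_done + 1.
    ENDFORM.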
    Sequential Processing
    In the following cases, the system chooses sequential (non-parallel) processing:
    ●      In table RSADMIN, entry QUERY_MAX_WP_DIAG has value (column value) 1.
    ●      The entire query consists of one sub-access only.
    ●      The query is running in a batch process.
    ●      The query was started from the query monitor (transaction RSRT) using various debug options (for example, SQL query display, execution plan display). See Dividing a MultiProvider Query into Sub-Queries.
    ●      The query requests non-cumulative key figures.
    ●      Insufficient dialog processes are available when the query is executed. These are required for parallel processing.
    ●      The query is configured for non-parallel processing.
    ●      You want to save the result of the query in a file or a table.
    In Release SAP NetWeaver 7.0, the system can efficiently manage the large intermediate results produced by parallel processing. In previous releases, the system terminated when it reached a particular intermediate result size and proceeded to read data sequentially. This is no longer the case. Therefore, the RSADMIN parameter that was used in previous releases for reading a MultiProvider sequentially is no longer used.
    Reward if helpful,
    Naresh

  • RE: Case 59063: performance issues w/ C TLIB and Forte3M

    Hi James,
    Could you give me a call, I am at my desk.
    I had meetings all day and couldn't respond to your calls earlier.
    -----Original Message-----
    From: James Min [mailto:jmin@brio.forte.com]
    Sent: Thursday, March 30, 2000 2:50 PM
    To: Sharma, Sandeep; Pyatetskiy, Alexander
    Cc: sophia@forte.com; kenl@forte.com; Tenerelli, Mike
    Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
    Hello,
    I just want to reiterate that we are very committed to working on
    this issue, and that our goal is to find out the root of the problem. But
    first I'd like to narrow down the avenues by process of elimination.
    Open Cursor is something that is commonly used in today's RDBMS. I
    know that you must test your query in ISQL using some kind of execute
    immediate, but Sybase should be able to handle an open cursor. I was
    wondering if your Sybase expert commented on the fact that the server is
    not responding to a commonly used command like 'open cursor'. According to
    our developer, we are merely following the API from Sybase, and open cursor
    is not something that particularly slows down a query for several minutes
    (except maybe the very first time). The logs show that Forte is waiting for
    a status from the DB server. Actually, using prepared statements and open
    cursor ends up being more efficient in the long run.
    Some questions:
    1) Have you tried to do a prepared statement with open cursor in your ISQL
    session? If so, did it have the same slowness?
    2) How big is the table you are querying? How many rows are there? How many
    are returned?
    3) When there is a hang in Forte, is there disk-spinning or CPU usage in
    the database server side? On the Forte side? Absolutely no activity at all?
    We actually have a Sybase set-up here, and if you wish, we could test out
    your database and Forte PEX here. Since your queries seem to be running
    off of only one table, this might be the best option, as we could look at
    everything here, in house. To do this:
    a) BCP out the data into a flat file. (character format to make it portable)
    b) we need a script to create the table and indexes.
    c) the Forte PEX file of the app to test this out.
    d) the SQL statement that you issue in ISQL for comparison.
    If the situation warrants, we can give a concrete example of
    possible errors/bugs to a developer. Dial-in is still an option, but to be
    able to look at the TOOL code, database setup, etc. without the limitations
    of dial-up may be faster and more efficient. Please let me know if you can
    provide this, as well as the answers to the above questions, or if you have
    any questions.
    Regards,
    At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
    James, Ken:
    FYI, see attached response from our Sybase expert, Dani Sasmita. She has
    already tried what you suggested and results are enclosed.
    ++
    Sandeep
    -----Original Message-----
    From: SASMITA, DANIAR
    Sent: Wednesday, March 29, 2000 6:43 PM
    To: Pyatetskiy, Alexander
    Cc: Sharma, Sandeep; Tenerelli, Mike
    Subject: Re: FW: Case 59063: Select using LIKE has performance
    issues
    w/ CTLIB and Forte 3M
    We did that trick already.
    When it is hanging, I can see what it is doing.
    It is doing OPEN CURSOR. But not clear the exact statement of the cursor
    it is trying to open.
    When we run the query directly to Sybase, not using Forte, it is clearly
    not opening any cursor.
    And running it directly to Sybase many times, the response is always
    consistently fast.
    It is just when the query runs from Forte to Sybase, it opens a cursor.
    But again, in the Forte code, Alex is not using any cursor.
    In trying to capture the query, we even tried to audit any statement coming
    to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
    ==============================================
    James Min
    Technical Support Engineer - Forte Tools
    Sun Microsystems, Inc.
    1800 Harrison St., 17th Fl.
    Oakland, CA 94612
    james.min@sun.com
    510.869.2056
    ==============================================
    Support Hotline: 510-451-5400
    CUSTOMERS open a NEW CASE with Technical Support:
    http://www.forte.com/support/case_entry.html
    CUSTOMERS view your cases and enter follow-up transactions:
    http://www.forte.com/support/view_calls.html

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on, then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701

  • GR/IR Clearing When to perform F.13/F.19 and MR11

    Dear,
    Can you please explain GR/IR clearing and its grouping, and how GR/IR
    adjustments are done?
    What are grouping and regrouping?
    When should F.13/F.19 and MR11 be performed?
    I have read the threads discussed here on this topic, still I do not understand.
    Thanks All
    Krishna

    Hi
    When we execute SAPF124 (F.13) with the flag "only documents which can be cleared", the detailed list will display only those documents that can be cleared.
    If you don't check this field, it will generate a list even of documents that cannot be cleared.
    In this case in OB74 you have given ASSIGNMENT and INVOICE REFERENCE as the criteria.
    But what is probably happening is that the ASSIGNMENT does not match in these documents, and thus the system does not find
    the item suitable, or rather eligible, for clearing.
    If you check the documentation relative to the "Special processing of GR/IR clearing accounts" in SE38 -> SAPF124, you will find the following information:
    "If you set the GR/IR accounts special processing indicator: The program then automatically uses the EBELN and EBELP fields as well a as the XREF3 reference field as grouping criteria.
    This means that documents with the most recent posting date are initially ignored."
    So the system is checking these three fields for grouping the documents.
    For clearing the GR/IR account it is not obligatory to set the indicator "GR/IR Account Special Processing". As described in note 546410, it can be more effective not to use this indicator.
    Also read notes 546410 and 574482 thoroughly.
